NYCJUG/2013-07-09


ordering files, conjunction/adverb/verb/noun hierarchy, card game simulation, software development estimation, arc consistency, code presentation, javascript, astronomy, Lobster games programming, Harlan GPGPU programming

Beginner's Regatta

Numbering Files to Put Names in Date/Time Order

We have a directory of photos, but our slideshow program will only show them in alphabetical order, which is a problem if we’ve selected some photos and named them meaningfully. However, a little J code will quickly fix this by renaming the files so that alphabetical order matches date/time order.

First, let’s move to the directory where we have these photos.

   qts''
2013 7 8 19 2 49.037
   1!:44 '../Vacation201304'

   1!:43''
C:\amisc\pix\Photos\2013Q2\Vacation201304

Now extract the names and timestamps of the photos from the directory listing.

   'jfls dtms'=. <"1|:0 1{"1 dir '*.jpg'
   $jfls
892

Check that these look correct.

   >3{.&.>jfls;<dtms
+------------------+------------------+-----------------+
|LeavingNYC.jpg    |LeavingNYC2.jpg   |LeavingNYC3.jpg  |
+------------------+------------------+-----------------+
|2013 6 16 23 48 20|2013 6 30 13 27 22|2013 5 26 13 6 58|
+------------------+------------------+-----------------+

Order the names by the timestamps and check that the resulting starting and ending files look like the ones we expect. We'll look at the first and last three files:

   (3&{.,:_3&{.) jfls=. jfls/:dtms
+------------------+-------------------+-------------------+
|LeavingNYC.jpg    |LeavingNYC2.jpg    |LeavingNYC3.jpg    |
+------------------+-------------------+-------------------+
|returningToNYC.jpg|returningToNYC2.jpg|returningToNYC3.jpg|
+------------------+-------------------+-------------------+

Create the list of new names which are the same as the existing ones but prefixed by ascending numbers padded with leading zeros so they’ll alphabetize correctly.

   newnms=. (4 lead0s 10*>:i.#jfls),&.>jfls

where

   lead0s
[: ]`>@.(1 = [: # ,) ('r<0>','.0',~ [: ":[) 8!:0 [:|[: ".`]@.(0=[: {.0#]) ]
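
That tacit display is dense. As a rough sketch of the same behavior for our purposes (lead0s2 is an illustrative name, not part of the original script), we could pad each number with leading zeros explicitly:

   lead0s2=: 4 : '((-x) {. (x#''0'') , ":)&.> y'   NB. Format each number in y as a string, zero-padded to width x
   4 lead0s2 10 20 8900
+----+----+----+
|0010|0020|8900|
+----+----+----+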

Why "4"? Because of how many numbers we generate (which count up by tens in case we want to insert other photos later):

   10^.10*#fls
3.94988
   <.>:10^.10*#fls
4

So we need four digits. Check that the names look as we expect:

   (3&{.,:_3&{.) newnms
+----------------------+-----------------------+-----------------------+
|0010LeavingNYC.jpg    |0020LeavingNYC2.jpg    |0030LeavingNYC3.jpg    |
+----------------------+-----------------------+-----------------------+
|8900returningToNYC.jpg|8910returningToNYC2.jpg|8920returningToNYC3.jpg|
+----------------------+-----------------------+-----------------------+

Build the DOS copy commands and check that they look right.

   cmds=. (<'copy "'),&.>jfls,&.>(<'" "..\orderedSlides\'),&.>newnms,&.>'"'
   ({.,:{:) cmds
+---------------------------------------------------------------------+
|copy "LeavingNYC.jpg" "..\orderedSlides\0010LeavingNYC.jpg"          |
+---------------------------------------------------------------------+
|copy "returningToNYC3.jpg" "..\orderedSlides\8920returningToNYC3.jpg"|
+---------------------------------------------------------------------+

Make sure our destination directory exists:

   shell 'mkdir ..\orderedSlides'
   dir '..\*'
+----------------+------------------+-+---+------+
|orderedSlides   |2013 7 8 19 5 57  |0|rw-|----d-|
+----------------+------------------+-+---+------+
|Vacation201304  |2013 7 8 11 54 22 |0|rw-|----d-|
+----------------+------------------+-+---+------+

Make sure we’re still in the correct source directory:

   1!:43''
C:\amisc\pix\Photos\2013Q2\Vacation201304

Run the commands and time how long they take:

   6!:2 'shell&.>cmds'
76.8691

Perhaps more important than this reasonably quick run time is the fact that it took less than seven minutes to produce this code.

Dan Explains Adverbs

from:	 Dan Bron <j@bron.us> via srs.acm.org
to:	 chat@jsoftware.com
date:	 Thu, Jun 13, 2013 at 12:48 PM
subject: Re: [Jchat] "|value error: m | x m write_jpeg y" - what???

Alexander Epifanov wrote:

> but I did not understand what is the different, I mean how it works if it is adverb.

To understand this error, we must first discuss what adverbs are, how they behave, and how they differ from verbs. So let's start there.

Adverbs are a different class, or order, of words than verbs. In particular, they have higher grammatical precedence than verbs, and so gobble up any suitable arguments lying around before verbs can even see them. Conjunctions are in this same high-precedence class, but whereas adverbs only take one argument (on the left), conjunctions take two (one on the left, the other on the right). You can think of adverbs and conjunctions as higher-order analogs to monadic and dyadic verbs respectively.*

Adverbs are called adverbs because they normally modify verbs: that is, in typical use, they accept a verb argument and produce a verb result, which is related in some (consistent) way to the argument. The most famous example is / :

 +/ 2 3 4  NB.  Sum of data (Σ s[i])
 */ 2 3 4  NB.  Product of data  (Π s[i])
 ^/ 2 3 4  NB.  Tetration ("power tower") of data

Here, / takes a dyad (two-argument verb) as an argument, and produces a monad (one-argument verb)*. The output is related to the input in the following sense: when the output verb is provided a noun, it inserts the input verb between each pair of items in the noun, such that:

   +/ 2 3 4   NB. is 2+3+4
   */ 2 3 4   NB. is 2*3*4
   ^/ 2 3 4   NB. is 2^3^4 (J executes right-to-left, so this is 2^(3^4))

and

   +/ 2 3 4 , 5 6 7 ,: 8 9 10

is:

      2  3  4
         +
      5  6  7
         +
      8  9 10

which, because + is rank 0 (scalar), is:

      2  3  4
      +  +  +
      5  6  7
      +  +  +
      8  9 10

etc.

But bear in mind that taking verb arguments and deriving (consistently) related verbal results is only the typical case for an adverb. Adverbs can also take a noun for an argument (an "adjective"); the most common example is  } , which normally takes a noun argument specifying which indices the derived verb should modify (when it, itself, is applied to nouns):

  putFirst        =:    0}
  putLast         =:   _1}
  putFirstAndLast =: 0 _1}

  '*' putFirst '12345'
*2345
  '*' putLast 'ABCDE'
ABCD*
  '*' putFirstAndLast 'ABCDE'
*BCD*

So adverbs can take verbs or nouns as inputs, and normally produce verbs as outputs. But adverbs are not restricted to verbal output; they can produce anything, including verbs, nouns, and even other adverbs and conjunctions. Primitive adverbs which produce non-verb results are unusual (primitive conjunctions are a little more diverse in this regard), but they exist. For example, when the adverb ~ is applied to a string, it treats the string as a name and evokes it, such that 'someName'~ is equivalent to someName. Therefore ~ can produce anything at all:

   someNoun =: 42
   someVerb =: +
   someAdverb =: /
   someConjunction =: @

   'someNoun'~
42
   'someVerb'~
+
   'someAdverb'~
/
   'someConjunction'~
@

Of course user-defined adverbs will produce anything they're defined to produce, so you can't know what they'll do without reading the definition or documentation. That said, user-defined adverbs tend to follow the same patterns as primitive adverbs: they're almost always abstractions over verbs which produce verb results; sometimes they take noun arguments and/or produce noun results, and only very rarely do they produce other adverbs or conjunctions.
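
For instance, a minimal user-defined adverb in the typical verb-to-verb pattern (twice is an illustrative name, not a library word) looks like this:

   twice=: 1 : 'u u y'   NB. derived verb applies the verb argument u two times
   >: twice 5            NB. increment, applied twice
7
   *: twice 3            NB. square, applied twice
81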

Ok, with that as a background, we're ready to discuss write_image and the error you observed.

The word write_image falls into this "user defined adverb" category. The reason it was defined as an adverb instead of a verb is so that it can accept up to 3 arguments (filename, data to write, and a set of options like image quality or scaling), whereas if it were defined as a verb, it could accept no more than two arguments. Meaning if write_image had been defined as a verb, it would have to find some way to pack two arguments into a single noun, and unpack them inside the definition, which can sometimes lead to convoluted code. Keeping it as an adverb with three distinct arguments is very clear and clean.

But it does stymie attempts to use it like a verb, as you discovered. In particular, when you embedded it in

   (('small/'&, (write_image)~ ((3 3)&resize_image)@:read_image)@:>) i

, its higher grammatical priority caused the adverb to seek out an argument immediately, and since the verb 'small/'&, was on its left and suitable (because verbs are perfectly acceptable arguments for adverbs), the result was that write_image bound with 'small/'&, .

Now, the specific coding style** of write_image prevented it from being executed immediately (if it'd been executed, you'd know it, because you would have gotten an error: write_image is expecting data [a noun] as an argument, not a verb like 'small/'&,), but it also allowed the J interpreter to infer that when it is executed, it will produce a verb.

So write_image woke up, looked around for an argument, found 'small/'&, , bound with it, and though it didn't actually execute, the J interpreter knew its product would be a verb. Knowing this, J proceeded parsing the sentence, found another verb, ((3 3)&resize_image)@:read_image , and hit a close paren. Since it had found two verbs in isolation (nestled inside a cozy pair of parens), it interpreted the train as a hook. This is really no different from the sentence (%~ i.) 10 where ~ immediately binds to %, and the product of that binding and i. form a hook.
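
For reference, that hook in action:

   (%~ i.) 10   NB. hook: y %~ i. y, i.e. (i.10) % 10
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9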

After forming the hook, the interpreter hit the noun i and applied the hook as ('small/'&,write_image~ 3 3&resize_image@:read_image)@:> i . The interpreter executed 3 3 resize_image read_image > i and got a result. Up to this point, everything was fine. But now it came time to use the results it had calculated, and actually execute write_image . That's where the problem occurred, and it was exactly the error I mentioned earlier, the one the interpreter avoided by deferring the execution of write_image (you can delay the inevitable, but you can't avoid it).

That adverb was written expecting its argument to be a noun, and it refers to m, which is the name for the noun argument to an adverb (or conjunction). But given how you expressed your sentence, in this case the argument to write_image was a verb: 'small/'&, . Therefore m (the name for a noun argument to an adverb) was undefined, yet write_image tried to use it anyway.
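
We can reproduce the situation in miniature (mAdv is an illustrative name). Define an adverb whose body refers to m, then hand it a verb instead of a noun:

   mAdv=: 1 : 'm + y'   NB. body refers to m, the noun operand
   3 mAdv 4             NB. noun operand: m is 3, so this is 3 + 4
7
   (+ mAdv) 4           NB. verb operand: m was never assigned
|value error: m
|       m+y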

J calls the use of undefined names a "value error". This is the same error as when you type

    someNameIHaventDefinedYet
|value error: someNameIHaventDefinedYet

in the session manager. But a closer analogy is the value error you'd get if you tried to use x (which names a left argument) in a monadic verb which only has a right argument:

   monad def 'x + y' 4
|value error: x
|       x+y

You get a value error because x is undefined, and x is undefined because monadic (valences of) verbs don't have the concept of a left argument: x is literally meaningless.

Similarly, when write_image referred to the noun argument m, the J interpreter balked: "What noun argument? Your argument is a verb, 'small/'&, . I don't know what you're talking about." . The name for the (non-existent) noun argument to write_image, m, was literally meaningless. All because adverbs have higher precedence than verbs and can accept verbs as well as nouns as arguments.

Well, actually, because Cliff decided to define write_image as an adverb so he could have three separate arguments, without boxing. I know that's a lot to digest. I'm not known for my laconic style (cf Roger Hui), but I hope this helps.

-Dan

--

  • Technically, all verbs in J are ambivalent; that is, they can be called with either one argument (on the right) or two arguments (one on the right, and one on the left). The words "monad"/"monadic" and "dyad"/"dyadic" are just shorthand for "the one-argument valence of the verb" and "the two-argument valence of the verb" respectively.

Note that some valences of some verbs have empty domains, such as the dyad ~. or the monad E. or the monad 4 : 'x + y' etc. That doesn't mean the valence doesn't exist; it does exist, but it rejects all arguments (a generalization of the concept that e.g. + rejects any argument that's not a number).

Now adverbs and conjunctions (collectively called operators) are analogous to the monadic and dyadic valence of a verb respectively, but it is exactly because of their higher grammatical precedence that there is no operator analog to an ambivalent verb. That is, there is no operator that can take either one argument or two arguments. Operators' higher binding power requires that we treat these cases separately - and, incidentally, is the reason adverbs (monadic operators) take their argument from the left, as opposed to monadic verbs which take their argument from the right.

---

from:	 bob therriault <bobtherriault@mac.com>
date:	 Thu, Jun 13, 2013 at 1:08 PM

Great explanation Dan,

A couple of years ago I put this video together about the adverb '~' : http://bobtherriault.wordpress.com/2010/11/17/those-tricky-adverbs/

Let me know if you have any interest in developing your explanations into a more multimedia mode. I don't have huge amounts of time (and it does take some time), but I do enjoy doing this stuff.

Cheers, bob

---

from:	 Dan Bron <j@bron.us>
date:	 Thu, Jun 13, 2013 at 1:45 PM

Yes, I remember your video series warmly. Personally, I'm stuck in the text age. I don't have the tools or ambition to go multimedia. And a related pet peeve is when I click through to a news story or other link and am presented with a video, which can only be scanned linearly, as opposed to text, which is random access.

---

from:	 bob therriault <bobtherriault@mac.com>
date:	 Thu, Jun 13, 2013 at 2:36 PM

Hi Dan,

You explain things beautifully in your medium of choice (but I would hardly think of you as stuck in text) and if you do want to add some other media at some point, let me know.

I think that your related pet peeve is a result of media not matching its audience. You want a sound bite, but you are given a lecture. If you combined a series of short videos to be displayed beside text, you would have structured the sound bites to a form that would be closer to your needs. In the text world, you might consider this related to the effect of white space on the structure of your writing. It is a matter of packaging more than a matter of content, but it can affect comprehension and retention.

I believe Ian Clark was pushing for this in the short animations he was using in new vocab a few years ago.

http://www.jsoftware.com/jwiki/NuVoc with http://www.youtube.com/v/aTRONIqXFVI as an example of a short form video. In any case, your ability to explain is a valuable commodity in the world of J (or any other programming language). I look forward to your future contributions.

Cheers, bob

J's Functional Hierarchy

J has an implicit hierarchy of parts of speech, according to what each may take as arguments:

           conjunction                       adverb
        verb          noun or verb       noun      verb
   [noun]   noun

That is, conjunctions take two arguments and adverbs one, each of which may be a noun or a verb, while verbs take nouns: one on the right, and optionally another on the left.
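
One consequence of this hierarchy: a modifier grabs its argument before any verb can act, so in the following / binds to + first and the sentence evaluates as - (+/ 1 2 3) :

   - +/ 1 2 3
_6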

Show-and-tell

Quickly Building a Simple Simulation

We want to build a simple simulation script (Simulate727.ijs) of the card game "7-27" to get an idea of the distribution of starting hands.

The rules of the game are as follows:

  1. Two cards are dealt face-down initially.
  2. Face cards are half a point.
  3. Aces are one or eleven points (at the player's discretion).
  4. Tens are zero or ten points.
  5. All other cards are "face value" points, e.g. a five is five points.
  6. Two aces and a five constitute a special hand that wins the whole pot.

The value of a hand is the sum of its points. The object is to be as close as possible to either 7 or 27 (or both). The players closest to these values split the pot.
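
Since closeness to 7 or 27 will drive both scoring and strategy, here is a hedged helper sketch (closeTo727 is an illustrative name, not necessarily part of the final script) giving the distance from a total to the nearer target:

   closeTo727=: [: <./ [: | 7 27 - ]   NB. distance to the nearer of 7 and 27
   closeTo727"0 ] 6.5 11 27
0.5 4 0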

Here’s how we might start to build a simulation of this game:

   $deck=. (0.5#~*/3 4),4#>:i.10   NB. 12 face cards at half a point; aces and tens at face value for now
52
   +/deck
226
   mean deck
4.34615

   _2<\deck{~10?#deck
+-----+------+---+---+-----+
|9 0.5|0.5 10|4 1|8 1|5 0.5|
+-----+------+---+---+-----+

   hands=. _2<\deck{~(2*5)?#deck
   +/&>hands
11 1.5 12 12 9
   hands
+---+-----+---+---+---+
|3 8|1 0.5|4 8|3 9|3 6|
+---+-----+---+---+---+

   init727Hands=: 13 : '_2<\x{~(2*y)?#x'
   init727Hands
_2 <\ [ {~ (2 * ]) ? [: # [

   deck init727Hands 5
+-----+----+---+-----+---+
|6 0.5|10 8|9 1|0.5 4|2 5|
+-----+----+---+-----+---+
   $sums=. +/&>&> h0=. (<deck) init727Hands &.> 10000$5
10000 5
   3{.sums
 16   6  12 2.5 18
8.5  12   5 6.5 12
  9 5.5 4.5 1.5  4

   BKTS=: i.21 [ PCT=: 0   NB. Globals for "plotHistoMulti": fixed buckets, not % numbers.
   ss=. '10K 727 Hands' plotHistoMulti ,sums

Here’s an initial look at the distribution of hands:

[Chart: initial distribution of "727" hands. Apologies for the "bar-creep": the bars are supposed to line up with the integer intervals but do so only in the center of the chart.]

We can examine these 10,000 cases manually to get a rough idea of some of the distributions. For instance, how many deals have one hand that hits seven exactly on these first two cards?

   1+/ . =7+/ . = |:sums
1629

But how many deals have two hands that hit seven exactly on these first two cards?

   2+/ . =7+/ . = |:sums
134
   3+/ . =7+/ . = |:sums
1
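
The inner-product idiom above is terse; a more explicit equivalent (countExact is a hypothetical helper, not in the script) may be easier to read:

countExact=: 4 : 0
   'c t'=. x            NB. x is (number of hands);(target total)
   +/ c = +/"1 t = y    NB. per-deal count of hands totalling t, then tally the deals
)

   1 7 countExact sums  NB. same as 1 +/ . = 7 +/ . = |:sums
1629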

So, how many deals have one hand that is only half a point away on the first two cards?

   1+/ . =6.5+/ . = |:sums
1630
   2+/ . =6.5+/ . = |:sums
86
   1+/ . =7.5+/ . = |:sums
1642
   2+/ . =7.5+/ . = |:sums
101
   3+/ . =7.5+/ . = |:sums
1

From the histogram, it looks like the most common total for the first two cards is around ten or eleven – which is it, and how common is it?

   1+/ . =11+/ . = |:sums
2309
   2+/ . =11+/ . = |:sums
285
   3+/ . =11+/ . = |:sums
20
   4+/ . =11+/ . = |:sums
0
   1+/ . =12+/ . = |:sums
2152
   1+/ . =10+/ . = |:sums
2230

Now let’s formalize the initialization step of dealing: we’ll want to split the deck into a set of hands and the remaining deck.

init727=: 4 : 0
   ixs=. (2 * y) ? # x                    NB. Indices of 2 cards for each of y hands
   (<x#~(0) ixs}1$~#x),<_2 <\ x {~ ixs    NB. (Deck less the dealt cards);(boxed 2-card hands)
)

   'd2 h0'=. deck init727 5
   $d2
42
   $h0
5
   h0
+-----+-------+----+----+---+
|0.5 2|0.5 0.5|10 3|7 10|4 5|
+-----+-------+----+----+---+

Now we start to hit the more difficult part of the simulation: how do we proceed? Which player will take another card in the hope of improving? How does a player decide what to do? Let’s start with a simple rule: if we’re within one point of 7, we stand; otherwise, we take another card.

hit0727=: 4 : 0
   wh=. 1<|7-+/&>y                      NB. Which hands will hit (more than 1 point from 7)?
   ixs=. (+/wh)?#x                      NB. Pick one random card from the deck per hitting hand
   y=. (--.wh)}.&.>y,&.>wh#^:_1 ixs{x   NB. Append card to each hitter; drop the dummy 0 from standers
   x=. x#~(0) ixs}1$~#x                 NB. Remove the dealt cards from the deck
   (<x),<y
)
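
A quick usage sketch (results will vary with the random deal):

   'd3 h1'=. d2 hit0727 h0   NB. one round of drawing against the hands above
   (#d3) ; #&>h1             NB. deck shrinks by one card per hand that hit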

Going forward, we’ll want to test different rules – this implies that we should come up with a good way to represent these rules so we can generate our search space programmatically. But first, we have another problem to solve: the ambiguity of scoring the hands based on the dual points possible with some of the cards: an ace can be one or eleven and a ten can be zero or ten, based on the player’s preference. How do we score hands with this consideration?

After some experimentation, we come up with this:

   scoreHand=: 3 : '~.+/&>,{(y=1)}((y=10)}(<"0 y),:<0 10),:<1 11'   NB. All possible totals of hand y: aces 1 or 11, tens 0 or 10

It passes our initial test cases:

   scoreHand 1 1    NB. Two aces
2 12 22
   scoreHand 1 10   NB. An ace and a ten
1 11 21
   scoreHand 10 10  NB. Two tens
0 10 20
   scoreHand 3 2    NB. Some non-ambiguous cards
5
   scoreHand 1 1 1  NB. Could get to three aces after one draw
3 13 23 33
   scoreHand 1 1 10 NB. Or two aces and a ten
2 12 22 32
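
Rule 6 - two aces and a five wins the whole pot - is easy to test for separately; a hedged sketch with an illustrative name:

   isSpecial=: (1 1 5"_) -: /:~   NB. does the sorted hand match two aces and a five?
   isSpecial 1 5 1
1
   isSpecial 1 1 6
0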

It took about an hour to get this far in the simulation.

Advanced topics

We looked at an essay explaining why software development estimates are regularly off by a factor of 2 or 3, though some of us thought this factor was generously low.

Why Are Software Development Estimations Regularly Off by a Factor of 2-3?

Michael Wolfe, Startup founder

Let's take a hike on the coast from San Francisco to Los Angeles to visit our friends in Newport Beach. I'll whip out my map and draw our route down the coast:

[Image: the route drawn down the coast on a map]
The line is about 400 miles long; we can walk 4 miles per hour for 10 hours per day, so we'll be there in 10 days. We call our friends and book dinner for next Sunday night, when we will roll in triumphantly at 6 p.m. They can't wait!

We get up early the next day giddy with the excitement of fresh adventure. We strap on our backpacks, whip out our map, and plan our first day. We look at the map. Uh oh:

[Image: a closer look at the map - the coastline's many twists and turns]
Wow, there are a million little twists and turns on this coast. A 40-mile day will barely get us past Half Moon Bay. This trip is at least 500, not 400 miles. We call our friends and push back dinner til Tuesday. It is best to be realistic. They are disappointed, but they are looking forward to seeing us. And 12 days from SF to LA still is not bad.

With that unpleasantness out of the way, we head off. Two hours later, we are barely past the zoo. What gives? We look down the trail:

[Image: the trail ahead - sand, water, stairs, creeks]
Man, this is slow going! Sand, water, stairs, creeks, angry sea lions! We are walking at most 2 miles per hour, half as fast as we wanted. We can either start walking 20 hours per day, or we can push our friends out another week. OK, let's split the difference: we'll walk 12 hours per day and push our friends out til the following weekend. We call them and delay dinner until the following Sunday. They are a little peeved but say OK, we'll see you then.

We pitch camp in Moss Beach after a tough 12 hour day. Shit, it takes forever to get these tents up in the wind. We don't go to bed until midnight. Not a big deal: we'll iron things out and increase velocity tomorrow. We oversleep and wake up sore and exhausted at 10 a.m. Fuck! No way we are getting our 12 hours in. We'll aim for 10, then we can do 14 tomorrow. We grab our stuff and go.

After a slow slog for a couple of hours, I notice my friend limping. Oh shit, blisters. We need to fix this now... we are the kind of team who nips problems in the bud before they slow our velocity. I jog 45 minutes, 3 miles inland to Pescadero, grab some band-aids, and race back to patch up my friend. I'm exhausted, and the sun is going down, so we bail for the day. We go to bed after only covering 6 miles for the day. But we do have fresh supplies. We'll be fine. We'll make up the difference tomorrow.

We get up the next morning, bandage up our feet and get going. We turn a corner. Shit! What's this? Goddamn map doesn't show this shit! We have to walk 3 miles inland, around some fenced-off, federally-protected land, get lost twice, then make it back to the coast around noon. Most of the day gone for one mile of progress. OK, we are *not* calling our friends to push back again. We walk until midnight to try to catch up and get back on schedule.

[Image: further along the coast]
After a fitful night of sleep in the fog, my friend wakes up in the morning with a raging headache and fever. I ask him if he can rally. "What do you think, asshole, I've been walking in freezing fog for 3 days without a break!" OK, today is a loss. Let's hunker down and recover. Tomorrow we'll ramp up to 14 hours per day since we'll be rested and trained... it is only a few more days, so we can do it!

We wake up the next morning groggy. I look at our map: Holy shit! We are starting day 5 of a 10 day trip and haven't even left the Bay Area! This is ludicrous! Let's do the work to make an accurate estimate, call our friends, probably get yelled at, but get a realistic target once and for all.

[Image: the map after four days - still in the Bay Area]
My friend says, well, we've gone 40 miles in 4 days, it is at least a 600 mile trip, so that's 60 days, probably 70 to be safe. I say, "no f--ing way... yes, I've never done this walk before, but I *know* it does not take 70 days to walk from San Francisco to Los Angeles. Our friends are going to laugh at us if we call and tell them we won't see them until Easter!"

I continue, "if you can commit to walking 16 hours a day, we can make up the difference! It will be hard, but this is crunch time. Suck it up!" My friend yells back, "I'm not the one who told our friends we'd make it by Sunday in the first place! You're killing me because you made a mistake!" A tense silence falls between us. The phone call goes unmade. I'll call tomorrow once my comrade regains his senses and is willing to commit to something reasonable.

The next morning, we stay in our tents til a rainstorm blows over. We pack our stuff and shuffle off at 10 a.m. nursing sore muscles and new blisters. The previous night's fight goes unmentioned, although I snap at my idiot friend when he leaves his water bottle behind, and we have to waste 30 minutes going back to get it. I make a mental note that we are out of toilet paper and need to stock up when we hit the next town. We turn the corner: a raging river is blocking our path. I feel a massive bout of diarrhea coming on...

Comments

169 [votes] Revett Eldred, Grey haired aging geek.

Wow. Plus ça change, plus c'est la même chose. I started developing computer systems in 1965. Back then, it was a general rule that a problem that could be fixed for $1 during the specification phase would cost $10 to fix if not found until system design, and would cost $100 to fix if not found until implementation. In other words, it pays to spend way more time analyzing the issue and specifying the functionality of the system than most people do. Fixing those problems is what takes extra time and blows the schedule. Even with totally different programming fundamentals and languages today, and with the widespread reuse of code, that rule still applies.

When I owned a software development company (back in the '80s and '90s) we used earned value to measure progress. Any measurable task had to have some deliverable associated with it. No task could take less than half a day nor more than five days. In other words, if a "task" were estimated at three weeks, say, then it had to be split into smaller tasks, each with their own measurable deliverable, before a proper estimate would be believed. Then, when the task was being implemented, it could never be a certain percentage complete; it was either done or not done -- in other words, the deliverable was either delivered or not, the value of the deliverable either earned or not. I am constantly amazed that this method is still so rarely used. If nothing else, it avoids my first rule of traditional project measurement, which is that 90% of projects are 90% complete 90% of the time.

Attempting to produce accurate estimates of implementation before specification and design have been completed is just wishful thinking, or at best just a bunch of guesstimates. That said, I love Michael Wolfe's hiking analogy as it contains a great deal of truth, and I can relate to other people's comments about managers who won't accept reality. They are problem #1 in the system development business.

Two things you learn the hard way when you build a successful application development business: 1. Never commit to a fixed schedule for the whole project. Commit to a fixed schedule (and price) for the specification phase, and when that is complete commit to a fixed schedule/price for the design phase, and when that is complete commit to a fixed schedule/price for the implementation phase. 2. Build into your contract the world's most comprehensive change control rules and procedures, and apply them rigorously.

Now that I'm retired from the business, it is both disconcerting and kind of pleasing to see that nothing much seems to have changed!

288 [votes] Devdas Bhagat, Just another geek.

Do you want precise estimates, or accurate ones? Accurate estimates have ranges (we will be done between 1 year and 3 years from now with a 95% chance of this being correct.). Precise estimates are absolute, but almost always absolutely wrong. Developers are also the only group where they are asked to do something which has never been done before, and tell someone else how long it will take before they even know what actually needs to be done.

115 [votes] Chris Moschini

I've discussed this with people who work in construction. When they build an estimate they have known hour numbers for various small tasks. Every small task has been done millions of times before and has low variability. Every large task can eventually be broken down into some tally of these smaller, known tasks. Their estimates are very precise. The longer the overall project, the more risk there is of the unknown interrupting this number, but this too is a straightforward formula of padding.

In programming, if a developer can break down everything in a project into a set of tasks they've done many times before, they're doing a bad job. Everything in programming can be automated, so if you've done it enough before that you have that strong an understanding of it, you should be automating it by now. How long will it take to automate?

Don't know. Never done that before.

There can also be the issue of surprising complexity. Computers struggle with many things humans take for granted. Humans provide estimates to other humans; it's not until coding begins that the computer applies its skepticism. In response to the confusion, scope creep arises.

For example, if you tell me all quotes in a system expire in 10 days, I might say fine, that's simple, and estimate that expiration task as an hour. If I estimated a week for this, you'd call me crazy. Then I attempt to code it. When do they expire, at midnight, or the time they were provided? How do I expire them - do I write a job that continuously wakes up and expires old quotes? Do I wait for users to check for them and expire old quotes there while I make the user wait?

Then I find out it's actually 10 business days. What's a business day? How do I advance the date properly so I skip weekends? What's the holiday schedule? What about leap years? When IS a leap year? (hint: It's not every 4 years. It's not every 4 years except every 100, either.)

OK, we'll build an interface for you to maintain the holiday schedule, since it's subject to change.

Then I find out you want the user to receive a warning email the day before a quote expires. Not a day, a business day. In their time zone. Make sure it only goes out during business hours where they are. HOW did an hour turn into 2 weeks?
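
(A J aside from the meeting: the full Gregorian leap-year rule the commenter alludes to does fit in one line; leapYear is an illustrative name and this is just a sketch.)

   leapYear=: (0 = 4 | ]) *. (0 ~: 100 | ]) +. 0 = 400 | ]   NB. divisible by 4 and (not by 100, or by 400)
   leapYear 1900 2000 2012 2013 2100
0 1 1 0 0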

79 [votes] Arun Shroff, B.Tech in Engineering & MBA

Fred Brooks, who managed the development of IBM's mainframe operating system OS/360 - a mammoth feat of software engineering by any standard - wrote the seminal book on this topic, The Mythical Man-Month. It describes his experience and the lessons learned about what causes most software projects to be delayed. Listed below are some of these reasons

(based on various excerpts from Wikipedia that have been compiled and edited for relevance: The Mythical Man-Month)

1. The Mythical Man-Month and Brooks's Law: Brooks discusses several causes of scheduling failures. The most enduring is his discussion of Brooks's law: Adding manpower to a late software project makes it later. A man-month is a concept of a unit of work proportional to the number of people working multiplied by the time that they work; Brooks's law says that this relation is a myth, and is hence the centerpiece of the book. Complex programming projects cannot be perfectly partitioned into discrete tasks that can be worked on without communication between the workers and without establishing a set of complex interrelationships between tasks and the workers performing them.

[Image: cover of The Mythical Man-Month]

Therefore, assigning more programmers to a project running behind schedule will make it even later. This is because the time required for the new programmers to learn about the project and the increased communication overhead will consume an ever increasing quantity of the calendar time available.

When n people have to communicate among themselves, the number of communication channels grows quadratically with n; as n increases, output per person decreases, and once the net contribution of an added person becomes negative, the project is delayed further with every person added.

Group intercommunication formula: n(n − 1) / 2
Example: 50 developers give 50 · (50 – 1) / 2 = 1225 channels of communication.

[Image: Dilbert cartoon about adding two people to a late project]

2. The tendency towards an irreducible number of errors: In a suitably complex system there is a certain irreducible number of errors. Any attempt to fix observed errors tends to introduce other errors. This is very difficult to anticipate and causes unpredictable delays in debugging the system.

3. Feature creep, creeping featurism or featuritis: the ongoing expansion or addition of new features in a product, such as in computer software. Extra features go beyond the basic function of the product and so can result in over-complication rather than simple design. Viewed over a longer time period, extra or unnecessary features seem to creep into the system, beyond the initial goals.

Occasionally, uncontrolled feature creep can lead to products far beyond the scope of what was originally intended. For example: Microsoft's Windows Vista was planned to be a minor release between Windows XP and the codenamed Windows "Blackcomb" (Windows 7), but it turned out to become a major release which took 5 years of development - and was still a disaster!

[Image: illustration of featuritis]

4. Accidental complexity: This is complexity that arises in computer programs or their development process which is non-essential to the problem to be solved. While essential complexity is inherent and unavoidable, accidental complexity is caused by the approach chosen to solve the problem.

While sometimes accidental complexity can be due to mistakes such as ineffective planning, or low priority placed on a project, some accidental complexity always occurs as the side effect of solving any problem. For example, the complexity caused by out-of-memory errors is an accidental complexity to most programs that occurs because one decided to use a computer to solve the problem.

89 [votes] Walt Howard

30 years experience. The biggest reason software is late is because non-programmers don't understand how complex the process is. They will not accept an estimate 2 to 3 times longer, and programmers typically don't get fired because doing so will just make the project even later! The knowledge of the project that developers acquire makes them essentially indispensable (if the company isn't willing to be even later). This is why barely functional developers who are there first are often more important than better developers who come later. Essentially whoever comes later in a software project has to work within the universe the previous developers have created, even if it's a total wreck. I call it "being handed a turd and told to make it float". Over, and over, and over and over again in my career I see the same mistakes, the same bad estimates, the same stress, the same serious faces, the same repressed frustration in everyone involved. I try to point out impending disasters which I see coming 6 months ahead of time, but no matter what reasoning I use, authoritarian management with low IQ cannot understand the true complexity of software. Here are some other axioms: 1) Multiply the estimate by the number of developers who are interacting.

Yes, that's right. If the project looks like it will take a month, but 4 developers are involved, it will take 4 months. That's because the most friction occurs in the human to human interface. What could occur in seconds if one developer were involved often takes days to turn around when 2 are involved. "Here's that interface you wanted", "Ok, I'll get to it as soon as I'm finished with this"... 2 days later, "Oh, I just tried your interface found a bug in it". 2 more days pass for you to fix it, and give it to Joe again. …

48 [votes] Lance Walton, Father, Programmer, Failing Composer

I think software people make too much of software being "different", giving rise to poor estimates. See http://en.wikipedia.org/wiki/Cost_overrun#List_of_projects_with_large_cost_overruns for examples of non-software projects that have gone massively over budget. If you try to build something big, something that you haven't built before, in an environment that you haven't built in before, you're probably going to have a hard time coming up with an estimate that conforms to reality. Unless you start multiplying your (no doubt methodically derived) estimate by some number significantly bigger than 1. What I would say is that if the value of building the thing is not significantly greater than any reasonable estimate of cost (let's say a couple of orders of magnitude), you probably shouldn't do it. And if the value is significantly greater than the cost, then stop worrying about whether it costs you 2 or 3 times more than you thought and start concerning yourself with how to derive some of that value earlier to support the remainder of the work.

37 [votes] Michael O. Church, NYC machine learning functional programmer, writer, and game designer.

Many, many reasons. Lots of great answers to this have been given. I'll add one meta-level insight that applies not only to software. I'll ignore psychological aspects about pessimism vs. optimism and assume that people aren't prima facie biased in one direction or the other.

First of all, it's easy to explain why time estimates tend to be erroneous. There's just a lot of variation, and numerous "unknown unknowns". This explains error, but not the systematic bias. This is a massive oversimplification, but we tend to classify tasks according to the median time that each takes. A "10-minute drive" is one that takes 10 minutes in the 50th-percentile case. It might take 40, if traffic is bad. It will rarely take less than 8. For high-impact, low-frequency problems (such as traffic jams) the median behavior is for the bad thing not to happen at all. So the resource consumption of a task usually has a distribution where the mean is much greater than the median.

When tasks are too complex to estimate based on experience, the tendency is for people to estimate by summing the times-to-completion of subtasks. We can't sum distributions of random variables in our heads very easily, but we can sum scalars (numbers)-- so we add the medians and call that the "median-case scenario".

However, as the number of these variables becomes large, the mean is more indicative of how long the compound task should take, not the median. Include communication overhead and task dependencies, and we see even more of a tendency for low-frequency, high-impact problems to dominate.

Optimistic bias and political factors also apply, of course, but I think this technical aspect of it (the fact of the median of the sum being, for this sort of distribution, much greater than the sum of the medians) is a contributor as well.

Tristan Kromer, I build stuff. GrasshopperHerder.com

Because engineers do not get prompt feedback on the quality of their estimations.

Let's say you want to estimate the speed of a passing car. How would you do it?

You could take a guess, and then check that guess immediately against a speed gun. Police do this quite regularly and get very proficient by virtue of immediate and accurate feedback.

They become experts through considerable practice and have ample opportunities to practice.

Software estimation lacks feedback.

Software engineering tasks may take hours, days, or even months before a "result" is known that can be compared against the estimate. Even if an engineer had the focus to deliberately integrate that feedback into their estimation process, they simply have fewer opportunities for practice.

Moreover, neurons just aren't geared to grow the right connections without constant reinforcement over a long period of time. So even the world's greatest machine learning algorithm (your brain) can't solve the problem.

Extremely experienced engineers or project managers can actually be very accurate with their estimates. But it may take them well over 10 years to generate that level of expertise.

Even then, they'll still suck at estimating how long another engineer with a different skill level might take to perform the same task. And unfortunately...

Software depends on many bad estimates

When dealing with a large software project, you may receive estimates from dozens of engineers, some (or all) of whom may not have enough experience in their areas to offer a reasonable estimate.

So now you've compounded the problem by many factors.

The standard solution?

A good project manager will track how good engineers are at giving estimates and create a multiplier for each engineer's estimates or for the team as a whole. Pivotal Tracker works on a similar principle to manage scheduling.

Derek Reinhold, Process Expert, Learner, & Business P...

Being off by a factor of 2-3 leads me to think the main driver is accountability. However, there are several other factors that lead to poor estimates:

Lack of clearly defined requirements. This is not a complexity equation. It is simply knowledge gathering. Often, requirements are not clearly documented and reviewed by the client prior to the time of estimation.

Naked numbers. Average duration estimates should be discounted immediately. There is variation inherent in any work task and a basic understanding of task complexity should produce a range of data, not one lonely number. At minimum, a triangular distribution could be used to estimate min, max, and most likely scenarios.

Ignoring historical data. More often than not, new work is not completely new. If broken down to the appropriate level of discrete tasks, there tends to be some historical data from projects that are analogous to work that is masquerading as new. Over time, estimates should become more precise as we learn from those past experiences.

Once and done. Project management standards encourage both high level estimates and estimates at cost. During planning phases and at key milestones, original estimates should be revisited and updated to reflect reality.

Bias. Wisdom of the crowd is important when it comes to estimation. Relying only on the estimation capability of an individual introduces too much opportunity for bias. Whether it is having a neutral 3rd party check and balance or just bouncing off estimates during morning huddles, it is better to rely on feedback from multiple sources.

Ben Stucki, Founder of DAIO (http://daio.com).

There's also an inherent problem worth mentioning, which is that estimates always have a lower limit of 0, but they have no upper limit. So while a 2 hour estimate may sometimes take 6 hours to complete, it will never take -2 hours to complete. This means that estimates don't balance out as most people think they might. They are either exactly right all the time (unlikely) or the average naturally trends in a positive direction because of the lower limit.

Mike Schwab

You can't look at your brain thinking and predict the results of each thought, because the ability to analyze your brain would require a system more complicated than your brain. This is why you cannot predict development time - knowing how you are going to solve the problem takes approximately as much time as actually solving it.

Code is very abstract, and you basically conceive of it in a very inaccurate, imaginary way in order to understand it in human terms. Thus, in order to approach each aspect of your project, you need to get into a certain zone, get comfy with certain assumptions, and block out other distractions. So, you can't readily simulate the development process from afar. As we use higher-level languages and more productive environments, it gets more preposterous to "run simulations" of the development process. Accounting for the stack from Linux to MySQL to compilers to plugins and browsers, you're talking about probably billions of lines of code.

With that said, you can compare certain deliverables to things you've seen before; of course, a certain accuracy can be achieved that way. Yet this is often clouded with the optimistic (albeit realistic) notion that you can do it faster this time. Developers are not the heroes they perhaps should be, and they are eager to deliver good news just to stay out of the doghouse.

Then there are the tradeoffs available in the form of technical debt. This means doing something the quick, lazy way to get it ready faster. Technical debt is a lot like financial debt: it lets you accomplish more, quicker, but you end up paying for it later and spending lots of time dealing with it. Many developers understand that it's usually not a good tradeoff, but they lack the ability to convince their coworkers of the wisdom of this approach. How many projects do you know where the developers even pretend to say the code quality is good? Even Bloomberg LP's code is crap, and this is a company with nothing but money, technology, developers, and more money, and an undying need for extremely performant, stable, flexible systems (and a few sexy reporters).

Everyone seems to always want to surge forward, code quality be damned. This takes your best estimates and injects days and weeks lost to shitty grunt work that accommodates the shortcuts you know you never should have allowed.

On top of the poor decisions that prevail throughout the industry, there is the maddening inability of anyone to actually predict what features they will want. Deliverables change every day (in the best case). Developers don't blame product people, of course - we are used to the impossibility of imagining a fully functioning software product. But the distractions break down the mental focus that is important for productive coding. It also creates an environment where you have to second-guess every instruction, taking up twice the time.

You also have to re-architect underlying structures when deliverables change. And then do so again. You can see how morale becomes a factor, and cynicism reigns.

Plus you know where the blame goes. Product people and managers are also annoyingly reluctant to even try to understand technical issues most of the time. Talking through problems is often the easiest way to solve them (even when no feedback is furnished). However, this is somehow unacceptable to attempt.

Even worse, there is little regard for the impact of the development environment on productivity. Things like taking breaks every hour, or even naps every afternoon, are poorly understood. Learning new techniques and technologies is not given proper time. A lot of old-school geeks profess that all languages do the same thing, so people use Java when they should know better. And you would be amazed at how intransigent our Bloomberg friends were when we told them that using Windows as a Ruby development environment was a daily time sink when their own news service was running front-page headlines every day about how Apple is taking over the business world.

So in short - technical debt, an insufficient focus on the primacy of the role of technology and developers, bad environments and morale, changing deliverables, and a lack of efforts to evaluate whose estimates have the best track record.

Francis Fish, Agile nut, Rails developer, programmi...

I think the walking analogy right at the top makes perfect sense - essentially it's about fractals and scale - eventually you get to the environment which is your size in the real world and then it doesn't make any more difference, but of course you've moved down several orders of magnitude. An ant doing the same journey would take even longer because it would have to go over every grain of sand you would step over. But that said - it's a good analogy.

My own experience is outside of "big" projects - which is what people seem to be talking about here. I think that developers always think in terms of "keyboard time" when they estimate. They forget to factor in the back and forth with the customer and plain old misunderstandings. I'm running a team where we do small consultancy jobs that are sometimes only a couple of hours. Consistently, 2 hours is usually at least 3: it's keyboard time. When you add this up over a whole month, everything takes longer. The other thing that people forget is interruptions and "staying in the zone". People who spend their days in "interrupt mode" don't get how this ruins productivity and keep interrupting. Productivity has increased quite a bit since I became the person people ask, so they leave the devs alone.

Used to work for a manager that said to multiply by Pi, because it sounds scientific. :)

Gary S. Weaver

In the early 2000's in my first "real" development job, I was lucky enough to be on an XP team where the lead decided to have us record our task estimates and then record the actual time required for each task, so that we could compare them over time and try to adjust. I learned at that time that without feedback or adjustment of my own estimates, I underestimated by close to 3x, similar to your estimate. The others on the team similarly underestimated. Unlike some of the others in this thread stating that meetings, customer interference, bugs, etc. caused this, it was purely our gut instinct that led us astray. Since then, I take that into consideration when possible and adjust estimates accordingly, when it matters. Like others mentioned, the primary reason for this average gut-instinct underestimate is unforeseen complexity and hardships. I'm not aware of any scientific reason that it is specifically 2-3x on average; it isn't like we have the technology or science to determine something so complex.

Miguel Paraz, Student of Software Engineering

Estimates depend on the list of features - which changes - and are made by people who don't have a full understanding of how to implement them.

Discussion

The essay uses, as an analogy for software estimation, planning a hike down the California coast from San Francisco to Los Angeles. The author illustrates all the sorts of things that are likely to go wrong. The comparison to software development projects is implicit, but it does a good job of showing how a 10-day estimate can quite reasonably balloon to 60 or 70 days once a project has started.

Some of the more pertinent comments pointed out things like "Developers are also the only group where they are asked to do something which has never been done before, and tell someone else how long it will take before they even know what actually needs to be done." (Devdas Bhagat)

Several others pointed out how enormously the communication overhead between developers and others adds to the time. One commenter (Walt Howard) with 30 years of experience suggested multiplying any rough estimate by the number of developers involved. This supports the expectation that, with more powerful tools like J, one might run a project with a much smaller team.

Learning, teaching and promoting J

However, as we know, writing in J is no silver bullet. We looked at an example of some J code written in a style one ought not emulate. The code attempts something that would be interesting if it worked - see "Arc Consistency for Constraint" below - but apparently it does not.

This does not appear to be an isolated case in the world of software. We next looked over a (favorable) review of a book (on PHP) - see [1] below - that was notable for the multitude of errors it pointed out. One interesting comment in the review was that although there was extensive example code, "[s]ome of it is concise enough so as not to distract from the narrative flow, but far too many examples involve much more code than necessary. This at first glance might seem to be an advantage, but it actually makes it more difficult for the reader to see the parts of the code relevant to the topic at hand."

We fans of concise code are not surprised by such a conclusion.

Some Other Languages

We looked briefly at the Lobster programming language - see LobsterGameProgrammingLang.pdf in Materials below for more:

Lobster is a game programming language. Unlike other game making systems that focus on an engine/editor that happens to be able to call out to a scripting language, Lobster is a general purpose stand-alone programming language that comes with a built-in library suitable for making games and other graphical things.

We also briefly considered the Harlan language - see HarlanGPUProgrammingLanguage.pdf in Materials below:

The young, declarative and domain-specific Harlan programming language promises to simplify the development of applications that run on the GPU. The language syntax itself is based on Scheme, a dialect of the Lisp functional programming language.

Astronomical Calculations on a Phone

We noted with amusement and awe the following communication from the astronomer J. Patrick Harrington.

from:	 J. Patrick Harrington <jph@astro.umd.edu>
to:	 Programming forum <programming@jsoftware.com>
date:	 Fri, Jun 14, 2013 at 12:36 PM
subject: [Jprogramming] Limits of J iOS - not!

I continue to be surprised by the ability of J on the iPhone to run rather substantial programs. I have a J version of a program to compute photoionization models of planetary nebulae. It has a main section of 500 lines or so and calls about 30 other verbs and data sets. One of these (2D grids of the collisional cooling by each of 16 ions) has dimensions of 55 250 16 4. The program integrates a system of 28 differential equations.

Last night, expecting the worst, I loaded it onto my iPhone (my jailbroken iPhone 4S; I used the freeware "Fugu" to make transfers from my MacBook Pro). And the thing runs! I do have to make use of the unofficial "Insomnia.app" to keep the phone from going to sleep during the computations, which can take tens of minutes. I'm just writing this to encourage others to explore the limits of J on your phone - you may be surprised.

Patrick

Materials

-- Devon McCormick (talk) 18:00, 7 February 2022 (UTC)