From J Wiki

richness of J, teaching languages, learning to program, smoothing graphs, Bayesian induction, publishing J, tacit versus explicit, quantum thought, personal life of pronouns

Meeting Agenda for NYCJUG 20111011

1. Beginner's regatta: problems and possible solutions for bringing beginners
along: see "Lost in the Richness.pdf", "Teaching Languages.pdf" and "How I Failed and
Finally Succeeded at Learning How to Code.pdf" (from "The Atlantic Monthly").

2. Show-and-tell: work in progress: "Smooth Operators": compare the different
versions aimed at different audiences - the more technical "Smooth Operators.pdf"
as opposed to the more general "Smooth Operators - method overview.pdf".

3. Advanced topics: beginning Bayesian statistics - see "Bayesian Induction from Bo.pdf" - and a hint of some more advanced topics possible in "BayesianDynamicLinearModelIntro.pdf".

4. Learning, teaching and promoting J, et al.: a place to publish?  What is the "Journal of J"?  See "Journal of J - Call for papers.pdf", "Journal of J.pdf", and
"Guidelines for Contributors.pdf".

Using J as a tool of thought - see "Tacit Vs Explicit LT impact.pdf".

Miscellaneous: see "Quantum minds.pdf" and "The secret life of pronouns.pdf".

Beginner's regatta

We discussed some of the problems beginners in J encounter due to the strange richness of the language.

Lost in the Richness

from	Ian Clark via
to	Programming forum <>
date	Mon, Aug 15, 2011 at 7:51 PM
subject	Re: [Jprogramming] Scan (expand)

Complete J novices knowing some APL might find this useful (for the first week or so): where '\' is referred to as {backslash}. The following is a deeper introduction to J for APLers. Authoritative, it is more tutorial than reference: it needs to be read through at least once. A far less formal intro to J for (sceptical) APLers is: this emphasises j602 JWD and is less good for j701. It is no good as a language reference as such; the References section is probably the most useful part.

On Mon, Aug 15, 2011 at 1:08 PM, Gian Medri <> wrote:


> Hi!
> To expand a vector APL has a primitive "\" to do expansion.
> b=: 1 0 0 1 0 1
> b\1 2 3
> 1 0 0 2 0 3
> b\'abc'
> a  b c
> I didn't find any primitive in J to do this. How can J solve this problem?
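
One frequently given J answer (a sketch added here for reference, not taken from the thread itself) is the obverse of copy (#): since b # y compresses y, b #^:_1 y expands it, filling with 0 for numeric arguments and with blanks for literal ones.

   b =. 1 0 0 1 0 1
   b #^:_1 1 2 3
1 0 0 2 0 3
   b #^:_1 'abc'
a  b c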


from via
date	Sat, Aug 20, 2011 at 11:38 PM

I am an APL+ user from way back and have run into the limits of what I have (DOS-based V11 and 64-bit Win7 don't get along, and DOSBox is a pain). I know that there are more modern APLs but, since I am retired, I am spending my money on wine, which I can share with my spouse and friends, rather than on software. I have written a J script that deals with calculations of electric and magnetic fields under power lines - all is good. I can plot the results after some cut-and-try work to get what I want.

I recognize the potential of J, but I do have problems with the non-J utilities. I was pointed to smoutput by someone on the forum in response to a query, but there are so many potentially useful "verbs" that are obscure because:

a) they are not well documented;
b) documentation tries to be all-fulfilling in a terse manner rather than starting simply and working up to more complex situations (e.g. to make a simple plot, the documentation gives anal discomfort - the information is there but buried);
c) I am too stupid or old to find what I need quickly. There may be the rub.

("Old" applies: first program in MAD in '61, introduced to APL by Ted Edwards in the early 70's; after once writing a program in Pascal and struggling with C++, I avoided these and their kin because the emphasis was on the details of the program structure rather than the logic and objectives of the program.)

Don Kelly

Teaching Languages

Next, we looked at some suggestions about good examples for teaching languages in general. The following is from a section on introductory programming-language problems in Keith Smillie's article Discovering Array Languages in Vector.

. Most texts used in introductory computing courses give either an introduction to whatever language is currently in fashion or an overview of computing science in which programming is treated somewhat briefly. The programming examples and exercises are carefully and often meticulously presented with the purpose of illustrating the syntax of the language. While many of the problems may have some intrinsic interest, when viewed together they present no continuous narrative which the student can see developing. Unfortunately in some texts many of the examples are artificial or even juvenile.

. An example in one C++ text was a program that gave a prompt for the user to input a “favourite” number and gave the response I think that [whatever number was input] is a nice number. One Java text has as an example a program to print either ho-ho, he-he or ha-ha which was then modified to print yuk-yuk. One programming assignment I have seen required the student to prepare a table of n^7 and 7^n for a range of values of n, an exercise of little interest and of doubtful application. Examples such as these undoubtedly prompted a colleague to remark that most introductory programming courses were as interesting as courses in the conjugation of verbs.

. It is of interest to compare such approaches to teaching programming languages with the teaching of natural languages. We shall give two examples, one in teaching children to understand and read English, and the second in teaching a foreign language to native speakers of English.

. One delightful book intended to introduce English to young children is Richard Scarry’s Storybook Dictionary (Paul Hamlyn, London, 1967), a book I purchased years ago for my young daughter. I now have the pleasure of reading it to her children, much to their own delight as well as to mine. This large format book introduces the child to 2500 words by means of 1000 pictures through the adventures of such colourful characters as Ali Cat, Dingo Dog, Gogo Goat, Hannibal Elephant and Andy Anteater. In the Introduction we are told that “He [presumably girls are included too] will not be given rules. Rather, he will be shown by examples in contexts which completely catch his interest and hold his attention.” If we would only teach programming in the same way!

. The second example is related to the teaching of Japanese, a language which I took up shortly after retirement and which I have pursued doggedly for several years. My periods of despair with Japanese – and there have been many – might best be described by the following paraphrase of the well-known epigram of Samuel Johnson: “An elderly gentleman trying to learn Japanese is like a dog walking on its hind legs. He does not learn well, but one is surprised that he learns anything at all.” However there have been many unexpected pleasures resulting directly or indirectly from my study of Japanese. I have met many interesting people both in Canada and in Japan; I have had several delightful trips to Japan; I have eaten a very large number of most enjoyable Japanese meals; and I have gained just a little understanding of the Japanese people and their history. Also I think that I just may have a happier and fuller personal life.

. Most of my Japanese texts teach the language by the telling of some continuing story which although fictional is intended to be realistic. In my first text, Japanese for Busy People I (Association for Japanese Language Teaching, Kodansha International, 1984) a prominent figure in most of the reading exercises which begin each chapter is Mr Smith (“Sumisu-san” in Japanese), a lawyer working in Tokyo, and we see him as he meets Japanese colleagues and visits some of them in their homes. In another little book, Conversational Japanese in 7 Days (Etsuko Tsujita and Colin Lloyd, Passport Books, 1991) – the title is not to be believed – we are introduced to the Japanese language and culture as we accompany Dave and Kate Williams as they spend a week as tourists in Japan.

. My favourite text is Business Japanese by Michael Jenkins and Lynne Strugnell (NTC Publishing Group, 1993) and is in the well-known English “Teach Yourself Books” series. The story features the Wajima Trading Company in Tokyo and the British company Dando Sports which wants to market its sporting equipment and clothing in Japan through Wajima. We are introduced to various members of the staff at Wajima and learn about the company’s organization and how business operates in Japan. One of the main characters is a Mr Lloyd, marketing manager for Dando, who visits Japan on two occasions. We follow Mr Lloyd as he works with the company and meets some of the staff both at work and socially. Each of the twenty chapters has the same format: a summary of the story so far and another installment of the story; a list of new vocabulary; grammatical notes; exercises; a short reading exercise; and a one-page essay in English on some aspect of Japanese business. The Japanese hiragana and katakana syllabics are introduced at the beginning and the kanji (Chinese) characters a few at a time starting in Chapter 5, and blend well with the rōmaji (Roman) characters which are also used.

. Ken Iverson was motivated, as was mentioned earlier in this article, to develop APL and J because of his concern for the inadequacy of conventional mathematical notation for teaching many of the topics arising in computing. He would return to this theme frequently in his writings, one example being given in A Personal View of APL where he writes that “As stated at the outset, the initial motive for developing APL was to provide a tool for writing and teaching. Although APL has been exploited mostly in commercial programming, I continue to believe that its most important use remains to be exploited: as a simple, precise, executable notation for the teaching of a wide range of subjects.”

. Because of their terseness APL and J are ideal tools for exposition, since much of the detail required in conventional computing languages may be omitted. Of course some introduction, however brief, must be given to either language and its interactive use before it may be used. However, such an introduction need not be much more than given in this article with possibly a few remarks on programs (which are “functions” in APL and “verbs” in J). With this simple introduction one can begin an exposition of the desired topic introducing additional features of APL or J as required.

. Ken Iverson wrote and lectured unceasingly on the use of first APL and then J for the exposition of a variety of topics. One of his earliest works was APL in Exposition (IBM Philadelphia Scientific Center Technical Report No. 320-3010, 1972), a very small technical report beginning with a short summary of the entire language and followed by short accounts of the use of APL in the teaching of various topics such as elementary algebra, coordinate geometry, finite differences, logic, sets, electrical circuits, and the computer. …

. He also published, in print, in releases of J, and on the J Website, a number of “J companions” intended to supplement well-known texts. A typical one published initially in print form was Concrete Math Companion, intended to be read with Concrete Mathematics: A Foundation for Computer Science by R.L. Graham et al. (Addison-Wesley, 1988). One of his projects at the time of his death (on October 19, 2004) was a companion to the encyclopaedic Handbook of Mathematical Functions by M. Abramowitz and I. A. Stegun.

. Another very early publication on the use of APL as a notation rather than a programming language was “The Architectural Elegance of Crystals Made Clear by APL” by Donald McIntyre, then Professor of Geology at Pomona College in Claremont, California (Proceedings of an APL Users Conference, I. P. Sharp Associates, Toronto, Ontario, pp. 233 – 250, 1978) which examined the geometry of the atomic structure of crystals using APL. In the Introduction the author states that “... I introduce notation only as needed for the work in hand, minimizing the computer and machine characteristics. Indeed, I do not mention APL to start with, treating the primary functions as natural extensions and revisions of algebra, and I use no APL text.”

. Finally we should mention the extensive use of APL and J in the exposition of a variety of topics in probability and statistics and in the development of statistical packages in these languages. The conciseness of either language and its interactive implementation make it ideal for the exposition of statistical concepts in the classroom, further practice in the laboratory, and analyses arising in research. Furthermore statistical packages which may be used with very little knowledge of APL or J may be developed with relatively little effort from the material prepared for classroom and laboratory use.

How I Failed, Failed, and Finally Succeeded at Learning How to Code

We considered selections from this narrative from The Atlantic Monthly, Jun 3 2011, 10:19 AM ET, by James Somers.

. Project Euler, named for the Swiss mathematician Leonhard Euler, is popular (more than 150,000 users have submitted 2,630,835 solutions) precisely because Colin Hughes -- and later, a team of eight or nine hand-picked helpers -- crafted problems that lots of people get the itch to solve. And it's an effective teacher because those problems are arranged like the programs in the ORIC-1's manual, in what Hughes calls an "inductive chain":

. The problems range in difficulty and for many the experience is inductive chain learning. That is, by solving one problem it will expose you to a new concept that allows you to undertake a previously inaccessible problem. So the determined participant will slowly but surely work his/her way through every problem.

. This is an idea that's long been familiar to video game designers, who know that players have the most fun when they're pushed always to the edge of their ability. The trick is to craft a ladder of increasingly difficult levels, each one building on the last. New skills are introduced with an easier version of a challenge -- a quick demonstration that's hard to screw up -- and certified with a harder version, the idea being to only let players move on when they've shown that they're ready. The result is a gradual ratcheting up of the learning curve.

. Project Euler is engaging in part because it's set up like a video game, with 340 fun, very carefully ordered problems. Each has its own page, like this one that asks you to discover the three most popular squares in a game of Monopoly played with 4-sided (instead of 6-sided) dice. At the bottom of the puzzle description is a box where you can enter your answer, usually just a whole number. The only "rule" is that the program you use to solve the problem should take no more than one minute of computer time to run.

. On top of this there is one brilliant feature: once you get the right answer you're given access to a forum where successful solvers share their approaches. It's the ideal time to pick up new ideas -- after you've wrapped your head around a problem enough to solve it.

. This is also why a lot of experienced programmers use Project Euler to learn a new language. Each problem's forum is a kind of Rosetta stone. For a single simple problem you might find annotated solutions in Python, C, Assembler, BASIC, Ruby, Java, J and FORTRAN.

. Even if you're not a programmer, it's worth solving a Project Euler problem just to see what happens in these forums. What you'll find there is something that educators, technologists and journalists have been talking about for decades. And for nine years it's been quietly thriving on this site. It's the global, distributed classroom, a nurturing community of self-motivated learners -- old, young, from more than two hundred countries -- all sharing in the pleasure of finding things out.

. * * *

. It's tempting to generalize: If programming is best learned in this playful, bottom-up way, why not everything else? Could there be a Project Euler for English or Biology?

. Maybe. But I think it helps to recognize that programming is actually a very unusual activity. Two features in particular stick out.

. The first is that it's naturally addictive. Computers are really fast; even in the '80s they were really fast. What that means is there is almost no time between changing your program and seeing the results. That short feedback loop is mentally very powerful. Every few minutes you get a little payoff -- perhaps a small hit of dopamine -- as you hack and tweak, hack and tweak, and see that your program is a little bit better, a little bit closer to what you had in mind.

. It's important because learning is all about solving hard problems, and solving hard problems is all about not giving up. So a machine that triggers hours-long bouts of frantic obsessive excitement is a pretty nifty learning tool.

. The second feature, by contrast, is something that at first glance looks totally immaterial. It's the simple fact that code is text.

. Let's say that your sink is broken, maybe clogged, and you're feeling bold -- instead of calling a plumber you decide to fix it yourself. It would be nice if you could take a picture of your pipes, plug it into Google, and instantly find a page where five or six other people explained in detail how they dealt with the same problem. It would be especially nice if once you found a solution you liked, you could somehow immediately apply it to your sink.

. Unfortunately that's not going to happen. You can't just copy and paste a Bob Vila video to fix your garage door.

. But the really crazy thing is that this is what programmers do all day, and the reason they can do it is because code is text.

. I think that goes a long way toward explaining why so many programmers are self-taught. Sharing solutions to programming problems is easy, perhaps easier than sharing solutions to anything else, because the medium of information exchange -- text -- is the medium of action. Code is its own description. There's no translation involved in making it go.

. Programmers take advantage of that fact every day. The Web is teeming with code because code is text and text is cheap, portable and searchable. Copying is encouraged, not frowned upon. The neophyte programmer never has to learn alone.

. * * *

. Garry Kasparov, a chess grandmaster who was famously bested by IBM's Deep Blue supercomputer, notes how machines have changed the way the game is learned:

. There have been many unintended consequences, both positive and negative, of the rapid proliferation of powerful chess software. Kids love computers and take to them naturally, so it's no surprise that the same is true of the combination of chess and computers. With the introduction of super-powerful software it became possible for a youngster to have a top-level opponent at home instead of needing a professional trainer from an early age. Countries with little by way of chess tradition and few available coaches can now produce prodigies.

. A student can now download a free program that plays better than any living human. He can use it as a sparring partner, a coach, an encyclopedia of important games and openings, or a highly technical analyst of individual positions. He can become an expert without ever leaving the house.

. Take that thought to its logical end. Imagine a future in which the best way to learn how to do something -- how to write prose, how to solve differential equations, how to fly a plane -- is to download software, not unlike today's chess engines, that takes you from zero to sixty by way of a delightfully addictive inductive chain.

. If the idea sounds far-fetched, consider that I was taught to program by a program whose programmer, more than twenty-five years earlier, was taught to program by a program.


Show-and-tell

We compared a more technical presentation to a less technical one in which we present some techniques for smoothing out "jagged" curves generated by data to make them more amenable to different kinds of analysis.

Advanced topics

Bayesian Induction

We looked at Bayesian induction as introduced in this e-mail exchange. Another exposition on this subject has appeared in Vector.

from	Bo Jacoby via
to	Programming forum <>
date	Tue, Oct 4, 2011 at 4:52 AM
subject	[Jprogramming] statistical induction

Dear friends in the J-community:

I wish to share with you a formula for Bayesian induction. Your comments and improvements are welcome. Verbs for deduction and induction are defined like this.

   a     =. *`%`:3"2
   b     =. ,: (%:@* -.)
   c     =. (,: , 1:) % +/@]
   deduc =. a@b@c f.
   T     =. -@(+ #)
   induc =. (T@}: , }.)@(T~ deduc T) f.

Consider (following Laplace) an urn containing, say, 50 red balls, 30 yellow balls and 20 green balls. Close your eyes and pick 10 balls out of the urn. How many balls of each color will you get? You cannot know for sure, but the order of magnitude will be 5 red balls, 3 yellow balls, and 2 green balls, computed like this.

   10 (* % +/@]) 50 30 20
5 3 2

The statistical uncertainties are written under the orders of magnitude like this.

   10 deduc 50 30 20
      5      3       2
1.50756 1.3817 1.20605

You get 5 red balls, give or take 1.5; you get 3 yellow balls, give or take 1.4; you get 2 green balls, give or take 1.2. Here are some simple examples.

   1 deduc 1 1 NB. uncertain result
0.5 0.5
0.5 0.5
   1 deduc 2 0 NB. but absolute certainty when both balls have the same color
1 0
0 0
   2 deduc 1 1 NB. or when both balls are picked
1 1
0 0

When the sample is known and the population is unknown, it is called induction. Close your eyes and pick 10 balls out of an urn containing 100 balls. Open your eyes and count 5 red, 3 yellow, and 2 green balls. What can be said about the number of balls of each color in the urn?

   5 3 2 induc 100
46.5385 30.6923 22.7692
12.8279 11.8764 10.8416

There are 47 red balls, give or take 13; there are 31 yellow balls, give or take 12; there are 23 green balls, give or take 11. For fun I chose some examples here where all the results are exact integers.

   1 0 induc 4
3 1
1 1

NB. Even if there are no yellow balls in the sample there may still be some yellow balls in the population. As is well known in the philosophy of science, induction is not absolutely certain. Unless you investigate the whole population of course.

   4 0 induc 4
4 0
0 0
   0 0 0 induc 3
1 1 1
1 1 1
   0 0 induc 6
3 3
2 2
   1 1 induc 18
9 9
4 4
   2 2 induc 12
6 6
2 2
   2 0 induc 62
47 15
12 12

The technical term for 'order of magnitude' is Mean Value, or Expected Value. The technical term for 'statistical uncertainty' is Standard Deviation. For only two colors, deduction is known as the Hypergeometric Distribution. But the induction formula, and its relation to the transformation T, seems not to be known in the statistical industry, although it is very useful.
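
For two colors, the deduc numbers can be checked against the usual hypergeometric formulas. As a sketch (hgmean and hgsd are names introduced here for illustration; x is the sample size, y the vector of population counts summing to N):

   hgmean =. 4 : 'x * y % +/ y'
   hgsd   =. 4 : '%: x * (p * -. p =. y % N) * (N - x) % <: N =. +/ y'
   10 hgmean 50 30 20
5 3 2
   10 hgsd 50 30 20
1.50756 1.3817 1.20605

These agree with the output of 10 deduc 50 30 20 shown earlier.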

Have fun!


from	Raul Miller via
date	Tue, Oct 4, 2011 at 9:19 AM

On Tue, Oct 4, 2011 at 4:52 AM, Bo Jacoby <> wrote:

> Verbs for deduction and induction are defined like this.
>    a     =. *`%`:3"2
>    b     =. ,: (%:@* -.)
>    c     =. (,: , 1:) % +/@]
>    deduc =. a@b@c f.
>    T     =. -@(+ #)
>    induc =. (T@}: , }.)@(T~ deduc T) f.

This looks fun.

I do have some cosmetic suggestions.

First, I prefer =: over =. for definitions that I am going to use, because =. definitions in a script I load from file vanish before I can use them.

Second, I would probably use / instead of `:3 -- I know that J will convert gerund / to gerund `:3 but / is still more concise (and, for me, saves a round trip to the vocabulary page -- but that might not be the case for someone else).
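
Raul's first point can be illustrated with a small hypothetical verb (the names f, g and h are ours): inside an explicit definition, =. makes a local name that vanishes when the definition returns, while =: makes a global one.

   f =: 3 : 0
g =. +:     NB. local: vanishes when f returns
h =: +:     NB. global: still defined afterwards
g y
)
   f 3
6
   h 3
6
   NB. g 3 would now signal a value error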

Also, it's interesting (and educational) to try to put names to each of the intermediate results that arise in this kind of computation.



Learning and Teaching J

We looked at some places where material on J is published: long-time ones such as Vector as well as a new one called "the Journal of J". This latter, online journal aims to be the definitive repository of scholarly articles on the J programming language.

Tacit versus Explicit J

We discussed the issues raised in this e-mail exchange about the advantages and disadvantages of writing J in an explicit as opposed to a tacit form.

Tacit Vs. Explicit – Possible Long-Term Impacts

from	Christopher McIntosh via
to	J Software - Programming Forum <>
date	Mon, Sep 26, 2011 at 8:38 PM
subject	[Jprogramming] Tacit vs. Explicit Paradigm and its Long-Term Impact

(Despite the conversations that have brought us to this point -- and ignoring that scenario altogether) I have a hypothesis that, for the long term, a team in a J-focused environment needs to have a very detailed design-time road map to avoid a possible scenario in its project whereby re-designing one aspect has a negative and significant impact on unrelated aspects of the project. Our team had become interested in J because of highly regarded recommendations about its fit into an XP environment. At the same time, it appears that folks are suggesting that design decisions must be made earlier (more comparable to the waterfall paradigm). And this is important to the long-term maintainability (in terms of cost and time) of the project.

For example, when considering a function (or function group) and choosing the tacit implementation, I have been advised from 2 (seemingly?) incompatible perspectives.

On one hand, it is my understanding that one of the primary advantages of a tacit design is the ability to abstract a dependency on names. On the other hand, I understand that, in an ambivalent function, such is not the case: that, unfortunately, this won't work for the dyadic half, since it could not distinguish between local x and global x, as x is defined locally in both the verb and the adverb.

And when I look at some implementations, I see that this appears to be the case.

NB.*nl v selective namelist
NB. Form: [mp] nl sel

NB.   sel:  one or more integer name classes, or a name list.
NB.         if empty use: 0 1 2 3.
NB.   mp:   optional matching pattern. If mp contains '*', list names
NB.         containing mp, otherwise list names starting mp. If mp
NB.         contains '~', list names that do not match.
NB.  e.g. 'f' nl 3      - list verbs that begin with 'f'
NB.       '*com' nl ''  - list names containing 'com'
nl1=.(([:4!:1])&(]`(0 1 2 3"_)@.(0=#))) :: cutopen_z_
nlz=:(nl1 : ((4 : 0)nl1)) f.
if. 0 e. #y do. y return. end.

if. #t=. x -. ' ' do.
 'n s'=. '~*' e. t
 t=. t -. '~*'
 b=. t&E. &> y
 if. s do. b=. +./"1 b
 else. b=. {."1 b end.
 y=. y #~ n ~: b
end.
y
)
I see that (on the dyad side of the picture) there is still reference to local x. But this should not be an issue, should it? Since we could go ahead and reference x__anotherlocale. In my prima facie testing, I don't notice the issue. Tests which had failed previously pass with this example.

I realize that the road ahead is a long one to become proficient at recognizing some of the subtleties that, presently, have us perplexed.


from 	Henry Rich
date	 Mon, Sep 26, 2011 at 8:54 PM

I think it is a mistake to use tacit programs for anything that might possibly be rewritten. When you have a well-defined task, you can make it a tacit verb. For normal work, write explicit verbs.

The problem with large tacit systems is just as you said: maintainability. And, there's not much to gain from tacit verbs. The important thing is to get the design right, and implement for ease of debugging and maintenance. That doesn't mean that tacit programming is useless. Whenever you write a hook, fork, or compound verb, you're doing a little bit of tacit programming.
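
As a minimal illustration of that last point (our example, not Henry's), here is the same mean verb written both tacitly and explicitly:

   meanT =: +/ % #               NB. tacit: a fork
   meanE =: 3 : '(+/ y) % # y'   NB. explicit
   meanT 1 2 3 4
2.5
   meanE 1 2 3 4
2.5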

Just don't go overboard. I keep a file with all my well-thought-out tacit programs, which I use as utilities in the (explicit) rest of my code. Perhaps 2% of a large system will be tacit verbs.

The reasons to use J are speed of coding, ease of debugging, robust implementation, ability to fix bugs instantly, expressiveness. To me, large tacit systems forfeit some of those advantages. Pepe Quintana writes large tacit systems. Maybe he can tell you how to do it.


from 	Henry Rich
date 	Mon, Sep 26, 2011 at 9:44 PM

Ah yes, but you can write a large project tacitly before you're fully equipped to do so. I did that on my first J project, and it was a real trial by fire. A hundred tacit verbs, each dependent on the ones above (the application was texture-map filtering for a flight-simulator database).

It worked great. It was fast. It was virtually unmaintainable. I would hate for a new user to try that and give up on J.

It was a great experience, and I'd say it got me about 25% of the way to being proficient at tacit programming, but I think being a stylite is an easier job if self-improvement is what you're after.

J is fast to code and easy to change. Large tacit programs are hard to change. Why go there?

Henry Rich

On 9/26/2011 9:16 PM, bill lam wrote:

> I guess it is too early to ask. Until you can write a large project with
> tacit form only, you do not have to choose.


from 	Don Guinn
date	 Mon, Sep 26, 2011 at 9:55 PM

The use of tacit in this case is to allow access to global x and y which is not a normal type of problem in an application. This one is unusual in that it is actually a mixture of an explicit part within a tacit part. The reference to locally defined x is long after the list of global names defined was found tacitly.

Tacit definitions can get difficult to read, particularly if they are very long. Conversely it is normal for simple tacit expressions to be used extensively in explicit definitions. What is important, as in any programming language, is to document well. There are fewer clues in a tacit definition than in an explicit definition as no names are given other than the definition name.

Tacit definitions are one-liners. This can make them even more difficult to read. One way to make a tacit definition easier to document well is to break it into manageable pieces. Take a recent problem posed: grouping lines based on a given word:

NB.*group v Group lines (]) starting with a line containing keyword ([)
tomatrix=. ,;._2      NB. Separate lines into rows of a matrix
findstring=. [:+./E.  NB. Check if line (]) contains the keyword ([)
selectlines=.<;.1     NB. Group lines starting with keyword
group=:((findstring"1 selectlines ]) tomatrix) f. NB. Put it together

Granted, it is a little overboard on the documentation, as the definition would not be that hard to follow as a one-liner, but by breaking the definition into pieces one can test the pieces more easily before putting it all together. And each part can be commented, as can be done in explicit definitions.
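
For instance, with a hypothetical input (each line must end in the LF fret that ;._2 cuts on):

   text =. 'Chapter one',LF,'first line',LF,'Chapter two',LF,'second line',LF
   $ 'Chapter' group text
2

The two boxes hold the rows of the matrix belonging to each 'Chapter' group.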

But most definitions I make are explicit. Tacit definitions I do make are usually small and relatively simple. And usually I make them tacit as a fun challenge.

Sometimes I will enclose an explicit definition within a tacit part like adding a specification of rank to the definition. But that really messes up the debugger. So even though it makes the definition clearer, it seldom works well. But what makes all the difference in maintaining an application after having not touched it for a year or so is being able to read the programming language, whatever it is. Books and classes teach one how to write a programming language. Unfortunately, they seldom teach one how to read the language.


from 	Bo Jacoby
date	 Tue, Sep 27, 2011 at 4:53 AM

Allow me to comment on Don's text below.

"Tacit definitions can get difficult to read."

Programs are not read - they are deciphered! That is the case with all programming languages. The compactness of J programs makes the deciphering slower per line, but not per unit of functionality.

"What is important is to document well."

The time spent writing documentation would be better spent improving the code, and reading documentation is a waste of time because software documentation is not to be trusted anyway. Every change in a program requires a change in the documentation, so the workload is doubled. Once documentation is made, the software becomes untouchable.

"But what makes all the difference in maintaining an application after having not touched it for a year or so is being able to read the programming language, whatever it is."

Don't maintain a program after a year. If it is not broke, don't fix it. If it is broke, don't fix it either. Don't study software that doesn't work, but write some software that does work.


from 	Don Guinn
date 	Tue, Sep 27, 2011 at 9:50 AM

There are two kinds of documentation. One is to restate what the program does. The other is to tell what you want it to do. Unfortunately my example was a poor one as the comments were simply a restatement of what the code did. The first line is really all the documentation it needed. But I was trying to show a method of breaking up a tacit definition for easier deciphering and commenting.

Code seldom breaks. But requirements change. Operating systems evolve. Take J7, for example. Programs I wrote years ago still work in jconsole, and I am continually amazed at the stability of J applications over the many releases of J. But the GUI programs do not work in JGTK or JHS. Either I go in and decipher all the code line by line, or, if I have a brief description of the overall flow and of what the input and output look like, plus brief descriptions of what each definition is supposed to do including its arguments and returns, I can get back into the application much more easily to fix it. And documentation is not so much for the author of an application as for the poor rookie who has it dumped in his lap when the author is long gone.


from	[Busy] Devon McCormick
date	Tue, Sep 27, 2011 at 3:25 PM

The issue of working around the special status of the names "m", "n", "u", "v", "x", and "y" seems like a long walk off a short pier and the solution seems just as simple:

   Don't do that.

There are only six names - many languages have hundreds of reserved words - and they are pretty crummy names as well.

They're fine in their simple roles as generic place-holders but I don't understand the desire to use bad names: either use no names at all or use meaningful ones. At least pick names with more than one letter - the time you save when you have to do a search will be well worth it.

On the related issue of tacit versus explicit, I, too, find that long tacit expressions are hard to read, though I'm slowly improving my ability to do so. Compare this simple explicit version of a verb

quoteIfSp=: 3 : 0
   ifsp=. '"'#~' ' e. y
   flnm=. ifsp,(y-.'"'),ifsp
)

(where the final assignment is purely documentary) to its tacit equivalent

(] ([,[,~ '"' -.~ ])~ '"' #~ ' ' e. ])

Perhaps a compromise would be useful:

quoteIfSp=: ] ([,[,~ '"' -.~ ])~ '"' #~ ' '&e.  NB.* quoteIfSp: surround name with '"'s if embedded spaces (stupid MS!).
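A quick console check (the test strings are mine) confirms that the compromise behaves like the explicit version:

   quoteIfSp=: ] ([,[,~ '"' -.~ ])~ '"' #~ ' '&e.
   quoteIfSp 'plain'
plain
   quoteIfSp 'has space'
"has space"

The fork computes the prospective quote character ('"' when the argument contains a space, empty otherwise) and appends it to both ends of the argument after removing any double quotes already present.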

In any case, with locales and local assignment, I don't see namespace clutter as being much of a problem.


from	[[User:Raul Miller|Raul Miller]] via
date	Tue, Sep 27, 2011 at 3:48 PM

On Tue, Sep 27, 2011 at 3:25 PM, Devon McCormick ... Or, perhaps:

  quoteIfSp=: '"' ([,],[)^:(' 'e.]) -.&'"'


from	Ric Sherlock via
date	Tue, Sep 27, 2011 at 3:49 PM

I don't disagree with the idea that mixing explicit and tacit gives the best of both worlds.

However, I think that, as with any language, ease of reading explicit vs. tacit has a lot to do with fluency - and, of course, with writing style. Chunking (as was mentioned earlier in the thread) can be very useful:

  ifspc=: ' ' e. ]
  quot=: '"' ,~ '"' , -.&'"'

or in J7:



from	Jose Mario Quintana via
date	Tue, Sep 27, 2011 at 5:36 PM
Henry Rich wrote:

> Pepe Quintana writes large tacit systems. Maybe he can tell you how to do it.

Well, I probably would not tell anybody to do it tacitly, nor in J for that matter (unless he or she is an employee of the firm). However, I can give some insight into why I decided to do so, and I can also say that I have never regretted it; on the contrary, I have always enjoyed the ride.

It has been a decade since I decided to migrate a trading system from APL to tacit J. There were two main reasons: it was intellectually challenging and therefore amusing; and point-free, or pointless (depending on your viewpoint), programming surely would have its benefits (see, for example, and references therein).

A group of us continues to develop, maintain and operate the system without difficulty. We had to create our own tools to facilitate the process because it is apparent that J was not designed to be used, or abused, in this way. However, it has been happily crunching numbers, and characters, this way without any complaints.

I cannot pass up the opportunity to note that after all these years the seemingly innocuous change from x. y. … to x y … remains a mischievous troublemaker. I thought that I was immune to these side effects (see, ) but not so long ago, while experimenting, I stared hopelessly for an awfully long time trying to make sense of something similar to this:

  v=. ]
  7!:2 'v 0'
  w=. v f.
  7!:2 'w 0'

I could have been bitten because I usually smile and move on when I browse reports of problems involving x y n …


from	Tracy Harms via
date	Wed, Sep 28, 2011 at 10:09 AM

I see the removal of possible double-quotes as "data qualification." I like to segregate such things from the main logic. This phrasing is one way to move it to the side:

  quoteIfSp=: (] (],[,]) '"' #~ ' ' e. ])@ ( -.&'"' )

I originally phrased the conclusion as a hook, then decided it's easier to read as a fork. At that point I realized that I don't like shifting my interpretation of ] by context, and propose this one in its place:

  quoteIfSp=: (('"' #~ ' ' e. ]) ([,],[) ])@ ( -.&'"' )

That strays from a couple of conventions, but I like how it emphasizes what I see as the main structure, ([,],[). But this encourages me to make that structure more prominent by cloaking the other aspects in names. This is where that takes me. (I happen to depart from camelCase in the process.)

  make_valid   =: -.&'"'
  maybe_a_quote=: '"' #~ ' ' e. ]
  quote_if_sp  =: maybe_a_quote ([,],[) make_valid

In this form I personally find it as approachable as an explicit definition.
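One way to see the "data qualification" point is to feed the named version a string that already carries quotes (the example input is mine):

   make_valid   =: -.&'"'
   maybe_a_quote=: '"' #~ ' ' e. ]
   quote_if_sp  =: maybe_a_quote ([,],[) make_valid
   quote_if_sp '"has space"'
"has space"

make_valid strips the stray quotes off to the side, so the central ([,],[) structure only ever deals with clean data.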

None of what I've written here applies to Christopher's original question, in my opinion, but comparison of tacit and explicit techniques can be interesting in its own right.

On the matter of working around the special status of the 6 system-sensitive names, I wholly agree that there is no benefit in doing so for production work.



from	Bill Harris via
date	Thu, Sep 29, 2011 at 12:26 AM 	

Bo Jacoby <> writes:

> "Tacit definitions can get difficult to read." Programs are not read - they are deciphered! That is the case with all programming languages. The compactness of J programs makes the deciphering slower per line, but not per unit of functionality.

Far be it from me to weigh in seriously against the comments made here, for many of you have programmed far more lines of J than I have. Still, something in Bo's note led me to comment.

I discovered APL in college through Hellerman's _Digital Computer Systems Principles_. For years thereafter, I used APL as a hardware description language and, to a degree, as a note-taking tool. I have probably programmed for about 10 minutes total in APL, most or all of that in I-APL.

Ken Iverson wrote "Notation as a Tool of Thought" (1980), which seems to support the notion of using these languages as natural languages, as ways to express our thoughts to ourselves, to others, and to computers.

Does anyone do that with J? I do take notes that way sometimes, but not as often as I did with APL. Perhaps that's partially due to my changing professional roles, though I still do analysis. Perhaps it's partially because much of my current analysis is done in R (*). Or perhaps it's because J gave us the ability to craft really powerful, really compact one-line programs that are hard to get right - and hard to keep expressive - without a computer to test them.

I don't think I'd often use explicit programs as a note-taking tool, although I might in some cases. Tacit expressions seem more aligned with the notation as a tool of thought.

How do others react to this notion? Do you regularly take notes in J? In what domains? In tacit / explicit notation? Bill

(*): I much prefer J to R as a programming language, but R has two things I think I need: powerful, standard regression tools (lm, lmer, and connections to JAGS, among other things), and the ggplot2 graphics package. I like and often use J's graphics for work, but I have found ggplot2 useful. I'd code data handling in J and just connect to R for specific library calls, but I've had problems in the past figuring out how to do that reliably. Has anyone made that a regular part of their workflow?

-- Bill Harris


from	[[User:Raul Miller|Raul Miller]] via
date	Thu, Sep 29, 2011 at 5:50 AM

On Thu, Sep 29, 2011 at 12:26 AM, Bill Harris <> wrote:

> Ken wrote Notation as a Tool of Thought (1980?) that seems to support
> the notion of using these languages as natural languages, as ways to
> express our thoughts to ourselves, to others, and to computers.
> Does anyone do that with J?

I do not do so directly, but I often find that I think about problems in terms of J, and that that often helps me focus on relevant issues and useful approaches.

But shouldn't this discussion be in chat, rather than in programming?



Raul ---

from	Ian Clark via
date	Sat, Oct 1, 2011 at 11:44 AM 	
> Personally, most of my "thinking in J" uses my visual/spatial reasoning rather than my verbal/grammatical reasoning.

An old colleague of mine, Dr Nick Hammond, of York University, once designed a psychological "test" or "index" to determine a person's cognitive style (at least, in one important respect), and showed it to be remarkably constant across all aspects of life. It seems to have nothing to do with (general) intelligence, left/right hemisphere orientation, or other (popular) mental qualities ...but that's a research question. Essentially...

* Type A picks up a hammer and then looks for something to hit.
* Type B first selects a nail and then looks for something to hit it with.

Mac/Windows (WIMP) interfaces cater for B's. Unix suits A's. If both are forced to use a command-line language, A's perform measurably better with a syntax resembling: <verb>...<object>, B's with <object>...<verb>. (I can give you a cartload of references... all work paid for by IBM.)

It also turns out that in normal speech A's have difficulty remembering nouns in a hurry (and will resort to placeholders like "whatsit", "gizmo"...) but verbs give no trouble. B's on the other hand are good at quickly coming up with just the right noun. (In fact, this is what the Test is based on.)

B's might well agree with [St] Augustine of Hippo's assertion that the purpose of a university education was to "learn to call everything by its Right Name". A's on the other hand might be impatient with that, and prefer to stress achievement and the scope for it. This is not to say A's have no interest in names of things: they may collect them meticulously and keep lists of them. Possibly as compensating behaviour.

I've always felt Nick's work needed to be more widely known. A schoolteacher would benefit from knowing each pupil's rating on the Test -- as would the pupil, because I don't think it's amenable to training or lifestyle choice, rather the other way round. But ever since Cyril Burt and the IQ scandal, there's been a reluctance among teachers (in England at least) to let a shrink slap a label on their pupils.

But I've benefitted from learning I'm strongly-A (and so I recall was my mother, whose favourite word was "thingmajig", as in: "put it on the thingmajig") -- and I wish my teachers at school had known it too, and guided me accordingly. I'd love to know the test scores of others on the J lists. (If only I could remember its dratted name...!)

Ian Clark ---

from	Johann Hibschman via
date	Mon, Oct 3, 2011 at 9:44 AM

Bill Harris <> writes:

> Does anyone do that with J?  I do take notes that way sometimes but not as often as I did with APL.
> How do others react to this notion?  Do you regularly take notes in J?
> In what domains?  In tacit / explicit notation?

I find that J's dot- and colon-inflected symbols work well when typing into a computer, but poorly when handwriting. When working things out on paper, I mostly use "traditional notation," with a melange of J and APL symbols thrown in when they make sense.

However, speaking for myself, any notes that I want to be able to find in a year have to be electronic. I find J works very well for that, at least for "computation-oriented" notes. It works less well for more symbolic things like stochastic calculus or Bayesian inference.

As an example, I just read a paper on methods of combining multiple econometric forecasts into a smart consensus and used J to write down the different "win" metrics discussed in the paper. It was handy and concise, with one big caveat: I completely ignored any issues with missing data, assuming that all the arrays were dense. That was fine for note-taking, but if I wanted to implement it "for real", I would have to carry around a mask array and update it on each step, which would make the notation much less appealing.

Right now, I'm implementing the "for real" version in R and appreciating the built-in missing-value support.


Miscellaneous Thoughts

We considered a theory that the peculiar behavior of the quantum world is reflected in the common illogic of human thought processes. We also looked at another theory that our use of some of the most common, low-information words - specifically pronouns - gives unexpected insight into how we think about things.


File:How I Failed and Finally Succeeded at Learning How to Code.pdf

File:Lost in the Richness.pdf

File:Teaching Languages.pdf

File:Smooth Operators - method overview.pdf

File:Smooth Operators.pdf

File:Bayesian Induction from Bo.pdf


File:Journal of J - Call for papers.pdf

File:Journal of J.pdf

File:Guidelines for Contributors.pdf

File:Tacit Vs Explicit LT impact.pdf

File:Quantum minds.pdf

File:The secret life of pronouns.pdf

-- Devon McCormick <<DateTime(2015-01-28T00:13:22-0200)>>