# NYCJUG/2014-03-11

introduction to J in 5 minutes videos, example of terse versus prolix expression of algorithms, logical proof using J, array-thinking to avoid errors, computer language cognitive tool, parallel processing examples wanted, translate between J and Excel

Location:

Regular monthly meeting of NYCJUG on Tuesday, March 11th, 2014.

## Meeting Agenda for NYCJUG 20140311

```
1. Beginner's regatta: what demos should we have to entice beginners?
See "J in 5 Minutes.doc".

Can we show the positives of J compared to more prevalent languages?
See "The Prolixity of C++".

Getting more digits in a continued fraction: see "Phi for All".

2. Show-and-tell: J as a logic language: see "How I use J for Learning
about Logic.doc" and "A Proof in J.doc".

3. Advanced topics: array thinking - could it help avoid errors like the
Apple iOS "goto bug"?  See "AppleiOSEncryptionGotoBug.doc".

Computer languages as cognitive tools: see "Computer programming languages
as cognitive tools.doc", "On the Social Value of 'Programming for All'",
and "Scientists Begin Looking at Programmers' Brains".

4. Learning, teaching and promoting J, et al.: an example of parallelism
like this one in Matlab "Birthday problem submitted remotely in parallel.doc"?

An Excel rosetta: LINEST - how to calculate standard error for multiple linear
regression?  See "LINEST function in Excel.doc" and "Standard Error for
Multiple Linear Regression.doc".
```

## Proceedings

We talked about the recent set of videos from Martin Saurer intending to introduce J to people in small pieces and the relative utility of different introduction methods.

We looked at how a lengthy expression of an algorithm can conceal how it works compared to a terse expression.

We considered some essays on pseudo-mathematical proofs using J and how its conciseness helps simplify expressions and substitute equivalent expressions to help understand formulas.

On the broader topic of computer languages in general as cognitive tools, we discussed some of the controversy surrounding the promotion of learning computer languages as substitutes for learning foreign languages and how suitable programming is as a widespread educational goal. We also looked at some recent research showing that programming primarily activates linguistic areas of the brain.

Looking at an example from Matlab in parallel programming of a simple simulation, we discussed the need for J to have similar tutorials to emphasize the ease of using it for parallelism.

Finally, we discussed the various Rosetta projects, such as the translation between Excel functions and J equivalents, as well as the desirability of tackling harder mathematical problems for which there is still very little available on the 'Net.
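As a sketch of what such a rosetta entry might look like (our illustration, not from the meeting; the names and data here are hypothetical), LINEST's regression coefficients and their standard errors can be computed in J from the normal-equations machinery:

```
NB. Hedged sketch of a LINEST-style fit: y = X b, plus coefficient standard errors.
X =. 1 ,. 1 2 3 4 5                NB. design matrix: intercept column, one predictor
y =. 2.1 3.9 6.2 7.8 10.1          NB. observations
b =. y %. X                        NB. least-squares coefficients (dyadic %.)
r =. y - X (+/ .*) b               NB. residuals
s2 =. (+/ *: r) % (# y) - {: $ X   NB. residual variance with n-p degrees of freedom
se =. %: s2 * (<0 1) |: %. (|: X) (+/ .*) X   NB. std. errors: sqrt of diag of s2*(X'X)^_1
```

Dyadic `%.` does the least-squares solve directly; the last line inverts `X'X` and takes its diagonal with the `(<0 1) |:` idiom.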

## Beginner's Regatta

### J in 5 Minutes

Recently, we’ve seen a flurry of activity from people producing videos based on the theme of “J in 5 Minutes”.

```
from:  Henry Rich HenryHRich@nc.rr.com
to:   Programming forum <programming@jsoftware.com>
date:  Fri, Feb 14, 2014 at 7:00 PM
subject: [Jprogramming] J in 5 minutes
```
. Ian Clark's NuVoc project is a great start at making J easier for newcomers.  As part of that effort, the FrontPage of the Wiki now has a section aimed at newcomers.

. As Ian observed, a newcomer's first 5 minutes with J will be decisive in establishing their attitude towards the language. As things stand, it takes a serious geek to take a shine to J in 5 minutes. Just between us geeks, I wish there were more of us, but that's not the way to bet.

. No, we need a snappy demo: an application that everyone can relate to, showing how we can code something meaningful and get a pretty display in under 5 minutes. Ideally it should be a YouTube video, with an accompanying Lab so the interested user can reproduce the results.

. I call for somebody, or a small group, to produce /J in 5 Minutes/, the demo that really makes the case for the language. The job requires creative and production skills that I lack, but I will be willing to write code to further the project.

I pitched in with some links to things I’ve done.

. Here's a couple of stubs I've added recently based on what I think beginners might care about:

. I've written and presented (or not presented) a number of talks to different groups:

. Also, here are some examples of writing J code:

. Additionally, there are any number of items in the NYCJUG pages, including compilations of beginner materials from other languages.

Bob Therriault responded with a link to a promotional brochure for Python to give us some ideas about what the Python community defines as the “sizzle” in that language: http://brochure.getpython.info/media/releases/prerelases/psf-python-brochure-vol-1-final-content-preview .

Following is an example from that brochure.

Murray Eisenberg responded with links to some demos for Mathematica, which are impressive and which give a pitch that speaks to some of the strengths of J: quick application development:

Below is a screen-shot from one of these demos.

The slick demo shows the code being entered quickly, a character at a time as if being typed, all while the little pie-chart timer progresses. Don Guinn commented on a possible downside of this slickness:

. But what really bothers me about demos like this is that they look so easy when they do it, but if I were to try to do it I wouldn't know where to start. He implied that one could do it without knowing much of anything of their system. I really get tired of videos like this where they type really fast and it looks so easy if one just knew their system well, but I usually don't. If I was presented that screen and wanted to do what he did I wouldn't have a clue what to do.

. We need to present similar videos on J, but somehow we need to make it obvious and logical as to what to do. His video was neat, but could I do it as quickly and easily as he did it without putting in hours, possibly days learning their system? I doubt it.

Raul mentioned the numerous good J examples on Rosetta Code, and Joe provided a list of several examples worth examining:

. I think the ultimate 5 minute experience is a combination of:

. 1. Video

http://www.youtube.com/watch?v=WBXsCeW9qfc (we could do the same with the latest websockets implementation fairly easily, I think)

. 2. REPL

. 3. Some simple examples:

. 4. Cheat sheet / quick reference

Henry elaborated:

. What I am thinking we want is 5 minutes that leaves the user thinking, "Wow. That was cool. They did all that and it was so fast and short. I want to learn how they did that." The demo should make extensive use of libraries and packages to get a job done. It needs to be a job that takes some coding - something that would take pages of C - but one where the J program is short. You don't have to write the J program in 5 minutes! We can have the program already written, but you should be able to type it, run it, and see flashy results in that time.

He also mentioned:

. My idea about that is, we need to appeal to young programmers. The more experience people have with scalar languages, the less able they are to learn J. The more experience they have with other languages in a class with J, the less they need to learn J.

. The application needs to be of obvious interest to a non-mathematical, non-financial user. My target would be a scientist/engineer/IT person who has a computation to perform and no canned package to do it, so they have to write a little code.

Bob T. suggested leveraging the “calculator on steroids” aspect of J, opining that many students would benefit from J’s advantage in an experimental approach to math.

Jim Russell pitched in with these suggestions:

. One "teaser" ought to be a list of the very many powerful J features that are unique to the language. Then a "getting started" intro to the JSoftware site: the existence of, and how to search, the site, the wiki, the forums, and the code. Then perhaps a guided tour, with links, of some of the remarkable highlights, including a comprehensive glossary for when they (we) get hopelessly confused.

R. E. Boss injected a note of disagreement with the concept of learning “…J in 5 minutes or presenting different types of nouns in different colors or formats.” He continued:

. IMHO the steep learning curve of J is very useful (and, who knows, intended), it selects the right kind of people. That's how evolution works. To paraphrase Mark Twain: "If we try to convince stupid people, they will drag us down to their level and beat us by experience". So let's not try to make learning J easier, we will end up simplifying the language to a level we don't want.

. And for people who want to teach J to kids, read http://www.scientificamerican.com/article/why-gaming-could-be-the-future-of-education/ .

There were a number of other, useful suggestions, but Joe was the first one to put his money where his mouth is by producing the video here - http://www.youtube.com/watch?v=qXFpgYvbogw - in which he demonstrates using JHS to set up an application to read stock prices from Yahoo and graph them. It was not as slick as many of the demos to which people had posted links in the discussion, but it flowed well, using the style of voice-over combined with entering code on-the-fly.

### Phi for All

Linda Alvord posed a couple of questions about getting an extended number of digits from a continued fraction evaluation for the Golden Ratio, "phi" or "φ".

```
from:    Linda Alvord lindaalvord@verizon.net
to:      programming@jsoftware.com
date:    Mon, Mar 10, 2014 at 9:31 PM
subject: Re: [Jprogramming] Approximating e
```

Thanks for your hints. I always wanted to get rational approximations for the Golden Section.

```
   {.|.+`%/\1x, 300#1            NB. Could be {:(+%)/300$1x
26099748102093884802012313146549r16130531424904581415797907386349
32#'O'                        NB. Match to number below for visual
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO NB. indication of 32 digits.
{.|.+`%/\1x, 400#1
734544867157818093234908902110449296423351r453973694165307953197296969697410619233826
NB. 1.61803...
```

```
from:    Devon McCormick devonmcc@gmail.com
date:    Mon, Mar 10, 2014 at 9:55 PM
```

Show 50 digits of approximation of phi

```
   0j50":{.|.+`%/\1x, 100#1     NB. for 100 terms
1.61803398874989484820350851924118133676198756188312
0j50":{.|.+`%/\1x, 200#1     NB. for 200 terms
1.61803398874989484820458683436563811772030781841854
0j50":{.|.+`%/\1x, 300#1     NB. for 300 terms
1.61803398874989484820458683436563811772030917980576
```

Generalize to x digits of y 1s approximation of phi

```
   nDigitsPhi=: ((0j1 * [) ": [: {. [: |. [: +`%`:3\ 1x , 1 $~ ])"0
50 nDigitsPhi 10 20
1.62500000000000000000000000000000000000000000000000
1.61797752808988764044943820224719101123595505617978
```

Where they first differ tells us how precise each approximation is:

```
   2=/\50 nDigitsPhi 100*>:i.3
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0
```

Positions of divergence when we increase the number of terms by 100 each time:

```
   0 i.~"(0 1) 2=/\99 nDigitsPhi 100*>:i.5
22 43 64 85
```

First difference of the points of divergence:

```
   2-~/\22 43 64 85
21 21 21
```

So, we get about 21 digits for each 100 ones.

---

```
from:    Linda Alvord lindaalvord@verizon.net
date:    Mon, Mar 10, 2014 at 10:07 PM
```

... If I have 32-bit numbers, when does this information become fiction? Also how can I get the best possible and correct decimal approximation from these rational numbers?

---

```
from:    Devon McCormick devonmcc@gmail.com
date:    Mon, Mar 10, 2014 at 10:54 PM
```

It doesn't matter if you have a 32-bit system: J's extended precision is not limited by that. The digits you were seeing in the earlier blurb I wrote are correct and you should be able to replicate them on your system.

---

```
from:    Roger Hui rogerhui.canada@gmail.com
date:    Tue, Mar 11, 2014 at 2:11 AM
```
```
. > If I have 32-bit numbers, when does this information become fiction?
. > Also how can I get the best possible and correct decimal approximation
. > from these rational numbers?
```

32-bits give you about 10 decimal digits. (0 j. n) ": x displays x to n decimal places. Where the following two outputs agree tells you how many digits for +%/401#1x can be trusted.

```
    _100]\(98$' '),0j200 ": +`%/1x, 400#1
1.61803398874989484820458683436563811772030917980576286213544862270526046281890244970503723472931369481774505251856655015609156645743553332885289608910521042249892203715062899055972967323443765486133617

_100]\(98$' '),0j200 ": +`%/1x, 500#1
1.61803398874989484820458683436563811772030917980576286213544862270526046281890244970720720418939113748475134846000035278501181299327946966700279337151010482725318315024081083474561817534485930797695298
```

In fact, the alternate version

```
   nDigitsPhi=: ((0j1 * [) ": [: {: [: (+ %)/ 1x $~ ])"0
```

is considerably more efficient.

```
   0 i.~"(0 1) 2=/\199 nDigitsPhi 100*>:i.5
42 84 126 167
2-~/\42 84 126 167
42 42 41
```

In fact, it's about twice as efficient.
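A quick check (our addition, not from the original thread) of where the factor of two comes from: the `(+ %)` hook advances one continued-fraction level per atom of the list, while the gerund form `` +`%/ `` consumes roughly two atoms per level, so it needs about twice as many 1s to reach the same convergent:

```
   {: (+ %)/ 5 $ 1x          NB. five atoms reach the convergent 8r5
8r5
   {. |. +`%/\ 1x , 8 # 1    NB. the gerund form needs about twice as many
8r5
```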

## Show-and-tell

### How I Use J for Learning about Logic

The following is extracted from here.

J is a rather unusual programming language and mathematical notation. If you hang out in the ##logic IRC channel (see the sidebar here in /r/logic), you may have seen me using the j evaluator bot to demonstrate or experiment with propositional or modal logic.

The logic.ijs library I use is up on my github account, but the two most interesting lines are the definition of given and the declaration of five boolean arrays:

```
   given =: ,@([: _:^:(2=#)@ ~."1  [ #"1~ *./@])
'p q r s t' =: |: #: ,. i. 2 ^ 5
```

I told you it was an unusual language. :)

These two lines, combined with a handful of J's primitives, are all you need to experiment with and solve many different problems in propositional logic. I'd be happy to explain the code in another post, but for now I just want to show how you can use them.

J Crash course: J is open source, and you can download it from jsoftware.com. Just run jconsole and copy and paste the two lines above if you want to follow along. The text I'm typing in is indented 3 spaces, and J's response for each input follows below. The word NB. marks the start of a comment.

```
   p,q,r,s,t               NB. every combination of 5 boolean values
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
```

Each of the variables I defined is an array of 32 bits. (Actually, they're 1x32 matrices, so they line up nicely when you use the comma.)

In the example above, the first result line is the value of p, the second line is the value of q, and so on. You might notice that the columns correspond to the binary representations of the integers 0 through 31. Basically, these cover every possible combination of 5 Boolean variables.
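To see the binary-counting pattern at a glance, here is the same construction for just three variables (a smaller illustration, not from the original post; the `,.` in the real declaration, which keeps each variable a 1-row matrix, is dropped here for a compact display):

```
   |: #: i. 2 ^ 3      NB. columns count 0..7 in binary; rows are the 3 variables
0 0 0 0 1 1 1 1
0 0 1 1 0 0 1 1
0 1 0 1 0 1 0 1
```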

```
   p <: q                   NB. 'p implies q'. (Can also be written 'p ! q')
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
p +. q                   NB. 'p or q'
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
p *. q                   NB. 'p and q'
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
p *. -.q                 NB. 'p and not-q'
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
(-.p) *. q               NB. 'not-p and q'
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
-. p *. q                NB. 'not (p and q)'
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
p *: q                   NB. 'p nand q'
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
p +. -.p                 NB. some things are always true
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
p *. -.p                 NB. some things are never true
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
(p *. -.p) given ''      NB. 'given' can summarize the results
0
p given ''               NB. If it can't deduce an answer, it shows '_'.
_

(p,q) given ''           NB. You can query multiple variables.
_ _

(p,q) given p            NB. Given p, it knows p=1, but q is still unknown.
1 _

(p,q) given p,q          NB. Commas on the right are treated as 'AND'.
1 1

(p,q) given p, p <: q     NB. Given p=1 and p implies q, it can deduce q for you.
1 1
NB. But, it needn't be so simple:
(p,q,r,s,t) given ((-.s) <: q), (q <: (r *. t)), (-.t), (p=r), q~:p
1 0 1 1 0
```

I find that (+., *., -., <:, =, ~:, !) are the primitives I use most often, but you can write all 16 logical connectives quite easily.

The symbols <: and ! (literally, "less than or equal to" and "combinations") are both equivalent to logical implication given boolean arguments. Likewise, <. and >. ("min" and "max") are stand-ins for *. and +. ("and" and "or"), respectively.
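For instance (our quick check, not from the original post), min and max of booleans reproduce the AND and OR truth tables:

```
   0 0 1 1 <. 0 1 0 1     NB. min acts as AND on booleans
0 0 0 1
   0 0 1 1 >. 0 1 0 1     NB. max acts as OR
0 1 1 1
```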

Having two symbols is nice because you can use one for propositions and the other for inference rules:

```
   ((p<:q) *. (q<:r)) ! (p <:r)    NB. this says if (p->q) and (q->r), then you can deduce (p->r)
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
```

Together, the boolean primitives, the canned variables, and the given verb make solving many basic logic problems a snap.

For example, you can solve knights and knaves puzzles simply by figuring out how to restate the facts in J.

I'm happy to demonstrate if anyone would like, but I thought I'd give someone else a chance to try it first. :)
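As a worked example of the knights-and-knaves idea (our own illustration, not from the post): suppose A says "we are both knaves." Let p mean "A is a knight" and q mean "B is a knight"; a knight's statement is true and a knave's is false, so p must equal the truth value of A's statement:

```
   (p,q) given p = (-.p) *. -.q    NB. so A is a knave and B is a knight
0 1
```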

Edit: made it show all 5 values and described how to read the results.

abathologist asked:

Aside from the niftiness of doing it with your own implementation, and the novelty of learning a weird looking language (which are already good reasons to experiment with it) I wonder if there are any practical advantages here over just using Prolog, or maybe a proof assistant like Coq?

tangentstorm replied:

What I've shown here so far is much weaker than Prolog or Coq, because it's limited to propositional logic, whereas in those other languages you have access to symbolic predicates.

In Prolog you can say:

```
mortal(X) :- man(X).  % all men are mortal
man(socrates).        % socrates is a man
```

It can then deduce that socrates is mortal, symbolically.

You can't express a symbolic relationship with my tool (yet!) so you have to reduce it down to simple propositions.

```
   mortal =. p
man    =. q
NB. let s mean 'a man named socrates exists'
mortal given s, (s <: man), man <: mortal
1
```

I suppose this is saying something like if socrates exists, and socrates is a man, and men are mortal, then a mortal exists, so it's not really the same.

On the other hand, there are two advantages I can think of: first, this implements classical logic rather than constructive logic, so it's easier to demonstrate that something is false, rather than unknown or just not proven yet.

```
   p given q, p <: -.q            NB. '-.' means 'not'
0
(-.p) given q, p <: -.q
1
```

Second, you can easily see that two statements are equivalent, because the answer is a truth table.

```
   p <: q    NB. p implies q
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1

q >: p    NB. q is implied by p
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
```

tangentstorm added:

BTW, I don't think of this as "my own implementation" of logic so much as a way to make what's already there in J a little nicer. My examples have all been truth tables for 5 inputs, but most only use 1 or 2. It's often simpler to just ask J for the truth table directly:

```
   0 0 1 1 +. 0 1 0 1
0 1 1 1
0 0 1 1 *. 0 1 0 1
0 0 0 1
```

Or you can use /~ to make a table:

```
   +. /~ 0 1
0 1
1 1
*. /~ 0 1
0 0
0 1
```

The /~ is actually two separate words. The / adverb modifies a verb so that it creates a table from its left and right arguments:

```
    0 1 +./ 0 1
0 1
1 1
```

Adding ~ makes the verb reflexive, meaning it will re-use the right-hand argument on the left side too:

```
   +./~ 0 1
0 1
1 1
```

One interesting thing about J, though, is that almost everything in the language has two meanings, depending on whether you give it one argument or two.

For example, it's probably not surprising that -y means "negative y" while x-y means "x minus y". But it might be surprising that every other word in J works the same way.

```
   0 0 1 1 *: 0 1 0 1   NB. dyadic (2-argument) *: means 'nand'
1 1 1 0

*: 0 1                NB. monadic *: is the boolean identity function.
0 1

*: 0 1 2 3 4 5 6      NB. well, okay really it means square :)
0 1 4 9 16 25 36
```

What's interesting is that if you use ~ but also use an argument on the left, then what ~ does is flip the arguments.

```
   2 %  1    NB. division
2

2 %~ 1
0.5
```

In my other comment, I ended by showing that p <: q and q >: p have the same truth table. Here's a simple proof that <: is the same as >: with the arguments reversed:

```
   (<: /~)  0 1      NB. truth table for 'implies'
1 1
0 1

(>: /~)  0 1      NB. truth table for 'is implied by'
1 0
1 1

(>:~ /~) 0 1      NB. again, but with the arguments reversed.
1 1
0 1
```

A train of three verbs is called a fork. For verbs f, g, h and nouns x and y, the phrase:

```
   x (f g h) y
```

means:

```
   (x f y) g (x h y)
```

So now, here is a 1 line proof (by truth table) that 'implies' is equivalent to 'is implied by' with the arguments reversed.

```
   (( <: /~ ) = ( >:~ /~ )) 0 1
1 1
1 1
```

And I think that's kind of neat.

### A Proof in J

The following is from here.

```
NB. The Quadratic formula
NB. --------------------------------------------
NB. In a more traditional syntax (maxima),
NB. a quadratic equation looks like this:
NB.
NB.      a*x^2 + b*x + c = 0
NB.
NB. and the quadratic formula (which is just the result
NB. of solving the above equation for x) looks like this:
NB.
NB.     x = (-b + sqrt(b^2 - 4*a*c)) / (2*a);
NB.
NB. (the + should be ± but it doesn't really matter)
NB.
NB. Now, let's derive this formula using J syntax.
NB. First, we will plug in our randomly generated data in
NB. order to check each step at runtime. We will continue
NB. using the same names for the coefficients:

(a =. x:1) [ (b =. -x:nms) [ (c =. x:nmp)   NB. p9. given

NB. Again, the x: is just telling j to use extended precision
NB. values, so that floating point issues don't mess up division.

A     0 = n p.~ c, b, a                     NB. p10. ← p8 by p9

NB. The line above is the 'proper' j syntax for 'a*x^2 + b*x + c = 0'
NB. and is quite handy for expressing specific polynomial equations,
NB. but we will write the expanded form. (Compare p11..p18 to j0..j7 above)

A   0 = n ([: +/ ] * [ ^ [: i. [: #) c,b,a  NB. p11. ← p10 by definition of p.~ in j1
A   0 = +/ (c,b,a) * n ^ i. # c,b,a         NB. p12. ← p11 by application
A   0 = +/ (c,b,a) * n ^ i. 3               NB. p13. 3 ← # c,b,a
A   0 = +/ (c,b,a) * n ^ 0 1 2              NB. p14. 0 1 2 ← i. 3
A   0 = +/ (c,b,a) * (n^0), (n^1), (n^2)    NB. p15. ← p14 by distribution of n&^
A   0 = +/ (c*n^0), (b*n^1), (a*n^2)        NB. p16. ← p15 by distribution of (c,b,a)&*
A   0 = (c*n^0) + (b*n^1) + (a*n^2)         NB. p17. ← p16 by distribution of +/
NB. --------------------------------------
A   0 = c + (b*n) + a*n^2                   NB. p18. ← p17 by simplification of ^

NB. Now let's derive the quadratic formula.
NB. (roughly following the derivation at http://en.wikipedia.org/wiki/Quadratic_formula )

A                        0 = c + (b*n) + a*n^2                NB. q0. ← p18. (but also just any quadratic equation)
A                        0 = (c % a) + (b * n % a) + (n^2)    NB. q1. ← q0 by (% & a) (given that a ~: 0)
A                        0 = (c%a) + (b*n%a) + (*:n)          NB. q2. ← q1 by *: (square)
A                (- c % a) = (b * n % a) + (*:n)              NB. q3. ← q2 by subtracting (c % a)
A    ((*: -: b%a) - c % a) = (*: -: b%a) + (b * n%a) + (*:n)  NB. q4. ← q3 ('completing the square')
A    ((*: -: b%a) - c % a) = (*: n + b % +: a)                NB. q6. ← q5 factor the square
A (lhs=.(*: n + b % +: a)) = (*: -:b % a) - (c % a)           NB. q7. ← q6 swap sides to reduce parens
A                    (lhs) = (*: b % +:a) - (c % a)           NB. q8. ← q7 (-:x % y) <--> (x % +:y)
A                    (lhs) = ((*:b)% *:+:a)-(c % a)           NB. q9. ← q8  distribute *:
A                    (lhs) = ((*:b)% *:+:a)-(c * a % *:a)     NB. q10. ← q9 rightmost term * a % a
A                    (lhs) = ((*:b)% *:+:a)-(c*a*4 % 4**:a)   NB. q11. ← q10 rightmost term * 4 % 4
A                    (lhs) = ((*:b)% *:+:a)-(c*a*4 % *:+:a)   NB. q12. ← q11 4**:x <--> *:+:x
A                    (lhs) = ((*:b) - c*a*4) % (*:+:a)        NB. q13. ← q12 factor out %(*:+:a)
A        (*: n + b % +: a) = ((*:b) - c*a*4) % (*:+:a)        NB. q14. ← q7 restore lhs

pm =. (+,-)  NB. ± operator
A             1 _1 =   pm 1         NB. monadic sense
A             6  4 = 5 pm 1         NB. dyadic sense

A           (n + b % +: a) = ((+,-) %:(*:b)-c*a*4) % +:a      NB. q15. ← q14 %: (sqrt) of both sides
A                       n  = (-b (+,-) %: (*:b)-c*a*4) % +:a   NB. q16. ← q15 by (- b % +: a)
NB. ----------------------------------------------------------
A               n  = (b (+,-) %: (*:b)-c*a*4) % +:a

)

qf =: 3 : 0  NB. the quadratic formula
'c b a' =. y
(b (+,-) %: (*:b)-c*a*4) % +:a
)

NB. todo: now plug in the values and solve the original problem :)

NB. verification
NB. ---------------------------------------------
verify''
derivation''
```
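As a quick sanity check of the final formula (our addition, not in the quoted file; note this standalone version negates b explicitly, so it works for coefficients given in the usual sign convention), applied to x^2 - 5x + 6, whose roots are 3 and 2:

```
   qroots =: 3 : 0
'c b a' =. y
((-b) (+,-) %: (*:b) - 4*a*c) % +:a
)
   qroots 6 _5 1      NB. roots of x^2 - 5x + 6
3 2
```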

---

```
from:	 Michal Wallace michal.wallace@gmail.com
to:	 chat@jsoftware.com
date:	 Sun, Mar 2, 2014 at 5:27 AM
subject:	 Re: [Jchat] Proof: a unique aspect of j.
```

I'm not sure this counts as a proof, exactly, but here's a derivation of the quadratic formula in J: https://github.com/tangentstorm/tangentlabs/blob/master/j/quadratic.ijs#L132

I also use this quite a bit on irc: https://github.com/tangentstorm/tangentlabs/blob/master/j/logic.ijs

Most of that file is just assertions of the basic laws of propositional logic.

[The section elided here is more fully elaborated in “How I use J for Learning about Logic” (above)]

```
   (p,q) given p, p <: q NB. Given p=1 and p implies q, it can deduce q for you.
1 1
NB. But, it needn't be so simple:
(p,q,r,s,t) given ((-.s) <: q), (q <: (r *. t)), (-.t), (p=r), q~:p
1 0 1 1 0
```

You can use these canned variables and the 'given' verb to solve many logic problems quite easily. For example, you can solve these "knights and knaves" puzzles simply by figuring out how to restate the facts in J: http://philosophy.hku.hk/think/logic/knight.php

---

```
from:  [[User:Raul Miller|Raul Miller]] rauldmiller@gmail.com
date:  Mon, Mar 3, 2014 at 2:06 AM
```

I have not read the full proof, yet, but I see a flaw:

```
   a1 =: _100 + rnd 27 $ 200
```

This should be

```
   a1 =: _100 + 27 rnd 200
```

Otherwise you might have multiple values that are the same in a1, and if one of those is selected by ij that violates your condition b1 (this happens when you assign a value to 't').

Also, I do not understand what n and m represent, so I had to stop reading the proof when I got to that point.

Thanks,

---

```
from:  Michal Wallace michal.wallace@gmail.com
date:  Mon, Mar 3, 2014 at 3:39 AM
```

Ah, yeah... This was just something I happened to have lying around, and I didn't actually set out to write the quadratic formula.

That file came out of a discussion where someone asked the following questions:

1. There are two arrays, (N and M) that contain the same numbers, except in different (random) orders, and one of the arrays (let's say M) contains one extra value. The extra value can be isolated in O(n) time. How?

2. Same setup, but now M contains /two/ extra values. These values can still be isolated in O(n) time. How?

I guess I've already kind of spoiled it at this point, so I'll just say the answer to the second problem involves a quadratic equation. (Or at least my answer did.) I knew I could probably solve it directly in J using 'p.' but I was learning about proofs and wanted to brush up on my long forgotten algebra, so I figured I'd try deriving the quadratic formula myself in J.
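A sketch of one linear-time approach to the two-extras problem (our reconstruction under the stated setup, not necessarily the file's solution; the data here are made up): the differences of the sums and of the sums of squares give x+y and x^2+y^2, and the quadratic formula recovers x and y:

```
   N =. 3 1 4 1 5
   M =. 7 3 1 2 5 1 4           NB. N plus the extras 2 and 7, reordered
   s =. (+/ M) - +/ N           NB. x + y
   q =. (+/ *: M) - +/ *: N     NB. x^2 + y^2
   -: s (+,-) %: (+: q) - *: s  NB. roots (s +/- sqrt(2q - s^2)) % 2
7 2
```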

I did some of the work interactively in the j console, but I think I also did quite a bit just by re-arranging lines in my text editor, using the notation. Now that I think about it, I wouldn't be surprised if I never actually ran the code.

Nonetheless, it was actually a really interesting exercise, and I remember finding myself noticing relationships that I'd never picked up on before in conventional notation. Two of them are noted in the comments with the symbol '<-->':

```
  (-:x % y) <--> (x % +:y)
4**:x <--> *:+:x
```

They're both kind of obvious things everybody knows, but for some reason they kind of jumped out at me using the notation.

Like... The first rule would be (2 * x / y ) <--> ( x / y / 2 ) and that just doesn't seem nearly as interesting to me as the visual symmetry when you write it with -: and +: .

In the second one, I remember thinking how ugly the 4**:x looked when all crammed together, and when I realized you could move (4*]) from the left side of *: and get +: I kind of felt like J had taught me something I never knew that I knew about math. :)

Anyway... Consider that file a very rough draft. Perhaps I will clean it up and post it to the wiki next time I'm feeling mathematical. :)

---

```
from:  [[User:Raul Miller|Raul Miller]] rauldmiller@gmail.com
date:  Mon, Mar 3, 2014 at 9:00 AM
```

On Mon, Mar 3, 2014 at 3:39 AM, Michal Wallace < michal.wallace@gmail.com > wrote:

```
 > (-:x % y) <--> (x % +:y)
> 4**:x <--> *:+:x
>
> They're both kind of obvious things everybody knows, but for some reason they kind of jumped out at me
>  using the notation.
```

That is a nice perspective, and yes - I can see what you are saying.

For the other problem, it's perhaps worth noting that you can sort in linear time if you are sorting a sequence of integers with a known minimum and a known maximum. And you can determine the minimum and the maximum of a list in linear time.

Thus:

```
   radixsort=: (] #~ [: +/ =/) i.@(>./)
```

This is actually something of a trick - we are treating the maximum value of the members of the list as a constant rather than as a variable. Also, this version assumes no negative integers (so you'd need to add 100 before sorting and then subtract 100 after, for your example).
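One detail worth noting when trying this (our observation): i.@(>./) generates 0 up to but excluding the maximum, so the maximum elements themselves are dropped; a variant with >: extends the range by one and sorts the whole list:

```
   csort =. (] #~ [: +/ =/) i.@>:@(>./)   NB. >: extends the range to include the max
   csort 3 1 4 1 5 9 2 6
1 1 2 3 4 5 6 9
```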

There's something of a problem reasoning about finite machines using concepts based on infinities. What really matters is the time needed to complete the operation.

Thanks,