Stories/RichardBrown

 Appendix A: A Functional Program Summary
 Appendix B: A 'Higher Level' Functional Version
 Appendix C: J Program in Imperative Style
 Appendix D: Primitive J Functions Used

What is Functional Programming?

Functional programming is a style of computer programming which, in its purest form, involves only functions. (For a short survey article on functional programming see the August 1994 issue of Byte.) If x represents the input data of the program, then the goal is to design a function f such that y = f(x) is the desired output. Thus a program is a function. The function f is built by combining various primitive functions that are provided by the functional programming language. This approach to programming is quite different from the conventional approach (exemplified by assembly language, FORTRAN, Pascal, C, etc.) which may be called imperative programming because a program is a list of commands. In imperative programming, data is stored in memory cells with symbolic addresses and a list of commands is executed in order, usually with some branching or looping. This conventional approach is really a sugaring of machine language for a von Neumann type of computer architecture with stored program, stored data, and step-at-a-time execution of commands. The functional approach is machine independent: functional programs do not have to be rewritten for a different architecture, say, for a parallel machine.

What is J?

J is a programming language developed from A Programming Language (APL) by Ken Iverson (father of APL) and Roger Hui. Unlike APL which uses a special character set, J uses only the ASCII characters. Primitive functions are denoted by single characters like + or by following such characters with a period or colon as in +. or +: . Like APL, J has primitive functions for operating on arrays without using indexes or looping. For example, if A and B are two matrices of the same shape then A+B computes the elementwise sum of A and B , and >./A computes the list of greatest elements of the columns of A .

Why Use J for Functional Programming?

There are many reasons. J has a vast collection of primitive functions to call upon and these extend to arrays easily. More importantly, J has a variety of ways of combining functions that go beyond the usual composition of functions. One of the most important combinations is called the fork. If f and g are monadic functions and h is dyadic then f h g is a monadic function and its meaning is as follows: (f h g)(x) = f(x) h g(x) . This construction is familiar in mathematics where, for example, the sum of two functions f+g is a function whose value on an argument x is defined to be f(x)+g(x) . A standard example of a fork from the J tutorials illustrates the idea. Suppose that x is a list of numbers and we want y = a(x) to be the arithmetic mean. We write

sum =: +/

(read as 'plus over', see Appendix D for a dictionary of all the J primitive functions used in this article) and then the fork

a =: sum % #

(which reads as 'sum divided by number') is the desired arithmetic mean function.
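
For example, the mean of the list 3 5 7 9 is 6:

    a 3 5 7 9
 6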

The assignment statements used in the sentence above are assignments of names to functions (or programs or verbs as they are called in J). No use has been made of assignments of names to data (nouns in J). Moreover, the names assigned above are permanent. Let us contrast this with the traditional approach. If you write the code for arithmetic mean in a conventional language, you will write a sequence of commands with assignment statements of data to named memory cells (variables). The sequence of statements will have control structures that produce loops. Some variables will refer to memory cells whose contents change each time you go around a loop. Thus variables are transient things that are used in the program but have no permanent meaning. (In functional programming, an assignment of the form x =: x+1 makes no sense! In imperative programming such statements are commonplace!)

In J, a dyadic fork is also defined. This definition is more general than normal mathematical notation or the kind of compositions defined in other functional programming languages. The fork +*- is defined as follows: x(+*-)y means (x+y)*(x-y) . We will use the dyadic fork extensively below.
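
For example, 5(+*-)3 evaluates (5+3)*(5-3):

    5 (+*-) 3
 16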

A sequence of three verbs is a fork. A sequence of two verbs in J is called a hook. The first verb must be dyadic and the second monadic. For example, x(+%)y means x + 1/y and (+%)y means y + 1/y . Thus we can read the hook +% as 'plus reciprocal'.
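
For example:

    2 (+%) 4
 2.25
    (+%) 4
 4.25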

What is Operations Research?

Most problems in operations research involve optimizing an objective subject to constraints. Typical examples are maximizing profit subject to resource constraints or minimizing shipping costs subject to the constraints of a transportation system. One of the most common types of problems has a linear objective function and linear constraints and such problems define a subject called linear programming. (This is a quite different use of the word 'programming'!) Here is a very small example. (We use J-notation: <: means 'less than or equal' and >: means 'greater than or equal'.)

maximize z = 2x + 3y
subject to 4x + y <: 12 and 2x + 5y <: 15
with x >: 0 and y >: 0 .

The simplex method was developed by G. Dantzig in the 1950s and has been the standard method for solving linear programming problems since then. Only recently has the simplex method been challenged by other methods, and only on large problems with thousands of variables and constraints. To apply the simplex method, we convert the inequalities to equalities using slack variables: 4x + y + s = 12 and 2x + 5y + t = 15 with s >: 0 and t >: 0 . We write our problem as a system of linear equations:

z - 2x - 3y         = 0
    4x +  y + s     = 12
    2x + 5y     + t = 15

and study various feasible (i.e. non-negative) solutions of this system. Notice that the system is 'solved' for the variables z , s , and t in terms of x and y . We can set x = 0 and y = 0 and we get three equations in three unknowns with solution z = 0 , s = 12 , t = 15 . This solution is feasible because all variables are non negative; but this solution does not have a very good value of z . In the simplex method, we move from solution to solution in such a way that z increases with each move until no further increase is possible.

If we divide an equation by a nonzero constant or if we add a multiple of one equation to another equation, then we get a new system of equations that is equivalent to the old system. On the computer, we do these operations on the matrix of coefficients. We divide a row by a constant and add multiples of a row to other rows.

Below we show the original coefficient matrix and beside it we show the result of dividing the bottom row by 5 and then adding multiples of the new bottom row to the rows above to create zeros in the rest of the y-column.

z    x   y   s   t              z    x    y   s   t
1   _2  _3   0   0   0          1  _0.8   0   0  0.6   9
0    4   1   1   0  12  -->     0   3.6   0   1 _0.2   9
0    2   5   0   1  15          0   0.4   1   0  0.2   3

This represents a system of equations

z - 0.8x         + 0.6t = 9
    3.6x     + s - 0.2t = 9
    0.4x + y     + 0.2t = 3

which is solved for z , y , and s . If we set x = 0 and t = 0 we get the feasible solution z = 9 , s = 9 , and y = 3 . The row operations above are called a 'pivot' for geometric reasons (the feasible set has been pivoted around so that a new point of it is at the origin.) It turns out that if we do another pivot that converts the 3.6 to 1 and clears the rest of the x column then we get a new solution with x = 2.5 , y = 2 , z = 11 . Moreover, this turns out to be the optimal solution.

The simplex method can be divided into two parts: (1) finding the location of the pivot (including testing for optimality and unboundedness) and (2) the pivot operation itself.

Pivoting in J

Let M denote the original matrix of coefficients above. Let R denote the pivot row, R =: 0 2 5 0 1 15 , let P denote the pivot, P =: 5 and let C denote the pivot column, C =: _3 1 5 . Then the row operations making up the pivot can be summarized in one J expression:

M - (C-B)*/(R%P)

where B =: 0 0 1 . This formula says that the pivoted matrix is the original matrix M minus various multiples (the multiples are in the expression C-B) of the row R divided by the pivot P . The B term is necessary because the third row is treated differently from the other rows and without the B term, the pivot row would become a row of zeros. Given this formula, it is clear how to write J code in non-functional style for doing the pivot. (See Appendix C for the simplex method written in J using a traditional approach.) But we will show how to use a purely functional approach based on the use of dyadic forks.
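
Entering the matrix M together with the nouns R , P , C , and B given above, we can check the formula; the result is exactly the pivoted matrix shown earlier:

    M =: 3 6 $ 1 _2 _3 0 0 0  0 4 1 1 0 12  0 2 5 0 1 15
    M - (C-B)*/(R%P)
 1 _0.8 0 0  0.6 9
 0  3.6 0 1 _0.2 9
 0  0.4 1 0  0.2 3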

We will define a function pat ('pivot at') that would be used as follows to do the above pivot:

M pat 3 3

meaning 'M pivot at row 3 and column 3'. Note that we are indexing rows from 1 to 3 and columns from 1 to 6 (not 0 to 2 and 0 to 5 as is normally done in J). This means that our functions must convert back and forth between indexing with origin 1 and J indexing with origin 0.

According to Appendix D, the dyadic verb [ gives its left argument. Therefore the definition of pat is the dyadic fork

pat =: [ - ext

where ext is the dyadic function such that M ext 3 3 computes (C-B)*/(R%P) . If we now use mc and mr ('modified column' and 'modified row') for the dyadic functions computing (C-B) and (R%P) then the definition of ext is the fork

ext =: mc */ mr

and it remains to define mc and mr. Assume we have defined dyadic functions xr , xc , and xe ('extract row', 'extract column', 'extract element') that work as follows: M xr 2 3 extracts the second row of M (ignoring the column index), M xc 2 3 extracts the third column of M (ignoring the row index), and M xe 2 3 extracts the element in the second row and third column. Also suppose bv is a dyadic function such that M bv 2 3 is 0 1 0 (a basis vector with a 1 in the second position, ignoring the column index 3). Then mc and mr can be defined as forks

mc =: xc - bv
mr =: xr % xe

Now we define xr , xc and xe . Recall that we are referring to the first row as row 1 not as row 0. Each verb is a fork with central verb 'from' (which is { in J, see Appendix D). The right verb is [ which means 'left argument'. The left verb is a composition (using @) of several verbs. The monadic verb <: is 'decrement' and is used to change to J indexing. Refer to Appendix D for other verbs.

xr =: (<:@{.@]) {   [
xc =: (<:@{:@]) {"1 [
xe =: (<@:<:@]) {   [

Actually we could make shorter definitions using hook instead of fork and using commute ~ to swap the arguments of { .

xr =: {~   <:@{.
xc =: {~"1 <:@{:
xe =: {~   <@:<:

Finally, we define bv . We use a fork with central verb equals. The left verb takes the left argument, finds the number of items (rows) m , and then creates the index list 0 1 ... m-1 . The right verb takes the right argument, takes its first entry (which is the pivot row index) and decrements it to origin-0 indexing. Under equals, the result is a vector of 0's with a single 1 where the index equals the decremented pivot row index.

bv =: (i.@#@[) = (<:@{.@])
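
Each piece can be tested on the coefficient matrix M ; the results recover R , C , P , and B from the pivot formula:

    M xr 3 3
 0 2 5 0 1 15
    M xc 3 3
 _3 1 5
    M xe 3 3
 5
    M bv 3 3
 0 0 1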

This completes the definition of pat. It can be tested as shown in the examples below.

The Simplex Method in J

It remains to specify where to pivot and when to stop pivoting. The pivot column is selected as the column (excluding the last column) whose entry in the first row is the most negative. In the example above, starting with the solution x=0, y=0, z=0, s=12, t=15 one sees that z=2x+3y can be improved by increasing either x or y; but an increase in y is better. So the y column is selected as the pivot column. In the matrix, this choice corresponds to selecting the most negative element in the top row. If none of these entries is negative then no improvement in z is possible, so no further pivot is required and the optimal solution has been found. Once the pivot column has been selected, we need to know how large to make the variable (y in the example) that is being increased. Each constraint (that is, each row of the matrix below the first row) sets a limit on the size of the variable being increased. The limit can be computed as a quotient with numerator from the right border and denominator from the pivot column.

For example, the first constraint gives the limit y <: 12/1 and the second constraint gives the limit y <: 15/5. All such constraints must be satisfied. If the smallest is satisfied then all will be satisfied; hence the pivot row is selected to minimize the quotients. However, there is a catch: if the denominator is zero or negative then the constraint should be ignored.

In teaching the simplex method, one finds that students have no problem doing the simple calculations that find the pivot column and the pivot row as outlined in the previous paragraph. But the pivot operation involves tedious arithmetic. Hence it is convenient for students who are learning the simplex method to find the pivot manually and then use the function pat defined above to do the pivot.

It is, perhaps, surprising that although finding the pivot is a trivial part of the simplex method when done by hand, the computer code for finding the pivot is rather tricky!

First we need a function to find the location of the minimum element in a list. 'Least' is 'lesser over' and is therefore <./ . The hook below can be read as 'index of least':

ml =: i. <./
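
For example:

    ml 7 3 9 6
 1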

Now we can write the function pc that finds the pivot column. We take the first row, drop the last element, and find the minimum location. Then we increment this index so the first row has index 1 (not 0) as mentioned above. This can be accomplished as a composition (using @) of ml with primitive functions:

pc =: >: @ ml @ }: @ {.

However, this is not good enough! Recall that we also want to detect an optimal matrix so we can tell the simplex method when to stop. Therefore, we adopt a somewhat different approach. We take the first row, drop the last element, append a leading zero, and apply ml. If the row has a negative element then ml will find the location of the most negative element and, because of the leading zero, that index will already be incremented into the range 1 to n. If there is no negative element in the top row, then ml will find the appended leading zero at location 0 and pc will return a zero result. This can be used to detect optimality.

pc =: ml @ (0&,) @ }: @ {.
opt =: 0&= @ ]

To find the pivot row we need a new version of division. We want ordinary division, else, if the denominator is not positive, infinity. When such 'quotients' are searched by ml for the location of the minimum, those whose denominators are not positive will be ignored. In the code below, the function inf is a constant function with value infinity. The function dnp ('denominator not positive') is 1 if the denominator is not positive and 0 otherwise. The new division operation is dp which stands for 'divide positive'.

else =: `
if =: @.
inf =: _: @ [
dnp =: 0&>: @ ]"0
dp =: % else inf if dnp
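
For example, dividing the right border of the original matrix by the pivot column produces the 'quotients' discussed above, with infinity marking the constraint to be ignored:

    0 12 15 dp _3 1 5
 _ 12 3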

We can now find the pivot row with the dyadic function pr whose left argument is the matrix and whose right argument is the index of the pivot column.

pr =: >:@ml@((xc 0:) dp xc)

(The dirty trick in the above line is this: The hook (xc 0:) appears to extract column zero but actually extracts the LAST column to be used as the list of numerators! This is because xc includes a decrement function that reduces the 0 to _1, and in J the index _1 means last.)

Note that the function pr works on the entire columns including the first row. The first row is not a constraint and should be ignored. But the first row will always give an infinite 'quotient' under dp and so will be ignored by ml as desired.
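
We can watch this on the example matrix:

    M (xc 0:) 3    NB. The numerators: the last column.
 0 12 15
    M pr 3         NB. Pivot row for pivot column 3.
 3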

What happens if ALL 'quotients' are infinite? In that case there is no limit on the size of the variable being increased and the objective function is UNBOUNDED. This situation can be detected by the condition that pr selects the first row and produces the result 1. The function unb detects an unbounded problem.

unb =: 1&= @ {. @ ]

We create a function cpat ('conditional pivot at') which is 'pivot at else infinity if the problem is unbounded'.

    cpat =: pat else inf if unb

Now we create a function pin ('pivot in') such that M pin 2 means 'M pivot in column 2'. The pivot row is found automatically by pr.

    pin =: [ cpat pr,]

This train of five dyadic verbs is parsed by J as [ cpat (pr,]) which is a fork in which the third verb is itself a fork.
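
For example, M pin 3 finds the pivot row automatically and reproduces the pivot done by hand earlier:

    M pin 3
 1 _0.8 0 0  0.6 9
 0  3.6 0 1 _0.2 9
 0  0.4 1 0  0.2 3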

We can now define either a recursive or iterative version of the simplex method. The recursive version is 'simplex':

    simplex =: (simplex @ pin else [ if opt) pc

For the iterative version, we define 'conditional pivot in' and 'pivot'

    cpin =: pin else [ if opt
    pivot =: cpin pc

Finally, the 'smplx' function iterates the 'pivot' function 'to infinity'! This means the matrix is pivoted until the same matrix occurs twice in succession (and hence no further change will occur). The result will be either the optimal matrix or infinity if the problem is unbounded.

    smplx =: pivot^:_

NOTE: As one might expect, there are many ways to write the simplex method in J using a functional programming style. For a different approach, see Appendix B.

Examples

To solve the small problem posed above, we define the matrix

    M=:1 _2 _3 0 0 0 0 4 1 1 0 12 0 2 5 0 1 15
    M=:3 6$M
    M
 1 _2 _3 0 0  0
 0  4  1 1 0 12
 0  2  5 0 1 15

Then we apply either of the functions smplx or simplex

    smplx M
 1 0 0  0.222222  0.555556 11
 0 1 0  0.277778 _0.055556 2.5
 0 0 1 _0.111111  0.222222 2

and get the optimal matrix. As a system of equations, this is

   z      + 0.22s + 0.55t  = 11
   x      + 0.27s - 0.055t = 2.5
        y - 0.11s + 0.22t  = 2

and it tells us that the optimal solution is x = 2.5, y = 2, and z = 11. (And s = 0 and t = 0.)

One advantage of the functional approach is that we have developed a collection of useful functions along the way. These can be tested separately or used to show students the steps of the simplex method. For example

    pc M    NB. Finds pivot column.
 3

    M pr 3  NB. Pivot row in third column.
 3
    M1 =: M pat 3 3  NB. Pivots at 3, 3.
    M1
 1 _0.8 0 0  0.6 9
 0  3.6 0 1 _0.2 9
 0  0.4 1 0  0.2 3
    pc M1
 2
    M1 pr 2
 2
    M1 pat 2 2
 1 0 0  0.222222  0.555556 11
 0 1 0  0.277778 _0.055556 2.5
 0 0 1 _0.111111  0.222222 2

APPENDIX A: Functional Program Summary

    pat     =:   [ - ext
    ext     =:   mc */ mr
    mr      =:   xr % xe
    mc      =:   xc - bv
    xr      =:   {~   <:@{.
    xc      =:   {~"1 <:@{:
    xe      =:   {~   <@:<:
    bv      =:   (i.@#@[) = (<:@{.@])
    ml      =:   i. <./
    pc      =:   ml@(0&,)@}:@{.
    if      =:   @.
    else    =:   `
    inf     =:   _:@[
    dnp     =:   0&>:@]"0
    dp      =:   % else inf if dnp
    pr      =:   >:@ml@((xc 0:) dp xc)
    unb     =:   1&=@{.@]
    cpat    =:   pat else inf if unb
    pin     =:   [ cpat pr,]
    opt     =:   0&= @ ]
    simplex =:  (simplex@pin else [ if opt) pc
    cpin    =:   pin else [ if opt
    pivot   =:   cpin pc
    smplx   =:   pivot^:_

Note that it may be desirable to use J's fix adverb to 'compile' some of these functions into compositions of primitives. For example we can write

   pat =: pat f.

and then we find that instead of the original definition above, pat is now defined as

   [-(({~"1<:@{:)-(i.@#@[=<:@{.@]))*/(({~<:@{.)%({~<@:<:))

The new 'compiled' version runs a bit faster than the original. Moreover, we could now erase some of the functions that were used to define pat if we have no need for these functions and don't want them cluttering up the symbol table.

   erase =: (4!:55)@;:
   erase 'ext mr mc xr xe bv'
1 1 1 1 1 1

(The output of 1's shows that all six items were erased.)

APPENDIX B: A 'Higher Level' Functional Version

Here is a different approach to the simplex method. First we assume the pivot is in the first row and first column. That allows us to define a monadic function piv such that piv M will do the pivot operation on M. Moreover, we will give a new way of doing this pivot. We handle the first row first (by just dividing it by its first element) and then do the lower rows using a */ product.

 fr  =: {.%{.@{.
 lr  =: }.-(({."1@}.)*/{.)
 piv =: ({.,lr)@(fr,}.)

The dyadic function row is used as in M row 4 which takes the first 4 rows of M and reverses their order, thereby bringing the 4th row to the top. (Of course M row 4 applied again will restore M to its original form.)

 row =: ((|.@{.),}.)~

Similarly, col does the same for columns.

 col =: row"1
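
For example, with the matrix M of the Examples section:

    M row 2
 0  4  1 1 0 12
 1 _2 _3 0 0  0
 0  2  5 0 1 15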

Now we define a conjunction at so that M piv at row 4 will bring row 4 to the top, do the pivot and then put row 4 back where it belongs.

 at =: ((@:`].)`])(`:6)

(At the end of this appendix is an explanation of at.)

We define a function pr that finds the pivot row of a matrix assuming that the pivot column is the first column. For that, we use the functions if, else, dnp, dp, ml defined in the main article above.

pr =: >:@ml@:(({:"1)dp({."1))

We also use the same functions inf, unb, opt, and pc. Then we put it all together as follows:

pin  =: (piv at row else inf if unb pr) at col
simplex =: (simplex@pin else [ if opt) pc
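
Applied to the matrix M of the Examples section (together with inf , unb , opt , and pc from the main article), this version produces the same optimal matrix as before:

    simplex M
 1 0 0  0.222222  0.555556 11
 0 1 0  0.277778 _0.055556 2.5
 0 0 1 _0.111111  0.222222 2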

This code is more sophisticated than that of the main article because it uses a defined conjunction at which is a higher level function (its arguments are functions).

To describe the conjunction at, it is best to introduce the explicit method of defining functions. In J one may define functions in two ways: tacitly or using explicit variables as parameters. We have used the tacit method in this article. For example, consider the dyadic function x f y which computes 1+x+y. We could define this tacitly by f =: >:@+ or by f =: (1&+)@+ or by the fork f =: 1: + +. Or we could define it explicitly using J's locally constrained left and right parameters x. and y. (more recently denoted simply as x and y) by

 f =: 3 : '1+x.+y.'

(Here the '3' tells J to make a verb. For a conjunction, use '2', for an adverb use '1' and for a noun use '0'.) In this case, the function f will be stored by J in the explicit form given and when called, as in 5 f 6, the values 5 and 6 will be substituted for x. and y. respectively. It is also possible to ask J to convert an explicit form to a tacit form. To do this, we use '13' instead of '3'. Thus

 f =: 13 : '1+x.+y.'

causes J to produce the tacit verb f =: 1: + + .

In our case, we want to create a conjunction at that applies its right argument (some verb, in our case row or col) first, then applies its left argument (some other verb, in our case piv) and then applies its right argument again. (Note: This is similar to, but not exactly the same as, the J primitive conjunction &. .) Thus we can define at explicitly by writing

 at =: 2 : '(x.@:y.)y.]'

When given an instance like M piv at row 4 the conjunction at forms a fork (piv@:row)row] which combines with the arguments M and 4 to give (piv (M row 4)) row 4 which is just what we want.

If, instead of the above, we ask J to create a tacit version by writing

 at =: 13 : '(x.@:y.)y.]'

then J produces the tacit definition at =: ((@:`].)`])(`:6) given above. However, to program this tacit conjunction directly (or even to untangle its curious grammar!) one must go into much more detail on J than is appropriate in this short article.

APPENDIX C: J Program in Imperative Style

Here is the simplex method written in the traditional non-functional way - except that we define two functions at the start! This illustrates the fact that one may combine the functional programming style and the imperative programming style as one sees fit to create an efficient and readable program.

    all =: *./
    ml  =: i.<./

    simplex =: 3 : 0
 M =. y.
 obj =. }: {. M                    NB. Objective row without right border.
 if. all obj >: 0 do. M else.      NB. No negative entry: M is optimal.
    m =. #M
    pc =. ml obj                   NB. Pivot column: most negative entry.
    C =. pc{"1 M                   NB. The pivot column.
    den =. }.C                     NB. Denominators (objective row excluded).
    if. all den <: 0 do. _ else.   NB. No positive denominator: unbounded.
       pos =. den > 0              NB. Mask of usable constraint rows.
       num =. }.{:"1 M             NB. Numerators from the right border.
       quo =. (pos#num)%(pos#den)  NB. Quotients for usable rows only.
       r =. (ml quo) { pos#}.i.m   NB. Pivot row: minimizes the quotients.
       R =. r{M
       P =. r{C
       simplex M-(C-r=i.m)*/(R%P)  NB. Do the pivot and recurse.
    end.
 end.
 )
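
Applied to the example matrix M , this imperative version returns the same optimal matrix as the functional versions:

    simplex M
 1 0 0  0.222222  0.555556 11
 0 1 0  0.277778 _0.055556 2.5
 0 0 1 _0.111111  0.222222 2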

APPENDIX D: Primitive J Functions Used

Here are the J primitive functions that were used above. For arithmetic we used the four basic operations of addition, subtraction, multiplication, and division.

  2+3              2-3                2*3             2%3
5               _1                 6               0.66666666667

Note that the negative sign, which is part of a negative number, is the underbar. An underbar by itself is the 'number' positive infinity and negative infinity is __ . Also note that division is % not / .

   1+_              1%0                _1%0
 _                _                   __

Primitive verbs in J are denoted by single characters like + or by such a character followed by a period or colon like +. or +: . Most primitive function symbols in J have separate monadic and dyadic meanings. For example, the monadic meaning of % is reciprocal. The monadic meaning of <. is 'floor' and the dyadic meaning is 'lesser' (i.e. 'min').

    %2               <. 2.56             3 <. 2         2 <. 3
 0.5              2                    2              2

The obvious analogous statements are true about >. .

The monadic meaning of <: is 'decrement' and the dyadic meaning is 'less than or equals'.

    <: 2.56          2 <: 3              3 <: 2
 1.56             1                    0

There are monadic and dyadic constant functions and monadic and dyadic identity functions:

    0: 2             2 0: 3               ] 3            2] 3
 0                0                   3               3

(But note that 2[3 is 2 because [ is the identity function of its left argument, ignoring the right.) Constant functions 0: to 9: are defined; to make more general constant functions, the conjunction " may be used.

Thus 10"0 is a constant function that has value 10 on 0-cells. Applied to a matrix, it gives a matrix of 10's. Or 10"1 has value 10 on arguments of rank 1. Applied to a matrix it gives a 10 for each row of the matrix. And 10"_ applied to a matrix gives the number 10.
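
For example, with a two-by-two matrix:

    10"0 (2 2 $ 8 3 4 7)
 10 10
 10 10
    10"1 (2 2 $ 8 3 4 7)
 10 10
    10"_ (2 2 $ 8 3 4 7)
 10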

We used both the monadic and the dyadic versions of the adverb / . The monadic version applies its left argument (a dyadic verb) between the items of its right argument.

    2+3+4            +/ 2 3 4              */ 2 3 4    <./ 7 3 9 6
 9                 9                   24            3

The dyadic version gives a 'multiplication table' for the verb to the left.

    2 3 4 */ 7 8 9
 14 16 18
 21 24 27
 28 32 36

The & symbol is a conjunction. One common use of it is to combine a dyadic verb with an argument to make a monadic verb.

   f =: 7&+            g =: %&4            f 2 3 4       g 2 3 4
                                      9 10 11         0.5 0.75 1

Composition of functions is represented by @ .

    f g 2 3 4           (f@g) 2 3 4        7+(5+6)      5(f@+) 6
 7.5 7.75 8        7.5 7.75 8          18           18

For working with arrays (vectors, matrices and higher arrays) the following operations are available.

    0 2 _1 { 8 3 4 7    {. 8 3 4 7         {: 8 3 4 7
 8 4 7                8                  7

    }. 8 3 4 7          }: 8 3 4 7         2 {. 8 3 4 7
 3 4 7               8 3 4              8 3

    2 }. 8 3 4 7        i. 8               8 3 4 7 i. 4
 4 7                  0 1 2 3 4 5 6 7    2

    $8 3 4 7            #8 3 4 7           M =: 2 2$8 3 4 7
 4                    4

    M                   $M                 #M
 8 3                  2 2                2
 4 7

The last two functions show that M has the shape of a two by two matrix and that M has two items (each item is a row).

The conjunction " is used to specify how a verb will act on an array. A matrix is a 2-cell, each of its rows is a 1-cell, and its elements are 0-cells.

    9,M                 9(,"2)M             9(,"1)M        9(,"0)M
 9  9                 9  9                9  8  3        9 8
 8  3                 8  3                9  4  7        9 3
 4  7                 4  7
                                                         9 4
                                                         9 7

The power conjunction ^: is used to iterate a function.

    (f^:3) 3            7+7+7+3             (g^:3) 16      (g^:_) 16
 24                  24                  0.25           0

If an iteration gives the same result as the previous case (to within a small tolerance) then the iteration stops. Therefore limits of iterations can be programmed by simply using the infinite power.

Gerunds are arrays of verbs that can be handled like nouns and then used as verbs. For example

ger =: f`g

is a gerund with components f and g . On the one hand we can treat it like a noun by selecting its components or by shaping it into a matrix, etc. On the other hand we can evoke it as a verb in a variety of ways. One way is through the use of @. which is called 'agenda'. The expression f`g@.v x applies the verb v to the argument x to get a result v(x) which should be an integer, in this case either 0 or 1. The index v(x) is used to select f or g to act on x . Thus the final answer will be f(x) if v(x) is 0 or g(x) if v(x) is 1. In effect, we get the result 'f(x), else g(x) if v(x) is 1'. This observation lets us make the code h =: f`g@.v more readable by defining

else =: `
if   =: @.
h    =: f else g if v
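
For example, absolute value can be written as 'identity, else negate if negative':

    abs =: ] else - if (0&>)
    abs _3 0 5
 3 0 5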



By Richard L.W. Brown, York University. Substantially the same text previously appeared as Functional Programming in J for Operations Research in Vector, Volume 11, Number 4, April 1995.