
Chapter 22: Vectors and Matrices

In this chapter we look at built-in functions which support computation with vectors and matrices.

22.1 The Dot Product Conjunction

Recall the composition of verbs, from Chapter 08. A sum-of-products verb can be composed from sum and product with the @: conjunction.

   P =: 2 3 4
   Q =: 1 0 2
   P * Q
2 0 8
   +/ P * Q
10
   P (+/ @: *) Q
10

There is a conjunction . (dot, called "Dot Product"). It can be used instead of @: to compute the sum-of-products of two lists.

   P
2 3 4
   Q
1 0 2
   P (+/ @: *) Q
10
   P (+/ . *) Q
10


Evidently, the . conjunction is a form of composition, a variation of @: (or @). We will see below that it is more convenient for working with vectors and matrices.

22.2 Scalar Product of Vectors

Recall that P is a list of 3 numbers. If we interpret these numbers as coordinates of a point in 3-dimensional space, then P can be regarded as defining a vector, a line-segment with length and direction, from the origin at 0 0 0 to the point P. We can refer to the vector P.
With P and Q interpreted as vectors, the expression P (+/ . *) Q gives what is called the "scalar product" of P and Q. Other names for the same thing are "dot product", "inner product" or "matrix product", depending on context. In this chapter let us stick to the neutral term "dot product", for which we define a function dot:

   dot =: +/ . *
   P
2 3 4
   Q
1 0 2
   P dot Q
10

A textbook definition of scalar product of vectors P and Q may appear in the form:

          (magnitude P) * (magnitude Q) * (cos alpha)

where the magnitude (or length) of a vector is the square root of the sum of the squares of its components, and alpha is the smallest non-negative angle between P and Q. To show the equivalence of this form with P dot Q, we can define utility verbs ma for magnitude-of-a-vector and ca for cos-of-angle-between-vectors.

   ma  =: %: @: (+/ @: *:)                                          NB. magnitude: square root of sum of squares
   ca  =: 4 : '(-/ *: b,(ma x-y), c) % (2*(b=.ma x)*(c=. ma y))'    NB. cosine of angle, by the law of cosines

We expect the magnitude of the vector 3 4 to be 5, and expect the angle between P and itself to be zero, and thus its cosine to be 1.

   ma 3 4
5
   P ca P
1

Then we see that the verb dot is equivalent to the textbook form above:

   P
2 3 4
   Q
1 0 2
   P dot Q
10
   (ma P)*(ma Q)*(P ca Q)
10
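
As an extra check, with two arbitrarily chosen perpendicular vectors (not in the original examples), both the dot product and the cosine are zero:

   3 0 0 dot 0 5 0
0
   3 0 0 ca 0 5 0
0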

22.3 Matrix Product

The verb we have called dot computes the "matrix product" of vectors and matrices.

   M =: 3 4 ,: 2 3
   M
3 4
2 3
   V =: 3 5
   V dot M
19 27
   M dot V
29 21
   M dot M
17 24
12 17

To compute Z =: A dot B the last dimension of A must equal the first dimension of B.

   A =: 2 5 $ 1
   B =: 5 4 $ 2

   $ A
2 5
   $ B
5 4
   Z =: A dot B
   Z
10 10 10 10
10 10 10 10
   $ Z
2 4

The example shows that the last-and-first dimensions disappear from the result. If these two dimensions are not equal then an error is signalled.

   $ B
5 4
   $ A
2 5
   B dot A
error
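
If desired, the conformability condition can be tested before multiplying. Here conform is a hypothetical helper name, not one of the chapter's definitions:

   conform =: 4 : '({: $ x) = ({. $ y)'   NB. 1 if x dot y is allowed: last dimension of x equals first of y
   A conform B
1
   B conform A
0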

22.4 Generalizations

22.4.1 Various Verbs

The "Dot Product" conjunction forms the dot-product verb with (+/ . *). Other verbs can be formed on the pattern (u . v).
For example, consider a relationship between people: person i is a child of person j, represented by a square boolean matrix which is true (1) at row i, column j. Using the verbs +. (logical-or) and *. (logical-and) we can compute a grandchild relationship with the verb (+./ . *.).

   g   =: +. / . *.

Taking the "child" relationship to be the matrix C:

   C =: 4 4 $ 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0

Then the grandchild relationship is, so to speak, the child relationship squared.

   C
0 0 0 0
1 0 0 0
1 0 0 0
0 1 0 0
   G =: C g C
   G
0 0 0 0
0 0 0 0
0 0 0 0
1 0 0 0

We can see from C that person 3 is a child of person 1, and person 1 is a child of person 0. Hence, as we see in G, person 3 is a grandchild of person 0.
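
As an aside, for boolean matrices the same result can be reached with the ordinary matrix product: C dot C counts the child-of-a-child paths between two people, and testing the count for being nonzero recovers G. This small check reuses C, G and dot from above:

   G -: 0 < C dot C
1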

22.4.2 Symbolic Arithmetic

By 'symbolic arithmetic' is meant, for example, symbolically adding the strings 'a' and 'b' to get the string 'a+b'. Here is a small collection of utility functions to do some limited symbolic arithmetic on (boxed) strings.

   pa     =: ('('&,) @: (,&')')                              NB. enclose a string in parentheses
   cp     =: [ ` pa @. (+./ @: ('+-*' & e.))                 NB. parenthesize only if the string contains + - or *
   symbol =: (1 : (':';'< (cp > x), u, (cp > y)'))(" 0 0)    NB. adverb: join boxed strings with the symbol u, boxing the result
   
   splus  =: '+' symbol 
   sminus =: '-' symbol 
   sprod  =: '*' symbol 
   
   a =: <'a'
   b =: <'b'
   c =: <'c'

   a
+-+
|a|
+-+
   b
+-+
|b|
+-+
   c
+-+
|c|
+-+
   a splus b
+---+
|a+b|
+---+
   a sprod b splus c
+-------+
|a*(b+c)|
+-------+

As a variant of the symbolic product, we could elide the multiplication symbol to give an effect more like conventional notation:

   sprodc =: '' symbol 

   a sprod b
+---+
|a*b|
+---+
   a sprodc b
+--+
|ab|
+--+

As arguments to the "Dot Product" conjunction we could supply verbs to perform symbolic arithmetic. For the dot verb, which we recall is (+/ . *), a symbolic version is:

   sdot =: splus / . sprodc

To illustrate:

   S =: 3 2 $ < "0 'abcdef'
   T =: 2 3 $ < "0 'pqrstu'

   S
+-+-+
|a|b|
+-+-+
|c|d|
+-+-+
|e|f|
+-+-+
   T
+-+-+-+
|p|q|r|
+-+-+-+
|s|t|u|
+-+-+-+
   S sdot T
+-----+-----+-----+
|ap+bs|aq+bt|ar+bu|
+-----+-----+-----+
|cp+ds|cq+dt|cr+du|
+-----+-----+-----+
|ep+fs|eq+ft|er+fu|
+-----+-----+-----+

22.4.3 Matrix Product in More than 2 Dimensions

An example in 3 dimensions will be sufficiently general. Symbolically:

   A =: 1 2 3 $ <"0 'abcdef'
   B =: 3 2 2 $ <"0 'mnopqrstuvwx'

   A
+-+-+-+
|a|b|c|
+-+-+-+
|d|e|f|
+-+-+-+
   B
+-+-+
|m|n|
+-+-+
|o|p|
+-+-+

+-+-+
|q|r|
+-+-+
|s|t|
+-+-+

+-+-+
|u|v|
+-+-+
|w|x|
+-+-+
   Z =: A sdot B
   Z
+----------+----------+
|am+(bq+cu)|an+(br+cv)|
+----------+----------+
|ao+(bs+cw)|ap+(bt+cx)|
+----------+----------+

+----------+----------+
|dm+(eq+fu)|dn+(er+fv)|
+----------+----------+
|do+(es+fw)|dp+(et+fx)|
+----------+----------+

The last dimension of A must equal the first dimension of B. The shape of the result Z is the leading dimensions of A followed by the trailing dimensions of B.

   $ A
1 2 3
   $ B
3 2 2
   $ Z
1 2 2 2

The last and first dimensions of A and B have disappeared, because each dimensionless scalar in Z combines a "row" of A with a "column" of B. We see in the result Z that each row of A is combined separately with the whole of B.
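
We can confirm the shape rule numerically with arrays of the same shapes (A2 and B2 are arbitrary names introduced for this check):

   A2 =: 1 2 3 $ i. 6
   B2 =: 3 2 2 $ i. 12
   $ A2 dot B2
1 2 2 2
   ((}: $ A2) , (}. $ B2)) -: ($ A2 dot B2)
1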

22.4.4 Dot Compared With @:

Recall from Chapter 07 that a dyadic verb v has a left and right rank. Here are some utility functions to extract the ranks from a given verb.

   RANKS   =: 1 : 'u b. 0'          NB. ranks of verb u: monadic, left, right
   LRANK   =: 1 : '1 { (u RANKS)'   NB. left rank only

   * RANKS
0 0 0
   * LRANK
0

The general scheme defining dyadic verbs of the form (u . v) is:

             u . v    means   u @ (v " (1+L, _))   where L = (v LRANK)

or equivalently,

             u . v    means   (u @: v) " (1+L, _)

and hence

             +/ . *   means   (+/ @: *) " 1 _

and so we see the difference between . and @: . For simple vector arguments the two are the same, and in that case the dimensions of the arguments must agree; but this is not the condition we require for matrix multiplication in general, where (as in the example above) each row of A is combined with the whole of B.
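
As a check of this scheme, re-entering the A and B of Section 22.3, the two forms give matching results for matrices:

   A =: 2 5 $ 1
   B =: 5 4 $ 2
   (A dot B) -: A ((+/ @: *)"1 _) B
1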

22.5 Determinant

The monadic verb (- / . *) computes the determinant of a matrix.

   det =: - / . *

   M
3 4
2 3
   det M
1
   (3*3)-(2*4)
1

Symbolically:

   sdet =: sminus / . sprodc

   S
+-+-+
|a|b|
+-+-+
|c|d|
+-+-+
|e|f|
+-+-+
   sdet S
+----------------------------+
|(a(d-f))-((c(b-f))-(e(b-d)))|
+----------------------------+

22.5.1 Singular Matrices

A matrix is said to be singular if the rows (or columns) are not linearly independent; for a 2-by-2 matrix this means that one row (or column) can be obtained from the other by multiplying by a constant. A singular matrix has a zero determinant.
In the following example A is a (symbolic) singular matrix, with m the constant multiplier.

   A =: 2 2 $ 'a';'b';'ma';'mb'
   A
+--+--+
|a |b |
+--+--+
|ma|mb|
+--+--+
   sdet A
+-------+
|amb-mab|
+-------+

We see that the resulting term (amb-mab) must be zero for all a, b and m.
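
A quick numeric check, with an arbitrarily chosen matrix whose second row is 3 times the first, confirms that the determinant of a singular matrix is zero:

   det 2 2 $ 1 2 3 6
0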

22.6 Matrix Divide

22.6.1 Simultaneous Equations

The built-in verb %. (percent dot) is called "Matrix Divide". It can be used to find solutions to systems of simultaneous linear equations. For example, consider the equations written conventionally as:

              3x + 4y = 11
              2x + 3y =  8

Rewriting as a matrix equation, we have, informally,

              M dot U = R

where M is the matrix of coefficients, U is the vector of unknowns x,y and R is the vector of right-hand-side values:

   M =: 3 4 ,: 2 3
   M
3 4
2 3
   R =: 11 8

The vector of unknowns U (that is, x,y) can be found by dividing R by matrix M.

   M
3 4
2 3
   R
11 8
   U =: R %. M
   U
1 2
   M dot U
11 8

We see that M dot U equals R, that is, U solves the equations.

22.6.2 Complex, Rational and Vector Variables

The equations to be solved may be in complex variables. For example:

   M
3 4
2 3
   R =: 15j22 11j16
   U =: R %. M
   U
1j2 3j4
   M dot U
15j22 11j16

or in rationals. In this case both M and R must be rationals to give a rational result.

   M =: 2 2 $ 3x 4x 2x 3x
   R =: 15r22 11r16

   M
3 4
2 3
   R
15r22 11r16
   U =: R %. M
   U
_31r44 123r176
   M dot U
15r22 11r16

In the previous examples the unknowns in U were scalars. Now suppose the unknowns are vectors and the equations to be solved are:

              3x + 4y = 15 22
              2x + 3y = 11 16

so we write:

   M =: 2 2 $ 3 4 2 3
   R =: 2 2 $ 15 22 11 16

   M
3 4
2 3
   R
15 22
11 16
   U =: R %. M
   U
1 2
3 4
   M dot U
15 22
11 16

The unknowns x and y are the rows of U, that is, vectors.

22.6.3 Curve Fitting

Suppose we aim to plot the best straight line fitting a set of data points. If the data points are x,y pairs given as:

   x =: 10 20 30
   y =: 31 49 70

we aim to find a and b for the equation:

                  y = a + bx 

The 3 data points give us 3 equations in the 2 unknowns a and b. Conventionally:

         1·a + 10·b = 31
         1·a + 20·b = 49
         1·a + 30·b = 70

so we take the matrix of coefficients M to be

   M =: 3 2 $ 1 10  1 20  1 30

and divide y by matrix M to get the vector of unknowns U (that is, a,b):

   M
1 10
1 20
1 30
   y
31 49 70
   U =: y %. M
   U
11 1.95
   M dot U
30.5 50 69.5

Here we have more equations than unknowns (more rows than columns in M), and so the solution U is the best fit to all the equations together. We see that M dot U is close to, but not exactly equal to, y.
"Best fit" means that the sum of the squares of the errors is minimized, where the errors are given by y - M dot U. If the sum of squares is minimized, then we expect that perturbing U slightly will increase the sum of squares.

   +/ *: y - M dot U
1.5
   +/ *: y - M dot (U + 0.01)
1.6523
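
For reference, the individual errors behind the figure of 1.5 are:

   y - M dot U
0.5 _1 0.5
   +/ *: 0.5 _1 0.5
1.5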

The method extends straightforwardly to fitting a polynomial to a set of data points. Suppose we aim to fit


          y = a + bx + cx^2

to the data points:

   x =: 0   1  2  3 
   y =: 1   6 17 34.1

The four equations to be solved are:


      1·a + b·x0 + c·x0^2 = y0
      1·a + b·x1 + c·x1^2 = y1
      1·a + b·x2 + c·x2^2 = y2
      1·a + b·x3 + c·x3^2 = y3

and so the columns of matrix M are 1, x and x^2, conveniently given by x ^/ 0 1 2:

   M =: x ^/ 0 1 2

and the unknowns a, b, c are given by vector U as follows:

   M
1 0 0
1 1 1
1 2 4
1 3 9
   y
1 6 17 34.1
   U =: y %. M
   U
1.005 1.955 3.025
   M dot U
1.005 5.985 17.02 34.09

There may be more equations than unknowns, as this example shows, but evidently there cannot be fewer. That is, in R %. M matrix M must have no more columns than rows.

22.6.4 Dividing by Higher-Rank Arrays

Here is an example of U =: R %. M, in which the divisor M is of rank 3.

   M =: 3 2 2 $ 3 4 2 3 0 3 1 2 3 1 2 3
   R =: 21 42

   M
3 4
2 3

0 3
1 2

3 1
2 3
   R
21 42
   U =: R %. M
   U
_105 84
  28  7
   3 12
   M dot U
error
   M dot"2 1 U
21 42
21 42
21 42

The dyadic rank of %. is _ 2,

   %. b. 0
2 _ 2

and so in this example the whole of R is combined separately with each of the 3 matrices in M. That is, we have 3 separate sets of equations, each with the same right-hand side R. Thus we have 3 separate solutions (the rows of U).
The condition R = M dot U evidently does not hold (because the last dimension of M is not equal to the first dimension of U), but it does hold separately for each matrix in M with the corresponding row of U.

22.7 Identity Matrix

A (non-singular) square matrix M divided by itself yields an "identity matrix", I say, such that (M dot I) = M.

   M =: 3 3 $ 3 4 7 0 0 4 6 0 3

   M
3 4 7
0 0 4
6 0 3
   I =: M %. M
   I
1 0 0
0 1 0
0 0 1
   M dot I
3 4 7
0 0 4
6 0 3

22.8 Matrix Inverse

The monadic verb %. computes the inverse of a matrix. That is, %. M is equivalent to I %. M for a suitable identity matrix I:

   M
3 4 7
0 0 4
6 0 3
   I =: M %. M
   I
1 0 0
0 1 0
0 0 1
   I %. M
   0  _0.125 0.1667
0.25 _0.3438 _0.125
   0    0.25      0
   %. M
   0  _0.125 0.1667
0.25 _0.3438 _0.125
   0    0.25      0
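
As a further check, multiplying M by its inverse should recover the identity matrix I. Match (-:) is tolerant of the small floating-point errors involved, so we expect:

   I -: M dot %. M
1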

For a vector V, the inverse W has the reciprocal magnitude and the same direction. Thus the product of the magnitudes is 1, and the cosine of the angle between them is 1.

   V
3 5
   W =: %. V
   W
0.08824 0.1471
   (ma V) * (ma W)
1
   V ca W
1
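
It follows that the dot product of a vector with its inverse is 1:

   V dot W
1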

This is the end of Chapter 22.






The examples in this chapter were executed using J version 802 beta.

This chapter last updated 19 May 2015

Copyright © Roger Stokes 2014.

This material may be freely reproduced, provided that acknowledgement is made.

