Essays/IntRoot


Note: J 6.01 calculates roots correctly. This article was written before J 6.01 was released.

Warning: This page is of historic interest and while we hope it is still useful, it may also include deprecated or obsolete content.


On integer and floating point operands, the integer part of the nth root of x can be calculated as n <.@%: x. This expression does not work correctly on extended integers if n is 3 or more:

   4<.@%:391^4x
43922645           NB. Only in J before 6.01
   4%:391^4x
391

Ancient history

Heron of Alexandria (the same man who proved the formula for the area of a triangle in terms of its sides) described the following method for computing a square root.

To find the square root of a, let x_0 be some initial approximation (it does not really matter what you choose here: it can be 1, it can be a, or anything in between), then let x_{k+1} = (x_k + a/x_k)/2 and repeat until x_k reaches the desired precision.

Intuitively it is a very simple and obvious algorithm. When x = √a, then x = a/x. If, say, x < √a, where x is some approximation of the root, then a/x > √a, and √a lies between x and a/x. So, to get a better approximation, we just take the average of the two.
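As a quick illustration (this one-liner and the name heron are ours, not part of the original essay), Heron's iteration fits in one line of J, repeating x ← (x + a/x)/2 until it reaches a floating-point fixed point:

heron=: 3 : '-:@(+ y&%)^:_ ] 1'  NB. halve (x + y%x), repeat until it stops changing

   heron 10
3.16228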

Much literature is devoted to the analysis of this method, and it suffices to say that it is widely praised as good (and it is).

Textbook approach: Newton-Raphson method

To find the nth degree root of a is equivalent to solving the equation f(x) = x^n − a = 0. The programmer's method of choice for solving this equation seems to be the Newton-Raphson method (probably because it is described within the first few pages of a typical textbook on numerical methods). Starting with some reasonably good approximation x_0, this method iterates x_{k+1} = x_k − f(x_k)/f′(x_k) until the required precision is reached. In the case of the nth degree root it becomes x_{k+1} = ((n−1) x_k + a/x_k^{n−1})/n. Also, a typical textbook usually does not fail to mention that

a) this method only works well if the initial approximation is so close to the root that the function is almost linear in its vicinity

and

b) if the function is convex, that is f″(x) > 0, then x_0 (and all subsequent approximations) must stay above the root, otherwise the method might not work at all.

It is easy to satisfy b) by picking x_0 = a; however, this contradicts a), since the function in question (especially for bigger n) is very much not linear. A better initial approximation is x_0 = 2^⌈⌊log2 a⌋/n⌉ (the logarithm here is just the number of bits of a and can be calculated easily).
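To see the bare iteration in action before switching to integer arithmetic, here is a minimal floating-point sketch (the verb nrroot, its starting guess, and its exit test are our own illustration, not from the original essay):

NB. floating-point Newton-Raphson for the x-th root of y (illustrative sketch)
nrroot=: 4 : 0
  n1=. _1+x
  t=. y                 NB. start above the root, as b) requires
  whilst. told > t do.  NB. Newton from above decreases; stop when it no longer does
    told=. t
    t=. ((n1*t) + y % t^n1) % x
  end.
  t
)

   2 nrroot 2
1.41421

The integer version below uses the same iteration with floor division throughout: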

introot=: 4 : 0
  NB. x introot y: integer part of the x-th root of y
  a=. y                   NB. not so good initial approximation, but will work, albeit slowly
  a=. 2^x >.@%~ 2 <.@^. y NB. better initial approximation (overrides the line above)
  n1=. _1+x
  while.
    t=. y <.@% a^n1       NB. t = floor of y % a^(n-1)
    t<a                   NB. t<a means a^x still exceeds y: keep iterating
  do.
    a=. x <.@%~ (n1 * a) + t  NB. a = floor of ((n-1)*a + t) % x
  end.
  a
)
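A quick check against the motivating example (our session, not from the original):

   4 introot 391^4x
391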

This method seems to work. For smaller degrees it even works reasonably fast, but as n grows and the function becomes less and less linear, it slows down. The speed also depends on how close the initial approximation is to the actual root.

  • Assuming J implements Newton-Raphson internally, why does it not work? My guess would be either a bad choice of initial approximation, or not enough care taken to keep iterations above the root, or a wrong exit condition, or some combination of the above. -- Andrew Nikitin 2005-10-22

Dichotomy

The method that cannot fail is dichotomy: guess the bits of the root one at a time. Start with the highest power of 2 whose nth power is still below a, and keep adding successively smaller powers of 2, keeping each addition only if it does not bring the nth power above a.

introot2=: 4 : 0&x:
  NB. x introot2 y: integer part of the x-th root of y by dichotomy
  NB. (&x: converts both arguments to extended precision first)
  r=. p2=. 2x^n=. x<.@%~2x<.@^.y  NB. start at 2^(bit count of y divided by x)
  for. i.n do.
    p2=. p2 <.@% 2                NB. next smaller power of 2
    if. y>:x^~r2=. r+p2 do.       NB. keep the bit only if (r+p2)^x does not exceed y
      r=. r2
    end.
  end.
  r
)

NB. the same dichotomy as a fold over the powers of 2
introot3=: 4 : '(] [`]@.(y&>:@^&x@]) +)/ 1x,|.*/\.2x#~x<.@%~2x<.@^.y'
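Both agree with the Newton-Raphson version on the motivating example (our session):

   4 introot2 391^4x
391
   4 introot3 391^4x
391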

p2 is always a power of 2 (it has just one non-zero bit) and there are no divisions (halving a power of 2 does not count), so the only time-consuming operation is raising the candidate root to the power n.

As n grows, the algorithm becomes faster just because there are fewer bits to guess.

Timing

Timings show that for small n and big a Newton-Raphson is better, and for bigger n dichotomy becomes better.

  • Probably it would be possible to devise an algorithm that uses dichotomy to close in and then Newton-Raphson to finish the deal, one that would work equally fast for big and small n, but I could not figure out how to do it without extra calculations.

Applications

Just about the only place where higher degree roots of extended integers are needed is in certain primality testing and factorization algorithms that require that the number not be a whole power. The way to check whether a number is a whole power is to try to extract all roots of prime degree up to log2(number); composite degrees are covered by their prime factors.

ispower=: +./@(= i.@>:&.(p:^:_1)@(2&(>.@^.)) (introot3"0 ^ [) ])
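For example (our session; 64 = 8^2 = 4^3, while 63 is not a whole power):

   ispower 64x
1
   ispower 63x
0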

Appendix: "long division" integer square root

A direct implementation of the paper-and-pencil "long division" style algorithm for square root extraction, after adapting it from decimal to binary, yields:

NB. Long division for square root
intsqrt=: 3 : 0
  a=. 2x^2*2<.@%~ 2 <.@^. y  NB. largest power of 4 not exceeding y (2x keeps it exact for extended y)
  tb=. a <.@% 4              NB. trial bit, two positions below a
  rm=. y-a                   NB. remainder so far
  while. tb>0 do.
    if. rm>:a+tb do.         NB. next bit of the root fits
      rm=. rm-a+tb
      a=. tb+a <.@% 2
    else.                    NB. next bit of the root is 0
      a=. a <.@% 2
    end.
    tb=. tb <.@% 4
  end.
  a
)
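For instance (our session; 152881 = 391^2):

   intsqrt 152881x
391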

This algorithm can be implemented very efficiently.

All "divisions" here are shifts by one bit. tb is power of 2, so adding tb and shifting tb can be performed in constant time. Total number of operations is less than what is needed to perform a single division.

Contributed by Andrew Nikitin