In J, you add a list of numbers with +/ y.
For floating-point values, especially where the intermediate totals are large compared to the individual values, naive summation can produce inaccurate results. The classic example of loss of precision is (1e10+1e_10+_1e10) when added left-to-right. The first addition, (1e10+1e_10), yields (1e10) - the (1e_10) is completely lost. The final addition then gives (0), when the mathematically correct sum is (1e_10).
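The same behavior can be reproduced in any language that uses IEEE-754 doubles; a quick Python check of the two additions:

```python
# The two additions, performed left to right on IEEE-754 doubles:
step1 = 1e10 + 1e-10    # 1e-10 is far below 1e10's last bit: exactly 1e10
step2 = step1 + -1e10   # the large values cancel, leaving 0.0

print(step1 == 1e10)    # True: the 1e-10 was absorbed completely
print(step2)            # 0.0, not the correct 1e-10
```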
If you are beset with roundoff on +/ y, you might want to try +/!.0 y. This uses compensated summation, which gives a more accurate result. Compensated summation works very well when the loss of accuracy comes from intermediate sums swamping the size of the individual inputs. It will not altogether solve the problem of cancellation between large values of opposite sign, though it does give somewhat better results in that case.
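Kahan's algorithm is the classic form of compensated summation: carry a second double that recaptures the low-order bits each addition discards. The sketch below illustrates the idea, not necessarily the exact algorithm +/!.0 uses; kahan_sum is an illustrative name, not a J or library function:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carry a correction term that
    recaptures the low-order bits each addition would discard."""
    total = 0.0
    comp = 0.0                   # running compensation for lost low bits
    for v in values:
        y = v - comp             # re-inject the bits lost last time around
        t = total + y            # big + small: low bits of y can be lost...
        comp = (t - total) - y   # ...this recovers that loss exactly
        total = t
    return total

# Ten million copies of a value whose low digits naive summation loses:
data = [10000000000.1] * 10**7
print(f"naive: {sum(data):.1f}")
print(f"kahan: {kahan_sum(data):.1f}")
```

In production Python, math.fsum computes an exactly rounded sum and is the practical choice when this matters.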
+/!.0 y operates on the assumption that the values in y are the results of computation, and thus have precision limited to 53 bits (as IEEE-754 doubles). On this interpretation, the value 1e10 in the example is taken to mean (1e10+ε), where ε is a value on the order of ±1e_6 (half a unit in the last place of 1e10). Given this limited precision, it is not worthwhile to use very-high-precision methods, and so compensated summation does not stretch to preserve the 1e_10 in the example sum, since it lies below the intrinsic uncertainty.
Compensated summation limits the loss of precision to ε; ordinary summation does not:
   0j20 ": +/ 1e7 # 10000000000.1
100000000000211936.00000000000000000000
That can't be right! What happened?
After adding around 100,000 values, the running total grew too large for each new summand to be represented in full: the low bits, starting with the .1, were rounded away. Eventually the low-order integer digit was lost as well, as you can see by summing a hundred million terms instead:
   0j20 ": +/ 1e8 # 10000000000.1
1000000000000211968.00000000000000000000
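The mechanism is visible directly in the spacing of representable doubles. Python's math.ulp (available in 3.9+) reports the gap between consecutive doubles near each running total:

```python
import math

# Gap between consecutive representable doubles (1 ulp) near each total:
print(math.ulp(1e17))  # near the 1e7-term total: 16.0
print(math.ulp(1e18))  # near the 1e8-term total: 128.0
```

Once the gap exceeds the .1 in each summand, that fraction can only enter the total through rounding, and by a gap of 128 whole units are being rounded away too.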
Compensated summation prevents the loss of significance in the summands:
   0j20 ": (+/!.0) 1e7 # 10000000000.1
100000000001000000.00000000000000000000
On floating-point inputs, the form x +/@:*"1!.0 y calculates the dot-product at double the resolution of ordinary floats. This is useful under a different interpretation of floating-point error from that used in compensated summation.
In the view taken by x +/@:*"1!.0 y, at least one of x and y is considered to be infinitely precise. The value might actually be exact, or it might be an unchanging input to a repetitive calculation; either way, the error in that input is taken to be 0. Then the product of x and y carries a full 106 bits of precision (the exact product of two 53-bit significands), and x +/@:*"1!.0 y accumulates at that full precision so as to preserve an accurate result even in the presence of massive cancellation between signed inputs.
   1e6 1e_6 0 1e6 (+/@:*"1) 1e6 1e_6 0 _1e6
0
   1e6 1e_6 0 1e6 (+/@:*"1!.0) 1e6 1e_6 0 _1e6
1e_12
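A doubled-precision dot product can be sketched in Python using the standard error-free transformations (Knuth's two_sum, Dekker/Veltkamp splitting for two_prod) and the Dot2 accumulation scheme of Ogita, Rump, and Oishi. This illustrates the technique of accumulating at twice working precision, not J's actual implementation:

```python
def two_sum(a, b):
    """Return (s, e): s = fl(a+b) and e = the exact rounding error."""
    s = a + b
    bb = s - a
    return s, (a - (s - bb)) + (b - bb)

def two_prod(a, b):
    """Return (p, e): p = fl(a*b) and e = the exact rounding error,
    via Veltkamp splitting (no fused multiply-add required)."""
    p = a * b
    c = 134217729.0 * a          # 2**27 + 1 splits a into high/low halves
    ah = c - (c - a); al = a - ah
    c = 134217729.0 * b
    bh = c - (c - b); bl = b - bh
    return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

def dot2(xs, ys):
    """Dot product accumulated in doubled (double-double) precision."""
    s = 0.0; c = 0.0
    for a, b in zip(xs, ys):
        p, pe = two_prod(a, b)
        s, se = two_sum(s, p)
        c += se + pe             # gather all the low-order pieces
    return s + c

x = [1e6, 1e-6, 0.0, 1e6]
y = [1e6, 1e-6, 0.0, -1e6]
print(sum(a * b for a, b in zip(x, y)))  # naive: 0.0, the 1e-12 term is lost
print(dot2(x, y))                        # ~1e-12, surviving the cancellation
```

The 1e-12 survives because the products and their exact errors are carried separately, so the cancellation of the two 1e12 terms leaves the small pieces intact.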