As an exercise, try to write mergesort using array-oriented programming, with the best complexity possible. The divide-and-conquer recursion can hardly be avoided, so the exercise mainly focuses on the merge step: given two sorted arrays, find a way to merge them so that the result is also sorted, without loops or recursion.
Solution using binary search
Given two sorted arrays, x and y, the idea is to find where each element of y will land in the final merge. Such an index is exactly the number of elements that will come before that element. Counting how many of those come from y itself is easy: it is exactly the element's index within y. The primitive I. answers the same question for x: it performs a binary search for the index at which the element would have to be inserted into x to preserve the ordering, which is exactly the number of elements of x that will come before it.
This is done by the o verb.
o =. (+ i.@#)@:I.
   'adfg' o 'ce'
1 3

This means that in the merged array ('acdefg'), the two elements of y will land at positions 1 and 3.
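The same computation can be sketched in Python, where the standard library's bisect_right plays the role of J's I. (the function name landing_positions and the example arrays are mine, for illustration):

```python
from bisect import bisect_right

def landing_positions(x, y):
    """For each element of sorted y, its index in the merge of x and y:
    (elements of x that come before it) + (elements of y that come before it)."""
    return [bisect_right(x, v) + i for i, v in enumerate(y)]

print(landing_positions(list('adfg'), list('ce')))  # → [1, 3]
```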
To now retrieve the indexes of the other array, one might be tempted to use o~, as the original author of this idea did, but this does not work if the two arrays are not disjoint (some indexes would be used twice). The solution comes from the observation that, in the end, this whole operation builds a permutation of i. x +&# y, and that this permutation is composed of two increasing sequences of indexes (because the two arrays are already sorted).
So, the indexes of x can be generated by taking the complement:

   (i. 'adfg' +&# 'ce') -. 'adfg' o 'ce'
0 2 4 5
And so, finally, by packing everything up, we have

oo =. o ,~ i.@(+&#) -. o   NB. generates all the indexes
merge =. , /: oo           NB. sort the concatenation by destination index

   'adfg' merge 'ce'
acdefg
However, this solution runs in O(n log n) just for the merge, so the whole sort runs in O(n log^2 n) (by the Master theorem applied to T(n) = 2T(n/2) + O(n log n)), which is clearly suboptimal. The worst part is that, in practice, it's about as fast as simply concatenating the two arrays and then sorting the result.
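For reference, the whole algorithm can be translated into Python; unlike the J version it uses explicit loops, but the structure, binary-search positions, complement, then scatter, is the same (all names here are mine, a sketch rather than the author's code):

```python
from bisect import bisect_right

def merge(x, y):
    """Merge two sorted lists via destination indexes: O(n log n)."""
    pos_y = [bisect_right(x, v) + i for i, v in enumerate(y)]
    taken = set(pos_y)
    pos_x = [i for i in range(len(x) + len(y)) if i not in taken]
    out = [None] * (len(x) + len(y))
    for p, v in zip(pos_x + pos_y, x + y):  # scatter each element to its slot
        out[p] = v
    return out

def mergesort(a):
    """O(n log^2 n) overall, from T(n) = 2T(n/2) + O(n log n)."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))

print(''.join(mergesort(list('gdacfe'))))  # → acdefg
```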