An isolated sequence of verbs f g h is defined as a fork:
     (f g h) y  ↔  (f y) g (h y)
   x (f g h) y  ↔  (x f y) g (x h y)
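As a cross-language sketch (not J itself), the fork rules can be modeled with a higher-order function; `fork`, `mean`, and the lambda names below are my own invention for illustration:

```python
def fork(f, g, h):
    """Model of J's fork (f g h): monad (f y) g (h y), dyad (x f y) g (x h y)."""
    def monad(y):
        return g(f(y), h(y))        # (f g h) y  <->  (f y) g (h y)
    def dyad(x, y):
        return g(f(x, y), h(x, y))  # x (f g h) y  <->  (x f y) g (x h y)
    return monad, dyad

# The classic example: mean as the fork (+/ % #), sum divided by tally.
mean, _ = fork(sum, lambda a, b: a / b, len)
# mean([1, 2, 3, 4]) -> 2.5
```

The dyadic case works the same way: both tines receive both arguments, and `g` combines the two results.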
However, if f is the verb [: (cap) the definition is:
     ([: g h) y  ↔  g (h y)
   x ([: g h) y  ↔  g (x h y)
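In the same sketch style (again Python standing in for J, with invented names), the capped fork drops the left tine and applies g to the result of h alone:

```python
def capped_fork(g, h):
    """Model of J's capped fork ([: g h): g applied to h's result alone."""
    def monad(y):
        return g(h(y))      # ([: g h) y  <->  g (h y)
    def dyad(x, y):
        return g(h(x, y))   # x ([: g h) y  <->  g (x h y)
    return monad, dyad

# Example: ([: - +/) negates a sum.
neg_sum, _ = capped_fork(lambda a: -a, sum)
# neg_sum([1, 2, 3]) -> -6
```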
The purpose of the capped fork is to implement "at" (g@:h) as a fork, because the ordinary fork together with "at" gives expressive completeness: every explicit sentence with one or two arguments that does not use the argument(s) as an operator argument can be expressed tacitly with fork and at. When [: g h is interpreted as g@:h , it means that "everything" can be expressed as a fork (ordinary or capped).
How is the magic of capped fork accomplished? When the fork parser action is invoked for f g h , it "knows" what f , g , and h are, and if f is [: , it produces something different from what it produces otherwise. As a thought experiment, one could imagine using the leading verb + to denote the capped fork. But that would rule out the use of + g h as an ordinary fork. Alternatively one could use the verb C.A.p. :-) to denote the capped fork. That too would rule out (C.A.p.) g h as an ordinary fork, but C.A.p. is much less useful than + , eh? Likewise the capped fork rules out the use of [: g h as an ordinary fork, but since the monadic and dyadic domains of the verb [: are empty ([:y and x[:y signal domain error for all x and y), not much is lost.
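The parser action's choice can be sketched as follows; this is a Python caricature (monadic case only), with `CAP` and `parse_fork` as hypothetical names, not the actual interpreter code:

```python
CAP = object()  # sentinel standing in for J's [:

def parse_fork(f, g, h):
    """Caricature of the fork parser action: inspect the left tine.

    If f is the cap, build the composition g@:h; otherwise build
    an ordinary fork (f y) g (h y).  (Monadic case only.)
    """
    if f is CAP:
        return lambda y: g(h(y))        # capped fork: g@:h
    return lambda y: g(f(y), h(y))      # ordinary fork

# (CAP inc double) 3  ->  inc (double 3)  ->  7
capped = parse_fork(CAP, lambda y: y + 1, lambda y: 2 * y)
# capped(3) -> 7
```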
The same place where this capped fork magic is effected is also where the magic for > i. 1: and the like happens. Consider the benchmark:
   ts=: 6!:2 , 7!:2@]  NB. time and space (seconds and bytes)
   x=: 1e6 ?@$ 0
   ts 'x (> i. 1:) 0.5'
2.48635e_5 1216
   ts '(x>0.5) i. (x 1: 0.5)'
0.143149 5.24403e6
It is evident from the benchmark that there must be some magic (special code) going on in > i. 1: , because the ordinary evaluation of the fork x (> i. 1:) 0.5 is (x>0.5) i. (x 1: 0.5) , which builds the entire boolean vector x>0.5 and then searches it for 1 .
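What the special code can do instead is stop at the first hit, never materializing the boolean vector. A minimal Python sketch of that idea (the function name is mine, and this is an analogy, not the interpreter's actual implementation):

```python
def first_greater_index(xs, threshold):
    """Analogue of x (> i. 1:) t with special code: scan and stop
    at the first element exceeding t, rather than computing the
    full boolean vector x > t and then searching it for 1."""
    for i, v in enumerate(xs):
        if v > threshold:
            return i
    return len(xs)  # J's i. convention: not found yields the tally

# first_greater_index([0.1, 0.3, 0.9, 0.2], 0.5) -> 2
```

On average this touches only a fraction of x and allocates nothing, which is consistent with the tiny time and space figures in the benchmark above.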
Contributed by Roger Hui.