# User:Devon McCormick/DLA/SeekingRelease

## Introduction

We explore how the "release policy" - the rule for choosing where a new particle begins its random walk - shapes the resulting form of a diffusion-limited aggregation (DLA).

## Basic Code

We start with this basic code, all of which can be found here. First, load some sub-routines, define our own namespace dla, and create an initialization routine which defines a few globals:

```
load 'coutil'
cocurrent 'dla'

init=: 3 : 0
D=: y            NB. How many dimensions?
RNI=: D dimnbr 1 NB. Relative neighbor indexes: 1-away in D-space
PTS=: ,:D$0      NB. Start the cluster as a single point at the origin.
NB.EG  init 2       NB. Initialize 2-dimensional space.
)
```

Here's the main routine, which aggregates points to the global PTS:

```
NB.* aggStruc: randomly walk a point y times to aggregate it to cluster:
NB. (>0{x) away from random point in >1{x.
aggStruc=: 4 : 0"(_ 0)
point=. (>0{x) release >1{x [ ctr=. _1
while. y>ctr=. >:ctr do. point=. walk point
if. PTS check_collision point do. y=. ctr [ addPt point end. end.
ctr
NB.EG (5;PTS) aggStruc 1e2$1e2
)
```

The basic sub-routines:

```
NB.* check_collision: 1 iff x in neighborhood of points y.
check_collision=: 4 : '1 e. x e. RNI+"1 y'

NB.* addPt: append newly aggregated point y to the bottom of global PTS.
addPt=: 3 : 'PTS=: PTS,y'

NB.* dimnbr: x-dimensional y-nearest neighbor offsets.
dimnbr=: 13 : '(x$0)-.~,/>{x$<y-~i.>:+:y'
NB. RNI=: (D$0)-.~(_1 0 1){~(D$3)#:i.3^D     NB. Relative neighbor offsets

NB.* walk: walk one random step.
walk=: 3 : 'y+RNI{~?#RNI'
```

Finally, an early version of the nub of the problem: the function that "releases" a new particle.

```
NB.* releaseIfOpen: release new particle x away to find open neighborhood.
releaseIfOpen=: 4 : 0
while. 1 e. PTS e. RNI+"1 newpt=. ((]{~[:?#)y)+x*_1 1{~?D$2 do. end.
newpt
)
```

Make this our release policy (for now):

```
release=: releaseIfOpen
```

The main routine randomly moves the newly-released point until it either encounters the existing cluster and sticks, or finishes a pre-set number of moves without encountering the cluster, at which point it disappears.
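The shape of that loop can be sketched in Python rather than J (a hedged sketch; the function names `aggregate` and `neighbor_offsets` are our own, standing in for aggStruc and RNI):

```python
import random
from itertools import product

def neighbor_offsets(d):
    # the 3^d - 1 relative offsets of a cell's immediate neighbors (J's RNI)
    return [o for o in product((-1, 0, 1), repeat=d) if any(o)]

def aggregate(cluster, start, max_steps, rng=None):
    """Random-walk `start` until it lands next to the cluster (then stick it)
    or `max_steps` runs out (then discard it). Returns the steps taken."""
    rng = rng or random.Random(0)
    offs = neighbor_offsets(len(start))
    occupied = set(cluster)
    point = tuple(start)
    for step in range(1, max_steps + 1):
        move = rng.choice(offs)
        point = tuple(p + m for p, m in zip(point, move))
        # stick as soon as any immediate neighbor is an occupied cell
        if any(tuple(p + o for p, o in zip(point, off)) in occupied
               for off in offs):
            cluster.append(point)      # the walker joins the cluster
            return step
    return max_steps                   # gave up: the walker disappears
```

Because the collision test runs after every step, a walker coming in from outside always sticks at a cell exactly adjacent to the cluster, never on top of it.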

We'll explain in more detail how the release policy works shortly, but first let's see how to run the code. If we run the above lines of J, then move back into the base namespace and make the dla namespace transparently available there, we can initialize our cluster.

```
   coclass 'base'
   coinsert 'dla'
   init 2
0 0
   $PTS
1 2
```

So, we start with one point at the origin; its shape is 1x2 because it's one point in two dimensions: we plan to concatenate new points to the bottom of this matrix (as can be seen by the definition of addPt above).

The left argument to the main routine is a boxed pair: the parameter 3 (to be explained shortly) and the existing list of points; the right argument gives how long each random walk may run before we give up. We'll arbitrarily try 20 walks of length 10 each:

```
   (3;PTS) aggStruc 20$10
4 10 10 10 10 10 10 1 10 10 10 10 3 10 10 7 10 9 10 10
```

The result is how long each walk lasted. We see that most ran the maximum, meaning they never encountered the cluster (or encountered it only on the last step), but a few ended early and hence became part of the cluster.

So, how many points do we now have and what are they?

```
   $PTS
7 2
   |:PTS
0 _1 _2 _2  0 _1 _2
0 _1 _2  0 _2 _3  1
```

We have 7 points which we show transposed simply because it's a more efficient use of display space.

Let's try the same thing again:

```
   (3;PTS) aggStruc 20$10
10 10 1 10 6 10 10 10 3 10 10 10 2 3 10 1 10 4 4 10
   ($PTS);|:PTS
+----+--------------------------------------------+
|16 2|0 _1 _2 _2  0 _1 _2 _3  1 _4 _1 0 _2 1  2 _3|
|    |0 _1 _2  0 _2 _3  1  2 _2  1  2 3 _4 1 _1 _3|
+----+--------------------------------------------+
```

This time we see that more points stuck. Again, we display the shape and the values as a vector of boxed items just to make more efficient use of our display.

How about a picture of these points? To do this, re-enter the namespace and define a new utility function bordFill, then return to the base namespace. This new function takes a left argument of how many empty cells we want to border our points and a right argument of our point co-ordinates.

```
   coclass 'dla'

NB.* bordFill: fill 0-mat w/1 according to y, having border of x cells.
bordFill=: 4 : '(1)(<"1 y-"1 x-~<./y)}0$~2$(>:+:x)+(>./y)-<./y'
NB.EG viewmat 1 bordFill PTS

   coclass 'base'
```

We use this in conjunction with the general "viewmat" utility to display the points:

```
   viewmat 1 bordFill PTS
```

Which looks like this:

## Explaining First Release Policy

Our first release function is very crude, and its limitations will become apparent as the cluster grows, but it's fine for a start. Here's most of the code:

```
while. 1 e. PTS e. RNI+"1 newpt=. ((]{~[:?#)y)+x*_1 1{~?D$2 do. end.
```

This is a while loop where all the work gets done in the conditional part. Here we check whether any of the immediate neighbors of the new, randomly chosen point newpt are in the existing set of points. If so, we do nothing and try again until this is false. Once we exit the loop, we return newpt: a point near the cluster with no cluster points among its neighbors.

Let's examine this line in detail. The first part after while. is

```
1 e. PTS e. RNI+"1 newpt
```

which evaluates to true if one or more of the existing points appear in the set formed by RNI+"1 newpt. RNI is one of the three globals from our initialization; it looks like this (transposed for more efficient display):

```
   |:RNI
_1 _1 _1  0 0  1 1 1
_1  0  1 _1 1 _1 0 1
```

Arranging these eight points around an origin,

```
   3 3$<"1]1 1 1 1 0 1 1 1 1#^:_1]RNI
+-----+----+----+
|_1 _1|_1 0|_1 1|
+-----+----+----+
|0 _1 |0 0 |0 1 |
+-----+----+----+
|1 _1 |1 0 |1 1 |
+-----+----+----+
```

we can see that they are the two-dimensional offsets around a given point. So, if we add each of these rows of RNI to a single point like newpt, we get the co-ordinates of all the neighbors of newpt, e.g. for (5,5), shown here arranged two-dimensionally with the point itself inserted into the middle:

```
   (13 : '(<y)(<1 1)}3 3$<"1]1 1 1 1 0 1 1 1 1#^:_1]RNI+"1 y') 5 5
+---+---+---+
|4 4|4 5|4 6|
+---+---+---+
|5 4|5 5|5 6|
+---+---+---+
|6 4|6 5|6 6|
+---+---+---+
```

The first part of the assignment of newpt, newpt=. ((]{~[:?#)y)+x*_1 1{~?D$2, uses the tacit function (]{~[:?#), which generates a random number ( ? ) based on the length ( # ) of its right argument and uses it to select ( {~ ) an element (row). This is applied to the right argument y, the list of points in the cluster, so we randomly choose one of our existing points.

To this random point we add the latter part of the expression, x*_1 1{~?D$2: an offset whose size is the left argument x and whose sign in each dimension is chosen at random by _1 1{~?D$2. This multiplication moves us x cells away from the chosen point along each axis, i.e. in a random diagonal direction. The D is there to support generalization to more than two dimensions.
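Putting these pieces together, here is a hedged Python sketch of the release logic (our own names, mirroring the J expressions noted in the comments):

```python
import random
from itertools import product

def release_if_open(cluster, x, rng=None):
    """Pick a random cluster point, jump x cells away with an independent
    random sign per dimension (a diagonal jump), and retry until the landing
    spot has no cluster point among its immediate neighbors."""
    rng = rng or random.Random(0)
    d = len(cluster[0])
    offs = [o for o in product((-1, 0, 1), repeat=d) if any(o)]
    occupied = set(cluster)
    while True:
        base = cluster[rng.randrange(len(cluster))]       # (]{~[:?#)y
        signs = [rng.choice((-1, 1)) for _ in range(d)]   # _1 1{~?D$2
        newpt = tuple(b + x * s for b, s in zip(base, signs))
        # exit once no immediate neighbor of newpt is occupied
        if not any(tuple(n + o for n, o in zip(newpt, off)) in occupied
                   for off in offs):
            return newpt
```

For a one-point cluster at the origin and x of 3, the first candidate already has an empty neighborhood, so one of the four points (±3, ±3) comes back immediately.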

So, to return to our long-awaited explanation of the magic number 3 in our invocation of the main function ( (3;PTS) aggStruc 20$10 ): this is the distance we jump from a randomly chosen cluster point to find an empty neighborhood in which to release the new point. If we pick a larger number for this argument, we'll pick release points that are, on average, further from the cluster, which means a random walk must go on longer to have a good chance of hitting the cluster.

So, if we choose a much bigger number than 3, we might see something like this:

```
   (20;PTS) aggStruc 20$10
10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
   $PTS
16 2
```

We added no new points because a random walk of 10 steps is not long enough to cover the (more or less) 20 cells back to where the cluster is: a random walk's expected displacement grows only as the square root of the number of steps, so covering a distance of 20 takes on the order of 20^2 steps. If we are willing to walk that much, it's a different story:

```
   (20;PTS) aggStruc 20$400
400 288 400 400 400 400 309 400 400 400 400 400 400 247 400 400 400 400 400 400
   $PTS
19 2
```

However, as you might guess, walking further takes more time. Starting with the original 16 points, we time 20 walks of 10 versus 20 walks of 400:

```
   6!:2 '(3;PTS) aggStruc 20$10'
0.006241296
   $PTS
30 2
   $PTS_dla_=: 16{.PTS_dla_
16 2
   4!:55 <'PTS'   NB. Ensure local copy doesn't shadow one in "dla" namespace.
1
   6!:2 '(20;PTS) aggStruc 20$400'
0.22534233
   $PTS
18 2
```

So, we see that the longer walks take many times as long and add fewer points. In fact, this set of parameters will often add no points.

## Tricky Parameter

It would seem that we would want to make the initial offset parameter as small as possible for more efficient code. However, how small can this value be? If it's zero, we'll never find an empty neighborhood because we'll always choose a point on the cluster. If it's one, the release point is always diagonally adjacent to the chosen cluster point, so it fails for the same reason. How about two?
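We can verify that reasoning directly. The following Python check (`has_cluster_neighbor` is our own helper, standing in for the J neighborhood test) shows that every offset-1 landing spot around a cluster point is still adjacent to it, while the offset-2 spots are clear:

```python
from itertools import product

def has_cluster_neighbor(pt, occupied):
    # True if any immediate neighbor of pt is an occupied cell
    offs = [o for o in product((-1, 0, 1), repeat=len(pt)) if any(o)]
    return any(tuple(p + o for p, o in zip(pt, off)) in occupied
               for off in offs)

occupied = {(0, 0)}
# offset 1: every diagonal landing spot (+-1, +-1) still touches the base
# point, so the release loop could never exit
assert all(has_cluster_neighbor((sx, sy), occupied)
           for sx in (-1, 1) for sy in (-1, 1))
# offset 2: the landing spots (+-2, +-2) are clear of the base point
assert not any(has_cluster_neighbor((2 * sx, 2 * sy), occupied)
               for sx in (-1, 1) for sy in (-1, 1))
```

Of course, with a dense cluster an offset-2 jump may still land next to some other point; the loop then simply retries, which is exactly the behavior we time below.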

```
   6!:2 '(2;PTS) aggStruc 20$10'
0.0076180073
   $PTS
30 2
```

This seems to work fine. How many points per second are we adding?

```
   (30-16)%0.0076180073
1837.7509
```

That's over 1800 points/second - what if we continue doing this?

```
   6!:2 '(2;PTS) aggStruc 20$10'
0.0043779307
   $PTS
44 2
   (44-30)%0.0043779307
3197.8578
   6!:2 '(2;PTS) aggStruc 20$10'
0.008769271
   $PTS
57 2
   (57-44)%0.008769271
1482.4493
```

It's a variable amount but still thousands per second. To reduce our typing, let's combine these separate lines into one. We'll have to save the starting number of points to calculate the number per second.

```
   (sv-~#PTS)%tm=. 6!:2 '(2;PTS) aggStruc 20$10' [ sv=. #PTS
1375.6458
   (sv-~#PTS)%tm=. 6!:2 '(2;PTS) aggStruc 20$10' [ sv=. #PTS
1370.8268
   (sv-~#PTS)%tm=. 6!:2 '(2;PTS) aggStruc 20$10' [ sv=. #PTS
3250.1922
   (sv-~#PTS)%tm=. 6!:2 '(2;PTS) aggStruc 20$10' [ sv=. #PTS
1716.175
   (sv-~#PTS)%tm=. 6!:2 '(2;PTS) aggStruc 20$10' [ sv=. #PTS
1467.7207
   $PTS
123 2
```

This seems to be going well - what's it look like?

```
   viewmat 1 bordFill PTS
   savemat_jviewmat_ 'showUse01.bmp'
```

Let's continue, but with 30 releases each time instead of 20. First, though, we'll save even more typing by defining a calling function to encapsulate what we've been doing repetitively:

```
   do1_dla_=: 3 : '(sv-~#PTS)%tm=. 6!:2 ''(2;PTS) aggStruc y'' [ sv=. #PTS'
```

Since this is only one line long, we'll create it directly in the namespace instead of entering and exiting.

Now use this to run 3 batches of 30 walks:

```
   do1&>3$<30$10
1400.4003 931.17767 1604.4576
   $PTS
190 2
```

Now try 40 at a time:

```
   do1&>3$<40$10
1476.1391 822.55663 789.23452
```

The number per second seems to be declining. Let's try 50 at a time:

```
   do1&>4$<50$10
754.1739 855.81707 584.68598 669.20533
```

The rate of increase has definitely dropped. What if we try 100 at a time?

```
   do1&>3$<100$10
332.41549 394.68151 306.51331
   $PTS
643 2
```

Is it slowing because we're increasing the number of points we try at once? This seems counter-intuitive, but let's test it by going back to batches of 20.

```
   do1&>9$<20$10
146.76733 229.80636 277.27224 333.09454 379.94374 473.80991 247.69881 318.09695 179.85791
   $PTS
780 2
   viewmat 1 bordFill PTS
   savemat_jviewmat_ 'showUse04.bmp'
```

It's definitely slowing regardless. A little reflection gives us a clue: since we move only a small distance away from a randomly chosen cluster point to find an empty neighborhood, we must make more and more attempts as the cluster grows, because only points on the perimeter are likely to give us good release sites.

## Concentrating on the Perimeter

Fortunately, it's easy to see how to address this: instead of giving all the points as an argument to aggStruc, why don't we give only the perimeter points? This will reduce the number of false hits. How do we know which points are on the perimeter? One simple way is to look at only the most recently added points since we are, by the nature of our aggregation, adding to the perimeter.

Guessing that the most recent 10% of points is a reasonable bunch to try, let's extract those:

```
   $PERIMPTS_dla_=: PTS{.~->.10%~#PTS
78 2
```

Looking at how these compare to the whole set:

```
   viewmat 2 (<"1]1+PERIMPTS-"1 <./PTS)}1 bordFill PTS
```

The manipulation we apply to PERIMPTS is needed to map the origin-centered co-ordinates onto positive-only co-ordinates; it replicates what bordFill does internally.
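That shift is simple to spell out in Python (a sketch; `to_grid` is our own name): subtract the per-dimension minimum, then add the border width.

```python
def to_grid(pts, border=1):
    """Shift origin-centered co-ordinates so every index is non-negative,
    leaving `border` empty cells on the low side of each dimension."""
    d = len(pts[0])
    mins = [min(p[i] for p in pts) for i in range(d)]
    return [tuple(c - m + border for c, m in zip(p, mins)) for p in pts]
```

For example, `to_grid([(-2, -3), (0, 0)])` yields `[(1, 1), (3, 4)]` with the default border of 1.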

This looks good as the perimeter is indicated by the pink points around the edge. Let's try using this as our argument to aggStruc and see how it affects our timings.

```
   do1&>3$<20$10
1005.5137 995.09202 564.62669
   do1&>3$<20$10
|attention interrupt: releaseIfOpen
|   point=.(>0{x)    release>1{x[ctr=._1
   $PTS
831 2
```

The points/sec initially goes back up but quickly declines, then the code freezes up and has to be interrupted. A look at the points and their perimeter makes the problem clear:

```
   viewmat 2 (<"1]1+PERIMPTS-"1 <./PTS)}1 bordFill PTS
```

We see that we quickly outgrow our original perimeter: once those saved points are in the interior, every candidate release point has occupied neighbors, so releaseIfOpen can never exit its loop. Let's see how costly it is to re-calculate the perimeter with each call to the main routine:

```
   do1_dla_=: 3 : '(sv-~#PTS)%tm=. 6!:2 ''(2;PTS{.~->.10%~#PTS) aggStruc y'' [ sv=. #PTS'
NB.  Inserted the perimeter calculation   ^^^^^ here ^^^^^^
   do1&>5$<20$10
1009.626 1297.3126 1809.8097 1466.0253 1293.6868
```

Increasing the number of attempts per call:

```
   do1&>4$<30$10
1578.9208 857.49015 1561.9932 1310.9005
```

Increasing the number of attempts again:

```
   do1&>4$<40$10
1033.4018 1404.6138 1208.4397 1019.4336
```

Increasing again:

```
   do1&>7$<50$10
997.21846 893.30982 1098.0115 820.66574 1135.0023 2141.7607 807.80606
```

This seems to be working, though there does appear to be a gradual decrease in efficiency.

```
   $PTS
1358 2
   do1&>(50$10);4$<100$10
892.7828 859.53069 827.92105 809.63688 852.42751
```

This looks pretty good but there is an aesthetic problem:

This DLA looks a bit "chunky" compared to other examples we've seen. Our policy of releasing very near the existing cluster tends to fill in the gaps between branches. Also, since we pick release points randomly in relation to the perimeter, we sometimes release within the existing cluster, as indicated by the interior pink points on the lower part of this picture. What we'd really like is to pick release points only outside the perimeter. One way to do this is shown in the following section.

Here's a simple measure of how densely packed the above-pictured DLA is:

```
   (13 : '(#y)%~*/$0 bordFill y') PTS
2.5402844
```

This shows us that 1 in every 2.5 cells of the bounding rectangle is part of the cluster. If we add more points with these parameters, getting up to almost 62,000, this sparsity ratio rises to 4.07.
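The same density measure in Python (a sketch; `sparsity` is our own name, mirroring the J one-liner above):

```python
def sparsity(pts):
    """Cells in the cluster's bounding rectangle per cluster point;
    higher values mean a sparser, more branched cluster."""
    d = len(pts[0])
    area = 1
    for i in range(d):
        lo = min(p[i] for p in pts)
        hi = max(p[i] for p in pts)
        area *= hi - lo + 1   # extent of the bounding box in dimension i
    return area / len(pts)
```

A solid 2x2 block scores 1.0; anything above 1 indicates empty cells inside the bounding box.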

## Same Method, Different Parameters

Before we show a new and different release policy, let's build a cluster with our existing policy using slightly different parameters. First, initialize a new set of points:

```
   init 2
0
   do1 10$10
1244.1936
   $PTS
6 2
```

Then we'll vary the minimal offset distance by parameterizing our do1 function:

```
do1_dla_=: 4 : '(sv-~#PTS)%tm=. 6!:2 ''(x;PTS) aggStruc y'' [ sv=. #PTS'
```

and use it with a larger offset distance than the "2" we used for the previous cluster:

```
   5 do1&>10$<10$30
72.617714 607.66382 155.76784 157.00792 281.65434 334.79505 543.02723 342.61396 477.80781 496.56591
```

Since we'll be doing this a lot, let's summarize the "points-per-second" result we get from do1. The verb usus shows the usual statistics in which we're interested: minimum, maximum, mean, and standard deviation of the points-per-second results generated by the do1 verb.

```
   usus 10 do1&>100$<10$100
29.740074 1414.9642 320.73076 187.40271
   usus 10 do1&>100$<10$100
50.720809 720.61703 243.40365 126.31516
   usus 10 do1&>100$<10$100
37.930561 2157.7823 222.73005 227.97863
   usus 10 do1&>100$<10$100
25.294278 441.61624 160.07025 77.476433
```

From these statistics we see the gradual decline we expect as the cluster grows, giving us more fruitless picks from the interior relative to fruitful ones from the perimeter. We continue until we have 11,652 points.

```
   usus 10 do1&>100$<20$100
41.947957 316.08637 115.16387 46.413008
   usus 10 do1&>100$<40$100
36.47076 130.91957 71.789116 17.662557
   usus 10 do1&>100$<80$100
24.403297 74.751489 43.270225 10.192174
   $PTS
11652 2
   $PPTS=: PTS{.~->.10%~#PTS
1166 2
   viewmat 2 (<"1]1+PPTS-"1 <./PTS)}1 bordFill PTS
```

Taking a look at the perimeter points we'd get at this stage, we see the stray interior ones that tend to fill in the cluster. This cluster has a sparsity slightly greater than the 4.07 of our previous cluster, generated by the same method but with a smaller offset.

```
   (13 : '(#y)%~*/$0 bordFill y') show2pts03
5.0468589    NB. Bounding rectangle points/cluster points
```

## New Release Policy

(The following method is also elaborated here in the context of an earlier version of the DLA code which works on a fixed-size, boolean matrix rather than an explicit set of points.)

First, we define a verb to generate all the immediate neighbors of the point set y that are not in the set x:

```
neigh2=: 13 : 'x-.~~.,/y+"1/RNI'   NB. Empty neighbors of these
```

We use this to get the nearest neighbors:

```
   $NP=: PTS neigh2 PTS
26386 2
```

Then we expand the neighborhood by finding the next closest set of neighbors to these, then their neighbors, and so on five times:

```
   6!:2 'NP=: PTS neigh2^:5]NP'
0.27077715
```

This gives us an envelope six deep around our original points. We find the edge of this envelope by isolating those outermost neighbors (those with neighbors not in the set of neighbors):

```
   $edges=. ~.PTS-.~NP -.~ ,/NP+"1/RNI
1029 2
   viewmat 2 (<"1]1+edges-"1 <./PTS,edges)}1 bordFill PTS,edges
```

Here's what the edge of the envelope looks like in relation to our existing cluster:
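The whole envelope-and-edge construction can be sketched in Python (our own names; `empty_neighbors` plays the role of neigh2, shown here on a one-point stand-in for PTS):

```python
from itertools import product

OFFS = [o for o in product((-1, 0, 1), repeat=2) if any(o)]

def empty_neighbors(exclude, pts):
    # distinct immediate neighbors of pts that are not in exclude (cf. neigh2)
    ex = set(exclude)
    return {tuple(p + o for p, o in zip(pt, off))
            for pt in pts for off in OFFS} - ex

cluster = {(0, 0)}                        # minimal cluster for illustration
env = empty_neighbors(cluster, cluster)   # depth-1 shell of empty cells
for _ in range(5):                        # expand the envelope to depth 6
    env = empty_neighbors(cluster, env)
# the edge: neighbors of the envelope lying in neither it nor the cluster
edge = empty_neighbors(env | cluster, env)
```

For the one-point cluster the envelope is every cell within Chebyshev distance 6 of the origin, and the edge is the ring at distance 7.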

This looks like a good perimeter set on which to release points, but it's a bit thin. Let's thicken it by finding its neighbors two levels deep:

```
   $edges=. ~.PTS neigh2^:2]edges
5138 2
```

Giving us this fatter release area:

Adjust our top-level calling routine to use this perimeter-points global:

```
   do1_dla_=: 4 : '(sv-~#PTS)%tm=. 6!:2 ''(x;PPTS) aggStruc y'' [ sv=. #PTS'
   PPTS_dla_=: edges
   1 do1&>10$<10$25
12.281323 21.910126 23.042773 79.635403 23.813149 22.408149 34.180425 48.340221 21.374501 50.795483
```

Our initial points/second are a bit low but seem to be improving. Let's try some more:

```
   $PTS
12590 2
   viewmat 2 (<"1]1+PPTS-"1 <./PTS,PPTS)}1 bordFill PTS,PPTS
   1 do1&>20$<20$25
24.342477 61.743025 41.15663 27.630435 34.954674 34.740216 60.580126 52.822625 57.748258 22.060313 22.321549 52.2986 21.649241 40.913895 67.019374 34.597901 40.404442 52.186518 39.882113 42.753015
```

One problem we'll have if we generate too many points from one perimeter is that the cluster will eventually "invade" it. If we do nothing to adjust the perimeter, this leads to unsightly clumps of aggregation within the perimeter band. First, we'll make a new verb to create a perimeter:

```
calcPerim=: 3 : 0
NP=: PTS neigh2^:(0{y)]PTS
edges=. ~.PTS-.~NP -.~ ,/NP+"1/RNI
edges=. ~.PTS neigh2^:(1{y)]edges
NB.EG PPTS_dla_=: calcPerim 5 2
)
   $PPTS_dla_=: calcPerim 5 2
5314 2
```

Running this gives satisfactory statistics (min, max, mean, SD of points/second):

```
   usus 1 do1&>50$<20$25
5.0021381 49.32456 27.373553 10.544996
   $PTS
12969 2
```

We can adjust our main routine to incorporate the perimeter calculation directly:

```
   do1_dla_=: 4 : '(sv-~#PTS)%tm=. 6!:2 ''(x;calcPerim 3 1) aggStruc y'' [ sv=. #PTS'
   usus 1 do1&>50$<20$25
12.040371 61.52522 30.262127 10.363449
```

After trying out this version, it seems that the perimeter calculation, being somewhat expensive, should be done only every so often, with the result cached in a global, like this:

```
   do1_dla_=: 4 : 0
if. 0=?10 do. PPTS=: calcPerim 5 2 end.
sv=. #PTS
tm=. 6!:2 '(x;PPTS) aggStruc y'
tm%~sv-~#PTS
)
   usus 1 do1&>50$<20$25
0 62.962738 22.075551 13.289167
```
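The 0=?10 test fires with probability 1/10, so the expensive perimeter recalculation is amortized over roughly ten batches. The idea in Python (a sketch with our own names):

```python
import random

def make_batch_runner(refresh_perimeter, p=10, rng=None):
    """Return a function that reuses a cached perimeter, refreshing it
    only about one call in `p` (cf. J's  if. 0=?10 do. ... end.)."""
    rng = rng or random.Random(0)
    state = {'perim': refresh_perimeter()}   # compute once up front
    def run_batch():
        if rng.randrange(p) == 0:            # true with probability 1/p
            state['perim'] = refresh_perimeter()
        return state['perim']
    return run_batch
```

The trade-off is that the cached perimeter can grow a little stale between refreshes, which is acceptable here because the release band is several cells thick.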

After running this enough times to add a significant number of points, we look at the cluster with its perimeter. This is interesting because we can plainly see the effect of the new release policy on the appearance of the cluster: the inner, denser portion was formed by our flawed "releaseIfOpen" method, while the outer, sparser part was formed using our new perimeter-based release policy.

If we generate a cluster solely with this newer release policy, we get something like the following, with a sparsity measure of 6.7, which makes it less dense than the clusters generated by our earlier policies.