NYCJUG/2011-09-13


Notation, functional programming, algorithm implementation, Sugar

Location: ThomasNet

Agenda

             Meeting Agenda for NYCJUG 20110913
             ----------------------------------

1. Beginner's regatta: array thinking is more natural than "loopy" thinking -
see "Towards More Natural Functional Programming Languages_p1-myers.pdf".


2. Show-and-tell: thoughts on notation - see "Excerpts from Cognitive
Dimensions of Notation.pdf".

Introducing J to functional programmers - see "Elaborations
on the Importance of Notation.pdf", "JFunctionalMeetupIntro.pdf", and
"nonNotationalSprawlEGinExcel.pdf".


3. Learning, teaching and promoting J, et al.: see "Algorithm Implementations -
Present and Future.pdf" and "Getting J into Sugar.pdf".

Beginner's Regatta

Towards More Natural Functional Programming Languages

                                   Invited Talk
                                     Brad A. Myers 
                          Human Computer Interaction Institute 
               School of Computer Science, Carnegie Mellon University 
                               Pittsburgh, PA 15213-3891 
                                   bam+@cs.cmu.edu 
                           http://www.cs.cmu.edu/~bam 

Abstract

Programming languages are the way for a person to express a mental plan in a way that the computer can understand. Therefore, it is appropriate to consider properties of people when designing new programming languages. In our research, we are investigating how people think about algorithms, and how programming languages can be made easier to learn and more effective for people to use. By taking human-productivity aspects of programming languages seriously, designers can more effectively match programming language features with human capabilities and problem solving methods. Human factors methods can be used to measure the effects, so unsubstantiated claims can be avoided.

This talk will present a quick summary of new and old results in what is known about people and programming, from areas that are sometimes called “empirical studies of programmers” and “psychology of programming.” Much is known about what people find difficult, and what syntax and language features are especially tricky and bug-prone. Our new research has discovered how people naturally think about algorithms and data structures, which can help with making programming languages more closely match people’s problem solving techniques.

Copyright is held by the author/owner(s). ICFP'02, October 4-6, 2002, Pittsburgh, Pennsylvania, USA. ACM 1-58113-487-8/02/0010.

Examples

As part of his PhD thesis [1], my student John Pane performed three formative studies to see how non-programmers naturally thought about algorithms for manipulating graphics and numbers.

In the first study, we showed 10-year old children pictures of various scenes from the PacMan game, and asked how they would implement them. In the second study, we showed children various situations in a database, and asked them to perform arithmetic operations. In both cases, we had independent analysts evaluate the answers looking for patterns. One of the results was that people consistently operated on sets of objects, rather than iterating or recursing through the set [3]. For example, people said, “When PacMan eats all of the dots, he goes to the next level,” and “Subtract 20,000 from all elements in Round 2.” Another result is that most people tended to use an event-based style for graphics, such as “If PacMan hits a wall, he stops.” In contrast, some people used a constraint style: “PacMan cannot go through a wall.”
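
In J this set-oriented phrasing maps directly onto the notation; with a hypothetical list of Round 2 scores (not from the study), the whole-set operation is a single expression with no loop (J writes negative numbers with a leading underscore):

   round2 =: 34000 51250 18000 47500   NB. hypothetical Round 2 scores
   round2 - 20000                      NB. "subtract 20,000 from all elements"
14000 31250 _2000 27500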

In an extensive study of Boolean expressions, we found that children and adults use words such as “AND,” “OR” and “NOT” with inconsistent meanings [2]. For example, “AND” often was used where the Boolean operator “OR” would be required, as in “Scores of 10,000 and up are extraordinary” (since no score can be 10,000 AND up at the same time). Another result is that people were not consistent in the precedence that they expected for operators. For example “Not A or B” often meant “Not (A or B)”, but not always.
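
J sidesteps the precedence half of this problem: there is no operator precedence, only right-to-left evaluation, so an unparenthesized expression has exactly one reading. A small illustration with hypothetical truth values:

   a =: 0 [ b =: 1
   -. a +. b             NB. right-to-left: not (a or b)
0
   (-. a) +. b           NB. parentheses force the other reading
1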

Using the results of these and prior studies and human-factors principles we have created a new programming language for children called HANDS. User studies showed that novel features of HANDS made it easier for non-programmers to create programs.

References

  1. Pane, J., A Programming System for Children that is Designed for Usability. PhD Thesis, Computer Science Department, Carnegie Mellon University, 2002, Pittsburgh, PA. Computer Science Technical Report CMU-CS-02-127.
  2. Pane, J.F. and Myers, B.A. “Tabular and Textual Methods for Selecting Objects from a Group,” in Proceedings of VL 2000: IEEE International Symposium on Visual Languages. 2000. Seattle, WA: IEEE Computer Society. pp. 157-164.
  3. Pane, J.F., Ratanamahatana, C.A., and Myers, B.A., “Studying the Language and Structure in Non-Programmers’ Solutions to Programming Problems.” International Journal of Human-Computer Studies, 2001. 54(2): pp. 237-264.

Show and Tell

Excerpts from Cognitive Dimensions of Notations

In M. Beynon, C.L. Nehaniv, and K. Dautenhahn (Eds.) Cognitive Technology 2001 (LNAI 2117). Springer-Verlag, pp. 325-341

Cognitive Dimensions of Notations: Design Tools for Cognitive Technology

A.F. Blackwell[1], C. Britton[2], A. Cox[2], T.R.G. Green[3], C. Gurr[4], G. Kadoda[5], M.S. Kutar[2], M. Loomes[2], C.L. Nehaniv[2], M. Petre[6], C. Roast[7], C. Roes, A. Wong and R.M. Young[2]

Abstract

The Cognitive Dimensions of Notations framework has been created to assist the designers of notational systems and information artifacts to evaluate their designs with respect to the impact that they will have on the users of those designs. The framework emphasizes the design choices available to such designers, including characterization of the user's activity, and the inevitable tradeoffs that will occur between potential design options. The resulting framework has been under development for over 10 years, and now has an active community of researchers devoted to it. This paper summarizes the current activity, especially the results of a one-day workshop devoted to Cognitive Dimensions in December 2000, and reviews the ways in which it applies to the field of Cognitive Technology.

Review of Dimensions

Viscosity: resistance to change.
A viscous system needs many user actions to accomplish one goal. Changing all headings to upper-case may need one action per heading. (Environments containing suitable abstractions can reduce viscosity.) We distinguish repetition viscosity, many actions of the same type, from knock-on viscosity, where further actions are required to restore consistency.

Visibility: ability to view components easily.
Systems that bury information in encapsulations reduce visibility. Since examples are important for problem solving, such systems are to be deprecated for exploratory activities; likewise, if consistency of transcription is to be maintained, high visibility may be needed.

Premature commitment: constraints on the order of doing things.
Self-explanatory. Examples: being forced to declare identifiers too soon; choosing a search path down a decision tree; having to select your cutlery before you choose your food.

Hidden dependencies: important links between entities are not visible.
If one entity cites another entity, which in turn cites a third, changing the value of the third entity may have unexpected repercussions. Examples: cells of spreadsheets; style definitions in Word; complex class hierarchies; HTML links. There are sometimes actions that cause dependencies to get frozen – e.g. soft figure numbering can be frozen when changing platforms; these interactions with changes over time are still problematic in the framework.

Role-expressiveness: the purpose of an entity is readily inferred.
Role-expressive notations make it easy to discover why the programmer or composer has built the structure in a particular way; in other notations each entity looks much the same and discovering their relationships is difficult. Assessing role-expressiveness requires a reasonable conjecture about cognitive representations (see the Prolog analysis below) but does not require the analyst to develop his/her own cognitive model or analysis.

Error-proneness: the notation invites mistakes and the system gives little protection.
Enough is known about the cognitive psychology of slips and errors to predict that certain notations will invite them. Prevention (e.g. check digits, declarations of identifiers, etc) can redeem the problem.

Abstraction: types and availability of abstraction mechanisms.
Abstractions (redefinitions) change the underlying notation. Macros, data structures, global find-and-replace commands, quick-dial telephone codes, and word-processor styles are all abstractions. Some are persistent, some are transient.

Abstractions, if the user is allowed to modify them, always require an abstraction manager -- a redefinition sub-device. It will sometimes have its own notation and environment (e.g. the Word style sheet manager) but not always (for example, a class hierarchy can be built in a conventional text editor).

Systems that allow many abstractions are potentially difficult to learn.

Secondary notation: extra information in means other than formal syntax.
Users often need to record things that have not been anticipated by the notation designer. Rather than anticipating every possible user requirement, many systems support secondary notations that can be used however the user likes. One example is comments in a programming language, another is the use of colours or format choices to indicate information additional to the content of text.

Closeness of mapping: closeness of representation to domain.
How closely related is the notation to the result it is describing?

Consistency: similar semantics are expressed in similar syntactic forms.
Users often infer the structure of information artefacts from patterns in notation. If similar information is obscured by presenting it in different ways, usability is compromised.

Diffuseness: verbosity of language.
Some notations can be annoyingly long-winded, or occupy too much valuable “real-estate” within a display area. Big icons and long words reduce the available working area.

Hard mental operations: high demand on cognitive resources.
A notation can make things complex or difficult to work out in your head, by making inordinate demands on working memory, or requiring deeply nested goal structures.

Provisionality: degree of commitment to actions or marks.
Premature commitment refers to hard constraints on the order of doing things, but whether or not hard constraints exist, it can be useful to make provisional actions – recording potential design options, sketching, or playing “what-if” games.

Progressive evaluation: work-to-date can be checked at any time.
Evaluation is an important part of the design process, and notational systems can facilitate evaluation by allowing users to stop in the middle to check work so far, find out how much progress has been made, or check what stage in the work they are up to. A major advantage of interpreted programming environments such as BASIC is that users can try out partially-completed versions of the product program, perhaps leaving type information or declarations incomplete.

  1. University of Cambridge
  2. University of Hertfordshire
  3. University of Leeds
  4. University of Edinburgh
  5. University of Bournemouth
  6. Open University
  7. Sheffield Hallam University

Introducing J to Functional Programmers

Some of the following examples were used in an introduction to J at a functional-programming Meetup.

Elaborations on the Importance of Notation

Here are some miscellaneous excerpts on notation. The first is from Knuth; the second references a paper written by Thomas Bayes that uses a now-defunct notation for writing higher-order differentials.

Two Notes on Notation

by Donald E. Knuth, Computer Science Department, Stanford University
Mathematical notation evolves like all languages do. As new experiments are made, we sometimes witness the survival of the fittest, sometimes the survival of the most familiar. A healthy conservatism keeps things from changing too rapidly; a healthy radicalism keeps things in tune with new theoretical emphases. Our mathematical language continues to improve, just as “the d-ism of Leibniz overtook the dotage of Newton” in past centuries …

1. Iverson’s convention. The first notational development I want to discuss was introduced by Kenneth E. Iverson in the early 60s, on page 11 of the pioneering book [21] that led to his well known APL. “If α and β are arbitrary entities and R is any relation defined on them, the relational statement (αRβ) is a logical variable which is true (equal to 1) if and only if α stands in the relation R to β. For example, if x is any real number, then the function

(x > 0) − (x < 0)

(commonly called the sign function or sgn x) assumes the values 1, 0, or −1 according as x is strictly positive, 0, or strictly negative.” When I read that, long ago, I found it mildly interesting but not especially significant. I began using his convention informally but infrequently, in class discussions and in private notes. I allowed it to slip, undefined, into an obscure corner of one of my books... But when I prepared the final manuscript of [15], I began to notice that Iverson’s idea led to substantial improvements in exposition and in technique. ...
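
In J, Iverson's convention is built in: relations return 0 or 1, so the sign function can be written just as quoted (the primitive signum * is shown only for comparison):

   sgn =: (>&0) - (<&0)    NB. Iverson's (x>0)-(x<0) as a J verb
   sgn _5 0 3.2
_1 0 1
   * _5 0 3.2              NB. built-in signum gives the same result
_1 0 1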

Discarded Notations

This is seen in some recently discovered papers by Thomas Bayes, the discoverer of Bayes's Theorem. In them we see an old version of a notation favored by Isaac Newton for higher-order differentials. Whereas contemporary notation uses superscript numerals to denote higher-order differentials, this one uses a number of dots.
File:Theorem mentioned to me at Tunbridge Wells.jpg
File:Theorem mentioned to me at Tunbridge Wells-dot notation.jpg

Looking at another example from this same collection, we see that parentheses for grouping expressions were not standard at that time.
File:Integral of Log from Bayes.jpg

Notation for Great Big Numbers

Here is an exposition on how to express extremely large numbers.

It's hard to appreciate the Singularity properly without first appreciating really large numbers. I'm not talking about little tiny numbers, barely distinguishable from zero, like the number of atoms in the Universe or the number of years it would take a monkey to duplicate the works of Shakespeare. I invite you to consider what was, circa 1977, the largest number ever to be used in a serious mathematical proof. The proof, by Ronald L. Graham, is an upper bound to a certain question of Ramsey theory. In order to explain the proof, one must introduce a new notation, due to Donald E. Knuth in the article Coping With Finiteness. The notation is usually a small arrow, pointing upwards, here abbreviated as ^. Written as a function:

int arrow (int num, int power, int arrownum) {
    int answer = num;
    if (arrownum == 0)
        return num * power;
    for (int i = 1; i < power; i++)
        answer = arrow(num, answer, arrownum - 1);
    return answer;
} // end arrow

2^4 = 2 * 2 * 2 * 2 = 16.
3^^4 = 3^(3^(3^3)) = 3^(3^27) = 3^7,625,597,484,987

7^^^^3 = 7^^^(7^^^7).

3^3 = 3 * 3 * 3 = 27. This number is small enough to visualize.

3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987. Larger than 27, but so small I can actually type it. Nobody can visualize seven trillion of anything, but we can easily understand it as being on roughly the same order as, say, the gross national product.

3^^^3 = 3^^(3^^3) = 3^(3^(3^(3^...^(3^3)...))). The "..." is 7,625,597,484,987 threes long. In other words, 3^^^3 or arrow(3, 3, 3) is an exponential tower of threes 7,625,597,484,987 levels high. The number is now beyond the human ability to understand, but the procedure for producing it can be visualized. You take x=1. You let x equal 3^x. Repeat seven trillion times. While the very first stages of the number are far too large to be contained in the entire Universe, the exponential tower, written as "3^3^3^3...^3", is still so small that it could be stored on a modern supercomputer.

3^^^^3 = 3^^^(3^^^3) = 3^^(3^^(3^^...^^(3^^3)...)). Both the number and the procedure for producing it are now beyond human visualization, although the procedure can be understood. Take a number x=1. Let x equal an exponential tower of threes of height x. Repeat 3^^^3 times, where 3^^^3 equals an exponential tower seven trillion threes high.

And yet, in the words of Martin Gardner: "3^^^^3 is unimaginably larger than 3^^^3, but it is still small as finite numbers go, since most finite numbers are very much larger."

And now, Graham's number. Let x equal 3^^^^3, or the unimaginable number just described above. Let x equal 3^^^^^^^(x arrows)^^^^^^^3. Repeat 63 times, or 64 including the starting 3^^^^3.

Graham's number is far beyond my ability to grasp. I can describe it, but I cannot properly appreciate it. (Perhaps Graham can appreciate it, having written a mathematical proof that uses it.) This number is far larger than most people's conception of infinity. I know that it was larger than mine. My sense of awe when I first encountered this number was beyond words. It was the sense of looking upon something so much larger than the world inside my head that my conception of the Universe was shattered and rebuilt to fit. All theologians should face a number like that, so they can properly appreciate what they invoke by talking about the "infinite" intelligence of God.

My happiness was completed when I learned that the actual answer to the Ramsey problem that gave birth to that number - rather than the upper bound - was probably six.

Why was all of this necessary, mathematical aesthetics aside? Because until you understand the hollowness of the words "infinity", "large" and "transhuman", you cannot appreciate the Singularity. Even appreciating the Singularity is as far beyond us as visualizing Graham's number is to a chimpanzee. Farther beyond us than that. No human analogies will ever be able to describe the Singularity, because we are only human.

The number above was forged of the human mind. It is nothing but a finite positive integer, though a large one. It is composite and odd, rather than prime or even; it is perfectly divisible by three. Encoded in the decimal digits of that number, by almost any encoding scheme one cares to name, are all the works ever written by the human hand, and all the works that could have been written, at a hundred thousand words per minute, over the age of the Universe raised to its own power a thousand times. And yet, if we add up all the base-ten digits the result will be divisible by nine. The number is still a finite positive integer. It may contain Universes unimaginably larger than this one, but it is still only a number. It is a number so small that the algorithm to produce it can be held in a single human mind.
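
As a J aside (not part of the quoted essay), the arrow function above translates fairly directly. This is a rough sketch using extended-precision integers; it is conceptual only, since genuine up-arrow values are far too large to ever compute:

arrow =: 4 : 0
 'b p' =. x: y                          NB. y is base,power; x counts the arrows
 if. 0 = x do. b * p return. end.       NB. zero arrows: plain multiplication
 r =. b
 for. i. p - 1 do. r =. (x - 1) arrow b , r end.
 r
)
   1 arrow 2 4      NB. one arrow is exponentiation: 2^4
16
   2 arrow 3 3      NB. 3^^3 = 3^(3^3) = 3^27
7625597484987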

The Big "L"

This essay on teaching mathematics mentions the inordinate difficulty some students have understanding the concept of a logarithm, which may have something to do with the notation commonly used to express it.

There are some students who, no matter what, can’t seem to comprehend what a logarithm (when treated like an operation) is doing. I see students that:

File:Problems with log.jpg

These mistakes indicate that “log” is being perceived as some sort of quantity to be manipulated, not as an operation. This may be due to the fact that “log” is the first time students are exposed to an operation that is represented as a word instead of as a symbol or other numerical notation. Texts apparently assume that this is a natural transition, not even worth mentioning, but it’s pretty clear that it is not as obvious as one might think.

It's too bad there's not some notational equivalent of log, like ^., that is visually similar to its inverse, like ^.
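
J's ^. is, of course, exactly that: dyadic ^. is the logarithm, written as the visual counterpart of the power verb ^ :

   2 ^ 10          NB. two to the tenth
1024
   2 ^. 1024       NB. base-2 logarithm, the inverse operation
10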

Non-notational Sprawl

Here we see one of the downsides of relying on keywords rather than symbols, by looking at formulas introduced in Microsoft Excel 2007. The example is taken from “Excel 2007 Pocket Guide, Second Edition” by Curtis D. Frye, published by O'Reilly Media, Inc., October 25, 2007.

File:NewFormulasInExcel2007.jpg

Here, in the first four of these, Excel assigns a new reserved word to the combination of an existing function with a new twist: the introduction of a conditional. This is characteristic of keyword systems, which subsume the complexity of combinations of primitive operations under distinct words. There is very little indication of the common fillip that unites the first four items except the suffix “IFS” (or “IF”). The distinction between the “IF” and “IFS” versions of AVERAGE further emphasizes how the vocabulary proliferates to distinguish special cases of a more general idea, since that generality cannot easily be combined with the existing vocabulary in a flat, linear, wordy notation.
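
For contrast, the conditional variants need no new vocabulary in J; the same primitives compose. A small sketch with made-up figures:

   sales =: 12 5 30 8 21
   +/ sales #~ sales > 10          NB. a SUMIF: sum of the items over 10
63
   (+/ % #) sales #~ sales > 10    NB. an AVERAGEIF: mean of the same items
21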

Learning, Teaching, and Promoting J

Algorithm Implementations: Present and Future

We need to continue to increase the number of off-the-shelf implementations of useful algorithms. Someone commented recently on the forum about being happy to find a Levenberg-Marquardt implementation in J – we need more of this! Here is a sample of what we’ve gotten in the past and what is being looked for in the future.

Algorithms for B-Splines and Least-Squares and NURBS – Oh No!

from		J. Patrick Harrington jph@astro.umd.edu via srs.acm.org   
to		programming@jsoftware.com  
date		Wed, Aug 31, 2011 at 3:24 PM  
subject	[Jprogramming] Least squares splines, B-splines, NURBS, etc.

Like many astronomers, my numerical analysis tool box is pretty much defined by the topics in Press et al. "Numerical Recipes". Recently I had some Monte Carlo data that I wanted to fit with a spline and couldn't find that in NR. I.e., I didn't want to pass the curve through the (noisy) data points, I wanted a least squares fit. This led me to the realm of curve and surface fitting, and such dense things as "The NURBS Book". Since I like to have the tools I'm using in J, rather than call a routine in some other language (R?), I was faced with a lot of learning & translation. I finally found a source for the limited problem I wanted to solve, and now have a working routine.

(See my J page: http://www.astro.umd.edu/~jph/ls_spline.ijs)

This code is just translated from pseudocode and is loopy and not J-like -- that's still to do.

So I was just wondering if anyone in the wider J community has produced code or experimented with such things as B-splines, NURBS, etc. I realize this is a huge area because of computer graphics applications, but it would be good to have some of the basic stuff implemented in J. (I did find the script for the Levenberg-Marquardt Algorithm on the wiki -- I've used that in IDL and it's great to find a J version.) Excuse the long post, but I didn't want to reinvent something that was already out there.

Patrick
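
As an aside (not from the email), the basic least-squares step is already a primitive in J: dyadic %. applied to a vector and a tall matrix returns the least-squares solution of the overdetermined system, so, for example, a least-squares polynomial fit is one line. A minimal sketch with made-up data, fitting a quadratic:

   xs =: 0 1 2 3 4 5                 NB. sample points
   ys =: 1 3 7 13 21 31              NB. data; here exactly 1 + x + x^2
   ys %. xs ^/ i. 3                  NB. least-squares coefficients of 1, x, x^2
1 1 1

With noisy data the same expression returns the least-squares coefficients; spline and NURBS fitting add basis-function bookkeeping on top of this step.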

Power Conjunction Used for Binary Search

from		Steven Taylor taylste@gmail.com via srs.acm.org 
to		Programming forum <programming@jsoftware.com>
date		Mon, Aug 29, 2011 at 5:03 AM
subject	[Jprogramming] binary search

I was looking for an example of power being used for a binary search. Is there an example of this in a wiki or a list post?

So far I've ended up with the below, which works well, but originally I was aiming at doing a binary search using symbols. Probably there is a crossover point where one outperforms the other. I don't need this to be super fast right now, but probably it's a good thing to know.

t=.s:'`zero`one`two`three`four`five'
{. I. ; t =/ s:'`three'   NB. take first in case of duplicates. Also, s: returns a one-item list, hence the need for /

thanks,

-Steven --

from	Marshall Lochbaum mwlochbaum@gmail.com via srs.acm.org 
date	Mon, Aug 29, 2011 at 12:48 PM

Here is a quick example with numbers:

  list =. /:~ ?.@$~ 100
  item =. 19
  {: ((,~{.)`(,{:)@.(item > list{~[)~ <.@-:@+/)^:_ ]0,#list
23
  list I. item
23
  I =: 4 :'{: ((,~{.)`(,{:)@.(y > x{~[)~ <.@-:@+/)^:_ ]0,#x'
  list I item
23
  (list I. i.100) -: (list I"_ 0 i.100)
1

However, symbols don't compare in the same way, so a binary search isn't applicable here.

  </~ t
0 0 0 0 0 0
1 0 1 1 0 0
1 0 0 0 0 0
1 0 1 0 0 0
1 1 1 1 0 0
1 1 1 1 1 0

Depending on what you are doing, you could re-engineer the items that you use or just use regular indexing with i. .

- - Marshall
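
For completeness (not part of the thread), the regular-indexing route Marshall mentions is just i. on the symbol list:

   t =. s:'`zero`one`two`three`four`five'
   t i. s:'`three'       NB. index of the matching symbol
3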

Wigner-Ville Time-Frequency Analysis

[From comp.lang.apl]

From: Graham   	Sep 4, 5:26 am

As a follow up to my third party dll question I finally gave up on that dll as it required the specification of a callback function and a buffer to receive the callback data via one of its functions of the form AssignResponseFunction(FUNCTION pointer, BUFFER pointer). I had no problem with the function using []wcall W_CreateFiler but could not find any way to pass a buffer pointer. All attempts resulted in the interpreter crashing immediately the function ran. Whilst I would still be interested if anyone has any ideas I have found that I can communicate directly with the USB device driver for which the original dll was a higher level wrapper.

Now that I have that working, it's time to move on to the data analysis part of my project. Has anyone implemented the smoothed pseudo-Wigner-Ville distribution time-frequency analysis method in APL and would be prepared to share the details?

Graham.

Wigner distribution function

[From Wikipedia] The Wigner distribution function (WDF) was first proposed to account for quantum corrections to classical statistical mechanics in 1932 by Eugene Wigner, cf. Wigner quasi-probability distribution. Given the shared algebraic structure between position-momentum and time-frequency pairs, it may also usefully serve as a transform in time-frequency analysis. Compared to a short-time Fourier transform, such as the Gabor transform, the Wigner distribution function can furnish higher clarity in some cases.

Mathematical definition

There are several different definitions for the Wigner distribution function. The definition given here is specific to time-frequency analysis. The Wigner distribution function Wx(t,f) is

   Wx(t,f) = ∫ x(t + τ/2) x*(t − τ/2) e^(−j2πfτ) dτ

where j is the imaginary unit and x* denotes the complex conjugate. The WDF is essentially the Fourier transform of the input signal’s autocorrelation function — the Fourier spectrum of the product between the signal and its delayed, time-reversed copy, as a function of the delay.

A Matlab Implementation of the Wigner-Ville Distribution
function [wv, ff, tt] = wvdc(x, res, win, sps);
% wvdc		creates a Wigner-Ville spectrogram
%
% (wv,ff,tt)=wvdc(x,res,win,sps)
%
% x=		real input time series
% res=		resolution, number of samples between windows
%		(for full resolution:  1)
% win=		window, becomes length of frequency axis
%		(a good default:  length(x)/2)
% sps=		samples per second of signal
%
% wv=		the W-V spectrum, each row represents a
%		frequency, each column a time instant
% ff=		frequency vector (optional)
% tt=		time vector (optional)
%
% 	Display using: 
%		imagesc(tt,ff,log10(abs(wv)));axis xy
% 	or:						
%		surf(tt,ff,log10(abs(wv)));shading interp
%
%	...of course modifying the abs or log10 as desired.
%						-Case Bradford, February 2005
%		Adapted from Rene Laterveer, 1999, wvd.m package available from:
%http://www.mathworks.com/matlabcentral/link_exchange/MATLAB/Signal_processing/

z=hilbert(x);

% make even number of points, at given resolution
npts = floor(floor(length(z)/res)/2)*2;

% make sure that we entered in an integer for the window
win=floor(win);

% round window length down to nearest odd integer
oddwin = (floor((win-1)/2)*2)+1;

% half point (for indexing reasons we need it later, we're
% filling two columns per loop, so we only index through half)
halfwin  = (oddwin+1)/2-1;

% create tt and ff
tt=[0:npts-1]*res/sps;
ff=[0:(win-1)]*(sps/2)/(win-1);

% pad with zeros
z = [zeros(1,oddwin-1), z, zeros(1,oddwin-1)];

% initialize (important when creating huge arrays)
wv = zeros(win,npts);

R = zeros(1, win);
idx = 1:halfwin;

for n=0:npts/2-1

  t = 2*n*res+oddwin;
  R(1) = z(t)*conj(z(t)) + i*z(t+res)*conj(z(t+res));
  v1 = z(t+idx).*conj(z(t-idx));
  v2 = z(t+res+idx).*conj(z(t+res-idx));
  R(idx+1) = v1+i*v2;
  R(win-idx+1) = conj(v1)+i*conj(v2);

  RF = fft(R, win);

  wv(:,2*n+1) = real(RF);
  wv(:,2*n+2) = imag(RF);

end

return;

Meeting Materials

File:Towards More Natural Functional Programming Languages p1-myers.pdf

File:Excerpts from Cognitive Dimensions of Notation.pdf

File:Elaborations on the Importance of Notation.pdf

File:JFunctionalMeetupIntro.pdf

File:NonNotationalSprawlEGinExcel.pdf

File:Algorithm Implementations - Present and Future.pdf

File:Getting J into Sugar.pdf

File:Odds and Ends for functional J talk.pdf