Wiki/Report of Meeting 2023-08-10


Present: Art Anger, Ed Gottsman, and Bob Therriault

Full transcripts of this meeting are now available on its wiki page: https://code.jsoftware.com/wiki/Wiki/Report_of_Meeting_2023-08-10

1) Ed presented the issues he has been having incorporating Live Search into AWS: successive uses of curl complete sequentially, which slows down the refinement of the search as the user adds characters.

2) Bob and Ed further discussed whether curl might be used to keep a version of the wiki, stored locally on the user's machine, up to date from AWS, with Live Search referencing this local repository.

3) Bob talked about the wiki being an educational tool. It requires further categorization before it is sufficiently advanced to allow more effective participation. Ed posited that search is an aspect of categorization, in the sense that search automatically categorizes content to present its results. Ed feels most of his own work is presentation.

4) Ed felt that there is a predisposition in the J community towards math rather than towards J as a general-purpose language. The direction of the wiki as an educational tool seems to reinforce math uses within the J community. Bob mentioned combinators and the fact that the language can use them effectively beyond math. The standard library is an example of J as a general-purpose programming language in practice, and that way of using J could be a better bridge to general programmers. Ed feels this is a missed opportunity, as he finds J a really good general-purpose programming environment.

For access to previous meeting reports, see https://code.jsoftware.com/wiki/Wiki_Development

If you would like to participate in the development of the J wiki, please contact us on the general forum and we will get you an invitation to the next J wiki meeting, held on Thursdays at 23:00 (UTC). The next meeting is August 17th, 2023.

Transcript

>> Okay.

>> So I've been working with AWS, and AWS has a lot of really good facilities for doing data management and search especially.

And they are free for your first year as long as you don't exceed certain limits, which we would not with what we're doing.

There are a lot of moving parts as you try to put together, for example, a properly architected search service.

You can do raw search with a user ID and a password that you set up, but you can't expose that service publicly and expect it to be...

Let me start over when Art shows up.

Hey, Art.

Hello, Art.

Good evening.

We're just talking about the challenges of Live Search on AWS and figuring out how to get that working within the system.

My goal is to get something like JSaurus working, where as you type an individual character, @, colon, plus, over, whatever, you instantly get back a set of results consisting of snippets, or keywords in context, on the left, and the names of the documents on the right.

And I want it to work on an integrated data set of both the wiki and the forum posts.

Actually, I'm close.

I've got a search facility that I've exposed publicly that will do that just for the forum.

So 120,000 documents.

And it's pretty fast.

It's not as fast as the thing I prototyped that uses a local database.

You can't deliver a service publicly the way I've got it set up.

It would be too easily subjected to a denial of service attack.

The permissions are not right.

You've really got to set up the service, what AWS calls a Lambda function, which is basically just an event handler, and an API endpoint, so that you can make properly protected HTTPS requests against it.

So I'm wrestling with that and the real killer is permissions.

You have permissions all over the place and they all have to line up for all three of those elements.

I think, between sort of perseverance and Dr. Google, I can probably make it work.

But already, just with a raw search service, that is, without the event handler and without the properly configured HTTP endpoint, it's not as fast as what I've got running now.

And it's just going to get slower as I put these additional hops into place.


The other problem is something I discovered when I was working on the crawler, looking at ways to speed up crawling the forums and crawling the wiki.

This was before Chris very kindly gave me direct access to the file system.

I discovered that if you're doing curls off of your client, there is no way to make them operate in parallel.

So if you've got an outstanding curl request and you submit another curl request, it will line up behind the first one.

That's true even on Linux, if you spawn it from J, so that's 2!:1, I think, the foreign that'll spawn a shell command.

Even if you do that, they line up.
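(As a minimal sketch of the pattern being described, with a placeholder URL; 2!:1 is the spawn foreign mentioned above, and the observation is that the second call still queues behind the first:)

   NB. sketch only; example.com stands in for the real search endpoint
   2!:1 'curl -s "https://example.com/search?q=foo" > r1.json'
   2!:1 'curl -s "https://example.com/search?q=foob" > r2.json'   NB. waits for the first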

So, with that in mind, you would not be able to respond to individual typed characters instantly.

So a person types a character, maybe you wait a hundred milliseconds and submit a curl request with that query that he just typed.

But before it comes back, he types another character.

So what do you do?

You submit another curl request.

But it's got to line up behind the first one.

The first one's got to complete before the second one even starts.

So it slows down.

And where's the gatekeeper for that?

Is it at your machine?

Or is it...?

Yes.

Okay.

Actually, that is a fair question.

Because what I'm wondering is.

.

.

That never occurred to me, Bob.

Could you use a core?

No, no, no, no.

It's a process thing.

It's at the process level.

So when you, well, when you spawn in Linux, as I understand it, it actually forks off another process.

You get another address space to play with.

And it's still lining up.

It's still serializing.

But what you're suggesting, and I think you may be onto something, is that it's not happening on the client side.

It's at the server level that my requests are being serialized.

And that actually would surprise me now that I think about it.

I'll have to look into that.

Because what I was actually thinking was the most recent threads option, right?

Right.

Threads in J.

Threads would actually be a step back from spawning a curl.

I got you.

That would stay within the local address space.

OK.

At least I think that's right.

Yeah, no, the most fully separate thing you can do is 2!:1, and anything short of that, you're operating, I think, in your own address space, in your own process's address space.
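(For reference, a sketch of what the threads option looks like in J 9.4, with placeholder URLs; as noted, the tasks stay inside the one process:)

   NB. sketch only: worker threads and tasks in J 9.4
   {{ 0 T. 0 }}"0 i. 2           NB. start two worker threads in threadpool 0
   r1 =: 2!:0 t. '' 'curl -s "https://example.com/q?a"'    NB. runs as a task
   r2 =: 2!:0 t. '' 'curl -s "https://example.com/q?ab"'   NB. queued alongside
   NB. referencing r1 or r2 blocks until that task's result is available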

- And it's the process level where the curls are serialized?

- I'm not sure.

Maybe it's the operating system saying, I'm not gonna submit more than one of these, or maybe it's the curl command itself.

But in fact, even if I could get it to work under Linux, it's not going to work under Windows.

You can't spawn under Windows.

Right.

Yeah.

You can do it on Macs, but you can't...

Yeah.

Yeah.

So I'm thinking that maybe I'm screwed.

I'm not sure this can work.

And so I'm hesitating, again.

I don't know whether I want to go back to the original approach, which was to have a local database that's kept up to date somehow.

I know not how.

The full forum-plus-wiki database compresses down to about 256 megabytes, which, on a 100 megabit per second line, is about 20 seconds to download.

I think I got that right.
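(That arithmetic checks out; in J:)

   (256 * 8) % 100   NB. 256 MB is 2048 megabits; seconds at 100 Mbit/s
20.48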

Unless I could come up with some sort of incrementalist approach, which I started to think about.

Well, what about... is it 256 megabytes zipped?

Yeah, OK.

Yeah, it's quite a bit larger on disk.

OK, yeah, yeah.

And-- So I'm at a loss at this point, basically.

That's the end of my presentation.

[LAUGHTER] So time for some brainstorming.

Chris mentioned that he could provide almost hourly increments, depending on how-- Yeah, increments are not really a problem.

Yeah.

Because it's really only a few kilobytes a day.

Yeah.

In terms of forum posts and changed wiki pages, the volume is very modest.

There's no question about it.

And that may be the answer is just to get people to take the hit of downloading the database once.

- Yeah.

- And then have a second, a sequel.

If it's kilobytes, I'm just thinking, yeah, it's, you know, probably less than a second, even on a heavy day.

Oh yeah.

Oh, you could almost do it without even telling them you were doing it if that were ethical and it wouldn't slow them down.

Yeah.

And maybe that is the right answer, because if it's a local machine, you really can provide a wonderful user experience as a result.

And the fact you can do it on a local machine makes me think that it might be server driven.

Say that again, I'm sorry.

You're saying you could do it on a local machine.

Does that mean that it might indicate that it could be server driven?

It's on the server end of things.

Well, that's what I'm trying to do with AWS is make it work on the server side.

It's just not as fast.

There's no getting away from that.

But it also gives you that stacked curl issue, right?

And that's the reason, or part of the reason, you can't make it as fast.

But when you go local, you don't have the stacked curls.

You know, curl goes away at that point.

You're just doing database calls.

Yeah, you're just doing the database.

Yeah, you're just doing your... yeah.

Okay.

OK, well, and like at one point, we were talking about the fact that Live Search isn't tied to JQT the way the app is.

That's correct.

And having it local, that doesn't change anything.

You can still do it.

You could still feed it to HTML or whatever you wanted, right?

Peel back a layer of detail on that question.

I'm not quite understanding it.

Well, I'm just saying you wouldn't need JQT in order to do live search; if you're doing it locally, it's still not required.

You don't need JQT, but you do need this gigabyte database sitting on your hard drive.

It's not like it's a lightweight service you could provide.

It's heavyweight.

You've got to get that data down onto your machine.

You've got to make that investment.

- And then, so the question is, if you did an update, how well can that update be incorporated into your... - Right, and I think the answer is you're in pretty good shape in that regard.

I mean, one...

It's a little... I don't think there's a satisfactory way to approach it.

If you take the hit of loading up 35 years' worth of J documentation over the wire once and get it down there, what you could do is say, all right, thereafter there will be a steadily growing incremental file of deltas.

- Yeah.

- So those are just, excuse me, pages that have changed on the Wiki and new forum posts.

- Yeah.

And that file, which is your only other file, is just going to keep growing over time.

And at the rate of a few kilobytes a day, and if it really is only a few kilobytes a day, it might take it a year to grow to a couple of megabytes.

Downloading a couple of megabyte file is not a big deal.

So what you might do is say, look, every time you launch the application, you'll have the option of downloading this relatively small incremental file.

At some point, maybe after a couple of years, we might say, all right, we're gonna fold the incremental file into the main database and start over with the incremental file.

And any new user who comes along has to download the main database as always.

And thereafter will, like all the other users, be downloading this now much smaller, but still steadily growing, incremental file.

It's not entirely satisfactory, but it could work.
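(A sketch of the fold step, under the assumption that documents are rows of id;timestamp;text boxes; this is illustrative, not Ed's actual code:)

   NB. sketch only: fold a delta table into the base table; newer rows win
   fold =: dyad define
     keep =. (0 {"1 x) -.@e. 0 {"1 y   NB. base rows not superseded by a delta
     (keep # x) , y                    NB. surviving base rows plus delta rows
   )
   NB. usage: base =: base fold deltas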

What if, when you download it originally, you put a date stamp on everything you download?

Sure.

So in your original clump, everything's going to have the same date stamp.

- Yeah.

- And then all you're doing is you got your incremental coming in, and it's going to have a different date stamp.

- Yeah.

- And the pages that change, could they just be incorporated straight into that database?

- Right.

So what that says is I have to solve, and that is not impossible, the AWS permissions problems, and get my act together in terms of supplying a properly protected, architected AWS service endpoint.

And I think I can do that.

And at that point, you're not just delivering a file, you're providing a service that says, give me all the deltas since a particular date.

So the client says, last time I updated was July 1st, 2023, just give me everything since then.

And that's nifty because you're maintaining a single database, a single file.

You're not trying to maintain a secondary incremental file.

But now you're in the service business.

You're no longer just doing file delivery.

You're supplying an endpoint with a database on the other end of it that'll respond to this most-recent-set-of-changes query.
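(What the client side of that might look like; the endpoint and parameter names here are invented for illustration:)

   NB. sketch only: ask the service for everything since a given date
   since  =: '2023-07-01'
   deltas =: 2!:0 'curl -s "https://example.execute-api.us-east-1.amazonaws.com/deltas?since=', since, '"'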

- Okay, so on the AWS side, just trying to think of how, well, yeah, what you're gonna be doing is, the first time you load up your AWS, this is on the server side, you're gonna date stamp all those things, right?

So they're all gonna have the date stamp.

And then when the person downloads, they're gonna have a new date stamp.

It won't be the same as what the AWS has.

They're going to have the date stamp of when they download it.

Because as AWS gets updated, it's going to have new date stamps coming in, right?

Well, we can maintain our own date stamps for our own purposes.

We can keep a file of documents where each document might have a couple of date stamps on it.

One is the date it was changed on disk, which might be some time ago.

One is the date it was loaded into the database, which is whenever we do that.

- Yeah.

- And so what you're asking from the client is give me the documents that were loaded into the database since the last time I inquired.

- Yeah.

- And that works.

But at some point I worked all that out.

It makes sense and you're absolutely right.

It can definitely be done.

And that may...

I had sort of hoped to find a solution that didn't involve coming to grips with setting up a properly built AWS endpoint.

I see.

Okay.

Yeah.

But maybe there's no way to do that.

Maybe I have to.

That may be the right answer.

So you never actually download a file in that case.

If you're a new user, you've never done it before.

The client wakes up and says to the service, give me all the documents that are more recent than zero.

And that'll just give you the whole-- - Everything to that point.

- Incredibly expensive, yeah, right.

So anyway, that's my dilemma.

That's what I'm wrestling with now.

I'm hoping for inspiration.

There may be no good solution that is easy.

The good solution may in fact be kind of involved.

Yeah.

Just trying to think of different ways of looking at the problem.

Yeah.

'Cause, and you know, as I'm saying that, I realize there's a whole army of really, really smart people at Google who've already thought about this.

- Oh, it's not a terribly hard problem.

It's just me trying to avoid a certain amount of work that I'm finding very challenging.

Yeah, if I could just do it as file delivery, I would be a very happy person.

I just can't quite work out how to do that.

- I was reading a post today from Rob Pike, who was one of the guys who developed the Go language and is now developing Ivy, a bit of a vanity project, I think, which is an APL.

And we're gonna actually interview him on Tuesday.

It's kind of cool.

- Cool.

- Yeah, no, it's really neat.

But he recently posted about files.

What he's really upset about right now is that, at one point, if you just knew the name of your data file, you could go in with any program and use it.

The program didn't care; the file didn't care what program was using it.

It had its information organized.

The program could go in and pull it.

Take the simplest case: it's plain text.

You'd pull it in, you could do whatever you want with it, send it back.

But now what's happening, I guess, I think he was talking about Photoshop.

The files are now proprietary.

So if you try to put a Photoshop file into a text editor, it won't even take it.

It's not that it gives you garbage, it won't even take it.

It's just, no, this is how I'm set up.

You don't know how to read me. Done.

That's it.

And I'm wondering whether there's an aspect of that here: simpler is better, and there are a number of places you can use it.

You can use it for HTML display.

Would there be a reason to actually include a date in the actual HTML?

I don't know.

You mean on the wiki pages?

Within the content, yeah.

I don't know whether MediaWiki does that.

MediaWiki's got a last updated.

Yeah, that's true.

And of course, you can always consult the history and see all the updates back to the creation.

Yep.

And MediaWiki's got a recent changes page, which will give you a few days' changes.

Which is possibly how I would go about discovering wiki deltas.

Let's just crawl that page.
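(MediaWiki also exposes those changes through its API; the api.php path on code.jsoftware.com is an assumption here, not verified:)

   NB. sketch only: pull recent changes as JSON via the MediaWiki API
   rc =: 2!:0 'curl -s "https://code.jsoftware.com/mediawiki/api.php?action=query&list=recentchanges&rclimit=500&format=json"'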

>> I wonder how quickly you can crawl that page?

>> Very quickly.

It's not much data.

>> Yeah.

>> It's not much data.

>> Is that going to tell you, rather than having to go back to all the files?

>> Yeah.

I very much appreciate Chris's willingness to give me direct access to the file system.

Since I've already gone to the trouble of building the crawlers, and since I've already got my own copies of 35 years' worth of data, I'm not sure I'm going to have an immediate use for the file system that he's giving me access to.

So do you... Just crawling for...

Go ahead.

I was gonna say, so do you have a crawler that will do recent changes?

No, but it's trivial.

I mean, the problem with a crawler isn't...

The problem with a crawler sort of isn't the work you put into it, it's the time it takes to run it.

To run it, okay.

So building that 10 megabyte database that I do every day, that's about 20 minutes in the morning.

Okay.

But that's a full, it's not a full crawl of the whole Wiki, that's the wrong way to look at it, but in some sense it is.

What we'd be talking about for Deltas is a much more modest crawl.

Yeah.

Is it possible that when a person, say, went on to live search, even before they request anything, they would trigger a crawler that would go in? It would know the date they'd last accessed.

Yeah, they would know that.

Yep.

And then it would do a crawl of the site, and it would have all the changes.

Just waiting for them.

Just waiting for them.

Type the first character.

Yeah.

Yeah.

You wouldn't, ethically, you wouldn't do that, I think.

You wouldn't-- Well, you'd let them know that that process is going to happen.

Well, you'd give them the option.

Yes.

Do you want to update?

Do you want to update or not?

Yeah.

Which is how it works now.

I mean, a little button comes up and it says, I got new updates for you.

Just click here if you want them.

Yeah.

Yeah.

And that also is not entirely satisfactory.

Yeah, mutter grumble, mutter grumble.

What you might do is fold those live search updates into the standard update that we do already, rather than a second update button that they've got to decide whether to click or not.

Sure.

Which again is only downloading probably a few kilobytes, or tens of kilobytes, or hundreds maybe, depending on when they last used the application.

With that one button press, you might upload two, or excuse me, update two data sets: the 10 megabyte database, which you just download in its entirety, and the deltas to the live search index.

I hadn't thought about that.

That's good.

I like that.

Well, what I'm thinking is you've got your existing blob of megabytes, and you've got a date on it that was the last time I accessed the system.

And the system knows all the changes since that time.

Oh yeah.

But you're going to have to do a crawl to do that, right?

No.

The crawl happens.

.

.

I'm sorry.

Let me walk that back.

You could do it as a crawl.

That is a true statement.

I am very reluctant to be doing crawls from the client.

Right.

That was my question.

Yeah.

'cause crawls are dicey.

Things happen, things fail to get parsed.

The line goes down, whatever.

They are delicate operations.

The ones I've been doing have been pretty robust, as it turns out, but they can be a little slow.

It is much faster and safer to centralize the crawling with me, and produce a digested dataset that's much smaller and can be downloaded in a single operation.

And that's what the client base at large would wind up using.

- Okay.

But the only thing the client needs is the most recent changes back to their time, right?

- Yeah, that's right.

That's why it should be, in most cases, a fairly small amount of data.

- Yeah, I'm just wondering whether you can do the incremental part on the server side.

So there's a series of crawls and it only pulls the ones that it needs.

- That's... then you get into the file-juggling business.

I mean, one outcome of that line of thinking would be that you'd have an individual file for each day's crawl, each file being a couple of kilobytes.

And your problem as a client, and it's just the code, the user doesn't need to know any of this, would be to download the last N files that will bring you up to date.
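(The client side of that variant might look like this; the file layout and names are invented:)

   NB. sketch only: fetch the per-day delta files since the last sync
   fetch =: {{ 2!:0 'curl -s -O "https://example.com/deltas/', y, '.json"' }}
   fetch each '2023-08-08';'2023-08-09';'2023-08-10'   NB. the last N days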

Yeah, that's feasible.

I hate that.

Winding up with several hundred teeny tiny files, each of which needs to be individually downloaded.

It's insane.

- Several hundred is only if it's been several hundred days since you were last here.

- No, but I'm sorry, yes.

But it also means several hundred teeny tiny files sitting on the server.

- Yes, it does mean that.

- Technically it's not a problem, but it seems crufty.

- Okay, so the alternative, I would go to that.

And this becomes, what will the user tolerate?

You do a monthly glob, and you have to log in and grab the glob every month.

And then there are gonna be 30 other files, 31 in some months, 28 in others, and those files will be the updates, right?

- Yeah.

- But then the downside for the user-- - It doesn't work if I'm gone for six months.

- Oh, you just go and get the glob.

- Oh.

Well, what if I... well, okay, so how is the glob...?

Oh, I see, you get the whole thing again.

Every month it's going to take me 20 seconds to download.

Oh, I see.

And the rest of the month I'm gold.

I don't think that's too much to ask.

Well, it's a little embarrassing to ask you to do that.

Are you sure you shouldn't be doing stand-up comedy?

I have my issues.

Because if I could get the whole AWS service thing working properly, I wouldn't need to incur that cost.

And the answer may be just to have me marinating on this for another week, which would be fine.

Sure.

Yeah, yeah.

Yeah, we're not on a tight timeline or anything.

No, it's actually wonderful.

Yeah, yeah.

You get time to develop stuff.

It's really good.

Yeah.

Had I known, I would have done it years ago.

Had I known I'd be independently wealthy, I would have switched over to this approach a long time ago.

All right.

That's all I got.

I'm just in an awkward place at the moment, and the meeting happens to have been scheduled in the middle of my awkward place.

Okay.

Well, I'm happy to use the meeting that way.

So I don't think that's… I appreciate it.

Thank you.

It moves stuff forward.

Yeah, I think it does.

Do you have any thoughts, Arthur?

Art?

Well, it sounds like there are several related issues there, but I think they're all evident.

He's working on them.

Yeah, yeah, it's a process.

Do you have anything else you wanted to cover, Bob?

Well, the only other thing was the email back and forth and the people we identified that we might do the...

Right.

I recognize most of those names, and I don't want to do that yet.

I really think...

I have an N of one in terms of using this thing, which is a shame.

Yeah, yeah.

But what I found, before I had live search, is that I used regular search, because it was kind of nice to have an integrated look at the forums and the wiki and not have to go to two different search engines.

That's kind of nice.

I did not use the persistent results that are kept by the application; that didn't happen.

So I'm not enamored of that anymore, and I'm thinking of just pitching it.

I'm thinking that live search strikes me as really compelling.

So I'm thinking that the default use case that I would like to support is, you know, command shift H, it launches, and it launches with live search selected, and the focus is on the search input text box.

So control shift H and just start typing and stuff happens.

And if you're also interested in browsing the recent forum posts or interested in browsing the wiki, obviously that's great.

But I think live search is the killer app for this thing.

If I were to use an analogy, I'd say Live Search is your card index, your catalog index, for a library.

And if you wanted to go and browse through all the books, absolutely you can, but chances are… There you go.

Right, right, right, right, right, right.

That's exactly right.

And browsing is great and often you do it.

But more often you go to Google or you go to, as you say, the card index.

So I'd really like that to work and be nicely implemented before expanding the audience for the application, for the add-on.

So again, I very much appreciate you putting those names together.

I recognize most of them and I think you're right.

Just judging by the traffic I've seen go by, I think they've been great.

Well, to me, they had a diversity of backgrounds.

So you cover a lot of things with those guys.

Yeah.

There's different points of view there, which is good.

Yeah.

But anyway, that was really the only other thing I had: expanding the beta.

And I'm happy to let that ride for a bit.

And I don't really have anything else.

Personally, I think I'm going to continue on in the same vein.

I came around to a realization today: really, what I wanna do with the wiki is make it an educational tool.

And that's why working on the primer is very useful.

But it also means that, as I get into it, it's gonna require more categorization; I'm gonna have to make another, really much better, pass on categorization to be able to... that's my cat, by the way.

You see how… Yes, I gathered.

You are in Canada, so I thought maybe it's a lynx, but it's too small.

It's too small for a lynx, yeah.

But I wanted to, when I was thinking about information for developers, be able to really dive into that and actually provide a path that you can follow through, and basically, with curation, get a very productive path out of all the information that's there.

So that's my end goal.

But in the process, the next step is to go in and at least categorize well enough so that we can make it go live, and then have people use it.

And then as people go through, they'll find all the parts that are broken.

So we'll find all the work we need to do.

And because it's a wiki, hopefully, I'll be able to enlist some of them to go in and do that work.

And then my end goal of trying to make this, in certain ways, educational or directed information just becomes something anybody else could do as well.

And it's almost like an enhancement to the wiki.

But we have to get to the level where the wiki is categorized well enough that we can bring the people in.

And then they can complain about it.

They can change it.

They can work with it.

And I think that is the next step.

But I think there should be another level of categorization that I go through.

And I think it will also make your job a lot easier.

When we were talking to Stephen, I thought you did a really good job of explaining the two aspects of it.

One is categorization, curation, and the other is search.

They have to work together.

I think your search is getting to the point where it's working really well.

Categorization has been stagnant for the last six months.

It's got to get better.

So I think that's where my focus is going to be.

You know, I don't think of it as search.

I think of search as being an aspect of categorization or curation, really.

Because the curation that you choose to do, most of it, I would argue, though it's sort of hard to decide how to measure it, is manual.

So it's what you do, Bob.

But search is also a species of curation.

When you build an index, you're doing a curation.

It's automatic, but you are creating yet another categorization of the content.

It's a big inverted list of index terms, but that's what you're doing.

You're creating a curation.

I think of what I do as presentation.

And that's more general or different from search.

And I'm with you.

I think it's important.

I've been struck by something, and I've sort of always been vaguely aware of it, but this is on the topic of teaching and learning, which I think you're dead on: that should be the wiki's primary goal in life, because of the people who already know J. Chris actually said something interesting about the add-on.

He said, "Well, it wouldn't really appeal to me because I already know all this stuff."

The people who already know all this stuff occasionally perhaps have a search they want to perform, but they're not trying to be educated in the same sense that a beginner is.

They're skilled with the language; they understand it.

They occasionally have a question, but they're not in learn mode anymore.

They're in build mode most of the time.

I've been struck by the culture that's grown up around J, which, as nearly as I can tell, just based on the forum traffic and on the contents of the wiki, is very much a culture of mathematical programming, which makes a lot of sense.

I don't know what it is, a third, a half of the primitives in J are math primitives. Derivatives: what language has a primitive for the derivative, right?

Well, it doesn't anymore, but yes, it did.

And that is an indication.

Yeah.

Oh, they took it out.

Yeah, they did.

Yeah.

They made it an add-on.

Oh, it's not a primitive anymore.

It's not a primitive.

Yeah.

Because Henry just got tired of having everybody come at him from different directions saying, "No, it should be doing this. It could be doing this." And he obviously said, "Well, I'll make it an add-on. You go and change it. Make it work the way you want."

Yeah.

Oh, good for him.

Well done.

Yeah.

But J, if you took out all the math primitives, which Henry has now done for one of them, okay, so if you took out the rest of them, it would be a perfectly wonderful general purpose language.

I mean, it's the best one I've ever used; that's why I use it.

I'm not sure how many people are using it that way.

Yeah.

In other words, I do things in J that I used to do in TypeScript, for example, or in Python.

I'm much happier doing it in J.

But what I'm doing is not mathematical programming.

It's just general purpose programming.
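(For flavor, a couple of lines of everyday general-purpose J; the file name is made up:)

   NB. sketch only: read a log file, keep the lines mentioning ERROR
   lines =: <;._2 fread 'app.log'            NB. box each LF-terminated line
   errs  =: (#~ +./@('ERROR'&E.)&>) lines    NB. filter boxed lines by match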

And pedagogically, I don't know if that has any impact on anything, on approaches to teaching.

Maybe we don't care about general purpose programmers.

That's entirely possible.

Maybe the J community is fixated on mathematical programming, and those are the people they want to recruit.

That's the culture they want to maintain.

But I think that is unnecessary.

If that's true, I don't know that it's true.

I'm asking.

I think it's unnecessarily narrow.

My sense of it, and I come around to the same point of view from a different path, is partly from doing the podcast with Connor, who's very much into combinators.

Yeah.

And in fact, if you take the mathematical aspects out, pretty much what you're left with is a combinator language.

That's a facet of it, but it's a general purpose programming language.

It says so in the brochure, Bob.

It says J is a general purpose programming language, but it's not treated that way.

Connor doesn't treat it that way.

And the J community doesn't treat it that way, as far as I can tell.

No, I don't think they do, but it exists there anyway.

I think it's an undiscovered country.

Yeah, that's my point, I guess.

Yeah.

I was going to say, I come about to it a different way, but I agree with you.

I think it's an undiscovered country.

And, you know, the closest I think people have come to discovering it, and they haven't taken this approach, may be the add-ons, maybe some of the scripts that you can get into.

They've created, not primitives that can do this kind of stuff, but verbs that can do the kind of things you'd want to do.

And the verbs are based on combinators, but they haven't put them together that way.

All of that is true, but I think combinators are an unnecessary facet.

Unnecessary is an overstatement.

They're what Connor is fixated on, and that's great.

But as a general-purpose programmer, I really don't care.

I mean, I use them.

I don't use them the way Connor does.

I don't use them the way tacit programmers do.

I At things, I Atop things.

I occasionally will do a fork, a trident, but I'm not a tacit programmer by any stretch of the imagination.

But I can make J do stuff.

Even so, as a general purpose programmer solving general purpose programmer problems, I don't see any attempt to appeal to that audience.

- Yeah, and take, for instance, the standard library, you know, the file that's the standard library.

It seems to me what you're saying is, if you publicized that, if you showed how things work in it, that would be closer to general programming, because it is in fact general programming, right?

- I'm not familiar with the standard library.

Oh, the standard library, if you go into, I think it's stdlib.

It's a file.

If you go into your, let me see here.

I'll share a screen and dive into my J.

- Bob, I may flake out at some point soon because my battery's-- - Yeah, no, you're right.

We're up on our hour, yep.

We'll continue this, and I'll send you a link to the standard library.

Look through that and see if that's what you're thinking of when you're talking about general purpose programming.

Because I think a lot of that exists, but isn't shown.

It's under the covers.

Undoubtedly.

There's an immense amount of general purpose programming that gets done in the add-ons that are sort of done for utilitarian purposes.

That's certainly true.

What I'm wondering is, in terms of presenting J to the world, it's definitely not the foot that's put forward first.

Is it put forward at all?

I would say… Is there pedagogical material for the general purpose programmer?

I'm not sure about that.

Okay, just a snippet of what's going through my brain right now.

I don't think it's been done.

I think that's a really good bridge to traditional programmers, C programmers.

J4C (J for C Programmers) kind of builds on that.

Yes, absolutely.

It's the one thing that does; all that information is there, but nobody's tried to aim it that way.

What they've tried to aim at is bringing people over, maybe from APL or other array languages, and this is the key thing.

I think a lot of stuff that they've aimed at has been people who have not used computers before.

Well, they're an easy audience because they don't have any unlearning to do.

They may lack motivation though.

Well, there's that problem.

The object-oriented programming crowd always liked to deal with children and novice people who hadn't programmed because they had no design patterns in the backs of their heads that had to be identified and rooted out so that they could imprint on this new paradigm.

But it really becomes a question of the stuff that Papert was doing with turtle graphics and things like that.

He happened to go into using things like Smalltalk.

He sort of fell into that crowd, but he didn't have to fall into that crowd.

His idea was, "Give me something you can move around and then you can control. And then I just need to pull the curtain back a bit and you'll rip it open and go in and find out how it works."

Yeah.

I've always been struck that he talked a very good game, there's no question about it.

But his ideas did not, in fact, revolutionize education.

It didn't happen.

And I've always wondered about that.

Part of it is he died at the top of his career, or effectively died.

He was put in a coma.

Yeah, but it's not just one person.

There's an ecosystem, there's a community.

I mean, why didn't it live on?

Nobody teaches, like, turtle graphics.

Yeah, I know.

And his idea was, I'm going to put you in a math environment where you can play an experiment, and you will absorb.

And it just didn't happen.

Critical mass.

It has to get just big enough, and it didn't.

That's what I think.

Yeah.

And education takes a huge critical mass.

Yeah.

Education is the most conservative group of… Right.

Most difficult thing to move.

…forward-thinking people you've ever met.

Try and change the curriculum.

It's not going to happen.

Yeah.

We've been going through gyrations in the US recently over phonics.

I don't know if you've been following that.

But over the last, I don't know, 10, 15, 20 years, there've been a couple of revolutions in reading instruction that were utterly misguided: guru-driven, populist-based, with no data to support them.

And basically a third of American...

I'm sorry, I'm not even going to quote any statistics because I'll get them wrong, but there are a lot of American adults out there who went through bad reading programs and are uncomfortable reading.

We did that to them.

And only now is there a movement in state governments; they've started to pass laws saying, "Oh, well, you can use any curriculum you want as long as you've got data to back up its efficacy." And phonics is being slotted in just as rapidly as they can.

So you're right, it is conservative, but it's more than willing to engage in manias.

Why didn't it engage in the Papert mania?

I don't know.

I don't know.

Might have been too expensive at that time.

Maybe.

Maybe.

All right.

I'll let you go.