Wiki/2025-01-09

Report of Meeting 2025-01-09

Present: Ed Gottsman, Raul Miller and Bob Therriault

A full transcript of this meeting is now available below on this wiki page.

1) Bob began by reviewing the work he has done on the Category Table https://code.jsoftware.com/wiki/Wiki/Category_Tree_Template, including links that go directly to content pages such as J for C Programmers, or to category pages for standard areas such as Plot or Publish. The difference between category pages and content pages is that only category pages have the Category Table at the bottom of the page. The Essays section remains to be done; Bob plans to put a link for each essay above the Category Table, with space for a description to be added later. Once the essays are done, his work on the Category Table will be complete.

2) Raul mentioned a post that Henry Rich had made on the dev forum suggesting a button on the wiki that takes readers to the J Playground. Eric has talked about doing the same thing with the JHS cloud. Promoting both is a good idea, since they offer slightly different functionality.

3) Ed wondered whether the work that Jan is doing on category analysis might help with creating descriptions of the essays. Pursuing this further, Ed wondered whether a chatbot with knowledge of the J Wiki would be useful for guiding user searches. For conceptual areas this could work fairly well, but the J language itself remains largely opaque to LLMs. Raul wondered whether a Kaggle competition https://www.kaggle.com/competitions might be a good way to develop LLM understanding of J. Ed pointed out that ChatGPT has crawled the J Wiki but does not yet show any real understanding of the language. Raul wondered if the Playground could give ChatGPT a way to test J code; the JHS cloud version might be an even better test session for use by LLMs. Ed thinks the trick would be jump-starting the process, which is where Raul thinks a Kaggle competition might be a good start. Bob gave the example of the Dendrite essay https://code.jsoftware.com/wiki/Essays/Dendrite, where the meaning is tied up in the code as much as in the description.

4) Ed wondered whether there was any appetite for tracking the number of page visits for various locations across the wiki. He has already dealt with the issue of bot traffic polluting the results, since bots identify themselves and can be filtered out. Raul thought that results over a longer period of time are probably more meaningful, and that since Ed has access to the logs, it is something he could start now, as long as the analysis does not put a load on the wiki. Ed thinks he might follow up on this as his next project.
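
As a minimal sketch of what such a lightweight tracker might look like (not Ed's actual tooling), the script below counts page visits from one week of access logs and drops requests whose User-Agent identifies itself as a bot; the log path, combined log format, and /wiki/ path prefix are assumptions.

```typescript
// Hedged sketch: count wiki page visits from one week of access logs,
// skipping requests whose User-Agent self-identifies as a bot.
// The log path, combined log format, and /wiki/ prefix are assumptions.
import { readFileSync } from "node:fs";

const BOT_UA = /bot|crawler|spider|slurp|archiver/i; // bots generally announce themselves
// combined log format: ... "GET /path HTTP/1.1" status bytes "referer" "user-agent"
const LINE = /"(?:GET|HEAD|POST) (\S+) HTTP[^"]*" \d{3} \S+ "[^"]*" "([^"]*)"/;

const counts = new Map<string, number>();
for (const raw of readFileSync("access.log", "utf8").split("\n")) {
  const m = LINE.exec(raw);
  if (!m) continue;
  const [, path, ua] = m;
  if (BOT_UA.test(ua)) continue;            // filter self-identified bot traffic
  if (!path.startsWith("/wiki/")) continue; // only count wiki page views
  counts.set(path, (counts.get(path) ?? 0) + 1);
}

// Print the ten most-visited pages for the week.
const top = [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 10);
for (const [page, n] of top) console.log(`${n}\t${page}`);
```

Run against each seven-day batch of logs before it expires, the weekly counts could be accumulated in a small database or on a public page that grows over time, which is where the longer-run value would come from.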

5) Bob mentioned the work that Jan Jacobs has been doing on analyzing the categorization of the wiki. Originally Jan was using statistical methods based on word counts, but he has since moved on to semantic methods, and these have shown some promise. At this point the work may not affect the organization of the wiki, but it may help with filling in descriptions and summarizing the content of certain pages. Ed mentioned that it is still early days, as it is a challenging project.
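
Jan's actual pipeline is his own work, but as a rough illustration of what moving from word counts to semantic methods can involve, one common approach is to embed each page's text as a vector and compare pages by cosine similarity; the sketch below assumes OpenAI's public embeddings REST endpoint, and the two text snippets and API-key handling are placeholders.

```typescript
// Illustrative only: embed two snippets of wiki text and compare them by
// cosine similarity. Pages whose vectors are close are candidates for the
// same category or for similar summaries.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  const data = await res.json();
  return data.data[0].embedding;              // the page's embedding vector
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

async function main(): Promise<void> {
  const [essay, nuvoc] = await Promise.all([
    embed("An essay on computing Fibonacci numbers in J ..."), // placeholder text
    embed("NuVoc entry for i. (Integers) ..."),                // placeholder text
  ]);
  console.log("similarity:", cosine(essay, nuvoc));
}

main();
```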

6) Bob mentioned that he would like to see ChatGPT used in an energy-efficient way. Ed mentioned that ChatGPT has a REST API that would allow NuVoc information to be included in prompts, so the wiki could offer a chatbot. The energy-expensive part is training the model in the first place; running it is comparatively cheap. Bob wondered whether adding a few thousand tokens to a ChatGPT prompt is an efficient way to use an LLM. Raul included a link to Kaggle https://www.kaggle.com/thirty-days-of-ml. Bob wondered whether Shakti https://shakti.com might be a way to create models more efficiently, but that would be beyond the group's expertise.
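
As a hedged sketch of the kind of NuVoc-aware chatbot being discussed (not an existing implementation), a question could be sent to ChatGPT's REST chat-completions endpoint with NuVoc text supplied as context, so only the prompt carries the J material and no retraining is involved; the model name and the one-line NuVoc excerpt below are placeholders.

```typescript
// Hedged sketch: answer a J question using NuVoc text passed in as context.
async function askAboutJ(question: string, context: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",                   // placeholder model name
      messages: [
        {
          role: "system",
          content: "Answer questions about the J language using only the reference material supplied.",
        },
        {
          role: "user",
          content: `Reference material:\n${context}\n\nQuestion: ${question}`,
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Each request costs a few thousand prompt tokens; the expensive training of
// the base model has already been done by the provider.
askAboutJ(
  "What does i. 5 return?",
  "i. y  Integers: the first y non-negative integers.", // placeholder NuVoc excerpt
).then(console.log);
```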

7) Bob talked about setting up a test wiki in a few weeks to let Raul make some JavaScript changes that might allow closer interaction between the wiki and the J Playground.
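
A minimal sketch of one such interaction, written here in TypeScript (it would compile down to the plain JavaScript that actually runs on the wiki): a button that copies the reader's current text selection and opens the J Playground in a new tab for pasting. The Playground URL and the button placement are assumptions, and the eventual design is Raul's to work out.

```typescript
// Sketch only: copy the selected J snippet on a wiki page to the clipboard
// and open the J Playground in a new tab, where it can be pasted and run.
// Standard browser APIs only; the Playground URL below is an assumption.
const PLAYGROUND_URL = "https://jsoftware.github.io/j-playground/bin/html2/";

function addPlaygroundButton(): void {
  const button = document.createElement("button");
  button.textContent = "Try it in the J Playground";
  button.addEventListener("click", async () => {
    const snippet = window.getSelection()?.toString() ?? "";
    if (snippet) {
      await navigator.clipboard.writeText(snippet); // selection is now on the clipboard
    }
    window.open(PLAYGROUND_URL, "_blank");
  });
  document.body.prepend(button);                    // real placement would be the page header
}

addPlaygroundButton();
```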

For access to previous meeting reports, see https://code.jsoftware.com/wiki/Wiki_Development. If you would like to participate in the development of the J wiki, please contact us on the J forum and we will get you an invitation to the next J wiki meeting, held on Thursdays at 23:59 (UTC).

Transcript

And I think the first thing I had on my agenda was just to show what I'd done with the category table, which if I share the screen, I've been making some progress with it.

So the category table I'm talking about is this one down here.

And I think the last time we talked, Raul, I might've got into reference somewhere.

I think it was reference and databases, where if I click on one of those, I actually come up with an information up here that's kind of a condensed version of if you wanted to look at all the pages down here, you got access to them.

But this gives you a straight up look into sort of the higher level of it.

So in other words, if you click on the JD, you're gonna go to the JD page and you're into that whole thing with Eric's put all the licensing.

So I'm relying on the author to keep things up to date in that case.

I'm just providing a link in that area.

But since then I have gone through things like plot and publish and other sort of areas.

All the way I haven't touched essays yet, although I have a way to, I think I'm gonna approach them, but I did do phrases, which was interesting 'cause there's the two top ones, they're pretty much just alternate of the same information.

And then whatever somebody did with this one is quite a different view of it.

It's still got the pages, but they've kind of broken it up in a different way.

So I've put those as separate links that way.

And the same way I've gone through books, which in the case of books, I figured if you click on this, you'd wanna go straight to J for C Programmers.

Bang, you're right there.

If you go to J for C Programmers here, I do the same thing.

So you're always gonna go to J for C Programmers.

That makes these titles somewhat redundant, except that this allows you a top level way to get to the books as opposed to having to go to books and then go through a lower way to get to it.

That's sort of why I made that choice.

Did the same thing with archive pages, except that what I did with archive pages, well, I broke it down into the areas, but if you go to the different areas, I just put this note at the top that basically explains what it is.

And then these are the archive pages down here.

I'm not gonna go to the trouble of trying to categorize all the pages we've set our archive pages, but they're available to anybody who wants to click on them.

And that's kind of how I've left that bag of information.

And then I guess the only other area that I got into interfaces.

Interface, there's a really good page with all these interfaces that basically takes you to these different areas that you might be interested in.

So I went to a page in this case, not a category page, but in some cases, say for instance, think Python was one of them.

I've got a category page for Python, but if I go to JSON, I think it is, there was only one page linked.

So I just go to the JSON page.

So there is a difference in that approach.

And the big difference is when you go to the bottom of a page that's not a category page, it won't have the category table yet.

So you've got to back out of it.

Whereas if you go to a page that has a category page, it's got this, then it's got these pages at the bottom and you've got the category table.

So at this point, before we've added the category table to all the pages, the non-category pages have a little bit more trouble with navigation. - I think it's fairly standard UI design that when you have a single choice, you skip making the choice. - Yeah, and what I've done in that case is I've actually just, I haven't forwarded the category page.

I've left the category page in place.

I've just changed the link in the category table.

And that's because if that area expands, I wanna be able to go back to having a category page as an intermediary.

So that was the choice that I made with that.

So those category pages still exist, although you'd have to go looking for them if you wanted to get access to them.

But if the category expands and you've got multiple points of information, and it makes sense to have this intermediate page, intermediary page, then you have the option to go back and do the category page that way.

And right now I've got all the way through to these pages here, technology web system, FAQ and guides.

I'll do them the same way as I've done the others.

That won't take me too long to do it.

Essays, what I think I'm going to do, say go to language concepts.

I will put, I'll probably take a harder look at which essays apply to which concepts.

So some of these may end up moving into different essay groups.

But what I would do is I think I will put a link for each page above the category table with a place to put a description to be filled out, but I'm not going to put in descriptions for every page 'cause that's just too much work for one person to do.

But as people, when people go in, they can provide descriptions of what they, why they think that essay is important.

And that means that those essays will show up as a table of contents at the top.

So if there's a specific, you know, within language concepts, there'll be a table of contents that tells you all these essays that have to do with language concepts.

You can scan through that, click on it.

It'll take you down to the link in the page and that link may have a description of what you're seeing there.

So you kind of got a little bit of curated information helping you along the way as you're looking through these groups of essays.

But I think that's how I'm going to end up doing the essay pages.

And I think that's pretty much as far as I'm going to go as personally curating the wiki.

Beyond as I interact with the Wiki, I'll go in and change things and make changes, but this step of me going in and creating the Wiki and its categories and all that stuff, I think stops when I created those essay pages with the links on them.

Because Eric is once again, starting to make noises of what he would like me to be working on.

And it's moving away from the Wiki is what he'd like me to do.

He's got other projects he wants me to work on. - At least you've done a good enough job already, huh? - Well, I'd like to think he's thinking that way, but there's part of me that thinks it's just, I've been on this a long time and he may be thinking, come on, get it done.

I'm the hitchhiker's guide writer.

It's the deadlines that are never met, Hunter S. Thompson and so on. - Did you see Henry Rich's comment on the Wiki? - Which one? - About the playground? - See J in action. - Oh yeah, no, was that Henry saying that the, that was a reply to the person asking about the iOS? - No, this is just a straight out email that wasn't replied to anything. - Okay. - Maybe he sent it to the dev group.

Maybe you're not there. - Oh, I'm not in the dev group.

So no, I didn't see that. - I will forward it to the Wiki group. (mouse clicking) - So what's the gist of it? - That he thinks there should be a CJ in action button on the Wiki that lands a person in the playground. - Okay, and that's something we were kind of talking about, wasn't it at one point? - Yep. - Yeah, yeah, okay.

And I guess the other thing is whether that, 'cause Eric's talking about the same thing with the JHS and the cloud. - Right. - So whether it'll be the playground or the cloud, I'm not sure, but let those guys work that part of it out.

Playground's working now, which is, I guess, different than the cloud, which is still in beta.

Although I think the cloud will move out of beta fairly soon. - Redundancy is often nice. - That's true. - Going for robustness. - Yeah, yeah.

And there is an advantage to having WebAssembly so it runs on your browser, not on something that's external. - The UI has a different feel to it. - Yeah, the UI is different again.

It's one, again, a different UI, but I kind of like the playground UI.

It's not a bad one. - Yeah. - I wonder about something.

So when you were talking, you mentioned briefly the idea of writing summaries for the essays, which I agree is way too manual.

But I'm thinking about the work that Jan is doing.

He's shifted over to LLMs. - Yeah. - My first thought was, well, I wonder if he could generate some summaries for the essays.

I mean, he's already definitely going in that direction with what he's doing. - Yeah. - But then I thought, what would it mean, and this may be becoming common, I don't know, I just haven't run across it, but what would it mean to have next to the search box on the top of the screen, a place where you could have a dialogue with an LLM that had been prompted with the contents of the wiki, and the J portions of Rosetta code and so on, whatever we've got.

So you could say, hey, anything interesting on the Collatz conjecture, or tell me about X, and the LLM would synthesize an answer based on the several thousand pages in the wiki that it has absorbed. - Yeah. - I bring this up only because it seems like, I mean, there are a couple of purposes to a corpus usually, one is to answer questions, the other is to have a place to roll around and play in a domain.

And it strikes me that much of the organizational effort that you would normally have to go through, you could sidestep if you had an AI, you could just talk to about the contents of the corpus.

I haven't thought this through, I'm not quite sure how you would do it, but certainly OpenAI and the other ones have APIs, REST APIs, you could set up a prompted LLM, custom prompted LLM to do your bidding.

I just wonder what it would look like to have an expert J interlocutor.

Now the problem is it doesn't actually know how to code J, it's terrible at that, it's good at Python and JavaScript, but it's terrible at J.

The concepts it would probably do okay on, the kinds of things that are in the essays, for example, it would probably be pretty reasonable on, just a thought. - Have you run into Kaggle? - Oh, yes. - I'm wondering if those resources might be useful to tackle J-like AI or J-aware AI. - Could you peel back a layer on that thought? - Well, the whole point of Kaggle is you're configuring and setting up AI type systems.

And I think there was a, they give you access to AIs that you're working with as part of the problem solving.

And if they have some challenge there that is related in some way to the domains tackled by J, I wonder if it would make sense to not only be coming up to speed as a training thing and as a challenge thing, but also introduce some J parsing and J syntax and meaning semantics, I guess, we're not looking for the next. - The problem as far as I can tell is that OpenAI, ChatGPT in particular, has crawled the J wiki.

It has. - Right, but it doesn't have access to, or it hasn't tried working with JHS, for example.

It doesn't have-- - No, it doesn't have an interpreter to play with the way it has a Python interpreter to play with. - Right. - But it also doesn't have nearly the size code corpus for J that it has for Python, JavaScript, and so on. - But like the WebAssembly, I mean, if it can do training, if you can direct its training and you can point it at the WebAssembly and somehow give it some experiments to start out with, it might be possible to train it on J, basically. - I think that much, that's interesting. - I'm kind of waving my hands here, but it seems like, at some point, somebody had to do something like that with Python, so it seems like it should be possible to repeat the effort, J, and possibly even finding documents on how, or notes on if they were the kind of person that liked to explain themselves how that was done.

That's really interesting. - So the challenge with the chat GPT with J compared to Python, one part of it is the immense amount of Python code it has to work with, but are you saying it had a way to test Python code as well? - I think it would have to, I mean, that's what AI does, it just throws a ton of resources at testing out different things, and for a programming language, one of the best ways of distinguishing between fact and fiction is, you know. - Run the code. - Run the code. - Yeah, and in that case, the playground actually would probably be a pretty good window into the J code. - Yep. - JHS might be easier for servers. - Enough instances to toss them out as fast as it wants. - Huh? - Chat GPT didn't start out with a Python interpreter, it got good at Python by reading Python, and it was only later that it was given access to a Python interpreter to improve the quality of its responses, so I'm not sure which comes first, I don't think experimenting with Python came first. - Yeah, it might need some handholding, some specific experiments to start out with, you know, things like, you know, do an iota, do a matrix inverse, something to tie existing realms of its language base to things that, you know, that could be translated into in command line, and it might take, you know, I don't know how long and how fast or how slow and how tedious that would be, but it definitely seems within the realm of possibility, even if I don't know how viable it is. - I also wonder if, I'm still stuck on how you jumpstarted, how you get enough-- - That's why I was thinking a Kaggle, I mean, obviously, or not obviously, probably the first, there's been a lot of, I've been seeing a lot of Kaggle competitions and people developing agents to do various things, and probably maybe starting off with talking to J is not the right place to start.

Maybe, you know, starting on something else like how to play chess is a better, you know, starting problem, but over the years, there've been a lot of things where they've been trying to get people involved this way, and there are people that are doing it.

So it's-- - Yeah.

That's really interesting.

I'm gonna think about that.

Not sure what to do with it, but that is, makes me wish I wasn't working on something else right now. (laughing) - Know that feeling.

I'm wondering whether it would give it a way to have a better interpretation of, like the essays like this one that I've got for Dendrite, where there's obviously written English and everything to kind of explain things, but most of it is coded in J, which right now, from the work Jan's doing, it's very hard for a chat GPT to make sense of, or there's not a lot of inherent information going into it.

But if you had a way to interpret this code and it had a way of understanding and running the code, maybe that's the window into having a better sense about what's going on in this. - I think any improvement winds up being an improvement. - Yeah. - And something that's, that the whole, the leap between parroting and understanding in a way that you can do something instructive is a pretty major one. - Yeah.

I have another question.

And this harks back to something I tried to do a few months ago, which is the whole, the notion of a, of tracking visits to pages.

Is there any interest, is there any appetite for that at all?

I'm asking again, 'cause there wasn't a few months ago.

So I had thought in terms of some kind of usage report that would go out monthly saying, hey, here's where people are spending time. - Didn't Chris say that he could get us access, that we have access to the logs right now, actually?

Just the last March. - Well, here's what happened.

Here's what happened, Raul.

I sent Chris a prototype report that wasn't very good.

I don't, I always forget that you have to be very careful about what you show people when you're doing preliminary work.

So he thought that was not appropriate and shut the whole thing down. - He was probably talking about that prototype rather than the concepts that-- - No, no, no, no, no, no, no, no, no, no, no.

In particular said, look, there's so much bot traffic, you're not gonna be able to tell who's human and who's not.

That turned out not to be the case.

The bot traffic identifies itself, and it was quite easy to filter out the bot traffic.

So I actually sent him something and say, oh yeah, look, here, we can do it, but obviously I would need your approval.

I don't have it, so I'm just gonna let it drop, which I did until now.

So we are actually in a position to get what I think are good usage statistics.

The question is, do we even want to know?

I think that usage statistics over a long period of time are interesting.

Usage statistics over a short period of time are too quirky and too much about a person's passing interests to really be able to.

So it's kind of a, to even do a good prototype, you need to spend like a year or two on it to be building up the records. - Yeah.

I guess to me, it always strikes me these things are a bit like the planting an oak tree, story, right? - And I don't think you're necessarily, I don't think, I think Chris has kind of already got enough permission to do something, build a prototype.

Maybe for publishing, you'd want to get his approval, but for collecting the things to, just saying something on the back burner to run for a while.

I mean, you already have his permission for that because he's granted you the access to the machine.

As long as you're not bringing it to his knees or anything with overloading its resources. - Right. - You don't want to do that.

I don't think anything about this would head into that territory. - No, certainly not.

So those, you asked for thoughts, Bob, those were my only two, my only two thoughts. - It's good to have two thoughts.

It helps to rub them together from time to time. (laughing) - Get some tinder and get a spark going. - You can get stuff happening between thoughts.

You can't get things happening between one thought.

That's harder to do. - Got your hair on fire. (laughing) - Yeah.

Okay.

I would say myself, I mean, the second question with the stats, I agree with Raul, they become more valuable over a longer period of time.

But if we wait two years and don't do them till then, we don't have as long a period of time for them to become valuable, right? - Yeah, I don't, it was just a thought.

I don't have bandwidth right now to do anything with it. - When you get bandwidth, I think it would be something, if something lightweight could be put in that just sort of tracks page visits. - Yeah, it would be easy enough.

The logs get kept for seven days.

So every day or every week, I would pull whatever was new and just start populating a database. - Yeah. - Pretty easy.

Probably the right way to do it would be a public webpage that just grew over time. - And I think it's one of those things that if you start making noise about it early, people will probably tell you it's not worthwhile.

But if you ran it for a year and then said, "Look what I found out," they would find it immensely valuable. (laughs) - Awesome. - Yeah.

Okay.

You've touched on the stuff that you and I and Jan had been working on.

And we hadn't really mentioned it in the Wiki meetings, which was why I put it on as an agenda item, because it's kind of flying under the radar.

But I know semi-weekly, every couple of weeks, Jan Jacobs, Ed and I are meeting together and just looking at some things he's done with ChatGPT.

And originally, I would guess he was using statistical methods to be able to categorize the Wiki.

And now he's sort of moved on to semantic methods and he's looking at more things like summaries and namings and those kinds of things about organizing the Wiki that way.

I guess the most obvious distinction he's made is within the essays, essays that have a fair number of English words in them, he can actually do a fairly good summary of.

Essays that are primarily J related, it has a lot of trouble understanding.

But that's not surprising, but it does show you the difference that it may take a different approach to get more semantic use out of J code than English.

English, it seems to do fairly well on.

And I just wanted to bring it up so people were aware that Jan is doing this and it's moving forward and we're continuing to explore it.

I'm not sure where it will end up going, but it's a fascinating thing to explore.

And he's, as I said, he's getting into the zone where there's some interesting things happening.

I think the last meeting we talked about whether there was a way to actually have, I can't remember what it was.

It struck me at one point where he was on the edge of having some semantic meaning into J by how he was asking the questions.

But I might be completely off base on that.

I might be hallucinating. - I missed that. - Yeah. - I know. - No, I mentioned it then during the meeting and he kind of shut it down real fast and I haven't hung on to it.

So I was probably hallucinating. - All right. - But that's why I brought that up.

Do you have anything to add to that Ed that I glossed over as I do? - No, I mean, the original goal was to come up with a principled bottom-up categorization of the contents of the Wiki.

And he was originally, as Bob says, using statistical methods.

He's now moved over into what might be characterized as a modern AI approach where he's looking for categorizations and summarizations and labeling from LLMs.

But it's, although he's been at it for a while, I would characterize it still as very much early days. - But I would say it's kind of neat to see that kind of exploration going on the same way as when we were discussing Kaggle.

My guess is if Kaggle, you would need to put forward a challenge to people, which I don't know.

I've watched Kaggle over the years, I'm aware of it, but to me, it's mostly a contest that people, somebody who's got a problem will throw an award to and see who wants to take it on and solve the problem.

And then there's a competition to see who can solve the problem the best.

But-- - Yeah, I would say Kaggle is mainly a source of data sets and Jupyter notebooks that explore the data sets.

I'm not really tuned into the contests.

That's not something I know a lot about.

Is that common? - I think originally the idea was that you would take, it was almost a crowdsourcing of these problems.

You'd have a data set that was large and hadn't really been explored.

And there were cases where NASA had these huge data sets of planets or potential planets.

And so they turned it loose and there was actually a group that ended up discovering planets out of the NASA existing data set because they hadn't had a chance to, like they don't have the time and they've got their resources other places.

Well, this group just went in and basically played around with over sampling and sampling appropriately and all the tools they had for sampling until suddenly these little things started jumping out.

And then they had a reduced set of things that they could look at potentially, well, there were planets and some of them turned out to be planets.

So that, and that was just based on, I don't know that NASA was giving an award for that, but it was more just that maybe Kaggle said, this is a really interesting data set.

We'll throw this as a problem.

What things can you spot out of it?

But it's also been a place where people who are really into cleaning data and want to develop experience in those areas will go in and learn from these challenges because quite often, I think in most cases, the challenges are open.

You can see the solutions.

So you can see what other experts have done in the field.

It's a way of exchanging information that way. - Right. - It would be, I'd be really interested in the chat GPT approach if it was less energy intensive.

So in other words, I'm not as interested in something that's gonna go off to another server the chat GPT is running, but if something could be made local using the J-Corpus plus an LLM model so that you're not having to grind through and create models all the time.

You do it once, then you make use of that. - The expensive part would be creating the model in the first place, training it, running it is pretty cheap. - Yeah. - You wouldn't have to perpetually create and recreate it.

You do that once.

So I don't, the energy intensiveness, I think is pretty modest in use.

I think it's amortized the more it gets used. - So you try and take an updated form of chat GPT and then you include it with the J-Corpus and blend that together.

That's what you would run going forward? - I'm not quite sure what would run where.

Chat GPT has a REST API that you can use for building this kind of thing, the kind of thing that I outlined, about which I don't know very much.

But the idea would be to augment the existing chat GPT matrix, whatever, brain, with, I think, with a lot of targeted J instruction.

So maybe you could just tell it.

It would be sort of like walking it through the vocabulary, walking it through Nuvoc, I guess.

And then having done that incremental augmentation of its knowledge base, then turn it loose on people who wanted to talk about J.

And I don't know enough to know exactly how it would work or what the odds are that it would work at all.

But I think it might be an interesting augment, basically a chatbot for the wiki.

Want to talk about J?

You're interested in J?

Can I answer any questions about J for you?

Operators are standing by. - It sounds like a door-to-door campaign.

Would you like to talk to me about J?

(laughing) Yeah, I guess the way I've heard, and I think it was Connor was telling me what he's done, is he took Marshall's GitHub repository of BQN, and he loaded that, because you could now take like text and just load it in as part of the question.

And he popped that into chat GPT, and a couple of others as well, 'cause he's got access to a number of different LLMs.

And he said the results were actually, he was pretty impressed.

It wasn't 100%, but there was a lot of information that was pulling out from that, that it could suggest to him ways, solutions to some of the things he was proposing. - That's what I'm imagining, yeah. - But each time he's doing that, he's loading this whole corpus into chat GPT. - Yeah, he won't do that once. - Exactly, that's where I'm getting at, is if you have to do that each time, that's energy intensive.

If you just are able to build the model once, and then react with that, that's probably the goal. - I threw a link into chat that kind of explains why I brought up Kaggle in the first place, and might be relevant to this kind of loading, data loading problem.

I haven't spent enough time on it, unfortunately.

I should have signed up for that thing when I saw it back in 2021, but. - The other thing is, and I don't know, I don't think it would be worth putting too much energy into it, but Arthur Whitney keeps claiming incredible advances with Shakti. (laughs) Compared, you know, like in terms of the speed and the ability of Shakti to do, like today he was talking about something that sounds like it fits closer to photo recognition than anything else, 'cause it was fast Fourier transforms.

But other things he's done have related back to LLMs.

So whether or not that's, and that of course is, certainly in some sense is an array language, although it's a regular array language, 'cause it's K, but it's still an array language and has a lot of the same kind of, and who was it?

Was it Keung, who was at the Iverson College, actually recently wrote a version of Shakti in ML, which blew everybody away.

'Cause he's taken Arthur's C, Whitney-esque C program and converted it over to ML.

And that's kind of interesting.

ML being- - I'm not sure what- - What's that? - I'm not sure what the point is that you're making about Arthur Whitney's fast code. - It's fast and it's already array oriented.

So is there an option to do something like that?

And- - I don't understand. - Do something like that for what? - For chat GPT, whether there's a more efficient way to do chat GPT. - Oh, I wouldn't stick my head into that problem for all the cheese and cheddar. - That's the problem with being informed. (laughs) Where angels fear to tread. - Easily distracted. - Yeah, yeah.

I mean, there's almost no cost to me throwing the idea out there, but you know, yeah. - No, I'm perfectly willing to treat chat GPT as a service to be exploited, but the idea of re-implementing it strikes me as, when I think of the tens of billions of dollars that go into training it, and the sheer quantity of compute time and the sheer amount of data it has to ingest, that's not a job for an individual at all. - Yeah, I think what Arthur's claiming, I don't know how accurate it is, but he's talking about orders of magnitude, less time and less energy.

If you use-- - Probably for specific kinds of problems, which are probably related to market analysis or something like that. - Time series data. - Time series more, and I know chat GPT isn't really well suited to time series, so that would be a different area to go into.

Anyway, I think that's about it.

Good to see you again, Raul.

I think in the next couple of weeks, I will be ready to take that table and put it onto pages.

So I think you were talking at one point that you wanna do a test wiki on that. - Test wikis are nice for anything involving JavaScript, because one typo in JavaScript and you bring down the site. - Yeah. - At least until you get it fixed. - So I guess what I'll do is closer to it, I'll ask Chris if he can set up a test wiki for a couple of weeks.

'Cause the two things to put on was on the header, was to have that map, which isn't so necessary anymore, but the JViewer probably access or the search codes. - And the other thing was the JPlayground buttons. - And the JPlayground button would be another one, yeah. - Button or feature.

There's more things we can do with it than just a button, if we can remember some of our motivations. - I think the original motivation was to try and have a way to copy, paste, and then run J on the playground. - Right, to turn wiki pages that provide scripts into tools that a person could use to play with that script on that page. - Yeah.

So if it was a button and you had access to whatever your selection was, you would just need to go to the playground and then go to the script window and paste, and that would put what you'd put in there, and then you could play around with that. - Yeah, that's one approach.

There might be other approaches. - Okay.

We'll leave that on the burners.

Anyway, great to see you again.

Happy New Year. - Happy New Year to you both.

Take care. - Yeah, everybody be safe. - All right, goodbye. - Bye-bye.