Wiki/Report of Meeting 2024-02-29

From J Wiki

Present: Art Anger, Skip Cave, Raul Miller, and Bob Therriault

Full transcripts of this meeting are now available below on this page.

1) A thank you to Skip for providing access to a recent Gemini AI presentation https://drive.google.com/file/d/1KpW33ztSKDI1lbKjyb-o_Ld0yXLSQLMo/view. In the future, large language models (LLMs) may provide a better level of access to the J Wiki. To provide more reliable responses, Gemini can be 'grounded' in specific J sources, and it can then be used either to generate answers in a web search with citations or as a chatbot. Skip pointed out that the LLM is most useful because it is able to interpret the user's search terms; unfamiliar terminology can be an obstacle to newcomers. There is a cost to this, but other open-source approaches may be able to achieve similar results, albeit with more work required. Gemini's model starts from the initial LLM and then does daily training on the 'grounded' information to keep it up to date. Skip felt that weekly updates would be all the wiki would require; Bob pointed out that there may be times around version releases that need more frequent updates. Jon Udell has written a series of blog posts on incorporating LLMs into programming tools: https://thenewstack.io/author/jon-udell/ . Skip wondered if there were a way to recoup costs by charging a fee for a J chatbot; Raul pointed out that the limited amount of existing J code may restrict the success of this venture.
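The 'grounding' workflow discussed above can be sketched in miniature. The snippet below is a toy illustration only, not Gemini's actual API: the page titles and bag-of-words "embeddings" are hypothetical stand-ins for a real crawler and embedding model, but the shape is the same — index the wiki's content as vectors, retrieve the closest match to a query, and return it with a citation.

```python
from collections import Counter
from math import sqrt

# Toy stand-ins for wiki pages; a real system would crawl the J Wiki.
PAGES = {
    "Vocabulary/Foreigns/9": "locale foreign conjunction 9!: global parameters",
    "Guides/Threads": "threads tasks parallel cores primitives",
    "Guides/GettingStarted": "install J download beginner tutorial",
}

def embed(text):
    """Bag-of-words vector (a stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

INDEX = {title: embed(body) for title, body in PAGES.items()}

def grounded_search(query):
    """Return (best-matching page title, score) as a 'citation', or None."""
    q = embed(query)
    title, score = max(((t, cosine(q, v)) for t, v in INDEX.items()),
                       key=lambda ts: ts[1])
    return (title, score) if score > 0 else None

print(grounded_search("how do I run parallel tasks on many cores"))
```

Because answers are drawn only from the indexed pages, a query with no match returns nothing rather than a hallucinated answer — which is the point of constraining the model to a set corpus.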

2) Raul talked about how the array languages had an initial advantage because the language was tightly integrated with the hardware of the time. Now there is a much wider gulf between hardware and language implementation, with the introduction of GPUs and array processors that require specific programming techniques to take advantage of parallelism. Skip wondered if there was a way to link J to the CUDA language that NVIDIA uses. Raul pointed out that there is limited documentation for adapting J to NVIDIA hardware; in the past J has been designed to connect to other communities, but Raul is unaware of how J could be bound to CUDA. It may be something worth investigating. According to Skip, CUDA uses an LLVM-based compiler, which Raul feels J would not compile to, and Raul feels the commercial barriers that come with CUDA oppose integration with J. The J language has instead been developing toward parallelizing through CPUs and threads. Skip mentioned that NVIDIA's top-end chip has 16 thousand cores. These might be used for training LLMs, but would not be necessary to run one. Updating the J interface to an LLM might be done on a personal computer, along the lines of how Ed is generating the SQLite database for the J Viewer.
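The CPU-and-threads direction mentioned above can be illustrated outside J. This is a minimal sketch, not J's actual threading machinery: it shows the general shape of splitting an array across a thread pool and reducing the chunks concurrently, the kind of data parallelism a CPU with a handful of cores supports without any GPU.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_chunk(chunk):
    """Work unit: reduce one slice of the array."""
    return sum(x * x for x in chunk)

def parallel_sumsq(data, nthreads=4):
    """Split data into roughly nthreads chunks and reduce them concurrently."""
    n = max(1, len(data) // nthreads)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        return sum(pool.map(sum_chunk, chunks))

print(parallel_sumsq(list(range(1000))))  # same result as a serial sum
```

The result is identical to the serial computation; only the scheduling changes, which is why this style of parallelism fits an array language's "whole big chunk of memory" model better than CUDA's many-small-processors model discussed in the transcript.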

3) Bob had spent some time with the CSS Grid tool that Raul had suggested and replicated the Foreigns 9!: page https://code.jsoftware.com/mediawiki/index.php?title=Vocabulary/Foreigns/9&action=edit with a much easier layout style https://code.jsoftware.com/mediawiki/index.php?title=User:Bob_Therriault/9!:0&action=edit. This may make maintenance a lot easier compared to some of the tables we have been working with. Raul felt that this could have been a change to the original page rather than maintaining two versions, although Bob had taken more of a prototyping approach. Bob also included tables of contents and a category tree display as content within the grid https://code.jsoftware.com/wiki/User:Bob_Therriault/Div, which allows the display to be more responsive and also keeps elements of the grid centred no matter what the size of some of the elements.

For access to previous meeting reports, see https://code.jsoftware.com/wiki/Wiki_Development. If you would like to participate in the development of the J wiki, please contact us on the J forum and we will get you an invitation to the next J wiki meeting, held on Thursdays at 23:59 (UTC). The next meeting is March 7, 2024.

Transcript

There we go.

And start off with, thank you Skip for sending that stuff out

to Ed and stuff about the Google stuff.

Yeah, I'm a Google partner.

I hadn't really gotten to that too much.

And then all of a sudden, I started

getting a bunch of stuff from Google saying, hey,

we got all our newer AI stuff coming,

and here's all these courses you can take.

So I watched this one, I said, oh, this is very interesting.

I guess there's several more videos I probably need to watch.

But I know there's some problem getting--

watch that video.

So I just downloaded it and put it on my Google Drive

and put a link on it so it doesn't

require all the authorizations to go in there and look at it.

Yeah, I saw that.

And thank you for doing that.

I actually decided it was worth signing up for it.

So I signed up through the Arraycast,

and they've got all that information now,

which is fine.

But it gave me access to the introductory video, which

was pretty cool.

I mean, I thought in that video, the number of things

kind of went wrong on them, which was interesting,

because it's an AI.

I guess it's not Gemini.

But a couple of things they were trying to do,

they couldn't get working.

And then a couple of things it zigged

when they thought it was going to zag, and that's

pretty natural too.

But it was obviously not too scripted a presentation,

which was nice, actually.

And--

I watched it.

That's not the one--

the video you're talking about, the introduction video,

is not the one that I sent out, right?

It's a different one.

I didn't check the one that you'd sent out yet.

I looked at one of the links that you sent,

and that's what I got it from.

OK, yeah, OK.

That's the intro, yeah.

Yeah.

I haven't-- well, I just stumbled across this one first.

I probably should have gone back and watched the intro.

But this one intrigued me so much that, OK, well, now I've

got to find the other ones and watch them all,

because this one just popped up.

And I started watching it, and I said, wow,

this is definitely interesting.

I mean, it basically showed how they took a random website

and a bunch of information and turned it into a chatbot.

You just ask questions about everything on that website,

and it would--

and there was all kinds of ways to put constraints

on how crazy it would get.

It was pretty good.

Yeah, and actually, in that introduction,

they talked about that.

They probably spent about 5 or 10 minutes

talking about what they called grounded AI, which

is what you do when you take an LLM

and you constrain it to a set corpora.

Right.

In that case, it would be the J Wiki.

You could send it to the J Wiki and any other blogs you wanted.

And then when you did that, you could

restrict the answers it gave you based on that information.

And so it's got the language ability and the discussion.

And you don't even have to go to the point

where it's a chatbot.

You can just do it as a search as well.

Yeah, right.

You can just search on that.

Right, right.

And then you can do searches, and it can do--

it can actually generate the references automatically.

And it's really cool.

And that was specifically, if it was grounded,

it gave you the option to generate

the references about where it got the information from,

which is pretty much exactly, I think, what--

well, in some ways, what Ed's looking for in terms of his--

and it was using vectors.

It was going through the site, the information.

It was creating vectors and then loading it back into the LLM.

But this allows you to get that natural language understanding.

So words that mean similar things, that's where the LLM

really comes in handy.

Because if you don't use exactly the right terminology that's

in the website, it realizes what you're

trying to mean, more or less.

Not always, but you can get off track, too.

But lots of times, it works.

Well, and one of the options in the chat,

you could do it as a one-off question and get an answer back.

Or you could do it as a continuing series of questions.

And it would base its answers on what

you'd already previously asked.

So in that way, you could sort of narrow it through.

And I think that would be useful as well.

But it does cost.

I mean, I think they were looking--

now, this was for--

I think they were talking about a million views over the year.

But for the search, I think it was in the area of $2,000

a year for that.

More for the chatbot.

And if you're actually using a voice-activated one,

it was more still.

I think there was--

and I'm thinking it would be for places like call centers

and things like that might use that information.

And it's worth them putting in $20,000 or $30,000 a year

on that.

That's nothing for them to get that functionality.

Yeah.

Yeah.

Yeah, I ran-- one business I was in was speech recognition

and getting rid of call centers.

And so we were able to charge a fairly good piece of money

for a robot that you could call into and tell it what you want.

And it would-- as long as it was a narrow range of stuff,

it was not nearly as flexible as what we're talking about.

But if you call into the bank and want your account balance,

you can get it.

And we wouldn't have to--

they could say it several different ways,

and they'd still get their account balance.

That was the main thing.

Yeah.

And I think what would be attractive in this situation

would be that if you ground it to the bank's information,

you're probably more likely also to be able to lock down

the security of the--

like you're not worried about it going out and exposing somebody

else's account.

If you give it the access it requires,

it's got to go through a certain process to get there,

and that should give you some security.

Right, right, right.

It's all-- it's interesting, but--

and I can see an application for it for the wiki.

It's just a question of whether it's

worth it to spend the funds, or it's

worth it to do some research into how to take an open AI

and conform it to that.

I would imagine that's Boston training.

Yeah, the video I watched, if nothing else,

it gives you an insight into the kind of things you have to do,

and the terminology that they use to control the AI.

And it's a pretty good video.

And I think for just the question and answer,

they were talking about the initial training

plus doing daily updates on the information.

Oh, yeah, OK.

So that every day you'd get--

it would get a little bit more information about your site,

whatever had changed, and it would be using that.

So there's a fair amount of compute time,

I'm guessing, in that.

So I mean, I can sure see its value.

Compute time, but maintenance time, people on call

if there's issues, that kind of stuff.

At least somebody having a contact.

Yeah, and I guess that's the other thing.

If we were looking at it in terms of the wiki,

it would be a question about what level of service

we were committed to providing.

Yeah, well, how often would you have to update it?

I doubt it would have to be hourly or even daily.

It would be maybe weekly or something like that would be--

because the wiki, I don't think, changes that much in a day

or two days.

I wouldn't think it does in a given day,

but there may be some days where it probably changes

and may be more important around the times of releases

or things like that.

You might get information that you don't really

want to go stale.

You'd like it to be up to date.

But actually, even then, a week might not be too bad.

But I think it'd be great to be able to call and say, yeah,

we'd like it updated now and have it--

Yeah, just have a manual update thing or something like that.

Don't auto-update, just manual update or something like that.

But again, then it'd be a question

about whether you use one of the open systems

and then train it and see if it's worthwhile to do that.

I don't really know that much about writing an LLM.

There's a series of blogs by Jon Udell, which are pretty

good, and he's basically just chasing down different issues

with LLMs and whether they're useful for programmers

and whether they're useful.

He started out by the--

I think it's the trope of having the duck on--

a little rubber duck beside your computer.

So you talk to the duck about what you're doing.

It's like having pair programming to some extent.

And he started out that way.

But then as he got deeper into it,

he thought, yeah, there's something here.

And he gets to the point where in some of his--

I'll put a link in the notes and stuff.

But he gets to the point where--

let's see if I can find it right now.

I should be able to--

where he talks about how to craft searches and stuff

like that.

There we go.

Put it in chat in case people are interested.

And there it is.

And it's got a list of a lot of stuff he's done.

Some of them are probably more important than others.

He doesn't go into--

I thought from reading the headlines,

I thought he was going to go into more depth.

He doesn't go into a ton of depth,

but it's enough to give you a sense of things

to watch out for, which I thought was useful.

But again, it would take somebody

who was committed to doing it or the usual thing.

You can either do the work or you can throw money at it.

Yeah, or both.

Yeah, or both.

That's true.

There's some work involved anyway.

But I'm just thinking there's--

I remember experiences I've had where

having discussions with people about technology

and we were figuring out a number of different ways

to do something.

And then somebody just said, well,

how much is that going to cost if we just buy that hardware?

Somebody said, well, you get a card for like $500.

We all looked at each other and went, OK, we're done.

That solves the problem.

Put it in, we're done.

We don't have to do all this other stuff to make it work.

Just $500 and the problem goes away.

So sometimes you have to check into that.

If the cost is reasonable and the service is good,

then sometimes it's just worth spending the money.

If you get the return of--

in the case of the wiki, you get the return

of being able to find the information you're

looking for much quicker.

And that helps grow your user base.

You could charge for J classes.

[LAUGHTER]

Well, honestly, at some point, that's

the thing is you've got to have to figure out a way to--

I mean, right now I think J software is doing a lot of it

by having developers work with third parties

and using J that way.

But you're right.

At some point, if you really wanted to grow your base,

you want to have things like that.

And you want to bring people in and sort of get a bit more buzz

going about the language.

Tough time to do it.

There's a lot of new languages coming out

that are very attractive.

So it's a tough time to start up doing that.

But then again, maybe it's never been that easy for an array

language.

You could charge for by the minute

or by the hour for a J chatbot or something like that.

[LAUGHTER]

Yeah, you never know.

That'd be interesting, whether you could actually

program something that was like a code assistant or something.

Yeah.

[LAUGHTER]

Weird in programming context because the issues

are so very narrow sometimes.

It really restricts you on the supply side sometimes

or the demand side.

Yeah.

Yeah.

Yeah, I saw some replies to some Y Combinator news, some Hacker

News things, people talking about how well the array

languages were set up for things like LLMs.

And I thought, yeah, it's funny.

And it's obvious from Hacker News,

people don't always know what they're talking about.

But you see, I've heard so many people at a level say,

oh, it must be.

But then I've never seen it done.

And I've talked to some people who

say, actually, that's pretty hard to do with an array

language because you just don't have as much--

the patterns are there, but they're maybe not quite as

evident as you would think.

The patterns are there, but the hardware fit is difficult.

If you were doing parallel programming and stuff,

you mean?

Yeah, or the programming techniques

to do the hardware fit.

The thing about array languages that got--

one of the things that got it off to a good start

was how well the array model matched up

to the hardware of the time.

I mean, it was kind of a double--

coming in from two directions.

On the one hand, it was being used to model the hardware.

On the other hand, once it was modeled,

the people that were involved were

able to come up with a reused set or a simplified approach

that also was something they could

implement on the hardware.

So it was a fortuitous sort of combination.

And nowadays, people that have that kind of insight

into hardware don't really come at it from the same direction.

There's a lot of other state machine type approaches.

And that's been built out.

It has a lot of fans and a lot of effort behind it.

It's different from back then when

we were trying to figure out--

we had the math and we were trying to figure out

how does the math apply to the hardware.

Nowadays, that's a very rarefied approach.

I think nowadays, there's a greater

distance between the low level and the higher level.

Back then, we didn't have a high level.

We were trying to invent it.

Well, I would say APL was a relatively high level.

But not as high level as the very high levels now.

But also, the low levels weren't as complicated

as the low levels have become now.

The way I see it, it looks like to me

that most of the LLMs and even the AI stuff is all about--

you've got this huge mass of data,

but you have multi thousands of concurrent tasks running

at the same time on multi processors kind of thing.

And the big deal with the new NVIDIA processors

that do all the AI stuff, you look at them,

they just have a whole raft of small processors.

And the secret is there's a lot of difficulty.

They're all sharing a common memory space.

And so they have to have a super high bandwidth memory.

So there's a super high bandwidth memory

and a whole bunch of processors all around this memory that's

sharing all this stuff.

And so that, to me, unless you tune the J interpreter

to deal with that model at the hardware level,

you're not going to get the efficiency that you need out

of to do LLMs and high level AI just because of that.

And a related issue applies to both--

I mean, NVIDIA processors tend to be

where a lot of the AI process happens.

I don't have anybody I can talk to that

can answer the questions I need to find the answers to get me

over the road bumps that I run into.

I don't know anybody that has that kind of depth

in the NVIDIA side of things.

And so the limited scope of our community

kind of bites me there.

And it's kind of a chicken and egg problem.

You don't have the hardware people.

Like, for example, working with OpenGL,

I can throw information into a buffer.

And I can't figure out how to get the information back out

of a buffer after doing some OpenGL processing on it.

That's just one very simple--

and I know it's possible because people do it all the time.

And we've got the OpenGL interfaces.

And then there's the CUDA interfaces

that's just almost orthogonal to their OpenGL

and gives you a lot more access.

And I don't know--

I don't even have--

we don't have JBindings for it.

I don't have example problems.

I don't have any--

I could start coming in from using somebody else's CUDA AI

package.

I've tried that a couple of times.

But I'm kind of out of my depth there.

And I don't really know which direction I want to head.

I think-- well, NVIDIA has this CUDA language.

And they have a whole infrastructure

built around their language that they've taught people

on how to program their AI.

But if you don't get into that--

into their thing, you're pretty much out of luck

because everybody that wants to use them

has to maybe learn that, the CUDA language and all that.

I wouldn't mind learning CUDA.

But I don't know--

I mean, are there--

I mean, one of the things that J went for originally

was the ability to tap into other communities,

the DLL dynamic linking stuff.

And the scripting stuff, all that

was designed so that we wouldn't be such an isolated community,

so we could tap into the other communities.

And I don't know how to wire up J to a CUDA infrastructure.

I don't even know if CUDA--

I don't know enough about CUDA even if there's any C bindings

or exports.

It would have to be at the file level that we'd have to--

so often people want to say, to capture our audience,

we want to have people using this IDE.

And then the communities are tied now to the IDEs

rather than to the language and rather than the technology.

And that creates a barrier to entry, which--

there's commercial values to that.

And there's maybe even--

it's also a limiting--

it's also a limiting factor in how this stuff can be used.

And you can see it in--

nowadays, it's very rare for people

to be talking about file structures, for example.

It's always this app or this canned approach.

And then what happens is you need skilled operators

to work with that context.

And then the people develop that one skill set.

And then other skill sets have difficulty

being brought in unless-- except at the hobbyist level

where somebody with that one skill set

happens to have a hobby that touches on some other skill

set.

It's a large connectivity thing.

Well, NVIDIA has what they call their CUDA C, C++ compiler.

And they have a basic--

it's an LLVM-based C, C++ compiler.

And you're supposed to use their compiler, their C++ compiler,

to do everything.

So I don't know if you could take the core of J's low-level

C and C++ stuff and could use the LLVM-based compiler

and see if it'll work.

It won't.

I mean, you can't compile J to there

because the memory model is different.

There's just not enough address space in any one processor.

And you've got to solve those problems.

But no, what I was talking about was how

we have OpenGL bindings in J. We can build up--

I can do a little OpenGL and just tiny--

I've only done tiny stuff.

And I can do it in J, and it'll work.

But I can't do CUDA programming in J, at least currently,

and have a--

I haven't even tried to figure out what that would look like.

Maybe it would just be run one J command,

and I write stuff to file.

And you've got to start out small.

But the point I was going after was the commercial barriers

are in direct conflict with the adoption of--

not the adoption, the ties between our communities.

J community working with the CUDA community

is almost in direct conflict with the efforts

that have gone into making CUDA a viable commercial product.

I think-- yeah, I see what you're saying.

I think one of the other issues is that the whole CUDA model

is a whole lot of processors and a little bit of memory

in each processor.

You're pointing that out.

And J is like, I've got this big chunk of memory,

and I can just crunch through that whole big chunk of memory,

and I don't have to break it up.

And that's right.

That's the fundamental array language thing,

is that the size of an array doesn't matter.

The size and shape of an array doesn't matter

in classic programming, because you just

have this one big address space.

And you can carve it up however you want.

Whereas with the NVIDIA, you've got four wide--

of course, that's starting to also crop up in J.

You have other dimensional units.

And what Henry's been putting a lot of effort

into taking advantage of some of the hardware abstractions,

saying, we'll do it this way.

And if it happens to be close to this one memory model

or this one architectural feature,

we'll have another batch of code that does the same thing

all over again.

It runs faster, because it's four, eight by--

however many bytes wide the instruction set can leverage.

And with his threads and things like that,

he's basing them on the CPUs.

So it's not a GPU.

It's driven by--

Well, no, he's based now on the OS.

He's relying on the OS to tie it to the hardware.

But he is looking-- he's not looking at breaking instructions

down to GPU instructions.

He's looking at taking CPUs and running the language that way.

Right.

That's another thing is there's no integrated compiler that

can compile to both the GPU and CPU with dispatching as needed

or as directed by the programmer.

I was just looking up the NVIDIA CUDA core.

And their latest one, the GeForce RTX 4090,

has 16,384 cores, 16,000 cores.

And how many tens of thousands of dollars does that ship?

I don't know what it is.

But that's their top end line.

They also have other ones that have 10,000 and 3,000 cores.

But yeah, you're right.

Who's going to have that on their local machine to run J

is going to be another question.

Now, I mean, what happened, of course,

is that NVIDIA originally was everywhere

because they were making the video processor

cards for all the PCs, right, the video processor sections.

And they realized that the video processor needed a lot of cores.

It'd be much more efficient when you're doing transformations

on a video screen, manipulating it.

It'd be much better to have multiple cores

and more of the memory.

So all they did was just take their video processor chip

and start putting new APIs on it to do AI.

And that's been interesting ever since.

But even today, I mean, a lot of machines

have a NVIDIA chip in them somewhere to do the video.

If you've got a screen on your machine, lots of times

you'll have an NVIDIA chip.

So you may be able to use them in the wild

rather than having to buy a specific machine

with NVIDIA in it.

I'm not sure how much Apple is using NVIDIA stuff these days.

I don't think they are.

Maybe not.

They may have built their own.

I think they are, to some extent,

using their own decoders as well.

But I think they also are leaning on some others.

If you think that's a direction to go,

I can talk to Connor about it.

He's a data researcher at NVIDIA.

And he's programmed in the Thrust application, which

I think is one of the parallel processing parts of CUDA.

So if anybody knows, he'd know the ways you could approach it.

I'm just not sure whether that's a market that we're

looking to expand into.

I don't know.

It's niche, but it'd be very powerful.

Yeah, maybe.

I mean, I usually--

I find I can--

with J, I do a lot of--

some of the stuff I do on Quora is huge.

And so I use a lot of memory.

But it's still-- it may take 30 seconds to run sometimes

or longer.

But it gets done, and it's easily controlled from J.

So I don't know if--

the main goal for NVIDIA is super--

well, is actually the making of the LLM.

Once you've got the LLM--

Model.

Model done, using it is actually not that big a deal.

So it's a matter of whether--

wait, it's a whole different question

about whether you're training an LLM or you're using an LLM.

I think that's the issue.

If you've already got the LLM done, then hey,

you can probably stick it on your machine and use it.

So that's an interesting thought.

Well, and that was one of the things that the Google video

that I watched, the introductory video,

they were talking about that.

That's essentially what they were saying,

is the work they've done to create the LLM has been done.

And then referring to their term was grounding it.

If you ground it in a specific database or information

knowledge area, there's just a little bit more training

to bring that in and give that priority to the answers.

And that's what they're banking on.

They've already built the natural language processing

part of it.

And now they're just latching it to this small little bit

of information that you particularly want to use.

That's what you're paying for.

Yeah, so they use the CUDA and the big multiprocessors

to generate the model.

But you don't need that to use it.

And that's maybe--

My guess is when they talk about doing a daily review

of the data and regenerating the model to reflect the new data,

I would say they're probably doing that with the CUDA.

But it's a small amount of information

compared to the huge amount of information

they've gone through over the last year to develop the LLM.

Yeah, that's a good point.

I don't know.

That's a good question.

I know.

Just how much processing do you need to update an already

built LLM?

Yeah, and that would come down to the question

about whether it's worthwhile to pursue something

that might be open source.

And you don't really need to be beholden to another company

or pay another company for it.

You just have to be able to install it

and then have the beans to be able to go

through your area, which I would say in the size of the wiki

probably isn't out of the scope of maybe

a fairly beefy personal computer being able to do.

I mean, Ed's generating his database, I think,

in a number of hours once a week.

I think his big question, from what I remember him talking

about it, was the thing that was holding him up

was getting a clean run all the way through that

didn't bog down.

If it bogged down, he had to start up again.

Once he got a run through the whole thing,

it was maybe an hour to go and create the database.

And then he would post it.

You could download it.

And you're not doing any of the work.

You're just doing a download.

But he's created that whole thing.

Interesting.

Yeah.

Anyway, it's certainly worth looking at.

And it would be an interesting extension of the JViewer

if you were able to give it some AI pattern

juice that could make the searches just a little bit more

helpful and make the wiki a little bit more accessible.

Ed has probably got all the data already sitting there.

It's more a matter of turning it into an LLM, which is--

I don't know what--

that's where it's going to take the interesting--

the trick is to figure out how to take that data

and put it into that LLM.

Well, I think you'd take the LLM model.

And then you'd have to figure out

how to point it at your data.

And if you did that, then I think the LLM isn't--

it's a matter of pointing it at the specific area

you want to answer the questions with.

Yeah, yeah.

And that-- I don't know.

I think it's worth looking at.

And thank you again for providing that,

because I think it sparked a conversation.

I think that's always good.

Well, I was just watching that.

And I thought-- I was watching that video.

And I thought, this is exactly the kind of thing

I was trying to think about.

I don't know how to say it.

But basically, they said, listen,

you can control the scope of what people can ask

by this up front.

You can kind of load it with the things

that you think are appropriate.

And it'll constrain itself, theoretically,

to those things that you've constrained it to.

Yeah.

Yeah, it's very much along the lines of what

we've been trying to overcome.

It's-- the technology is catching up to our problems,

I think.

[LAUGHS]

That's exactly right.

Since Ed's not on, I don't think there's a lot more

to discuss about the viewer.

It'll be interesting to see where he goes with it.

And I guess he'll have his new version out for Feb--

well, for March.

I think he's already kind of announced it.

So that'll be fun.

One thing I had was I went in and I dug around a little bit

more in the grid display, CSS grid.

And boy, is that ever cool.

Thanks so much, Raul, for pointing that out.

One thing I noticed is I took a look--

I better share screen.

I took a look at what you'd done with the timing, the foreign

conjunction timing thing.

And I was looking at that.

And-- well, just take a look.

What I did is I made a copy.

But I then put it onto--

It was updated, I mean.

What's that?

I would have just updated it.

Yeah, I didn't want to go in and mess up too much with you.

Because to me, a lot of what I was just suggesting I did

there was just more about--

I mean, I didn't change anything in the display.

But the thing I did change when I go into edit

is I changed the layout.

Because it's actually--

I know you were doing a lot of stuff with it.

It's actually a really simple layout to track, right?

Because once you decide that your grids are

going to look this way, then you can lay them out this way.

And it's going to find those grids.

You've given it basically three columns.

And as long as your divs are children

of this main class, which is the param class, which

is doing the grid, it's going to order all these.

So when you get to a blank spot, all you need to do

is put in a blank div.

And it's going to fill in those blank spots.

It's really simple.
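[Editor's note: a minimal sketch of the grid layout being described. The `param` class name comes from the conversation; all other names and values are illustrative.]

```css
/* Hypothetical sketch of the three-column grid discussed above.
   Child divs fill the grid in source order; an empty div holds
   open a blank cell so later items stay aligned. */
.param {
  display: grid;
  grid-template-columns: repeat(3, 1fr); /* three equal columns */
  gap: 0.5em;
}
/* In the markup, a blank spot is just:  <div></div>  */
```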

And actually, that's kind of what I like about it

is if you were to have to go in and change something

in this table, it's pretty easy to figure out

what you're going to go in and change compared

to a lot of the other tables we've been working with.

But again, I would have made that change on the master copy

because this isn't a high traffic area.

It's early in development.

And I don't see any value to having to maintain

two copies of a page.

Well, no.

What I did is I just created the user version, like my user

version, to play around with it.

And I can go back and copy this over

and do that change for sure.

Yeah.

Yeah.

But again, I was just really impressed

with how simple that one little bit of CSS is.

And the only other thing I did is

I decided to do a div for the heading

so that I didn't need to worry.

Although it was interesting because I still

needed to do the bold on this to get

it looking the proper way.

But--

You could-- I think you could--

oh, because you have a text declaration underlined.

You can't have two text declarations that are the same.

Exactly.

You can't do two at the same time.

So--

Yes, actually, you could.

What you would do is you would have two different anchors,

two different CSS selectors that happen to select the same text.

OK, so you could say give it two classes.

Yeah.

Different names.

Or you have div.heading and .param .heading, for example.

Right, right, because this is--

Yeah, OK, because this is a child of--

yeah, OK, yeah.

Now, the only thing is, wouldn't the CSS that applied

would be the most recent one?

And would that-- the two text declarations still conflict?

Well, you could do it--

hmm.

Oh, I see what you're saying.

Maybe one would erase the other.

That's what I--

I believe with CSS, it just--

if it's got a conflict, it takes the most recent one declared.

And if it was text decoration you're working with,

underline or bold or emphasis or whatever you want to call it.

But the way around it is I've just left the b tag in

as inline.

And that creates the effect that you want,

because it'll put it inline in no problem.

You can-- another thing you can do

is use text-decoration-thickness to achieve the bold effect.

But it would still be text decoration, wouldn't it?

Because text decoration thickness

is a different property than text decoration.

Oh, OK.

Oh, I didn't know that.

OK.

Yeah.

Yeah, if there was a separate property--

or yeah, separate property.

Yeah, you can do it that way.
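[Editor's note: a sketch of the cascade behavior and the separate-property point made here. Class names are illustrative.]

```css
/* If two rules set text-decoration on the same element, the cascade
   keeps only one (the more specific, or later-declared, rule wins);
   the declarations don't merge. */
.heading        { text-decoration: underline; }
.param .heading { text-decoration: overline; } /* wins: higher specificity */

/* text-decoration-thickness is a separate property, so it combines
   with text-decoration without conflicting. Note this thickens the
   underline; it does not bold the text itself. */
.heading-alt {
  text-decoration: underline;
  text-decoration-thickness: 3px;
}
```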

Yeah.

Yeah.

Anyway, that was just what I did playing around with that one.

And I'm just trying to look around

and see if I go back to here.

Now I've got to figure out how to get off my--

uh-- the other one I did.

And I just marked this up.

This is just the same thing.

I just copied it over.

But the reason I did this is that I can take within my grid,

and I can put in a table of contents,

and I can put in my tree.

It just sits like everything else.

And so the other thing that it allows me to do

is it's still just sitting within this grid.

But instead of putting this stuff in,

I've got this long description of what my category tree is.

And then my table of contents sits in this div.

And then they just show up in the grid the same way.

And what that allows me to do is I

can take an overall page, mount it in a grid,

and then actually have it balance properly

so I don't have these things floating all over the place

because I can determine where they are.

And the other nice thing about it is with the FRs,

you can set--

so if I go to this--

it'll be interesting to see.

I haven't tried this with this one.

But when you shrink the page with the FRs,

it will shrink that space in for you.

So you can get that kind of squeezing effect

and not lose as much.

It's much more responsive.

And if you use additional FRs, you

can get amazing things happening when

you get down to a certain size.

It can shrink the size of the image

you've got and all sorts of things

to maintain the information on the page.

So in terms of making pages more responsive,

the grid is a really good way to do that as well.
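[Editor's note: a minimal sketch of the fr-unit behavior described. Names and values are illustrative.]

```css
/* fr units divide the available space, so tracks shrink with the
   viewport. minmax() lets a track compress down to a floor first. */
.page {
  display: grid;
  grid-template-columns: minmax(10em, 1fr) 2fr; /* sidebar, main content */
  gap: 1em;
}
/* Images inside a shrinking track can scale down with it: */
.page img { max-width: 100%; height: auto; }
```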

So what I'm thinking is I may look

at using the grid in some of my category pages

to clean them up and make them more responsive

because I think that will really help too, possibly

with different size screens.

Now I want to play with your page here.

Anyway, that's about all I had.

And feel free to play with my page.

But that's about all I had.

But thank you so much for pointing out that CSS grid

because I wasn't aware of it.

And then looking back, and I think

it was the Jen Simmons video was the one that Ed had put me--

that Ed sent a link to.

And I followed that up as well.

And yeah, it's really interesting

because she's talking about it just as it's introduced,

which I think was 2017.

And it was quite interesting, the whole process

they went through because I guess leading up

to it in the 2010 to 2015, they incorporated the Flexboxes.

But the way they'd done them is they'd done a soft rollout

and made the Flexboxes browser-dependent.

So you had a Mozilla Flexbox, an Explorer Flexbox, and a Safari Flexbox.

And then when they actually launched and said,

well, this is what we're going to do,

they were stuck with people, had already

been using them with all these extra--

had just created a whole bunch of cruft

that nobody wanted to go back and clean up,

even though it wasn't required anymore.

Whereas with CSS grid, they held off and held off,

and then just launched it.

And they haven't had to make changes to it

because when it actually came forward, it was already working.

And it was adopted by, I think, like within the first week,

it was adopted by the four main browsers.

And then after that, the rest of them caught up within a year.

So you're learning something that's very robust

and isn't going to be changing around a lot, which is nice.

So I'm pretty comfortable using that.

And it seems to work very well within the wiki,

within MediaWiki.

Is it just CSS?

Anyway, so that's about all I had.

Good to see you, Arthur.

Always nice to see you showing up.

OK, thank you.

Hope things are going well for you.

Well enough.

Well enough, yeah.

Is there anything else anybody wanted to bring forward?

Not that I've accomplished anything useful.

Oh, I threw another link in the chat about the text

decoration.

OK, great.

I just copied it, so that's perfect.

Yeah.

I'm learning so much more about CSS now.

So a number of things.

I'm just trying to think.

It is the :has() selector.

I was trying to figure out a way to hover and have

multiple parts of an SVG light up at the same time.

So when you're hovering over one class,

another class in another area will light up as well.

And I saw somebody has done it with the :has() selector.

Because if you have :has() on a class,

and there's another class--

oh, thank you, Skip.

See, Skip's posted some CUDA core stuff.

But anyway, if you use the :has() selector and base it

on a class, anything with that class

could light up at the same time.

So you can hover in one area and have another area

light up, which was a problem I was trying to solve

and I wasn't getting anywhere.

And then I found this, and it was nice to find

that kind of a solution.

I haven't tried to implement it yet,

but seeing it could be done is 90% of it.

And the other 10% is then actually

figuring out how to do it.

But it's nice that it can be done.
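[Editor's note: a sketch of the hover technique described, assuming the selector in question is CSS `:has()`. Class names like `motor` are illustrative.]

```css
/* When one part of an SVG is hovered, an ancestor test via :has()
   lets another part elsewhere in the same SVG light up too. */
svg:has(.motor:hover) .motor,
svg:has(.motor:hover) .motor-label {
  fill: gold; /* both regions highlight together */
}
```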

OK.

Well, if there's nothing else, I mean,

we can sit and chat for a bit.

But I don't have anything else really to go forward with.

We'll see what comes of the AI stuff.

I think that's quite interesting.

Yeah.

Very interesting, guys.

So we'll see you next week.

Bye-bye, everybody.

Be safe.

Have a good week.

And you as well.

Bye-bye.