Generative user interfaces
We keep hearing AI is going to change everything, but so far it’s mostly given us helpdesk-like chat interfaces. Mike Ryan thinks it can be better and he’ll show us how with Hashbrown.
Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.
JASON LENGSTORF: Hello, and welcome to another episode of Learn With Jason. Today, on the show, we are going to dig into a subject that has been on my mind a lot lately. It feels like it's the only thing we talk about, you got it, AI. There are a whole bunch of other concerns that I'm not going to get into today; I've talked about them in other places. Other people have talked about them at length. I want to talk about one of the concerns I have that doesn't approach the ethics or morality of AI, but instead approaches the user experience of it.
One of the things that's really challenging to me, about AI, is that it feels like it's been an enormous step backwards in terms of UI and usability. We're going to dig into, maybe, how we might go about solving that problem and how we can avoid giving up the good things about the web as we start to grapple with AI.
And to do that, we are bringing on somebody who has been deep in this space, who's been exploring a lot about how this works. So, let's dig right into this show and meet him.
So, welcome, Mike. How you doing?
MIKE RYAN: I'm doing really well. How you doing?
JASON LENGSTORF: I'm so happy you're here. I'm so excited to dig into this. I feel like this is a topic that has been on my mind. I feel like there are a lot of things that people can talk about when it comes to AI. It seems like the only topic that comes across tech today is some facet of AI, but one of the things I haven't really seen people talk about, and has really started to bum me out, is we have come so far in what we can do on the web in terms of visuals and interactivity and the ways we can display and interact with information, and I just feel like the entire approach of AI is, yeah, yeah, forget about that. It's all text now. [Laughter]. So I guess, as somebody who's maybe been more plugged into this space than I am, my first question is, can you talk a little bit more about who you are and give yourself some context?
MIKE RYAN: Yeah, for sure. My name is Mike Ryan. I am a Google Developer Expert. I'm known in the Angular space. About four years ago, I left my previous job behind, moved all the way across the country and started my own consultancy. Since then, I've been just heads down, kind of in the React space, really working on bringing, like, huge datasets to life in the browser. You know, you've hit on that point already. We've really pushed the web over the first 10, 15, 20 years. I've been in that space, just trying to figure out how to take a huge dataset and give users controls over the dataset, let them visualize it, interact with it.
JASON LENGSTORF: Cool. This feels like maybe you've been thinking about this a little bit. Maybe the first question is, why do you think that we have, like, defaulted to text? Like, what is it about this� this new era that, um, that has led to us not even attempting to make it visual?
MIKE RYAN: I think the answer is partly maturity and maybe also just a lack of imagination in some ways. I remember when ChatGPT came out in December 2022. I was away on a trip with my now-fiancé at the time. I went home and started building with Jippity. It blew my mind. It was amazing. It could only do text. It couldn't... like, you could maybe get it to spit out a little bit of JSON, but it wasn't super great. I think it's just taken a lot of maturity in the tooling to get to a place where we can have these models output more than text.
And so I think... I think the answer to your question is, we've gotten stuck with text because that's what it did really well out of the box. It's time to be more creative or more imaginative with what we can do with this technology.
JASON LENGSTORF: That's maybe been my core contention: you know, it often feels like tech is very susceptible to, like, "shiny object" syndrome. People scramble to do something a little bit worse just because it's newer.
[Laughter].
That led me to being pretty slow to even consider AI because, you know, it kind of felt like it was going to be another crypto, where we were shoving an idea into everything. As we've seen, that's not really how that panned out.
AI, though, feels like it's a little different. I don't see the same sort of isolation and utter failure of AI to make its way out of its core use cases. So, it feels like in some capacity, AI's here to stay. I know that I sincerely hope that it finds its place and stops being the only thing that anybody cares about. It's making a lot of this discourse so tedious.
MIKE RYAN: It is. Huge noise. Huge noise-to-signal ratio in this space right now.
JASON LENGSTORF: Absolutely. We're seeing conference keynotes from companies who have done interesting things who don't even talk about them because they're talking about AI that nobody cares about in their suite. The thing that I do think is going to be important is, as this becomes permanent in some way, as we see that AI has made its way into our workflows, I don't think there's any going back on chatbots or smarter search. We love... that's really what it's good at. Can it go through this dataset and not only search for what I ask for but for things that are like it? I can ask it a question and it can use the data that it has to give me a better answer than I would be able to get on my own. That's a really interesting use case. But what I don't like about it is it's been the same chatbot help desk thing. I'm your friendly robot, can I pop up over whatever you're trying to do and ask you questions? It feels like that's how AI's being implemented right now. It's that annoying, popup chatbot. I want to use it differently. I want to use the web as the web.
So, in my experience, like, the part that is hard about this is that the LLMs are a text-based medium. You can say, give me back JSON and they're pretty good at that. They'll give you back JSON. They can give you back some code but, like, how do we, I don't know, convince it to use our code and not to hallucinate other code? Because one of the big things about the web, what makes it feel good, is this level of polish, and we've got this whole design team working on polish. I don't necessarily want the lowest common denominator React code to be what the LLM spits out. We know that's not good, accessible code. We've all seen the charts. Average code is not very good, so I want to be able to use the code that we've specifically audited and I don't know how to do that. Like, that feels like the big gap that I don't know how to cross as somebody who's even beginning to explore this.
And I know you've been thinking in this space. So I guess the first thing would be, maybe we could talk a little bit about "why" and second, maybe talk about how you've approached trying to correct this.
MIKE RYAN: Yeah, for sure. You're hitting on one of the things I'm pretty passionate about: quality. A design system with patterns that are implemented correctly is way better than any React component I've gotten it to generate. I'm a consultant, so it's worth it for me to pay for these models, and they're not very good at high quality yet. Maybe they'll get there. Maybe that's a place where we'll go eventually. For me, I want to really break out of these chatbots. I get a little cringe at humanizing AI and replacing human interaction with AI; that part gives me a little bit of ick.
There is a sense I feel when it predicts the next line of code correctly or finds me the best restaurant. I feel the magic in those interactions and I've been wondering, how do I take that sense of joy of AI, that, oh, it actually knew what I wanted it to do and helped me do it quicker, and how do I package it that way? I'm not just interested in all the different ways my developer workflow can improve; I want to pass that on to my users in something that's still using React components and feels like a really great web app.
JASON LENGSTORF: Yeah. And so, I mean, I 100% agree with you but if I were to try to start that today, I don't even know where I would begin. I don't know what step zero is. It feels very much like LLMs, question marks, it's [Indiscernible]. In my case, I know that people want to use LLMs. I know that I like an LLM-powered search more than a plain text search, and so for example, on CodeTV, I would love it if somebody was able to come in and instead of having to know to search for Svelte, they could say, what are the modern frameworks people are learning right now? And it could pull up videos on Svelte and Astro. I don't want that to be a bulleted list, right? I don't want it to feed me Markdown. I've done all this work to make CodeTV better. I would love it if it could just show an interface that is like, here are some videos you can watch, with an interface I spent time designing. What is step zero here? What do you do to cross this bridge?
MIKE RYAN: You can roll this yourself. It could take you a good bit of effort. There's this growing concept of generative user interfaces. We're taking generative AI and not using it to generate code, but to respond to users' input, and I have no clue what all this means yet, to be clear up front. I know that it's exciting to me; the possibility of letting an AI generate a user interface on the fly feels fun to at least explore. And so I've been working on an open source project called Hashbrown and it lets you, as a developer, do what we're talking about. You can use a Large Language Model, you can connect it to your React application, have it understand the React components you have available and assemble user interfaces kind of on-demand, dynamically, inside of your React app.
JASON LENGSTORF: So the major question I have, when we talk about this stuff... forgive me because I'm asking a lot of questions that are probably sounding cynical, and if that's how it's coming off...
MIKE RYAN: I think it's earned.
JASON LENGSTORF: What I keep hearing from these magic companies is, oh, yeah, all you have to do is show us your website and we'll generate you... and I'm like, it never works that way. It's always falling short and it takes a manual effort to make it look right. I've seen what you've been working on with Hashbrown and I feel like this is different and I guess my question is, why? Like, what are you doing differently that is allowing it to work? And obviously, we're going to get in and actually look at this in a minute. I'm just sort of structurally curious, how are you getting it to do this the way that I would want it to be done?
MIKE RYAN: Yeah. So, the first part of that is to just make it really explicit and clear. I think AI's only as good as the context you provide it, so Hashbrown is all about having the developer create that context. And the second part of it is, the developer has full control of the React. This is not writing the CSS and the JSX and trying to look like your code; it is your React components and you have full control over those components. We're asking the LLM to pick which one. It's closer to autocomplete than it is to replacing a software engineer. As awesome as AI is, I do think it's still mostly an autocomplete, so we're leaning into that strength of what it actually can do today rather than doing this moon shot, replace-engineers thing.
JASON LENGSTORF: So if I can repeat that back, what you're saying is, instead of saying, my tool will magically consume your site, figure out what the components are and then use those to generate UI. You're instead saying, you need a developer to register the available components into this system at which point, only those registered components can be used to generate UI?
MIKE RYAN: Yep. And we're going to have full control over that and the list of components it's allowed to render. Make this a capability. Don't make this feel like magic, make this feel like an understood, open source thing that a developer can get their hands on.
JASON LENGSTORF: This starts to approach... honestly, this is one of the reasons I wanted to have you on the show. This is the first time I've seen an approach that feels like a... what I would consider a mature approach to AI. It's not, like, if we get enough GPUs, you can let go of the wheel. That's a doomed approach. We're never going to get to the point where I can have a feeling and the computer's going to interpret that feeling. You're always going to want to have your hands on the wheel. If you want something, you have to do the work to get the thing. So what I believe is interesting about what you're doing with Hashbrown is that you're approaching it the way I would expect a software engineer to approach the problem. We've got the components, our design system, our colors. We also know what our data looks like, so we know which components need to be available for which sets of data. What we don't know is exactly what our users are going to want to see. We can't predict what organization of our products or specific pieces of our docs, or any of those things, they might need at any given moment. So what we're doing with a tool like Hashbrown is we're saying, okay, I know what my data looks like. I know what my components look like. If you use this data and these components in any combination that matches what the user asks for, they're going to get a generative user interface that exactly matches what they were asking for and satisfies my desire to build a website that actually looks like my website and isn't just Markdown...
MIKE RYAN: That's exactly it.
JASON LENGSTORF: This is exciting to me. I like this.
MIKE RYAN: If you use ChatGPT and ask it what a particular stock is doing or to find nearby restaurants, it is doing generative UI. Obviously, they know that rich responses are really great there. So let's go build those into our applications. Let's do that thing where we let AI do assembly of really good components for us.
JASON LENGSTORF: And to me, this is also one of those things where, in the long run, we're making it cheaper and we're making it less resource-intensive to do a lot of this LLM stuff. Tools like DeepSeek. There's the new one that came out that is really, way lower in terms of intensity. Those types of things kind of point toward a future where we could self-host our own LLM that is trained on our data and then we're not spending money to send it to Anthropic and OpenAI. What am I looking at? I forget which one it is. Some AWS service that has a confusing acronym for a name. We'd be able to just run it so it's no more resource-intensive than any other server we're running, and I do think the trend is going to go that way because we're putting so much research into this.
MIKE RYAN: My iPhone already has a lot of hardware to run an LLM.
JASON LENGSTORF: Exactly.
MIKE RYAN: Browsers are going to do this. It's just really cool.
JASON LENGSTORF: By treating it like software, instead of, like, magic, we're building out a scaffolding where you've got your data, components and a robust set of instructions, and right now, it goes to this third party, this OpenAI, this Anthropic. In the future, if I can point that to my self-hosted, not that expensive, not that overpowered LLM that's hosted on fly.io, I own my own destiny and I'm not hoping ChatGPT 4.5 will do the UI the way I want. They're still my components; that's what we paid the engineers to write. It's still my data; we paid the data team.
We won't have to worry that they overcorrected and it's mean, or they switched to Grok and now it's horny.
[Laughter].
There's so much externalized risk when you trust a company that's not interested in what you do to control the root of what's happening with your code. I'm ranting a little bit because this is the first time I see light at the end of the tunnel.
[Laughter].
MIKE RYAN: They always want to charge you for the electricity running through their rocks. We have rocks of our own.
JASON LENGSTORF: I want my rocks. You know what? That's the right way to look at it. When the philosophers write about this time, this will be the language they use.
Should we look at how this works?
MIKE RYAN: Yeah, let's dive right into it.
JASON LENGSTORF: As you see in front of you, this episode, like every episode, is being live-captioned. I thought I had it on my... the board, and I definitely do. It's there. So, here is the URL, you can go find that right now if you need a little help or you just want to figure out what I'm saying because I mumble. Thank you so much, Vanessa and White Coat Captioning, for being here today.
We are talking about Generative AI with Mike Ryan. You can find Mike on Bluesky and we are talking about the project Hashbrown, which is� you're the lead maintainer? You're the creator?
MIKE RYAN: Creator/tech lead. I grew up in open source. Have been in open source for a long, long time now. It's an open source project I'm trying to grow a little bit of community around.
JASON LENGSTORF: You sent me a repo and I unpacked it. I'm going to ask you from here, if I want to learn about this and I want to use it, what's the first thing I should do as a developer?
MIKE RYAN: If you want to read about it, the documentation is the best place to start. We're pre-1.0, so we're continuing to work on it. If you're an Angular developer and want to get in on the generative AI, we support you. This will walk you through the setup instructions in terms of picking an AI vendor; we support OpenAI, Gemini.
JASON LENGSTORF: Hold on. This works?
MIKE RYAN: Oh, yeah.
JASON LENGSTORF: What?! My whole life, my whole life I've been� oh, my god. You pick up a little thing like this. Nobody's ever going to write a blog post that I'm going to see, but I get to see this here and it unlocks this new thing in my brain where now I get to know something new and that is what this is all about.
MIKE RYAN: That's Learn With Jason.
JASON LENGSTORF: Okay, so... [Laughter]. So, this is kind of the base setup here, we've got a chat hook. We've got a "use chat." We specify the model. Then we've got a system prompt.
MIKE RYAN: And that's all going to run in our React code. Everything's going to run client-side, whereas a lot of the frontend AI-focused stuff, like the AI SDK, a lot of that runs in your backend and then your frontend consumes it. With Hashbrown, we're doing the whole thing in the frontend, including the system instructions. Are there downsides or pitfalls? I don't know. Probably.
JASON LENGSTORF: I'm trying to think about what the tradeoffs would be and it sort of feels like you have to get the message, send it somewhere and then get a response back and there's not really a lot that putting on the client versus putting it on the server changes, right?
MIKE RYAN: System instruction being on the frontend, I think a lot of companies feel like their system prompt is somehow proprietary. I can make any model leak its system prompt with enough time. I don't think of the system instructions being too proprietary. End users should be able to customize these things. When I use the ChatGPT app, I can give it custom instructions. I think part of making friendly AIpowered apps is to be really transparent about what's going on and let users customize and configure and control it. I've never felt like these things are proprietary.
JASON LENGSTORF: Got it. Yeah, yeah. Okay. I can see people having feelings about that, but I would agree with you. Like, if your entire IP is your system prompt, I feel like you're in a lot of trouble as a company.
MIKE RYAN: A lot of trouble. [Laughter].
JASON LENGSTORF: Okay. So this feels... none of this feels, like, out of my comfort zone, so far. We're doing all this. But what I'm noticing is that, so far, this is a chat interface.
MIKE RYAN: We're starting with chat because Hashbrown kind of layers on top of different AI concepts. It is text under the hood. We're going to have it generate structured data instead of just text. The concept of generative UI builds on top of that. So this Getting Started guide does start with text, how to integrate or communicate with an LLM in your frontend code. As we get through it, we'll get into non-text, for sure.
JASON LENGSTORF: Do you want me to do that here or do you want to jump into this repo you created?
MIKE RYAN: We can do it right in the repo.
JASON LENGSTORF: Let's do that. We'll be writing some code as we go. I've set up my environment with an OpenAI key and then, I don't even know what a Chroma API key is. We've got our environment set up. There's this Content folder here. This was a rather large zip file that I downloaded. I think it was shy of 4 gigabytes. What's in that content folder?
MIKE RYAN: I took all the videos on CodeTV and ripped the files from them and created transcripts with 15-second intervals on the timestamps, and my idea here is to let you type in a question or a search query across all the CodeTV transcripts, and maybe we can get this thing to show the actual YouTube videos, actually linked to the proper timestamp, as part of the search interface. Kind of a fun thing we could probably accomplish in an hour.
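For readers following along at home, the preprocessing Mike describes boils down to grouping caption segments into roughly 15-second windows per video. Here's a minimal sketch of that idea; the segment shape and helper name are assumptions for illustration, not the actual tooling in the repo.

```ts
// Hypothetical shapes; the real repo's tooling may differ.
interface CaptionSegment {
  videoId: string;
  startSecond: number;
  endSecond: number;
  text: string;
}

// Group raw caption segments into roughly 15-second chunks per video,
// so each chunk can be embedded and linked back to a timestamp.
function chunkTranscript(segments: CaptionSegment[], windowSeconds = 15): CaptionSegment[] {
  const chunks: CaptionSegment[] = [];
  let current: CaptionSegment | null = null;

  for (const segment of segments) {
    if (!current || segment.startSecond - current.startSecond >= windowSeconds) {
      // Start a new window once the current one spans the full interval.
      current = { ...segment };
      chunks.push(current);
    } else {
      current.endSecond = segment.endSecond;
      current.text += " " + segment.text;
    }
  }

  return chunks;
}
```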
JASON LENGSTORF: I like that and we have exactly one hour to do it. We've got three apps in here. I saw this is an Nx monorepo.
MIKE RYAN: We've got an Express backend. There's a Tools folder that will do things to get data into our database, and a React app for the frontend.
JASON LENGSTORF: Okay. And this one, are we okay to share this? This is public, right?
MIKE RYAN: Have the data, have the code. I think lots of people could have fun with this dataset, assuming you're okay with everyone playing with the transcripts?
JASON LENGSTORF: Absolutely. I have one of the few... that's not true. There's a lot of people who generate a ton of content, but I am happy to put it out in the open. It's got video, it's got images. You can actually use it in pretty interesting ways, I think.
So, we've got our tools, our server, our frontend. Are we going to be looking at these or the turkey's already in the oven?
MIKE RYAN: I think it might be looking at the server, main.ts, to see what's going on in there.
JASON LENGSTORF: This is the one that created the transcript?
MIKE RYAN: We are going to run this one, "push transcripts to Chroma."
JASON LENGSTORF: Got it. Okay. Got it. There's the embeddings and then we're doing all the chunk segments. Okay. So, we have done this. You can hear us talk about this on the very first episode of Web Dev Challenge if you're curious about the chunking.
Tell me what Chroma is.
MIKE RYAN: Chroma is a vector database. I think it might be based on SQLite. I really like it just for spinning up, like, novel little apps like this where it's like, okay, I don't want to pay a big service provider to store data.
JASON LENGSTORF: Very good for develop experimentation?
MIKE RYAN: Absolutely. It has Go, Python and Node.js bindings around it. That's probably the first step, to get Chroma running.
JASON LENGSTORF: All right. I'm here. I'm in the root of this folder.
MIKE RYAN: Great. Let's just start with "npm run chroma." This is going to get the...
JASON LENGSTORF: Chroma. It's there, I just hit the wrong button. Whoooooo.
MIKE RYAN: Someone did a great job. [Laughter].
JASON LENGSTORF: This is wonderful.
MIKE RYAN: That's all we really needed to do to get Chroma running. It is going to save everything locally in the Chroma folder as we interact with it, a very disk-oriented database. We're going to start pushing those transcripts to it, and so I think it's called "npm run push-transcripts-to-chroma."
JASON LENGSTORF: I started using Warp and they have the autocomplete and they guess based on your last command. Very skeptical about so many things we're doing with AI, but this feels like one of the ones that is definitely okay. So, we found 142,376 transcript chunks and we are smashing through those pretty dang quickly, actually.
MIKE RYAN: Normally, it would take a long time because Chroma will create the embeddings for you. I pre-baked the embeddings.
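For reference, pushing pre-baked embeddings into a Chroma collection looks roughly like this with the chromadb Node client. The collection name, metadata keys, and connection options are assumptions; the repo's actual script may differ.

```ts
import { ChromaClient } from "chromadb";

// Assumed shapes and names; connection options also vary by chromadb version.
interface TranscriptChunk {
  id: string;
  videoId: string;
  startSecond: number;
  endSecond: number;
  text: string;
  embedding: number[]; // pre-baked, so Chroma doesn't have to compute it
}

async function pushChunks(chunks: TranscriptChunk[]) {
  const chroma = new ChromaClient({ path: "http://localhost:8000" }); // local Chroma server
  const collection = await chroma.getOrCreateCollection({ name: "transcripts" });

  // Insert in batches; passing embeddings explicitly skips Chroma's own embedding step.
  const batchSize = 500;
  for (let i = 0; i < chunks.length; i += batchSize) {
    const batch = chunks.slice(i, i + batchSize);
    await collection.add({
      ids: batch.map((c) => c.id),
      embeddings: batch.map((c) => c.embedding),
      documents: batch.map((c) => c.text),
      metadatas: batch.map((c) => ({
        videoId: c.videoId,
        startSecond: c.startSecond,
        endSecond: c.endSecond,
      })),
    });
  }
}
```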
JASON LENGSTORF: That looks like it's going to take another 20 seconds or so. What do we want to poke around in the codebase to get our bearings?
MIKE RYAN: Let's take a look at the three API endpoints. This is an Express app and the first endpoint that we're setting up is a chat endpoint. This is something that every Hashbrown developer will have to start off with: they need to actually expose some way for your frontend code to call your LLM provider of choice. So, in this case...
JASON LENGSTORF: I can see here, we're streaming.
MIKE RYAN: Yeah, it's going to do all the streaming for you. It's going to do all the error handling for you. The streaming is nicely encoded. The frontend code is there and it decodes it for you. It covers both of those hard parts of getting a streaming UI or AI thing kind of working. This is all the code you have to write for it. Call whatever provider you want to, give it the API key and that will give you an async iterator.
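The endpoint Mike describes amounts to: accept the Hashbrown request body, hand it to a provider adapter with your API key, and write each streamed chunk straight back to the response. A rough sketch of that shape is below; `streamFromProvider` is a stand-in for whatever adapter Hashbrown ships for your vendor, and its name and signature are assumptions, not the library's actual API.

```ts
import express from "express";

const app = express();
app.use(express.json());

// POST /chat: proxy the frontend's Hashbrown request to the LLM vendor
// and stream each chunk back as soon as it arrives.
app.post("/chat", async (req, res) => {
  try {
    // Placeholder for the Hashbrown provider adapter (OpenAI, Gemini, etc.);
    // assumed to return an async iterator of encoded frames.
    const stream = streamFromProvider({
      apiKey: process.env.OPENAI_API_KEY ?? "",
      request: req.body,
    });

    res.header("Content-Type", "application/octet-stream");
    for await (const chunk of stream) {
      res.write(chunk);
    }
    res.end();
  } catch (error) {
    console.error(error);
    res.status(500).end();
  }
});

app.listen(3000);

// Placeholder signature only; consult the Hashbrown docs for the real adapter.
declare function streamFromProvider(options: {
  apiKey: string;
  request: unknown;
}): AsyncIterable<Uint8Array>;
```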
JASON LENGSTORF: I've seen people going, we have to get streaming working. If you're not doing this in a "how do we stream back HTML to the browser," it really is as nice as just, hey, yeah, you can write back each chunk as it comes in.
MIKE RYAN: I've done all the error handling for you.
JASON LENGSTORF: Nice. So then we've got a search input here?
MIKE RYAN: Yeah, this is what allows us to search those transcripts. It's going to take in a query and a value k, which is how many to give back. And what we're going to do is take that query string and create embeddings for it. Embeddings are a numerical representation of language, and we're going to pass off those embeddings, along with all the episodes that we've got, to our Chroma database collection. We're going to get back anything that matches that query in those transcripts. That will give us those actual transcript segments, and we return them back to the frontend for us to play around with.
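A sketch of that search endpoint's core logic, using the OpenAI embeddings API and the chromadb Node client. The embedding model, collection name, metadata keys, and the episode lookup are assumptions for illustration.

```ts
import OpenAI from "openai";
import { ChromaClient } from "chromadb";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const chroma = new ChromaClient({ path: "http://localhost:8000" });

// Embed the query, ask Chroma for the k nearest transcript chunks,
// then join each hit back to its episode via the stored video ID.
async function searchTranscripts(query: string, k: number) {
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumed; must match the model that produced the stored embeddings
    input: query,
  });

  const collection = await chroma.getCollection({ name: "transcripts" });
  const results = await collection.query({
    queryEmbeddings: [embedding.data[0].embedding],
    nResults: k,
  });

  // Chroma returns parallel arrays, one list per query.
  return results.documents[0].map((snippet, i) => {
    const meta = results.metadatas[0][i] as {
      videoId: string;
      startSecond: number;
      endSecond: number;
    };
    return {
      transcriptSnippet: snippet ?? "",
      startSecond: meta.startSecond,
      endSecond: meta.endSecond,
      episode: findEpisodeById(meta.videoId), // hypothetical lookup into the episode list
    };
  });
}

// Placeholder for however the server maps a video ID back to episode details.
declare function findEpisodeById(videoId: string): unknown;
```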
JASON LENGSTORF: Okay. Got it. Cool. Very cool. So then, let's see. We got our results. We're iterating over those. This is the structured data you were talking about. This meta is coming out of...the episodes or the... no, we're passing off the... collection query. Where do these episodes end up?
MIKE RYAN: So, if you scroll down just a little bit on the return object�
JASON LENGSTORF: Aahhh, here we go.
MIKE RYAN: I'm including the ID of the video, the start second and the end second. So that's all the metadata I put into Chroma. When we get one of these documents back out, I'm using that video ID to find the actual episode; that way, we can play with it in our response.
JASON LENGSTORF: Got it. We send that back as JSON so we're getting structured data back instead of, "great question." [Laughter].
MIKE RYAN: We will let it know how to actually build a user interface.
JASON LENGSTORF: We've got specific episode lookup.
MIKE RYAN: Pretty straightforward, not hitting Chroma, just hitting the Learn With Jason website.
JASON LENGSTORF: Let me see how we're doing this. So close. We've got some types. We're getting a lot of data out and that appears... that's it, which is great. A nice, lightweight server, keeping it simple there. And up here, where we... I guess, so there's a lot kind of scaffolded out here. Are there any pieces that are sort of prebaked that you want to talk through before we start writing code or are these stubbed out?
MIKE RYAN: Some of them have code, they have APIs. It's nice to know that these exist.
JASON LENGSTORF: So we can use "use episode, send an ID." "Search episodes" is going to send off our search term and give us back the JSON.
MIKE RYAN: Inside of app.tsx, I've already configured Hashbrown. You use the Hashbrown provider, somewhere high up in your component hierarchy, and give it the URL to the chat endpoint that you exposed.
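That provider setup, as described, looks roughly like this; the package and prop names are taken from the conversation and may not match the published Hashbrown API exactly.

```tsx
// Package and prop names per the conversation; check the docs for exact spelling.
import { HashbrownProvider } from "@hashbrownai/react";
import { Search } from "./search"; // the stubbed search component in the repo

export function App() {
  return (
    // Point Hashbrown at the Express chat endpoint exposed above.
    <HashbrownProvider url="http://localhost:3000/chat">
      <Search />
    </HashbrownProvider>
  );
}
```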
JASON LENGSTORF: Got it. The basics here are, once you've got this chat endpoint, we could have skipped the search endpoint and episodes endpoint and used that "use chat" hook and had the very familiar, chat-based, I have a question, it sends back some text. We want to go further than that, which is why we've got more going on here. In the app...
MIKE RYAN: Super stubbed out.
JASON LENGSTORF: So stubbed out search component. And that is doing a search header. And our search header is a form that takes the input and, um...
MIKE RYAN: A button to submit it.
JASON LENGSTORF: Excellent. Okay. That's great. We're looking at some pretty simple, solid stuff. We are finished with the embeddings so I'm going to run the app...
MIKE RYAN: Yeah, so, "npm run server." And "npm run frontend."
JASON LENGSTORF: Cannot find configuration for task.
MIKE RYAN: Let's see...
JASON LENGSTORF: Npm server should be running, uhh...
MIKE RYAN: I might have messed with the package.json on this.
JASON LENGSTORF: Oh, it needs a command, right?
MIKE RYAN: I think I messed it up, it runs the actual server. Let's do "nx run server."
JASON LENGSTORF: Undo whatever I did here. Okay. And we're going to try that one more time...what are you mad about? Both project and target have to be specified.
MIKE RYAN: Let's see. Oh, this is early morning. I need to have more coffee. It should be "serve server."
JASON LENGSTORF: Oh, okay. I gotchu. This Nx thing, the multirun thing.
MIKE RYAN: My friend, James Henry, ran this and did such a great job.
JASON LENGSTORF: Just killer. We're running on Port 3000 here. We're running on 4200 here. Let me open this up...all right. We've got a basic setup. If I type something now, it'll do nothing. We'll do Preact. We want to make a generative UI out of this. What is my step?
MIKE RYAN: Nice to see Mike's on. Hi, Mike. How are you doing? Lots of Angular friends showed up.
JASON LENGSTORF: Sorry, you told me to do something and I wasn't paying attention.
MIKE RYAN: Let's start with text. We need to tell the AI how to call these endpoints and then we'll have it generate a user interface.
JASON LENGSTORF: That's going to happen in the Search component?
MIKE RYAN: Yeah.
JASON LENGSTORF: Okay.
MIKE RYAN: From the Hashbrown AI React package, we're going to use the hook and set up a chat interface here. We'll do it below Line 7.
JASON LENGSTORF: Is it� is it destructured�
MIKE RYAN: Squiggly brackets.
JASON LENGSTORF: Curly bois. Use chat. Do I need to pass any options?
MIKE RYAN: Yeah, this is going to be a big ol' config object.
JASON LENGSTORF: And out of that, we're going to get messages and "send message." And my config object is?
MIKE RYAN: So, we're going to give it a model. Let's use Jippity-4.1 and give it a system instruction. This is where we're going to basically tell the AI how to behave, give it a variety of instructions. For right now, we can say, "you are a friendly assistant" and just leave it at that.
JASON LENGSTORF: You are a friendly assistant. Okay.
MIKE RYAN: We'll come back and add some more things to do. Just to prove we have everything set up, in handle search here, on 17, we'll call "send message." It is going to take an object as a parameter and that object is going to have a role, and this will be "user" for the role. And then it'll have content, which will just be the message.
We can render out the messages. I'm fine to do a pre, if we want to, and render the raw JSON.
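Here's roughly what that first chat-only pass looks like; the hook and option names are paraphrased from the conversation and may differ from the published Hashbrown API.

```tsx
import { useState, type FormEvent } from "react";
import { useChat } from "@hashbrownai/react"; // package name per the conversation

export function Search() {
  const [query, setQuery] = useState("");
  const { messages, sendMessage } = useChat({
    model: "gpt-4.1",
    system: "You are a friendly assistant.",
  });

  const handleSearch = (event: FormEvent) => {
    event.preventDefault();
    // Each message carries a role and content, much like the OpenAI chat format.
    sendMessage({ role: "user", content: query });
  };

  return (
    <>
      <form onSubmit={handleSearch}>
        <input value={query} onChange={(event) => setQuery(event.target.value)} />
        <button type="submit">Search</button>
      </form>
      {/* Dump the raw messages while we prove the wiring works. */}
      <pre>{JSON.stringify(messages, null, 2)}</pre>
    </>
  );
}
```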
JASON LENGSTORF: Okay. So we're going to send in messages and we'll format it a bit. That todo is also done. We're not searching episodes yet but we are running our test here. We can say� what's one of the classic ones? How many r's are in "strawberry"?
Heyyyyy! We've done it, everyone.
MIKE RYAN: Done. This is generative UI right here. We made text worse. [Laughter].
JASON LENGSTORF: Harder to read, harder to use, more expensive. Goddammit, this is it. [Laughter]. So, this is kind of our proofofconcept here. It's doing what we expect. We can ask a question, it gives us an answer. It's showing us it's an LLM. So, now what?
MIKE RYAN: Maybe let's start with teaching it how to search for episodes and make sure that that part's working. There's going to be a lot of things to unpack here, but we're going to start with, above the "use chat," we'll use "use tool."
JASON LENGSTORF: Search episodes tool, like this? Use tool.
MIKE RYAN: And so we're going to need to give this a couple different things. We'll start with a name. This needs to be a CamelCase name. And then we're going to give it a description. We need to tell the LLM basically what this does and so, let's tell the LLM that this is going to let it search episodes by their transcript.
JASON LENGSTORF: Search episodes by their transcripts.
MIKE RYAN: Yeah. Okay. So, from here, we need to give it a schema, and what this schema is, is the parameters we ask the LLM to generate when it calls our function. This is going to be an object. Hashbrown... this is going to be a schema definition. So, Hashbrown ships with a schema library we call Skillet. The difference is that Skillet is really optimized for Large Language Models. So we're going to import... it's called s, from Hashbrown AI Core, and then we're going to use s.object.
So this is going to take two parameters. The first one is going to be a string. Hashbrown forces you to give descriptions to your schema.
JASON LENGSTORF: Because then the LLM knows why it's there.
MIKE RYAN: Exactly. You don't want it to be optional here. You really want to give the LLM as much context about your data as you can give it.
JASON LENGSTORF: So this would be an object containing details about the episode?
MIKE RYAN: Search term or search query.
JASON LENGSTORF: Details� is it details or just the search query?
MIKE RYAN: It's just the search query.
JASON LENGSTORF: Containing the search query.
MIKE RYAN: So the second parameter is an object. We can specify the keys. K is great.
JASON LENGSTORF: That's a number.
MIKE RYAN: Yeah.
JASON LENGSTORF: And this is the number of episode results to return from the search.
MIKE RYAN: Yep.
JASON LENGSTORF: And is that it? It doesn't need a second argument?
MIKE RYAN: Doesn't need a second argument, nope.
JASON LENGSTORF: What was the name of the second one?
MIKE RYAN: I think it was Search Term.
JASON LENGSTORF: Let's look, just to be safe. S.string. And, the term we use for searching episodes.
MIKE RYAN: Yeah.
JASON LENGSTORF: Any second parameter there?
MIKE RYAN: No second parameter.
JASON LENGSTORF: It needs a handle.
MIKE RYAN: We're going to give it a deps array. Now we'll write a handler function. This will be an async function. Yeah. Cool. So, this...
JASON LENGSTORF: Does that get any...
MIKE RYAN: I was one step ahead of you, I'm sorry.
JASON LENGSTORF: I'm jumping on you, sorry.
MIKE RYAN: One will be the input we described with the schema and one will be an abort signal.
JASON LENGSTORF: Does this know� it does get typed. Awesome. Cool.
MIKE RYAN: One of the things we've done with Hashbrown is made sure everything is as strongly-typed as possible. Everything with Skillet is strongly-typed. Inside of here, yeah, we'll just await search episodes and pass in input.searchTerm and input.k.
JASON LENGSTORF: Input.searchTerm and input.k.
MIKE RYAN: Always support cancellation; that's my philosophy on these things.
JASON LENGSTORF: Someday I'm going to learn what this means. [Laughter]. I assume I need this result?
MIKE RYAN: Yeah. We can return it if we want to. So, yeah, now we have our tool. We're all happy and we're going to take that tool and drop that into our list of tools in our chat hook.
JASON LENGSTORF: Is that an array, as well?
MIKE RYAN: It is. So, in tools: the search episodes tool.
Now if we go back to our proof-of-concept, in theory, we could ask it to search the transcripts via tools.
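Put together, the tool definition they just walked through looks roughly like this; the useTool and Skillet signatures are paraphrased from the conversation, so check them against the Hashbrown docs before relying on them, and the searchEpisodes import path is an assumption.

```tsx
import { useChat, useTool } from "@hashbrownai/react"; // names per the conversation
import { s } from "@hashbrownai/core"; // Skillet schema helpers
import { searchEpisodes } from "./search-episodes"; // assumed path to the existing API helper

export function useEpisodeSearchChat() {
  const searchEpisodesTool = useTool({
    name: "searchEpisodes",
    description: "Search episodes by their transcripts",
    // Every Skillet schema node takes a description so the LLM knows why it's there.
    schema: s.object("An object containing the search query", {
      searchTerm: s.string("The term to use when searching episode transcripts"),
      k: s.number("The number of episode results to return"),
    }),
    handler: async (input) => {
      // input is strongly typed from the schema above; an abort signal is also available.
      return await searchEpisodes(input.searchTerm, input.k);
    },
    deps: [],
  });

  return useChat({
    model: "gpt-4.1",
    system: "You are a friendly assistant.",
    tools: [searchEpisodesTool],
  });
}
```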
JASON LENGSTORF: So which episodes mention, um, we'll say, Nx?
MIKE RYAN: So you can see it did a search for Nx and got back a variety of transcripts that potentially match that.
JASON LENGSTORF: Nice. Okay. Here's a piece of transcript and here's...NX. Okay. All right. I get it. I see what it's doing.
MIKE RYAN: Yep. So we scroll all the way down, we should see a final assistant message where it created some Markdown, where it used that data to provide a response.
JASON LENGSTORF: Uhh. Uh-huh. Uh-huh. Here's our Markdown. We can see it is providing interesting information. And, like, what we could do, as well, is probably fix our system prompt to, like, hey, this one's actually about Nx as opposed to mentioning it, so prioritize titles over mentions inside the body of the thing. But, yeah. Cool. All right.
MIKE RYAN: We've at least got the data loaded into it. I want to hit on something here. One of the things I really love about Hashbrown's tool-calling model is the authorization story you've got. With Hashbrown's tool calling approach, since it's running in the frontend, it's using your pre-existing auth mechanisms.
JASON LENGSTORF: Auth is a pain. It's nice if you can leverage your own cookies and just kind of make that work. Cool. Okay. So, we've got� the search is working. And, uh, let's see, we've got 30ish minutes left.
MIKE RYAN: Let's change this over now to maybe show a list of videos. We'll pause on the Hashbrown for a second and let's make a simple React component that maybe shows one of these results off.
JASON LENGSTORF: Okay. So we're going to do that. We'll call this episode.tsx. That's going to... let me just actually copy... not that. Not that. You seem small. We'll just pop this in here and this is going to be a div with a class name of, um...oh, this is going to be a whole thing, isn't it?
MIKE RYAN: I try to make it not a whole thing. Hopefully.
JASON LENGSTORF: So what we probably want to do here is show our, um...our image? We want to show... the title.
MIKE RYAN: Yeah.
JASON LENGSTORF: And then we want to show, like, the short description. I don't know if it's a short description or we'll have to truncate. We want our... just do...
MIKE RYAN: Maybe the transcript snippet?
JASON LENGSTORF: Oh, yeah, that's a good idea. So that's going to be our link. And let's add...the transcript snippet...okay. So that'll give us our bits there and then I can get rid of...inputs for now, we'll get back to the props. We're going to call this "episode." And we can get rid of all of these. We're not using them. So that's a basic episode. This isn't going to look great, but it's going to at least let us plug in our pieces. Did you want to style this up and get it looking right and drop it in?
MIKE RYAN: If we have time, I'm happy to make it styled.
JASON LENGSTORF: Let's do the very, very basics. So, we'll do "episode." You're using CSS modules. I know how those work so we're going to episode.module.css. We'll go "display flex." "Flex direction column." We'll set a gap of, we'll say, 15 pixels. And then we're going to give it a border of 1 [Away from mic].
MIKE RYAN: I stole all the colors and typography for CodeTV. So if you look at styles.css. I think I even kept the names for you. It should all be somewhat familiar.
JASON LENGSTORF: Sick. Cool. I appreciate you doing that. Now we're going to find out that I don't even remember what I did with my own CSS. This is going to be fine. So, we'll do a border radius of 3 pixels. We can use nesting. I think this works.
MIKE RYAN: I don't think that API returns the images. You can tell me if I'm right or wrong. I don't know if we have the image URLs back from that.
JASON LENGSTORF: Oh, it doesn't? That's really a bummer. [Away from mic] I thought I did.
MIKE RYAN: No. There is that host image.
JASON LENGSTORF: Neither one of these are the episode image. These are the people in the show. Dang. No, that's fine. We can skip that part.
MIKE RYAN: I don't want you to go style an image and find out later.
JASON LENGSTORF: No, that's fine. Maybe we embed the YouTube? We're going to todo that. That's going to be a stretch goal, to do the video player, because that's probably not going to be that hard but I don't want to rabbit hole on it and not get to anything else.
It's going to be a box with regular spacing, and we'll set the H2 and paragraph to have a margin of zero and that way, we should just get even spacing. It's not going to be the prettiest thing in the world, but it is going to do what we wanted, which is show what these things look like when they're out there. And then when we get our messages here... well, actually, I don't know what we do next.
MIKE RYAN: Yeah. So, let's make one change to that episode component and get one piece of data in it. Let's set up a prop for just the transcript snippet, as a place to start.
JASON LENGSTORF: What are we doing here? We need this. We need curly bois. Oh, my god. And you wanted the transcript snippet.
MIKE RYAN: Yeah.
JASON LENGSTORF: I swear to god, computer, if you don't [Away from mic] the hell out. Transcript snippet string. And we'll drop that in [Away from mic]. Okay. So that'll be our basic.
MIKE RYAN: Yeah. So, we're going to swap out "use chat" for a different hook called "use UI chat." It's from Hashbrown AI.
JASON LENGSTORF: Is it double caps?
MIKE RYAN: Lowercase "i."
JASON LENGSTORF: This is almost happy. Array?
MIKE RYAN: Call "expose component." The first argument to this is going to be the actual function for our component. So, we'll pass in Episode.
JASON LENGSTORF: Okay.
MIKE RYAN: The second's going to be curly bois where we give it metadata. So we're going to give it a name, CamelCase, like the tools. We'll give it a description, what it does.
JASON LENGSTORF: Displays a single episode preview.
MIKE RYAN: Yeah. And then, let's give it some props. So, we want the LLM to actually be able to bind data to this, so let's have it bind the transcript snippet, so let's do an s.string. Let's say the snippet of the transcript that was relevant to the query, or something like that.
JASON LENGSTORF: Okay.
MIKE RYAN: Cool. We're going to make another refactor here. Instead of doing messages, we're going to do last assistant message.
JASON LENGSTORF: Okay.
MIKE RYAN: With a generative UI, we could have this be a chat. I like the idea where we type in the query and the UI is what we get back, so this last assistant message helper gets the last one out of there. So now in our template, what we want to do, instead of doing JSON stringify, we can get rid of this. We'll render out... if we have a last assistant message, we'll do lastAssistantMessage.ui. Otherwise we'll render nothing.
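The generative UI step, sketched from the conversation; exposeComponent, useUiChat, and lastAssistantMessage are the names used on stream and may not match the published API exactly.

```tsx
import { useUiChat, exposeComponent } from "@hashbrownai/react"; // names per the conversation
import { s } from "@hashbrownai/core";
import { Episode } from "./episode";

export function Search() {
  // useUiChat also returns sendMessage, wired to the same search form as before.
  const { lastAssistantMessage } = useUiChat({
    model: "gpt-4.1",
    system: "You are a friendly assistant.",
    tools: [/* the searchEpisodes tool from earlier */],
    components: [
      exposeComponent(Episode, {
        name: "Episode",
        description: "Displays a single episode preview",
        props: {
          transcriptSnippet: s.string(
            "The snippet of the transcript that was relevant to the query"
          ),
        },
      }),
    ],
  });

  // The LLM's response is a tree of the exposed components, ready to render.
  return lastAssistantMessage ? lastAssistantMessage.ui : <p>No results yet.</p>;
}
```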
JASON LENGSTORF: Okay. Actually, I'm going to do... I'm going to do one of these just so that if it is... just so that we know that it's not, like, silently failing. We'll know we didn't find anything.
Okay. So, now, no results. And, let's see, show me...episodes with Mike Hartington.
MIKE RYAN: So now it's going to start generating the UI. It's going to take a little bit of time to see them and we got one out.
JASON LENGSTORF: There we go.
MIKE RYAN: So one of the things that you might have noticed is it took a little bit of time because it was sitting there generating that transcript string, and this is something we thought a lot about. We don't want it to work that way. Let's go back to the code and make a quick change to how we define those props. With Skillet, you can use streaming: if you do s.streaming string, what it's going to do is stream that text into our component so we can see it get built live.
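The streaming variant of those props, per Mike's description; the exact Skillet spelling (s.streaming.string here) is an assumption.

```tsx
import { exposeComponent } from "@hashbrownai/react";
import { s } from "@hashbrownai/core";
import { Episode } from "./episode";

// Streaming props: the text fills the component token by token instead of
// waiting for the whole string to finish generating.
export const episodeComponent = exposeComponent(Episode, {
  name: "Episode",
  description: "Displays a single episode preview",
  props: {
    title: s.streaming.string("The title of the episode"),
    transcriptSnippet: s.streaming.string(
      "The snippet of the transcript that was relevant to the query"
    ),
  },
});
```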
JASON LENGSTORF: Okay. Let's immediately fix this and make it better. So we're going to come out here and add in the title, which is also going to be a string. And then, we'll swap this out, like so. And now when I go out here, "show me episodes that feature musical elements."
MIKE RYAN: I'm going to trust you know what your videos have. It looks kind of like it's streaming back to us, but it streamed really quickly and that might just be an effect of the model.
JASON LENGSTORF: Could be. I also screwed up the title of the model.
MIKE RYAN: Did we save?
JASON LENGSTORF: Did I save? What a question. Let's try that one more time. Maybe I just got it wrong. Title�
MIKE RYAN: No type errors.
JASON LENGSTORF: Title. Title. Title. Theoretically speaking, if I refresh, show me episodes with music...
MIKE RYAN: Now it's going to start building that response...
JASON LENGSTORF: There we go.
MIKE RYAN: We see it streaming into that prompt and get results faster that way.
JASON LENGSTORF: What?! Good. Yeah. This is great. [Laughter]. I mean, this is great. So now we're getting a little bit closer. This is looking better and better. Let's take it a step further by... actually, here's a question. If I want to contain this, I could just throw this inside of, let's say, a, like, section class... actually, we'll just hardcode the style because we're rebels. [Laughter]. We'll go with a display flex and we'll say, "row wrap." Oh, no, "flex wrap." Flex wrap is going to be "wrap." And gap is going to be 20 pixels. I don't know if it works without a number. And that should, I think, do it...so, then, um...this...let me get out in here. We can say something like, "max inline size would be, I don't know, 30%."
MIKE RYAN: Cool. Make it look a little bit nicer.
JASON LENGSTORF: So now if we do this and we say, "which episodes discuss React hooks?" This should be a ton. And I refreshed the page to get it to use the new component. Which episodes... oh, my god. Hooks. Oh, I'm just bad at this. [Laughter]. It's fine. It's fine.
MIKE RYAN: Did it actually make the section? Yeah, we have the section.
JASON LENGSTORF: Display, flex. Oh, my god. We never actually imported our dang style so it was just defaulting. Is it root level?
MIKE RYAN: Default import.
JASON LENGSTORF: And then this is going to be styles.episode instead.
MIKE RYAN: Hey, that looks right. That explains it.
JASON LENGSTORF: I was like, you know? Maybe I'm just not as good at this as I thought, which is true. [Laughter]. But I was like, I think I'm better than this though. [Laughter].
MIKE RYAN: That looks great.
JASON LENGSTORF: Now we're getting actual episodes. They're streaming in. If I was doing better, it would, you know, it would look nicer, but it's using my component. Like, granted it's my component and it's bad, but it is mine.
MIKE RYAN: You can make it look or work however you want it to. It can have nice animations, allow for user input. All the LLM has done is chosen that component to render.
JASON LENGSTORF: This is maybe our moment to install that... here's our frontend. Let me npm install... what's it called? Lite YouTube. This will let us really quickly add in the YouTube embed in what I believe will be an automatically-responsive way. We're going to find out really fast.
MIKE RYAN: That would be so cool.
JASON LENGSTORF: And the way that we do this is, we have to grab this thing...and then this is a... this is going to go in index.html. Why not. We'll just make it universal. Scope that properly. So, instead we won't. We'll also bring this in, which is also going to sit down here. I guess we'll probably want that before we get this in here. We're going to drop this in, just like this, and then we need a video ID. Why are you mad? I know... oh, is React going to yell at me for just trying to use...
MIKE RYAN: Just trying to use an element? Probably.
JASON LENGSTORF: How do I get you to leave me alone? Maybe it'll just do nothing and...YouTube ID and then that's going to come in as another string. I need to... I seem to have managed to set up... I'm trying to give Cursor the old college try here but I can't get it to do the format on save. This is just going to load, so we need to go back and actually give it the YouTube ID. YouTube ID is s.streaming string. The playback ID of the YouTube video. This will be an interesting one to see if it pulls it off because it needs to pull it off the data structure and we're not telling it where it is.
Let's try it. Okay. Got to start the server again...okay. That's running. Back. And, now we say, um, "show me episodes about Svelte." It does nothing because that component is crashing. 404, not found on YouTube. I need a React YouTube embed. React YouTube, easy.
MIKE RYAN: This must be good, right?
JASON LENGSTORF: It's probably fine, vetted by the 450,000 people who download it per week. We'll get that running. We'll come back out here. We're going to grab this. It's just video ID. Video ID and title. So this is going to be completely okay. Don't need any of this stuff, I don't think. Yeah, that's all fine. So we're going to come back in here. We're going to go to Cursor. We're going to drop this into the episode here...import this...we're going to import this from...React YouTube and then down here...it's the title instead and...close it. Format it. Everybody's happy. Probably. We can ignore... we probably can't ignore it, it was 404ing.
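Roughly where the Episode component ends up at this point; everything beyond what was discussed on stream (prop names, class names) is an assumption.

```tsx
import YouTube from "react-youtube";
import styles from "./episode.module.css";

interface EpisodeProps {
  title: string;
  transcriptSnippet: string;
  youtubeId: string;
}

export function Episode({ title, transcriptSnippet, youtubeId }: EpisodeProps) {
  return (
    <div className={styles.episode}>
      <h2>{title}</h2>
      {/* react-youtube renders the embedded player for the given video ID. */}
      <YouTube videoId={youtubeId} title={title} />
      <p>{transcriptSnippet}</p>
    </div>
  );
}
```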
We're going to say, "show me episodes about Angular."
MIKE RYAN: Start the dev server.
JASON LENGSTORF: Try this one more time. Refresh for luck and we say, "show me episodes about Angular." Okay. So we'd have to style it, but look at it go! I mean, this is great. This is... I mean, this is what I would want, right? It's kind of doing the thing. As I was building this out, I would want to put loading states in, you know, create, like, a placeholder for the video so it doesn't kind of pop when it comes in, but that's stuff that now I can control. I can make these choices and do this thing. I didn't have to come in here and directly map these components because we can just say, you know, yeah, use the title of the episode, use the playback ID of the YouTube video, and because it's an LLM, this is the thing it's actually good at: pattern-matching and predicting. This is sort of the dream, right? I can say, this is my episode component, use this out of the response and only use this, and it does the thing.
Okay. We've got about 15-ish minutes left, a little under. Is there anything else you wanted to show or anything you wanted to make sure we highlight?
MIKE RYAN: I would just call it out. I don't know if you want to play with it or not. You did a "if there's no result, show this." We could have given it a 404 component and been like, if you don't find anything relevant, show this instead.
JASON LENGSTORF: We'll put together a quick "not found tsx." And this is going to be..."not found equals." And this will just return...we'll put a div in there for no reason. Yeah. That's fine.
And so, then, out here...we would import... honestly, we can probably just come down here and say "expose component" again. And we're going to say "not found." It's going to let us auto-import that component. And this one doesn't have any props... oh, wait! Did you all see that? Oh, it did show it to me. I was like, did it determine it didn't have props and didn't even autocomplete it? That would be some crazy TypeScript. Not found component and the description is..."show this if no episodes match the search query."
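For reference, the fallback component and its registration sketched from the conversation; the exact exposeComponent options are assumptions.

```tsx
import { exposeComponent } from "@hashbrownai/react";

export function NotFound() {
  return <div>No episodes matched your search.</div>;
}

// Registered alongside Episode in the components array; no props, just a
// description so the LLM knows when to pick it.
export const notFoundComponent = exposeComponent(NotFound, {
  name: "NotFound",
  description: "Show this if no episodes match the search query",
});
```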
MIKE RYAN: Yeah.
JASON LENGSTORF: If I can spell it.
MIKE RYAN: It will probably know what you mean.
JASON LENGSTORF: That's true, but I have to have a little bit of pride in myself. [Laughter]. Um, okay, so then we've got our exposed component. Um, and...what if I just�
MIKE RYAN: Let's refresh.
JASON LENGSTORF: Refresh. And then we'll just say, "noodles."
MIKE RYAN: Watch, someone's mentioned "noodles" one time.
JASON LENGSTORF: Good. Okay. Let's... what have I definitely never said on this show? Liechtenstein. Heeyyyyy! [Laughter]. Good. Good. This is great. Can it be instructed to show the most-viewed or most-liked? I have a question.
MIKE RYAN: Sure.
JASON LENGSTORF: Can I ask this to polyfill with YouTube data because it's got the YouTube ID?
MIKE RYAN: It doesn't have any way to go get that data, I guess.
JASON LENGSTORF: Oh, we need some kind of API thing. That's right. We would have to give it a tool to go and make that request. Then you can start to enhance your results and also plug this into Algolia, which has all the contextual stuff. What if I say, up here... where's my... where did my system prompt go?
MIKE RYAN: Line 27.
JASON LENGSTORF: Line 27. 27. Can I tell it, here, like, when displaying episodes, show the most relevant first and see if that helps?
MIKE RYAN: Yeah, for sure.
JASON LENGSTORF: When displaying episodes, show the episodes that are most relevant and/or most focused on the search topic... search query at the top of the results.
MIKE RYAN: Yeah.
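The tweaked system instruction, roughly as dictated on stream:

```ts
const system = `You are a friendly assistant.
When displaying episodes, show the episodes that are most relevant and/or most
focused on the search query at the top of the results.`;
```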
JASON LENGSTORF: Okay. Let's see how that goes. Remember, that Nx query was hitting, chronologically, anything that matched Nx, but there's an episode about Nx and I want to see if that shows up... let me refresh. Show me episodes about Nx. There it goes.
MIKE RYAN: Look at that.
JASON LENGSTORF: So now it pulled the more relevant thing, even though this one is older. And these ones are more, like, this is using Nx.
MIKE RYAN: Mentions Nx.
JASON LENGSTORF: And this one isn't even using Nx. This is pulling in npx.
MIKE RYAN: That's just the LLM trying to do its best.
JASON LENGSTORF: This is slick. It's not necessarily perfect, right? It's going to have that same LLM squishiness where things are... probably if we ran the search three times, we'd get different sets of results, but it's pretty good. If I come in here and I say, I want to learn about CSS design systems...this will be interesting, actually. Yeah. It starts pulling in CSS and does all these things and so, yeah, like... there's a mention about design systems and CSS. It kind of starts, I would say, this... Open Props is an open design system with design tokens. This was very design-systemy and these are more adjacent, but this feels good. This feels really good, actually.
MIKE RYAN: And the whole idea with Hashbrown, you get control. These are just probability machines and it needs to be as unmagical as possible and you need to understand how it's picking it and why it's picking it.
JASON LENGSTORF: I need it to be non-magical in terms of what it's showing on the screen. I'm okay with it making educated guesses. If you ask me about things that have been on the show, I'll give you a different answer every time. So, I'm going to be equally inaccurate if you ask me what's been on Learn With Jason. And this, knowing that it's starting with the dataset, I'm confident this will be as accurate or more accurate than I am, which feels okay. [Laughter].
MIKE RYAN: Exactly. Yeah.
JASON LENGSTORF: Has this moved me from being very cynical to less cynical? I would say... here's the thing, my cynicism is not about the technology. It's about the breathless push to extract capital because nobody understands this technology, and I think that's a very different level of cynicism. I think LLMs, themselves, are extremely cool. I think trying to sell LLMs to governments and schools because they don't understand the technology enough to not allocate money that's later going to screw them over, that's what I'm cynical about. So, just to be clear. Just to be very clear. [Laughter]. But so, okay, so this... I think... unless you got something quick, I think that's probably the end of what we've got time for.
MIKE RYAN: Yeah, that sounds great.
JASON LENGSTORF: If people want to go further, if they want to build something on their own, where else should people go if they want to dive in?
MIKE RYAN: Our GitHub repository, you click on the GitHub link in the top-right of the page there, that's a great place to start. There are topics of Hashbrown we didn't talk about. We ship a JavaScript runtime in WebAssembly.
JASON LENGSTORF: That scares the shit out of me. I'm not going to lie. [Laughter].
MIKE RYAN: Let's say we had YouTube views properties on there. A tool call is primitive, basic; you get raw JSON back out. If the LLM had a small JavaScript VM and could call .sort, that would be kind of interesting. You can use the LLM to generate small scripts of JavaScript, not whole programs. Small things that kind of glue what the user's asking for to your component code, so if someone does come by and say, hey, sort these by YouTube views, the LLM has the capability to run code that can do that sorting logic for it.
JASON LENGSTORF: So for example, if I was trying to build a generative dashboard and by default, I'm showing a bar chart and you wanted me to... I say, like, hey, can you show me this as a pie chart, then the JavaScript to convert the bar chart numbers into percentages, that could be generated on the fly by the LLM?
MIKE RYAN: Yeah. And if you want to see a demo of that, there's a finance sample in this repository, that generates charts based on what the user's asked for and it uses the JavaScript VM. If you want to get into how crazy generative UI could be, join us on Hashbrown.dev. Send me an email and I'll personally get you onboarded. I want to grow a community around this, of folks who want to push this technology and see what we can build with it.
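A hypothetical example of the kind of small glue script Mike describes the LLM generating and running in a sandboxed JavaScript runtime; this is plain code for illustration, not a specific Hashbrown API.

```ts
interface EpisodeResult {
  title: string;
  youtubeViews: number;
}

// "Sort these by YouTube views" becomes a few lines of generated JavaScript
// operating on the tool's raw JSON output.
function sortByViews(results: EpisodeResult[]): EpisodeResult[] {
  return [...results].sort((a, b) => b.youtubeViews - a.youtubeViews);
}
```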
JASON LENGSTORF: Just to be clear, this is completely open source, this isn't a venturefunded thing, you're not trying to monetize this?
MIKE RYAN: No. Take all the code, fork it, run with it. Go see our system prompts.
JASON LENGSTORF: If this isn't going to be monetized, how do you� how do you sustain this?
MIKE RYAN: If I had an answer to sustainable open source, I'd be a really wealthy person.
JASON LENGSTORF: Can I help? You run a consultancy. [Laughter].
MIKE RYAN: That's right. You can pay me to come do this with you. [Laughter].
JASON LENGSTORF: Mike, I thought I was giving you a softball, dude. [Laughter].
MIKE RYAN: It was a softball, I saw it. Open source is a pretty thankless thing. I would love people to come pay me. There's not a real good answer to that. Yes, come let me teach you about Hashbrown. I'd love to do a workshop and get your developers up to speed on the concepts we hit on. I want an open source project that lets people build cool stuff.
JASON LENGSTORF: This feels like AI for developers in a truthful way. I felt like I understood all of this. I use AI, I'm comfortable, like, incorporating it into things, but whenever I do it, I always get this sort of icky feeling of, I feel like I'm handing over a lot of control in a way I can't completely vouch for. I feel a lot less nervous about this thing. MCP is a similar approach, I feel a lot less nervous when I say, hey, you can use exactly these calls as opposed to, hey, go search the internet and see what you find. Generate code based on your data set. Here's what you're allowed to use, I built these. It makes me feel far more confident in attempting something like this in my own project. So, Mike, thank you so much for spending some time with us.
Chat, if there is anything else you want to know, you can head over to the GitHub here. You can also get into the CodeTV Discord, where all the links that we shared today are in a channel called show-links, I think, and that Discord link is right here on your screen.
Mike, anything else you want to say before we wrap up today?
MIKE RYAN: No, I just really appreciate you having me on today and sharing Hashbrown, and I'm excited to see what generative UI means.
JASON LENGSTORF: I love it. We are going to dig into asynchronous Svelte. We're getting Rich Harris on the show. We're going to talk about Svelte. I'm going to fix that thumbnail and we will see you all next time. Thank you so much. [Laughter].
MIKE RYAN: Thanks, y'all. [Laughter].
JASON LENGSTORF: Bye. [Laughter].