How to work with generative AI in JavaScript

Introduction to Generative AI in JavaScript
Phil Nash introduces the topic of generative AI, emphasizing its novelty and current position in the tech world, especially within JavaScript development. He explains that, although still in its infancy, generative AI offers developers a chance to experiment and innovate due to its recent availability through APIs. Unlike previous trends dominated by Python, Nash asserts that the JavaScript community is well-positioned to leverage these tools across both front-end and back-end applications.
Understanding Large Language Models and APIs
Nash delves into large language models (LLMs) and their accessibility through APIs, such as Google's Gemini, Anthropic's Claude, and OpenAI's GPT. He notes the easy integration of these models into JavaScript applications and highlights the increasing compatibility and interchangeability between different API offerings. Nash also recommends sites like artificialanalysis.ai for evaluating models based on quality, speed, and price.
Working with Tokens and Systematic Prompting
Phil explains the concept of tokens in language models, discussing how inputs are tokenized to generate responses and how this impacts cost and processing. He introduces concepts of prompt engineering, such as zero-shot and one-shot prompting, and explains how using a system prompt can guide the model's answers more effectively. Nash illustrates with examples how models can be tailored to provide better, more accurate outputs.
Advanced Prompt Engineering
Nash continues his discussion on prompt engineering, focusing on how advanced techniques can improve model outputs. He introduces the idea of 'chain of thought' prompting to guide models in providing well-reasoned responses. This section highlights the importance of context and demonstrates with examples how better prompting can lead to more accurate and useful answers from AI models.
Chat and Context Management
Phil explores the implementation of chat features using AI models, including maintaining conversation state and context. He explains how saving the history of messages can improve interaction quality. He suggests tools and libraries that simplify the development of chat systems and handling of context in applications.
Retrieval Augmented Generation (RAG) and Vector Search
Nash introduces the concept of Retrieval Augmented Generation, which combines model-generated responses with contextual data retrieval to provide more relevant outputs. He explains vector search and embeddings, which involve translating content into multidimensional vectors for similarity analysis. The section includes a brief demo and discusses the use of databases like Astra DB for efficient vector storage and retrieval.
Streaming and Front-End Integration
This chapter covers the concept of streaming AI-generated tokens to the client-side and how it enhances user experience by gradually building responses. Nash demonstrates using fetch with streaming capabilities in the browser to render AI outputs progressively. He encourages leveraging modern JavaScript APIs to handle streaming responses efficiently.
Practical Tools and Trends in AI Development
Phil wraps up with an overview of useful tools and libraries for working with generative AI, such as Vercel's AI SDK and LangChain. He discusses the potential of in-browser models and function calling, outlining future possibilities for on-device AI processing. Nash underscores the importance of experimenting with these technologies now to stay ahead in the evolving landscape.
Ah, yeah.
Good afternoon, everyone.
Great to be here at Web Directions again.
Veteran just makes me sound old, but I'll note that I haven't been around for all 20 years of this.
So, not quite as much of a veteran as some. Anyway, let's get into the thing.
Let's, let's talk about generative AI and JavaScript.
You might have already answered the question of why for yourself, which is why you're sitting here in the room, but generative AI is, I think, in an incredible position right now.
It's actually very new, especially in the grand scheme of things.
If Web Directions has been going for 20 years, then generative AI for developers, at least via APIs, without having to run your own models or work at OpenAI or something, has only been around since about the first part of last year, about 18 months at this point. That's when the ChatGPT API burst upon the scene and we could suddenly start using this from code, rather than just using it as consumers.
So this is brand new, and that means we don't really know what to do with it.
I think there's an amount of hype, obviously, around the entire thing.
And once you get hype, you get a kind of skepticism and disappointment, because we don't necessarily have killer apps for it yet.
Although some of you may be using them and disagree with me already. But it being brand new means it gives us all the opportunity to play with it, experiment with it, understand it as best we can, and build our own tools and applications with it.
And that I think is an exciting place to be.
There's also good news: the talk description asks whether we need to know Python for this.
Because much of artificial intelligence in general, and machine learning, is done with Python.
And the good news is that, no, we don't.
JavaScript, in my experience, is number two, only slightly behind Python in terms of the things we can do with it and the ways we can use it. But within the world of JavaScript, with both front-end and back-end development, we are very good at building applications and interfaces and experiences with these things.
And so the good news is that the JavaScript community is almost in a better position to use this than those in the Python community, where, as far as I can see, a notebook with a bunch of variously runnable Python cells counts as a shipped product.
We've got some more work to do, but we can bring this out to our users.
So let's start at the top, with how to work with this stuff.
I'm going to start with the large language models.
I'm going to start with the models that we might want to use.
Because the nice thing is, like I said, we don't necessarily have to run these things ourselves.
Great, if you don't happen to have access to Nvidia's latest set of GPUs: they all come with APIs these days.
And there are a bunch of third party LLM hosting or model hosting kind of companies coming out also providing APIs.
And the big ones: Google has Gemini, Anthropic has Claude, and OpenAI gives us the GPT series.
All of them have nice, easy APIs to work with.
For this talk, I'm going to mostly use Gemini.
I actually really like working with it.
But for the most part, it's all very similar.
It actually turns out that just a couple of weeks ago, the Gemini team made their API compatible with OpenAI's.
So if you've ever used the OpenAI SDK, you can just change the URL to Google's one, and it works as well.
These things are growing and changing all the time, but compatibility is coming with that as well, which is nice.
If you are interested in how to pick a model, because vibes aren't enough, there's a really useful little site called artificialanalysis.ai, which has top-line views on models and how they perform.
There's a lot you can go and evaluate there, but the top line is the trade-off between quality, speed, and price, and I just think that's quite useful to help you decide.
At the far left, the top end of quality is currently OpenAI's o1-preview, which is their supposedly extra-thinking kind of model, and then o1-mini, the smaller one. But then we've got Claude, Gemini Pro, Mistral.
This has changed since the last time I looked at it.
How about that?
But then in speed, like Gemini 1.5 Flash is absolutely far and away the fastest.
And then there's price, which is also important when you want to be developing with these things, because you can rack up some bills pretty quickly.
Case in point: over here on the quality side, OpenAI's o1-preview is the highest quality.
But it's also far and away the highest price.
So there's trade offs to make there.
And again, I like Gemini 1.5 Flash, apparently because I'm cheap.
It's the cheapest and fastest of these top models.
So this is a great thing to use to compare them.
Let's go back to slides.
Yes.
I'm using Gemini from JavaScript.
In this case, on the server side in Node, it's as simple as installing @google/generative-ai, or indeed the OpenAI SDK as well.
There are libraries and APIs for all of this. With an API key and a choice of model, you can then just start asking it stuff.
We can create a prompt, "write a whole presentation on getting started with AI in JavaScript", and then await the model generating a response to that prompt.
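As a rough sketch of what that looks like with the Google SDK (the environment variable name here is my own assumption):

```js
import { GoogleGenerativeAI } from "@google/generative-ai";

// API key from Google AI Studio; the env var name is just an example
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const prompt = "Write a whole presentation on getting started with AI in JavaScript";
const result = await model.generateContent(prompt);

// The response object gives us the generated text back
console.log(result.response.text());
```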
This is not how this presentation was written, I promise.
And so that's super simple to do.
And so this is the kind of example app I'll be using.
It is a bit dark and I apologize for that, but it turns out this doesn't work well in light mode either.
So I can read things out that are on screen, or hopefully it's visible.
But like we can ask things.
Back in the day you'd Google yourself to find out if the internet knew about you; have you ever tried asking a model about yourself?
It's never a good thing.
If I ask this one, which is plugged into Gemini 1.5 Flash, it currently says that Phil Nash is a prominent figure in the Kotlin programming language community, best known for his work as a Kotlin language advocate at JetBrains.
Lies.
All of it lies.
I actually have no idea where it's getting the Kotlin from.
And this is a new response from it from the last time I asked it as well, which is interesting.
There is another Phil Nash who used to work at JetBrains, but works on C++ and not Kotlin, don't know where the Kotlin has come from at all.
Anyway, the idea here is that this is Google's AI Studio.
And it's the same model.
If I ask the same question, we should actually see a fairly similar response.
That'd be interesting to find out.
I haven't actually tried this.
Yes, it is.
It's the same one.
Prominent figure in Kotlin.
Excellent.
I'm not.
So it's pretty cool that we can do this, and it's pretty easy to hook up to a little chat- or search-like input, but it's not very useful, right?
Because right now we just have a consumer-ish model, with its ankles removed actually, because these days, if you use the Gemini app or something, the Gemini model will go and search things on Google and reach out a bit.
Right now we have a model which can generate content and that's it.
Not even very useful content, certainly not correct content.
Thanks Gemini.
But it's not very useful yet.
This is where, we have to step up and do some stuff.
So let's talk about how these kinds of things work at a high level, high enough, hopefully, to get through it quickly.
First up, when we deal with these models, they don't deal with text. They're computers; they like numbers. They break down that text into what they call tokens.
It's important to know that these tokens exist because that's how you normally get charged, by the number of input and then output tokens.
But it's also how they generate things.
Models generate tokens one at a time.
The OpenAI tokenizer is a useful example of this.
This is how OpenAI tokenizes things, and it's got an example.
A token can potentially be a word, but it can also be a part of a word.
And the example here is fantastic, because the many words that map to one token are all one token each.
But the word "indivisible" turns out to be two tokens.
Fantastic.
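If you want to poke at this from JavaScript, here's a small sketch using the js-tiktoken port of OpenAI's tokenizer; the choice of encoding is just an example:

```js
// npm install js-tiktoken
import { getEncoding } from "js-tiktoken";

// cl100k_base is the encoding used by the GPT-3.5/4 era models
const enc = getEncoding("cl100k_base");
const tokens = enc.encode("Many words map to one token, but indivisible does not.");

console.log(tokens.length); // roughly what you'd be billed for as input tokens
console.log(tokens);        // the underlying token ids
```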
But let's go back here. Knowing about these tokens means that, when we get to the actual generation part, we can understand that the model is autocompleting the next token every single time.
We give a model a bunch of tokens, like "once upon a time", and it comes up with an idea of what that next token is, and a token can include punctuation as well, like the comma in "once upon a time, in a".
The models really do only generate one token at a time, but in order to get a full output, that set of tokens is then passed back to the model to generate the next one.
And then all the tokens go back in, including the newly generated one, to generate the one after that.
And that's important when you get to how to prompt these things.
Because the fact that the tokens will always get passed back in, including the generated ones, is useful to us.
We can change the behavior of this generation. As we saw there, from both Google AI Studio and me calling it from the API, I got a very similar response when asking Gemini Flash who Phil Nash is.
So, to take one step back, how we select that next token is also interesting.
The model has a huge, huge input of training data, which has allowed it to build a statistical model of what tokens look like in a row.
And so when we are generating that one next token, it uses the tokens it's given to generate a statistical idea of what the next one is likely to be.
And it actually has that across all of its available tokens.
There's obviously one that's most likely to be next and probably has a higher likelihood of that being the case.
But it also includes some randomness in there so that it doesn't do the same thing every time.
That way it can be a bit more creative. But you can change this behavior, right?
You can limit a few things.
And again, across different models these settings are similar or the same, sometimes available, sometimes not.
But inevitably something like this will be around.
So you can have top-K, which limits the number of tokens that the model can select the next token from, to the number you put in.
So if you've made that top-K 10, it would only ever pick the next token from the 10 most likely tokens.
Top-P, on the other hand, limits the choice to all the tokens that fit within the cumulative probability P that you give it.
So if you say 0.9, that is the top 90 percent of the probability mass that it will pick from.
That was a bit harder.
I'll get back to it.
The temperature then controls, given that set of tokens we're going to select from, how randomly we pick from it.
Do we just pick the first one?
That's very boring, but most likely to be correct.
Or do we pick from anywhere in that statistical space and make up crazy wild stories?
And so if you are trying to build something with generative AI that is, well, I said "factual" on this slide.
That's not exactly right, because these things aren't facts, but something more conservative: keep that temperature down, restrict the top-K or top-P so it only picks the most likely things.
If you want it to be creative, which is of course an option here, we turn the temperature up.
We don't limit those top-Ps and top-Ks, and we give it free rein to be as creative and excited as it wants to be.
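With the Gemini SDK those knobs live in the generationConfig; a minimal sketch of the two ends of that spectrum, where the exact numbers are just illustrative:

```js
// More conservative: low temperature, tight topK/topP, for "just answer the question"
const factualModel = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  generationConfig: { temperature: 0.2, topK: 10, topP: 0.9 },
});

// More creative: high temperature, no tight limits, for storytelling
const creativeModel = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  generationConfig: { temperature: 1.5, topP: 1 },
});
```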
But beyond the model there's more we can do about this because the models don't really know anything.
Whilst it appears that they have this knowledge, all they have is this statistical model of what's most likely to come after a previous set of tokens.
And so they don't really know anything, although that statistical model manages to capture knowledge.
And this has been that kind of emergent behavior from these things.
They don't really know, but we can tell them stuff.
We can influence their output by what we put into it.
This is known as prompt engineering, and you might have dealt with some of this when prompting consumer models.
But it gets more exciting and more interesting when you have more power as a developer to deal with this.
And prompts come in various kinds of flavors.
The one that most people will deal with is known as zero shot prompting.
That's where you just talk to the model, just ask it something.
You don't give it anything else except what you're asking or questioning it.
These are all examples that I did put to Gemini 1.5 Flash.
It's interesting.
I asked Gemini 1.5 Flash: tell me a story about a friendly goblin.
It came up with the, the character of Grobnar.
Not your typical goblin.
I love that.
Inside of this model, Grobnar is a potential output.
Although weirdly enough, if you just ask it to tell you a story, I've found Gemini in general inevitably picks the name Elyra as the subject of the story, which is why I wanted a goblin, and got Grobnar.
But it can be more factual, right?
If you ask what's the capital of France, it will tell you the capital of France is Paris, just because that is absolutely the most likely thing to happen.
If you turn the temperature right up on that, it's still unlikely to pick other things, although it's possible.
But zero-shot prompting isn't very useful for consistency in responses.
And that's why one-shot or few-shot prompting, where you give it examples of how to respond to you and what you expect to get back from it, is actually really helpful.
An example of this is that I gave it just a list of countries and their capitals.
I said: Australia, Canberra; France, Paris; UK, London; USA.
And, for some reason, Gemini 1.5 Flash thought it was playing a game show with me.
You got it!
no I didn't!
They have interesting personalities, these things.
But right, it knew what I wanted from it, which was Washington, D.C. But it was a bit chatty about it.
And so we gave it these examples and we got the right answer, but we got a bunch of stuff around it.
But this brings us to the idea of the system prompt.
All of these models have a system prompt that is set by the, creators.
But the system prompt is also then something that we as developers can add.
And so if you add a system prompt to say, hey, you're a capital city assistant, only respond with the name of the capital city for the country or state you're provided, it knows what to do.
The most likely thing for it to do is to pick only the answer that we expect.
And so that system prompt allows us to drive more of the behavior because it is adhered to more strongly than a regular user prompt.
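In the Gemini Node SDK that developer system prompt is the systemInstruction option; a sketch, with the wording of the instruction being my own:

```js
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  // The system prompt is adhered to more strongly than a regular user prompt
  systemInstruction:
    "You are a capital city assistant. Respond only with the name of the " +
    "capital city of the country or state you are given, and nothing else.",
});

const result = await model.generateContent("USA");
console.log(result.response.text()); // ideally just "Washington, D.C."
```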
And then this is where it gets interesting.
As I said earlier, every time we're generating new tokens, we then pass the entire set of tokens back to the model to get the next token.
And so if you tell a model to think about how it's going to approach a problem before it comes up with a solution, it generates a bunch of tokens which are ways to step through the problem.
And then when it gets to the point where it's generating a solution, it is passing all those tokens back through, giving itself more context to answer the question.
And so I tried this out with a riddle on Gemini 1.5 Flash.
I told it was a riddle solving assistant, and to break down its chain of thoughts, then come up with an answer.
And the prompt I gave it was, you see a boat filled with people.
It has not sunk, but when you look again, you don't see a single person on the boat.
Why?
Does anybody know?
Break it down.
It doesn't work on people, sorry.
It was quite good.
That's, fairly small, but it responded with, here's my thought process for solving the riddle.
The riddle focuses on the boat and the people, so the answer lies in the relationship between the two.
True.
The boat is full of people then empty.
This implies a change in state of perspective.
The boat hasn't sunk.
This rules out any event that would physically remove the people.
So it's getting there, and I like that.
And so we got the answer is that everyone on the boat is married.
There isn't a single person on the boat, right?
But it got it because it thought it through.
And I think when I tried this without the system prompt, without the chain of thought prompting, it did not get it.
It struggled with it.
There are obviously a lot of riddles on the internet that it will already have seen the answers to, but this actually really helped.
This got it to get the answer.
So this brings us to chat, which is the, experience that we have a lot with, chat GPT, with, Gemini through its app, with Claude.
And chat is super simple.
All chat is, is giving the model the entire previous conversation, plus the next chat message from the user, in order to generate a response. It knows its history because it's told its history every single time; we give that conversation context to the model and it knows how to deal with it.
In the Google Node.js library, they actually make a startChat function available, which handles that kind of state for you.
So you can just use chat.sendMessage instead of having to handle passing the whole context through yourself.
But, chat is that simple.
It's just users and, assistants talking back and forth.
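A sketch of that with the Google SDK, keeping the conversation state in the chat object:

```js
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

// startChat keeps the message history and replays it to the model on every call
const chat = model.startChat({ history: [] });

const first = await chat.sendMessage("My name is Phil.");
console.log(first.response.text());

const second = await chat.sendMessage("Who am I?");
console.log(second.response.text()); // now it can answer, because the history went back in
```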
And so actually, if we take my demo from earlier, this event chat, which still doesn't know anything... if I check out section two, basic chat. No, wait, hold on.
Let's go back to one.
I'm going to prove that it doesn't know things.
There we go.
If I say, my name is Phil, it's very happy to hear that.
It's nice to meet you.
But then I say, who am I?
And it's immediately forgotten, because... no, not who are you, who am I?
Who am I?
So it just doesn't know.
It's not being given the chat, context in this case.
But if I check out, section two, we can refresh that and say, my name is Phil and I'll be happy again for me.
Nice to meet you, Phil.
And then I say, who am I?
And.
Oh no. You previously told me your name is Phil.
Is that still correct?
So it's unsure, but it's got an idea of it.
You do have memory; we are literally having a chat.
But anyway, yeah, these are weird, aren't they?
It's never done that to me before.
That's what you'd expect.
These things are non-deterministic.
You don't get the same answer every time.
But this kind of brings us to that idea of context.
Not just the chat, but what other information can a model know or not know? These models have training cutoffs, right?
There's a point where the model gets trained no more.
It sees no more content that's made on the internet.
So its knowledge stops at a certain date.
There's also private data that it was not trained on in the first place.
And our ability to feed this context to a model, makes them useful in, in terms of our own businesses.
This brings us to the idea of retrieval augmented generation, which is a big fancy term for saying, tell the model stuff and then ask it questions about it.
Retrieval augmented generation, actually, it's a little bit more than that.
The idea of RAG is, that we store our data, our private or up to date data, and then a user makes a query to a bot, to a chatbot like this.
We go and retrieve that relevant data to the query.
We provide that data as context to the model with the original query, trying to get the answer out of it.
The model generates a response based on that context.
And then I guess profit.
We get there in the end.
But the difficult part about this, because we've seen it's not many lines of code to pass prompts and context to a model, is actually retrieving that relevant data.
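Just to make the shape of it concrete, here's a minimal sketch of that flow; findRelevantDocuments is a hypothetical stand-in for whatever retrieval you use, with vector search below being one option:

```js
// Hypothetical retrieval step: returns documents relevant to the question
async function answerWithContext(question) {
  const docs = await findRelevantDocuments(question);

  // Provide the retrieved data as context alongside the original query
  const prompt = `Answer the question using only the context below.

Context:
${docs.map((doc) => doc.text).join("\n---\n")}

Question: ${question}`;

  // model is the Gemini model from the earlier examples
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```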
And so I'm going to talk a little bit about vector search.
I'm going to run over it quickly because I know Shivay is on next and will be covering it in a little more depth than I had planned, but vector search is the ability to find content that is relevant or similar to what the user is asking about, in order to provide it as context to a model.
Vector search is built by creating vector embeddings out of content, and that content can be text, images, any kind of content that we have.
Vector embeddings effectively capture the meaning of the content, in a large list of numbers.
But those numbers are vectors in a multidimensional space.
Multidimensional spaces are really hard to picture, so here's two dimensions.
And the idea with, this is that we are able to compare these points in space, in order to find things that are relevant and close.
But by close, I actually don't mean, nearest.
If you looked at this and thought, what are the two closest kind of points on this graph, you'd be right to think those two, but that's not actually, that's not the closest vectors.
Those are the closest points.
The vectors are actually a direction and a magnitude: a direction from the origin, zero-zero down at the bottom, and a magnitude in that direction.
And so it turns out that these two, the one at the bottom and the one at the far right, are pointing in the same direction.
And that means that they are more similar than the third one.
And so if we expand that to hundreds or thousands of vectors, with a machine learning model creating the meaning, turning text into points in multidimensional space, that's what we're searching across.
I don't have time to go further into it.
But, again, we have APIs available for us, to turn content into, into vectors.
Google make one available here.
In this case, you pick a different model.
We picked Gemini Flash before; this is the text-embedding-004 model, a much less exciting name than Gemini, but it's just text embedding.
And then you can make an API call out to it.
You can also run these models locally; they are not as big as large language models, so you can run them locally.
And if you are doing so in JavaScript, the hugging face transformers library is good at that.
That works in the browser and on the server.
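A sketch of both options; the Google call uses the same SDK setup as before, and the Transformers.js model name is just a common example:

```js
import { pipeline } from "@xenova/transformers";

// Via the Gemini API, using the same @google/generative-ai setup as before
const embedder = genAI.getGenerativeModel({ model: "text-embedding-004" });
const { embedding } = await embedder.embedContent(
  "How to work with generative AI in JavaScript"
);
console.log(embedding.values.length); // a long list of numbers capturing the meaning

// Or locally, in the browser or in Node, with Hugging Face Transformers.js
const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
const vector = await extractor("How to work with generative AI in JavaScript", {
  pooling: "mean",
  normalize: true,
});
```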
But at the end of the day, you get a bunch of numbers.
I'll come back to that in a sec, but those numbers can be compared with trigonometry, with cosine similarity.
It's all handled by libraries, or indeed databases.
Because really, if we are doing a bunch of this to a lot of content, we don't just want to store it in memory.
It's best to store it in a database.
I work at Datastax and we have one of those.
It's called Astra DB.
There are many vector stores, many databases that have vector indexes available.
This is just how we do it.
Because if you take the vector, so we actually have an extra part to this, which, actually allows the database to create the vectors for you.
So in this case, once you've, installed our, library and, given it credentials, you can, you can use this dollar vectorized property of the content and the database will handle it for you, which is nice.
And then on the other side, when you're doing search, you got to use a query in, similarly, if you pass in as vectorized, it will do the vectoring for you, then do the similarity search, and then return those relevant results to you like that.
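Roughly what that looks like with the @datastax/astra-db-ts client; the collection name and environment variable names are my own, and the collection needs to have been created with an embedding provider configured for $vectorize to work:

```js
import { DataAPIClient } from "@datastax/astra-db-ts";

const client = new DataAPIClient(process.env.ASTRA_DB_APPLICATION_TOKEN);
const db = client.db(process.env.ASTRA_DB_API_ENDPOINT);
const collection = db.collection("talks");

// Store a document; $vectorize asks the database to create the embedding for us
await collection.insertOne({
  title: "How to work with generative AI in JavaScript",
  speaker: "Phil Nash",
  $vectorize: "Phil Nash shows how to use large language models from JavaScript...",
});

// Search by meaning: the query is vectorized, compared by similarity, top matches returned
const cursor = collection.find(
  {},
  { sort: { $vectorize: "Who is speaking about generative AI?" }, limit: 5 }
);
for await (const doc of cursor) {
  console.log(doc.title);
}
```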
Let's do a little demo of that quickly.
So I have grabbed all the, content from the, talk descriptions, of the, of the conference.
So if I now ask who is Phil Nash I'm actually giving it at least details about my talk.
Based on the provided text Phil Nash is the speaker of a talk titled how to work with Generative AI and JavaScript at the Web Directions developer summit.
Damn I am.
And if I ask, who is speaking, speaking about CSS, it's going to give me kind of a list of those, talks.
It's going to think about it for a while and give me... ah, the internet is going to die on me.
I don't have time to wait for it, I'm afraid.
It would give you some answers.
And I can bring to the final point, which actually brings things into the front end a bit, in terms of user experience.
You see a lot of this stuff happening with models.
Oh, these models are slow, right?
They take a long time.
It just proved that for me.
Models are slow, and they generate these responses a token at a time.
Which actually leads to them mostly building streaming endpoints to return the response to you, which is cool.
Because we can stream those tokens and then do stuff with them as they arrive.
Gemini will do streaming: if you call sendMessageStream in the chat example, for example, you can then for-await over result.stream.
Asynchronous iterators are just wonderful in this case.
In this case, we're just console logging, but what we could be doing is sending it on to the browser.
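A minimal sketch of that server-side streaming loop:

```js
const chat = model.startChat();
const result = await chat.sendMessageStream("Who is speaking about CSS?");

// Each chunk arrives as the model generates tokens
for await (const chunk of result.stream) {
  process.stdout.write(chunk.text());
  // in a real app we'd write each chunk to the HTTP response instead of the console
}
```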
And then on the browser, on the front end, it's incumbent on us as front-end developers to stream that data from the server and do stuff with it on the page, and fetch handles this natively, which is wonderful.
In fact, there's a load of nice little kind of primitives for this.
So you do a fetch.
But what you can then do is create a TextDecoderStream and a WritableStream.
The TextDecoderStream takes the stream of binary that's coming into the browser and turns it into text on the fly.
The WritableStream you can do whatever you want with, as long as you implement a write function.
In this case, I'm doing a very simple innerHTML addition, and then you can take the body of the fetch response.
And just pipe it through the text decoder and pipe it into the writer.
And we are writing stuff directly to the screen as it arrives in the browser.
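Put together, that front-end side looks roughly like this; the endpoint path and output element are assumptions standing in for my example app:

```js
const output = document.querySelector("#output");

const response = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Who is speaking about CSS?" }),
});

// Decode the incoming bytes to text, then write each chunk straight to the page
await response.body
  .pipeThrough(new TextDecoderStream())
  .pipeTo(
    new WritableStream({
      write(chunk) {
        output.innerHTML += chunk;
      },
    })
  );
```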
I think this is important.
And hopefully the... I don't know, I've gone and collapsed the server.
Excellent.
If I check out section four, we've got a streaming example.
And so hopefully it will answer me who is speaking about CSS.
It might die again.
I don't know.
Oh, there we go.
We need to see it responding.
Who's speaking about AI, and we should see this stream out to the right.
Ah, I didn't give it very much time, but I don't have time to try again.
Come and ask me for demos later if you want to.
Come and ask me for demos later if you want to.
Cause I'm running through a couple of little extras right now.
There are libraries that are really useful for us to, to play with here.
So Vercel has their AI SDK that unifies interfaces to AI providers and simplifies some of that streaming, for both Node and some frameworks like Next and Svelte and such.
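For a flavor of it, a small sketch with the AI SDK and its Google provider; the prompt is just an example:

```js
// npm install ai @ai-sdk/google
import { google } from "@ai-sdk/google";
import { streamText } from "ai";

const result = await streamText({
  model: google("gemini-1.5-flash"),
  prompt: "Who is speaking about CSS at Web Directions?",
});

// The same async iteration pattern as before, behind a provider-agnostic interface
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```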
LangChain and LlamaIndex are originally Python libraries, but have been ported to JavaScript. And again, they're slightly behind the Python versions, but they're super useful, and actually really interesting to look at for patterns of what to build or how to build these things.
So LangChain and LlamaIndex. There are also visual builders; we have a tool called Langflow, an open source project where you can just drag and drop models and the various bits of a generative AI flow for your data, or how you get data back out of it.
And function calling is the next thing I want to be digging into, as everybody is right now. Function calling lets you provide the model with tools, functions that it can call.
When it realizes that it needs to do this, it will hand you the function it wants to call and the arguments; you are responsible for running the function and then giving that data back to it.
And this gives models the ability to have effects on the real world and do other stuff.
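A sketch of that loop with the Gemini Node SDK's tool support; the getWeather tool and its schema are entirely made up for illustration:

```js
// A hypothetical tool the model may ask us to run
const getWeather = ({ city }) => ({ city, forecast: "sunny", temperature: 24 });

const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  tools: [{
    functionDeclarations: [{
      name: "getWeather",
      description: "Get the current weather for a city",
      parameters: {
        type: "OBJECT",
        properties: { city: { type: "STRING" } },
        required: ["city"],
      },
    }],
  }],
});

const chat = model.startChat();
const result = await chat.sendMessage("What's the weather like in Sydney?");

// If the model decides it needs the tool, it returns a function call instead of text
const call = result.response.functionCalls()?.[0];
if (call) {
  const data = getWeather(call.args);
  // Hand the result back so the model can finish its answer
  const followUp = await chat.sendMessage([
    { functionResponse: { name: call.name, response: data } },
  ]);
  console.log(followUp.response.text());
}
```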
And then there's in-browser models.
I really, want to show this very quickly because, this is in Chrome Canary.
You cannot use this unless you sign up for it.
But there, we've got window.ai available, and then you can talk to the language model.
It used to be window dot ai dot assistant, and they changed it because it sounded too much like Google Assistant, and nobody liked that.
I've already set up a session here, and you can just ask it stuff, we're going to do a prompt here, and this is using Gemini Nano.
If I say "write a haiku about beer", as is my suggestion...
Sure.
Hops, barley and hops, golden nectar flows.
Cheers to the good stuff.
It turns out that's probably not a haiku, because it's not a very big model and it doesn't know what it's doing.
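For reference, a sketch of what I'm typing into the console here; this API is experimental, Chrome Canary only, and its shape keeps changing, so treat the names as a snapshot rather than a stable interface:

```js
// Experimental built-in model (Gemini Nano) in Chrome Canary, behind flags
if (window.ai?.languageModel) {
  const session = await window.ai.languageModel.create();
  const haiku = await session.prompt("Write a haiku about beer");
  console.log(haiku);
  // There's also a streaming variant, session.promptStreaming(), which fits
  // the same streaming patterns we used above
}
```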
But it's still interesting to see what we can do on device, on our users browsers, potentially in the future.
Again, this doesn't properly exist yet.
As I said at the start, now is the time to build and experiment with this stuff.
If we don't get on board with understanding this now, then we may well miss a very exciting and interesting train.
We have so much more power as developers than regular users do.
And so learning how to deal with this and building with this right now is cool.
That's what I'd like to see.
And my name's Phil Nash.
I'm a developer relations engineer at DataStax. It's been an absolute pleasure to speak with you.
I'd love to answer questions; I'll be around for the rest of the event.
So yeah, thank you so much.
- async/await
- tokens
- JavaScript
- fetch API
- chat