Sentient Design: AI and the Next Chapter of UX

Exploring AI in Design
Josh Clark introduces the concept of AI as a transformative force in the design industry, emphasizing the shift from creating artifacts to crafting experiences. He presents the "Pinocchio design pattern," where low-fidelity sketches are turned into high-fidelity prototypes using AI, exemplified by tldraw's "make it real" feature. This segment highlights the potential of AI to seamlessly integrate into design processes, enhancing creativity and efficiency without replacing human input.
AI as a Design Material
Clark elaborates on the idea of AI as a design material, akin to HTML or CSS, and introduces the concept of "Sentient Design." He discusses how AI can be woven into digital interfaces to create adaptive experiences, using examples like a custom Figma plugin that assembles design components. The focus is on AI's role in enhancing human creativity and decision-making rather than replacing it, urging designers to consider AI as a material for crafting meaningful user experiences.
Radically Adaptive Experiences
Clark explores the potential of AI to create "radically adaptive experiences" that respond to user intent in real time. He presents Google's Gemini as an example of AI generating bespoke interfaces based on user input, demonstrating how AI can dynamically adapt content and interactions. This segment emphasizes the shift from designing specific interactions to creating systems that allow AI to work within established design patterns, enhancing user engagement and personalization.
AI's Role in Interface Design
Clark discusses the implications of AI's ability to generate content and interfaces, using examples like Salesforce's generative canvas. He highlights how AI can automate the creation of dashboards and other interfaces, allowing designers to focus on system-level design rather than specific interactions. This segment underscores the importance of design systems and best practices in enabling AI to create meaningful user experiences.
AI's Limitations and Opportunities
Clark addresses the limitations of AI, particularly in terms of accuracy and reliability, using humorous examples like AI's inability to accurately depict a bowl of ramen. He emphasizes the importance of understanding AI as a tool for presenting information rather than delivering facts, and suggests leveraging AI's strengths in transforming content across different formats and mediums. This segment encourages designers to focus on the experiential potential of AI rather than its technical capabilities.
Integrating AI into Collaborative Design
Clark introduces the concept of AI as a collaborative partner in design, using examples like Miro's NPC experience pattern. He discusses how AI can be integrated into design tools to provide feedback and suggestions, enhancing the collaborative process. This segment highlights the potential for AI to act as a non-player character in design environments, offering insights and augmenting human creativity.
Ethical Considerations and Future Directions
Clark concludes by addressing the ethical implications of AI, emphasizing the need for responsible design practices that consider the environmental and social impacts of AI technologies. He urges designers to engage with AI thoughtfully, applying values that prioritize human agency and creativity. This segment calls for a balanced approach to AI, recognizing both its potential and its challenges, and encourages designers to actively shape the future of AI in design.
Hey there everyone.
I'm from America.
I have no explanation, so I'm glad that Nicholas has some explanations.
I'm paying a lot of attention.
We've been hearing so far today about big crises and uncertainties, and I think there's this context of all of us dealing with a lot of weight and a lot of change in the world. I appreciate the message here that these crises and these uncertainties can be opportunities, that nothing is inevitable, and that we all have our place and agency in all of this to decide what the future might be.
And to apply our values to that.
So I'm gonna move from some of these really, big picture, institutional deep challenges and bring it a little closer to craft, to a crisis, if you can call it that.
A challenge, an opportunity that we're all facing in the technology industry, which is of course this thing called AI.
Anybody, have you heard of this?
It's yeah.
I wanna talk a little bit about how designers might think about AI, and not so much the stuff about what we can make, the artifacts that we can create with AI, which I think is a big focus, and instead think about the experiences that we can make with AI.
What happens when you weave this technology as a material into the interfaces that we make?
Some different perspectives emerge.
What does it look like when we put machine intelligence into great design?
So speaking of great design, behold.
I made this. This is just a little sketch of a little interface for editing a shape, right?
And I put it into an app called tldraw, which is actually a platform for creating whiteboard sketches.
And the tldraw team created this make it real feature.
And what happens is if you select what you've made and click make it real, it actually goes and says, all right, let me turn this little dopey sketch into a thing.
And it's, oh, it's actually brought the little UI elements in there.
But hang on a second.
If I click into it and I actually start using this.
What!
My little dopey sketch is turned into this actual kind of real working interface.
And it's not just interface stuff like this.
I can draw like a little notebook and put a little note that says, oh, I want to flip this, and then click make it real.
No kidding.
I dunno, let's try it again.
What if we do a little piano keyboard and then click make it real?
God dammit, they added sound.
So that's pretty trippy.
It changes a little bit the way that we think about creating interfaces.
Veronica and I call this the Pinocchio design pattern.
Get it?
It's turning the puppet into a real boy.
So it's this thing of taking a low fidelity artifact and turning it into a high fidelity artifact.
So it's this kind of stuff that we're getting more and more familiar with, seeing AI at work: a sketch that turns into a prototype like we saw, a bullet outline that gets turned into an essay, an idea, a notion that gets turned into reality and somehow fleshed out.
Just another example of this pattern: at our agency, Big Medium, we created for a client a little custom Figma plugin that takes a wireframe like this.
And when you launch the plugin, it assembles the design using the company's own design system.
This is not in any way a polished or finished design.
The function of this is to say: let me assemble the pieces, set the table for the designer, and bring together the relevant design system components so the designer can do the work.
So it's like a sous chef for the actual work.
And it's all in the context of your kind of working environment, right?
Figma in this case.
So you're in Figma, launch this thing, keep on going.
So this is an important element, I think, as we think about bringing machine intelligence into our interfaces: how can we actually integrate this more seamlessly into the environments that we're bringing it into?
So to come back to this tldraw example, I took the little interface that it had generated and I marked it up.
Here's some notes, some changes that I'd like to make, almost like the kind of things I might do with one of my developer colleagues.
And when I select the whole thing and click make it real.
It goes through and look at that.
All right, so it's already changed up here: the roundness, it's corrected that, and we've got our red triangle and the size scale.
So the thing that's interesting here is this isn't some separate chat bot experience.
I'm not typing any text or prompt in here.
This is a whiteboard application, and the interactions for editing things in it keep me in this whiteboard context, the interactive context that I'm already in here from the beginning.
So what I love about this is that it maintains that seamlessness.
It's not some separate AI product or experience that's bolted on. It is intelligence woven into the interaction paradigm that's already there.
So I'm showing this thing around, like, look, we can generate things. But this is not a talk about how the robots are gonna start designing now, how the robots are gonna start doing our code, how they're gonna replace our jobs and create amazing efficiencies for the shareholders.
That stuff, that has its role, right?
Efficiency, productivity, yeah.
But what I'm interested in exploring is how we can make more meaningful, impactful experiences, and elevate the stuff that we're trying to get done, to create and to make as humans, rather than replace that stuff.
So what kind of new what-and-why thinking can we be doing? Because that's what design is about, and really good engineering is about.
It's not about the creation of the artifact; it's all the thinking that goes into it.
That wire frame that we saw.
That's where a lot of the thinking lives.
The sense making.
And so how can we actually allow more focus on that stuff so that we can actually be more creative and more expansive?
That's the hope.
That's one path that we can take with all this.
So again, it's not about how these things can code or design.
It's what happens when you weave intelligence into digital interfaces.
So instead of thinking of AI as a tool, what happens when we think of it as a material? Instead of thinking of it as a maker of stuff, what happens when it becomes a maker of experiences?
So just a little perspective shift to think about all this.
We can reimagine the interaction surface as an intelligent canvas.
This is one route of a few different paths that I'll talk about today, but in this case, when you weave that Pinocchio pattern into the fabric of an application, it turns that interface into a radically adaptive experience, a freeform canvas that can adapt to your explicit or implicit intent.
And that's a far cry from ye olde website, right?
These sort of relatively static experiences that are designed with fixed paths and information architecture, and it's also much more than chat.
Which I think has this incredibly strong gravity for our imagination, that AI always comes back to devices that can talk.
So this, in this case is generative AI that again, creates experiences instead of artifacts.
Now this is only one direction.
This intelligent canvas notion that we'll be exploring in some examples here is only one direction for these kinds of machine-intelligent interfaces that I think are gonna be in front of us.
We'll talk about more of those types of experiences beyond this, but this intelligent canvas is a powerful example, I think, and it's especially well suited for open-ended and creative exploration.
So let's look at another example of this.
The iPad version of the Notes app combines generative AI with sort of contextual understanding and awareness.
So if I circle this image here, if I say I want to do something here with this, it takes the existing sketch as well as the textual context around it to say: I know what you wanna make.
And of course you can edit it from there.
But I think one of the powerful things here is that it's taking this awareness of the visual on screen and the content around it to get this in-context imagery.
This is like, where do you wanna do this?
There's not a specific zone that you're limited to.
This is, again, this open-ended canvas: turn a quick sketch into a high-fidelity image, and that's the Pinocchio pattern.
Sticking with the iPad here, the Calculator app of all things has a really novel experience in it, this Math Notes feature, and what it is, it basically lets you draw equations or conversions anywhere on the screen.
So as you do this, you can actually turn this y equals into an actual equation.
I can add a graph that's here.
Fine, that's good.
But all those things up at the top are variables, and so I can actually go in and I've drawn in my variables, but I can go and change them, and the graph is changing.
Now the operation of this is still simple calculator stuff.
This is basically spreadsheet functionality, but I can draw any arbitrary interface on the screen.
I could draw a recipe portion adjuster.
I could draw a calorie calculator for my next workout.
I could draw whatever sort of calculator-based application that I want, and I've just got it.
So it's using machine intelligence to understand my scrawls on this, but effectively, I'm able to create whatever application I want in the moment and then walk away from it.
It's almost a disposable application with this sort of built-in simple functionality behind it.
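The "spreadsheet functionality" behind one of those disposable apps is tiny once the intelligence has parsed your scrawls. Here's a minimal sketch of the recipe-portion-adjuster idea, purely illustrative: the function name and recipe values are mine, and this is not Apple's actual Math Notes implementation.

```python
# A recipe portion adjuster in miniature: named quantities, one scaling
# rule, and re-evaluation whenever the target serving count changes.
# This mirrors the "draw your own calculator" idea described above.

def portion_adjuster(base_servings: int, target_servings: int,
                     ingredients: dict[str, float]) -> dict[str, float]:
    """Scale each ingredient amount from a base serving count to a target."""
    scale = target_servings / base_servings
    return {name: round(amount * scale, 2) for name, amount in ingredients.items()}

# Hypothetical cupcake recipe, sized for 4 servings, scaled up to 6:
recipe = {"flour_g": 200, "sugar_g": 80, "eggs": 2}
print(portion_adjuster(4, 6, recipe))  # {'flour_g': 300.0, 'sugar_g': 120.0, 'eggs': 3.0}
```

Change the target number and everything downstream recomputes, which is exactly the variables-and-graph behavior the Calculator demo shows.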
So the idea of all of this is that we have the possibility now to create systems that adapt to you.
Instead of the reverse. And this is part of a long march of history, over 200 years of interaction design, from punch cards, which started in the 1830s, through the command line, through GUIs, touchscreens, speech.
The long arc of this is a story of systems that more and more adapt to us instead of the reverse.
And so right now we have interfaces that are beginning to be able to discern context and intent and marry that with some form of agency, so that the interface itself, the fabric of the application, begins to feel intelligent itself.
So as we saw, it approaches being this on-demand software.
So if we actually go there, it's like: could you actually just make the app that I need, for me, in the moment?
This is something that I think is pretty exciting. Claude is breathtaking at this.
So in a chat you can tell it to make an app and start using it and improving it right away.
So here I told it to build an asteroid game that I could play in the browser, and a few seconds later it had built it, and I was playing it during the chat.
We did this in our workshop yesterday, where we were actually playing with this game that we made, and then I said, I don't know, add zombies.
I don't even know what that means, right?
But it figured something out and added these like little green asteroids that chased me around the screen.
So I just said, oh, I want this app, and it made me this thing.
Made some assumptions that then I could refine and clarify.
For the past couple of decades, the way that we've discovered content and applications has been through search and then a bit through curation.
But now, oh my God, look what is happening.
So we're manifesting the experience that we want.
What do you want?
Here, have it.
And there's degrees to this, right?
I'm showing you the dialed-to-11 version of being like, what kind of application do you want?
Here you go.
That's something that will fit some opportunities and experiences, but there's also some stuff where, all right, what are some more subtle and more nuanced experiences that we can create?
And we'll look at that in a minute.
Designers are not directly in control of the experience.
We're designing systems instead of specific interactions, and that's unsettling and it's a little bit weird.
And I also wanna be careful around this language manifesting because it seems dangerously close to making.
And today I wanna talk about more than just what machine intelligence can make.
What we're talking about is moving from this tool mentality, this capability mentality of, oh, it can make images, it can make text, it can replace people, to thinking about how we actually move that capability closer to the stuff that we want to achieve and to do, moving capability closer to the user's context.
Bringing it, right into again, the sort of the fabric of the applications that we're creating.
And when we do that, we have a whole new range of really, I think, meaningful user experience opportunities.
And that's what I wanna share with you today.
That's a lot.
How are you all doing?
You all right?
Yeah.
Look fantastic.
You really do.
You look really nice today.
This is essentially what I'm getting at here: machine intelligence is our new design material, just like HTML and CSS are design materials, just as data is a material, as prose is a material.
It all wants to be used in a certain way, to be most effective and to communicate in its best way.
And all of these things have a certain grain, strengths and weaknesses.
They have pitfalls that you need to avoid.
And so I think we need to be mindful about all of those things as we understand the texture of this new design material.
And it all adds up to something that we're calling Sentient Design. And what we're getting at here is not just the form of this new experience, but a framework and a philosophy for approaching it, because as the machines become more mindful, so too must the designers and the developers and the product people who are using them, because this stuff is tricky and weird and hairy.
So I'm writing a book about this with Veronica Kindred, who some of you had the good fortune to meet at our workshop yesterday.
Veronica led the workshop with me, and it's a book that's coming out in the middle of next year from Rosenfeld Media.
So watch your local booksellers, everyone, please.
We need all the help that we can get in book land.
One of the questions, then, as we're thinking about this is: what is this?
I mentioned before, these are intelligent interfaces that are aware of context and intent so that they can be radically adaptive in the moment to conceive and compile experiences in real time based on that context and intent in ways that feel collaborative.
Conversational, maybe, although not necessarily about dialogue.
Multimodal, which is the fancy way of saying it can talk, it can understand images, it can move in physical space, all kinds of different channels and interactions that these systems are now able to understand. And they're continuous and ambient: available when we need them, out of the way when they aren't, so they aren't full-on Clippy.
Looks like you're trying to write a letter, looks like you're trying to eat lunch.
And they're deferential, which means that they suggest rather than decide: your hand on the smart lock wins over the system.
Alright.
But the big thing here is radically adaptive experiences.
That's really the core of Sentient Design, and chatbots are great examples of this in the sense of what they've become.
Now you can ask them anything and they will respond, sometimes even correctly.
But you can have these meandering conversations that nobody could have anticipated or designed beforehand, totally adaptive to what you are asking and what your intent in that moment is.
So just like a conversation you could have with another person, these things can go in any direction.
What happens when we bring that to any digital interaction and digital interface?
Weird stuff can happen, right?
It's not totally in our control, but there are a lot of new opportunities to explore and certainly we're only beginning to find even the edges of what best practices of this look like.
So what I mean by this, though: radically adaptive experiences are conceived and compiled in real time.
They are not specifically designed beforehand, so the content, structure, or interaction, or sometimes all three, can change on a dime depending on what we need.
So what does this look like, and how do we avoid making it feel like some robot fever dream where the whole thing is completely disorienting for designers, for users, maybe even for the system itself?
Lemme give you an example.
Here's one example from Google's Gemini team. Gemini, of course, is Google's AI experience.
And while the demo starts in traditional chat, it does something that I find exciting, and Veronica and I call it bespoke UI as a design pattern.
Let's say I'm looking for inspirations for a birthday party theme for my daughter. Gemini says, I can help you with that.
Could you tell me what she's interested in?
So I say, sure, she loves animals and we are thinking about doing something outdoors.
At this point, instead of responding in text, Gemini goes and creates a bespoke interface to help me explore ideas.
Wait, what?
It's like a little mini app that just spun up based on this question.
That's, I think, somewhat exciting in the sense that it's saying: oh, I've understood not only what you're asking and the information, but I'm gonna give it to you in the best possible interface for this specific thing.
We're ready to now engage in a different interface that would be more effective than chat alone.
So just looking under the hood here.
We've got this structured data. This is the conversation that's happening behind the scenes between this front-end system and the LLM, the large language model that's being the brains of the operation here.
And what we can see is that it's recognizing some ambiguity: we don't yet know what kind of animals or what kind of outdoor party we're talking about.
But it's also looking at the intent and deciding there's enough information to keep going. We're gonna go ahead and proceed with this.
We can ask about these questions, right?
So when we get there, we see that it's made some decisions about what it's going to ask about: that it's gonna ask about party themes, party activities, food options, and that there's even some rationale to it, right?
So it's saying: the user wants to explore these ideas, so that's why I'm gonna choose this. And this part is pretty exciting.
Whoops.
Ah, far, man.
Not that exciting, everybody.
I'm getting excited.
All right.
Lemme go forward again here.
So what's exciting here is this bit.
It's saying: list detail layout.
So what this is saying is that there's a design system under the hood here, right?
There's this handful of design patterns that it has been taught, to say: hey, you've got ten ways that you can respond, and these things are for these things.
So this is how it's staying on the rails, right?
It's like, oh, all right, we've created some rules for this to work within.
It can respond to any different kind of question, but only within these design patterns.
So what that's saying is the designers are now being less specific around specific interactions or flows, and are working at a higher-order level: how do we design a system that allows AI to work with our design system and to have an interaction that makes sense?
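That "staying on the rails" idea can be sketched in a few lines: the front end only accepts responses that name one of a fixed set of design patterns, and everything else gets rejected. To be clear, the pattern names and the response shape here are my own illustrative guesses, not Gemini's actual protocol.

```python
# A minimal sketch of constraining an LLM to a design system: the model
# may only pick from pre-approved layout patterns, and the code validates
# that choice before anything is rendered.

ALLOWED_LAYOUTS = {"list_detail", "carousel", "step_by_step", "chat_text"}

def validate_ui_response(response: dict) -> dict:
    """Reject any model output that strays outside the design system."""
    layout = response.get("layout")
    if layout not in ALLOWED_LAYOUTS:
        raise ValueError(f"Unknown layout {layout!r}")
    if not response.get("items"):
        raise ValueError("Layout chosen but no content items supplied")
    return response

# Imagined model output for the birthday-party question:
model_output = {
    "layout": "list_detail",
    "rationale": "User wants to explore several ideas side by side",
    "items": [
        {"title": "Party themes", "detail": "Farm animals, safari, ..."},
        {"title": "Party activities", "detail": "Petting zoo, scavenger hunt"},
        {"title": "Food options", "detail": "Cupcakes, fruit skewers"},
    ],
}

ui = validate_ui_response(model_output)
print(ui["layout"])  # list_detail
```

A real system would fall back to a plain chat response when validation fails; the point is that the model proposes, and the design system disposes.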
Alright, let's keep going.
Ah, farm animals.
She would like that.
Clicking on the interface regenerates the data to be rendered by the code it wrote.
Ooh, I know she likes cupcakes.
I can now click on anything in the interface and ask it for more information.
I could say, give me step-by-step instructions on how to bake this, and it starts to generate a new UI.
This time it designs a UI best suited for giving me step-by-step instructions.
So it's not just being adaptive to a specific output; it's adaptive to any input.
Anything on the screen can become your launching-off point.
Again, those are not paths that you can specifically design for.
You can design the broad interactions, select cupcakes, ask questions about cupcakes, and then you're off in a new direction with a new design pattern.
So this is what I'm getting at when I say: how do we use machine intelligence as a new layer to mediate these radically adaptive experiences?
How can we think about AI not in terms of the functionality, not in terms of the artifacts that it makes, but in terms of the experiences that it can enable?
If we think about this, we can start to use LLMs to do things like: understand intent; figure out what the data is to get, and don't trust them for facts, they should just figure out what questions they need to ask, and we'll talk about that in a minute; select the best UI from some rules it's been given; and deliver that UI in an appropriate format.
And all of a sudden it's like, oh, I can get my head around this.
I understand how this thing can work.
It's adding just a little bit more sort of possibility.
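That division of labor can be written down as a tiny pipeline, with the model stubbed out. Every name here, the stubbed classifier, the intents, the UI pattern names, is illustrative rather than any real product's API; the shape is what matters: the LLM handles intent and presentation choices, while the facts come from a trusted source.

```python
# Sketch of the LLM-as-layer pipeline: understand intent, fetch real
# data, select a UI from a pre-approved set, deliver it as structured
# output. The model never supplies the facts itself.

def llm_classify_intent(message: str) -> str:
    # Stand-in for a real model call that maps free text to an intent.
    return "order_status" if "order" in message.lower() else "general_question"

def fetch_facts(intent: str) -> list[dict]:
    # Facts come from a database or API, never from the model.
    known = {"order_status": [{"order": "1234", "status": "shipped"}]}
    return known.get(intent, [])

def select_ui(facts: list[dict]) -> str:
    # The UI is chosen from a small set of design patterns.
    return "status_card" if facts else "chat_text"

def respond(message: str) -> dict:
    intent = llm_classify_intent(message)
    facts = fetch_facts(intent)
    return {"ui": select_ui(facts), "data": facts}

print(respond("Where is my order?")["ui"])  # status_card
```

Swap the stub for a real model call and the rest of the pipeline stays the same, which is what makes this layer designable.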
And if we take this out of chat for a moment, which I encourage all of you to do.
Think about, I don't know, dashboards, a really common design pattern, especially in corporate enterprise software: here's a bunch of pieces, UI elements, widgets that are appropriately selected for different content.
What if we had dashboards that could build themselves?
So Salesforce is doing this with something that they call generative canvas.
This is an example of a plugin that they've got in one of their CRM, customer relationship management, applications that basically can look at your calendar and say: all right, let me pull the information together and build this bespoke UI dashboard on the fly.
Or you can ask it specific questions.
So you've got this little thing here that you can start asking, or it can build from your specific context more implicitly, proactively.
So again, it's using these familiar design systems, design patterns from the Salesforce design system, and the same content that you would use if it was a manually created template, but it's pulling them all together.
So design systems and best practices become more important than ever in this, and the way that we design has to be more system-based.
To support this, we're giving the intelligent interface the design system to use for expression.
Let's take a look at a simple example of how to spin something like this up ourselves, pretending that we are the intelligent interface talking to the system.
Let's just use ChatGPT to do this.
So if I go in here, let's just test it out.
Respond to me only in French.
All right, we're on board.
All right.
So it's very nimble in terms of changing the way that it communicates.
And I only want it in JSON structured data objects.
And it gives me the same thing in key-value pairs.
Alright, what are the top three museums in Paris?
And now it's giving it to me in French, in a JSON object.
I've got this array of musées with the name and description of all of them, and now we're just like, all right, why don't you flex a little bit, ChatGPT.
Give it to me in Spanish. Now I've got the museo instead of the musée.
Alright, so this is interesting.
I've got the structured data of content that I could display.
I want to display it as a carousel of cards in a website. Provide the properties to provide to this carousel object.
Now it's still speaking in Spanish, in structured data, and it's giving me a carousel object with items, each of which is a museum title and description.
It's even got a placeholder for an image URL. And wait a second.
It's proposing some interaction rules for the carousel too.
So, I don't know, at this point we may as well just ask for the whole shebang.
Just give it to me in HTML. And it's going through and doing its thing now, right?
So we can see it's putting in some CSS styles in here.
There's our carousel showing up with a carousel item for the Louvre, and then the Orsay, and so on.
Ooh, it's pulling in some jQuery. It's a little old school, even the cutting-edge stuff has its sort of old-school flaws, and a carousel library.
So now, I don't know, let's chuck it into a browser and see what happens.
The only change I made was to replace the placeholder images with real images, and it's got this actual working carousel.
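The last hop of that demo, structured carousel data to markup, is simple enough to re-create ourselves. The JSON field names below are assumptions based on the description above, not ChatGPT's exact output, and the rendering is a bare-bones stand-in for the jQuery carousel it generated.

```python
# Render a carousel object (as described in the demo) to HTML cards.
# Escaping the model-supplied strings matters: anything an LLM produces
# should be treated as untrusted input before it hits a page.
import html

carousel = {
    "items": [
        {"title": "Louvre", "description": "Home of the Mona Lisa",
         "image_url": "louvre.jpg"},
        {"title": "Musée d'Orsay", "description": "Impressionist masterpieces",
         "image_url": "orsay.jpg"},
    ]
}

def render_carousel(data: dict) -> str:
    cards = []
    for item in data["items"]:
        cards.append(
            '<div class="carousel-item">'
            f'<img src="{html.escape(item["image_url"])}" alt="{html.escape(item["title"])}">'
            f'<h3>{html.escape(item["title"])}</h3>'
            f'<p>{html.escape(item["description"])}</p>'
            '</div>'
        )
    return '<div class="carousel">' + "".join(cards) + '</div>'

print(render_carousel(carousel)[:22])  # <div class="carousel">
```

The structured-data step is doing the real work here: once the model's output is a predictable object, the rendering is ordinary front-end code that any team can own and test.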
So you begin to see this sort of way that you can make these things work.
And by the way, this is accessible to designers for the first time.
Anyone can talk to the underlying system, which means that designers can participate with developers and product folks.
A multidisciplinary team can work through these things to figure out what are these things capable of, what can we do?
How do we design this layer together?
And I wanna be clear again, I'm just gonna say it another time.
The point here is not that these large language models can code; that's almost a little bit beside the point, and well understood at this point.
The point is that they're really amazing chameleons.
They take these sort of internalized symbols, the language of visuals or of code, and they can summarize and remix and transform those symbols.
They understand concepts, and then they're like: Chinese, French, Spanish, doesn't matter to me.
Prose, bullet lists, JSON structured data, XML, whatever.
They're really nimble at transforming this text to image, to song to video.
It's all concepts just rendered through a different lens, and sometimes they can even turn that stuff into answers. But really they're manipulating symbols, or concepts, and associations among those concepts, so they can summarize them.
They don't have real understanding of them.
Remember this movie, Catch Me If You Can.
Leonardo DiCaprio plays this really talented young con man.
He can pass himself off as anything, a master impersonator: doctor, airline pilot, you name it.
He has the manner.
But not the knowledge.
You my deadhead to Miami?
Yes.
Yeah, I'm the deadhead.
You're a little late, but the jump seat is open.
You know, it's been a while since I've done this.
Which one's the jump seat again?
Oh no, Leo. But before you know it, he's spewing the jargon, right?
He's feeling it.
You turning around on the red-eye?
I'm jumping puddles for the next few months, trying to earn my keep, running leapfrog for the weak and weary.
Look at him, go.
He has no idea what he's talking about, right?
But he knows how to carry it, right?
It's the command of jargon and context and tone, but he still doesn't know how to fly a plane.
This is bullshit, right?
These things are master bullshitters, and I'm gonna suggest that is a feature, not a bug.
These things were designed to be storytellers and we've confused them for things that deliver facts.
So we'll talk about the implications of that, in a minute.
But coming back to this idea that LLMs are very good at assuming roles and talking the talk: they will be what you want them to be.
They will try their hardest to look like that.
So what it means is that when you ask it for an answer, it's actually gonna give you something that looks like the answer.
It doesn't know the answers, it knows manner.
So the miracle here, and it's extraordinary, is that this approach often yields genuinely accurate results, when all it's doing is figuring out what the next most likely word is.
So somehow by swallowing the whole internet in an exercise that was supposed to teach it language, it got pretty good and extremely confident at delivering sort of facts.
But because it's so convincing, it's really hard to tell sometimes whether they're accurate or not.
It's a bullshitter that tells the truth 95% of the time.
But it's so good that, how do you know about the other 5%?
So this is a problem to say the least.
If you're trying to do high risk or highly specific content, don't let it fly the plane, right?
Don't let it do surgery, please.
So this makes them dangerous to rely on, but it also means that they can present facts in all kinds of new ways.
These language models are for presentation and not for truth.
There are opportunities here, but let's keep in mind that they aren't reliable even enough to describe a bowl of ramen.
ChatGPT, show me a bowl of ramen.
Here's the illustration of a... Get rid of the chopsticks.
Here's the revised... Now there's three chopsticks.
I apologize for the... Now there's four.
Let's generate the... I still see them.
Here's the revised... Do you see those two long brown...
I'm unable to visually... Show me what a pair of chopsticks looks like.
Here is the illustration of a... I don't wanna see these two things with my bowl of ramen.
Here's the new... What are those two stick-looking things on the right side of the...
I can't visually inspect the images. What do you mean you can't see the picture? You drew it!
Getting a little heated, but we've all been there.
What is going on?
It doesn't know what ramen is. It doesn't know what chopsticks are.
It has a mathematical concept of the two, and they are always joined in its experience.
So that's the thing to remember as we think about these probabilistic systems: they are dealing with statistical probabilities of mathematical concepts that have been mapped to things that we think of as physical objects.
And what that means, as we think about this, is again: we shouldn't present their results as facts but as signals; they are delivering statistical likelihoods.
So how can we start to think about that, as much as we might wish that they could deliver facts?
How can we start to actually represent what they're doing in a more realistic way?
'cause these are not answer machines.
They are dream machines.
They were built and designed to take a seed of language and say: what's next?
They're improvisers.
Everything is "yes, and," right?
So it's great, I'm gonna roll with it.
So this is a feature, not a bug.
It's like this is what it was designed to do.
So how can we as designers adjust to that reality and not present those signals as facts or as answers?
Instead, consider: how can we treat and convey that information as suggestions?
Maybe they'll get better at doing answers.
Maybe they'll start hallucinating less, maybe not.
I think the thing to maybe think about is like, how do we embrace their weirdness as an asset instead of a liability?
Because it is constitutional to them that they don't actually know anything, right?
So don't let them fly the plane.
Instead, think about them as your charming host, right?
Use the LLM as the face of your system.
They're very good at interpreting a prompt's intent.
They're solving what are we trying to accomplish here?
And they can also present information in good ways.
They can understand what are you asking, and pass that on to more sort of effective systems to get the answer and then present it in any way that you would like.
So it's not a complete solution, but is a compelling partial solution.
So for specific and especially technical data, serious important, high risk data, they have to talk to smarter systems.
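Here's one way to picture that "charming host" pattern in code. Everything in this sketch is a stand-in: `interpret_intent` and `present` simulate the LLM's role with keyword matching and a template, and the function names are hypothetical, not any real API. The point is the shape: the language model interprets and presents, while a deterministic system produces the actual answer.

```python
# The "charming host" pattern: the model is the face of the system,
# but the answer comes from a back end built for accuracy.

def interpret_intent(prompt: str) -> dict:
    """Stand-in for the LLM step: what is the user trying to accomplish?"""
    if "convert" in prompt and "miles" in prompt:
        amount = float(next(w for w in prompt.split() if w.replace(".", "").isdigit()))
        return {"intent": "miles_to_km", "amount": amount}
    return {"intent": "unknown"}

def answer_with_smarter_system(intent: dict) -> float:
    """The deterministic back end does the math; the LLM never should."""
    if intent["intent"] == "miles_to_km":
        return intent["amount"] * 1.609344
    raise ValueError("no back end for this intent")

def present(prompt: str) -> str:
    """Stand-in for the LLM's presentation step: phrase the verified result."""
    intent = interpret_intent(prompt)
    result = answer_with_smarter_system(intent)
    return f"{intent['amount']} miles is about {result:.1f} km."

print(present("convert 3 miles to km"))  # 3.0 miles is about 4.8 km.
```

Swap in real model calls for the interpret and present steps and the architecture stays the same: interpretation and presentation at the edges, verified computation in the middle.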
In the meantime, they can present content in really varied ways.
Think about how much content and interaction we have trapped, frozen, in these things called PDFs, right?
The world is full of 'em, and, you know, it's better than before: at least we've got a single format. But the content is frozen in there.
How can LLMs help here?
Machine intelligence can liberate this content.
And I have a hunch that many of you have probably heard of NotebookLM, Google's AI-powered research tool.
Veronica and I gave it some PDFs of the sentient design book manuscript, and in a little under three minutes it generated a 12 minute conversation.
It sounds a lot like a podcast or a radio interview about the book.
"Ready to dive into some really cool stuff. We're gonna be talking about Sentient Design today."
"Sentient Design. Yeah. Pretty wild, right? Like instead of us trying to figure out how computers think, they're starting to think more like us."
"Seriously. It's like this whole new way of looking at how we use technology."
"So we've got excerpts from a book called Sentient Design by Josh Clark and Veronica Kindred."
"Yeah. Good book."
"Good book, everybody."
So that's bonkers.
And I wanna say, first of all, that is like such an American conversation.
Like they nailed it.
Like they really got that kind of podcast manner there.
It might not be your cup of tea, but they really got it; the quality of it, the summary of it, really quite good.
But that's not the point. The point is that this is interesting and useful, and it's tailored to a specific audience.
This is not a story about, oh, we can replace podcasts.
Oh, we no longer need those.
What it actually is instead: here is a format for someone who doesn't have the time or attention to read this lengthy and incredibly high-quality manuscript, who can take it on the go to get the gist, listen to it in the car. They've created a new user and a new use case by creating a podcast for one.
So one of the things that's really interesting and powerful to think about here is what happens when it becomes possible, and then easy, to liberate content or data from its current form.
You can do that for one person, in a matter of seconds, or minutes in this case.
So this is one of the real superpowers of generative AI: not the generation specifically, but how it can understand meaning in one context and quickly change it to another language, another medium, another data format.
It understands concepts and can transform them into different shapes.
So think: what does that enable for the interfaces that you create, the content and data that you work with?
What could happen if you could say, I want this in a hundred different formats and interaction styles?
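The deterministic end of that idea can be sketched directly: the same record rendered into three forms using only Python's standard library. A generative model extends the same move to forms without fixed schemas (a summary, a script, another language); this sketch only shows the separation of content from form.

```python
import csv
import io
import json

# One piece of content; the form is chosen at the moment of need.
record = [{"title": "Sentient Design", "authors": "Clark & Kindred"}]

def as_json(rows):
    """The same content as JSON."""
    return json.dumps(rows, indent=2)

def as_csv(rows):
    """The same content as CSV."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def as_markdown(rows):
    """The same content as a markdown table."""
    keys = list(rows[0].keys())
    lines = ["| " + " | ".join(keys) + " |",
             "| " + " | ".join("---" for _ in keys) + " |"]
    lines += ["| " + " | ".join(str(r[k]) for k in keys) + " |" for r in rows]
    return "\n".join(lines)

for render in (as_json, as_csv, as_markdown):
    print(render(record))
```

The schema-bound version has been cheap for decades; what's new is doing the equivalent transformation for unstructured content, on demand, for an audience of one.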
That's the kind of thing that maybe we've thought about but haven't actually done, because it's too hard, because we don't have the resources. But now we've got this thing that is especially good at this part. Focus on the experience, not the artifact.
That's what I've been talking about here.
Focus on the outcome, not the output.
And that's the exciting bit that I think generative AI enables for designers of digital experiences.
Less about the output, more on the outcome.
What's the user trying to do and how do we enable and elevate that in the most graceful way?
What kinds of new experiences does machine intelligence let us enable?
Part of it is: maybe we can invite it onto our team.
Tools like Figma have really introduced the multiplayer experience where we're all working together in a common environment.
What happens when we invite the AI assistant to be part of that?
Suddenly we have multiplayer mode where we're interacting not just with our colleagues, but also AI assistants that can help in appropriate ways.
And of course, we're familiar with this: Slack bots.
We're starting to see bots that show up and make comments in documents, and we see it also in Miro.
It's something that we call the NPC experience pattern, non-player character, a gaming term.
It's a character that's driven by the system, not by a human player.
So when we look at Miro, they have this feature called Sidekicks, where these kind of robot assistants come in and do things like give feedback on your content or suggest new content, which is classic generative AI, right?
Some suggestions or a review of this text. But the way they do it is through an experience where the system acts as a user.
So here we've got the product-leader NPC, and it behaves, in the context of the system, the way a user would behave. How does it provide feedback? By adding comments.
How does it suggest new content? By adding post-it notes.
So my point with this too is, as we start to explore new kinds of interaction, this is so much more than chatbots and text boxes.
Miro could have done those comments and suggestions in a chat window (evaluate these audience segments, suggest new product ideas) and given a bullet list. It's the same functionality.
But experientially this is different and better because like I was talking about earlier, it's happening in the context where I'm doing my work and thinking.
So typing prompts is not the UX of the future.
We don't have to rewind the clock to the command line.
We can think about bringing these intelligent systems into the interfaces that we already know, rather than having to meet them on their terms. That's the opportunity.
So I wanna invite you all to be expansive in exploring the different shapes that your application experience can take.
I mentioned before there's a strong gravity to machines that can talk.
Ever since Alan Turing introduced the Turing test 75 years ago, the idea being that the proof of intelligence is talking in a way that fools a human, we've had this idea that thinking machines are talking machines.
That has its role, but it's not the only role.
So I wanna share maybe what the shape of all this could be.
Veronica and I have uncovered the artifact that helps us explore all of this, and, friends, we call it the sentient triangle.
We're still workshopping the name, still working on it.
You know you're in trouble when you get just a few giggles after that. I tried, with the sentient triangle. We're working on it.
Okay, so what this is: basically, it's what's called a ternary chart, a ternary graph of three points.
And it's based on some work developed by Matt Webb of interconnected.org, another Web Directions alum, a really smart guy who's always thinking about how to create systems around forward-looking technologies.
And what this does is say there are three qualities of these interactive systems: grounded, meaning accurate and available (LLMs are not in any way grounded); interoperable, meaning they work with other systems, talk to other systems or other data formats; and radically adaptive, which is the stuff that is new here.
For most of computing, we've been working across the other two corners; radically adaptive, conceived in real time, is the new part.
And as you can see, this brings up what we call four postures.
The postures are the relationships that the experience can take toward the user.
Tools is traditional software: precision, control. Chat is conversation, almost acting like a peer: there's a give and take, open-ended exploration.
Agents are things that begin to have their own autonomy and agency doing things on your behalf.
They decide what's to be done, do it, and decide when to stop.
And co-pilots, which are quieter helpers: autosuggest, doing some things along the way, riding along with you to help make things easier.
The fun part is when you zoom in. Wow, the sentient triangle! I'm really trying to sell it.
We've got 14 specific experiences that you can explore and roam around through here.
I don't have time, alas, to go through all of these, but there's more than one intelligent interface model to follow.
There's a lot to explore.
A lot of these overlap as you can see, and you can combine the experiences into one.
So if you're interested in exploring this, here's a good use for a text box.
You can actually find the sentient triangle by searching for "the shape of sentient design" on Google or your favorite search engine.
Alright, so what we're talking about here is thinking about what if we put this everywhere.
The examples I'm sharing are pointing to entirely new experiences and that can feel like a lot.
So lemme pull back a little bit and reassure you.
You don't have to turn everything upside down to engage with this.
In fact, that's not the place to start.
So there are powerful interventions that you can make within traditional interfaces within your everyday practice and a lot of these use old school machine learning that's been around 10 or 15 years.
So when we think about ai, we tend to think about whatever's the latest thing, but in fact, we have this whole kind of realm of machine intelligence, including very reliable and sturdy algorithms around recommendation, prediction, classification, and clustering.
This is the stuff that machine intelligence is actually made of.
Mostly machine learning categories like we see at the left, as well as their applications; some of this stuff has been around for decades and decades, since the fifties.
So as we think about how do we use this, predictive keyboards are just this great example of everyday machine learning, the statistically most likely text showing up above the keyboard outta the way as a suggestion.
It's a simple intervention to speed up the error prone task of touchscreen typing, right?
It's prediction based on historical data.
Here's the thing that's most likely to happen next.
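A minimal sketch of that prediction-from-historical-data idea, assuming a made-up word-frequency history (real keyboards use per-user language models and context, not a hardcoded table; this only shows the "rank history against what you've typed" shape):

```python
from collections import Counter

# Hypothetical per-user typing history: word -> how often it was typed.
history = Counter({"the": 120, "that": 95, "this": 80, "thanks": 40, "there": 30})

def suggest(prefix: str, n: int = 3):
    """Return the n most frequent historical words matching the typed prefix."""
    matches = {w: c for w, c in history.items() if w.startswith(prefix)}
    return [w for w, _ in Counter(matches).most_common(n)]

print(suggest("th"))  # ['the', 'that', 'this']
```

The suggestions sit out of the way above the keyboard; wrong guesses cost the user nothing, right ones save a few taps. That asymmetry is what makes the intervention feel casual.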
It's a really familiar example: a place where you sprinkle a little bit of machine learning onto an interface everyone already knows.
One of our favorite examples of everyday machine intelligence: Google Forms.
I think we've all just interacted with it today, if you're in the trivia pool.
When you add your questions, you choose the format of the answer that you want, and they've tried to do the best they could with some icons, but it's a lot.
It's a lot to absorb.
And so what they do in order to make it a little bit easier is that they've made it so that it listens to the question that you're typing.
So look what happens when you start typing the question text and how that changes the answer format.
"How satisfied are you" maps to a linear scale.
So they added a little machine learning to the mix to look at your question and classify it to the specific answer type.
So "which of the following apply" maps to checkboxes.
Just this convenient bit of intelligence to make the process easier.
So in this case, Google Forms had billions of human labeled examples, really high quality data to get this transformation.
And they're not giving you the answer.
They're not deciding for you, but by the time you get to the field, they've teed up the likely response that will be most helpful.
So these are not crazy, right?
This is just what Veronica and I call casual intelligence.
Sprinkling a little bit of modest intelligence into everyday experiences to make them feel easier, to reduce some of that friction. Give it a dash of AI; it doesn't have to be a big deal.
Start to make it the assumption: embed machine intelligence everywhere.
Get cozy with casual intelligence.
This is a way to start getting into these things.
What happens when you weave intelligence into digital interfaces?
That should be the question on all designers' minds now.
So the goal of this though, is to elevate those experiences, how can we actually build experiences that amplify our judgment and agency instead of replacing them?
And it's worth saying: we're still learning the grain of these systems. As we've seen, large language models especially are unreliable for certain things, dangerous in some cases.
So we have to be asking what could possibly go wrong.
We live in a very polarized society right now, and the conversation about AI is equally polarized: we have people with outrageous levels of hype and optimism about it, and equally, people who are saying it's terrible and useless.
As in most things, the muddy middle is probably most true; both sides have real points.
So you look at all these genuine superpowers that this technology now gives us to create new and compelling kinds of experiences.
But on the other side, look at these risks and the challenges and the threats.
These are in no particular order, but they are major. This stuff is real.
Large language models are the purest distillation of extractive capitalism that we have seen yet where giant companies run by billionaires, take all the value of this content and give literally zero to the people who created it.
They've nailed it, y'all. Capitalism: they've got it. And they're doing it with punishingly huge amounts of energy.
None of those things have to be true to enable this technology.
Steve and Martin were talking about this: what if we had actually brought in environmental and ethical considerations when these models were being built? Those people's hearts were not in that part.
So part of it is: how do we actually start to think about managing these pros and cons?
Technology gives and technology takes.
That's not new.
The way that you use it matters.
The values that you apply to it matter.
This is complicated by the fact that humans are really messy too.
So we've got users interacting with these systems.
The designer can easily lose control, right?
That means that as designers, we aren't designing these paths anymore.
There is no happy path.
So the more I've been working with machine-generated results, machine-generated content, machine-generated interaction, the more I've realized that I'm not in control of this anymore, at least not in a literal sense. We have to think at a higher level, of the overall system, which means that now, instead of designing for success, we're designing for failure and uncertainty.
We have to anticipate this fuzzy range of results, and build the guardrails for it.
If there's an infinite number of paths, a multiverse of possibilities that we can't possibly design for, how do we at least try to protect all of the things that we care about: the values, the people, the environment? That's hard.
We're trying to figure it out.
So I think, as we go through this, one of the things we have to do is set expectations and channel behavior in ways that actually manage the system, and try to guide both system and user in the right directions. And interaction designers design behavior.
So this is wheelhouse stuff.
Just new territory.
There's a lot to cover around how to approach this. I don't have time to talk about it now, but it's gonna be in the book (I mentioned there's a book), and I encourage you to check it out.
We've got a lot of different strategies; there are a couple of chapters about this in there.
And of course, in addition to managing all of this, we have to manage bias, establish the right level of trust, and promote data literacy for ourselves and our users.
So when you get down to it, it's not really about the technology itself.
It's enabled by machine intelligence, but ultimately this stuff is not about "how do we use AI?" or "the market says we need an AI product."
It's really: what are the problems we're trying to solve, the outcomes we're trying to enable, and how do we do it with this new material?
How can it help?
It's just software.
It's just another tool in the kit, a new material for us to design with.
It's not magic.
It has strengths and it has weaknesses.
So we have to pull back from some of the magical thinking or some of the negative thinking about it too, and just really look at it frankly.
What are these tools good at?
What are they actually good at?
Not what do we wish they were good at?
What new problems can they solve that maybe we weren't able to solve before?
And at what cost?
Because these things have big costs.
So how can we make sure the benefits outweigh the costs as we work to reduce those costs and the impact that they have?
And those are big things that can feel out of our hands; we're not designing these models. But I think we've seen some strategies today for tackling some of this, even in our own small ways.
And this is the bit that is the user experience: how do we focus on problems? What are the problems worth solving that we can point these things at?
But the thing that I'm really sure of is that the future should not be self-driving.
It's not up to the technology. It's up to us, the people in this room. And if we don't decide the best way to use it, the technology will decide for itself, or its keepers will.
So the future should not be self-driving, which means, friends, that you are needed more than ever. If you haven't yet engaged with this stuff, if you are skeptical of it, if you are fearful of it: all of that makes sense.
But this is the moment that this technology is being formed.
Nothing is inevitable and we can shape it.
What values shall we apply to it?
Because we have this amazing new set of tools that lets us explore a bunch of new possibilities that are genuinely exciting, even if they're fraught.
So this stuff is amazing.
I wanna encourage you to go make something amazing.
Thank you.