Mark Pesce: Thank you, John.

What a lovely way to start.

I am not going to use this stage at all today.

As John said, my name is Mark Pesce.

As we open the final session today, I would like to pay my respects to the traditional owners of this land, the Gadigal people of the Eora nation, pay my respects to their elders past, present, and emerging, and acknowledge that this country was always sacred and never ceded.

It's beautiful.

Everything's okay.

Now I wanna start off with a bit of an embarrassing admission.

I have become addicted to interior decorating, and I need to be clear.

It is a very specific form of interior decorating.

Men's bedrooms, isometrically presented, generally in blue and gray, or blue and gold.

Tastefully, softly lit.

There are so many to choose from.

So many I have lost count.

Hundreds, thousands, all very tastefully decorated.

I show my friends, they remark on the luxurious linens.

'Where can we order these?' they say, and I have to then tell them that none of these exist.

None of that is in the least bit real.

It certainly looks that way.

It feels real. But I'm getting ahead of myself, because if we want to understand why I have become so addicted to this endless, very tasteful display of men's bedrooms, we have to look at the path that we took to get here, and that will lead us to the road ahead.


Welcome to Q&AI.

We are going on a tour.

We have several stops along the way on this tour.

Some of it is historical, some of it is thematic, all of it is in motion.

What we're going to do is a bit of photogrammetry.

We have one subject and we're gonna photograph it from a lot of different angles, so maybe we can actually get a sense of the matter inside.

And every step along the way, we are going to be hearing from someone, a domain expert, probably someone in this room, occasionally, someone who has contributed a video.

We will reflect on that.

We will discuss.

This is a discussion.

This is not me standing up in front of you for the next 40 minutes, keeping you from wine.

This is us engaging in a conversation about a topic that is so new and so weird, and exciting, terrifying, fun, confronting: all of those things.

It's good to have a bit of a discussion about what is really going on. Helping us...

Where are you, John?

Helping us here.

There we are.

The indomitable John Allsopp.

John is going to be monitoring the chat on Conffab, and we will have a microphone at every stop along the way. When we talk to someone, I'll say, 'Does anyone in the room have something they'd like to add, or a question, or something to interject with or propose?' Because we are really trying to get everyone thinking about what all of this means.

Alright, so let's begin.

We're talking about generative AI and we need to go way, way back in the history of generative AI to start.

And by that I mean last year.

I put myself on the beta test list for GitHub Copilot.

And Copilot.

It's like a pair programmer.

If your pair is dead, has become a ghost, right?

You type a comment in, and then the pair programmer types the function that you're referring to in the comment. You haven't actually written the function yet, but it's making a pass at it, and most of the time I used it, it wasn't bad.

Some of the time it would just need a few tweaks.

Sometimes it was perfect; sometimes, particularly if PHP was involved, it would miss the point entirely. But it was interesting.
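The comment-first flow I'm describing looks something like this. This is a hypothetical illustration, not captured Copilot output: the developer types only the comment, and the tool drafts the function body beneath it.

```python
# The developer writes only this comment; the assistant drafts the function.

# return the n-th Fibonacci number, computed iteratively
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

A draft like that still gets reviewed, which is exactly the 'not bad, just needs a few tweaks' experience.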

It was good enough, enough of the time to point to a future for coders, which makes this a very good moment in time to talk to a coder.

Ladies and gentlemen, say hello to Ben Buchanan.

Come on up, Ben.

Have a microphone.

So Ben is the executive manager of engineering over at Quantum.


So, come over into the light.

There you go.

Ben Buchanan: That sounds like a trap.

Mark Pesce: I know exactly.

So you had a bit of a play with Copilot.

Do you think it's a useful... you have, what, 50 engineers on the team?

Ben Buchanan: About a hundred now.

Mark Pesce: Okay.


Every time I ask, it's a bigger number.

So is this the kind of tool that you think is useful for a team?

Ben Buchanan: I think the interesting thing here is it's not, the direct answer is...

Mark Pesce: oh, is it not?

Oh, wait one.

There you go.

There we are.

Ben Buchanan: How about that?

Ah, thank you. Radio voice.

So, yeah, I think the interesting thing here is, yes, it's interesting, it's a tool we need to evaluate, but it was almost fascinating how quickly this has become banal, because I'm evaluating it like any other tool we are looking at maybe purchasing.

Mark Pesce: As opposed to just something that's weirdly magic.

Ben Buchanan: Yeah.

It's, yeah, it's got this 'oh my God, it's AI, it's doing this or that'.

I'm like, yeah, it's annoying autocomplete, and I've gotta figure out, is it worth paying 10 bucks a month?

So the truth of that is, in some languages you're gonna get a pretty good productivity boost.

Mark Pesce: Right.

Ben Buchanan: The languages that are more suitable for this kind of work, they get a boost.

One of our devs thought maybe up to 10%, and I'm thinking, a 10% boost for 10 bucks a month, can you really afford to say no?

Mark Pesce: No, no, no.

Ben Buchanan: Then you do the boring stuff because when you have titles like 'executive manager', you have to do the boring stuff and you go and talk to legal.

And I'm being a bit unfair, 'cause our lawyers are brilliant, and the most fascinating discussion of all about Copilot was about the IP. Because when you rock up to your legal team, you say, 'Hey, we wanna use IP generated by an AI.

We have no clue how that works.

And the vendor's being sued.'

Mark Pesce: Which we will come to...

Ben Buchanan: They have questions.

Mark Pesce: Yes.

Ben Buchanan: So yes, I think it's one of these things, it depends on the language.

You throw something like JavaScript at Copilot, you tend to get something reasonably predictable.

Throw it at CSS.

It's more like comedy.

Mark Pesce: Which tells us a lot about CSS, so yes.

Ben Buchanan: There's a, there's a... yes, piss-up later over beer.

But yeah, the other thing that sort of becomes clear is you're learning like a specialist query language. It's a new query language you've got to get into.

It reminded me, oddly, of very early Web search, when we didn't know what we were doing.


Or also when I finally got Siri in my car and drove around pleading with it to play the right music.

Mark Pesce: Exactly.


All right.

Thank you very much.

Do we have any responses? How many people in the room have used Copilot?

Okay, so a few people in the room.

How many people would use Copilot if they could, if they thought they were gonna be 10% more effective?

How many of you would use it?

How many of you would use it if it were 50% more effective?

How many of you would use it if you weren't going to get sued for touching it?

All right.

All right.

Thank you very much, Ben.

All right.

So Copilot is amazing.

It is so last year.

Because 2022, we can all admit, is the year when things went a little bit exponential, right?

And in April of this year, what did we see?

DALL•E, right?

And this is the prompt I fed into DALL•E this week.

Give me the Terminator as done by Salvador Dali, because why not?

So we have this new generation of text to image tools.

You utter a few choice words, the thing that's become known as a 'prompt'.

The model then takes a weighting of these words against a weighted digestion of pretty much every image it could scrape from every source it could find everywhere on the Web, and then it uses that to synthesize a new image.

Now, this is not something that I think most people thought was coming until we actually had it.

We didn't really have this on our bingo cards for how we were gonna be using AI.

It's taken people by surprise and a lot of people immediately started to be very concerned about the people who actually make their living taking a text description and turning it into an image.

In other words, commercial artists, 'cause that is what they do for a living.

Yiying Li is a commercial artist.

Come on up.

Yiying Lu: Thank you.

It's actually Yiying Lu.

Yeah.

Mark Pesce: Sorry.

Thank you.

Yiying Lu.

Yiying Lu: I can be a cousin.

I can have a cousin whose last name is Li.

No worries.

Mark Pesce: Now most famous perhaps for creating the Twitter fail whale, which I think is shown behind there.

Which greeted Twitter's users during the early years, and I'm fairly sure will greet them again at some point during the World Cup.

How does DALL•E make you feel as an artist? Come, come into the light.

How does it make you feel as an artist?

Yiying Lu: This is a great question.

He already warned me.

It's gonna be a hairy question.

And one thing I learned when I talked to the original artist who designed the 'I Love New York' icon, Mr. Milton Glaser.


I actually showed him a video.

This was back in 2017.

There was actually a video of Miyazaki Hayao, the Japanese animator, I'm sure a lot of you are familiar with his work, his reaction towards AI replacing the animators.

And so I was showing him the video of Miyazaki Hayao's reaction towards AI.

He was furious.


And I asked him, 'What do you think?'

And he said to me, 'the only thing that really matters is your relationship with the people and the objects and the things that you live with.

Nothing else matters.

Fame, money, reputation', in his words, 'are bullshit'.

And he said 'the only thing that really matters is what's real, is your relationship with others'.

And to answer your question, I think, to me as an artist, you can say I'm a commercial artist or not. Because the emojis, so far I'm not making any money from them yet, but I did it because it matters to me.

I did it.

It's a way of expression.


And the reason why we create art, whether it's for commercial usage or not, is sharing your experience, sharing your voice, and sharing who you are and the experience that you have as a human.

I don't know if you know if this is actually a simulation.

We've been talking about it and the whole thing is about the experience, right?

If you really think about it, and then you strip away the blue, I mean, the green screen.


I was just at the Sydney Harbour Bridge and the Opera House yesterday.

And I took a photo, I was working with a photographer.

We took a photo there.

I was like, I've never actually taken any photos.

I've been living here for 14 years.

I've never taken a photo with Sydney's iconic architecture.

And when I was in the US people were like, 'there's no photo of you living in Sydney'.

How do you prove you live there?

I'm like, that's a good point.

And I could easily do a green screen.

I could easily do that and then have a photo and I have to do this 'cause I'm very Asian here.

But I could easily do that.

But in my heart, I know I never had that experience.

I know that's fake.

And to others, they will look at it and they will believe that's my experience, but I wouldn't buy it because I didn't have that experience.

Mark Pesce: Do you think you are in competition with DALL•E?

Yiying Lu: We work together.

I've been using scanners and photocopiers, and I've been using computers since I was little.

We work together.

There is no 'us and them'.

How do I know I'm not an algorithm having a human experience?


I had this epiphany when I was little, when I was reading this book by Lao Tse, who wrote the book Lao Tse, which you probably know as the Dao De Jing, and the translation is actually really good.

I found the bilingual translation.

It's called 'Truth and Nature'.

I think it's actually a more accurate translation of Lao Tse.

And what's really interesting is there was Chuang Tzu, who's another amazing philosopher, and he was having this conversation about dreams.


The man and the butterfly: the man dreamed that he became a butterfly.

And he was like flying and it was very freeing.

And then he woke up.

He's like, oh, I'm a dude.

But wait a minute, am I a dude that dreamed that I became a butterfly, or am I a butterfly that just dreamed that I was a dude?

There's no way of knowing.

Only your experience will tell.

So I would say, I gotta make sure... DALL•E, hi, I'm introducing myself here.

If we are gonna be in competition, I wanna work with it.

Mark Pesce: Okay.

All right.



Thank you.

All right, so DALL•E was the first taster course, and it wasn't very long thereafter that we got another one of these called Midjourney.

Now, Midjourney.

Insofar as any of these systems has an aesthetic, Midjourney's is the most pronounced.

To my eye, everything that Midjourney creates looks like '70s prog rock album covers.

And I am not the only person to have noticed this.

And this is an example of one of the works that was generated by it, which was then promptly entered into the Colorado State Fair's art competition for digital works.

And it won.

And all of a sudden there was a moment, and people went, 'Is this art?' Which is exactly why we're gonna talk to Robin Backen.

Robin, join me.

So Robin is an artist and she teaches at the Sydney College of Art.

Robin Backen: That's really intense.

I'll look at you.

Mark Pesce: There you go.

So, you had a student who came to you with a question?

Robin Backen: Yeah.

In September... oh no, at the beginning of this semester, actually.

So I work with the third year students at the moment, and one of my students came to me and said he wanted to make a work which was about being encased in the womb of his mother.

And I said we can try and do that.

And he said, 'I wanna make it in 3D', but he's actually a digital boy.

And so we looked at knitting it, we looked at making it with twigs.

And then after about a month of him playing with that, he wrote me an email and said, 'I found another way'.

And I had never heard of Midjourney at this point.

And he said, 'I found this tool called Midjourney and it does everything I want'.

And I thought, 'Oh my God, what in the hell is this?' And I looked at the image and it was a beautiful image, but he had about five images and they all looked exactly the same.

And they were the idea of a nest and a boy sitting in a nest.

And he said, am I allowed to use this tool?

So I sat there for a while looking at the email and thought this is a very interesting question for me because it's a tool.

But for me, the question was where was the art in that tool and how was he actually going to manipulate that tool?

Mark Pesce: And...?

Robin Backen: So I didn't write back to him.

I left it, because we have tutorials where we have face-to-face contact, and I just thought, I need to discuss this with him and I need to understand, 'what is it that he's seeing in this tool that I'm not?' Because I looked at the tool, and I did do a little bit of research for myself after talking to him.

Well, reading his email, I noticed that all of these images looked exactly the same, slight variations, but the tone of them was like fantasy images, or 'bedroom art' as I call it, from 17-year-old boys.

You may know it going back into the surrealist world.

And so I did meet with him and I said, let's try it.

Let's have a look and see where you go with it.

But I think you have to find a way of turning it into a tool that you transfer into your work.

Mark Pesce: So in other words, it's a matter of that relationship, which is one of the through lines we're starting to see here, that unless you build a relationship...

Robin Backen: I think that is...

Mark Pesce: it's not really art?

Robin Backen: For me, I think it's about the way you use it as a conceptual tool.

Not as the actual tool.

You put a few words in there, it spits it out and you've got an image. I think it's about how you put the words in, how you then take those different images that the different words generate, or what happens when you put in five words and you remove one, or you add a different kind of tone.

What is the shift in the imagery that you get?

I think there's some very interesting things you could do with it, but what you come out with, I haven't done enough of it, I don't know enough about the actual tool.

But it's like any tool as an art maker: it depends upon the person who's behind the tool, their brain as well as their hand.

And so I think that for me is a really important part as to how we actually work with these AI generative tools because they need to be manipulated.

Mark Pesce: All right.

Thank you very much, Robin.

Okay, so there's this really interesting through line about the relationship.

Now, both DALL•E and Midjourney are commercial tools. You have to pay to use them.

You actually get some free credits on both of those and you can use them for free.

But all of that was prologue.

'Cause at the end of August, the entire world of generative AI had an earthquake.

It changed dramatically with Stable Diffusion.

Prompt-based image generation, just like the others; unlike the others, it was 100% open source.

You could install it, you could use it, you could run it, you could train it.

You could do whatever you liked with it, which in my case included finding the prompts that generated an infinite series of incredibly tasteful layouts of men's bedrooms.
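Because the weights were open, 'install it and run it' really did mean a handful of library calls. Here's a minimal sketch, assuming the Hugging Face diffusers package (from Ron's team, coming up next), hardware that can hold the weights, and the openly published v1.4 checkpoint; the prompt, of course, is mine.

```python
# Sketch of running Stable Diffusion locally with Hugging Face's diffusers
# library. Assumes `pip install diffusers torch`; the first run downloads
# the open-source model weights.
def generate(prompt: str, model_id: str = "CompVis/stable-diffusion-v1-4"):
    # Imported lazily so the sketch can be read and loaded without the package.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(model_id)
    # The pipeline runs the whole diffusion loop and returns PIL images.
    return pipe(prompt).images[0]

if __name__ == "__main__":
    image = generate("a tastefully decorated man's bedroom, blue and gray, softly lit")
    image.save("bedroom.png")
```

That same openness is what let people shrink and quantise the model until it ran on far smaller devices.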

Ron, come and join us.

Ron Au is a front end engineer at Hugging Face.

The folks who... you are gonna have to explain that in just a moment.

Ron Au: I will.

I will.

I might.

Mark Pesce: You didn't create the model, but you've created most of the tools that folks like I use to use Stable Diffusion.

So first explain that and then tell us what Hugging Face does.

Ron Au: I haven't been long in the software industry but when I joined and joined conference speaking it was post covid.

I knew that online talks can be boring.

You can alt tab, you can go off and do your dishes.

You can be talking to your partner.

So in order to entrap people and make them pay attention to me, I cut out my head and made it float around the screen so that you were forced to pay attention.

It worked.

Mark Pesce: Alright, that actually makes sense.



So what does Hugging Face do?

Ron Au: So Hugging Face does a lot of things in AI, and you'll detect a theme here.

We create APIs for front-end engineers to quickly pull inferences from models.

We host models for people to download.

We host data sets for people to fetch things easily.

We create libraries like diffusers and transformers so that people can engage with AI easily.

The thing with all this is that we create tools for other people.

Our mission is to democratize AI, and good AI for that matter, and the thing is, AI traditionally, at least if you were around 10 years ago, was the realm of large institutions.

You either had to be a FAANG company or you had to be a rich university.

Or if you were doing it yourself, you had to spend 10 years doing it for your PhD.

But it's like a joke that you weren't there for. If you've ever had a friend who tells you a joke and they're laughing every second word, and by the end of it you are not laughing, and they just go, 'You just had to be there.'

That's not fun.

So AI is great and we wanna democratize it for everyone, and we want, say, everyone in this room, for example, to be able to interact with AI if they want to.

And like...

Mark Pesce: Just quick, how many folks have used Stable Diffusion?


So actually fewer than I thought, but still some.

All right, go on.

Ron Au: Yeah, yeah.

With GitHub, for example, a lot of the world-changing software out there is on GitHub.

But GitHub itself doesn't create this software, right?

We're the same way.

Or, we do create some models.

We do train some, like BigScience and BLOOM.

But for the main part, at the core, Hugging Face wants to be an enabler.

We want you to be able to interact with Hugging Face... sorry, interact with Stable Diffusion.

Earlier we saw DALL•E, we created a version called Mini DALL•E, so that if you knew how to type on a computer, you knew everything you needed to use DALL•E.


Mark Pesce: Alright.

So in other words, you have commercial clients that you work with, but you still release a lot of your work as free and open source so that people can have a go?

Ron Au: Exactly right.

Mark Pesce: All right.

What's the end game there?

Is Hugging Face trying to take over the world?

What's going on?

Ron Au: Yes.

In short, yes; that's the short answer, actually.

But to use a cliche, a rising tide lifts all boats.


And when you go in open source, everyone can contribute.

Part of democracy is that it's for the people.

Mark Pesce: And here's the thing, that gets released first.

Stability AI releases the model, then you release the tools and all of a sudden it was the big bang of diffusion models.



And then we will go through some of this.

All right, Ron, thank you so much.

Have a seat.


So, have most of you heard of Stable Diffusion?

Yeah, you have heard of it even if you haven't had a go. And I just want you to know, because it'll come up later: I've got it running on my iPad.

Alright, it is in fact available on an iPhone and an iPad now, because it was released as open source, and someone said, 'I bet it doesn't need a huge PC with a huge whackin' GPU in it.'

And oh, by the way, there's a guy who decided that he was gonna make a business out of presenting an infinite series of tastefully redecorated home interiors.

You upload photos of your rooms, the AI will redesign them and feed them back to you.

You can use it for free or you can get a subscription and he's making $10,000 a month at that.

Now it's only been up for two months.

So my mania about tastefully decorated bedrooms is apparently monetizable. Okay.

So we can generate interior designs.

It's okay at writing code.

What happens if you take those two ideas of design and coding and combine them?

And I tried a bit of an experiment with DALL•E.

I said, all right, DALL•E, I want you to give me some website wire frames.

And I was very generic about it: draw some website wireframes for me, for a website redesign.

And that was precisely what it did.

Without being overly specific, 'cause I wasn't particularly specific.

But here's the thing, I am also not a domain expert in that area, so I don't really know what questions I should be putting to DALL•E in its prompt.

So let's talk to someone who does. Come on up, Oli.

So Oliver Weidlich is Director of Design and Innovation at Contxtual [sic].

So Oli, all right, let's just say that within the next 48 months, and I'm probably being conservative there, right?

You will have a service that you can subscribe to, you're gonna pay $10 per seat per month, like Ben will, and given some prompts and perhaps some drawings, it will go off and generate that infinite series of wireframes and similar interface proposals. What would that mean for the way your team works?

Oliver Weidlich: So I think the first assumption is that those wireframes would be useful.



And the input to wireframes comes from both customer engagement.

So that's what my team does.


So we gather that information and also businesses, clients that we work for.

They have aims and purposes, things they're trying to achieve through those wireframes.

So it's a very nuanced thing; you're trying to accumulate all this information.

And one of the things that you need to do with a text prompt is write it down.


Now, writing these things down can be really useful, right?

But it could be potentially very time consuming to get it in exactly the right way, to push all those wireframes out to do certain things.

And that's the generation of things, and I think there's a huge opportunity for creativity.

And exploring the more creative side of some of these aspects.

But wireframes should be about functional usability, and to my knowledge the AI isn't aware of that yet.

They're not.

It's hard to describe the outcome that you want; it would be faster just to do it.

So put those boxes on.

Jordan Singer came out with Designer, built on GPT-3, in mid-2020.

And it's I can write that I want a box with a square in it and a button that's a watermelon.

But I can do that pretty quickly.

Like the time it takes me to write that text might actually be longer than that.

And because we have design patterns and because we have bunches of UI elements and design systems, we've got a great starting place, right?

Mark Pesce: So I wanna nail you down here because what I hear is you are attributing, there is a unique human creative capacity that none of these generative AI systems is going to be able to either emulate or assist with in the next 48 months?

Oliver Weidlich: So I don't know, but I think what would be really interesting is if you could input particular aspects.

The... and UI guidelines.

Mark Pesce: Yep.

Oliver Weidlich: That information is there.

A design system.

If you could input those things into the model holistically, and then have an iterative process beyond that.

But a lot of it's around nuance when you are moving pixels around, especially on a wireframe, which is very functional, right.

When you're moving those things around, you want to just tap things this way or this way, and to describe that might actually be harder than just doing it.

Mark Pesce: Okay.

Alright.

Oliver Weidlich: And may I point out, Mark, in your bedroom designs?

Mark Pesce: Yes.

Oliver Weidlich: In terms of usability, those bedside tables always seem to be about 60 centimeters too far away.

You can't put a glass of water down very easily.

Mark Pesce: No, I.

They're not perfect.

They're just beautiful.

Oliver Weidlich: And if it's not perfect...

Mark Pesce: This is the thing.

It's not that they're perfect.

Some of them are alien, right?

Like beds that are floating above the floor or are half in the floor.

I didn't use those, right?

Oliver Weidlich: So I'll, I'll, I'll use that line with my client.

They're not perfect, but they're beautiful.

Mark Pesce: But they're beautiful.

They're so beautiful.

All right.

Thank you so much, Oli.

All right, John, are we getting, are we getting comments from the room?

Because we're now getting into an area where you should be feeling at least a little bit uncomfortable about the locus of your set of talents and where AI is touching that set of talents.

Does anyone wanna make any public comments about this, or John, do we have comments from the chat?


We'll start, we'll start with the room.

John Allsopp: Yeah.

Oh, sorry.


Questioner from floor: I want to ask a question.

And coming back to the tools and how we use them, I just had this thought: when photography first emerged, the artists' response to that was, 'is that actually art?' And I wonder if we're having the same conversation now.

Mark Pesce: Oh, we so are, yeah.


Questioner from floor: Photography is so much to do still.

Even with my phone.


No, no, no.


John Allsopp: Yeah so, we just had Adam Bottega.

Who himself is a digital creator, designer, artist.

And he's linked to his Instagram thing so we can follow his art.

But so Adam observes, 'Where do you think the AI image gen tools will lead in the future of art and technology?' So it's broadly what we're talking about. I guess it's about, like, how real something is.

Mark Pesce: Yeah.


All right.

So we will get to that, right?

All right, we'll get to that.

We need to keep hurtling through because I am keeping you from wine and I know this.

We're gonna run a little bit over.

Forgive me.

But, here we go.

All right, so if that weren't enough, all right.

What we saw, and this was last month, so November, right?

What we saw... oh, sorry, Oli, I forgot to put your slide up.

I know.

But everyone knows you.

All right.

What we saw was, at the beginning of the month, Microsoft announced at one of their conferences that Bing was getting a text-to-image generator, which was gonna be integrated into Microsoft 365.

And then there was this other little company, you won't have heard of them, that announced they were integrating it into the tool set Canva.

Which is a great opportunity to talk to Jessica Edwards.

So Jessica is a creative technologist at Canva.

So how is generative AI working its way into the Canva tools?

Jessica Edwards: It's taken us by storm; we've also just jumped on as well.


In the sense that, on the surface level, we're actually saying, 'Oh yeah, we're also adding a text-to-image generator to our thing.'

Mark Pesce: So it's, it's like a check box now?

It's yeah, we did text to image?

Jessica Edwards: Oh yeah, check.

We are ticking that SEO box.

But we have been using AI in our tools for a long time.

Styles, like if you have a style and a brand, you can apply it to a template.

We have background removal. But now AI is center stage.

It's, it's the selling point.

It's what gets people in.

It's interesting now.

It was this thing that we just didn't surface or talk about.

Mark Pesce: So one of the nice things about Canva is it presents an open marketplace for a lot of creatives to connect with customers who need their creativity.

Now you're throwing AI into that mix.

Is that potentially gonna create what we used to call 'channel conflict'?

Maybe we'd just call it the threat of automating work out of existence.

Is that part of the equation here?

Jessica Edwards: For us, when I think of, say, Microsoft and Microsoft Designer doing this text-to-image, or Adobe, we've actually always had a very large 'content moat', we call it.

We really invested in very strong relationships with, say, our photographers and our illustrators, making sure we pay them well and compensate them well, and we try and connect worldwide, 'cause we know that even in-house we're not gonna get the best content for, say, Japan, because how can an Australian design for a Japanese market?

And in the same way, how can AI surface all these sorts of common use cases?

And I feel like, at Canva, we know about the common use cases.

We have tens of thousands of, yeah, any sort of content really.

I'm not saying this as a selling point, just as a fact, versus, say, generative. But we don't necessarily have 'fail whale playing the piano underneath an oak tree', and it probably doesn't make financial sense to actually invest in that, but that's what, yeah.

We're hoping that...

Mark Pesce: so the AI will have this corner on the surreal, but for everything else, there's going to be human beings?

Jessica Edwards: Yeah.

Mark Pesce: Possibly.

Jessica Edwards: Yes.

Ideally yeah.

And we take it, yeah, very... we know how important our creators are.

We are looking into this space, but we still wanna compensate them for their contribution to models, which I think is something that's been missing in the equation.

A bit like Stable Diffusion.

As open as it is, it's maybe...

Mark Pesce: Funny you should mention that, 'cause that is exactly the next place we're going.

Thank you very much Jessica.

As Jessica raised: so what's in these models?

Where does it come from?

Who's doing it?

For all the benefits?

And I think the first thing that you have to admit is that DALL•E, Midjourney, Stable Diffusion, effectively, and it's weird to say this, they have all the things, right?

If it exists as an image that can be hoovered up by some sort of Web crawler, it has been hoovered up into these models.

Which is why at this point we need to talk to an economist because this is getting into issues around property and what happens and all of that.

So Nicholas, please join us.

Nicholas Gruen is an economist.

Nicholas Gruen: I am.

Mark Pesce: And a long friend of this conference.

Now, Nicholas, one of these data sets has effectively hoovered up 10,000 years of images.

We have all of that captured, and actually I know it's more like 30,000, 'cause it can spit out images in the style of the Lascaux cave paintings.

So we're talking 30, 35,000 years.

Nicholas Gruen: I think the copyright's run out.

Mark Pesce: So is there any.

Is any of this recognized in any of the models?

Are we mining an undiscovered or undeveloped resource?

What's going on here economically?

Nicholas Gruen: Yeah.

Economists have had a kind of traditional weakness thinking about renewable resources and some effort has gone into that in recent years.

But there's a kind of a spectral signature in economics which continues to ignore this, and many of you will be fam... well, you will all be familiar with the economic term 'public good'.

And a public good, we think, we get this from economics, is something which couldn't be built in the marketplace, 'cause you couldn't sell it, because when you made it, it's available to everyone.

And so we get the government to build it.

But in fact, as I've been following a line of thinking for a couple of decades now, when you start thinking about public goods, you come across more elemental public goods.

I'm using several now, and the government didn't build the language that I'm speaking.

Or the culture that we inhabit and replicate as we use it. And so one of the things that I like to do when people are trying to think about these things is to tell them of an expression that they know very well, and that is the 'free rider' problem.

And everyone in this room will know about what a terrible problem the free rider problem is.

And for instance, if it costs you a billion dollars to get a drug through regulatory clearance and someone can make it for a couple of cents, then you've clearly got an economic problem and you've got a free rider problem.

And it is a mark of how biased our culture is, how twisted, or just particular, how it's gone in a particular direction, that a more important thing about free riding exists as a kind of a vacuum in our culture, and that is the 'free rider opportunity'.

Mark Pesce: So the opportunity maybe being that this, all of these images represent the, the human inheritance...

Nicholas Gruen: That there are, they're a common base for all of us.

That's true.

But just listen to the two words 'free riding'.

It's free.

So, in principle, that's a good thing.

Of course, it can lead to a problem.

There are some situations where, if you go, 'oh good, this is free, I should have it', it can produce the free rider problem.

But if I say, 'oh good, there's a language, it's called English, I can use it' there is no free rider problem.

There is a free rider opportunity.

And the free rider opportunity, our having mobilized it as a species, is why we are sitting in Sydney in a conference room and not swinging from the trees like our forebears: because we copied each other, and in copying each other we invented language, and we invented culture, and we invented how to bring up our kids.

And then we invented government and we invented all kinds of things.

And that's the real miracle.

And of course, like any big scale miracle, it comes with some problems that have to be organized.

Mark Pesce: Okay...

Nicholas Gruen: that have to be sorted out.

Mark Pesce: Perfect.

Thank you so much, Nicholas.

All right, so [applause] the free rider opportunity slash the free rider problem has already had some people focusing their efforts on it.

At the level of startups.

So this is Michael Osterrider.

He's gonna tell us what he does.

Michael Osterrider: I'm Michael Osterrider, the CEO of Vaisual, a company providing all tools to generate synthetic media.

What we see nowadays is the biggest art heist in history.

The only way for artists and companies of the IP industry to stop this is to take legal action against any violation of the law by companies scraping personal data and copyrighted material without permission.

Beyond that, a legal alternative for the AI industry is currently lacking.

We at Vaisual create an ethically and legally sound alternative to scraped datasets with...

Mark Pesce: So we'll take a look at that in practice in just a moment, but you can see already people are going 'ok, this is a free rider problem, so we're going to come up with a solution', which is a commercial solution to address the free rider problem.

But there has been a growing tone over the last few months around what's going on with these models.

And so when Stable Diffusion 1.5 was released, which was only about a month ago, it came with this essay from one of the folks at Stability AI talking about why they spent a little time waiting to release the model.

They were making sure that the model was safe and they were making sure that they were preserving the values of the folks that they were collaborating with.


Stable Diffusion 2 was released last week with a similar sort of explanation in which they also said 'we had a choice.

You can either include things in the model that are not safe for work, or you can include images of children.

You cannot include both of them in a model.' Understandably.

And so they made the decision to include children and the, the not safe for work stuff simply was not put into the model.

So people are now thinking about the consequential nature of the choices that they're making about their models.

Now, as I said, there was an app that was released-JJ, are you in the room somewhere?

Hiding in the back somewhere?


Oh, you're over there.

There you are.


So JJ Halans told me on Twitter back when we were using Twitter, that there was an app that would run on my iPad that would run Stable Diffusion.

It was on TestFlight.

I went down and, and ran it immediately, and it was really, really cool.

And I told everyone on Twitter to go and grab it.

And then 24 hours later JJ's like, oh, I'm bummed.

He's withdrawn it, he's taken it out of TestFlight.


and what had happened was the developer had actually then written a very long and really, really clear essay about all of the reasons that he felt that this was a free rider problem and not a free rider opportunity, and that he could not in all ethical goodwill release an app that did that.

Now, mind you, I'm using an app called 'Draw Things', which is on the app store now.

You can all put it on your iPhone or your iPad, and it does all of that now, because even he admitted that just because he wasn't going to do it didn't mean it wasn't going to happen.

So there are people now starting to think about what's going on with these models and how to make these models ethical and clean.

And then two weeks ago, an engineer decided that he was going to take on Microsoft and basically sue Microsoft for sucking up all this stuff that they didn't own out of GitHub and then selling it to people for $10 a seat per month.

And this is when we need to talk to an IP lawyer.

So I want to introduce you to my friend Brent Britton.

He and I have been friends for almost 30 years.

He's interestingly probably the only person who's a graduate of both the MIT Media Lab and law school.

And so here's an interesting take on all of this.

Brent Britton: And so Mark and I are accustomed to having long conversations about technology and the law, and we, we were having one such conversation today just, just prior to this recording.

And we talked about the fanfare associated with what's going on in generative AI.

And Mark said something which is, 'We're, we're, we're both old men and we've seen it all before.' And I think there's no more suitable headline for this momentary presentation that I'm giving than 'we are old men, and we've seen it all before.' And more to the point, so is the Law.

In my lifetime, in my and Mark's lifetime, there have been a number of instances where a new technology or a new use for old technology has been deployed in society to great fanfare, often, almost universally in fact, amid cries of 'it's the wild west and there's no law, and the law doesn't know how to deal with this new technology'.

This happened in the nineties when the internet turned into the Web, and it happened during the past decade with the, with the full deployment of blockchain and cryptocurrency and, and, and DAOs.

And, and, and everyone was wrong.

Those cries were wrong.

Both times.

The law did quite well.

And, and was fully prepared to deal with the vicissitudes, occasioned by the new technology.

Even though it turns out you could freely copy digital music back in the nineties, that in fact was for the most part a copyright infringement and still is today.

And even though everyone thought it was the wild, wild west, it turned out that eventually the Law creaky though it may be sometimes will come around to focus its energy on you.

And the same is true of generative AI, okay?

We have lots of laws in place that govern what happens to ownership of creative works and one of those laws, at least in the United States, is that creative works can only be authored by humans.

If human creative input was not inserted into the creation of a work of art, then it is not ownable in a very real sense.

It's not, certainly not owned by the AI.

And it's not owned by the, by the owners of the AI because there's no author.

Authors have to be human.

So this needs to get integrated into people's way of thinking because everything created by a generative AI that doesn't have some modicum of creative contribution by a human being is simply not ownable and is by definition in the public domain whether it's shared universally or otherwise.

The other thing to consider is you look at this Microsoft case and one of the ways to commit a copyright infringement is not just to literally copy something, but to have access to it, and then to create something that's substantially similar to the original.

Because the access is what gives you the, the knowledge.

Well, the generative AI goes out and reviews all the GitHubs and then starts creating code that's substantially similar to what it reviewed.

That's the very definition of a copyright infringement.

So we may wanna rethink these issues.

We may want to think about what it means to be a legal person.

In the United States, companies have a lot of rights of, of people.

But at the moment the ownability of generative works of art and literature is quite clear, and the fact is it's not ownable.

It's in the public domain.

Mark Pesce: Alright, so there's the official legal opinion on this and now I want you to meet Anthony Breslin.

So Anthony is a Melbourne based artist, quite well known.

Does a lot of quite interesting urban art.

And he worked with Vaisual, the company whose CEO we heard from a little while ago, because he decided he wanted to get out in front of this trend, and so he created 'the Brezinator'.

All right.

The Brezinator is an AI that has now been trained with a specific set of prompts, such as 'Breslin Blue' or 'Buttons and Collage', that will then get the text to image generator to turn out art that is similar in nature to art that he would've designed.

He did this willingly because he was experimenting, but he also did this willingly because he has at least two kinds of cancer right now, and he's looking at this as a way for his art to be able to live on after him.

I'm thinking this is a first pass at a soft upload.

It's what it is.

And so he's used Vaisual, and now, Jessica, I was told that Canva was licensing this. Do you know if that's the case?

I was told that Canva was involved in the Brezinator, so maybe that's the case and that's gonna be part of it-I don't know if that's actually true.

So where is generative AI heading?

There's a wonderful quote from Mark Twain, which is: history never repeats, but it sure does rhyme.

And I remember in 1990 when I was setting up one of my best friends with a color Macintosh and a 1.0 copy of Photoshop because he was an illustrator and he was an artist, and he was gonna go and learn how all of these tools worked.

And I remember that there was a bit of a panic about Photoshop in 1990 because you'd never be able to trust any image you ever saw again.

Pretty much every image that we see that is presented to us over any broadcast medium of any sort or streaming medium has passed through Photoshop now.

So we are completely off the map, but that's an interesting moment to remember.

There's another moment.

So 29 years ago this week I finished a surf of the entirety of the world wide web.

I'm not kidding you.

I went to the master list at CERN, started at the top on Monday, finished at the bottom on Friday, and by February of 1994, it wasn't possible to do that anymore.

The Web was growing so fast that there was no way any human being could possibly keep up.

There was too much happening from all quarters all at once, and I feel like this is exactly the moment we're at with generative AI.

What have we seen in the last year?

We've seen co-pilot.

We've seen DALL•E.

We've seen Midjourney.

We've seen Stable Diffusion.



Another really attractive bedroom Layout.

All right.

But just in the last six weeks, you know what else we've seen?

We've seen Meta AI producing something called Make-A-Video, which makes videos from text prompts.

We have seen Google's DreamFusion, which creates 3D models from text prompts. That's going to be really important because, let me tell you, the Metaverse is going to use more models than any number of people can create in their spare time sitting in front of whatever 3D tool they might be modeling in.

We're gonna need text to model to be able to populate the metaverse in any meaningful way.

And then my particular favorite, this came out of, I think the University of Toronto, is Human Diffusion-Model, which is a text to humaniform animation tool so that you can say, type in a prompt like 'the person did 10 jumping jacks', and you will get a video out.

With a model that is correctly articulated doing the jumping jacks. That is a human task right now that takes fleets of animators working very carefully with human models and positioning, and we're moving to a place now where that is going to be automated. And you think about where film production is going with virtual sets, like The Mandalorian, where it's all happening on a screen behind the actors; now all of the cast that's on the screen behind the actors will also be modeled with these kinds of tools.

The best thing about that is that Human Diffusion-Model you can download and install if you're running Ubuntu.

I have it running at home.

So these tools are out there and they're being used widely.

What do we learn from this?

We learned that everything is exploding.

So what's left for us humans to do?

And this is where I have to admit to a failure, because I tried to get an ethicist.

The ethicist was not checking his communication channels.

I contacted him a month ago.

He got to me at 11 o'clock this morning and said, 'oh, I'm sorry, I'm busy this afternoon'.

And ethicists are great, and they're absolutely necessary, and these days they're really busy.

But I wanna point out something.

It is long past time for us to start doing our own hard work here.

We have blind spots in all of this.

We know in fact that AI gives us particular kinds of blind spots around bias.

What we don't know is the kind of blind spots that we're going to have around generative AI.

I was using a generative AI system down in Canberra on Tuesday, and one of the blind spots was that that system didn't know anything about Australian fauna.

It could not for the love of trying generate a cassowary.

Now I know every human being who has ever seen a cassowary has died.

I get it.

All right, but at the same time, it could not generate a cassowary.

Because the model was simply not trained with one, and so we have these interesting blind spots that we're building into these models now.

So we need to have a think, and I want all of you to think really hard.

What are the questions that we are not asking?

The questions that we should be asking ourselves, our peers, the vendors of the tools, the people who are using and creating and selling these models.

What are the questions we should be asking?

They aren't just questions about property, they aren't just questions about propriety, they're questions about our role.

There are questions about relationship.

There are questions about what is our evolution around these tools.

I'm gonna leave you with a quote.

When John Allsopp and I sat down to plan this session, he said, 'Mark, there's a phrase that is coming to mind'.

I had never heard this phrase before, and now once John said it, I could never get it out of my head.

'There must have been a moment at the beginning when we could have said no.

But we didn't.'

Thank you.

Video of Die Antwoord: Age of Illusion plays. It is an anime inspired series of landscapes and seascapes, with people walking through them.

So. Many. Tastefully. Decorated. Rooms.

a series of similar bedrooms appears one by one.


Mark Pesce - Provocateur - @[email protected]

GitHub Copilot

A number of text editor windows appear. Specific code can't be made out. Code is appearing as though typed in the various windows.


Ben Buchanan

Executive Manager (Engineering), Quantium

Photo of Ben in front of street art.


"THE TERMINATOR in the style of Salvador Dali"

DALL•E generated image of "The Terminator in the style of Salvador Dali".

Yiying Lu



Three women in elaborate gowns in front of an enormous round window in the style of a 19th C painting.

An AI-generated artwork's state fair victory fuels arguments over 'what art is'


Artist & Senior Lecturer, Sydney College of Art

Stable Diffusion

Several tasteful layouts of men's bedrooms appear one by one

Ron Au

Frontend Engineer, Hugging Face

Photo of Ron at a microphone. He has a large lime green cardboard rectangle around his neck.

Screenshot of the Interior AI free Web site. "Start using Interior AI"

DALL•E "wireframes of a website redesign"

A series of computer-generated wireframes


Director - Design & Innovation, Contextual

Microsoft Bing Is Getting An AI Image Generator

Microsoft is adding an AI image generator to its Bing search engine, which enables users to create digital art from text input.

Press announcement from Canva. Headline reads "Turn imagination into reality with Text to Image in Canva"

Jessica Edwards

Creative Technologist, Canva

"All the things" meme image. A simplistic drawing of a stick figure person screaming with an explosion behind them

Nicholas Gruen

economist, entrepreneur and CEO of Lateral Economics

Video of Michael Osterrider

Michael Osterrider

Founder & CEO. Vaisual

Why the Future of Open Source AI is So Much Bigger Than Stable Diffusion 1.5 and Why It Matters to You

Excerpts from the essay Mark describes.

Tweet exchange between Mark Pesce and JJ Halans. Mark describes the interchange.

"On Creating an On-Device Stable Diffusion App & Deciding Not to Release it: Adventures in AI Ethics"

Excerpt from the essay. Text is not distinct.

Microsoft's GitHub Copilot sued over "software piracy on an unprecedented scale"

Excerpts from the article.

Brent C.J. Britton

Founder & Chairman CoreX Legal

Video of Brent seated in front of a grand piano with artwork on the walls.

Anthony Breslin & THE BREZINATOR

Artist standing in front of a canvas.

What is The Brezinator?
  • An AI model trained exclusively on the art of Anthony Breslin
  • Created by AI technology company Vaisual (pronounced v-eye-sual)
  • The tool will go live in late November

Why is this innovative?

  • It is the first time an artist has opted in to transfer the rights to their collection for the training of AI
    • Most artists' works have been illegally scraped and exist in the LAION dataset (as well as others) that train AI, giving the artists no compensation for their works
  • Anthony will be earning a revenue share when people create art using the Brezinator

How will people use it?

  • People can use a text to image online widget to guide the AI to create artworks in the distinctive style of Anthony Breslin
  • The interactive inputs, called "prompts", will decide how the AI-generated artwork will be constructed:
    • The coloring (breslin blue, frog leg green...etc)
    • The subjects (frog, bug, horse, person)
    • The elements (buttons, beads, brushes, zippers)
    • The materials (wallpaper, acrylic, carpet, canvas, paper)
    • The method (collage, drawing)
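The slide's categories suggest how a request to the widget might be assembled into a single text-to-image prompt. As a rough, hypothetical sketch only: the real widget's interface and prompt grammar aren't shown here, so the function name, parameter order, and output format below are all assumptions, not Vaisual's actual implementation.

```python
# Hypothetical sketch of combining the Brezinator's interactive inputs
# (coloring, subject, elements, materials, method) into one prompt string.
# The categories come from the slide; everything else is illustrative.

def build_prompt(coloring, subject, elements, materials, method):
    """Join the chosen inputs into a single text-to-image prompt."""
    parts = [
        f"a {method} of a {subject}",   # method + subject, e.g. "a collage of a frog"
        f"in {coloring}",               # coloring, e.g. "breslin blue"
        "with " + " and ".join(elements),  # elements, e.g. buttons, zippers
        "on " + materials,              # materials, e.g. canvas
        "in the style of Anthony Breslin",
    ]
    return ", ".join(parts)

prompt = build_prompt(
    coloring="breslin blue",
    subject="frog",
    elements=["buttons", "zippers"],
    materials="canvas",
    method="collage",
)
print(prompt)
# a collage of a frog, in breslin blue, with buttons and zippers, on canvas, in the style of Anthony Breslin
```

The string would then be submitted to the text-to-image widget, which handles the actual generation.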

Splash screen of Adobe Photoshop 1.0 for the Mac

Browser window of NCSA Mosaic

Images representing each of the innovations Mark lists appear as he announces them.

Examples from Meta AI, DreamFusion and Human Diffusion-Model.

Meme animated GIF of Ron Paul excitedly waving his hands above his head.

Cartoon image of a female presenting robot standing on rocky ground with scores of robot birds crashing into the ground.



'Rosencrantz & Guildenstern are Dead'

Pays Respects to the Traditional Owners of This Country

Mark Pesce pays respects to traditional owners of this land, the Gadigal people of the Eora nation. He acknowledges that this country was always sacred and never ceded. I am not going to use this stage at all today.

Into Interior Decorating: Men's Bedrooms

I have become addicted to interior decorating. Men's bedrooms, isomorphically presented generally in blue and gray or blue and gold. There are so many to choose from, I have lost count. If we want to understand why I have become so addicted, we have to look at the path that we took.

Q and AI: A Tour Through the Web

Q and AI is going on a tour. What we're going to do is a bit of photogrammetry. John will be monitoring the chat on confab. We will have a microphone at every stop along the way. It's good to have a discussion about what is really going on.

In the Future of Coding With Generative AI

Last year I put myself on the beta test list for GitHub Copilot and Copilot. It's sort of like a pair programmer, if your pair is dead, has become a ghost. Most of the time I used it, it wasn't bad. It was good enough to point to a future for coders.

In the Elevator With Quicksand

Ben Buchanan is the Executive Manager of Engineering over at Quantium. Copilot offers a productivity boost for engineers. It depends on the language. One of our devs thought maybe up to 10%, and I'm thinking 10% boost for $10 a month?

How Many People Have Used Copilot Yet?

Do we have any responses from how many people in the room have used Copilot yet? How many people would use Copilot if they could if they thought they were going to be 10% more effective? All right. Thank you very much, Ben.

Artificial Intelligence Turns Text Into a Picture

New generation of text-to-image tools. You utter a few choice words; the model then takes a weighting of these words against the weighted digestion of, well, pretty much every image it could scrape from every source it could find everywhere on the web, and uses that to synthesize a new image. It's taken people by surprise.

How Does DALL•E Make You Feel As An Artist?

How does DALL•E make you feel as an artist coming to the light? Do you think you're in competition with DALL•E? The reason why we create art is to share the experience. Only your experience matters.

The Art of Midjourney

Sydney College of Art's Robin Hoyle talks about using AI generative tools. She says you have to find a way of turning it into a tool that you transfer into your work. Unless you build the relationship, it's not really art, she says.

Generative AI: The Art of Learning

Both DALL•E and Midjourney are commercial tools. You have to pay to use them. At the end of August, the entire world of generative AI had an earthquake. It changed dramatically with Stable Diffusion's prompt-based image generation.

Is Hugging Face Trying to Take Over AI?

Hugging Face creates tools for other people to use Stable Diffusion. At the core, Hugging Face wants to be an enabler. We want you to be able to interact with Hugging Face. And our mission is to democratize AI.

An AI to Design Your Home?

DALL•E is available on an iPhone and an iPad now. It can generate interior designs. What happens if you take those two ideas of design and coding and combine them? Let's talk to someone who does.

Will Computer Generative AI Generate Wireframes?

Oliver Weidlich is Director of Design and Innovation at Contextual. He says within the next 48 months, you will have a service that you can subscribe to. It will generate infinite series of wireframes and similar interface proposals. But wireframes should be about functional usability, he says.

Art and the Future of AI

Adam Batego is a digital creator, designer and artist. Where do you think the AI image gen tools will lead? It's broadly what we're talking about in the future of art and technology. Does anyone want to make any public comments about this?

Canva's Generative AI integration

How is generative AI working its way into the Canva tools? It's sort of taken us by storm. One of the nice things about Canva is it presents an open marketplace for creatives to connect with customers who need their creativity. But for everything else, there's going to be human beings.

Free-Riding Economics

One of these data sets has effectively hoovered up 10,000 years of images, right? Is any of this recognized in any of the models? What's going on here economically? Economics continues to ignore this. The free rider opportunity is why we as a species are here.

The free-rider problem

Michael Osterrider: The only way for artists and companies of the IP industry to stop this is to take legal action against any violation of the law by companies scraping personal data and copyrighted material without permission. Beyond that, a legal alternative for the AI industry is currently lacking.

Brent Britton on Generative AI

Brent Britton: We're old men and we've seen it all before. Everything created by a generative AI that doesn't have some modicum of creative contribution by a human being is simply not ownable. We may want to rethink these issues.

Anthony Breslin's Art Will Be Available on Canva

Anthony Breslin created the Brezinator, an AI that has now been trained with a specific set of prompts. It will then get the text to image generator to turn out art that is similar in nature to art that he would have designed. Mark was told that Canva was licensing this.

Generative AI and the Future of the Metaverse

29 years ago this week, I finished a surf of the entirety of the world wide web. And I feel like this is exactly the moment we're at with generative AI. It is long past time for us to start doing our own hard work here. We have blind spots in all of this.