The Experimental Future

What is the future of product design using experimental technology from blockchain to voice commands, AI, and IoT?

This talk covers what the future of ethical product design could be. Why and when should we use blockchain, or AI, or IoT? What are the design principles we should keep in mind when creating with new technology? Instead of building the future worlds imagined in the ’60s during the Space Race, what is the future now, and how can we build it?

In the 50s and 60s, the future looked like The Jetsons. Cartoonish illustrations painted a world of mechanised home servants.

In the 80s and 90s the vision of the future became sci-fi dystopia. Think cyberpunk, Blade Runner, Akira. Everything became dark, gritty, cold and blue. While we might enjoy that as fiction, is it what we truly want?

But where are we now? Much of our current technology has gone a bit wrong, like in-home devices spontaneously laughing – very very creepy. Or the issues of sexist chatbots and AIs that almost immediately learn to be extreme-right racists. What future are we setting now?

Caroline’s dream is of a collaborative future.

What if we stopped making humanoid servants (with ML, AI, etc.), and instead created machine agents that collaborate with us? Augmented intelligence, rather than artificial intelligence.

We should be skeptical about the ways people are using or planning to use data. We should expect products to be collecting the minimum amount of data they need to do what you want them to do.

IoT is creating a lot of data, and we need to think about the type of data it’s collecting and where it goes. There aren’t any widely accepted standards for IoT data or security.

What about IoT for good? There are networks that monitor floodwaters across communities, collecting very specific, very helpful data. There are farming applications, like monitors for livestock that track the health and development of the animals.

So what about robots? As creepy as current robotics research is, we are going to use robots more in future. Not just impersonal machines for relatively industrial applications, but very personal things like robotic assistance animals.

Is it possible to use AI for good? For the public good and the civic good? How can we use AI for more than just predictions? What about applications where very dumb AI can be used – simple processing and monitoring? Can AI help with the by-products of humans, particularly in cities? Can AI help with things like sewage, rubbish, rats?

AI as a curatorial assistant. A system pairing photos produced the combination of a painting of a church and a pile of guns. While a human may not have made the pairing because it’s so unlikely, the AI paid attention to direct similarities and not the context a human would have applied. Humans added the poetry and meaning of the paired images.

The future is gonna be weird

We need to stop rehashing the old visions of the future and create our own. We must critique where we are now, but what if we imagined warmer futures – where technology creates a better world and not a dark, blue dystopia. Let’s build what we can imagine.

(dance music) – Hi everyone, I’m Caroline Sinders.

Fun fact: one of those speakers spoke here two years ago, on the day Donald Trump was elected.

He’s an American who has now defected to Canada – I really don’t blame him.

I’m probably gonna tell a lot of really inappropriate and appropriate jokes.

But before we get started – I feel like it would be hard to start this talk, which is about the past and the future, without acknowledging and giving a shout-out to the thousands of Google employees who walked out today against harassment and discrimination. (audience clapping) We hear you, we see you, and we support you. And I think that’s probably an appropriate way to start my talk.

The future is gonna be really weird.

A lot of our concepts of the abilities of robots, and of AI, came from the Space Age and from the 1960s. The 1960s were emerging from the darkness of a war, which galvanised, perhaps optimistically, the possibility of the exploration of space, and the affordances of what that technology could do for us in our daily lives.

The Space Age exploration resulted in these utopian meditations on Tomorrowland.

Tomorrowland brought about things like TV watches, and jet packs, and meal pills, and driver-less cars, and robot maids.

And all these things sound really familiar right now, don’t they? And a lot of these concepts were illustrated specifically, by Arthur Radebaugh.

A commercial artist based in Detroit, best known for his work in the auto industry, Radebaugh created illustrations that helped shape the baby boomers’ ideas of futurism.

And a lot of this talk will actually be a dig at baby boomers.

(audience laughing) It also shaped the ideas of what technology could do. And in fact, we see a lot of those creations today. The iWatch, the Roomba, Soylent – doesn’t that all sound familiar? And let’s fast forward to the ’80s and ’90s, which brought the rise of cyber dystopia.

And why was this vision of technology such a dystopia? Well, technology was being so seamlessly integrated into our lives through this age of sci-fi that robots were becoming human, and confused for humans, and this seemed to be a problem. And the worlds were incredibly grey, and very dark, and very, very cold.

And technology could be embedded anywhere and everywhere in these worlds, either around us or right in us. And to be frank, I did and do love this vision. I love cyberpunk, and at times, I think dystopia’s important.

Because darkness is important, and at times we need to see and understand what that is, to understand how societies can go wrong, so we can start to course correct.

Now not all cyberpunk stories are dystopias, they can be about the underdog rising up, and I still love that vision of what the world can be, and what the world could look like.

Blade Runner, however problematic, is still a favourite of mine.

As is Akira.

Snow Crash, Hackers and Sneakers – those are great, too. But right now, where are we with technology and tech embedded in our daily lives? Children can order cookies from Alexa, and Alexa can laugh spontaneously, and that’s pretty creepy.

(audience chuckles) Tay was taught how to speak like a Nazi, Siri really can’t understand accents, and Cortana was sexually harassed.

Is this the future we imagined with technology? Is this the future that we want? What is the future of new and experimental technology? Amelia Winger-Bearskin, a machine learning technologist and artist, and the director of IDEA New Rochelle, an arts initiative in upstate New York, states that technologists right now are building their parents’ dreams and their grandparents’ dreams. Dreams that were imagined from the 1930s to the 1960s, dreams that were imagined by baby boomers.

And I’d like to add this additional thought to the mix: we’re also building future cyberpunk dreams that were shaped by the media in the 1980s. This future imagines that working-class tasks and more feminised labour would be fulfilled by humanoid robots or chatbots.

And this is something the theorist Katherine Cross’ work also explores.

Why are chatbots, for example, so gendered and named, and why are they so feminised, and why are they assistant bots? And more importantly, as a creative technologist it’s really hard for me to imagine this future of a mechanised metal working class with no humans when, in the U.S., for example, healthcare is still being targeted for defunding.

The 1960s had the threat of the Cold War. But with the looming threats right now of fascism, of perhaps no social services in the U.S., and of calls for white ethnostates, it’s really hard to imagine or create a thing that makes my food faster when people may not be able to afford groceries.

So as technologists, what are we dreaming right now? And this is my dream.

I think of a collaborative future, and that collaboration is with technology.

So perhaps this is actually the best time to introduce myself.

Hi, I’m Caroline.

I’m a designer and artist, and I’m the principal designer and founder of Convocation Design and Research, as well as a fellow with Harvard’s Kennedy School of Government and Policy, and a writer in residence at Google with Google’s PAIR, the People + AI Research lab. That means I work at the intersections of emerging technology, design and policy, and I think about it from an ethical standpoint and how ethics is translated into design.

And this emerging technology can be things like machine learning, AR or VR.

Generally, I like to describe my work as making complex technology human readable.

I look at how language is analysed and misused by machines.

And how humans understand and misunderstand each other in digital spaces.

And where is technology exactly involved in that equation? So how does technology augment, good or bad, human experiences? Which leads me to wondering, what is our future with technology? And I think it’s collaboration and not just optimisation. So, what can humans do well versus what can technology do well?

This is pretty much the basis of all of my work. Technology can optimise products, and most of us in this room would agree, but I wonder: where do whimsy and design and experimentation fit into product design, as well as usefulness? It’s an interesting intersection, isn’t it? Creating art for art’s sake, as well as designing systems to be sustainable and usable as well as utilitarian. So, when should we actually use AI? When does a robot make sense in an equation? Does something really need to be on blockchain? ’Cause it’s not a tech conference if you don’t mention blockchain.

(audience chuckles) Or did you just need better infrastructure in your product? What are useful applications of VR and AR beyond creating empathy? Because technology doesn’t create empathy, people do.

I don’t necessarily need technology to replace my tasks, but instead I wonder as a designer how technology can augment my experiences.

Artificial doesn’t mean better, or that it’s right in the human experience, so why does a lot of AI creation immediately go to the humanoid or the human voice? Instead, what if we think of machine learning and AI as collaborators?

The future I imagine isn’t one of siloed workforces of robot servants, but of a space exploring augmentation. How can we augment a human experience? How can I use what machines do really well, like processing large quantities of data, and how do I use that to actually amplify my own strengths and experiences as a person? Again, let’s think of machine learning as this augmented intelligence and not artificial intelligence.

But before we get there, we have to talk about data. It’s important, obviously, and it’s important because data’s the backbone of emerging and experimental technology.

But data’s also something incredibly personal. We are creating spaces of intimacy inside of technology. The internet isn’t just an internet of things, it’s an internet of intimacy.

Data is people.

Data are stories about people’s lives.

Every datapoint from a software application – from an amount of usage, from how people talk to each other on a social network platform, from a like, from a retweet – is made by humans. Meaning that data isn’t something cold or quantitative; it’s inherently human.

The number of likes on a post: human output. The spikes in usage within Nest: human outputs as well.

So experimental technology, especially things like IoT, is creating astounding amounts of data.

And we should be sceptical about where that data comes from and how it’s being used.

So what kind of data? How much data? And when do you need data from your users? Because your users are people, and that data is their stories.

And that data can be their personal trauma or their daily intimate lives.

How are we using it? Artificial intelligence and IoT need data to function, to run, to just work properly.

But when we use data, let’s make sure we aren’t just building systems of surveillance. We don’t need all of the data from a user, so we need to balance data collection with user privacy and agency.

How do we start asking for the minimal amount of data for a product to work, for a product to run? Because any kind of technical infrastructure, like artificial intelligence, is a product. So when I think about augmentation or technology inside of our daily lives, I immediately think of IoT, don’t you? Connected devices – kind of makes sense.

John Rogers, a senior fellow at the Mozilla Foundation, as well as the head of Mozilla’s Open IoT Studio, points out, as obvious as this sounds, that IoT is a part of the internet.

It’s not just hardware with the internet on top of it. But it’s a device, or rather a design, that’s creating so much data and so many siloed parts of the internet ecosystem that we need to think about what kind of data it’s creating.

More importantly, there aren’t data standards or even design standards for IoT.

What is considered an IoT dark pattern? Peter Bihr, another Mozilla senior fellow, is developing trustmarks for IoT, essentially trying to create standards and stamps for user protection, which is great. So outside of this, I started wondering: what are we using IoT for, and when do we really need it? So, what about IoT for good? And I’m not talking about IoT in the home, though I’m pretty sure there are some examples, but I’m thinking a bit bigger.

So in the U.K., there’s this group called Flood Network, which is a network of distributed sensors that focuses on reporting flooding throughout the entire country by keeping track of water levels.

It aggregates data and looks at trends, so that local communities can be warned in a timely fashion about potential flooding.

However, this only works because multiple people, cities and groups collaborate together on Flood Network. So think of it this way: one maintained sensor can’t really do much on its own, or even a handful of sensors.

But with an entire network of sensors across multiple communities across multiple locations, it becomes quite powerful.

But while it’s producing a lot of data, that data has a point: it’s used in a hyper-specific way, and only a certain kind of data is collected.
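The talk doesn’t describe Flood Network’s internals, so here is a minimal, hypothetical Python sketch of the aggregation idea behind a sensor network like it: many simple water-level sensors only become useful once their readings are pooled and compared against local baselines. The sensor names, baselines and threshold are all invented for illustration.

```python
# Hypothetical sketch: aggregate distributed water-level sensor readings
# and flag sensors rising well above their local baseline.
from statistics import mean

def flood_warnings(readings, baseline_cm, threshold_cm=30):
    """readings: {sensor_id: [water levels in cm over the last hour]}.
    baseline_cm: {sensor_id: normal water level in cm}.
    Returns sensor ids whose average level exceeds the local baseline
    by more than threshold_cm."""
    alerts = []
    for sensor_id, levels in readings.items():
        if not levels:
            continue  # a dead sensor on its own tells us nothing
        if mean(levels) - baseline_cm.get(sensor_id, 0) > threshold_cm:
            alerts.append(sensor_id)
    return alerts

readings = {
    "bridge-a": [12, 14, 13],   # steady, near baseline
    "weir-b": [55, 61, 64],     # rising well above baseline
}
baseline = {"bridge-a": 10, "weir-b": 15}
print(flood_warnings(readings, baseline))  # ['weir-b']
```

The point the talk makes carries through: one sensor in isolation is just noise, but a network of hyper-specific readings, aggregated with even very simple logic, becomes genuinely useful.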

Here’s another example, which maybe isn’t so good if you’re a vegetarian.

But it’s another one I wanted to point out: Farmable.

Farmable is a crowdfarming platform that connects small cattle farmers in Ghana with a bigger, global market.

So the company’s suite of IoT devices helps farmers track cattle, monitor their health, and then connect with potential buyers.

These buyers can invest in the cows and also follow their progress.

I can think of so many other applications for IoT if we start to think about it from the perspective of environmental protection, and also around endangered species.

Certain things that you would want monitored, that you would want updates on, that you would actually need real-time data about. There are so many applications for IoT, especially non-invasive ones.

Once we start imagining IoT outside the home, we can start imagining it in our environment. So let’s change topics.

What about robots? Why robots? I’m sure you’re thinking, why the fuck is she talking about robots? (audience chuckles) ‘Cause I think robots are really great and I’m your keynote speaker.

(audience chuckles) (Caroline chuckles) (audience claps) But I also see them as being extremely integral in our future, especially in automation.

As creepy as Boston Dynamics is, we are years away from robots actually threatening us. But robots are gonna start replacing many human jobs, especially in automation, but not all jobs. So what are useful applications of robots? Well, robots can be used in plenty of ways that humans can’t.

We can use robots in dangerous situations, like defusing bombs, or in factories working with really large and dangerous machines.

Or in other cases, we can use robots actually in situations that are adjacent to care and caregiving.

Now, this isn’t replacing humans at all.

I’m not saying robots should be used instead of human caregivers, but let me give you an example from robot ethics researcher Kate Darling of the MIT Media Lab.

In her TED talk, Darling mentions PARO, a care robot. It’s used in nursing homes with dementia patients, and it’s been around for a while.

Now, PARO isn’t replacing a human caregiver, but it is provided when a support animal cannot be used.

And I really like this example because it forces us to think about the situations in which something designed to function as a form of care can do something that seems quite minimal, but is actually quite robust.

How could it exist in a mechanised way? And what I like about this is also the design and the carefulness that went into creating something like PARO.

And I want to reference Kate’s talk a little bit more because her talk goes even further.

She actually covers a lot about the ways in which people empathise with robots, especially when they’re designed a certain way and take a certain shape, ’cause robots move.

And her talk isn’t about what robots can do for us, but how we can understand the ways in which we as humans relate to robots.

And how will this have greater implications? Because Kate realised over her decade of study into this subject that humans react to robots in a specific way with sometimes startling amounts of empathy.

So Darling wonders – her question for the coming era of human-robot interaction is not “do we empathise with robots?” but “can robots change people’s empathy?” I think this is a fascinating space when we start to think about experimental technology with something like robots.

Even if we design them to not look like humans (and we shouldn’t), how can robots change the way in which we interact with society around us? Are there certain lessons we can learn from them? I think this is gonna define the next 20 to 30 years of research, not just in HCI, but also in product design.

And this brings me back to AI.

And this is where I actually like to think about AI. Where is it possible to use systems like artificial intelligence for good, for public good and for civic good? Now, AI can be wildly problematic.

From predictive policing, to surveillance, to just creating predictions from biased datasets that are just plain wrong.

And those can have extremely negative and widespread effects on people’s real and everyday lives.

But I wonder: what can AI solve, and when can it be used when it’s not used for predictions – specifically, not predictions about humans? How can we use AI to solve problems about the byproducts of humans? So at PAIR, this is what I’m thinking about. I’m thinking about when dumb AI can be used. I mean, AI for just statistical analysis.

I mean, AI and machine learning just to look at large, large unruly datasets.

So as a writer in residence, I’ve thought about: what about AI in cities? And I realised I needed to speak to people that knew a lot more about AI and cities than I did.

So I reached out to Noel Hidalgo of BetaNYC, a think tank focused on New York City, local government and technology, and Georgia Bullen of Simply Secure, and then I interviewed a group of researchers with the European Commission who work specifically on AI and datasets.

And we started to talk about data.

’Cause what I started to think so much about, when it came to AI not predicting anything or making assumptions about humans, was the ethical side of humans inside of cities, and what would happen when it came to data and cities and AI.

Would people make predictions one to two years, or even 20 years, down the line? And again, talking to Noel brought me back to AI and cities, which brought me back to data.

Data is incredibly messy, especially when it comes to aggregating data about a city, even if it has nothing to do with people.

Why is data so messy? Well, for example, we have to think about how is it caught? Who is in control of it? How is it collected? How’s that data structured? I’m sure you can already see the problems that are starting to arise.

In a city, for example, a certain group or organisation even inside the local government, may save or structure their data differently.

They may ask different questions about the dataset. They may not store it in the same space at all. It may not even be an open dataset.

It may be partially filled, it may be old, it may be missing large chunks.

It may be a dataset that was collected that one time and never touched again.

It may be a dusty dataset.

So what we have is an example of not all data being the same, treated the same, or caught in the same way.

And this is a problem if you’re going to use artificial intelligence to start solving any kinds of problems related to a city, because all of these problems are extremely different and data is treated in very different ways. So, this still brought me back to a conundrum, because I’m studying AI in cities.

So then I started to think about the byproducts of humans inside cities.

Could AI analyse sewage, or trash, or rats? What about trees and parks? So when I spoke to Georgia Bullen, she actually brought up trees.

By 2020, New York City wants to plant one million trees across all five boroughs, and that’s a really big order – that’s a really, really big number.

That’s a large number of trees.

So I thought about where AI could fit inside of this. Well, that’s a really unruly dataset.

What is the way in which trees are planted? Where do some plantings succeed versus others? When were they planted? At what time? How much did they grow? What kinds of trees were used versus other trees? AI can’t really give concrete answers, but it can be used for deeper analysis, to prompt civil servants to really look at the problems or the successes of planting a million trees. Because a million trees is a very, very, very large dataset, and this is where I see AI being extremely useful. It’s not giving us answers, but it’s helping us analyse answers in a dataset that’s just so, so big.
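As a hedged illustration of what that “dumb AI” analysis could look like – plain statistics over a big, messy dataset, surfacing patterns for experts rather than making predictions about people – here is a small Python sketch. The record fields (species, borough, survived) are hypothetical, not drawn from any real city dataset.

```python
# Hypothetical sketch: group tree-planting records by a field and
# report survival rates, the kind of simple aggregate that helps a
# civil servant ask better questions about a million-tree dataset.
from collections import defaultdict

def survival_by_group(trees, key):
    """trees: list of records like {"species": ..., "survived": 0 or 1}.
    Returns {group value: fraction of planted trees that survived}."""
    planted = defaultdict(int)
    survived = defaultdict(int)
    for tree in trees:
        planted[tree[key]] += 1
        survived[tree[key]] += tree["survived"]
    return {k: survived[k] / planted[k] for k in planted}

trees = [
    {"species": "oak", "borough": "Queens", "survived": 1},
    {"species": "oak", "borough": "Bronx", "survived": 0},
    {"species": "pine", "borough": "Queens", "survived": 1},
]
print(survival_by_group(trees, "species"))  # {'oak': 0.5, 'pine': 1.0}
print(survival_by_group(trees, "borough"))  # {'Queens': 1.0, 'Bronx': 0.0}
```

Nothing here predicts anything; it only summarises. That is the distinction the talk draws: the analysis probes the dataset so that people who know trees can draw the conclusions.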

And this kind of analysis can be used actually by experts in the field to draw deeper conclusions.

So AI isn’t necessarily solving something, but it’s helping us.

And that’s one of the things I like about AI: thinking about it not as the solution, but as an extension of augmentation, or as an assistant.

So here’s another example with AI in art.

In 2016, the Tate in London commissioned me to write an essay about an AI project called Recognition. Now, Recognition was an AI system designed to go through the Tate’s archives as well as Reuters’ image archives and create image pairings.

It was creating image pairings of things that were supposed to look alike.

Now, a lot of these image pairings didn’t quite make sense at first and it felt very AI in that way.

Some of the images had vaguely similar shapes or colours if you squinted really hard.

But the system then showed me a pairing that I still think about to this day, which is the one you see up here.

What you see is a photograph and a painting that the system had paired together as similar. And I think we can see why: these structures look incredibly similar.

The width of the brushstrokes in the painting is really similar to the width of the guns. The way they’re lined up, they look incredibly similar.

But what’s actually in the images? So the photograph is an image from 2016 from Reuters. It’s an assortment of 5,250 illicit firearms and small weapons that were recovered at various security checkpoints and operations in Kenya. And they’re stacked up to be destroyed.

And the painting is called Canterbury Cathedral by the artist Dennis Creffield.

So this is something I think AI, machine learning and technology in general do extremely well.

AI and machine learning can process images faster than a human can. I never would have made this pairing, because this pairing almost doesn’t make sense. And I wouldn’t have made this pairing even with a background in fine art – which, surprise, I actually have: a bachelor’s in fine art and photography. But this is precisely where AI can be helpful, as a curatorial assistant or even a curatorial suggester. The Recognition project doesn’t know about the history of violence between these two images, the history of violence within Christianity and the church, or the violence around guns.

It doesn’t know about modern conflict.

It doesn’t even know maybe that a gun is a device for violence.

But as a human, I know that history and I can add in the poetry between the meaning of the two images.

And this system prompted an entire essay and a curatorial pairing that almost two years later, I still think about.
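Recognition’s actual pipeline isn’t described in this talk, but the general technique it gestures at – representing each image as a feature vector (in practice produced by a vision model) and pairing images purely by similarity of those vectors, with no notion of context or meaning – can be sketched in a few lines of Python. The vectors below are toy stand-ins for real image embeddings; the image names are invented for illustration.

```python
# Hypothetical sketch: pair images by cosine similarity of their
# feature vectors. The machine sees only formal resemblance; any
# poetry or history in the pairing is added by the human viewer.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_pair(vectors):
    """vectors: {image_id: feature vector}. Returns the most similar pair."""
    ids = list(vectors)
    return max(
        ((a, b) for i, a in enumerate(ids) for b in ids[i + 1:]),
        key=lambda pair: cosine(vectors[pair[0]], vectors[pair[1]]),
    )

# Toy vectors standing in for real image embeddings.
vectors = {
    "guns_photo": [0.9, 0.1, 0.4],
    "cathedral_painting": [0.8, 0.2, 0.5],
    "landscape": [0.1, 0.9, 0.0],
}
print(best_pair(vectors))  # ('guns_photo', 'cathedral_painting')
```

Under this kind of scheme, a photograph of stacked rifles and a painting of cathedral spires can land closest together simply because their shapes and strokes align, which is exactly the context-free similarity the talk describes.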

Which is why I’m interested in creating new and emerging experimental products from tech because I hope that the future is extremely weird, but also, extremely ethical.

And I think we do actually need to look back at the past to better inform the future, because we have to look at well-known use cases of failure and caution, as well as hopes and dreams, to better inform the right now.

But I want to build an extremely weird future. A future that feels new, that feels timely. Which brings me to futurist Monika Bielskyte’s work. Bielskyte was tapped at the very beginning of development for the Ghost in the Shell movie to help build its world. She is a world builder, which sounds like a really cool job.

And she wanted to imagine a future based off trends that were happening right now.

She didn’t want to remake a version of cyberpunk that already existed, one that was this dark and blue and wet world, and one that was full of white people.

She wanted to make a different world, a diverse world, a world that reflects what we have now.

She created a series of mood boards, none of which were used in the movie.

And in an interview with Gizmodo, Bielskyte says that her main critique of Hollywood is that you can’t dip in and out of sci-fi. You could before, because the future seemed so simple, but things are more and more complicated now, because the future’s so complicated.

Too much of Hollywood sci-fi is stuck with retro ideas about the future.

The world of actual research and science and technology and space is ripe with inspiration. So I agree with Bielskyte.

I really want us to stop imagining old dystopias or cold mechanical spaces.

Let’s dream about a new future and critique the present that we’re in.

What are we fighting for, and what are we building for? So, it’s been hard as an American to talk about hopefulness, especially right now, but I think it’s important. How do you talk about a hopeful future? Because what are we building together? It’s important to critique where we are, but there’s something we should build, and not just fight. What are the things we want, and what are the things we want with technology? I want a society of equity and equality.

I want a society where people can walk out at work, actually have their voices heard, and then see real systemic change inside the company that they’re in, especially when it comes to online harassment and discrimination.

What I want is a society of equality, and I want technology to also fit in that space. So instead of imagining colder, or older, or bluer dystopian futures, I want to imagine a warm future. A future where technology doesn’t amplify problems but, collaboratively with humans, solves them. And I recognise the irony of ending on this GIF, ’cause I just talked about futures, but it’s one of my favourites.

So thank you.

(audience applauds) (dance music)
