- One of the areas that really, really interests me obviously is the Web.
I mean, it may be obvious, but I think the Web continues to be this profound platform whose importance we massively underestimate. We just take it for granted, and I think we shouldn't, really.
And an area within that which I find really extraordinary right now is the whole idea of Web virtual reality, or WebVR. There are some very specific APIs in the browser that are increasingly well-supported, as well as an underlying idea of the Web as a platform for doing virtual reality, augmented reality, and mixed reality. And we have the great good fortune to have someone who probably knows more about this area than just about anyone on Earth.
Most of us have probably met or heard Mark Pesce speak. We've seen him on television in The New Inventors. I had the very good fortune to kind of cross paths with Mark over a decade ago now.
He doesn't remember it, but I very much do: he was emceeing a conference that I was attending, and I got caught up in doing a kind of crowd activity that ended up being presented a lot on stage. He doesn't remember that, but I remember it very clearly. At the time, he was talking about Friendster and Orkut. And I got it.
The scales fell from my eyes, and I'd been kind of sceptical about all that social media stuff.
He sold me a vision that has come to pass.
And while it wasn't Orkut, and it wasn't Friendster, the vision that he saw has come to pass in a way that I think at the time no one imagined. And Mark has that capacity.
A lot of people call themselves futurists.
(chuckles) Most of them are completely not worth anyone's time listening to.
Mark is genuinely someone who has seen the future from the very earliest days of the Web and before. He was one of the inventors of the Virtual Reality Markup Language, VRML, which was in IE, I think.
Is it still in some version of IE? Can you still... right, so in Internet Explorer to this day, there's a declarative markup language for doing virtual reality.
He'll probably talk at least a little bit about some of the things that he's done, but the work that he's done and the patents that he holds in this space are extraordinary.
So there really is no one better to outline for us the capability of virtual reality and the Web today and where it is headed.
I'm really looking forward to this.
Would you please join me in very warmly welcoming Mark Pesce? (audience applauds) - Thank you.
Don't go anywhere.
(mumbles) Don't move, don't move.
My God, I have to live up to that now.
(audience laughs) First off, one of my great good fortunes has been that I can call John Allsopp a friend, because he's an amazing human being.
And as friends do, you have idle chatter during the week. And last week, there was idle chatter between John and me. And I said, "John, I just want to let you know that if Trump wins the election, I will be on a bender that will be continuing when I am giving this talk." (audience laughs) It's a lovely Japanese blended whiskey.
(audience laughs) If any of you would like some, I'll be giving it out afterward.
Thank you, John.
(audience applauds) Now there was another time.
I wasn't on a bender, but I was getting to know someone I have now known continuously for 23 years. He had just moved to San Francisco with his wife. He was full of ideas, and we were sitting and drinking deeply.
This is in December, 23 years ago.
And he said, "So, Mark, what do you do?" And I said, "Well, I just tried "to create a consumer virtual reality company, "but that didn't work, because Sega cancelled the product. "What I want to do is I want "to create a virtual reality interface "to the World Wide Web." So that's Tony Parisi, over there on the left. And that's me, over there on the right.
And although this picture was taken this year, I am still the crazy hippie, anarchist, queer. And he's still the punk, anarchist, shit-stirrer. (audience laughs) And we put our heads together.
And it turns out we had exactly the right complement of skills, because I understood 3-D, and he understood how to write a parser.
And we put them together over January, over a lot of coffee at the local cafe, designed a very simple language.
I used libwww, which was an open source library to integrate applications into the Web.
And at the beginning of February, we had produced the cyber banana.
(audience laughs) This is the very first VRML object.
The reason is that I still, 25 years later, cannot use a 3-D modelling software package. And so I had to use a pre-built model.
I happened to have a banana handy, so there it is. Now, the URL over there in the lower right-hand corner: bring that up on your phone or on your laptop, because I'm gonna be a little bit old school. We're actually gonna get stuck into code in a little while, 'cause I want to show you all how easy it is to work in VR now.
So bring that up.
We'll be referring to it a little bit later on. Now VRML generated a lot of interest, but the entire world was so busy trying to digest the most important contribution to human knowledge sharing since Gutenberg, the Web, that although people were intrigued by VRML, no one really took it up.
And you have to remember when we did our work in 1994, Toy Story hadn't been released yet.
So people hadn't even seen pre-rendered 3-D graphics. So we were ahead of the curve.
By around 1997, VRML just sort of becomes one of those woulda-shoulda-coulda technologies, as did all of VR generally.
And then an amazing thing happened.
And the funny thing about this amazing thing is that this amazing thing was totally predictable. We hear a lot about Moore's Law, but we don't actually think about what it means. What I want to do is I want to show you what Moore's Law actually looks like. So the 20 years, from '96 to today, that's what it looks like.
And that's what it looks like.
Everything that was hard, everything that was expensive, everything that was weird and unfamiliar in 1996 is now common, cheap as chips, and everywhere.
And so we live in a period that I'm now calling the VRenaissance.
What I want to do is I want to take a look at the elements of that VRenaissance, because it tells you very much about the world we're in now and the world we're about to enter.
Now the modern history of virtual reality, when it is written, will point to one moment when it changed, and it's not the moment when Palmer Luckey posted the Oculus to Kickstarter. It's when two engineers from Google decided to slap a piece of cardboard and some lenses in front of a phone and said, "Oh my God, there are a billion VR-class systems in the world already." This is when it changed.
And this, we'll come back to.
So the next system to come along was Samsung's. They did the Gear VR, which is basically just a tricked-out Cardboard again: lenses, but with some very fancy sensors in it, because your smartphone wasn't designed with virtual reality in mind.
And so its inertial sensors aren't really fast. They don't really give you a smooth experience. The first time I tried a Gear VR was the first time in all of my years in VR when I went, "Oh, this is as smooth as an old shoe." And then, out of nowhere, completely unexpectedly, the most boring company in technology produced the most exciting piece of virtual reality there is, the Microsoft HoloLens, which, although it has a very narrow field of view, maps the space you're in, because it basically has Kinects built into it. So it knows where you are.
It knows where things are.
You can take a virtual object in the real world, put it over there, go do something, turn back. It's still over there.
This is mixed reality.
This is the place that we're eventually going. The HoloLens is the first instance of an entirely new class of interfaces that all of you will need to think about designing for.
And of course, there's been a plethora.
This year, 2016, this is the VR year.
So we got Oculus shipping at least the first part of its hardware in April.
You have the HTC Vive.
I have one of these.
I absolutely love it.
That shipped in May.
Both of those have shipped in relatively small numbers. But the Gear VR has shipped around, well, maybe two million units.
There's a problem, because the Note 7 was designed to be an ideal Gear VR device.
(audience laughs) Oops. There are at least 10 million Cardboards out there, and we're seeing more and more crazy things. You see people with lenses that flip up out of a case in front of their smartphone so that they can use them.
But then the thing that I tried to do 25 years ago, Sony did last month with the PlayStation VR. Now I bought a PlayStation VR.
I got it on opening day, but, because I don't own a PlayStation, I have to go and pick up my PlayStation 4 Pro tomorrow, and then I'll have a PlayStation VR system.
Probably half a million of these shipped. Here's an interesting thing, another when-it-changed moment: the number of high-end virtual reality systems in the world went up by about an order of magnitude in one day in October.
And then we come to yesterday.
Yesterday, Google released Daydream VR.
It's the Daydream VR system that's geared first around its Pixel phones, but also around all next-generation, high-end Android devices. It's projected that there will be roughly 300 million Daydream-class devices by this time next year, essentially all the high-end Android phones. See, we've gone from there being no VR to there being VR absolutely everywhere. And so we come to WebVR, particularly because Google is deeply involved here, and because it is in Google's economic interest to see that the Web is everywhere.
So we have a very strong push for building a set of browser-level APIs, and here's just a couple of them. This is a bit of a code fragment from the Mozilla site on how to move seamlessly from one virtual world to another inside JavaScript.
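Here's a hedged sketch of what that browser-level API looks like in WebVR 1.1; this is my reconstruction, not the exact Mozilla fragment, and the canvas and "Enter VR" button are assumed to exist on the page:

```js
// A sketch of the WebVR 1.1 API, assuming a WebGL canvas on the page.
var canvas = document.querySelector('canvas');

navigator.getVRDisplays().then(function (displays) {
  if (!displays.length) { return; }
  var vrDisplay = displays[0];
  var frameData = new VRFrameData();

  // Presenting generally has to start from a user gesture.
  document.querySelector('#enter-vr').addEventListener('click', function () {
    vrDisplay.requestPresent([{ source: canvas }]).then(function () {
      vrDisplay.requestAnimationFrame(onVRFrame);
    });
  });

  function onVRFrame() {
    vrDisplay.getFrameData(frameData); // pose plus view/projection matrices
    // ... draw one eye per half of the canvas with WebGL here ...
    vrDisplay.submitFrame();
    vrDisplay.requestAnimationFrame(onVRFrame);
  }
});
```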
There's a set of browser APIs that are now being standardised and that will be available in pretty much every browser by this time next year. To give you a sense: the W3C is supporting this process, and there was a meeting in California last month of everyone who's a core stakeholder in it.
Today, you can go and download a special build of Chromium for Win64, which will work with your Vive or with your Oculus. How many of you use Firefox Nightly, to get the latest shiny? It already has full WebVR support, across platforms. You can download that and start working in WebVR today. In January, Google will release Chrome VR for Android, a special build of Chrome designed to work with the Daydream. This is very important.
They're working very hard on this.
And listening to them talk about what they're doing: they're going to start building browser-level features that will make it easy for us to build 3-D affordances in the browser. It'll end up in mainline Firefox right around the same time as it transitions from Nightly.
And the latest invitee to the party is the most boring company in technology, who came to the party last month and said, "Well, this is all very nice, but you haven't put anything in it to support mixed reality." And so the WebVR spec went from 1.0 to 1.1 as it adjusted to handle mixed-reality environments. I expect that Edge will ship with full WebVR capability at the time the Creators Update does, because Microsoft has announced an entire line of VR head-mounts that are being made by OEMs. And probably around the time Windows Holographic ships, which is next August, you'll find full mixed reality support in mainline Windows. Now, how does this all work technically?
This is where we're gonna start to get stuck in. At the bottom level, it doesn't matter whether you have a WebVR-capable browser or not, because they've cleverly created the webvr-polyfill. You can drop that in your code, and your browser will think it's WebVR-capable, whether or not it really is.
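As a sketch, dropping the polyfill in looks something like this; the CDN path is illustrative, and the initialisation details have varied between releases (the 2016-era builds installed themselves automatically on load):

```html
<!-- Load the polyfill before any code that touches the VR APIs. -->
<script src="https://cdn.jsdelivr.net/npm/webvr-polyfill/build/webvr-polyfill.min.js"></script>
<script>
  // Newer releases want explicit instantiation.
  var polyfill = new WebVRPolyfill();
  // From here on, navigator.getVRDisplays() resolves even in
  // browsers with no native WebVR support.
</script>
```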
That sits alongside WebGL, the accelerated JavaScript API for graphics on the Web. Generally, on top of this... how many of you have even heard of Three.JS? So Three.JS is pretty much the go-to building-block library for three-dimensional applications that run inside the browser.
It is fantastic.
Mr.doob, who runs this project, has been very good about making sure that it's kept up-to-date with all of the WebVR builds.
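To give a flavour, here's a minimal Three.JS sketch of my own, not from the talk; it also previews the objects-lights-camera model we'll get to in a moment:

```js
// An object is a mesh: geometry (the skeleton) plus material (the skin),
// lit by a light and viewed through a camera.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 3;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

var ball = new THREE.Mesh(
  new THREE.SphereGeometry(0.5, 32, 32),
  new THREE.MeshStandardMaterial({ color: 0xffffff })
);
scene.add(ball);

var light = new THREE.PointLight(0xffffff);
light.position.set(2, 4, 3);
scene.add(light);

renderer.render(scene, camera);
```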
And then on top of that, we're now seeing the emergence of frameworks. There are three frameworks: Glam, which was written by Tony Parisi a year ago; A-Frame, which I'll be talking a lot more about in just a moment; and React VR, which was announced by Facebook at their Oculus Connect conference last month, but which hasn't actually been released.
But now, not only is there an API, we've actually got framework competition.
That's a good sign.
And then on top of that, you build the applications that you want to see. Now we're gonna get stuck in, and I'm gonna teach you all how to build some virtual reality for the Web.
Before I do that, I feel it's probably a good idea for those of you who haven't done this kind of work before to give you the three-slide crash course into virtual reality.
All right, virtual reality is built out of scenes. You can think of scenes as being the logical equivalent of Web pages. Every scene has three elements in it.
It has the objects that you see.
It has lights that illuminate those objects, because if you're in a world, and there are no lights, you cannot see anything.
And then finally, there's a camera that's looking at the objects that are being illuminated by the lights. Now these three elements are always present, whether or not you explicitly specify them. And in most of these frameworks, you can sort of hand-wave around some of this stuff. It won't necessarily look great, but it will still work. Now, objects, since these are the things that we're actually going to be looking at and playing with: objects are made out of two things themselves. Oh, wait a second.
I don't even know the order of my own slides. Every one of these things, whether it's a camera, a light, or an object, has a position, where it is in space. So this is x, y, z, the coordinate space.
So it's a Cartesian coordinate space.
Every one of these objects doesn't just have a position. It also has an orientation.
So orientation is expressed in terms of pitch and yaw and roll.
Pitch is what you get when you're looking at a front-loading washing machine when it's doing a spin.
Yaw is what you get when you're sitting in a nice swivel office chair and going, "Wee!" And roll is what the tyres do on your car.
So some combination of those three orientations will give you how an object is oriented inside the virtual world, and the position gives you where that object is.
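In A-Frame terms, which we'll meet properly in a moment, that position-plus-orientation pair looks something like this (the values here are arbitrary):

```html
<!-- position is metres along x, y, z; rotation is degrees of
     pitch (x), yaw (y) and roll (z). -->
<a-entity position="1 1.5 0" rotation="0 90 0"></a-entity>
```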
Now objects themselves, the visible things, they're made out of geometry and materials. You can think of geometry as the equivalent of a skeleton. It is the actual shape of an object.
You can think of the material as the skin on that object. And materials can be quite sophisticated.
They can be reflective.
They can be images, which we call texture maps. They can have transparency associated with them. They can give you a very, very precise effect. And you put those two together, and what you get are the visible objects that you see. All right, so these are the basics for what you need to understand to be able to create virtual reality. There's a lot more there, but this is the essential. Okay, so now what we're gonna do is we're gonna get stuck into the A-Frame framework.
Now A-Frame is really amazing, because Mozilla is leading a project that's designed to make authoring for WebVR as easy as authoring HTML. A-Frame defines a set of tags that look a lot like normal HTML5 tags, and they're also extensible.
So if there's a tag that doesn't exist, you can open up the code, because it's all open source, and you can add it.
And those tags are fully accessible in the DOM. So it's not running in a black box that you can't do anything with.
You have all of the capabilities that you would have normally on the Web and in your JavaScript libraries, running inside of A-Frame. And the nicest thing of all is that when you're playing with it, there's a built-in inspector.
Whenever you're doing A-Frame, you just hit Ctrl + Alt + I, and you have a beautiful inspector that allows you to take a look at things, change things, save it back out, just comes for free.
And that will run on pretty much everything, because A-Frame includes webvr-polyfill, so if it lands on a computer that doesn't have WebVR capabilities, it doesn't matter.
And it has Three.JS built in, so it runs pretty much everywhere. All right, example number one. If you went to the page that I showed you earlier, and the URL will be at the bottom: when we started yesterday, I took my Theta, which is an immersive camera that takes 360-degree photos,
and I snapped a photo of John and me and a bunch of people as the room was filling up.
And here you can see in A-Frame, in four lines of code, how you can load that photo and display it on pretty much any device you like.
That will work on a Cardboard.
That will work on a Vive.
Doesn't work on a HoloLens yet, but it'll get there, all right.
And so what you do is you place all of your elements between a-scene tags, and then inside of that, there's only one tag, which is an a-sky tag, which basically defines how the sky looks all around you, and you give it a source that points to the immersive equirectangular photo.
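Here's a hedged reconstruction of those four lines; the A-Frame release number and the photo's filename are my guesses:

```html
<script src="https://aframe.io/releases/0.3.2/aframe.min.js"></script>
<a-scene>
  <a-sky src="wd16-360.jpg"></a-sky>
</a-scene>
```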
Now, you can create equirectangular photos yourself. You don't need any fancy hardware, because the Google... what is it called? The Google Street View app will actually do this for you as well. And so it's really easy, in four lines of code, to take any 360-degree photo and make that work. Okay, so that's just to show you that it's really easy to do this.
Now we're actually going to get stuck in, and what we're going to do is we're gonna create that soccer ball that we were using before, inside of A-Frame.
So here's the whole code.
And I'm gonna go through this section by section. That's all it takes to create the soccer ball. The first thing we're gonna use here is A-Frame's assets tag, a-assets.
So all of the images or sounds or movies that you're loading, you can put them over there. It manages all of the loading for you.
You don't have to worry about it.
And then you give it an ID, and you can refer to it later by its ID, as you would normally in HTML.
So we've gotten the texture map for the soccer ball, which you saw on an earlier slide.
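Sketched out, with an assumed filename, the assets block looks like this:

```html
<a-assets>
  <!-- Preloaded and managed for us; referenced below by its id. -->
  <img id="soccer" src="soccer.png">
</a-assets>
```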
Next, we're gonna actually create the soccer ball, and so we use another built-in tag, a-sphere. And we say we want that soccer ball to be white. We're going to give it a source.
In other words, we're going to give it an image that's going to get wrapped around it, and we give it the soccer image that we pre-loaded in the asset tag.
We're going to give it a radius.
What's the radius of the sphere? Now, here's the thing about WebVR that will drive the Americans crazy.
Units in WebVR are metres.
(audience laughs) (audience applauds) So I've specified a metre-diameter soccer ball, which is a little big, but, you know, it's fine. And then I've positioned it: one unit, so one metre, out on the x-axis, one-and-a-half metres out on the y-axis, and zero metres out on the z-axis.
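As a sketch, reading the radius as half the one-metre diameter:

```html
<a-sphere color="white" src="#soccer" radius="0.5"
          position="1 1.5 0"></a-sphere>
```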
And so what I've done is I've created the ball. I've given it a surface, and I've positioned it. And then, of course, I need to light it, because remember we need objects, lights and cameras. And so I'm adding a light to this scene, and I'm using a light tag, a-light.
I say it's going to be a point light.
Now you could also have a directional light. You could have an ambient light, but in this case a point light, which is basically like a bare light bulb.
That's fine.
I want the colour of the light to be white. I could use a hex code if I want it there.
I could specify some other colour.
And I give it a position.
And I'm positioning the light source above where I know the ball's going to be so that the light is shining down on it.
But I've also positioned it back a little bit so that it's shining and reflecting toward us. And then finally, I add a camera to the scene. Now, I've positioned the camera so that I've given it basically a 270-degree turn, so I've turned three-quarters of the way around, because that means that when it loads up in a desktop browser, you are looking at the ball. It's entirely possible in the 3-D world to create a scene and have the camera pointing the wrong way by default. Now here's the thing.
If you use this on your mobile, A-Frame is going to ignore that.
Why is it going to ignore that? Because your mobile has all sorts of lovely inertial sensors which tell it which way it's pointed, and so the camera gets ignored.
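Assembled, the whole scene might look like this; the light's exact position and the camera syntax are reconstructions, not the slide verbatim:

```html
<a-scene>
  <a-assets>
    <img id="soccer" src="soccer.png">
  </a-assets>
  <a-sphere color="white" src="#soccer" radius="0.5"
            position="1 1.5 0"></a-sphere>
  <!-- A bare-bulb point light, above the ball and pulled back toward us. -->
  <a-light type="point" color="white" position="1 4 3"></a-light>
  <!-- Yawed 270 degrees so a desktop browser starts out facing the ball.
       On mobile, the inertial sensors override this rotation. -->
  <a-camera rotation="0 270 0"></a-camera>
</a-scene>
```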
But you put all of these together, and you get a soccer ball just sitting there. Now what we're gonna do is start to play with it as a DOM element.
And so I immediately pop jQuery in, and I add some initialization code so that when the document is loaded, it will set up the correct binding.
Notice that I am binding directly to the tag, just as I would if this were any other regular HTML element. When someone clicks on the scene, the clicky function, 'cause I know how to name functions, the clicky function will get executed.
And there's the clicky function.
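A hedged reconstruction of the binding and handler; the tennis-ball asset id and the two radii are my guesses from what follows:

```html
<script>
  $(document).ready(function () {
    // Bind directly to the A-Frame tag, as with any other element.
    $('a-scene').on('click', clicky);
  });

  var soccer = true;
  function clicky() {
    var ball = document.querySelector('a-sphere');
    // Swap texture and size by rewriting ordinary attributes.
    ball.setAttribute('src', soccer ? '#tennis' : '#soccer');
    ball.setAttribute('radius', soccer ? '0.25' : '0.5');
    soccer = !soccer;
  }
</script>
```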
There's nothing extraordinary about that, but take a look. I am changing the attributes inside of the sphere tag, just as you would change the attributes inside of any other HTML tag, inside of any other document. That means you folks already know how to do this. And finally, what we're gonna do, and this is example three, is put a little spin on things. We're gonna add some animation.
And so what we do is we define the sphere a little bit differently, and we wrap it in an entity tag. An entity tag is a bit like a div tag.
And we put some information inside of there. So we use the entity tag to position the sphere. So we're no longer positioning inside of a-sphere tag. We're putting it in the entity tag.
And then, we add an animation tag.
And the attributes of the animation tag tell you that the ball is going to spin every 50 seconds. It's going to spin forwards.
It's going to spin linearly.
That is, at a constant speed.
It's going to go in 360 degrees of yaw.
That means it's going to spin around, and it's going to loop indefinitely.
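Sketched in A-Frame 0.x syntax, which maps those attributes one-for-one:

```html
<a-entity position="1 1.5 0">
  <a-sphere color="white" src="#soccer" radius="0.5">
    <!-- One full 360-degree yaw every 50 seconds, at a constant
         (linear) speed, looping indefinitely. -->
    <a-animation attribute="rotation" to="0 360 0" dur="50000"
                 fill="forwards" easing="linear"
                 repeat="indefinite"></a-animation>
  </a-sphere>
</a-entity>
```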
And remember, because this is within a tag, you can write any bit of code you want to change as much of this as you want.
So you put this together with the other bit of JavaScript.
When you click on the scene, what happens is the texture map goes from a soccer ball to a tennis ball.
And a tennis ball is smaller than a soccer ball, so we change the size of the radius, so you get a smaller ball.
You click again.
You get the soccer ball back, and you get a different radius.
So these are things you already know how to do. And this is the key point with A-Frame: it leverages the amazing amount of knowledge we already have about how to manipulate DOM objects, and brings VR into the heart of that.
Now, I know I just gave you a very simple example, but A-Frame is very rich, and the engineers at Mozilla, a month-and-a-half ago, introduced a project called A-Painter.
Now for those of you who haven't seen it, the canonical, modern VR app is something called Tilt Brush. It's made by Google.
It runs in immersive VR systems.
You take the controllers, and you paint in 3-D. And it's beautiful and amazing.
Almost everyone immediately falls in love with that app, but it's an app.
You have to double-click on it.
It's an executable.
Well, the folks over at Mozilla, they rewrote the entire thing inside of A-Frame. So it runs inside the browser.
And they've released all of the source code, and they wrote an extensible library so that you can write your own brushes.
So it's not just an idea.
This is now a platform for creativity, because what is the Web? The Web is the greatest platform for creativity we've come up with, and we can lean into that if we design WebVR well. Now there are a couple of gotchas.
None of these are going to be weird.
Cross-Origin Resource Sharing: Homey ain't gonna play that.
You really have to keep everything well-managed in that respect.
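Concretely, a texture pulled from another origin needs both CORS headers on the server and a crossorigin attribute on the asset; the CDN URL here is illustrative:

```html
<a-assets>
  <!-- Without this, the cross-origin texture load will fail. -->
  <img id="soccer" crossorigin="anonymous"
       src="https://cdn.example.com/textures/soccer.png">
</a-assets>
```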
Although the spec does not require that you use TLS, the browser vendors do, in general.
And unless you're doing dev work, you will have to make sure that your worlds are accessible over secure connections, because you're handing someone your eyes and ears when you're in VR, and that is a relationship that requires a fair bit of trust.
Everything is moving really quickly now.
So make sure that you're always using the latest version of a framework and the latest version of a browser, because that will keep you up-to-date in all of the tools. And if you want to learn more, just go visit webvr.info. That's the first landing page for all of the resources on WebVR.
Okay.
We are now 25-ish years into the Web, and we're starting to get some understanding of what we're doing.
And we know that what we've done is we've built a machine for sharing knowledge.
One of the problematic aspects of that is that the Web doesn't do anything about saying whether that knowledge is actually true, but we have built a machine that shares knowledge at global scale.
What are we doing with VR? We're building a machine for the sharing of experience at a global scale.
At one level, it's as simple as finding out what it's like to walk in someone else's shoes. But it's going to be much more textured than that. It's going to be much more rich than that, because it's going to open us to kinds of experience that we don't innately possess.
It's going to give us, in a sense, new senses. An early example of this that got me very excited, both because it was done in VRML and because it was pointing the way to the future, was a project called the Virtual New York Stock Exchange, which created an enriched, highly-visualised environment for stock traders. Now at the time, the Virtual New York Stock Exchange required a quarter-million-dollar Silicon Graphics supercomputer to run on, which is probably significantly less powerful than the smartphone in your pocket.
But one of the things that they learned very quickly during this project was that if you do it right, you can increase the human capacity to absorb and make decisions on information by as much as 5,000 times. Think about when you're on a web page, and you come across a really well-made infographic, and there's an internal (sighs), because the cognitive load of having to process information has been relieved.
Someone else has done the hard work of taking that information and representing it sensually, in a way that is easy for you to digest.
That's where we can go.
The price we pay for that is that it actually does fall on someone: designers.
It falls on them to actually do the work, which might be 10 or 100 times more work, to get the win for everyone else, so that something is 10 or 100 or 1,000 times easier to digest. Because here's the thing.
The Web has been a success, and because the Web has been a success, we are now all deluged with information.
And for us to be able to make decisions about that information, we need to find ways to make that data make more sense.
So I'm calling for the next revolution in computing to be the Sensual Revolution.
Sensual computing is all based around improving our ability to make decisions.
It starts with decisions, not with the data. What do you want to be able to decide? Then, what data can help you make those decisions? Then, how do you sensualize that data? And I use the word visualisation here, but of course we can see data.
We should be able to hear data.
We should be able to touch data.
We might even be able to smell data someday. We don't know, but we know that we can get touch and hearing and vision out of the modern day systems.
And then we have to think about the interactions. How do the way that we touch and listen and see data affect the way that we interact with it? How do those interactions affect what we're seeing and feeling and hearing? And then altogether, how do those interactions allow us to be able to make better decisions? None of this is static.
All of this is dynamic.
All of it is feeding into everything else.
All of these are the agile test points as you're developing your methodology, because, ladies and gentlemen, we do not yet know how to do this.
A case in point is this.
This is a visualisation of what the workstation for mixed reality will look like.
And when I saw this, two months ago, I went, "Oh my God, this is fantastic!" And the longer I look at this, the more I am reminded of a line from William Goldman: "Nobody knows anything." Because what this is telling us is the first rule of Marshall McLuhan.
The content of a new medium is the medium it is obsolescing. And if you look at that, what you're seeing is some really bad 2-D stuff projected into a 3-D world. What it's telling us is that we know this world is coming, and we don't have the faintest clue how to enter it. That's okay.
I want you to think about web pages in 1994. (audience laughs) It's okay.
But my challenge to all of you is that your job for the next 20 years is to think about design, test, improve and share the affordances for mixed reality systems, because they're going to become a basic part of the way we make sense of the world.
So think about how we connect.
There's been a whole bunch of talking about the different ways we connect, the different ways we share, the different ways we learn from each other. We actually understand a little bit of it.
It's time to take some of that and to apply that into mixed reality environments. Of course, things are going to be different, but we don't just throw away the rule book. We take that rule book and start applying it, see where it works, see where it doesn't.
The sensual computing revolution is your revolution. You are the folks who are gonna make that happen. The interesting thing about VR for me is that, in some ways, I thought it was a closed chapter of my life, 'cause I've had a career as an educator and as a broadcaster and as a consultant in FinTech. I've had three separate careers in all of the years since I thought VR had died.
VR has not died.
It was asleep.
And now it's back.
And now, I've become aware that the first vision that I had for virtual reality is now possible. And the moment I became aware of this was in July when this happened.
(audience laughs) Admit it, you were all addicted for a little while. Maybe some of you still are.
The thing that became apparent to me very quickly was that the design of the game colonised all of the real world, whether or not the real world wanted to be colonised, which means you had people playing Pokémon Go at Auschwitz. Not a good look. Why? Because the game effectively has no way to interrogate the real world, to engage the real world in a conversation about what is right and appropriate and permissible at a particular place and a particular time.
Well, it turns out I solved that problem 25 years ago, but until this happened, no one understood my use case.
Now I don't have to explain it anymore.
And so I took all of that work, which I had never really left alone, even all the years that I was doing VR.
And I dusted it off a little bit.
And I brought it up to date.
And last month, I submitted to the W3C something called Mixed Reality Service. The analogy here is DNS.
DNS takes names and binds them to IP addresses. Mixed Reality Service takes coordinates, real-world coordinates or virtual coordinates, it's not picky, and binds those to URIs. And if you think about that, what would you do with that? Why would you want that? Well, here are a couple of use cases that fall out really quickly.
Does a drone have overflight rights on a particular space? Does someone who's playing an AR game have the right to play that game there? Or as the game creator, do you have the right to situate the game there? If I am a first responder popping up at a building that I've never been to before, can I just ping the Net and find out what the hazards are on site? Or if I walk up to a building, and it's closed, can I find out when it's going to open again, and who the tenants are? We have no metadata binding between the real world and the virtual world, which 25 years into the Web, is a very weird oversight.
And it's an oversight that we need to fix.
And we need to fix it first, because it's going to make our smartphones a lot more usable, because this isn't just about people who are wearing goggles in public.
This is about people who are just taking their smartphone out and trying to find out what's going on in the world around them, and that looks like this. So by the way, this wasn't just a proposal. There is running code.
If you go and hit that URL, you will find out that this space has been mapped.
You can issue a very simple request in JavaScript inside of your web browser, get the responses, and use those responses to change your behaviour, because you get interrogatable metadata back.
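A hedged sketch of what such a request might look like; the endpoint shape and response fields here are illustrative, not the submitted spec:

```js
// Ask a hypothetical MRS endpoint what is bound to these coordinates.
fetch('https://mrs.example.com/search?lat=-33.889&lon=151.187')
  .then(function (response) { return response.json(); })
  .then(function (bindings) {
    bindings.forEach(function (b) {
      // Each binding ties a real-world extent to a URI of metadata
      // your code can interrogate and act on.
      console.log(b.uri, b.geometry);
    });
  });
```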
The point here is to be able to add the layer of metadata to the real world that the real world desperately needs, because in the world we have just been thrust into, no one gives a toss for the real world anymore.
And it's about time that we started to provide the tools that will let the world speak for itself.
Now that's always gonna be argued and provisional, but you can imagine, in a year's time, when you hit that URL here, you'll find out that the University of Sydney and the Seymour Centre and the City of Sydney and the Eora Nation all have something to say about how this space is used and what's going on here and why.
We need this.
We need to think in these terms.
We need to activate the real world the same way we've activated the virtual world.
And that's what I'm going to be doing.
Thank you.
(audience applauds) - Thanks so much, Mark.
Let's take a seat.
I might have a question or two while we set up Jennifer. I've talked with you in the past about the very early days of VR.
You know, there was this period of enthusiasm and excitement about VR. We've seen that a couple of times.
Do you feel like we've really made it? Now the feedback I get from people who are a bit sceptical sometimes is like we've seen this hype before.
What do you think it is that will make this different? - It's happening everywhere.
It's not just that. So the first age of VR was essentially pushed by Silicon Graphics, which was a large company at the time but, on the scale of an Apple or a Google or a Microsoft today, is a minnow.
And so I think some of it was that, but some of it's just equality of access.
If there are literally 300 million Daydream-class smartphones by the end of next year, then automatically VR is accessible, and the audience you can produce things for is very broad. So I think that that's part of the answer.
I think the other part of the answer comes back to what I was talking about: we are in a data-saturated culture right now.
And the way through, and it's not the only way, but one of the ways through, is sensual computing: being able to make better decisions, because we're actually thinking about this data. Now, let me go off on a bit of a tangent about this, because almost all of the big companies around the world have Tableau licences. And I was talking to someone in one of these big companies, and they said, "If I see another bar chart that was done in Tableau that I could've done myself, I'm gonna start screaming."
So even 2-D visualisation, which would be enormously helpful, is underutilised right now.
I'm looking at you, folks.
Some of that's because data visualisation is its own beast. But some of it's because the easier path is to simply slap text down, or something really simple, because we haven't been able to make the 10x or 100x investment, because we haven't seen the clear pull-through for it. But I think what we need to start doing is asking: okay, inside of an organisation, who is the most informationally overloaded person? How do you start taking some of that burden away from them by using sensual computing? How do you move the needle on their ability to make a decision? And so it becomes, in that sense, a virtuous cycle of improvement.
So it's not just a gimmick.
And I think that's what we understand now, 20 years later, that we didn't, 20 years ago.
- Yeah, I think it's often been associated with entertainment.
- Yeah.
- And I guess the other thing is that they've been isolated, isolating experiences. And I know you and I are great fans of the URL. Perhaps one of the most underestimated technologies ever invented.
And I think that's what we're starting to see: MRS is obviously speaking to that, the capacity to deep-link into space and time, and therefore to make connections.
And I think Caroline made the observation about that particular slide you put up.
That it's really about connecting people in this space. - One of the nice things about A-Painter as an app, and the Mozilla folks know exactly what they're doing, is that it's possible to create something and then hand someone a deep link to it, right, because it's all inside the Web browser.
Just here's the link and you go to my little crazy thing that I made.
This is the key here.
The code fragment of WebVR code that I showed at the front of the talk was from a Mozilla blog post about how to seamlessly move from world to world, in other words, deep-link from world to world. Now there aren't any standards around this. It's such early days that we're just going, "Oh, does this work? Does this work? We need to do this. How do we do this?" And so a lot of it's poorly defined right now, but that doesn't mean it's not vital. It does mean that you guys have a lot of work to do, but it also means that the basic premise of the Web as being open and extensible is inherently being preserved in what we're doing in WebVR.
- I can talk about this all day, and I'm sure we all could. Unfortunately, we don't quite have all day, and we have a bottle of whiskey waiting for us over there. But I guess I'll editorialise for one last moment, before I send Mark on his way.
Like, Mark, do you remember the Homebrew computing days? The late '70s? You know, I told that story yesterday of going off with my friend's parents, 'cause I was dorkier than them.
And like the Peach and the Pear and those early Apple, you know, early Apple days.
I remember the desktop publishing days of the mid- to late-'80s.
I remember the Web as it kinda feels like that. It really does.
And as you say, we don't know anything, but that absolutely characterises each of those moments if we look back.
We knew nothing about personal computing.
We knew nothing about desktop publishing.
We knew nothing about the Web.
But we just had this sense that it was vague but exciting. - Right, so you folks should go and grab a podcast. The name of the podcast is Voices of VR.
The latest episode is an interview with a man named Josh Carpenter.
He was at Mozilla doing A-Frame.
He's now the lead of Chrome VR at Google.
And he's having a very deep think about how design elements are working in VR and what affordances he provides to you so that you can build amazing 3-D user experiences, because it is that moment.
But the thing is that, unlike in 1994, we know what questions we need to ask, because we stumbled through the territory last time. What was the gap between the Web and CSS, right? There was a long period of time.
We're actually kind of inheriting the ideas and some of the technologies of CSS this time. So we can accelerate the ride, but you're right. I mean, seriously, six months ago, this wasn't really even on my radar.
And now it's taking up about 60% of my time, because everything has been accelerating so much again. - Thank you, Mark, exciting times.
And I look forward to when we're here a year from now, what we'll be seeing, but I suspect, driven by a lot of the technology you talked about, we'll be seeing some very interesting things. And hopefully, the folks in the room will be helping make that happen, right.
- Please, please.
- Once again, let's thank Mark.
(audience applauds)