Good morning, everybody.

Thanks for joining me.

Today's gonna be a bit of an exploration.

We want to look forward and ask: what's gonna be the next era of computing?

For over 20 years I've been working in mobile UX, and to me, this is the era beyond mobile, beyond the current computing era that we're in, the Mobile Era.

And my aim today is really for you to get a bit of an understanding of what that is, and to start to think about how you might be designing for it in the future.

So in a way, I wanna link back to the UX talk this morning about building a bridge.

I'm trying to build a bridge to Spatial Computing so you have a better understanding and can go and explore it yourself.

It is gonna be a bit of a rocket ride.

We've got a lot to cover, and I tend to talk fast, so my apologies ahead of time.

But there are three key things that we're gonna cover today. Spatial Computing. AR Head Mounted Displays, so the glasses or the goggles that you might wear to see these experiences.

And then, really, for the majority of it, the Stages of AR Design, to get you from an idea to prototyping an experience in Spatial Computing.

As I said, I wanna show you tools, I wanna show you examples, and really take you on the journey that we've been on over the last couple of years.

To sort of share our learnings so that you can have a faster track to go and explore, and to ask those bigger questions of: what might you actually do with Spatial Computing, and where might you take it?

So just to quickly address, what is Spatial Computing?

It's really this ability to connect the digital to the physical world.

So we have a reality that we all live in, or we all think we live in, and we wanna look at how we overlay that with the digital experience, how the digital experience understands that there is a physical world, and how we connect those two.

This is a great quote by Cronin and Scoble; I won't go into it, but it elaborates on that.

The areas that Spatial Computing brings together are very varied, and we've had the opportunity to work across a number of these areas over the last bunch of years.

But the fascinating thing about Spatial Computing is how it brings them all together; it creates a network effect for the types of experiences that you can have, and the interplay between them can be really rich.

Of course, we have five senses.

Today we are really gonna focus mainly on the visual sense.

There is a lot of work happening in the other senses as well: taste, and certainly smell, and haptics is a key one. But really, most of the stuff that we'll be looking at today is on the visual, and sound as well.

And I'm sure you've all heard of augmented reality, but just to clarify what we mean by it.

It's really looking at somebody's experience in the environment that they're in.

They might put on a pair of augmented reality glasses or a headset, and we put a digital asset into that world that they can interact with or relate to, and so on.

Now, as well as augmenting reality, we also have the ability to actually diminish reality.

So it is a really interesting aspect.

We're not gonna go deep on that today, but do keep it in mind because you can actually change somebody's ability to focus on something by reducing the noise around them as well as augmenting.

And the particular area that I wanna talk about today is Everyday AR, and this is really what we expect over the next 5 to 10 years, where people will wear glasses and augment their sight and sound and haptics potentially in lots of different ways, okay?

And in everyday life, these glasses will understand the physical world.

And as I mentioned, this is the era that we are all expecting to be beyond the mobile phone.

There is a great quote by Bart Trzynadlowski really talking about this fascinating opportunity to look at how we bring digital content into that physical world to help us with everyday activities and to act as a utility.

This shouldn't be a lot of noise coming in, a lot of ads popping up, or anything like that.

It should be to our benefit, taking into account the context in which we are.

And Doug Bowman from Virginia Tech has done a lot of research and many people have over the last decades in this area of augmented reality.

But he talks specifically about Everyday AR and where these displays that we wear will add value.

Things like having those applications that we want, when we want them, and the ability to have a dynamic screen size.

At the moment, we are very limited: we have the physical size of the mobile phone in front of us, or the display size on our computer, or whatever it might be.

In a field of view, we can potentially in the future have those be dynamic in shape and size, and even extend on the z-axis, into depth, to some degree as well.

And so we are exploring what these things might be like, and certain companies might start exploring some of these things next year, but it's really likely to be a staged approach, and we're seeing some of those already.

So a person might be sitting at a desk, they have a computer on that desk, and they're wearing a headset.

They might still have a keyboard and mouse and be interacting with virtual screens.

Okay, now, taking that further, do you really need a computer?

Can you put a computer these days into a keyboard and just carry around your keyboard and mouse and your headset, and then you've got the whole experience?

And then later down the track, as we move towards this, and there are some devices that are doing this now, can I connect a pair of glasses to a mobile phone?

The mobile phone has the compute power, and the glasses are the output and the display.

And of course the potential future state: I just need a pair of glasses.

I just need a watch to wander around, and I can get most of my computing done these days.

So that sort of really leads us to, what are these things that we will be wearing?

What are these Head Mounted Displays?

And this does actually have a really long history: we go back over 50 years now to Ivan Sutherland creating the Sword of Damocles, a DARPA research project; this massive rig was mounted to the roof and would come down and sit in front of his eyes, again overlaying a digital experience.

If you haven't seen this before, there's an amazing podcast by our dear friend Mark Pesce, "1968: When the World Began", which really talks about a lot of the early days of this technology and its evolution over time.

And to me, if we reflect back on mobile and the similarities with mobile, we are in that 1999 phase of the Nokia 7110.

The technology is exciting, it's cool, everybody's playing with it, but it doesn't really do much. The screens are pretty small, they're pretty boring, it's not fully internet-capable in the way that we think about these devices now. But the technology is moving a lot faster than the mobile technology was 20 years ago.

And what we're starting to see is quite an evolution in these devices.

So HoloLens was really one of the first augmented reality headsets to be commercially available and they followed with HoloLens 2.

In between those, we got Magic Leap, which was very famous for getting over half a billion dollars worth of investment from various venture capitalists, and they released ML 1 in 2018.

And then we've had Magic Leap 2, just earlier this year, and also devices from companies like Snap, and a range of devices from Meta as well. Mind you, the sunglasses there, the Ray-Ban Stories, aren't really augmented reality, but they do have some elements of that.

Whereas the Meta Quest Pro is more a VR headset that can also do passthrough video, so it can simulate an AR experience, is probably the best way to think about it.

And Meta have very much talked about their Project Nazare, which is really aiming for that future state of a pair of glasses, and they're doing trials and things like that, as are Google as well, we hear.

And of course, one of the big players in the market: is it gonna be the same sort of situation as it was with the iPhone?

Apple are expected to launch something, probably early next year, and evolve it from there.

Now, John was nice enough to invite my team and I along today, so if anyone wants to play with these headsets, we have them over in the corner there.

So drop your name on the whiteboard, and you can get to play with some of the things that I'm gonna show you today. We wanna make sure that it's accessible; these devices do tend to be quite expensive.

Not every client can afford it, so now's your opportunity to go and play.

So the core of this is really about the Stages of AR Design.

How can you go from potentially being a designer now to exploring these types of technologies and the experiences within them?

And it's been a long process; as I said, over many years we've been playing around in this area, and because it is so early, there's not a lot of guidance or documentation or information to really follow.

And as with mobile, the tech side of things tends to develop much faster than the UX side of things, but my expectation is that it will rapidly catch up.

The first thing I wanna talk about is Design Principles, and really this is an interesting space because we have to remember that a lot of investments were made in these types of technologies in the past decades.

And so a lot of the research and certainly the design recommendations actually come from academic research and papers out there.

So there's a great study by Krauß et al, who went through, I think it was, 875 design recommendations, from both the academic paper side of things, which tend to be more generalizable, and the practitioner side of things, where the key differentiator is really looking at a particular platform, whether that's Magic Leap or HoloLens or whatever it might be.

So there is a lot of design recommendations.

A lot of them are based on the types of UI recommendations and guidance that you might be familiar with, just adapted in slightly different ways.

But of course, the core is looking at how do we integrate this physical and digital world?

How do they understand each other, and how can we enhance the physical world with these digital aspects?

So I just wanted to show you a quick sample of some of the things that are out there.

Magic Leap have their own design guidelines; you can go onto their website and find these.

It is like mobile all over again.

Things like this first one: what are the things about this relationship to space that make it an opportunity to explore?

And we've gone through all of these and repurposed them and reorientated them for our own needs.

And we've gone and started creating assets to do this work.

Henrique and I worked on looking at, okay, how do we apply these design guidelines to an experience and match the guidelines?

But as we know, there's design guidelines, and then there's also creating your own UI as well.

Microsoft have put out a similar set of guidelines around HoloLens and HoloLens 2, so they're out there and available. But to me, the exciting thing about this period is that there's not a lot of standardization.

We're still trying to work through a lot of these things.

If you actually go and play with the Magic Leap 1 or Magic Leap 2, while I just showed you a pair of glasses, there's actually a controller with it as well, right?

A lot of them will take hand gesture input, but not all of them are there yet.

There's some experiments that you can do with both of those, and this is the exciting thing, right?

Nothing's really been bedded down yet.

How can we create these natural interactions for people, where they can use their voice, gestures, and other types of interactions to guide this technology?

And that's fun.

That's fun to play with and fun to explore.

Now let's take it up a step from the guidelines.

Let's go to 2D Design, something that I'm sure most of you're familiar with. Who uses Figma, show of hands? Almost all here, okay, great.

And of course these companies are very aware of that as well. The Microsoft Mixed Reality Toolkit is one of those things that Microsoft has actually taken from their work on HoloLens and made more cross-platform.

So it's a way of providing UI guidelines and UI assets so that you don't have to go and rebuild these things in your Spatial Computing work.

And the great thing is that, as an output of that, they've created a Figma file that can help you with storyboards, and it has elements there that you would use to create the storyboard within a 2D design of these types of experiences.

Now, throughout this presentation, what I want to do is show you some of the things that we've been doing and playing with, right?

So, one of the things that we do is really explore those scenarios, and a lot of our thinking is around what are the use cases for this technology?

Okay?

It's gonna be critical.

A lot of people that we meet will go: yeah, the technology can do this and this, but what are the use cases?

And we're thinking through those things.

It's not gonna be the emphasis of the talk today, because this is more of a how-to, but you'll see elements of it.

So this is one little scenario that we came up with and we thought, what if people are deaf?

What if they're sitting at their computer in an office and the fire alarm goes off, and they're working late at night, or whatever it might be?

How do we guide them out of that experience?

So there might be some sort of indication of what's happening, and we guide them out.

We might give them some simple recommendations: follow the green path, don't use the lifts, just getting them ready for this experience.

And we might provide them a mini map down the bottom corner there to give an overview of the whole exit strategy through their particular floor or whatever it might be.

And then different little bits and pieces along the way, okay?

And we have to remember that potentially things like power points and so on can be smart.

So there might be ways of interacting with them, such that I don't have to go and flick every physical power point off; I can interact with it another way.

And then how do we do things like overlays on lifts?

So, say: don't use the lifts, you've gotta head this way. There might be other information within there.

We might have heat sensors indicating where the fire actually is, where the fire hose is.

All of these types of information, overlaid to get you out of there.

It is a really simple scenario, right?

Very basic, but what we've been able to play around with is using these assets: creating them, certainly the background storyboard, with things in Figma, and then adding 3D elements with a website called Vectary, where you can create 3D assets.

I don't know if any of you noticed, but one of the things that we used for that scenario was very much an eye-level perspective on things, right?

Because people will be wearing glasses, and this is an overlay for vision.

What we've used is a pair of the Facebook Ray-Ban Stories glasses, which do this recording, and we find them really great for prototyping.

I wouldn't wear them out, but they're really great to go and get that perspective, that first person view.

And also, if you're using gestures, you can use your hands.

You're not holding a mobile phone and doing that sort of stuff.

It's quite interesting, and we use it for body storming and walking through these scenarios, recording them, and then playing back and going, oh, where could we add value within that sort of experience?

And that leads me to Video Prototyping, which is the next level up: taking that video asset and overlaying further information and experiences on it.

Now, a lot of what we are interested in is how a company like Apple might take features that they're working on in iOS or similar and extrapolate those to a future state.

So with iOS 16, we have things like Focus modes and lock screen widgets. Focus modes especially have been evolving, and lock screen widgets are coming on.

So how might we take those things and put them into this type of experience?

So I'm going to work, I'm listening to my lovely music.

I gaze up; this blue dot is my gaze, my eye gaze, and I can see the things that I've gotta do here.

Siri tells me I've arrived at work because I've hit a Wi-Fi point or a geofence.

And I then get contextual information for what I might need to enter that environment, alright?

So very simple, quick scenario, right?

Taking things that we know exist today but moving them to this potential future world.
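
As an aside, that "arrived at work" trigger is buildable today. Here's a minimal sketch using Apple's Core Location geofencing; the office coordinates, radius, and identifier below are hypothetical placeholders, and the print stands in for whatever contextual overlay the glasses would show:

```swift
import CoreLocation

// Minimal sketch: surface contextual content when the wearer arrives at work.
final class WorkArrivalMonitor: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization()  // region monitoring needs "Always" permission

        // Hypothetical office location; radius is in metres.
        let office = CLCircularRegion(
            center: CLLocationCoordinate2D(latitude: -33.8688, longitude: 151.2093),
            radius: 100,
            identifier: "work")
        office.notifyOnEntry = true
        office.notifyOnExit = false
        manager.startMonitoring(for: office)
    }

    // Called by the system when the device crosses into the region.
    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        guard region.identifier == "work" else { return }
        print("Arrived at work: show today's schedule and entry info")
    }
}
```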

The other thing that we can do is also play around with interaction design, right?

We can have a look at how different interactions might happen within the same experience.

So the ones on the left and right are slightly different.

So we can "Usability test" this stuff and get people's feedback early on in the design process, which obviously we want to do.

Henrique, how does your voice sound?

And we are taking tools that you use today, Figma, okay?

Getting those 2D assets, very simple, and using something like Adobe After Effects to put those into the video.

And there are actually a range of different tools that you can use to play with this stuff as well; this is just a sample.

Then the big step: designing in 3D with Head Mounted Displays, okay?

So this is where we start to get serious. As I mentioned, this is the Meta Quest Pro; it goes for about $2,600.

It's a VR headset.

It fully covers vision.

It's got light blockers on either side as well.

It has just recently launched, but it also allows passthrough.

So basically, you can see on the outside there, there's a couple of cameras, and those cameras are video cameras.

So when you look through it, it can turn on this passthrough view, which just shows you where you are, and so on.

Now, the advantage with this type of approach is that you have a huge field of view.

Okay, so we can overlay anything, anywhere within the field of view that you have in that space.

When we move to the AR only headsets, you'll notice the field of view really diminishes.

And that's just where the technology is at this point.

And there's some great apps out there. One that we really started with early on is one called ShapesXR. It allows you to design in 3D, to create a space and a scene, and to have these storyboard scenes where you move through and around objects. You can position things correctly, you can change the size and orientation, and you can pull in stuff from Figma as well, to then give it depth if you want to, and so on.

So a really easy one to get started with quite quickly.

And again, this is a very rough prototype of what that experience we just saw, going through that fire escape, might be.

Okay, so we see something red and flashing, and then we see these green arrows.

And obviously this is our, oh, not obviously this is our office.

But what you can see here is the green arrows stop, right?

And this is because the Oculus has a limitation on the space that it can discern.

Okay?

It's called a Guardian, and it acts as a frame around your experience.

So we can't build something that really goes out into the real world yet; with this type of device, it's limited to a space.

But it's useful to get a sense of things and to start prototyping and exploring.

The other reason that we bring up ShapesXR specifically is that you can use it on the Meta Quest 2.

So it's a much lower entry-level headset; I'm not saying it's cheap, but it's a much lower entry point.

The video passthrough is black and white.

You just saw the color passthrough, so it's not quite the same experience, but it gets you playing with things. And that's where we started; that's what we began with, and for a long time we were playing with that.

Now I'll move to the more AR-specific headsets.

Those from Magic Leap; there are also ones from HoloLens, but we wanna focus on the Magic Leap 1 here.

And as I said, this was 2018 for ML 1, and ML 2 we just got in October.

And what I wanted to show was how do we go about setting this up and creating these sorts of workflows with different elements and so on.

So first, how do we create the things that we want to put in our digital world and design those or bring them in from somewhere else?

Secondly, how do we put them together to create a scene or an experience? And then, how do we push that out to be viewable either on one of these headsets or on a mobile device as well?

The first one I wanna show is a mobile phone app called Polycam.

You can go and grab it from the App Store; it utilizes Apple's RoomPlan.

So what I'm doing is waving it around the office, and you can see it is automatically drawing the shapes of things; and then on that mini map below, it's building a model of our office in real time, measuring things.

This is a real-time measurement.

It puts things in, so you'll see it notes blocks of chairs and things like that, and you can see on the one over here where it's put in chairs.

Now those chairs don't look like our chairs, but it's understood that it's a chair, it's done that object detection, and it's put an asset in that place.

Okay?

You nearly saw the mess.
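
If you want to go a level below Polycam, the RoomPlan API it builds on is directly scriptable. A minimal sketch, assuming an iOS 16+ device with LiDAR; the "office.usdz" file name is just an example:

```swift
import UIKit
import RoomPlan

// Sketch of driving Apple's RoomPlan directly: scan the room, then export
// the processed result as a parametric USDZ model.
final class RoomScanViewController: UIViewController, RoomCaptureViewDelegate {
    private var captureView: RoomCaptureView!

    override func viewDidLoad() {
        super.viewDidLoad()
        captureView = RoomCaptureView(frame: view.bounds)
        captureView.delegate = self
        view.addSubview(captureView)
        // Start scanning; the view renders the live wireframe as you wave the phone around.
        captureView.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    // Called once RoomPlan has processed the finished scan.
    func captureView(didPresent processedResult: CapturedRoom, error: Error?) {
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("office.usdz")
        try? processedResult.export(to: url)  // walls, doors, windows, chairs, etc.
    }
}
```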

And then, instead of scanning the room around you, you can also scan objects, right?

So this is using neural radiance fields, in an app called Luma AI. I can walk around this object on the table in our office, and I get as output a NeRF model that I can then pull into a scene.

Okay?

And it's got various elements that it's detecting.

And of course you could go out to a service like Sketchfab where a bunch of people have created these 3D assets and you can download some for free and some you have to pay for, etcetera.

So you could really have a play around with this stuff quite easily.

Now it gets a bit trickier.

Shane on our team has really been exploring the opportunities around Blender: taking all those assets, the room plan, the chairs, the computers, whatever it might be, and putting them together into a 3D model, right?

So we're creating a view on these things.

And we can then take that Blender model and push it out to something like Reality Composer, Apple's app for iPad, etcetera, explore what that model might be like, and then place it as an AR asset in the world, view it in the world, and play around with it in the world.

Okay, and if Apple were to do something in this particular area, Reality Composer is probably key to that tool chain; if they're gonna move to a headset, this would probably be a starting point for them.
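
Reality Composer itself is no-code, but the same step, placing a Blender-exported USDZ into the world, can be sketched with Apple's RealityKit. A minimal sketch; "OfficeModel" is a hypothetical asset name bundled with the app, and the scale is a guess:

```swift
import UIKit
import RealityKit

// Sketch: load a USDZ exported from Blender and pin it to the first
// horizontal surface ARKit finds (e.g. a desk).
final class ModelViewController: UIViewController {
    private let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        // Anchor on a horizontal plane at least 0.5 m square.
        let anchor = AnchorEntity(plane: .horizontal, minimumBounds: [0.5, 0.5])
        if let model = try? Entity.loadModel(named: "OfficeModel") {
            model.scale = [0.05, 0.05, 0.05]  // shrink the room model to miniature
            anchor.addChild(model)
        }
        arView.scene.addAnchor(anchor)
    }
}
```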

This is two views through the ML 1 headset.

This one's looking at it as a little object in the world.

This one's almost a bit more VR, in that I've made the whole room proper scale, I'm walking around within the environment, and I've dimmed the rest of the environment, essentially.

So we can walk around and do things and animate stuff within these experiences, and they can be self-guided.

This is one of the things that you can see out there if you play.

The last component to this is spatial mapping.

So with these headsets, especially the ML 1, you wander around and you'll see the little dot pop up; the little x and the red means I haven't scanned.

So I look around, and you can see the little dots; it's progressively showing me all the areas that I have scanned.

So it's doing a LiDAR map of the physical space that I'm in, and then it understands where walls are and things like that.
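
On Apple's LiDAR devices, the equivalent of that progressive scan is ARKit's scene reconstruction. A rough sketch of switching it on (not the Magic Leap API, just the same idea on different hardware):

```swift
import ARKit
import RealityKit

// Sketch: enable LiDAR scene reconstruction so ARKit builds a live mesh of
// walls, floor and furniture, much like the ML 1's progressive scan dots.
func startSpatialMapping(on arView: ARView) {
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) else {
        return  // requires a LiDAR-equipped iPhone or iPad
    }
    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .meshWithClassification  // mesh plus wall/floor/seat labels
    config.planeDetection = [.horizontal, .vertical]
    arView.session.run(config)

    // Show the reconstructed mesh as a debug overlay while exploring.
    arView.debugOptions.insert(.showSceneUnderstanding)
}
```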

All right, so what you can start to do, and this was a real game changer for me, is you can take a digital object.

Obviously, I can interact with a digital object, but I can also bounce it off the wall.

It knows where the floor is.

I can throw it onto the couch.

It can roll off the couch.

Okay?

And this is where things get really exciting, right?

This is just a very simple example of the things that we will be able to do in the future to blend those worlds of physical and digital.
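
On those same LiDAR devices, RealityKit can treat the reconstructed mesh as invisible physics geometry, so a digital ball really does bounce off the real wall and couch. A hedged sketch, assuming the session from the previous snippet is already running:

```swift
import UIKit
import RealityKit

// Sketch: drop a digital ball that collides with the real room.
func dropBall(in arView: ARView) {
    // Treat the LiDAR mesh as invisible collision and physics surfaces.
    arView.environment.sceneUnderstanding.options.formUnion([.physics, .collision])

    let ball = ModelEntity(
        mesh: .generateSphere(radius: 0.05),
        materials: [SimpleMaterial(color: .red, isMetallic: false)])
    ball.generateCollisionShapes(recursive: false)
    ball.physicsBody = PhysicsBodyComponent(
        massProperties: .default,
        material: .generate(friction: 0.4, restitution: 0.8),  // bouncy
        mode: .dynamic)

    // Spawn it half a metre in front of the camera; gravity does the rest.
    let anchor = AnchorEntity(world: arView.cameraTransform.matrix)
    ball.position = [0, 0, -0.5]
    anchor.addChild(ball)
    arView.scene.addAnchor(anchor)
}
```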

And so we might take something like that again, same scenario using a different set of technologies.

And again, this time we can take you right out the door and right around and so on, because we know that space, we understand where it is, etcetera, etcetera.

You can keep walking.

The thing I'd like to end on is artificial intelligence.

We're gonna have a great session this afternoon with Mark Pesce.

He's gonna talk much more about this, but this whole area that is happening now with AI and image generation, when you start to mix it with AR, is fascinating, right?

It's also very scary in a lot of ways.

This is, sorry, we'll start.

This is somebody using a mobile app.

They say, "We see an elephant in the tree", and it digitally paints an elephant into the real world that they're viewing. Now, they didn't draw that elephant.

The AI put that elephant there; it understood where the tree was and everything.

And that's Stable Diffusion.

And then, while we haven't had much of a chance to speak about the ethics today, these are real; this is gonna raise so many ethical issues.

But our perspective on that is that only by understanding the technology and how we can interact with it do we begin to be able to design for and appreciate those ethical issues, and to make sure we're designing experiences that optimize for them and don't put people at risk.

So, some quick takeaways: really have a think about how this might be of interest to you.

Look at the physical world and the digital world, and how they can work together; understand how spatial computing could affect your product design in the future; and go out there and play.

As I said, a lot of these things are free, web-based, or simple to get. Understand your preferences, how they fit into your design workflow, and so on, and really understand how you put those pieces together.

We've given you an example of some of those workflows today, but there are many more as well.

And final, oh, sorry.

Have I been?

There we go.

The final slide is: I'd like to thank my wonderful colleagues Shane, Yin and Henrique, who are now embarrassed, and, oh, Aidan, we were supposed to put Aidan's photo in.

Aidan doesn't look like that.

He's out at the booth now.

But these are some of the Oculus experiences that you can play with out there as well, with gestures, and one called SoftSpace as well.

Thank you very much for coming along today and enjoy the rest of the conference.

1. Spatial Computing

2. AR Head Mounted Displays

3. Stages of AR Design

My Aim Today

Show you tools, workflows and examples to help you design and prototype

1. Spatial Computing

  • AR Cloud
  • AR Glasses
  • Artificial Intelligence
  • Computer Vision
  • Conversational UI
  • Intelligent Agents
  • Internet of Things

Stylised icons represent each item

Icons represent sight, hearing, touch, taste and smell

Illustration of a woman standing in a small living room. Her arms are partially raised in front of her.

The same scene. The woman now wears glasses and a 3D globe is suspended in space in front of her.

Everyday AR

One of AR's biggest impacts will be "bringing digital content and interfaces out of the screen and into the world in a contextually appropriate form - helping us leverage the power of computing to get more done, quickly"

Bart Trzynadlowski @BartronPolygon 19 June 2022

Everyday AR

  • Virtual displays are available all the time, anywhere
  • Virtual displays contain information and applications that users may need anytime
  • Users interact with virtual displays for all general purpose computing needs
  • Virtual displays are registered in the three dimensional physical world

Doug Bowman, Keynote ISMAR 2021

An illustration of a woman seated at a desk. She wears glasses, and a Mac mini sized box appears on the desk in front of her. Several windows float in front of her.

The same scene but the windows are all now stacked one in front of the other

An illustration of a woman standing. Two scooters face her. She wears glasses. A phone is in her purse. In front of her floats a screen with information about the scooters.

An illustration of the woman standing alone wearing glasses. A screen displaying weather information floats in front of her.

2. AR Head Mounted Displays (HMDs)

Grainy old photos of a man wearing a steampunk style headset suspended from the ceiling.

An early 2000s style Nokia mobile phone.

Table of innovations in AR/VR by year from various companies. Companies from top to bottom on the vertical axis are

  • Microsoft
  • MagicLeap
  • Snap AR
  • Meta
  • Apple

The horizontal axis has years from 2016 to 2025. Key moments are HoloLens 1 2016, Magic Leap 1 2018, HoloLens 2 2019, Snap Spectacles 2021, Meta Ray-Ban Stories 2021, Magic Leap 2 and Meta Quest Pro 2022

Images of Magic Leap 1, Magic Leap 2, Meta Quest Pro

3. Stages of AR Design

Design Principles

Reproduction of a complex diagram from Krauß et al. 2021, "Research and Practice Recommendations for Mixed Reality Design: Different Perspectives from the Community". Connects "Scientific Design Recommendations" and "Practitioner Design Recommendations".

Magic Leap

Use the Strengths of the Medium

Push the paradigm beyond screen-based interfaces. Make things that can only be done here, and require the kinds of sensory awareness that this new ecosystem awakens. You are no longer bringing the user into your world, you're joining them in their world.

Visual Simplicity

You want to keep the user in their world, so reality should be a major player in your experience. Hand pose and position are seen.

Experience Simplicity

Keep it simple. Our tech and the novelty of the medium will max out people's headspace with new stuff to learn for quite some time. Luckily actual reality has a lot of intrinsic rules we can piggyback on. Drop a digital rubber ball, and it should bounce off solid objects, make the right noise, and roll under the table.

Respect and Embrace Reality

Respect the real world. By having your digital objects respond to basic physics in the world, you firmly ground them in reality. You also communicate to the user that the experience respects reality, so they can expect its rules to hold true.

Sharing Is Caring

Even the most mundane task becomes lighter when other humans are involved. We want to create interactions that allow for shared experience and go further by encouraging users to seek out those connections.

Engage the Senses

Sight, sound, touch. Less than half of what our brain perceives as seen is based on input from our eyes. The rest is filled in by what we call "Photoshop of the Mind". The brain is constantly building up a model of the world. It tracks objects and areas of focused attention, using various cues to determine physical attributes and behaviours.

Legibility Mask

A semi-transparent purple background colour that ensures legibility of the UI elements against the environment in the real world.

  • Use to separate application layers
  • Use to increase legibility and separate UI elements.
  • Make the size appropriate for readability at the right distance.

Image of a 3D model on a desktop.

Microsoft HoloLens 2

Stages of AR Design

2D Design

Figma logo

MRTK MIXED REALITY TOOLKIT

MRTK is a Microsoft-driven project that provides a set of components and features, used to accelerate cross-platform MR app development in Unity.

  • Provides the cross-platform input system and building blocks for spatial interactions and UI.
  • Enables rapid prototyping via in-editor simulation that allows you to see changes immediately.
  • Operates as an extensible framework that provides developers the ability to swap out core components.
  • Supports a wide range of platforms

https://github.com/Microsoft/MixedRealityToolkit-Unity

Screenshot of Mixed Reality Toolkit for Figma

Photo of a person at their desk typing. The words "Fire Detected" are overlaid, and a red lens effect is around the image.

Photo of a corridor with a green line and 3D model of the layout of the room, and instructions overlaid.

Scene of a room, with a red cross and overlay over one door, a green arrow pointing to the bottom right, an inset 3D model of the floor, and the text "Fire alarm activated".

  • UI elements and story board designed in Figma
  • 3D Model created in Vectary

Screenshot of Figma and Vectary designs

On the left, a render of Facebook Ray-Ban Stories. On the right, a video recorded wearing the Stories.

Stages of AR Design

Video Prototyping

Video recorded via Stories. Oliver describes what appears.

Compare Usability of Interactions

Splitscreen videos taken while a person walks up a street and looks at a for lease sign in the window. Shows two different ways a phone number might be recognized and a call commenced.

  • UI elements and story board designed in Figma
  • Animations created with Adobe After Effects

Meta Quest Pro

Image of a Meta Quest Pro

ShapesXR

Oliver describes a video prototype of an emergency escape.

Meta Quest 2

~$650

  • Magic Leap 1
  • Magic Leap 2

1. Asset Creation
2. Collation & Refinement
3. Interaction & Output

Polycam

Oliver describes the demo

Luma AI

Oliver describes the demo

Sketchfab

3D model of a chair being rotated in space.

Blender

Video of a room being created in 3D in Blender.

Reality Composer

On mobile, for now...

Video of exploring a 2D room in Reality Composer.

Demo of two views through the ML 1 headset. Oliver describes them.

Stages of AR Design

Spatial Mapping

Demo of wandering around with an ML 1 headset. Oliver describes it.

Stages of AR Design

AI Generation

Tweet reads: Made a quick AR inpainting experiment called We See! It is more a proof of concept showing how voice command, selection gesture, #stablediffusion and #AugmentedReality can come together to alter the reality around us.

Oliver describes the app

Tweet from @Aidan_Wolf reads

AR glasses paired with AI image generation will put humanity in a constant state of hallucination while paradoxically working to unify our perception of reality to include the outer world, inner, and beyond, like mixing paint

Take Aways

  • Shane
  • Henrique
  • Yin
  • Aidan

There is a photo of each, but in place of Aidan's is Yin's photo, repeated.