Morning, everybody. Thanks for coming along. The more people the better to warm up the room.
It's nice and cold. Yeah, thanks for the intro, Claire. I'll talk a little bit about the work that I do with the different organisations.
So W3C is the group that defines the standards essentially for the web. The Khronos Group really represents a lot of the silicon manufacturers. So it focuses more at the chip layer of integration there and then ISO is working on a reference model for augmented reality, so people can have a reference model to develop against.
So we spend a lot of time working on how open standards and open data formats can work with augmented reality. What's been holding us back is a lot around that and I'll talk about that as well.
A lot of our work is focused on taking these leading edge technologies and applying them and working out what the real opportunities and the real limitations are.
So what we've done is applied the Dream, Dare, Do framework to our story, so I'll walk you through that.
So we started with a dream. We really wanted to build our own reality, so we love AR and I'll talk a little bit about the definitions of that in a moment.
And we really wanted to be able to explore that for ourselves. There's a whole lot of interesting things, not just at the tech layer, but in the way that it affects the way that we think and the way that we perceive space around us and the way that we communicate with others.
So we really wanted to focus on that as a research area because we find it really interesting and so we dared to build our own platform.
So the first thing we did is in 2009 we launched buildAR which was the first self-service web platform and the idea was to make it possible for anyone to create augmented reality.
Up until then it had really required a lot of dev skills primarily around C++ or Java to compile an application, build it and make it work.
So we made it possible so you could just go to a web browser, click around through a map interface, those sorts of things and start adding augmented reality content.
And what I'll really focus on through the rest of the presentation is the Do! aspect. The execution that we went through.
What we've learned from this and how all of the different technology components fit together and where the landscape is up to at the moment.
So I really want to share our story of how we've put these things together and why we believe right now it's just on the verge of mass adoption.
So what is augmented reality?
Since, you know, the early sci-fi movies there have been a lot of representations of information overlaid onto reality, and one of the first ones that people really pointed to was called Terminator Vision.
So Arnie would be looking around a scene and he'd recognise something, some options would show up and some information would be overlaid onto his view.
In 1990, Tom Caudell was working at Boeing and so he was dealing with that same sort of thing.
They were definitely exploring how pilots could benefit from information in the cockpit because of the information overload, but the real problem that they were facing was the engineers servicing the planes.
The service manuals are sort of a stack of books about that high and so there's an overwhelming amount of information, but it's really important that these planes are serviced correctly, obviously.
So they were working on different ways of presenting this information in situ, in the real context of where you're working.
So Tom coined the term augmented reality and it really stuck after that.
There were a lot of different media representations, and one that people really commonly point to is Minority Report.
Maybe it's Tom Cruise, I'm not sure.
So this was really showing a gesture based interface, a really immersive, rich media experience.
So this is something that really captured people's imaginations as well.
But now you're all really familiar with this sort of thing.
We use these tools all the time.
I'm sure most of you have tried some sort of AR application on your phone.
Or certainly location based information as well.
So it's really common to see applications that lay out information around you onto your view of the world and it's just pulling in location based info.
I'm sure you guys are really familiar with that.
It's also possible to buy off-the-shelf cars now with built in navigation systems.
So they display things that optimise your driving and there's been some really interesting research, I think it was Toyota?
Yeah, anyway, so there's a lot of research around how this sort of information can change the type of driver you are.
So they've found that by presenting cues for navigation and cues for risk, they can make more timid drivers more assertive and more aggressive drivers more calm.
So by having consistent information it really seems to tone people down to a more middle ground.
So these sorts of systems are available now.
Of course you can use different applications to visualise things like putting furniture in your home so you can see how it would look; Ikea's a great example of that. And there's a really wide range of applications now that let you visualise products, specifically furniture and homewares, in your home.
It also lets us explore impossible things.
So there's a whole range of things that are either too dangerous or just not practical so, you know, playing with a brain is obviously one example.
So AR really lets us explore things that are impossible or just not safe to do.
These science fiction interfaces really are starting to become fact, so it's at the point where we can really now start weaving these into real experiences.
The things that we've seen in the movies really are starting to be possible now and one of the things that I really want you to take away from this is we can now start to open up our imagination and really dream about the things that we want to do.
Up until now there's been a lot of technical limitations and we're still going through some of those, but by the time you build projects the technology has changed and it's moving so quickly now.
So I was at Augmented World Expo, that was the conference in Santa Clara, I've been going there every year for five years I think it is now, and every year it's been getting bigger and bigger.
This year there was about 3000 attendees. There was about 200 people that attended by robot.
So there's a robot you could login from your laptop and then steer it, drive it around, and interact with people from that.
It's kind of disconcerting to be sitting in an environment like that with a whole row of faces behind you and you could turn around and actually wave and talk to the people on the screen.
So it was quite an experience.
But it's really... The market's really changed. There's now 3000 people, but there was a lot of big companies attending.
Boeing, SAP, Citrix, Disney, Johnson & Johnson, just a wide range of people, and they're really applying this in their business context now.
It's no longer this sort of hype-ware that it was before, and it's not just being used for advertising and marketing anymore.
The challenge with that was people wouldn't return to those experiences.
They were like empty calories.
You'd go, "Ooh, that's cool."
Most people would actually watch the YouTube video rather than trying the experience, and then they wouldn't come back again, but now we're seeing people applying this with real utility and real usefulness.
A lot of people ask, "Wasn't AR just a fad as well?"
So there's a perception that, you know, these technologies go through like fads and waves.
Paul Saffo has a great point about how technology evolves. Looking at this straight diagonal line up through the middle, that's really our perception of what technology should deliver.
So for each new technology we'll have a fairly linear expectation, we just expect it to keep getting better, and we find that in the early phase this blue line is the reality of what's actually delivered.
So you'll see in this phase there's quite a big gap between what we expect and what's actually delivered, and it starts off really, really slowly.
But it gets to a certain point where it passes that.
Our expectations keep tracking along about the same, but the acceleration in the technology has increased so much that it starts overachieving what we expected, and our belief is that AR is about here right now.
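That expectation-versus-reality gap can be sketched as a toy model: a straight line for what we expect, and a logistic S-curve for what's actually delivered. All the parameters here (slope, ceiling, rate, midpoint) are made-up illustrative values, not data from any real forecast:

```python
import math

def expectation(t, slope=1.0):
    """Linear expectation: we assume technology improves steadily."""
    return slope * t

def delivered(t, ceiling=10.0, rate=1.0, midpoint=6.0):
    """Logistic S-curve: slow start, rapid middle, eventual plateau."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on, reality lags our linear expectation (the disappointment phase);
# past the midpoint it overshoots it (the surprise phase).
print(expectation(2) - delivered(2) > 0)  # early gap: expectation ahead
print(expectation(9) - delivered(9) < 0)  # late gap: delivery ahead
```

The crossover point, where the blue curve overtakes the straight line, is roughly where the talk argues AR sits right now.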
It's a really important point and everything is in a state of change at the moment.
So it's really important to not think of these technologies as fads that wave through.
That's more around our perception of them. And I think if we're implementing strategies for these sorts of things, it's great to keep this in mind for all the different technologies that we deal with.
But one thing is that AR is only a small slice of the whole picture.
So what we're really talking about is two broad areas of computing: Pervasive Computing and Mixed Reality.
I'll focus on Mixed Reality for today.
So this is a continuum presented by Paul Milgram in 1994, and what it does is it shows the range of different relationships between the real world over on, I think your left side over here, and the virtual world over this side.
Now, inside the social sciences there's a whole lot of debate about if there really is a fragmentation between those two, but in reality, you know, you can have...
Let's take the examples here.
So there's a real person's head over this side and a completely virtual head over this side, and then you have shades of grey in the middle.
So augmented reality is really here where you're taking a real object and overlaying digital information onto it.
There's another thing called augmented virtuality, which is taking a virtual object and then maybe presenting video of the person's face onto that object.
If you put video into Second Life or those sorts of things, that's really what this slice is, but most people focus on this area where they're seeing the real world and then overlaying digital information onto it.
So what is virtual reality?
Virtual reality was a term coined by a French poet in 1938, and in 1962 Morton Heilig created the Sensorama.
If anyone's got $1.5 million it's for sale now, but basically it was a Coney Island sideshow sort of ride. So this is it here.
You sat in it like this and you stuck your head in it and it had a screen.
He was effectively a filmmaker, and it let you ride through Manhattan on a motorbike, but what it did was blow wind in your hair, and it also used smells, aromas, sounds, vibrations, all sorts of things.
So it was really sort of the first virtual experience, but 1962, that was a long time ago.
In 1968, Ivan Sutherland developed this head mounted display.
So this was a binocular vision system: each eye received an independent image rendered by a computer.
The graphics back then were quite primitive, but it was quite compelling and he called it Sword of Damocles because all of this equipment was hanging over your head obviously.
And you can see how massive and how primitive this was, but 1968, you know, that's quite a long time ago.
In the '70s, '80s and '90s, Tom Furness completed some amazing work that really extended VR even further.
He did a lot of work for The Department of Defense.
He was focused on cockpits and the information overload there and how you make an integrated scene.
The problem that they were dealing with was...
I'm not much into weapons and war and that sort of thing, but they were focused on planes being a delivery system for weapons, and at that time the way that they would steer the weapons would be by steering the plane.
So you point the plane at a thing and that would be where your weapons would go, but they had much more compelling targeting systems.
They just couldn't... The pilots couldn't drive that because it was something like, I think, 83 different computers or some ridiculous amount.
So what they developed was this big Darth Vader sort of helmet, and it created an environment where the pilot had a VR view of the world that pulled out all of the key landscape elements, the features, and the pilot could just point and tap at different areas and interact with it like a scene, so it wasn't really about steering the aircraft anymore.
It was much more of a God's-eye view to interact with, and that had an amazing impact on the performance of the pilots and their ability to cope with the stress of it.
Tom's done some amazing work, and he went on to found the HIT Lab, which is a global network of research labs, so I definitely encourage you to check out his work as well.
And then in the '90s, so yeah, things got a little hairy.
Jaron Lanier became the poster child of VR in the 90s.
He's a musician and a programmer, and he really focused on using VR for creating music.
And he created some great experiences.
It really built up a lot of the hype, and I think he suffered from that.
At one point, Nintendo had this Power Glove made, which is what he's wearing here.
So it's an interactive gesture-tracking device from the late '80s for playing video games.
It was actually shipping and, you know, you could buy it at Toys R Us. It didn't go anywhere.
So I think what Jaron really suffered from was that in the '90s there was a whole wave of media, things like The Lawnmower Man.
It really hyped up VR and showed, you know, this spectacular view, often quite dystopian view of what was going to happen in this environment.
But when that bubble burst, it really led VR into, you know, basically into the desert.
There was a long dry period where VR was uncool. It wasn't something that anyone focused on.
Silicon Graphics and those sorts of companies died down. It really wasn't something that people were focused on.
But since that time, in the last couple of years, there's been some amazing developments that have happened.
So if you look at the Kickstarter project that Oculus launched, they basically wanted to rebirth the industry.
And the thing that's come out of that is the Oculus Facebook buyout.
So I'm sure you're all familiar with this. If you haven't, so Mark Zuckerberg paid two billion dollars to buy Oculus.
This has had an amazing kick-start effect on the industry.
So when I was in the valley couple of weeks ago, it's just an amazing bubble again. Everybody's focused on VR.
Again, I think VR is just a slice of the mixed reality continuum, but there is so much activity going on, it's literally crazy.
Now, it's not really possible to draw a clear line between AR and VR. But who really cares? Most users don't care.
They just go, "I want some magical stuff."
In fact, people will misname a lot of these technologies. They just want digital magic.
They want a cool experience, and they want things to be useful and have an impact on them.
So let me talk through why there's a fuzzy line between the two, and how this is really changing in the marketplace at the moment.
So in 1838, Charles Wheatstone created this device, which is called the stereoscope. Basically there are two separate images created from two viewpoints not very far apart, just like the 3D filming we do nowadays.
And those lenses, you need to hold it up to your eyes. It was basically an early View-Master.
And from that, people could see a 3D perception of a scene, and back then, that was a pretty compelling experience. That was multimedia.
So what we've done nowadays is we've done exactly the same thing.
But instead of using the two stereo photos, we just put a phone there, and it works really well.
So you can render two slightly different views of a scene on your phone, and using these sorts of devices, you can get a full VR experience.
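A minimal sketch of what those "two slightly different views" amount to: two virtual cameras separated horizontally by the interpupillary distance (IPD). The 63 mm default and the helper function here are illustrative assumptions, not any particular engine's API:

```python
IPD = 0.063  # average adult interpupillary distance in metres (assumed value)

def stereo_eye_positions(head, right, ipd=IPD):
    """Offset the head position by half the IPD along the head's 'right'
    vector to get the left- and right-eye camera positions; each eye
    then renders the scene from its own position."""
    half = ipd / 2.0
    left_eye = tuple(h - half * r for h, r in zip(head, right))
    right_eye = tuple(h + half * r for h, r in zip(head, right))
    return left_eye, right_eye

# Head 1.6 m up at the origin, with +x as the head's "right":
left_eye, right_eye = stereo_eye_positions((0.0, 1.6, 0.0), (1.0, 0.0, 0.0))
print(left_eye, right_eye)  # each eye sits 31.5 mm either side of centre
```

In a real renderer, each eye's image is then drawn to one half of the phone screen, and the headset's lenses do the rest.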
But of course it doesn't have to be in the stereoscope format.
There's a really common format now which is called cardboard.
Yeah, cryptically named; I can explain why. (chuckles) Basically Google have kind of popularised this, but it really can work with any application that can render these stereo views on the screen.
Inside here, you have two small lenses. They're really cheap lenses. We can buy them for something like 15 or 20 cents each. So you can make this sort of thing for, you know, under a couple of dollars. But this whole space is exploding.
So one of these mobile headset holders or devices is announced at least once a week, and they go all the way from, you know, the low-cost cardboard ones right through to the high end.
This is a Gear VR. So it's a couple of dollars for the cardboard ones and, I think, 300 dollars for these.
Basically you strap a device in the front, and it just works like that.
There's some really cool ones in between. These are called Go 4D which is from a Korean company. It's 20 dollars. And basically, they just clip onto a screen like that and then you look into it.
And it looks kind of primitive, but when you put it on, it's surprisingly compelling.
We can bring up some examples later.
So really, now you can just slip between your mobile and wearable device, you know, as comfortable as slippers. It's fairly easy.
This isn't the sort of experience that people are going to spend hours and hours and hours in.
The higher end VR displays, people will probably spend a lot of time gaming, but in these sorts of experiences, they're the sort of thing people will use at schools, use in, you know, small companies.
People will carry around small devices like the Go 4D, pull them out for, you know, for one or two-minute experience, and then they'll go away.
But the key thing is you can also just use the exact same experience in the phone. You don't necessarily have to go into the VR.
So you can switch between them. Some people don't have stereopsis, so they can't fuse the two images into a single view.
I have eye dominance, so I definitely have challenges with that sometimes.
Yeah, this technology is spreading really, really quickly.
And it's probably the thing that's going to transform this space the most because it's such a democratic or a democratising technology.
So I believe Google have already shipped out 500 million of the Cardboard devices and apps. So it's a real base.
So what are Wearables? This leads us on to a nice segue.
Wearables, people have been wearing computers for a long time, so initially I think this is the international unit of measure called a Boy Band.
So there's a Boy Band's worth of people wearing them. This is Thad Starner, Steve Mann and those guys back at MIT in '95.
I think it took real balls to walk around looking like that in 1995.
And Steve loves a good debate. He loves a language debate, he likes to twist words around, but he also loves a policy debate.
He got kicked out of McDonald's in France because the security guard said, "You're not allowed to film in the premises."
And he got into an argument about how they were filming him with their CCTV cameras. And they tried to take it off him, and he had a letter from his doctor saying it was physically attached. You're not meant to remove it.
And, yeah, Steve loves a good media blowup. But since then, you know, wearable technology has come a long way.
There's probably a lot of people in this group already wearing wearable computers. But we don't think of them so much like that.
This is, you know, the Fitbit, an activity tracker. These are really common now. The reason that we believe that wearables and mixed reality work so well is often these devices...
Well, there's two categories, really. There's a wearable display, something that you'd put in front of your eye or near your eye and help you see the world in a different way.
And then there's also devices that have minimal or no interface, and are just wrapped to your body in some way.
And AR gives you the ability to project information that's coming from these devices and then create virtual displays. So you can have a much larger interaction.
And that works with the whole Internet of Things as well. There's a whole lot of objects probably in the room that you could reach out and grab information from, present information over, but they may have no physical display.
So it's really like we can peel the user interface off of these hard objects now, and put them in the network.
The first examples of that were really Wi-Fi access points, something that had, you know, maybe one or two buttons, and then most of the configuration was done by the web.
This is something that's really spreading with the visualisation of interfaces through AR and then interface-less objects like wearables, nearables, and Internet of Things. So I'll just walk through a little bit of the history of wearable tech.
This is a great overview provided by Mashable.
And let's just walk through this. The history of wearable technology goes back a lot further; I think there are examples from the early Chinese dynasties of people wearing abacus-based rings. But what this focuses on is digital technology.
So starting in 1961, Edward Thorp created this, basically a gambling device. So it was a way for him to cheat at roulette.
And he did this, he co-developed it with Claude Shannon who's the father of information theory, if there are any geeks in the audience.
And, you know, this gave him an amazing edge, a 44% edge in the game. That's really amazing.
In 1961, again, we're talking about those times of sort of the Sensorama and the Sword of Damocles head-mounted display.
In 1972, Keith Taft created another wearable device. And again, this was focused on gambling as well. So he wanted to take advantage in blackjack.
So this is the thing about rapid prototyping: he lost a lot of money, so that one didn't go so far.
In 1975, we moved on to the Pulsar calculator watch. This was a large device and, you know, even then, they focused on celebrity for marketing.
It showed how important that is. So allegedly, Gerald Ford expressed interest in getting one. But back then, $3,900 for a calculator basically on your wrist. That's a ridiculous price.
In 1981, that's when Steve Mann started wearing these sort of devices. So he was wearing what's effectively an Apple II strapped to his back.
And he had a lot of TV equipment as well. So he really focused on broadcasting and rebroadcasting TV signals.
So he was definitely one of the earlier pioneers. And then '84 was the Terminator Vision that we saw a scene of earlier.
And so, yeah, predicting the dawn of Google Glass, but we'll talk a little bit about that in a while.
So 1987, the first of the digital hearing aids was released. In fact, my nephew wears one, and it was a really life-changing event for him to have them fitted.
So, you know, this is an amazing wearable technology that's had a huge impact.
And really, that wasn't that long ago. Steve Mann continued evolving his wearable devices and fashion sense. Steve explored a lot of different terms for these sorts of things; lifelogging was one of them, and it's kind of grown into the area called the quantified self.
Some people still follow that sometimes. It's really around, you know, just creating data about everything in your life: how you sleep, how you walk, how you talk, everything.
And in 2000, who's got a Bluetooth headset?
There's a lot of utility in specific contexts. I still wear mine running, and you see a lot of couriers and those sorts of people wearing them.
But, you know, there's a real balance between social acceptability and these technologies. And we'll talk a little bit about that in a moment.
In 2002, the Poma Wearable PC was launched, and the description of it is pretty apt, I think: the computer retailed for $1,500 and had all the sexiness of a cassette player strapped to a headlamp.
And I think that's a lot of what we've been struggling with, is the aesthetics and the form factor.
We're still quite a way off from that thing in mainstream technology yet.
In 2003, C-Series, the first digital pacemaker. Again, now, there's a whole category. Wearables are the broad area. Things that are more close to you rather than on you, people are calling Nearables now.
And then things like this, these are really embedded systems so, you know, I think this is going a step further beyond Wearables.
In 2006, Nike and iPod teamed up. And that was the first real mass adoption of, you know, pedometer, activity tracking sort of technologies.
From there, it moved into the Fitbit. And so Fitbit now has, depending on your measure, effectively 60% of the wearables market.
They've shipped a lot of devices. And I think the thing is they're not pitching it as a wearable computer. They're just trying to deliver some functionality, something that's useful that everybody can relate to.
And then 2009, again, another horribly expensive wearable computer. And then the Pebble evolved, so the whole watch space is really developing even further now.
Samsung have released some interesting things. I think watches is the real, it's a really interesting category. And really we need a mesh of devices to interact with.
The problem with things like Glass is that Glass is really a point-of-view, outward-facing camera. So if you try to do video conferencing with somebody, it's impossible.
They're only going to see your view. If you're doing a service call or something like that, it's great for them to be able to see what you're seeing.
If you're doing a social call, they want to see your face. So you really need something else. You may have a phone, but it's also easy to have a camera on a watch.
The form factor is getting pretty small now, so being able to just do that is much easier than standing in front of a mirror or something like that.
So Google Glass. I spent about four or five months wearing Glass a lot.
We had three pairs and we spent a lot of time doing research on it. It's a great technology, and it really does have a big impact. I'll dive more into that in a moment.
So yeah, 2014 was termed the year of the wearable. But I think we've learned a lot about some of the challenges and shortcomings since then.
If you really want to get a good overview of the wearable space, check out vandrico.com. They're a Canadian company and they run a database of all of these devices.
They break it down so they'll tell you based on... There's 11 categories on the body, how many devices are for each of those parts of your body.
And they talk about the types of sensors and devices and then their price. So it's a really good overview, and it's been running for about eight years, so it's a really good quality database.
So let's look at some of the things that have been holding AR and wearables and VR back.
So one of the things that we've really learned since sort of mid last year is some of the challenges that face Google.
So there's a large backlash around privacy. The fact that you have a camera on your face is so conspicuous it really has an interruptive effect on your social interactions.
So, you know, I've worn Glass in all sorts of context from, you know, filling up a car, going, getting something at the shops, into meetings, conferences, those sorts of things.
It really gets in the way of your conversation, and it's more about people's perception of the camera than almost anything else. There is a certain amount of something blocking your eye; it's a really impersonal thing to put something between your eye and the person you're talking to.
But, if anything, I believe it's mostly about the camera. You know, there is very little functional difference between the camera on here and one strapped to my face when it comes to taking video of people.
But it is so conspicuous and it has such an impact on your social interactions that I think that's something that I've really struggled with. And so, the perception now is that, you know, you're some sort of spy bot.
The term "Glasshole" sprang up, and there are a whole lot of places banning Glassholes, even before the devices were really shipping.
The other thing is the value exchange. I love this photo.
One of the most important things with technology is there has to be a great value exchange.
Every user who uses a technology has to invest a certain amount of themselves, a certain amount of activity, and they have to get back equal or better value.
Glass is a great tool, but it's not really providing that value exchange for consumers. I think there's a good chance that Glass is going to relaunch very, very soon, and I think there's a chance that they'll refocus on the enterprise market.
There's a lot of companies. So the conference that I just came back from, there was a whole segment on head-mounted displays, wearable glasses basically.
And there were eight vendors, excluding Google, covering their latest developments.
There are some companies like Vuzix and Optinvent that have been doing this for a long time now. Vuzix and SAP are shipping a lot of devices into logistics and infrastructure sort of context.
So, you know, there is some real benefit there, and if it improves your job, improves safety, there's a real business benefit.
But I think it's going to be a little while before we see this from a consumer perspective.
People like Optinvent are trying a different form factor. They're going with headphones that you can then bring down to a head-mounted display.
But again, I still think it's just too invasive.
There is an amazing system that Google have released. It is a contact lens but is nothing like Google Glass.
All it is is effectively a single pixel, not much more. It measures the glucose levels of your blood through the moisture in your eye, so if you have diabetes you can have an almost visual indicator and track your blood sugar levels.
So like that sort of thing, to me, that makes perfect sense as a great value exchange. It's minimally invasive.
That seems like a really good fit for technology.
But again, I think wearable glasses are going through the same sort of S-curve that Paul Saffo was describing before.
VR has gone through that. AR has gone through that. This is deep in the middle of the bottom hump.
You know, people's expectations just aren't being met. But I think pretty soon we're going to shoot out the other side.
And then there's the other problem: AR is primarily shipped as native applications.
I'll talk in more detail about that in a moment. But the biggest challenge with that is getting there. So we call it going "from know to go".
So from somebody just broadly knowing that there's something that I can interact with in the world to actually interacting with it on my device, if you look at some of the...
Not to pick on them, but one of the mainstream AR browsers called Layar, they have a pretty good user experience, I think.
But to go from being told by somebody in some print material or a sign or something like that, to go to the app to download it, start interacting with it, search, find the content you want, go through the first user experience, it's 17 steps.
That's a massive distance. We measure the distance between you and an interface in a number of different ways, based on the network speed, the number of actions taken, the number of steps to access your device.
And so we really measure this as like a geometry.
It's how far away do you feel from an application or an experience. And at the moment, that gap is just too long.
So in each of these steps, there's a falloff rate. 17 steps, you know, you're getting to like 90% falloff rate.
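That compounding drop-off is simple to model: if roughly the same fraction of users abandons at each step, retention multiplies out step by step. The 12.7% per-step figure below is an assumed value, picked only because it reproduces the roughly 90% overall fall-off across 17 steps:

```python
def retention(per_step_retention, steps):
    """Fraction of users still present after a funnel of identical steps."""
    return per_step_retention ** steps

# Losing ~12.7% of users at each of the 17 steps leaves only about
# 10% of them at the end, i.e. a ~90% overall fall-off.
remaining = retention(1.0 - 0.127, 17)
print(f"{remaining:.1%} of users remain")
```

The point is that per-step losses compound: even a funnel that loses a modest slice of users at every step becomes brutal once there are 17 of them.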
So this is a really big challenge. And another big question is: how many people are really using AR?
So there's an industry-stated goal from Ori Inbar.
He's kind of driving this. You know, the goal is to have one billion regular users as soon as possible; they've stated the goal as 2020.
But if you look at where the mainstream AR browsers are. Bless you. These are the three mainstream ones. Junaio is run by a German company called Metaio, who were recently bought by Apple, so that one has kind of gone away.
Wikitude were really the first into the marketplace, and they have quite good momentum. But the one that has really won is Layar, through OEM deals with Samsung.
There's a whole bunch of Samsung devices that ship with Layar already installed, so they've got up to 10 million.
So if you look at the total of that, it's half a million. That's a long way short of the one billion users we're aiming for. So there's got to be a better way to access that.
Our view is that you should just be able to surf the web. So we started in 2009 by making content creation available through the web.
But what we've been really working on since then is how do we use open standards and web standards to make the experience itself run inside the web browser.
And if you add these new technologies, these new standards, and I'll talk through those in a moment, you really do turn the traditional web, which is now Web 2.0, into a rocket. It's a whole different experience.
So this is the slice of browsers that support the new technologies that we need to deliver AR.
So if you look, this is already 14 times bigger than these other installable apps.
And the thing is, these browsers are already installed on the devices.
So these numbers don't represent the whole market. This is just the Android slice, because that's public data.
iOS isn't included. Depending on the audience you're talking to, Australia is quite an iOS-heavy market, but normally there's roughly a good balance between the two. These, though, are public numbers.
You can check them on the Android Play Store. But the important thing is that I put that slide together about a year ago.
Right now, it's grown to 600 million. So the default browser on Android runs these standards.
But if you look at the other browsers over that period, they have flat-lined. They haven't evolved or grown their market share at all. So we really see, and have always seen, these installable AR browsers as transitional silos.
They really do create separate silos. You can't create content for one of them and run it on the other. There are real interoperability problems.
And from our perspective, it's kind of the equivalent of where AOL or Apple eWorld were before the web came along. It was a great experiment. It was a great way of getting people involved.
But it wasn't where the main game was going to be.
So 600 million, that's a really, a big addressable audience. And that's available right now.
So what does the Augmented Web look like?
So that's the term that we use, so we're basically taking the web, adding all the standards we need to add AR, and we call that the Augmented Web, or these are Augmented Web experiences.
So this is the major benefit of the web. Tim Berners-Lee's major innovation was the href, the link. You can send a link to somebody and they just open it.
It's a universal way of sharing content. And that's the biggest advantage the web has over native applications. The Augmented Web is exactly the same, but it's just what happens when you click on it can be much, much more compelling.
So if we look at which browsers support these Augmented Web experiences, it's pretty mainstream already. So Chrome is driving the market. They obviously have the largest install base.
Firefox has a large install base. And Opera is effectively built on the same platform as Chrome.
So if you look at these, they provide the bulk of the market, depending on the audience you're talking to.
If you're talking about consumers, Chrome is by far the largest audience. If you're talking about
the enterprise market, banking, those sorts of things, IE is definitely the main one there.
So if we talk about those other platforms, iOS, Safari, you can run a lot of these technologies. Actually, let me talk through what the standards are.
One of them is that you need some sort of 3D representation. The mainstream web standard for 3D graphics now is called WebGL.
So it just allows you to deliver great 3D content as compelling as any of the native applications. And then on top of that, we need to be able to sense how the devices are moving. So we need things like GPS so we can work out roughly your location, and we need a gyroscope and an accelerometer so we can work out how the device is moving through space.
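As a rough sketch of how a web page reads those sensors, something like the following works in current browsers. The `headingToCardinal` helper is my own illustration, and treating `event.alpha` directly as a compass heading is a simplification (real compass handling varies by platform):

```javascript
// Map a heading in degrees to a cardinal direction label.
// Pure helper, so it can also run outside a browser.
function headingToCardinal(degrees) {
  const points = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"];
  const normalized = ((degrees % 360) + 360) % 360;
  return points[Math.round(normalized / 45) % 8];
}

// Browser-only wiring, guarded so the helper above still loads elsewhere.
if (typeof window !== "undefined") {
  // Rough position via the Geolocation API (the GPS part).
  navigator.geolocation.getCurrentPosition((pos) => {
    console.log("lat/lon:", pos.coords.latitude, pos.coords.longitude);
  });
  // Device movement via the gyroscope/compass events.
  window.addEventListener("deviceorientation", (event) => {
    // event.alpha is rotation around the device's z-axis in degrees.
    console.log("facing:", headingToCardinal(event.alpha));
  });
}
```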
And then on top of that, we need to access the camera and the microphone. And so this is one of the things that's really changed.
One of the new technologies is called WebRTC. It stands for Web Real-Time Communication. Effectively, it was designed as a video conferencing replacement.
So they're targeting that Skype, FaceTime, Hangouts sort of market.
Most of those platforms are integrating so they can talk WebRTC and interact with each other.
Now, if you look at Firefox, you'll see in the top corner, there's a little smiley face. If you click on that, that's a video conferencing platform.
You can just send someone a link and your browser just starts video conferencing with them straight away. These things have really opened up the technology massively, and with these video streams, it's really different from how the web treated media before.
Before, you could place a video on a page and then play or stop it. This is completely different.
Now, with these streams, both the microphone and the camera, we can reach into them, access each individual frame, work on each pixel, and process them in real time. It's amazing.
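A minimal sketch of that per-frame, per-pixel processing might look like this. The grayscale transform is just an illustrative example of working each pixel, not anything from the platform being described:

```javascript
// Pure per-pixel transform: average RGB channels to grayscale in place.
// Works on any RGBA buffer laid out like ImageData.data.
function toGrayscale(rgba) {
  for (let i = 0; i < rgba.length; i += 4) {
    const avg = (rgba[i] + rgba[i + 1] + rgba[i + 2]) / 3;
    rgba[i] = rgba[i + 1] = rgba[i + 2] = avg; // leave alpha untouched
  }
  return rgba;
}

// Browser-only wiring: grab camera frames via WebRTC and process each one.
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
    const video = document.createElement("video");
    video.srcObject = stream;
    video.play();
    const canvas = document.createElement("canvas");
    const ctx = canvas.getContext("2d");
    function processFrame() {
      canvas.width = video.videoWidth || 640;
      canvas.height = video.videoHeight || 480;
      ctx.drawImage(video, 0, 0);
      const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
      toGrayscale(frame.data); // reach into the stream, pixel by pixel
      ctx.putImageData(frame, 0, 0);
      requestAnimationFrame(processFrame);
    }
    requestAnimationFrame(processFrame);
  });
}
```

The same loop is where AR pipelines hook in computer vision: each frame is just an array of pixels you can analyse before the next one arrives.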
So traditionally, the industry has been more about managing boxes and networks. And from there, people have focused on building APIs.
You know, I think that's a great thing. We've done a lot of that for our enterprise customers. But the real strategic challenge that's happening now is the management of streams. So how companies create tools that sit on top of streams, extract value and meaning.
I think that's sort of the next business challenge that people will really be focused on.
So that's WebRTC.
That gives us the camera and the microphone access. These, all put together, give us all the tools we need.
So iOS, and Safari in general, doesn't support that yet. It supports WebGL.
It supports the gyro and the compass, but it doesn't support camera access yet. So that's one of the big pieces they have yet to roll out to the market.
And if we look at the Microsoft space, Microsoft now has good support for WebGL and the different sensors.
They understand the significance of the camera and microphone access we're about to see, and there's a bit of a battle playing out around that at the moment.
But they are definitely engaged in supporting it, probably in the next version.
So what I'd like to do is jump to a couple of videos that just show the types of interactions that you can see in a web browser. And the key with these things is that you can just tap on a link and start experiencing them.
So let me show this first one.
So this is literally just a camera view in the background, using the gyro and the compass, so it's worked out where I am. And as I look around, I can see the 3D content positioned around me.
But one of the challenges with that is it's always been from the center point looking out. Now there's a new approach.
You can actually zoom into models and walk around them. So it can be much more interactive, and you can create these sorts of scenes.
So these are just some rough examples: education, games, marketing, those types of things. You can create all sorts of interactivity.
So this one, basically: when you tap on the guy, he shoots a ball at you, and you can shoot back the other direction.
I'll show you some examples around that later. And the models support pinch and zoom, so you can zoom right into them. With the car, you'll see we get right in, almost sit in the passenger seat and look around, and then go back out.
So this is just one of the modes that can be supported, but, you know, this looks like a native application. This is the sort of experience people will expect from AR.
And so, yeah, the focus really is on how the web itself can now be blended with the physical world.
So what I'll do next is show a different video.
This is a demo that we're about to release. This hasn't been released yet. And this really pushes the network and the devices to their extreme.
So this is effectively a Minecraft sort of experience. You have an island in the middle here, and these are the blocks.
On your device, as you point around, the green block here is the cursor block. As you move it around the island, wherever it is, if you tap on the screen it will add a block there, or you can go into delete mode and delete one.
So you can see, this was Alex using the device and just moving around.
And you can walk around the island physically and build out in these sorts of directions.
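The add/delete mechanics described here can be sketched as a simple shared data structure. The `BlockWorld` class and its API are hypothetical illustrations, not the actual platform code:

```javascript
// Minimal sketch of the block world: a map from integer grid
// coordinates to block colors, with add and delete operations.
class BlockWorld {
  constructor() {
    this.blocks = new Map(); // "x,y,z" -> color
  }
  key(x, y, z) {
    return `${x},${y},${z}`;
  }
  add(x, y, z, color = "green") {
    // Tap on the screen: place a block at the cursor position.
    this.blocks.set(this.key(x, y, z), color);
  }
  remove(x, y, z) {
    // Delete mode: returns true if a block was actually there.
    return this.blocks.delete(this.key(x, y, z));
  }
  count() {
    return this.blocks.size;
  }
}

const world = new BlockWorld();
world.add(0, 0, 0);        // place a block at the cursor
world.add(0, 1, 0, "red"); // stack another on top
world.remove(0, 0, 0);     // delete mode: remove the first one
console.log(world.count()); // 1
```

In a multi-user session like the one in the video, the same state would simply be synchronised between participants, for example over a WebRTC data channel.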
But one of the really key points is that you can then invite a friend in and they can interact with this in the same way.
Sorry about the face.
And you can shoot them, of course.
So, really, we're just showing the different types of interactions and throwing this open for people to play with it while we prepare the next release of our platform.
But this is great for all sorts of contexts: educational contexts, people sharing models, talking about ideas, just being able to send someone a model of something you're going to put into place, or a situation.
They can tap on a link, walk around it.
If you want to talk about it, you can bring them in and chat.
And then all sorts of interactivity as well.
So this gives you a good overview of the sorts of experiences, you know.
Depending on your context, some people may think this is VR, some people may think it's AR.
We can switch on the camera view in the background so you can see the world as well.
But again, users don't really care.
It's about the value proposition. Is it useful? Are they actually going to use it?
So that's my overview of AR, VR, wearables and how we believe this is going to gain really wide distribution.