What is Hybrid Earth

Joaquin Keller explains Hybrid Earth

Transcript:

What is Hybrid Earth? Let me tell you the story.
When walking in Street View, we almost have the feeling of really being in the street. Now imagine being able to see and chat with the people who are actually there, physically present.
That is what is called mixed reality: a hybrid world, half real and half virtual, with avatars and people side by side in the same space. This world is dual in nature: you can enter it either as an avatar in a virtual world that mirrors the real one, or as a physical person wearing goggles that reveal the virtual side. It may sound like science fiction, but I think we are now ready for this experience: all the required technology is available.

The ongoing exponential growth in the number of geolocated sensors has given birth to what we call mirror worlds. The typical example is Street View, which deploys all over the planet cars full of sensors: spherical cameras, rotating lasers for 3D scanning, antennas to map Wi-Fi hotspots, et cetera.
On top of that, there are billions of smartphones, drones… and soon Google Glass, constantly uploading and streaming geolocated pictures and videos. All this leads to a mirror world with more and more detail, updated more and more frequently, covering the whole planet.

Mirroring the real world consists of building a map where sensed data is associated with its location. Hence, to contribute to the mirroring, mobile sensors need to be precisely geolocated.
In a virtuous circle, the same sensors used to map also serve to compute geolocation.
Let me explain: Android and iOS rely on a map of Wi-Fi hotspots to provide location services when GPS is unavailable. To get their location, users send the list of hotspots they can see; by comparing and triangulating against the map, their position is computed.
But by sending this data, users are not only consumers: they also contribute to refining the map and keeping it up to date.
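To make the idea concrete, here is a minimal sketch of hotspot-based positioning. The hotspot identifiers, coordinates, and the weighted-centroid heuristic are my own illustrative assumptions, not how Android or iOS actually implement it; real services use far larger databases and more sophisticated models.

```python
# Toy sketch of Wi-Fi positioning: hypothetical hotspot map and a simple
# weighted-centroid estimate (NOT the actual Android/iOS algorithm).

# Hypothetical hotspot map: BSSID -> known (x, y) position in meters.
HOTSPOT_MAP = {
    "aa:bb:cc:00:00:01": (0.0, 0.0),
    "aa:bb:cc:00:00:02": (40.0, 0.0),
    "aa:bb:cc:00:00:03": (0.0, 30.0),
}

def estimate_position(scan):
    """Estimate a device's position from a Wi-Fi scan.

    `scan` maps BSSID -> RSSI in dBm. Each known hotspot pulls the
    estimate toward itself, weighted by signal strength (a stronger
    signal suggests the hotspot is closer).
    """
    total_w = x = y = 0.0
    for bssid, rssi in scan.items():
        if bssid not in HOTSPOT_MAP:
            continue  # unknown hotspot: a candidate for adding to the map
        # Convert dBm (e.g. -40 strong, -90 weak) into a positive weight.
        w = 10 ** (rssi / 20.0)
        hx, hy = HOTSPOT_MAP[bssid]
        x += w * hx
        y += w * hy
        total_w += w
    if total_w == 0:
        return None  # no known hotspot in sight
    return (x / total_w, y / total_w)

scan = {"aa:bb:cc:00:00:01": -40.0,   # strong signal: we are near this one
        "aa:bb:cc:00:00:02": -70.0,
        "aa:bb:cc:00:00:03": -75.0}
print(estimate_position(scan))  # close to hotspot 1 at (0, 0)
```

Note the `continue` branch: hotspots absent from the map are exactly the data users contribute back, which is how the map stays up to date.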
This collaborative and iterative mapping process can be implemented with other sensors as well. Some solutions, for example, use the compass embedded in smartphones to improve geolocation by mapping the magnetic field. Cameras can also be used in the same iterative map-and-locate scheme, mimicking the way humans proceed: after all, you often know where you are simply by opening your eyes. Today's techniques and computing power may not yet allow fast image identification against a large database, but in our experiments with simple visual markers, we achieved precision in the centimeter range.
This is accurate enough to place camera-wearing users inside the mirror world. Here is an important point to stress: augmented reality glasses are intended to augment reality, sure, but since they include a camera, they also help locate the user and collect data for the mirror world.
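The map-and-locate loop described above can be sketched as follows. This is a deliberately simplified 2D model: I assume the camera directly measures each marker's offset from itself, whereas real marker-based systems estimate a full 6-DoF pose from images. The marker names and coordinates are invented for illustration.

```python
# Toy 2D sketch of the iterative map-and-locate loop: markers with known
# positions localize the camera, and the localized camera in turn
# positions markers it sees for the first time.

marker_map = {"m1": (2.0, 0.0), "m2": (0.0, 3.0)}  # marker id -> (x, y)

def map_and_locate(observations, marker_map):
    """observations: marker id -> (dx, dy) offset seen by the camera."""
    # 1. Locate: each known marker gives one estimate of the camera position.
    estimates = []
    for mid, (dx, dy) in observations.items():
        if mid in marker_map:
            mx, my = marker_map[mid]
            estimates.append((mx - dx, my - dy))
    if not estimates:
        return None  # no known marker in sight
    cx = sum(e[0] for e in estimates) / len(estimates)
    cy = sum(e[1] for e in estimates) / len(estimates)
    # 2. Map: place newly seen markers relative to the located camera.
    for mid, (dx, dy) in observations.items():
        if mid not in marker_map:
            marker_map[mid] = (cx + dx, cy + dy)
    return (cx, cy)

# A camera at (1, 1) sees m1 at offset (1, -1), m2 at (-1, 2),
# and an unmapped marker m3 at offset (4, 0).
pos = map_and_locate({"m1": (1.0, -1.0), "m2": (-1.0, 2.0),
                      "m3": (4.0, 0.0)}, marker_map)
print(pos)               # (1.0, 1.0): the camera's estimated position
print(marker_map["m3"])  # (5.0, 1.0): m3 has been added to the map
```

Step 2 is the virtuous circle in miniature: every located observer enriches the map that will locate the next one.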

Then there is another technical issue. Maybe the hardest to solve.

Today’s virtual worlds barely reach hundreds, at best thousands, of simultaneously connected users in the same space. However, in this new territory of ubiquitous mixed reality, everyone should be able to get in. That means a virtual world as vast as planet Earth, with billions of avatars.
This is many orders of magnitude bigger than any existing virtual world.

The research community has extensively addressed this problem, and recent results raise hope that the scalability issues will soon be cleared. For instance, within our team we have designed Kiwano, a distributed infrastructure for scaling virtual worlds. For more details, I invite you to check out the project’s web page.
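As a generic illustration of how such systems spread load, here is a toy zone-partitioning sketch: the world is cut into squares, each notionally handled by its own server. This is not Kiwano's actual design (see the project page for that); the zone size and avatar data are invented.

```python
# Toy sketch of one common way to scale a virtual world: partition space
# into zones, each assigned to a server, so load spreads as avatars spread.
# This illustrates the general idea only, not Kiwano's architecture.

ZONE_SIZE = 100.0  # meters per side of a square zone

def zone_of(x, y):
    """Map a world position to the (i, j) index of its zone/server."""
    return (int(x // ZONE_SIZE), int(y // ZONE_SIZE))

def assign(avatars):
    """Group avatars by zone: zone index -> list of avatar ids."""
    zones = {}
    for aid, (x, y) in avatars.items():
        zones.setdefault(zone_of(x, y), []).append(aid)
    return zones

avatars = {"alice": (10.0, 20.0), "bob": (30.0, 70.0), "carol": (250.0, 40.0)}
print(assign(avatars))  # alice and bob share zone (0, 0); carol is in (2, 0)
```

Fixed grids like this break down when crowds gather in one zone, which is why planet-scale designs move toward dynamic, load-aware partitioning.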

In conclusion:
As all the elements are now in place, we can predict the emergence, this year or the next, of a social or gaming platform using mixed reality on a large scale. With Kiwano and Hybrid Earth, our intention is to be part of it.
