We are happy to announce our presence at International CES 2014.
IEEE ComSoc has selected HybridEarth to illustrate its activities, so we will be presenting at their booth.
The IEEE booth #30242 is located in the South Hall of the Las Vegas Convention Center (see a general map of CES).
More precisely, the booth is about halfway down along the side wall, a little past Huawei (see the exhibit floor map of the South Hall upper level).
Find here the article and the demo accepted at NetGames, December 9-10. When the exact schedule for the Manycraft demo at NetGames is known, we will post an invitation to all. The idea is to have as many players as possible join us during our presentation.
A stable version of Kiwano is now running on our servers. It is implemented in Python and C++, using the Twisted networking engine and the CGAL library for computing Delaunay triangulations.
Documentation and sample code to connect to the open API are available at kiwano.li. HybridEarth and Manycraft are the first systems to rely on the Kiwano API, and hopefully others will follow. If you plan, even remotely, to use the API, please introduce yourself on the mailing list.
This article, presented at HPCS’2013, describes the architecture and algorithms of Kiwano.
“The first version of this Surrogate Metaverse was simply called Google Earth, and its imagery and topography was updated only every few months or so… Until the launch of Mesh-net – that changed everything”
Rainbows End, by Vernor Vinge, 2006
“A new information age in which the virtual and the real are a seamless continuum, layers of reality built on digital views seen by a single person or millions, depending on your choice. But the consensus reality of the digital world is available only if you know how to wear your wireless access—through nodes designed into smart clothes—and to see the digital context—through smart contact lenses.”
Three years ago it became clear that the future (or at least one of the futures) of the smartphone would be connected augmented reality glasses. Then Google announced their glasses, giving weight to the prediction. However, today, as I write these lines, there are still no easily available devices.
To experiment with the near future, and to be ready when AR phones become mainstream, we have decided to “build” our own.
The contraption elements are:
- an Epson Moverio head-up display. This wifi-enabled device runs Android Gingerbread.
- a Samsung Galaxy S3 Mini smartphone. This is the smallest full-featured Android phone we found.
- a clamp, strong tape, etc. to interlock the two android devices.
How it works:
The Epson Moverio is connected to the internet via wifi. If there is no wifi around, the S3 Mini is configured in portable hotspot mode. This ensures that both devices are always connected.
The app for our “hybrid glass” has two parts: one running on the head-up display, the other on the smartphone. The smartphone has all the sensors and does most of the work, then sends the glasses what to display and where.
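The split between the two devices can be sketched as a tiny message protocol: the phone senses, decides what should appear and where, and ships compact draw commands to the display. Here is a minimal, purely illustrative sketch in Python; the function names and the JSON message format are our own invention, not the actual app code.

```python
import json

def make_draw_command(label, screen_x, screen_y):
    """Phone side: package what to display and where as one JSON packet.
    In our setup this would travel over wifi (or the S3 Mini's hotspot
    link) to the part of the app running on the Moverio display."""
    return (json.dumps({"label": label, "x": screen_x, "y": screen_y}) + "\n").encode()

def apply_draw_command(raw, screen):
    """Display side: decode a command and record it on a mock 'screen'."""
    cmd = json.loads(raw.decode())
    screen[(cmd["x"], cmd["y"])] = cmd["label"]
    return cmd

# The phone decides an avatar label belongs at screen position (120, 80):
screen = {}
apply_draw_command(make_draw_command("avatar:elodie", 120, 80), screen)
print(screen)  # {(120, 80): 'avatar:elodie'}
```

The point of the design is that the display device stays dumb and stateless: all sensing and scene computation happens on the phone.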
All in all, our hybrid glass device is similar to Google Glass, just binocular and with a wider display area. A little bulkier and heavier too…
What is Hybrid Earth? Let me tell the story.
When walking in Street View we almost have the feeling of actually being in the street. Imagine then being able to see and chat with the people really in the street, physically there.
That’s what is called mixed reality: a hybrid world, half real, half virtual, with avatars and people side by side in the same space. This world is dual in nature: it can be entered either as an avatar, in a virtual copy of the real world, or as a physical person wearing goggles to see the virtual side. This may sound like science fiction, but I think we are now ready for this experience: all the required technology is available.
The ongoing exponential growth in the number of geolocated sensors has given birth to what we call mirror worlds. The typical example is Street View, deploying all over the planet those cars full of sensors: spherical cameras, rotating lasers for 3D scanning, antennas to map wifi hotspots, et cetera.
Moreover, there are billions of smartphones, drones… and soon Google glasses, constantly uploading and streaming geolocated pictures and videos. All this leads to a mirror world with more and more detail, updated more and more frequently, covering the whole planet.
Mirroring the real world consists of building a map where sensed data is associated with its location. Hence, in order to contribute to the mirroring, mobile sensors need to be precisely geolocated.
In a virtuous circle, the same sensors used to map also serve to compute geolocation.
Let me explain: Android and iOS rely on a map of wifi hotspots to provide location services when GPS is unavailable. To get their location, users send the list of hotspots they have in sight; by comparing and triangulating against the map, their position is computed.
But by sending this data, users are not only consumers: they contribute to refining and keeping the map up to date.
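As a toy illustration of this map-and-locate loop (the actual Android and iOS algorithms are more sophisticated and not public, so everything below, including the data, is made up): a device's position can be roughly estimated as a centroid of the hotspots it sees, weighted by received signal strength.

```python
# Made-up hotspot map: BSSID -> known (x, y) position in meters.
HOTSPOT_MAP = {
    "aa:aa": (0.0, 0.0),
    "bb:bb": (100.0, 0.0),
    "cc:cc": (0.0, 100.0),
}

def estimate_position(scan):
    """Weighted-centroid position estimate from a wifi scan.
    `scan` maps BSSID -> RSSI in dBm (-40 is strong, -90 is weak);
    a stronger signal suggests a closer hotspot, so it weighs more."""
    total = x = y = 0.0
    for bssid, rssi in scan.items():
        if bssid not in HOTSPOT_MAP:
            continue  # unknown hotspot: a candidate for refining the map
        w = 1.0 / -rssi  # crude weight: less negative RSSI, larger weight
        hx, hy = HOTSPOT_MAP[bssid]
        total += w
        x += w * hx
        y += w * hy
    return (x / total, y / total)

# The strong signal from "aa:aa" pulls the estimate toward (0, 0):
pos = estimate_position({"aa:aa": -40, "bb:bb": -80, "cc:cc": -80})
print(pos)  # roughly (25, 25), closer to "aa:aa" than the plain centroid
```

The unknown-hotspot branch is where the virtuous circle kicks in: scans that mention unmapped hotspots are exactly the data needed to extend the map.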
This collaborative, iterative mapping process can be implemented with other sensors as well. Some solutions, for example, use the compass embedded in smartphones to improve geolocation by mapping the magnetic field. Cameras, mimicking the way humans proceed, can be used in the same iterative map-and-locate scheme: after all, you often know where you are by merely opening your eyes. Today’s techniques and computing power may not yet allow fast image identification against a large database, but in our experiments using simple visual markers we got precision in the centimeter range.
This is accurate enough to place users wearing cameras inside the mirror world. An important point to stress: augmented reality glasses are intended to augment reality, sure, but since they include a camera they also help locate the user and collect data for the mirror world.
Then there is another technical issue. Maybe the hardest to solve.
Today’s virtual worlds barely reach hundreds, at best thousands, of simultaneously connected users in the same space. However, in this new territory, ubiquitous mixed reality, everyone should be able to get in. That means a virtual world as vast as planet Earth, with billions of avatars.
This is many orders of magnitude bigger than any existing virtual world.
The research community has extensively addressed this problem, and recent results raise hopes that the scalability issues will soon be cleared. For instance, within our team we have designed Kiwano, a distributed infrastructure for scaling virtual worlds. For more details, I invite you to check out the project’s web page.
As all the elements are now in place, we can predict the emergence, this year or the next, of a social or gaming platform using mixed reality on a large scale. With Kiwano and Hybrid Earth, our intention is to be part of this.
Mirror world, view from the web browser: in this screencast the user’s avatar moves in Google Street View (and a “homemade” street view when indoors).
External view: the camera captures Elodie wearing augmented reality goggles. As she walks, her clone in the mirror world moves accordingly. The device computes her position using visual markers placed on the walls.
Augmented reality view, what Elodie sees in her goggles: Avatars are walking in the room. The markers are now used as classical augmented reality markers.
When we started working on mixed reality we faced a problem: to cover the whole planet, with all its inhabitants, we had to overcome a huge scalability issue. It was commonly accepted that only a few hundred users could simultaneously be together in a virtual world.
We tried to understand how others deal with this issue. Sure, if you connect everyone to the same server they will be together, but at some point the server runs out of resources. The actual limit is very low: a server can handle at most about a thousand users. So we need to add more servers; we need a distributed system.
An early solution for scalability was sharding, introduced by Ultima Online: the users are distributed over many copies of the world. This was great because many players could be connected at the same time, and the solution is also used in World of Warcraft. But in the end the users might be in completely separate worlds: if they are not in the same shard, they cannot play together.
And then there was the great idea of dividing the space into zones, each zone hosted by a server. This solution was successfully implemented in Second Life. However, the approach has its drawbacks: as people want to socialize and concentrate where the crowds are, many zones remain empty while a few become overcrowded. Since it is precisely the most attractive zones that saturate, Second Life still faces important scalability issues.
There are plenty of available resources: data centers with myriads of computers, and social services with millions of users. Yet virtual worlds remained limited to just a few hundred simultaneous users. The problem, we thought, had to be redefined from scratch.
We realized that handling the decor and handling the avatars are two different problems, which should be addressed separately. Google Earth, for instance, is a sort of empty virtual world, huge in extent and browsed by many users.
Avatars move, talk, dance, and so on. It would be too costly to broadcast these events to everyone, so the server needs to compute who is concerned, who sees the acting avatar. But as avatars move, their positional configuration changes, and this computation has to be repeated: the server must quickly find who is in sight, out of a haystack of moving avatars. The bottleneck resides in the avatars’ movement from one position to another.
In this context we came up with a new solution, where servers are allocated to groups of users based on their geographic proximity.
In a virtual world users are concerned with what is happening nearby. That’s also why most virtual worlds assume that events have only a local effect.
So, for each avatar there is a neighboring zone around its position that provides the complete scene: the visible decor and all the events. Thus, avatars need to be notified of all the events produced by those located in their neighborhood.
It is usually assumed that the neighborhood is the space and objects within a certain range. But this does not take variations in density into account: an area of interest of fixed size can be too large when there are many people around, or too small where the density is lower. Also, if avatars have areas of interest of different sizes, this can produce asymmetry: you see your neighbor, but she does not see you.
In a crowded space you will pay attention to people in the immediate proximity. If you are in an open space you can see someone far away.
Then, given a set of avatar positions, what is a good way to connect them?
We have seen that connecting everyone to everyone is not efficient. Moreover, in whichever direction you look, you should see whoever is closest. Fortunately, the Delaunay triangulation provides exactly this property. The Delaunay triangulation is a classical computational geometry tool, widely used in computer graphics.
This is also the basic data structure of Kiwano, the system we have designed to scale virtual worlds.
An important feature of this approach is that it preserves locality: moving an object produces only local changes, and those concerned by the event are the old and the new neighbors only. Accordingly, the Delaunay structure offers efficient, constant-time access to the neighbors, which lets us send notifications in time, before the source has moved further.
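To make the neighbor lookup concrete, here is what it looks like on top of SciPy's Delaunay triangulation (Kiwano itself uses CGAL, as mentioned above; this snippet only illustrates the data structure):

```python
import numpy as np
from scipy.spatial import Delaunay

# Five avatars: the four corners of a square plus one in the middle.
positions = np.array([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)], dtype=float)
tri = Delaunay(positions)

# CSR-style neighbor lists: the Delaunay neighbors of vertex k are
# indices[indptr[k]:indptr[k + 1]].
indptr, indices = tri.vertex_neighbor_vertices
center_neighbors = {int(v) for v in indices[indptr[4]:indptr[5]]}
print(center_neighbors)  # the middle avatar is linked to all four corners
```

Whichever direction the middle avatar looks, its closest neighbor is one of its Delaunay neighbors, which is exactly the property we need for notifications.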
In World of Warcraft they manage to balance the load by evenly distributing the users among realms. In Second Life the problem is that the zones do not match the distribution of the avatars.
Ideally, each zone should handle the same load, so the zones should not be shaped by the geography. They should be shaped by avatar distribution.
And because avatars move, we can expect the distribution to change over time; this is why we need zones whose shape adapts dynamically.
In Kiwano, a zone is the space covering a group of avatars, selected by proximity. Each zone handles roughly the same number of avatars, ensuring load balancing.
Each zone is taken care of by one server, so its size will never exceed the server’s capacity. And because each zone communicates only with its neighboring zones to maintain the frontier, we can add as many servers as we need: the more people connect, the more servers we add.
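The zoning idea can be sketched with recursive median splits, kd-tree style: cut the crowd along its wider axis until every group fits on a server. This is only a sketch of proximity-based load balancing under our own simplifications; Kiwano's actual algorithm is described in the HPCS'2013 article.

```python
import random

def split_zones(positions, capacity):
    """Split avatar positions into proximity-based zones of bounded size.
    Each zone ends up with at most `capacity` avatars, and the two halves
    of every split carry a nearly equal load."""
    if len(positions) <= capacity:
        return [positions]
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    # Cut along the wider axis, at the median avatar.
    axis = 0 if max(xs) - min(xs) >= max(ys) - min(ys) else 1
    ordered = sorted(positions, key=lambda p: p[axis])
    mid = len(ordered) // 2
    return split_zones(ordered[:mid], capacity) + split_zones(ordered[mid:], capacity)

random.seed(0)
avatars = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(1000)]
zones = split_zones(avatars, capacity=100)
print(len(zones), min(len(z) for z in zones), max(len(z) for z in zones))  # 16 62 63
```

Note how the zone boundaries follow the avatars, not the geography: a dense party corner simply gets cut into more, smaller zones.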
According to the tests we ran, our 8-core servers supported around ten thousand users each.
In World of Warcraft, at any given moment only 5% of subscribers are connected and playing. Applying the same ratio of connected users to total users, one Kiwano server can serve a population of 200 thousand users (10,000 / 0.05), which makes it a cost-effective solution.
That’s why we are confident that virtual and hybrid worlds with millions of users will be common in the near future.