Various examples exist that provide users with such virtual-space and virtual-object interaction capabilities, and while they utilize a wide range of display and input device configurations, none of them enables multi-user interaction with detailed virtual objects of a shared virtual space that is superimposed identically for all users on a shared physical space.
Computer entertainment software applications which enable multi-user access to virtual spaces are largely focused on providing online connectivity for geographically separated users, and in such scenarios understandably do not provide users with ways to interact in a shared physical space.
Specialized engineering and design software applications that display views of virtual spaces, on the other hand, provide no means for multiple users to simultaneously share virtual spaces and collectively interact with the virtual objects of those virtual spaces.
Therefore, although users are able to move in order to perform input signals, they remain bound to a static display device and cannot move around and effectively utilize the shared physical space for interaction.
Such software applications, which use static display devices for displaying views of virtual spaces, are not capable of creating a shared virtual space superimposed on a utilizable shared physical space.
This requirement limits the usability of such interaction mechanisms in multi-user scenarios where face-to-face communication in physical space is necessary, rendering software applications that use these interaction mechanisms unsuitable for such scenarios.
Furthermore, while using these software applications, users are also bound to a static motion-registering unit in order to perform motion-based input signals and are unable to fully utilize the physical space for interaction.
The previously mentioned limitation, due to which users cannot communicate face-to-face in physical space, applies to these software applications as well.
Software applications for interaction with virtual spaces using head-mounted display devices for displaying views of virtual spaces are also more difficult to implement than those utilizing conventional display devices.
User interaction with such objects, however, requires either special software functions for zooming in on the details of the virtual objects or positioning of users' heads in anatomically very difficult, if not completely impossible, positions.
Movement of a head in space requires the
whole body to adjust and follow the movement, which can be very difficult.
Viewing small parts of detailed virtual objects may require users to position their heads in places that are physically out of their reach, and therefore may not be possible at all.
The ability to interact with small parts of detailed virtual objects is therefore highly limited using such software applications.
When a user of a multi-user group performs such input signals, these software applications, even if they generate a shared virtual space superimposed on a shared physical space, cannot maintain a shared virtual space that is superimposed identically for all users on a shared physical space.
Additionally, users who interact with virtual spaces superimposed on physical spaces using these software applications in conjunction with head-mounted display devices that are tracked in physical space, and who also use software functions for zooming in on details of virtual objects, cannot at the same time be present in a shared physical space.
The problem that prevents users from sharing a physical space is that the shared virtual space is superimposed differently for each user.
Because head-mounted display devices restrict users' view of the physical space, users would not know the true positions of other users in their shared space and would involuntarily collide with one another.
Although these software applications allow users to see when a shared virtual space is superimposed on a shared physical space differently than for other concurrent users, this method has most of the previously mentioned limitations.
Using head-mounted display devices tracked in physical space for interacting with detailed virtual objects of virtual spaces is impractical in general.
Users cannot view small parts of detailed virtual objects or achieve certain viewing angles on virtual objects, as doing so would require positioning their heads in awkward or impossible positions.
Furthermore, such software applications for interaction with virtual spaces that utilize augmented reality and display views of virtual spaces overlaying views of physical space compromise the image quality of one of the views.
When the combined views are displayed using head-mounted display devices that restrict users' view of the surrounding physical space, both views are image streams, and the image stream containing the view of the physical space is of lower quality because it is captured by a physical camera, which introduces image noise.
Therefore, the resulting image quality of head-mounted display devices used with such software applications that utilize augmented reality is always lower than the image quality of display devices displaying only views of virtual spaces.
In spite of the mentioned capabilities, these software applications are not capable of generating a shared virtual space that is superimposed identically for all users on a shared physical space.
These software applications are nonetheless limited by their dependence on reference objects or images, which serve as markers tracked by the handheld devices observing the surrounding physical space.
Multiple users would not be able to share the same physical space, as they would block each other's handheld devices from tracking the reference objects located on the perimeter of the surrounding physical space.
Software applications utilizing handheld-device-based tracking are therefore unable to create a shared virtual space superimposed identically for all users on a shared physical space.
While displaying views of virtual spaces superimposed on the physical space on a special-purpose handheld device and allowing multiple users to be present and communicate physically in the same space pose no problem for these software applications, for tracking they rely solely on motion-capture cameras with a narrow field of view, making the system unfeasible and unsuitable for use in regular indoor environments.
It is therefore not possible, using these software applications, to create a shared virtual space that is superimposed identically for multiple users on a shared physical space.
These software applications lack a mechanism for precisely identifying points in virtual space, rendering interaction with the details of detailed virtual objects extremely difficult or completely impossible.
Moreover, these software applications use handheld devices that are positioned by users' hands in physical space, which makes it extremely difficult to interact with virtual spaces using display devices of a size and weight comparable to desktop display devices. The size and weight of display devices used by handheld devices are therefore limited to those of mobile devices.
Finally, most software applications for interaction with virtual spaces using mobile handheld devices, such as tablets, rely on the computing devices included in the handheld devices for processing power, and therefore have limited processing capabilities compared to stationary computing devices. Displaying virtual spaces comprised of many detailed virtual objects containing a vast number of geometric features is therefore impossible using these software applications.