The metaverse is shaping up to be the next big step in the development of the digital world, with the division of cyberspace already underway as firms like Meta (which owns Facebook) prepare the groundwork for new, imaginary worlds. Virtual property within one of several metaverses is already being purchased.
The metaverse is a simulated digital environment that draws on concepts from social media to create spaces for rich user interaction mimicking the real world. While there are similarities with ‘cyberspace’, the metaverse signals a broad shift in how we interact with technology.
Much of the interaction between people and the metaverse requires the use of augmented or virtual reality. A new development based on simulated human eye movement sets out to train metaverse platforms. The virtual platform replicates how human eyes track stimuli across a variety of interactions, ranging from conversations to visits to art galleries.
The development hails from Duke University, where computer engineers have created so-called virtual eyes that can simulate how humans look at the world. The simulation is accurate enough for companies to use it to train virtual reality and augmented reality programs.
The new technology, termed EyeSyn, is designed to help developers create applications for the rapidly expanding metaverse. A secondary benefit is the protection of user data: because the eye movements are synthetic, platforms can be trained without collecting gaze data from real users.
The technology is based on assessing the tiny movements of a person's eyes and the dilation of their pupils. Gathering data on these signals can provide a large amount of information.
For example, human eyes can reveal whether a person is bored or excited, where their concentration is focused, whether they are an expert or a novice at a given task, or even whether they are fluent in a specific language.
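As a rough illustration of how such signals become usable data, gaze and pupil traces can be condensed into simple summary features. The sketch below is a toy example; the function name, feature names, and thresholds are all hypothetical and not part of EyeSyn:

```python
import numpy as np

def engagement_features(pupil_mm: np.ndarray, gaze_xy: np.ndarray,
                        sample_hz: float = 60.0) -> dict:
    """Toy gaze/pupil features of the kind the article alludes to.

    - mean_dilation: pupil size relative to the trace's own range
      (a crude arousal/engagement proxy)
    - gaze_dispersion: how widely attention wanders over the scene
    - mean_dwell_s: average time between large gaze jumps ("saccades")
    All names and the 0.05 saccade threshold are illustrative choices.
    """
    dilation = (pupil_mm - pupil_mm.min()) / max(np.ptp(pupil_mm), 1e-9)
    dispersion = gaze_xy.std(axis=0).mean()
    # Count steps longer than a threshold as saccades; dwell is the
    # average time the gaze rests between them.
    steps = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)
    n_saccades = max(int((steps > 0.05).sum()), 1)
    mean_dwell_s = len(gaze_xy) / sample_hz / n_saccades
    return {"mean_dilation": float(dilation.mean()),
            "gaze_dispersion": float(dispersion),
            "mean_dwell_s": float(mean_dwell_s)}
```

Features like these, rather than raw video of the eye, are typically what a downstream classifier would consume.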
As to why these data are valuable: eye movement information can prove very useful to companies building platforms and software for the metaverse. For example, reading a user's eyes can enable a software developer to tailor content to engagement responses, or to reduce resolution in the user's peripheral vision. In turn, this can save on computational power.
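The peripheral-resolution idea is commonly known as foveated rendering. A minimal sketch, assuming a 2-D grayscale frame whose dimensions are divisible by the block size (the function and its parameters are illustrative, not taken from any real rendering engine):

```python
import numpy as np

def foveate(frame: np.ndarray, gaze_x: int, gaze_y: int,
            fovea_radius: int = 32, block: int = 8) -> np.ndarray:
    """Toy foveated rendering: full resolution near the gaze point,
    block-averaged (coarse) pixels everywhere else.

    Assumes frame height and width are multiples of `block`.
    """
    h, w = frame.shape
    # Build a coarse copy of the whole frame by averaging block x block tiles.
    coarse = frame.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    low = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    # Mark pixels within the foveal circle around the gaze point.
    ys, xs = np.ogrid[:h, :w]
    in_fovea = (ys - gaze_y) ** 2 + (xs - gaze_x) ** 2 <= fovea_radius ** 2
    # Keep original detail inside the fovea, coarse detail outside it.
    return np.where(in_fovea, frame, low)
```

Only the small foveal region needs full detail, which is where the computational savings mentioned above come from.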
By feeding EyeSyn a variety of different inputs and running it a sufficient number of times, developers can create a dataset of synthetic eye movements large enough to train a machine learning classifier for a new program.
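The article does not disclose EyeSyn's internals, but the pipeline it describes can be sketched in miniature: generate labeled synthetic gaze traces, extract features, and fit a classifier on the synthetic data alone. Everything below, including the two activity labels and the feature choices, is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_gaze(activity: str, n_points: int = 200) -> np.ndarray:
    """Toy stand-in for an EyeSyn-style simulator.

    'reading' sweeps left-to-right with line breaks; 'viewing' makes
    large random saccades, as when scanning a painting in a gallery.
    """
    if activity == "reading":
        x = np.tile(np.linspace(0, 1, 40), 5)[:n_points]
        y = np.repeat(np.linspace(0, 1, 5), 40)[:n_points]
    else:  # free viewing
        x = rng.random(n_points)
        y = rng.random(n_points)
    noise = rng.normal(0, 0.01, (n_points, 2))
    return np.column_stack([x, y]) + noise

def features(trace: np.ndarray) -> np.ndarray:
    """Summarize a trace by mean saccade length and rightward bias."""
    steps = np.diff(trace, axis=0)
    lengths = np.linalg.norm(steps, axis=1)
    rightward = (steps[:, 0] > 0).mean()
    return np.array([lengths.mean(), rightward])

# "Run the simulator a sufficient number of times" to build a training
# set, then fit a simple nearest-centroid classifier on it.
labels = ["reading", "viewing"]
centroids = {lab: np.mean([features(synth_gaze(lab)) for _ in range(50)], axis=0)
             for lab in labels}

def classify(trace: np.ndarray) -> str:
    f = features(trace)
    return min(labels, key=lambda lab: np.linalg.norm(f - centroids[lab]))
```

The point of the sketch is the workflow: no real eye-tracking recordings are needed to get a classifier working, which is exactly the data-protection angle noted earlier.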
In practice, EyeSyn has been able to closely match the distinct patterns of actual gaze signals and to simulate the varied ways different people's eyes react.
It is hoped the technology can be used to produce commercial software that achieves even better results by personalizing its algorithms after interacting with specific users.