The assumption is that [[Virtual Worlds]] and [[Ambient Computing]] will converge, creating experiential layers in a world where [[Everything that can be digital will be]], with [[Mixed Reality]] offering an immersive way to experience this digital-physical world.
In terms of layers, this would mean (see the sketch after this list):
* The experiential layer, where people can interact with the digital-physical world (see [[Mixed Reality]])
* The contextualization layer, where the system creates distinct, contextualized dimensions out of the data (see [[Metaverse Expectations#As processing power, storage and bandwidth increases, there are less limits at how many dimensions can exist at the same time]])
* The virtualization layer, where the real world is pulled into virtuality (see [[Metaverse Expectations#The Metaverse is digital dimensions entangled with reality]])
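A minimal TypeScript sketch of how these three layers could be expressed as interfaces, purely for illustration; every type and function name here (VirtualizationLayer, contextualize, Dimension, and so on) is an assumption, not an existing API.

```typescript
// Illustrative only: restates the three layers as types, bottom-up.

interface VirtualEntity {
  id: string;
  source: "sensor" | "scan" | "model"; // how the physical thing was pulled into virtuality
}

interface Dimension {
  name: string;                 // e.g. "factory floor", "city district"
  entities: VirtualEntity[];
}

// Virtualization layer: pulls the real world into virtuality.
interface VirtualizationLayer {
  virtualize(physicalEntityId: string): VirtualEntity;
}

// Contextualization layer: creates distinct, contextualized dimensions out of the data.
interface ContextualizationLayer {
  contextualize(entities: VirtualEntity[], context: string): Dimension;
}

// Experiential layer: lets people interact with a digital-physical dimension, in 2D or 3D.
interface ExperientialLayer {
  render(dimension: Dimension, userId: string, mode: "2D" | "3D"): void;
}
```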
That means the component stack looks something like this (see the sketch after this list):
* [[Mixed Reality]]: Offers immersive ways to experience digital-physical dimensions, either in 2D or 3D
* [[Cognitive Services]]: Provide human-like services to create immersive user experiences with places and things
* [[Machine Learning]]: Adds further contextualization to people, places, and things by analyzing, predicting, and creating activities
* [[Computational Graphs]]: Connect people and other contexts to Digital Twins and provide governance
* [[Identity]] Services: Provide authentication, access control, roles & rights, as well as personal data & metadata management
* [[Digital Twins]]: Contextualize places and things through ontologies and structures
* [[Internet of Things]]: Connects the physical to the digital
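To make the stacking concrete, here is a hedged TypeScript sketch of how the components could compose from the Internet of Things up to a Mixed Reality experience; all type and function names (IoTDevice, DigitalTwin, experienceDimension, and so on) are hypothetical placeholders, not references to any concrete product API.

```typescript
// Internet of Things: connects the physical to the digital via telemetry.
interface IoTDevice {
  deviceId: string;
  readTelemetry(): Record<string, number>; // e.g. { temperature: 21.5 }
}

// Digital Twins: contextualize places and things through an ontology.
interface DigitalTwin {
  twinId: string;
  ontologyType: string;                    // e.g. a model id in some twin ontology
  properties: Record<string, unknown>;
  boundDevices: IoTDevice[];
}

// Identity services: authentication, access control, roles & rights.
interface IdentityService {
  authenticate(token: string): { userId: string; roles: string[] } | null;
  canAccess(userId: string, twinId: string): boolean;
}

// Computational graph: connects people and other contexts to Digital Twins and governs relationships.
interface ComputationalGraph {
  relate(fromId: string, relationship: string, toId: string): void;
  neighbors(id: string): string[];
}

// Machine Learning: adds further contextualization (analysis, prediction) on top of twin data.
interface PredictionService {
  predict(twin: DigitalTwin): Record<string, number>; // e.g. predicted occupancy
}

// Mixed Reality sits at the top as the experiential entry point into the stack.
function experienceDimension(
  userToken: string,
  twin: DigitalTwin,
  identity: IdentityService,
  graph: ComputationalGraph,
  ml: PredictionService,
): void {
  const session = identity.authenticate(userToken);
  if (!session || !identity.canAccess(session.userId, twin.twinId)) {
    return; // access is governed by identity and graph relationships
  }
  graph.relate(session.userId, "isViewing", twin.twinId);
  const predictions = ml.predict(twin);
  // A real implementation would hand the twin state plus predictions to an MR renderer here.
  console.log(`Rendering ${twin.twinId} for ${session.userId}`, predictions);
}
```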