There have been some questions around the viability of the Metaverse, and we've talked in the past about some of the constraints the technology has to overcome. One item that continues to come up is the graphics, specifically in Horizon Worlds on the Meta platform, though other platforms feature prominently in these discussions as well.
Metaverse solutions right now suffer from the same symptom: their graphics don't feel modern. They don't look like modern-day video games, for example, or like movies with computer-generated imagery (CGI). So why are the graphics so far behind, or why do they feel that way? There are a couple of reasons.
Why do Metaverse graphics look dated, and why don't avatars have legs?
One of the reasons is a computing/internet challenge. Rendering graphics in real time takes quite a lot of processing power. If you're trying to create technology that's accessible on a variety of devices and across geographies where the Internet isn't as ubiquitous as it is in the Western world, then you need something that compensates for those limits. Using simplified, somewhat "cartoony" graphics is one way to do that: they demand far less processing power and bandwidth, so they serve as a reasonable stopgap while the technology and its adoption mature.
Beyond the cartoony appearance, another issue is that most avatars don't have legs, and that really creates a barrier to reality. It doesn't feel immersive. Take the Quest device, for example, which holds the dominant market share among AR/VR (augmented reality/virtual reality) platforms. It consists of a headset and two handheld controllers for interacting with the AR/VR interface, plus cameras on the headset that let you control the environment with hand gestures. Those cameras simply can't see your legs. So if you're walking, you can't see your own legs in the virtual environment; you only know you're walking because you're doing it instinctively.
That's the current state of the technology. Because the cameras can't see your legs, the headset never gets the cues that would tell the computer models to make the avatar walk. That's why platforms build avatars without legs, which is less immersive than a full-body model. There are ways to solve this, though. Microsoft's Xbox Kinect, for example, used an external camera to do full-body movement mapping.
Other similar technologies exist that could be incorporated, and I suspect Meta will have the same capability soon. By pairing their Portal device with the Quest, they may be able to close that gap.
Those are the two biggest challenges right now, and there are solutions that, with time, will overcome these hurdles of old-fashioned graphics and avatars without legs.
Hopefully that was helpful. If you have any thoughts on this, we would love to hear them. Stay tuned next month and we'll continue the conversation on the Metaverse. And if you have any questions, please feel free to reach out to firstname.lastname@example.org. Thanks.
Not using Workplace from Meta yet?
Want to see how Workplace could take your corporate Metaverse strategy to the next digital level with virtual meetings? Give us a shout! Through a live demo of the Workplace platform, we'll help you brainstorm ideas on how to use Workplace to achieve your organization's internal communications goals and objectives.
Already Using Workplace from Meta?
If you'd like to explore how Workplace from Meta can improve your internal communications strategy and make your organization part of the corporate Metaverse, we'd love to chat! Give us a call to learn how to use Workplace to enhance employee interaction and engagement.