Kinect Projects Life Size Holograms Via Videoconferencing Pod


Researchers at Canada's Queen's University are experimenting with the Kinect to create a videoconferencing pod, called TeleHuman, that projects a life-size 3D image of the user. The user stands in a pod ringed by Kinect sensors, which capture their image from multiple angles. That data is then transmitted to a second pod, which displays the 3D image on the pod's surface.
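To give a feel for the multi-sensor capture step, here is a minimal, hypothetical sketch (not the project's actual code) of how points seen by several Kinect sensors mounted around a pod could be fused into a single shared coordinate frame. The sensor positions, angles, and the 2D simplification are all assumptions for illustration.

```python
# Illustrative sketch: fusing observations from sensors ringed around a pod.
# Each sensor reports points in its own coordinate frame; using each
# sensor's known mounting pose, all points are transformed into one
# shared "pod" frame, yielding a single all-angles model of the subject.
import math

def sensor_pose(angle_deg, radius):
    """Pose of a sensor mounted on the pod ring, facing the pod center."""
    a = math.radians(angle_deg)
    return (radius * math.cos(a), radius * math.sin(a), a)

def to_pod_frame(point_local, pose):
    """Transform an (x, z) depth point from sensor coords to pod coords.

    In sensor coords, +z points along the sensor's viewing axis toward
    the pod center, and +x is perpendicular to that axis.
    """
    x, z = point_local
    sx, sy, a = pose
    view = a + math.pi  # sensor looks back toward the center
    wx = sx + z * math.cos(view) - x * math.sin(view)
    wy = sy + z * math.sin(view) + x * math.cos(view)
    return (wx, wy)

# Three sensors spaced 120 degrees apart on a 2 m ring all observe the
# same surface point; after the transform, their reports coincide.
poses = [sensor_pose(d, 2.0) for d in (0, 120, 240)]
fused = [to_pod_frame((0.0, 2.0), p) for p in poses]  # the pod's center
```

In the real system each sensor would contribute a full depth image rather than a single point, but the per-sensor rigid transform into a common frame is the same idea.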

The sensors capture the user from 360 degrees, and the image is projected in 3D on the second pod. The other person can walk around the outside of that pod and view them from any angle.

The technology looks pretty cool, but there are some disadvantages right now. For instance, the 3D image can only be viewed correctly by one person at a time, since it is rendered from a single viewer's perspective. That rules out presenting to a group of people, but works fine for one-on-one video chats. In that sense, the new technology could make conversations with loved ones feel more intimate than a standard video chat.

Having the person effectively "standing in front of you" makes nuances in their behavior easier to read. Body pose and eye gaze are evident in a way they are not with conventional video chat.

Judging from the video, the technology could use some work before it is commercially viable. While the concept is intriguing, it definitely needs to be fine-tuned. The image itself looks a little fuzzy, though that could, in part, be due to the video quality. At any rate, the concepts being explored are fascinating. Once the technology catches up, I expect we will see this in use in the future.

Researchers are continually finding new uses for the Kinect system. In the video game world, they are experimenting with mood detection as a way to draw the user into the story. Other characters in the story may act and react differently toward your character based on things like your body posture and the inflection in your voice.
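As a purely illustrative sketch (not the actual research), posture-based mood detection might reduce to mapping a few skeletal features from the Kinect's tracking data onto mood labels that game characters can react to. The feature names and thresholds here are invented for the example.

```python
# Hypothetical heuristic: infer a player's mood from two posture
# features a Kinect skeleton tracker could supply. Thresholds are
# illustrative assumptions, not values from any real system.
def mood_from_posture(shoulder_slump_deg, head_tilt_deg):
    """Classify mood from shoulder slump and head tilt (degrees).

    Negative head_tilt_deg means the head is tilted downward.
    """
    if shoulder_slump_deg > 20 and head_tilt_deg < -10:
        return "dejected"   # hunched shoulders, head down
    if shoulder_slump_deg < 5:
        return "confident"  # upright stance
    return "neutral"
```

A shipping game would presumably use trained models over many more features (plus voice analysis), but the idea of feeding body-tracking data into character AI is the same.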

The Kinect has also made its way into other fields, including robotics and 3D desktops. The TeleHuman project overlaps with that realm, in that it can also serve as a new way to interact with visual information. As the second part of the video demonstrates, hand position and your physical distance from the object influence what you see before you.
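That distance-dependent behavior can be sketched as a simple mapping from viewer distance (which a Kinect can measure directly from its depth stream) to a render mode. The mode names and cutoff distances below are assumptions for illustration, not details from the project.

```python
# Illustrative sketch: change what the pod displays based on how far
# away the viewer stands. Thresholds are invented for the example.
def detail_level(distance_m):
    """Map viewer distance in meters to a display mode."""
    if distance_m < 1.0:
        return "close-up"    # step in to reveal finer detail
    elif distance_m < 2.5:
        return "full-body"   # default life-size view
    return "silhouette"      # distant viewers get a simplified render
```

In practice the transitions would be continuous rather than hard cutoffs, but the principle of driving the display from the viewer's measured position is the interesting part.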

Overall, I am impressed with the number of uses being found for the Kinect. Once these concepts are fine-tuned, expect to see some futuristic implementations down the road.

[via: psfk]