philip lelyveld The world of entertainment technology

7 Jan 2012

True 3D telepresence getting closer

Researchers at the University of North Carolina have been working on life-size 3D telepresence, which allows remote viewers to look around a distant scene without wearing markers or 3D glasses.

The team, led by Dr. Henry Fuchs and graduate student Andrew Maimone, has created a telepresence system with room-sized real-time 3D capture and a life-size tracked display wall.

The prototype shown in the video below utilises ten Microsoft Kinect cameras, a two-panel display wall (the 'window' to the remote scene), complex algorithms and GPU-accelerated data processing to allow a remote viewer to look into a live scene whose perspective changes as the viewer moves his or her head. It is as if the displays were a window into another room: as you walk past, you can look around objects.

The Microsoft Kinect cameras provide 3D capture of the remote room, including depth information. The system then merges the data from the multiple cameras, reads the depth information, and re-renders the presented view from the viewer's tracked position, giving the viewer of the remote room an illusion of depth that changes as his or her perspective changes.
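The article gives no implementation details, but the capture-and-merge step can be sketched roughly as follows, assuming a standard pinhole camera model and known camera-to-room calibration transforms (the function names and calibration handling here are illustrative, not the UNC team's actual pipeline):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to 3D points with the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

def merge_clouds(clouds, extrinsics):
    """Bring each camera's point cloud into a common room frame by
    applying that camera's 4x4 camera-to-room transform, then merge."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homo @ T.T)[:, :3])
    return np.vstack(merged)
```

A real system would also have to resolve disagreements where camera views overlap, which is where the processing described later in the article comes in.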

UNC Chapel Hill is one of three universities participating in an international consortium, the BeingThere International Research Center for Telepresence and Telecollaboration, which also includes Nanyang Technological University (NTU, Singapore) and the Swiss Federal Institute of Technology Zurich (ETH Zurich, Switzerland).

...

3D for cinema is based on filming with two cameras side by side, a primitive way of producing 3D content that has been around since the dawn of photography. The next stage of 3D capture is likely to offer multiple viewpoints within a 3D scene using 'depth' information (two side-by-side cameras provide just one view; you cannot 'look around' objects), and the UNC Chapel Hill team are using the Microsoft Kinect to provide this depth data for telepresence.

They have presented a Kinect-based markerless tracking system that combines 2D eye recognition with depth information, allowing head-tracked stereo views to be rendered on a parallax-barrier autostereoscopic display. As in the 2D system, a single Microsoft Kinect mounted on the glasses-free 3D display tracks the remote viewer's head, as shown in the video below.  ...

One of the biggest challenges the team had to confront was the considerable overlap between images when using multiple Microsoft Kinect cameras. Several algorithms, such as hole filling and colour matching, were developed to create a more true-to-life image.   ...
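Hole filling can take many forms; the article does not say which the team used. A minimal neighbour-averaging sketch (an illustration of the general idea, not UNC's actual algorithm) might look like:

```python
import numpy as np

def fill_holes(depth, iterations=3):
    """Naive hole filling: repeatedly replace zero (missing) depth
    pixels with the mean of their valid 4-neighbours."""
    d = depth.astype(float).copy()
    for _ in range(iterations):
        holes = d == 0
        if not holes.any():
            break
        padded = np.pad(d, 1)
        neighbours = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                               padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = neighbours > 0
        counts = valid.sum(axis=0)
        sums = (neighbours * valid).sum(axis=0)
        fillable = holes & (counts > 0)
        d[fillable] = sums[fillable] / counts[fillable]
    return d
```

Production systems typically use more sophisticated, edge-preserving filters, since naive averaging smears depth across object boundaries.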

A few weeks ago we reported on a solution that enables direct eye contact in video conferencing. The high-tech R&D firm Fraunhofer Heinrich Hertz Institute in Berlin, Germany showed 3D Focus its Virtual Eye Contact Engine, a software module that analyses a scene in real-time 3D from three cameras mounted around the video-conferencing display. It computes the depth structure of the person's head, which is used to generate a 3D model. The 3D model is then used to compute the view of a virtual camera for both parties, and the rendered output shows each person apparently looking directly at the other.
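The virtual-camera step amounts to re-projecting the reconstructed 3D head model from a camera pose behind the screen, roughly where the other person's eyes appear. A simplified sketch, assuming the head model is a 3D point set and the virtual camera pose is known (a simplification of whatever Fraunhofer's engine actually does):

```python
import numpy as np

def render_from_virtual_camera(points_3d, T_world_to_cam, fx, fy, cx, cy):
    """Project a 3D head model's points into a virtual camera placed
    behind the screen, approximating the view a physical camera at the
    on-screen eye position would have captured."""
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    cam = (homo @ T_world_to_cam.T)[:, :3]
    cam = cam[cam[:, 2] > 0]            # keep points in front of the camera
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=-1)
```

Because the virtual camera sits where the remote person's eyes are displayed, the re-rendered face appears to look straight into the lens, restoring eye contact.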

Both systems are crude in appearance and the quality is still not high enough for commercialisation. However, it will be very interesting to see what researchers do with the upcoming Microsoft Kinect 2, which is rumoured to be accurate enough to lip-read.  ...

Read the full article here:

Filed under: 3D articles