noggin
Founding member
Brekkie posted:
So pointless - surely viewers would rather see the reporter on location (that's why CNN sent them there), rather than as a "hologram" in the studio.
They did it because they can! It was a very neat, clever demonstration of two areas of TV technology being used together (VR sensing and sports-style multi-camera scene reconstruction).
The problem they had was that they didn't have an application for it...
Quote:
If anything they should be transporting the presenters to the various locations, which I'm sure could be done very easily via CSO.
Though that would be totally unethical if you didn't clearly state that the reporter wasn't in that location. Also, implementing a system that allowed you to do clean 3D tracking and zooming of both foreground and background is still quite complex. The CNN thing only had to cope with a single 3D element (the reporter), not a background - the background was real. Trying to analyse a background to do the same thing would be quite a lot more complex (unless you just wanted a flat spherical projection surrounding your reporter).
BTW - CNN used CSO to implement their "hologram". The guest stood in a green tent with the cameras poking through, and the video sent to CNN in NY was effectively a single-angle shot of the reporter against green, with the NY vision mixer (or upstream keyer) CSOing them into the studio camera shot. The clever bit was the motion data from NY and the image processing in Chicago that merged the best two (of thirty-five) camera shots for any given angle requested.
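For anyone unfamiliar with how CSO (chroma-key) compositing works in principle, here's a minimal sketch in Python/NumPy. It is not CNN's actual pipeline - just an illustration of the basic idea: any pixel where green strongly dominates red and blue is treated as backing and replaced with the studio shot. The `threshold` value and the simple dominance test are assumptions for the example; a real keyer does far more (soft edges, spill suppression, linear keying).

```python
import numpy as np

def chroma_key(foreground, background, threshold=1.3):
    """Composite foreground over background, keying out green pixels.

    foreground, background: H x W x 3 float arrays in [0, 1].
    A pixel is treated as green backing when its green channel exceeds
    both red and blue by the given threshold factor (illustrative only).
    """
    r, g, b = foreground[..., 0], foreground[..., 1], foreground[..., 2]
    # Boolean mask of pixels considered part of the green backing
    is_green = (g > threshold * r) & (g > threshold * b)
    out = foreground.copy()
    out[is_green] = background[is_green]
    return out

# Tiny 1x2 example: one green-backing pixel, one "reporter" pixel
fg = np.array([[[0.1, 0.9, 0.1],    # green backing -> keyed out
                [0.8, 0.5, 0.4]]])  # skin-toned -> kept
bg = np.array([[[0.2, 0.2, 0.7],
                [0.2, 0.2, 0.7]]])  # studio shot
composite = chroma_key(fg, bg)
```

The hard binary mask here is what makes cheap keys look "cut out"; broadcast keyers generate a continuous matte instead, which is part of why the CNN result looked as clean as it did.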