What matters is that the cut between the two sources happens on the same frames (not at the same clock time) that the vision mixing operator sees when they press the button.
So if you label each frame / media unit with its origination time,
and the signal from the vision mixing operator to the vision mixer electronics is "switch between source A frame number x and source B frame number y", not "cut now" ..
Then at the venue the vision mixing box delays the videos so that it can perform that cut .. a bit like a VT edit from an EDL ...
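The idea above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual implementation: frames carry their origination labels, and the venue-side box assembles the output from a frame-addressed cut command rather than from "cut now".

```python
def execute_timed_cut(frames_a, frames_b, cut_at):
    """Assemble the programme output from two frame-labelled sources.

    frames_a / frames_b: lists of (frame_number, payload) tuples,
    each labelled with its origination time at the camera.
    cut_at: the frame number at which the operator saw the cut happen.

    Because every frame carries its origination label, the venue-side
    box can replay the operator's decision exactly, even though the
    command arrives a round trip later.  (Hypothetical sketch only.)
    """
    out = [(n, p) for n, p in frames_a if n < cut_at]
    out += [(n, p) for n, p in frames_b if n >= cut_at]
    return out

# Two labelled sources buffered at the venue
a = [(n, f"A{n}") for n in range(6)]
b = [(n, f"B{n}") for n in range(6)]

# Operator's command: "switch from A to B at frame 3", not "cut now"
programme = execute_timed_cut(a, b, cut_at=3)
print([p for _, p in programme])   # ['A0', 'A1', 'A2', 'B3', 'B4', 'B5']
```

The cut lands on exactly the frame the operator saw, regardless of how long the command took to travel back.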
Ed Calverly had a slide set at this SMPTE meeting
https://www.smpte.org/sections/united-kingdom/events/remote-production-over-ip-networks
The "trick" is on the Architecture 3 slide .. compare it with Architecture 2.
And you can do it with multi-stage production .. as the BBC did for Euro 2016.
The camera-to-mixer output delay has to be just longer than the sum of the proxy encode time, the circuit and decode out to the vision mixing operator's eyes, and then the circuit carrying the command back to the venue ..
But with this time-compensated production the cuts happen where the vision mixing operator saw them happen .. or where the commentator saw the action and spoke.
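That delay budget is simple arithmetic. The figures below are purely illustrative (real values depend on the codec, frame rate and circuits); the point is that the venue-side buffer only needs to be just longer than the round trip, in whole frames:

```python
import math

# Hypothetical latency figures in milliseconds -- illustrative only
proxy_encode   = 40   # proxy encode at the venue
circuit_out    = 30   # venue -> remote gallery
proxy_decode   = 40   # decode to the operator's monitor
command_return = 30   # cut command back to the venue

round_trip = proxy_encode + circuit_out + proxy_decode + command_return

frame_ms = 20  # one frame at 50 fps
# The buffer must be *just longer* than the round trip,
# rounded up to a whole number of frames
buffer_frames = math.ceil(round_trip / frame_ms) + 1
print(round_trip, buffer_frames)   # 140 8
```

So in this example a buffer of eight frames (160 ms) at the venue is enough to make every cut frame-accurate.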
Sadly Suitcase TV went into administration;
IPhrame was a very good product ...
Or you pay for full-bandwidth circuits, like the EPL / IMG set-up!
Last edited by Technologist on 28 March 2020 4:32am