CineForm:
-12-bit linear / 10-bit log; uncompressed or wavelet compression tech
-workflow solutions – Adobe & FCP pretty well worked out
-new stereoscopic recording formats – left/right eye in one file – big deal
-new FCP workflow allows realtime color and stereoscopic adjustments on the fly AS IT PLAYS
-other tools and niftiness
-playout to Kona card for 3D stereoscopic display
My takeaway – I’d long admired their compression tech, but since it lived outside of the RT accelerated engine, whaddaya gonna do? It is still outside the engine, but they are making it do realtime color correction, realtime stereoscopic geometry corrections (X/Y offsets, keystoning, etc.)
Should run even better on Gulftown-based Macs sometime (next year?), with 6 cores per processor – will they have dual or quad processors then (12 or 24 cores)?
DAMNED impressive! Read on after the jump.
DAVID TAYLOR AND DAVID NEWMAN FROM CINEFORM:
————————————————————————————
(left and right eye Davids!)
BandPro3D_Cineform – pics from their presentation, including a lot of screen shots of various controls and modes and FCP operations – impressive!
they do compression, but it is really about workflow
all software based, no hardware acceleration
underlying tech is compression – they have RAW compression, 12-bit 4:4:4; uncompressed is 12-bit, 4:2:2 is 10-bit
will have Pablo support for 3D later, working with Quantel, working with others
Avid – not yet, but know them pretty well, is a pretty closed system
(Apple not mentioned)
supports two different cameras – SI-2K, CineDDR works with SDI as an SR replacement; Prospect and Neo are the editing family products; support some consumer editing as well – AVCHD to CineForm, etc.
-indie filmmakers are clients, doing more work at higher end for feature and episodic TV, working with studios as well, a lot of integration
(the black box is Quantel logo)
at the QT (or DirectShow) layer – underneath that, decode and hand it off to the next layer – they inserted an active metadata processor layer in there, so processing happens after decoding – whether RAW or RGB was recorded, the source is never touched again – might add 3D LUTs, 3D processing, etc. – all done as active metadata without destructive editing of the source – it's reversible, you can add/delete without destruction – none of the changes are damaging. FirstLight 3D checks for metadata – does it exist? Should I change some, all? etc. It's about a workflow process
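To make that layering concrete, here's a minimal sketch of the idea – my own illustration, not CineForm's actual API; the field names and the play_frame helper are invented for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActiveMetadata:
    exposure: float = 0.0          # stops of gain, applied after decode
    saturation: float = 1.0
    lut_3d: Optional[str] = None   # path to an optional 3D LUT
    enabled: bool = True           # bypass the whole stack nondestructively

def play_frame(decode, frame_index, meta):
    """Decode first, then apply the look as a separate, reversible layer."""
    frame = decode(frame_index)             # untouched source pixels
    if not meta.enabled:
        return frame                        # bypass == the original image
    frame = frame * (2.0 ** meta.exposure)  # assumes a float image array
    # ... saturation, LUT, and 3D geometry corrections would follow here
    return frame
```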
FCP processes 3D as if they were 2D (Kona HD-SDI support for 3D)
gonna start with a PC, but will show it all running under FCP as well
the metadata format for 3D is agreed upon with Iridas (did I catch that right?????)
you have a lot of controls for exposure, contrast, saturation, RGB gain, RGB gamma, etc.
can display as a difference layer – grey, but differences show up – have tools to adjust for convergence
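The difference view is simple to picture – a tiny numpy sketch of the concept (my illustration, not FirstLight's code):

```python
import numpy as np

def difference_layer(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Mid-grey where the eyes match; any misalignment pops out visibly.
    Assumes float images scaled to [0, 1]."""
    return np.clip(0.5 + (left - right) * 0.5, 0.0, 1.0)
```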
If you have an HD-SDI or HDMI card, can display this stuff out as well
can play at full frame rate on a workstation-class machine and make changes in real time
interleaved mode (each field is its own eye) works with the passive displays
24″ gaming monitors at 1080p are good for this
side by side will work with some HD-SDI rigs
Mac and PC versions – some versions are slightly out of sync with each other – on PC right now, you can put in any metadata you like and use it as you wish – for instance, grab the timecode and display it live at the size/location you want – it's done as a database function – works in Vegas, After Effects, etc. – can turn on the burn-ins you want, can put in camera ID #s, take #s, etc. – all of it can be toggled on/off as you wish
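Something like this, I'd guess – a hypothetical sketch of burn-ins as toggleable metadata records (the field names and draw_text helper are my inventions, not CineForm's schema):

```python
# Nothing is baked into the source file; each overlay is a record
# in the metadata database that any player can honor or skip.
burnins = [
    {"field": "timecode",  "on": True,  "pos": (0.05, 0.95), "size": 24},
    {"field": "camera_id", "on": True,  "pos": (0.05, 0.05), "size": 18},
    {"field": "take",      "on": False, "pos": (0.80, 0.05), "size": 18},
]

def draw_burnins(frame, shot_meta, draw_text):
    for b in burnins:
        if b["on"]:   # toggled per view, never written into the pixels
            draw_text(frame, str(shot_meta[b["field"]]), b["pos"], b["size"])
    return frame
```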
-offline/online is more consistent these days for higher end work – they got into 3D because a post production company wasn’t ready to do what they needed – they came to SI to put all this together – all of these changes can be toggled at any time
everything you have to fix in post is lost resolution
auto-zoom crops to the combined coverage image – if you have misalignment, you offset to fix it, and that pushes the image off the edge – so auto-zoom crops to the maximum usable image
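The geometry is easy to work through – a sketch of the crop math under my own assumptions, not their code:

```python
def autozoom_crop(width, height, dx, dy):
    """Largest rectangle still covered by BOTH eyes after shifting one eye
    by (dx, dy) pixels to fix alignment."""
    x0, y0 = max(0, dx), max(0, dy)
    x1, y1 = min(width, width + dx), min(height, height + dy)
    return x0, y0, x1 - x0, y1 - y0   # crop origin + usable size

# A 10 px horizontal fix on a 1920x1080 frame costs 10 columns:
# autozoom_crop(1920, 1080, 10, 0) -> (10, 0, 1910, 1080)
```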
you can make global changes as well – deactivate certain channels, such as white balancing, or LUTs, or orientation adjustments, etc.
-using Iridas look formats – the look created on set goes into the camera, goes into CineForm, optionally applied to the image; Iridas in post can read the look and lets you tweak every knob, etc.
-single file solution – having left and right eye image streams in a single file
-the way it works – you can open it even in the old Media Player, and toggle modes
(burn-ins are 3D placeable) – gotta do it, because if you do’em 2D your brain freaks out
they made a codec identifier – the file looks the same to everybody else – instead of putting two streams into an MOV as multitrack content (which nobody's software understands), they put the second (right) eye as metadata inside the file – it looks like a 2D file, and the 3D-aware apps understand the metadata
-2D tools treat it like any other codec – if the tool played CineForm before, it can still play it – they didn't need to modify any of the FCP, AE, etc. tools – the codec interface becomes the trojan horse to get it to work
-you can take any 2 streams and mux them, or pull a 3D stream back out into 2, no problem
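Conceptually, the mux/demux round trip looks like this – a toy model, not CineForm's actual bitstream layout:

```python
# The right eye rides along as opaque metadata on what 2D apps
# see as an ordinary one-track clip.
def mux_3d(left_frames, right_frames):
    """Pair each left-eye frame with its right-eye twin as attached data."""
    return [{"video": l, "right_eye_meta": r}
            for l, r in zip(left_frames, right_frames)]

def demux_3d(muxed):
    """Recover the two independent streams, e.g. for a Nuke round trip."""
    return ([f["video"] for f in muxed],
            [f["right_eye_meta"] for f in muxed])

# A 2D-only app just reads f["video"] and never notices the second eye.
```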
-for instance – Nuke has advanced 3D compositing – should be able to sync up to it (at the moment, you split it back out to 2 streams) – Nuke would be perfect to support this properly, carrying everything you've created – post-shoot convergence, etc.
“Brain Shear” – temporary disorientation caused by having one shot with a focal depth cut to another shot with a different 3D focal depth
in FCP – I see it up side by side, but I now understand that depending on the preset, you can control how it displays in apps
when in side by side mode, can feed it out to a proper 3D display and you’re editing in 3D!!
(I’m watching a timeline play out in realtime, can do 3D edit in FCP – and for a preso did a quickie edit and played off the timeline)
recently integrated the Tangent Wave panel – while playing in FCP, live color controls in FCP, and you can change the convergence LIVE AS YOU PLAY
LIVE DEPTH GRADING – the metadata controls he's adjusting are written into the database – metadata is recorded per shot – later they will add keyframing – the Wave support came out last week
-“Final Cut is not aware that we are abusing it so” – THAT is why they can get this pretty amazing performance – “gets us a little closer to realtime desktop DI – but you’d still need to see it on a big screen to do a proper DI”
-all the different display modes are usable for different things – tools in your basket – fields mode is handy to watch on a 2D and a 3D screen at the same time
-keystoning – if shooting with toed-in cameras, can correct for that as active metadata
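For a feel of what a keystone correction involves, a minimal OpenCV sketch – my own guess at the operation, and CineForm applies it nondestructively as metadata rather than baking it in:

```python
import cv2
import numpy as np

def unkeystone(frame: np.ndarray, squeeze_px: float) -> np.ndarray:
    """Illustrative keystone fix: toed-in shooting leaves one edge of each
    eye slightly taller, so pull that edge's corners in to square it up.
    Parameter and approach are assumptions, not CineForm's math."""
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[0, squeeze_px], [w, 0], [w, h], [0, h - squeeze_px]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (w, h))
```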
Q: how is it implemented in FCP?
A: it's JUST a codec for FCP – you get the apps and codecs and tools etc. to convert to CineForm – for FCP it works as a QT layer – drop it on a timeline, say "yes" to Make Sequence Match This Clip
Q: video cards?
A: working with nVidia – FCP is an HD-SDI oriented system, not a graphics-oriented one. Cinema Desktop doesn't run at a frame rate that can support 3D. A page-flipped OpenGL interface is being built into First Light – working to make gaming screens work as 3D displays – they didn't focus on that at first because their first clients were FCP-interested and wanted pro displays. Gaming monitors are so inexpensive – $5,000 vs. $300. For broadcast, 42″ and above…
keystone and 3D LUT won't run on today's Mac Pros w/out dropping frames; the new Gulftown 6-core Macs probably will
Q: what about presetting outputs for burn-ins for offline?
A: in Compressor? They aren't focusing on it
Q: In a Pablo or Smoke, what is the benefit of compression?
A: you don't edit in a Pablo – the creative edit is what benefits from this. But Quantel has an SDK for their next release to take the single-file 3D camera source and edit that content directly – a single hard drive can play back a stereo file (40 MB/sec). There's also a CineForm Uncompressed mode. EDIT/CORRECTION – David emailed me this: "There is an SIV SI camera mode, but nothing (much) can use it, however the new CineForm uncompressed mode everything can use if they support either QuickTime, DirectShow, VideoForWindows or the CineForm SDK. This is why SIV is being phased out in favor of CineForm modes in the SI cameras." Sometimes you don't have the compute power to do the compression – a fast SSD can do dual 2K uncompressed (200 MB/sec), and if you have that you can record SIV files – but SIV didn't get metadata, so they were capturing and converting into CineForm to do the metadata work. Uncompressed mode was done for FCP, but the 4:4:4 people couldn't tell the difference.
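The arithmetic behind those numbers checks out – a quick sanity check, where the drive speeds are my assumptions for 2009-era hardware, not measurements:

```python
stereo_cineform = 40        # MB/s, single-file stereo CineForm (from the talk)
dual_2k_uncompressed = 200  # MB/s, dual 2K uncompressed (from the talk)

hdd = 80    # MB/s sustained, typical single SATA hard drive (assumption)
ssd = 250   # MB/s sustained, fast SSD of the era (assumption)

print(hdd >= stereo_cineform)                  # True: one hard drive plays stereo
print(ssd >= dual_2k_uncompressed)             # True: uncompressed needs the SSD
print(dual_2k_uncompressed / stereo_cineform)  # 5.0x bandwidth saved
```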
Q: if you save your project file – all your underlying metadata…?
A: your metadata is stored separately – you can have different profiles (color and 3D) independent of your edit – an independent database is stored – if you want to take it elsewhere, you gotta take that database with you
WHERE IS THAT DATABASE STORED!!!! THIS IS A VITAL QUESTION!!!!
Q: can you do your creative edit and do basic 3D stuff on desktop, finish on Pablo?
A: not YET – but that's the goal – to have an open format, an extension to AAF, to add a 3D convergence track. Export XML from FCP, Automatic Duck (or metric equivalent) to get it out to…
Q: how can you do this?
A: cleverness – if we went through the FCP API, FCP couldn’t do it – First Light is running in the background and is the one controlling the manipulation of the looks and the database
working on CS4 & CS5; FCP is heavy work.
AJA decodes their stuff faster than FCP does – AJA is their partner and is helping them (gotta talk to Jon!)
follow up afterwards:
there's a file directory – each shot has its own GUID (Globally Unique IDentifier), so each shot has its own little bitty file of metadata
can't have the same shot used with two different looks/treatments unless you make a dupe of the GUID or the shot
it keeps a directory of how each shot is treated – that is one more piece of metadata that needs to go with the sequence.
What if two people edited same sequence but have different settings? Trouble!
But it's another set of data you've gotta take with you to the post facility for finishing – see the sketch below
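From that description, the store might look something like this – a hypothetical sketch only, since where exactly the database lives wasn't answered; the directory and field names are my guesses, not CineForm's on-disk format:

```python
import json
import os

DB_DIR = "looks_db"   # assumed location; one tiny file per shot GUID

def save_look(shot_guid: str, look: dict) -> None:
    os.makedirs(DB_DIR, exist_ok=True)
    with open(os.path.join(DB_DIR, shot_guid + ".json"), "w") as f:
        json.dump(look, f)

def load_look(shot_guid: str) -> dict:
    try:
        with open(os.path.join(DB_DIR, shot_guid + ".json")) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}   # no look recorded yet -> play the shot untouched

# Two editors with different settings would each need their own DB_DIR,
# and the whole directory has to travel with the sequence to finishing.
```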
Mike's closing thought – DAMNED impressive running around the barriers Apple has in place to get some amazing realtime performance. That said, however, they STILL aren't part of the RT engine – so while you can do super-cool realtime debayer, color correction, and stereoscopic geometric controls in real time, you STILL can't get a god damn realtime cross dissolve! Sigh. Not CineForm's fault.
but they've really thought this stereoscopic workflow stuff out very nicely – and a bunch of other workflow issues as well, frankly – Apple would do well to learn from what these guys have done. I'd imagine some parts of the code underlying FCP are getting kinda crufty at this point, knowing what I do about FCP and QT architecture and what I've overheard over the years – doing some slick new code outside the sandbox of FCP was how these guys circumvented some potential limitations within FCP.
-mike