Standard version: 1.5
With modern VFX systems, arbitrary extra metadata is available on the clips passed to an effect, such as the camera a shot was taken with, colour space, and much more. Some of this data is very useful to effects, and an effect may need to transform the metadata presented on its output. Currently OFX does not support metadata, and it should.
(Jan 22, 2017, Pierre Jasmin): Adding some notes.
We have been struggling with this one for a while, after a lot of thinking (including about successful implementations in the past). Metadata is often available, but the path to effects is often missing, and practical use cases have until recently met only vague success. For example, while researching this I saw that Cooke Optics made a big announcement a few years ago about collaborating with PFTrack on capturing all that metadata goodness (e.g. zoom values over time). The first thing people did was grab the gyro data, try to use it in tracking, and realise it did not work better than tracking without it. Of course, perhaps the point is to have access to that data when you don't need to collect stabilisation data over time. Having a concrete target might actually help the developers using this API to lead such industry adoption in a data-rich context.
After looking at a bunch of metadata capture devices, and given that we are effects, not handling the image clip I/O side per se, there is no discussion here about using a blob like JSON/XML for that, a 'track' in an MP4, or even an EXIF block per frame. On that topic, I note a big mistake I think RED made when they introduced storing the six gyro values in the R3D file: they save only one value per frame, so you have to shoot at a higher frame rate to get more gyro samples. Since a lot of people shoot 24p, even if the gyro could spit out 150 Hz or more, one would need to shoot 150 FPS to get the full readout.
That said, the metadata can probably be decomposed into two basic categories. The first is related to colorimetry (is there a better word?), meaning even ISO and the like; for example, if you have auto white balance, a 3x3 colour matrix could be useful for time-lapse deflickering. There is another discussion about that, but in our thinking about metadata the "Color" case was CDL. The other type of metadata is transformational (photogrammetric; is there a better word?). The basic structure here is probably a 3x3 matrix, though by extension it could support simple scalars (e.g. a depth gauge reading for underwater video, or a temperature readout when parachuting with a POV camera on your head; these might only have applications for motion-graphics overlays, but this is still something we should support).
So we have two types of values (Color or Not). We have a number of dimensions (1, 2 or 3) associated with a stream sharing a name (e.g. accelerometer XYZ). And we have data that is static or dynamic. The dynamic data has two potential forms: a regular sampling rate (e.g. 400 Hz), completely orthogonal to FPS, or a less regular stream, which requires each value to be associated with its sample time.
Thinking about this, we already have parameters that support keyframes (with a way to get the next keyframe and the number of keyframes). So we already have somewhere to drop such streams (parameters), with again the UI issue that some hosts might not handle 400 keyframes per second on a 30 FPS timeline elegantly. Note that this data is rarely edited directly by end users; the idea is that we already have a format which includes a storage mechanism, a data-type association and labelling.
This raises a new issue: right now we have adopted frame time as our sampling basis. So does this mean the metadata has to be passed to effects in frame time? I think so.
The other API simplification we could make is to describe a suite as a set of parameters; then it sort of becomes just an include. Still missing then is the discussion of the Transform suite, i.e. an image callback in the suite itself. To generalise, transpose this to our CDL example: CDL has, let's say, 10 defined parameters with specific names, and we have image inputs and image outputs. Where are these values displayed to the user? For output it makes sense for them to show up in the effect's controls UI. What if there is more than one colour suite an effect can support? Does that mean this becomes an option menu populated by the host? If so, a good model for other applications (e.g. extending the roto parameter into a suite) is a bit like what Combustion allowed: declaring, after the parameter description, support for this and that (for those familiar with AE, like the Completely General callback), so that, for example, if I have a gamma parameter it now becomes the Power value of that metadata, and I can disable or hide it. The parameters would show up only if an effect supported that colour sub-suite and the user selected it from the colour-suite option menu. I am not going to have an opinion about image inputs and such a suite; is this similar to asking for custom instead of linear (or simply gamma 1.0, if gamma 1.0 vs gamma 2.2 is precise enough for many effects)? That is, for input, is it enough to essentially have a metadata parameter that encapsulates getImage?
Metadata is basically a collection of key/value pairs of known data types. This is close to how OFX properties currently behave. I suggest we do the following:
- rev the PropertySuite up to 2.0 so that:
  - we can enquire about the contained properties and iterate over them; the information includes name, index, type and read/write status
  - properties can be added/deleted dynamically
  - there is a new property type which is another property set (so property sets can be contained in property sets)
- present a new property, kOfxPropMetaData, on all input clips and images; this is a separate property set
- add an action, kOfxImageEffectActionGetMetaData, in which a plugin gets to populate the metadata on its output
We probably need well-named metadata keys for a variety of things, e.g. CDL.