Back to standard change list


Deep Images / Multi-Plane Proposed

Standard version: 1.5

Major Change

(September 23, 2016): This is to be merged with Multi-View.

A recent trend is to embed multiple image types in the same image stream, for example depth information as well as colour data. The current version 1.4 OFX Image Effect API only presents a single colour image buffer from any clip, either input or output. This prevents hosts with rich image streams from passing such data to plug-ins that could take advantage of it.

The Foundry has implemented an unpublished extension to the OFX API in its Nuke compositing application that allows support for deep image buffers. They are proposing cleaned up versions of these extensions to the OFX 1.5 Committee for adoption in the Image Effect API.

To do this we need to add a new concept to OFX: a plane. A plane represents a single typed image data buffer, for example colour, depth, or motion vectors.

The basic idea behind this extension is that it is orthogonal to time: when an effect fetches image data from the host, it specifies both the frame time and the image plane to fetch, e.g. the colour plane of the source clip at frame 10.

The extensions will need supporting machinery so that plug-ins can advertise which planes/views they process and which they need in order to render an output, and so on.

It is proposed that the OFX API be updated to include deep image buffers for version 1.5.0.

 

 

A plane is a single image buffer of a specific type. The API defines the available plane types by #defining a set of string literals, for example:

  • kOfxImagePlaneColour
  • kOfxImagePlaneDepth
  • kOfxImagePlaneBackwardMotionVector

The data types of the corresponding planes are sometimes strongly defined: depth, for example, would always be a one-component floating point image, while colour images could still vary in data type and have 3 or 4 components.

The full set of image planes has yet to be determined, but the initial types should include:

  • colour,
  • depth,
  • forward motion vectors,
  • backwards motion vectors,
  • mask.

Note that some planes come in pairs; for example, a forward motion vector plane cannot exist without a backward motion vector plane.

Supported Planes On A Host

Not all hosts will support all image plane types. To manage this, the multidimensional string property kOfxImageEffectPropPlanesAvailable is set on host descriptors.

This property holds the set of planes that a host may present on a clip. All hosts must advertise kOfxImagePlaneColour; the other planes are optional.

A Plugin's Rendered Planes

A plugin needs to indicate to the host what planes it will process on output; to do this it, too, uses the kOfxImageEffectPropPlanesAvailable property on its descriptor.

This is the set of planes a plugin instance may be able to render. The actual planes rendered by an instance are indicated by the new Get Clip Planes action (see below).

If a host does not support the planes a plugin renders, the host should ignore that plugin.

A plugin may break the coherence between planes available on its input clips, typically because it performs a spatial or temporal transformation on the subset of planes being processed. In such a case, it should indicate to the host that the non-processed planes should not be passed through.

To do this a new plugin descriptor property is required: kOfxImageEffectPropPassThroughPlanes. This is a one-dimensional integer property. If set to zero, non-processed planes should not be passed through to the output by the host; if set to non-zero, the host should pass un-rendered planes through to the node's output.

Advertising Available Planes On A Clip Instance

A host advertises the planes available to be fetched from a specific clip instance by the property kOfxImageEffectPropPlanesAvailable on the clip's corresponding property handle.

Fetching Specific Planes From a Clip

When fetching an image from a clip instance, a plugin will need to specify the plane as well as the frame. To do this, the signature of the clipGetImagePlane function in the OfxImageEffectSuite needs to be modified in the next version of that suite so that it looks like:

               OfxStatus (*clipGetImagePlane)(OfxImageClipHandle clip,
                                              OfxTime time,
                                              const char *plane,
                                              OfxRectD *region,
                                              OfxPropertySetHandle *imageHandle);

The new plane argument to the function is one of the predefined string literals that identify a plane.

The Get Clip Planes Action

A plugin's descriptor indicates the planes it may render on output via the kOfxImageEffectPropPlanesAvailable property. Depending on instance parameter values and available input planes, an instance needs to indicate what planes it will actually render prior to rendering.

A plug-in instance will also need to tell the host what planes it will fetch from each input clip to produce its output.

Finally, if the plugin has enabled pass-through of non-rendered planes, it needs to indicate at what frame time, and from which clip, the host should get those planes.

To do this a new action is required: kOfxImageEffectActionGetClipPlanes. This action has a single 'inarg': the time for which a render will subsequently be called.

It has the following 'outargs':

  • for each clip (input and output), a property whose name starts with “OfxNeededPlane_” suffixed with the clip's name (e.g. “OfxNeededPlane_Output”), representing, for an input, the planes needed by the effect and, for an output, the planes produced by the effect. These are all char * X N properties, and each value must be one of the planes supported by the host.
  • kOfxImageEffectPropPassThroughClip - the clip to use as a pass-through for all non-rendered planes
  • kOfxImageEffectPropPassThroughTime - the time on that clip to pass through.

Modification Needed For the Render Action

When a host calls the render action on a plug-in instance, it needs to indicate which set of planes it wants the plugin to fill in. This will be a subset of the planes that the plugin has specified in the Get Clip Planes action. To do this, the render action's 'inArgs' needs an extra property: the kOfxImageEffectPropPlanesAvailable property specified above.



Discussion

Comments

Complementary notes

1. There is a proposal to review and discuss here (that can be continued):
see above


2. We discussed support for outputs written by the effect to this (as a mechanism to support multiple outputs).


3. A host can support stereo without supporting additional image channels (the initial reference was the Sony suite). That suite used an int iview; it was agreed that we could use strings so it matches the Nuke multi-plane suite.


4. I think we need a naming convention but one that is not as flat as a single string. When in doubt I think we can revert to parameter convention for types, number of dimensions... (more below).


A. We do need to agree how we call this: What is a view - stereo is a good example of obviously a view; by extension each view could have multiple additional components. Does this need to address multi-cam grouping? Is a cubic map (e.g. 6 cameras 90 degrees apart from each other) a multi-view item? In which case Dennis was saying we need to have angle/rotation etc. as part of the view definition.


B. Do we need a simple convention for naming, to absorb effect-generated output image channels - so the next effect can find them (the example by Paul Miller was to carry the residual noise produced by an effect into another effect). Do we need a startup registration for that so it can appear in the host UI?


C. We do need to agree about how we call the individual image channels: Are they planes, components,... And are all these channels the same width and height?


D. Is the data colorimetric or linear math? (subjected to colorspace management or expected linear). We have a similar issue with meta-data I think, separate discussion.


E. For symmetry across the API, let's take surface normals, 3 rotations. We have a double parameter of type Angle. Presumably a 3D parameter of that type is like a 3D rotation. Now the definition for Angle parameter says the Angle is in Radians as opposed to in degrees or normalized -PI to PI to 0 to 1 or -1 to 1... It's not totally clear. At least we have a way to specify parameters that are position, scale and rotate. Normals is obviously 3D rotate. I don't really care if things are -1 to 1 or 0 to 1, however as you know there is no such thing as -1 to 1 outside of 32b float images (sub-question to we need to support 16 bit image channels?), however I would like to know. Add to this that from a 3D system, the 3D rot might be tangential (to camera plane, therefore hemispherical, e.g. for bump mapping) or a full 3D spherical space normal (often implies you have a 3D camera somewhere to use). It took me a while to get there and thinking about meta-data led me to thinking like this (initially I objected to too hard coded naming).


[OPT_EXT_WHO]_[NORMALS/ROTATE_string]_[XYZ]_[0,1]_[Optional: Tangent]_[Basis]
This long string is not for end-user consumption. For example Fusion use the label X Normal, Y Normal, Z Normal, I don't expect they would change how to label that. Only that from API stand-point we could define this as 3D rotate. This then provides a simple way for effect to filter candidate image channels. So in that example [OPT_EXT_WHO] is just Normal maybe and since it's the host provided image channels, there is no need to prefix something else for them.


From there taking Paul Miller residual noise image.

[OPT_EXT_WHO]: SDFX (some owner), Residual Noise (some name)
[TYPE]: 32b float RGB (could also be single channel), Scalar
[XYZ]: how components are labeled - is it always RGB or XYZ (if RGB subject to colorspace). E.g. UVW.
[0-1]: Is the data additive noise or difference to smoothed image (thus -1 to 1 maybe)
[Optional]: Additional Info - not sure how this translates to UI, could even have version number or whatever one wants to add as tidbit
[Basis]: This is not clear with this particular example, some data is relative to something else (e.g. forward (+1) and inverse (-1) - next time sample or previous time sample, or even to other frame in case of stereo)

 

Pierre Jasmin | 7:59 pm, 22 Jan 2017

While implementing this for Natron, here are the notes I gathered as to how it is working in Nuke, and what we needed to modify to correctly make it work with the existing kOfxImageEffectActionGetClipPreferences action:

 

/*

  ----------------------------------------------- -----------------------------------------------------------------

 ////////////////////////////////////////////////////////////////////////////////////////////////////////////////

 Natron multi-plane definitions and extensions brought to Nuke multi-plane extensions defined in fnOfxExtensions.h:

 ////////////////////////////////////////////////////////////////////////////////////////////////////////////////

 ----------------------------------------------- -----------------------------------------------------------------

 

 Definitions:

 ------------

 

 - Layer: Corresponds to 1 image plane and has a unique name

 

 - Components: The "type" of data (i.e: the number of channels) contained by a Layer. This may be equal to any of the default

 components defined by OpenFX (e.g: kOfxImageComponentRGBA) but also to the one added by fnOfxExtensions.h (namely 

 kFnOfxImagePlaneForwardMotionVector or kFnOfxImagePlaneBackwardMotionVector) and finally to the custom planes extension

 defined by Natron.

 

 ----------------------------------------------- -----------------------------------------------------------------

 

The Foundry multi-plane suite:

------------------------------

 

 - In The Foundry specification, some layers are paired and can be requested at the same time:

 this is the (kFnOfxImagePlaneBackwardMotionVector, kFnOfxImagePlaneForwardMotionVector) and (kFnOfxImagePlaneStereoDisparityLeft, kFnOfxImagePlaneStereoDisparityRight) layers. A pair means both layers have the same components type and are generally rendered together.

 These layers are the only ones added by this extension.

 

 - The color layer (kFnOfxImagePlaneColour) can have the default OpenFX types.

 

 - The only plug-ins known to support The Foundry multi-plane suite are the Furnace ones (among potentially others), and the only known hosts to support it are Natron and Nuke.

 

 - In The Foundry multi-plane suite, the plug-in specifies that it wants to operate on a motion vector plane or disparity plane by setting kFnOfxImageComponentMotionVectors or kFnOfxImageComponentStereoDisparity on the clip components during the getClipPreferences action. Other planes are not supported.

 

 - The getClipComponents action is unused by Nuke.

 

 - If the clip components are set to kFnOfxImageComponentMotionVectors or kFnOfxImageComponentStereoDisparity it is expected that the following render actions are called on both paired planes (the plug-in will attempt to fetch both from the render action).

 

Natron modifications:

---------------------

 

 - Some file formats (OpenEXR, TIFF) allow multiple arbitrary image layers (= planes) to be embedded in a single file.

 In the same way, a host application may want/need to use multiple arbitrary image layers into the same image stream coming from a clip.

 The multi-plane extension defined in fnOfxExtensions.h by The Foundry has nothing set in this regard and we had to come up with one.

 

 A custom Layer (or plane) is defined as follows:

 

 - A unique name, e.g: "RenderLayer"

 - A set of 1 to 4 channels represented by strings, e.g: ["R","G","B","A"]

 

 Typically it would be presented like this to the user in a choice parameter:

 

 RenderLayer.RGBA

 

 Internally instead of passing this string and parsing it, we encode the layer as such:

 

 kNatronOfxImageComponentsPlane + layerName + kNatronOfxImageComponentsPlaneChannel + channel1Name + kNatronOfxImageComponentsPlaneChannel + channel2Name + kNatronOfxImageComponentsPlaneChannel + channel3Name

 

 e.g: kNatronOfxImageComponentsPlane + "Position" + kNatronOfxImageComponentsPlaneChannel + "X" + kNatronOfxImageComponentsPlaneChannel + "Y" + kNatronOfxImageComponentsPlaneChannel + "Z"

 

 

 Natron custom layers can be passed wherever layers are used (clipGetImage,render action) or components are used: getClipComponents. They may not be used in getClipPreferences.

 

 - Multi-plane effects (kFnOfxImageEffectPropMultiPlanar=1) request their layers via the getClipComponents action

 

 - getClipPreferences has not changed and may only be used to specify the type of components onto which the color layer (kFnOfxImagePlaneColour) will be mapped

 

 - Multi-plane effects (kFnOfxImageEffectPropMultiPlanar=1) are expected to support arbitrary component types and should not rely on the components set during getClipPreferences except for the kFnOfxImagePlaneColour layer.

 

 - OpenFX didn't allow a 2-channel image to be passed natively through a clip, even for plug-ins that are not multi-plane.

 The color layer (kFnOfxImagePlaneColour) can now have the default OpenFX types of components as well as kNatronOfxImageComponentXY to specify 2-channel images.

 kNatronOfxImageComponentXY may be specified wherever default OpenFX components types are used. 

 

 

 

 

Notes:

------

 

 - Layers are what is passed to the render action and clipGetImage function whereas components are what is used for getClipComponents and getClipPreferences

 

 - In the getClipComponents action, the plug-in passes OpenFX components.

 

 */

 

 

/** @brief string property to indicate the presence of custom components on a clip in or out.

  The string should be formed as such:

  kNatronOfxImageComponentsPlane + planeName + kNatronOfxImageComponentsPlaneChannel + channel1Name + kNatronOfxImageComponentsPlaneChannel + channel2Name + kNatronOfxImageComponentsPlaneChannel + channel3Name

 

  e.g: kNatronOfxImageComponentsPlane + "Position" + kNatronOfxImageComponentsPlaneChannel + "X" + kNatronOfxImageComponentsPlaneChannel + "Y" + kNatronOfxImageComponentsPlaneChannel + "Z"

 

  This indicates to the host in which plane the given components should appear and how many pixel channels it contains.

  It can be used at any place where kOfxImageComponentAlpha, kOfxImageComponentRGB, kOfxImageComponentRGBA, kOfxImageComponentNone (etc...) is

  given.

 */

#define kNatronOfxImageComponentsPlane  "NatronOfxImageComponentsPlane_"

#define kNatronOfxImageComponentsPlaneChannel   "_Channel_"

 

 

/** @brief String to label images with 2 components. If the plug-in supports this then the host can attempt to send to the plug-in

 planes with 2 components.*/

#define kNatronOfxImageComponentXY "NatronOfxImageComponentXY"

 

Alexandre | 4:18 am, 14 Apr 2016

(Pierre Jasmin):  As usual I have no difficulty with the gist of this until we get down to the details - e.g. "Note that some planes come in pairs, for example a forwards motion vector plane cannot exists without a backwards motion vector plane."  - eg. try to render bi-directional MV from a 3D renderer...

I am currently happy with the way the OFX example 4 (Saturation) works, when a single channel is requested it provides a submenu for sub-channels. This bypasses the base need for that (including naming things differently than host does elsewhere). What could be better (CBB) in Nuke is to only display what is actually filled.

Pierre Jasmin | 3:26 pm, 6 Jun 2015
Copyright ©2023 The Open Effects Association