Distortion/Transform Effects (Proposed)

Standard version: 2.0

Major Change

Some effects are continuous distortions of an image (e.g. a lens correction tool); the simplest case is an effect that performs a simple matrix-based transform (e.g. a stabilisation tool).

A user often applies a series of such effects in a row. If each effect processes the image independently, the result is significantly degraded, as each effect filters (resamples) the image in turn.

If an effect could report its transformation to the host, the host could concatenate multiple such distortions and filter the image being transformed only once. This would also be a speed optimisation, as only a single pass over the image would be performed.

We propose two types of effect: a distortion effect and a transform effect. Plug-ins would advertise (probably via a context?) that they can behave as a distortion or a transform.

Rather than a render action being called on such effects, an action is called that returns either a function pointer (for distortion effects) or a homogeneous matrix (for transform effects). The host application then uses these to perform rendering as it sees fit.

The function would, in essence, take a point in output pixel co-ordinates and back-transform it to input pixel co-ordinates.
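Such an inverse-mapping callback might look like the following C sketch. Everything here is illustrative: the `OfxDistortionFunc` typedef, the `RadialParams` struct, and the barrel-distortion model are hypothetical and not part of any OpenFX header.

```c
#include <math.h>

/* Hypothetical signature: given a point in OUTPUT pixel space,
   write the corresponding point in INPUT pixel space. */
typedef void (*OfxDistortionFunc)(const void *instanceData,
                                  double xOut, double yOut,
                                  double *xIn, double *yIn);

/* Example back-transform for a simple radial (barrel) distortion
   centred on (cx, cy) with strength k. */
typedef struct { double cx, cy, k; } RadialParams;

static void radialInverse(const void *instanceData,
                          double xOut, double yOut,
                          double *xIn, double *yIn)
{
    const RadialParams *p = (const RadialParams *)instanceData;
    double dx = xOut - p->cx, dy = yOut - p->cy;
    double r2 = dx * dx + dy * dy;
    double scale = 1.0 + p->k * r2;   /* output -> input scaling */
    *xIn = p->cx + dx * scale;
    *yIn = p->cy + dy * scale;
}
```

The host would sample the input image at the returned co-ordinates (with its own filtering), so the plug-in never touches pixels at all.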

The matrix would represent a transform of a vector (x, y, 0, 1) from output space to input space.
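Concretely, applying such a 4x4 homogeneous matrix to map an output pixel to an input pixel could look like this minimal sketch (row-major layout is an assumption here, as is the `applyTransform` name; neither is specified by the proposal):

```c
/* Map an output-space pixel (xOut, yOut) to input space by multiplying
   the 4x4 homogeneous matrix M (row-major) with the vector (x, y, 0, 1). */
static void applyTransform(const double M[16],
                           double xOut, double yOut,
                           double *xIn, double *yIn)
{
    /* z component of the vector is 0, so column 2 drops out */
    double x = M[0]  * xOut + M[1]  * yOut + M[3];
    double y = M[4]  * xOut + M[5]  * yOut + M[7];
    double w = M[12] * xOut + M[13] * yOut + M[15];
    *xIn = x / w;   /* perspective divide */
    *yIn = y / w;
}
```

Because the matrix maps output to input, hosts can concatenate a chain of such effects by simple matrix multiplication before sampling the source image once.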


Discussion

Comments

Frederic,

This link is interesting (although in the domain of scripting):

http://helga-docs.readthedocs.io/_modules/helga/nuke/reconstruction/sceneReconstructVRay/lib/reconstruct_camera_from_vray_exr.html

 

Pierre Jasmin | 1:46 pm, 29 Jan 2017

Fred,

Is this something many hosts could do and is it desirable?

1) The transform matrix:

to complement this - it would be cool if we could somehow access inertial sensor motion data, even if not many normal cameras spit that out yet (aside from phone cameras, the latest GoPro, Garmin, and other action cameras, and some video dash cams do spit out relevant data).

https://developer.android.com/guide/topics/sensors/sensors_motion.html

https://developer.apple.com/library/content/documentation/EventHandling/Conceptual/EventHandlingiPhoneOS/motion_event_basics/motion_event_basics.html

Already, difficulty one: is it a motion matrix or a normal spatial transform matrix? Is the source fisheye, rectilinear or equirectangular? Is it a 2D matrix, a 3D matrix, a projection matrix (3x3, 4x4, ...)?

2) In general I like the suite format for revision purposes (so #defines don't need to be versioned themselves).

3) Image Pixel Transform - shouldn't that reside in the multi-channel image model? Personally, I am worried about too many image callbacks buried in different places.

 

Pierre Jasmin | 12:13 am, 4 Jan 2017

We proposed a specification for this in the following commit in our fork of the OpenFX repository:

https://github.com/devernay/openfx/commit/eceec67c21a4ced1b7021217af7102fedc8efaf8

This is going to be implemented very soon in Natron.

Alexandre | 5:04 am, 16 Dec 2016

I like the idea of a generic distortion function. This could simply be a property on the OfxImage (because it may be time-varying), containing either:

  • a transformation matrix

or:

  • a pointer to a distortion function
  • + a pointer to distortion data, passed as the first argument to the distortion function (this is opaque: the underlying data may be simple function parameters, or even an STMap if the transform is very complicated and requires a long computation, but it doesn't have to be any kind of standard OFX type, since it is handled by the plug-in that created it)
  • + a pointer to a function to free the data (so that proper destructors can be called)

Distortions can easily be concatenated by the host: it can build a composite function that calls the successive transforms, and pass that through the same three pointers. If there are matrix transforms in the chain, the host can concatenate them too.
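The three-pointer scheme described above (function, opaque data, destructor) could be composed by a host roughly as in this C sketch. All names (`DistortFunc`, `Composite`, etc.) are illustrative assumptions, not part of any OpenFX specification or of the linked commit.

```c
#include <stdlib.h>

/* Hypothetical distortion callback: output point -> input point. */
typedef void (*DistortFunc)(const void *data,
                            double xOut, double yOut,
                            double *xIn, double *yIn);
typedef void (*FreeFunc)(void *data);

/* Host-side composite of two distortions: apply A first
   (output -> intermediate), then B (intermediate -> input). */
typedef struct {
    DistortFunc funcA, funcB;
    void *dataA, *dataB;
    FreeFunc freeA, freeB;
} Composite;

static void compositeDistort(const void *data,
                             double xOut, double yOut,
                             double *xIn, double *yIn)
{
    const Composite *c = (const Composite *)data;
    double xm, ym;
    c->funcA(c->dataA, xOut, yOut, &xm, &ym);
    c->funcB(c->dataB, xm, ym, xIn, yIn);
}

/* Destructor for a heap-allocated Composite: calls the plug-ins'
   free functions, then releases the composite itself. */
static void compositeFree(void *data)
{
    Composite *c = (Composite *)data;
    if (c->freeA) c->freeA(c->dataA);
    if (c->freeB) c->freeB(c->dataB);
    free(c);
}

/* Example plug-in distortion: a fixed (dx, dy) offset. */
static void offsetDistort(const void *data, double xOut, double yOut,
                          double *xIn, double *yIn)
{
    const double *o = (const double *)data;
    *xIn = xOut + o[0];
    *yIn = yOut + o[1];
}
```

The composite itself has the same three-pointer shape as a single distortion, so chains of any length can be folded up pairwise.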

Frédéric Devernay | 3:34 am, 15 Dec 2016
  • Pierre Jasmin : in some applications this sort of thing happens in expression space, if you like. (Shouldn't this be matrix parameter(s)?) I forget - is there a way for an effect to turn off render internally so it's bypassed?
  • Peter Litwinowicz : What about a warping effect that is not based on a matrix or a function (like a lens distortion function) - how do you report the transformation? Do you report a 2D image of displacement vectors?
    • Bruno Nicoletti : to Peter: if you can't efficiently return a vector from a function (e.g. something based around motion vectors), then returning a displacement image is the way to go, but that needs some sort of deep image format. That is probably a different discussion from this one, as you would still need some sort of render pass to calculate the vectors. This proposal is for the simpler case.
  • Phil Barret : A matrix would be handled correctly and efficiently in Baselight. Need to discuss how we signal to the effect that the input images have already been transformed, and clarify what spaces the ROIs are then in.
    • Bruno Nicoletti : I'm not sure what you mean by "already been transformed"? The whole point is that we avoid the render call and the effect doesn't get to see the image at all, so why would it need to know about a transformed image? RoIs are in the co-ordinate system of the effect; however, you wouldn't need to call getRoI on the effect if you have the matrix, as you can calculate it directly.
unattributed | 9:49 am, 8 Mar 2014
Copyright ©2023 The Open Effects Association