Standard version: 1.5
Phil Barrett (Filmlight), Peter Litwinowicz (RE:Vision Effects), Dennis Adams (Sony)
Below are several different topics to consider. They do not all have to end up in one suite, but they should probably be discussed at the same time.
- CPU-based effect registering GPU resource usage
A host that renders multiple frames at once and switches GPU per frame could report the GPU id, so that we do not load the same GPU twice; this makes things easier on the host in terms of memory resources used. Similarly, a host using two GPUs, one for UI and display and the other for rendering, could assign the correct GPU for processing/rendering, and a host that needs to fall back to writing back to RAM to support an extension could also assign a different GPU than it would for the OpenCL and CUDA suites. However, this might only be possible by passing an OpenGL context from which one can map to an OpenCL or CUDA device index.
Question: do we need a ThreadSafety setting beyond the three we have, to allow a different value for all-on-CPU versus GPU effects receiving RAM images? Currently ThreadSafety might be a descriptor-only setup.
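As an illustration of the per-frame GPU id idea, the C sketch below mocks an effect reading a device index from its render arguments. The property name `OfxImageEffectPropGPURenderDeviceIndex`, the `MockProp` type, and `selectRenderDevice` are all hypothetical, assumed here for illustration; a real plugin would go through the host's `OfxPropertySuiteV1` instead.

```c
#include <string.h>

/* Minimal stand-in for an OFX property set: a flat list of (name, int)
 * pairs. In a real plugin these values would be read through the host's
 * OfxPropertySuiteV1; this mock only illustrates the lookup. */
typedef struct { const char *name; int value; } MockProp;

/* HYPOTHETICAL property name for the per-frame GPU id discussed above;
 * it is not part of OFX 1.5. */
#define kPropGPURenderDeviceIndex "OfxImageEffectPropGPURenderDeviceIndex"

static int mockPropGetInt(const MockProp *props, int n,
                          const char *name, int defaultValue)
{
    for (int i = 0; i < n; ++i)
        if (strcmp(props[i].name, name) == 0)
            return props[i].value;
    return defaultValue; /* host did not advertise a device */
}

/* In its render action the effect would pick the device the host is
 * rendering on, so both sides allocate on the same GPU. */
int selectRenderDevice(const MockProp *renderArgs, int nArgs)
{
    return mockPropGetInt(renderArgs, nArgs, kPropGPURenderDeviceIndex, 0);
}
```

The fallback default of device 0 is only one possible convention; a host that never sets the property would effectively pin all effects to the first GPU.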
- Fallback advertising (handshake)
Useful properties (there might be others):
Boolean (0,1) ofxGpuUtilEffectProducesSameResultCPUorGPU (written by the plugin at description time): if the effect says yes, the host can fall back to CPU without expected issues. If the effect says no, the host cannot fall back to CPU, at least not without perhaps flushing the whole sequence cache. "Same result" is a choice made by the effect.
Boolean ofxGpuUtilTracksMemoryUsage (read/write, host and plugin, at description time): if the host says yes, the effect can return an out-of-memory error and expect the host to retry. A protocol has to be defined to stop that recursion in case the host is really out of memory; this could be an Instance Changed property reason, for example. Otherwise, in interactive mode the effect might return a solid color to hint to the user that there is not enough GPU memory, and during render return a failed status to interrupt the render.
Boolean ofxGpuUtilSupportsAllGPUNotTheSameBrandModel (read/write, host and effects): defaults to false, which is probably the most common case.
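To make the handshake concrete, here is a minimal C sketch of an effect's describe action setting the three proposed booleans. The property names are the ones from this note; `MockDescriptor` and the `mockPropSetInt`/`mockPropGetInt` helpers are stand-ins for the real `OfxPropertySuiteV1` calls on the effect descriptor.

```c
#include <string.h>

/* Stand-in for an effect descriptor's property set. A real plugin would
 * call OfxPropertySuiteV1::propSetInt on the descriptor instead. */
typedef struct { const char *name; int value; } DescProp;
typedef struct { DescProp props[8]; int count; } MockDescriptor;

static void mockPropSetInt(MockDescriptor *d, const char *name, int value)
{
    for (int i = 0; i < d->count; ++i)
        if (strcmp(d->props[i].name, name) == 0) {
            d->props[i].value = value;
            return;
        }
    d->props[d->count].name = name;
    d->props[d->count].value = value;
    d->count++;
}

int mockPropGetInt(const MockDescriptor *d, const char *name, int def)
{
    for (int i = 0; i < d->count; ++i)
        if (strcmp(d->props[i].name, name) == 0)
            return d->props[i].value;
    return def;
}

/* Describe action: advertise fallback behaviour to the host. */
void describeFallbackHandshake(MockDescriptor *d)
{
    /* 1: host may fall back to CPU without flushing its sequence cache. */
    mockPropSetInt(d, "ofxGpuUtilEffectProducesSameResultCPUorGPU", 1);
    /* 1: effect reports out-of-VRAM errors and expects the host to retry. */
    mockPropSetInt(d, "ofxGpuUtilTracksMemoryUsage", 1);
    /* 0: mixed GPU brands/models not supported (the proposed default). */
    mockPropSetInt(d, "ofxGpuUtilSupportsAllGPUNotTheSameBrandModel", 0);
}
```

Because all three values are visible at description time, the host can plan its caching and GPU assignment before any instance is created.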
- GPU settings
Hosts might have their own GPU settings global to the application, and an effect might have a menu letting users decide whether to use the GPU on a particular instance (this is also a poor man's way to manage GPU memory from an end-user perspective). Effects could also support slaving to host parameters for GPU use. This extends even to a --gpu switch on a background processing render. For the CUDA and OpenCL suites we are assuming the device index can be consulted to know which GPU to run on.
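The interaction between a host-global GPU setting and a per-instance menu can be sketched as a small decision function. The three-way `GPUChoice` menu and the function name are assumptions for illustration, not part of any suite.

```c
/* Hypothetical three-way instance menu: follow the host's global GPU
 * setting, or force GPU on/off for this one instance. */
typedef enum { kGPUFollowHost = 0, kGPUForceOn, kGPUForceOff } GPUChoice;

/* Decide whether this instance should render on the GPU.
 * hostAllowsGPU would come from the host's application-level setting or
 * from a --gpu switch on a background render. */
int instanceUsesGPU(GPUChoice choice, int hostAllowsGPU)
{
    switch (choice) {
    case kGPUForceOn:  return 1;
    case kGPUForceOff: return 0;                 /* user opted out */
    default:           return hostAllowsGPU;     /* slave to host */
    }
}
```

Defaulting new instances to `kGPUFollowHost` would make the host's global switch authoritative unless the user explicitly overrides it.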
- Memory management hints
So far it has been suggested (Phil from Baselight) that before any VRAM is allocated in the render action, an effect provides a hint in bits (or some other unit) to allow the host to clear memory ahead of time. This could take the form of two parameters, (tryAmount, retryIncrement), set before the render action, since it might be hard for an effect to be precise enough. The second argument would spare the plugin from having to do book-keeping of where we are right now, and assumes the host can retain what worked for subsequent frames in the image sequence. Since a host can be running CUDA while an effect runs OpenCL, this has to live in a separate suite; unlike RAM, we cannot simply have indirection to the host via the Memory suite to do allocation book-keeping. Currently the OpenGL suite has a simple flush-everything method, which is maybe OK on a tiny-VRAM card, but the trend is toward more and more GB (e.g. Quadro has a 24 GB model; one would probably not want to flush everything).
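The (tryAmount, retryIncrement) protocol above can be sketched as a host-side retry loop. Everything below is a simulation in plain C under assumed names; the byte unit, the `MockGPU` model, and `negotiateVRAM` are illustrative, and the `actuallyNeeded` parameter merely simulates the effect's true requirement so the loop can be exercised.

```c
/* Simulated VRAM state: what is free now, plus caches the host could
 * evict on request. */
typedef struct {
    unsigned long long freeBytes;    /* VRAM currently free */
    unsigned long long reclaimable;  /* caches the host could evict */
} MockGPU;

/* Evict caches until `needed` bytes are free; returns 1 on success. */
static int hostEnsureFree(MockGPU *gpu, unsigned long long needed)
{
    if (gpu->freeBytes >= needed)
        return 1;
    unsigned long long deficit = needed - gpu->freeBytes;
    if (deficit > gpu->reclaimable)
        return 0; /* host is truly out of memory: stop the recursion */
    gpu->reclaimable -= deficit;
    gpu->freeBytes += deficit;
    return 1;
}

/* Retry loop run by the host around the render action. Starts from the
 * effect's hint and grows by the increment; maxRetries bounds the
 * recursion the note asks a protocol to stop. Returns the amount that
 * worked (which the host can retain for subsequent frames in the
 * sequence), or 0 on genuine out-of-memory. */
unsigned long long negotiateVRAM(MockGPU *gpu,
                                 unsigned long long tryBytes,
                                 unsigned long long incrementBytes,
                                 int maxRetries,
                                 unsigned long long actuallyNeeded)
{
    for (int attempt = 0; attempt <= maxRetries; ++attempt) {
        unsigned long long want =
            tryBytes + (unsigned long long)attempt * incrementBytes;
        if (!hostEnsureFree(gpu, want))
            return 0;
        if (want >= actuallyNeeded)
            return want; /* render succeeded at this budget */
    }
    return 0;
}
```

The key design point the sketch captures is that the plugin only supplies the two hint values; the host owns the retry loop, the eviction decision, and the memory of what worked last frame.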
It has been suggested by Paul Miller (DFT) and Dennis Adams (Sony) that we could provide a way for an effect to ask the host to retain some GPU memory. A proposal for this needs to be presented. This is likely most important for effects where a user can spend a lot of time on a single frame (e.g. paint, roto, warping). This is a subset of the general caching discussion as it applies to the GPU.
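One possible shape for such a retain request is a small suite of function pointers the host implements, letting an effect pin per-instance VRAM (paint strokes, roto shapes) across renders. Everything below, including the suite and function names, is a hypothetical sketch to seed discussion, not an agreed proposal; a mock host implementation with a fixed budget is included so the refusal path can be exercised.

```c
#include <stdlib.h>

/* Opaque handle to a block of VRAM the host has agreed to keep alive. */
typedef struct { unsigned long long bytes; } RetainedBlock;
typedef RetainedBlock *OfxGpuMemHandle;

/* HYPOTHETICAL suite: the host may refuse a retain by returning NULL,
 * in which case the effect must re-upload its data every render. */
typedef struct OfxGpuUtilRetainSuiteV1 {
    OfxGpuMemHandle (*retainMemory)(unsigned long long nBytes);
    void (*releaseMemory)(OfxGpuMemHandle handle);
} OfxGpuUtilRetainSuiteV1;

/* Mock host with a fixed pinnable budget (pretend 1 GB). */
static unsigned long long g_budget = 1ULL << 30;

static OfxGpuMemHandle mockRetain(unsigned long long nBytes)
{
    if (nBytes > g_budget)
        return NULL; /* host refuses: budget exhausted */
    RetainedBlock *b = malloc(sizeof *b);
    if (!b)
        return NULL;
    b->bytes = nBytes;
    g_budget -= nBytes;
    return b;
}

static void mockRelease(OfxGpuMemHandle handle)
{
    if (handle) {
        g_budget += handle->bytes;
        free(handle);
    }
}

const OfxGpuUtilRetainSuiteV1 mockRetainSuite = { mockRetain, mockRelease };
```

Keeping the refusal path explicit matters: it ties this back to the fallback handshake above, since an effect whose retain is refused needs a well-defined degraded mode rather than a failure.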