C++ Plugin

Hi all

We are developing an application that does small-scale retiming of movies. Currently it is a stand-alone C++ & Qt application, but there is interest in RV integration. Our algorithm requires access to the whole range of input images and generates a potentially different number of output frames. For user input we require a custom widget and some tools directly on the video frame, similar to the annotation tools that already exist.

What are the best options for this? Our first idea was to write a filter node similar to a retime node, but there does not seem to be an API for user-defined nodes, or even for loading C++ plugins. I didn't find a way to access the image data from Python either. But if it is possible to replace a retime node with our own, or even to create a custom top-level node that sits between a source and the view group, that would be optimal.

A potential fallback would be to bypass RV's input pipeline and expose only the output of our program via the network. User interaction could be handled by a small Python plugin that sends all feedback back to the external program. This would not be very convenient, and probably not very performant either.
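To make the fallback concrete, the plugin and the external program would need some framing on the socket. The sketch below is NOT an RV API; it is only one hypothetical wire format (a 4-byte big-endian length prefix followed by a UTF-8 JSON payload), with all names made up for illustration:

```python
import struct

def encode_message(payload: bytes) -> bytes:
    """Frame a payload with a length prefix so the peer can split the stream."""
    return struct.pack(">I", len(payload)) + payload

def decode_message(buffer: bytes) -> tuple:
    """Split one framed message off the front of a buffer.

    Returns (payload, remaining_bytes). A real reader would loop until
    enough bytes have arrived; error handling is omitted here.
    """
    (length,) = struct.unpack(">I", buffer[:4])
    return buffer[4:4 + length], buffer[4 + length:]
```

The RV-side plugin would encode user feedback (brush strokes, parameter changes) this way, and the external program would answer with rendered frames or file paths in the same framing.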

5 comments

  • Jon Morley

    Hi,

We sell another program called RVX that allows users to create their own nodes. Its API is GLSL, and that is one way to implement features like transitions and retiming; we use that approach ourselves to create rich new features. If you are interested in trying RVX out, please create a new support ticket with your request and a member of our sales team will get back to you.

    Thanks,
    Jon

  • CPPDev

    Hi,

Thanks for the reply. Our algorithm needs pixel data from every single frame in the input frame range to generate a single output frame. The only GLSL node type that claims to support something like this is the evaluationType "combine" kind, and I have not found any documentation or examples of how those work. Realistically, it sounds very impractical to implement our code in GLSL, particularly because of our random-access requirements. Still, I would be happy to take a further look at this option if you can point me to more information, about the GLSL "combine" API in particular; the reference manual [1] is a bit sparse.

Alternatively, is there any way to access the pixel data of source images via Python, or better yet via C++? Most Python plugins seem to be very simple MinorMode-based tools with no access to the actual image data. The event-hooking capabilities would probably be sufficient; my main worries are image-data access and result-image output, neither of which seems to be covered by Modes.

If none of this is possible, does it sound plausible to communicate with our program via the network? It does seem possible to add new single-frame RVImageSources via the network and render our output as a series of such sources. But it also seems a bit hacky to use RV as a simple UI for an external program.
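In that scheme the external program would write each computed output frame to disk and the RV-side plugin would add each file as its own single-frame source. A small helper mapping output frame numbers to file paths might look like this (the "retimed" naming is hypothetical, not anything RV mandates):

```python
from pathlib import Path

def output_frame_path(base_dir: str, frame: int, ext: str = "exr") -> str:
    # One file per computed output frame; the "retimed" stem is made up.
    return str(Path(base_dir) / f"retimed.{frame:06d}.{ext}")

def output_sequence_paths(base_dir: str, start: int, end: int) -> list:
    # The external program writes these files; the RV-side plugin then
    # adds each one as a single-frame source.
    return [output_frame_path(base_dir, f) for f in range(start, end + 1)]
```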

    Thank you again for your feedback so far!

    [1] file:///Applications/RV64.app/Contents/Resources/English.lproj/rv_reference.html#Chapter_3_Writing_a_Custom_GLSL_Node_Fields_in_the_IPNodeDefinition

  • CPPDev

The user interaction is not the main problem; with either Python or direct C++/Qt code we should be able to replicate the user experience of our stand-alone program. Getting the input data out of RV, and returning the output data to RV, is more problematic. But I can try to answer your questions as well as possible without revealing too many internal secrets.

1) The input frame sequence is ideally the currently displayed sequence, possibly the "Default Sequence", "Default Stack", or "Default Layout": basically, all frames that would be displayed on playback. Which input frames are required for a particular output frame depends on the parameters and the user interaction.

2) The user interacts with the current frame using a brush. Each interaction changes the output and can potentially influence every single frame, similar to changing the parameters of a retime.

3) The stand-alone application has key frames and various brush-like tools that modify the key frames via the currently displayed frame. The set of key frames and some parameters are used to compute the output sequence from the input sequence. Since this is slow, we cache the result; we cannot provide the output directly for display.

4) The number of frames in the output sequence is independent of the input frame count. Each output frame can potentially depend on the data of every single input frame.
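Since the cached result in (3) becomes stale whenever the key frames or parameters change, one simple invalidation scheme is a content hash over everything the output depends on. The names here (`keyframes` as (frame, value) pairs, `params` as a dict) are hypothetical stand-ins for the real application state:

```python
import hashlib
import json

def cache_key(keyframes, params):
    """Content hash of everything the cached output sequence depends on.

    Because every output frame may depend on every input frame, any
    change to the key frames or parameters invalidates the whole cache.
    """
    blob = json.dumps({"keyframes": sorted(keyframes), "params": params},
                      sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()
```

Two states with the same key frames and parameters map to the same key regardless of insertion order, while any edit produces a new key.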

  • Jon Morley

    Hi,

Can you please walk me through your users' interaction in a specific example (use case)?

    1) What is in the input frame sequence?

    2) What does the user want?

3) How do they specify what they want to the application?

    4) Finally, what does the resulting frame sequence look like?

If I knew the answers to those questions, I could tell you whether RVX or some other part of RV is up to the task before falling back to the network interface.

    Thanks,
    Jon

  • Alan Trombla

    Hi,

I think RV is probably not what you're looking for.  RV is entirely GPU-based.  That is, ignoring caching issues for the moment: after the media is decoded it is pushed to the graphics card, and all the rest of RV's processing (geometric transformations, compositing, color changes, editorial transitions, etc.) takes place in one or more GLSL passes on the card.  There is no opportunity or mechanism to bring the partially processed pixels back to the CPU for some stage.

    Since you say:

The set of key frames and some parameters are used to compute the output sequence from the input sequence. Since this is slow, we cache the result; we cannot provide the output directly for display.

There doesn't seem to be much point in trying to cram this (slow) processing into RV.  If you want RV to act as a "prep" tool for your algorithm and then display the results, I think the best you could do is a scripted plugin that, on user command, dumps a session file and runs RVIO to produce an input sequence, then runs your algorithm, then loads the resulting files into RV.
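That round trip could be driven by a few shell commands. In the sketch below, `rvio` and `rv` are RV's real command-line tools, but the flags shown and the external `retimer` binary are illustrative only, not verified usage:

```python
def build_prep_commands(session_file, plate_pattern, result_pattern,
                        retimer="retimer"):
    """Command lines for the dump / render / process / reload round trip."""
    return [
        ["rvio", session_file, "-o", plate_pattern],   # bake the session to frames
        [retimer, "--input", plate_pattern,            # run the slow algorithm
         "--output", result_pattern],
        ["rv", result_pattern],                        # view the results
    ]
```

Each list could then be handed to `subprocess.run` in sequence by the scripted plugin.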

    Sorry to not have something more helpful to offer,

    Alan
