Chapter 1 Overview
RV comes with the source code to its user interface. The code is written in a language called Mu which is not difficult to learn if you know Python, MEL, or most other computer languages used for computer graphics. As of 3.12, RV can also use Python in a nearly interchangeable manner.
If you are completely unfamiliar with programming, you may still glean information about how to customize RV from this manual, but the more complex tasks, like creating a special overlay or slate for RVIO or adding a new heads-up widget to RV, might be difficult to understand without help from someone more experienced.
This manual does not assume you know Mu to start with, so you can dive right in. For Python, some assumptions are made. The chapters are organized with specific tasks in mind.
The reference chapters contain detailed information about various internals that you can modify from the UI.
Using the RV file format (.rv) is detailed in Chapter 6.
1.1 The Big Picture
RV is two different pieces of software: the core (written in C++) and the interface (written in Mu and Python). The core handles the following things:
- Image, Movie, and Audio I/O
- Caching Images and Audio
- Tracking Dependencies Among Image and Audio Operations
- Basic Image Processing in Software
- Rendering Images
- Feeding Audio to Audio Output Devices
- Handling User Events
- Rendering Additional Information and Heads-up Widgets
- Setting and Getting State in the Image Processing Graph
- Interfacing to the Environment
- Handling User Defined Setup of Incoming Movies/Images/Audio
- High Level Features
RVIO shares almost everything with RV, including the UI code (if you want it to). However, RVIO does not launch a GUI, so its UI is normally non-existent. RVIO does have additional hooks for modification at the user level: overlays and leaders. Overlays are Mu scripts which allow you to render additional visual information on top of rendered images before RVIO writes them out. Leaders are scripts which generate frames from scratch (there is nothing rendered under them) and are mainly there to generate customized, flexible slates automatically.
1.2 Drawing
In RV's user interface code or RVIO's leaders and overlays it's possible to draw on top of rendered frames. This is done using the industry standard OpenGL API. There are Mu modules which implement OpenGL 1.1 functions, including the GLU library. In addition, there is a module which makes it easy to render TrueType fonts as textures (so you can scale, rotate, and composite characters as images). For Python there is PyOpenGL and related modules.
Mu has a number of OpenGL friendly data types which include native support for 2D and 3D vectors and dependently typed matrices (e.g., float[4,4], float[3,3], float[4,3], etc). The Mu GL modules take the native types as input and return them from functions, but you can use normal GL documentation and man pages when programming Mu GL. In this manual, we assume you are already familiar with OpenGL. There are many resources available to learn it in a number of different programming languages. Any of those will suffice to understand it.
1.3 Menus
The menu bar in an RV session window is completely controlled (and created) by the UI. There are a number of ways you can add menus or override and replace the existing menu structure.
Adding one or more custom menus to RV is a common customization. This manual contains examples of varying complexity to show how to do this. It is possible to create static menus (pre-defined with a known set of menu items) or dynamic menus (menus that are populated when RV is initialized based on external information, like environment variables).
Chapter 2 Image Processing Graph
The UI needs to communicate with the core part of RV. This is done in two ways: by calling special command functions (commands) which act directly on the core (e.g. play() causes it to start playing), or by setting variables in the underlying image processing graph which control how images will be rendered.
Inside each session there is a directed acyclic graph (DAG) which determines how images and audio will be evaluated for display. The DAG is composed of nodes which are themselves collections of properties.
A node is something that produces images and/or audio as output from images and audio inputs (or no inputs in some cases). An example from RV is the color node; the color node takes images as input and produces images that are copies of the input images with the hue, saturation, exposure, and contrast potentially changed.
A property is a state variable. The node's properties as a whole determine how the node will change its inputs to produce its outputs. You can think of a node's properties as parameters that change its behavior.
RV's session file format (.rv file) stores all of the nodes associated with a session including each node's properties. So the DAG contains the complete state of an RV session. When you load an .rv file into RV, you create a new DAG based on the contents of the file. Therefore, to change anything in RV that affects how an image looks, you must change a property in some node in its DAG.
There are a few commands which RV provides to get and set properties: these are available in both Mu and Python.
Finally, there is one last thing to know about properties: they are arrays of values. A property may contain zero values (it is empty), one value, or many values. The get and set functions all deal with arrays of values even when a property only holds a single value.
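For example, a minimal sketch in Python of reading and then changing one property (the same display gamma property used in the addressing examples later in this chapter):

import rv.commands as rvc

# Getters always return a list, even when a property holds one value.
gamma = rvc.getFloatProperty("display.color.gamma")
print(gamma)

# Setters likewise take a list; the final argument lets the property
# resize to match the supplied list.
rvc.setFloatProperty("display.color.gamma", [2.2, 2.2, 2.2], True)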
Chapter 16 lists all properties and their functions for each node type.
2.1 Top-Level Node Graph
When RV is started with, for example, two pieces of media (movies, file sequences), it will create two top-level group nodes: one for each media source. These are called RVSourceGroup nodes. In addition, there are four other top-level group nodes created, plus one display group node for each output device present on the system (i.e., one for each connected monitor and, in the case of RVSDI, one for each SDI device).
There is always a default layout (RVLayoutGroup), sequence (RVSequenceGroup), and stack node (RVStackGroup), as well as a view group node (RVViewGroup). The view group is connected to each of the active display groups (RVDisplayGroup). There is only one input to the view group, and that input determines what the user is seeing in the image viewer. When the user changes views, the view group input is switched to the node the user wishes to see. For example, when the user looks at one of the sources (not in a sequence) the view group will be connected directly to that source group.
New top-level nodes can be created by the user. These nodes can take as inputs any other top-level node in the session other than the view group and display groups.
In the scripting languages, the nodes are referred to by internal name. The internal name is not normally visible to the user, but is used extensively in the session file. Most of the node graph commands use internal node names or the name of the node's type.
2.2 Group Nodes and Pipeline Groups
A group node is composed of multiple member nodes. The graph connectivity is determined by the value of the group node's properties or it is fixed. Group nodes can contain other group nodes. The member nodes are visible to the user interface scripting languages and their node names are unique in the graph. Nodes may only be connected to nodes that are members of the same group. In the case of top level nodes they can be connected to other top level nodes.
In RV 4, a new type of group node has been introduced: the pipeline group. A pipeline group is a group node that connects its members into a single pipeline (no branches). Every pipeline group has a string array property called pipeline.nodes which determines the types of the nodes in the pipeline and the order in which they are connected. Any node type other than view and display group nodes can be specified in the pipeline.nodes property.
Each type of pipeline group has a default pipeline. The RVLinearizePipelineGroup has two nodes in its default pipeline; all of the others have a single node, except for the view pipeline, which is empty by default. By modifying the pipeline.nodes property in any of these pipeline groups, the default member nodes can be swapped out, removed completely, or supplemented with additional nodes.
For example, the following Python code will set the view pipeline to use a user defined node called “FilmLook”:
setStringProperty("#RVViewPipelineGroup.pipeline.nodes", ["FilmLook"], True)
2.3 Source Group Node
The source group node (RVSourceGroup) has a fixed set of nodes and three pipeline groups which can be modified to customize the source color management.
The source group takes no inputs. There is either an RVFileSource or an RVImageSource node at the leaf position of the source group. A file source contains the name of the media that is provided by the source. An image source contains the raw pixels of its media (usually obtained directly from a renderer, etc.).
The source group is responsible for linearizing the incoming pixel data, possibly color correcting it and applying a look, and holding per-source annotation and transforms. Any of these operations can be modified by changing property values on the member nodes of the source group.
addSourceBegin()
    Optional call providing a fast add-source mechanism when adding multiple sources. It postpones connecting the added sources to the default views' inputs until the corresponding addSourceEnd() is called. To enable this optimization, call addSourceBegin() first, follow it with a series of addSource() calls, and end with addSourceEnd().

addSourceEnd()
    Ends a fast add-source block started with addSourceBegin(); the sources added in between are connected to the default views' inputs at this point.

addSources(string[] sources, string tag = "", bool processOpts = false, bool merge = false)
    Adds new source groups to the session (see addSource). This function adds the requested sources asynchronously. In addition to the "incoming-source-path" and "new-source" events generated for each source, a "before-progressive-loading" and "after-progressive-loading" event pair will be generated at the appropriate times. An optional tag can be provided which is passed to the generated internal events; the tag can indicate the context in which the addSource() call is occurring (e.g. drag, drop, etc.). The optional processOpts argument can be set to true if there are 'option' states, like -play, that should be processed after the loading is complete. Note that sources can be single movies/sequences, or you can use the "[]" notation from the command line to specify multiple files for one source, such as stereo layers or additional audio files. Per-source command-line flags can also be used here, but the flags should be marked by a "+" rather than a "-". Also note that each argument is a separate element of the input string array. For example, a single stereo source might look like string[] {"[", "left.mov", "right.mov", "+rs", "1001", "]"}.
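For example, a minimal sketch in Python of batching several source additions (the file names are placeholders):

import rv.commands as rvc

# Postpone connecting the new sources to the default views until the end.
rvc.addSourceBegin()

rvc.addSource("shotA.mov")
rvc.addSource("shotB.#.dpx")
rvc.addSource("shotC.mov")

# Connect everything that was added above in one go.
rvc.addSourceEnd()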
2.4 View Group Node
The view group (RVViewGroup) is responsible for viewing transforms and is the final destination for audio in most cases. The view group is also responsible for rendering any audio waveform visualization.
Changing the view in RV is equivalent to changing the input of the view group. There is only one view group in an RV session.
The view group contains a pipeline into which arbitrary nodes can be inserted for purposes of QC and visualization. By default, this pipeline is empty (it has no effect).
2.5 Sequence Group Node
The internal RVSequence node contains an EDL data structure which determines the order and possibly the frame ranges for its inputs. By default the EDL is automatically created by sequencing the inputs in order from the first to last with their full frame ranges. The automatic EDL function can be disabled in which case arbitrary EDL data can be set including cuts back to a single source multiple times.
Each input to a sequence group has a unique sub-graph associated with it that includes an RVPaint node to hold annotation per input and an optional retime node to force all input media to the same FPS.
2.6 Stack Group Node
The stack group node displays its inputs on top of each other and can control a crop per input in order to allow pixels from lower layers to be seen under upper layers. Similar to a sequence group, the stack group contains an optional retime node per input in order to force all of the inputs' frame rates to the same value.
Unlike the sequence group, the stack group's paint node stores annotation after the stacking so it always appears on top of all images.
2.7 Layout Group Node
The layout group is similar to a stack group, but instead of showing all of its inputs on top of one another, the inputs are transformed into a grid, row, column, or under control of the user (manually). Like the other group nodes, there is an optional retime node to force all inputs to a common FPS. Annotations on the layout group appear on top of all images regardless of their input order.
2.8 Display Group Node
There is one display group for each video device accessible to RV. For example in the case of a dual monitor setup, there would be two display groups: one for each monitor. In the case of RVSDI, there is also an additional display group for each SDI output device.
The display group has two functions: to prepare the working space pixels for display on the associated device and to set any stereo modes for that device.
By default the display group's pipeline uses an RVDisplayColor node to provide the color correction. The user can use any node for that purpose instead of or in addition to the existing RVDisplayColor. For example, when OpenColorIO is being used, a DisplayOCIONode is used in place of the RVDisplayColor.
For a given desktop setup with multiple monitors only one of the RVDisplayGroups is active at a time: the one corresponding to the monitor that RV's main window is on. In presentation mode, two RVDisplayGroups will be active: one for RV's main window and one for the presentation device. Each display group has properties which identify their associated device.
Changes to a display group affect the color and stereo mode for the associated device only. In order to make a global color change that affects all devices, a node should be inserted into the view group's pipeline or earlier in the graph.
2.9 Addressing Properties
A full property name has three parts: the node name, the component name, and the property name. These are concatenated together with dots, like nodename.componentname.propertyname. Each property has its own type, which can be set and retrieved with one of the set or get functions. You must use the correct get or set function to access the property. For example, to set the display gamma, which is part of the "display" node, you need to use setFloatProperty() like so in Mu:
setFloatProperty("display.color.gamma", float[] {2.2, 2.2, 2.2}, true)
setFloatProperty("display.color.gamma", [2.2, 2.2, 2.2], True)
In an RV session, some node names will vary per the source(s) being displayed and some will not. Figure 2.8 shows a pipeline diagram for one possible configuration and indicates which are per-source (duplicated) and which are not.
At any point in time, a subset of the graph is active. For example if you have three sources in a session and RV is in sequence mode, at any given frame only one source branch will be active. There is a second way to address nodes in RV: by their types. This is done by putting a hash (#) in front of the type name. Addressing by node type will affect all of the currently active nodes of the given type. For example, a property in the color node is exposure which can be addressed directly like this in Mu:
color.color.exposure
or by node type:

#RVColor.color.exposure
When the “#” type name syntax is used, and you use one of the set or get functions on the property, only nodes that are currently active and which are the first reachable of the given type will be considered. So in this case, if we were to set the exposure using type-addressing:
setFloatProperty("#RVColor.color.exposure", float[] {2.0, 2.0, 2.0}, true)
setFloatProperty("#RVColor.color.exposure", [2.0, 2.0, 2.0], True)
In sequence mode (i.e. the default case), only one RVColor node is usually active at a time (the one belonging to the source being viewed at the current frame). In stack mode, the RVColor nodes for all of the sources could be active. In that case, they will all have their exposure set. In the UI, properties are almost exclusively addressed in this manner so that making changes affects the currently visible sources only. See figure 2.9 for a diagrammatic explanation.

Figure 2.9:
Active Nodes in the Image Processing Graph. The active nodes are those nodes which contribute to the rendered view at any given frame. In this configuration, when the sequence is active, there is only one source branch active (the yellow nodes). By addressing properties using their node's type name, you can affect only active nodes with that type without needing to search for the exact node(s).
There is also a third addressing syntax which uses "@" in front of the type name:

@RVDisplayColor.color.brightness
The above would affect only the first RVDisplayColor node it finds instead of all RVDisplayColor nodes of depth 1 like “#” does. This is useful with presentation mode, for example, because setting the brightness would be confined to the first RVDisplayColor node, which would be the one associated with the presentation device. If “#” were used, all devices would have their brightness modified. The utility of the “@” syntax is limited compared to “#”, so if you are unsure of which to use, try “#” first.
Chapter 16 has all the details about each node type.
2.10 User Defined Properties
It's possible to add your own properties when creating an RV file from scratch or from the user interface code using the newProperty() function. There are two typical reasons to do this (a short sketch follows the list):
- You wish to save something in a session file that was created interactively by the user.
- You're generating session files from outside RV and you want to include additional information (e.g. production tracking, annotations) which you'd like to have available when RV plays the session file.
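A minimal sketch in Python; the property namespace and values here are made up for illustration:

import rv.commands as rvc

# Create a new string property on the active file source node.
# "tracking" and "shot" are hypothetical names.
rvc.newProperty("#RVFileSource.tracking.shot", rvc.StringType, 1)

# Fill it in; the value will be written out with the session file.
rvc.setStringProperty("#RVFileSource.tracking.shot", ["ABC-010"], True)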
2.11 Getting Information From Images
RV's UI often needs to take actions that depend on the context. Usually the context is the current image being displayed. Table 2.4 shows the most useful command functions for getting information about displayed images.
For example, when automating color management, the color space of the image or the origin of the image may be required to determine the best way to view it (e.g., for a certain kind of DPX file you might want to use a particular display or file LUT). The color space is often stored as an image attribute. In some cases, image attributes are misleading; for example, a well known 3D software package renders images with incorrect information about pixel aspect ratio. Usually other information in the image attributes, coupled with the file name and origin, is enough to make a good guess.
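As a small illustration (a sketch, not from the manual; the available attribute names vary by file format), the attributes of the sources at the current frame can be inspected from Python:

import rv.commands as rvc

for source in rvc.sourcesAtFrame(rvc.frame()):
    # Each entry is a (name, value) pair of strings.
    for name, value in rvc.sourceAttributes(source):
        print("%s: %s = %s" % (source, name, value))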
Chapter 3 Writing a Custom GLSL Node
RV can use custom shaders to do GPU accelerated image processing. Shaders are authored in a superset of the GLSL language. These GLSL shaders become image processing Nodes in RV's session. Note that nodes can be either “signed” or “unsigned”. As of RV6, nodes can be loaded by any product in the RV line (RV, RV-SDI, RVIO). The most basic workflow is as follows:
- Create a file with a custom GLSL function (the Shader) using RV's extended GLSL language.
- Create a GTO node definition file which references the Shader file.
- Test and adjust the shader/node as necessary.
- Place the node definition and shader in the RV_SUPPORT_PATH under the Nodes directory for use by other users.
3.1 Node Definition Files
Node definition files are GTO files which completely describe the operation of an image processing node in the image/audio processing graph.
A node definition appears as a single GTO object of type IPNodeDefinition. This makes it possible for a node definition to appear in a session file directly or in an external definition file which can contain a library of definition objects.
The meat of a node definition is source code for a kernel function written in an augmented version of GLSL which is described below.
The following example defines a node called "Gamma" which takes a single input image and applies a gamma correction:
GTOa (4)

Gamma : IPNodeDefinition (1)
{
    node
    {
        string evaluationType = "color"
        string defaultName = "gamma"
        string creator = "Tweak Software"
        string documentation = "Gamma"
        int userVisible = 1
    }

    render
    {
        int intermediate = 0
    }

    function
    {
        string name = "main" # OPTIONAL
        string glsl = "vec4 main (const in inputImage in0, const in vec3 gamma) { return vec4(pow(in0().rgb, gamma), in0().a); }"
    }

    parameters
    {
        float[3] gamma = [ [ 0.4545 0.4545 0.4545 ] ]
    }
}
3.2 Fields in the IPNodeDefinition
- node.evaluationType: the node's evaluation type; the examples in this chapter use "color" and "combine" (other types exist)
- node.defaultName: the default name prefix for newly instantiated nodes
- node.creator: the author of the definition
- node.documentation: documentation string, possibly HTML; in practice this may be quite large
- node.userVisible: if non-0, a user can create this node directly; otherwise it can only be created programmatically
- render.intermediate: if non-0, the node's results are forced to be cached
- function.name: the name of the entry point in the source code; by default this is main
- function.fetches: approximate number of fetches performed by the function; this is meaningful for filters, e.g. a 3x3 blur filter does 9 fetches
- function.glsl: source code for the function in the augmented GLSL language; alternately this can be a file URL pointing to the location of a separate text file containing the source code (see below for more details on file URL handling)
- parameters: bindable parameters should be given default values in the parameters component; special variables (e.g. input images, the current frame, etc.) need not be given default values
3.2.1 The “combine” Evaluation Type
A “combine” node will evaluate its single input once for each parameter to the shader of type "inputImage".
The names of the inputImage parameters in the shader may be chosen to be meaningful to the shader writer; they are not meaningful to the evaluation of the combine node. The order of the inputImage parameters in the shader parameter list will correspond to the multiple evaluations of the node's input (see below).
Each time the input is evaluated, there are a number of variations that can be made in the context by way of properties specified in the node definition. To be clear, these properties are specified in the “parameters” section of the node definition, but they are “evaluation parameters”, not shader parameters. The examples below use per-input "eye" and frame "offset" parameters.
A context-modifying property has three parts: the name (see above), an "inputImage index" (the integer tacked onto the name), and the value. The effect of the parameter is that the context of the evaluation of the input specified by the index will be modified by the value. So, for example, "int eye0 = 1" means that the "eye" parameter of the context used in the first evaluation of the input will be set to "1".
StereoDifference : IPNodeDefinition (1)
{
    node
    {
        string evaluationType = "combine"
    }

    function
    {
        string glsl = "file://${HERE}/StereoQC.glsl"
    }

    parameters
    {
        int eye0 = 0
        int eye1 = 1
    }
}
The corresponding shader's entry point takes two inputImage parameters:

vec4 main (const in inputImage left, const in inputImage right)
- It's a Combine node, so its single input can be evaluated multiple times.
- The shader has two inputImage parameters so the input will be evaluated twice.
- The node definition contains "eye" parameters, so the eye value of the evaluation context will differ in different evaluations of the input.
- In the first evaluation of the input (index = 0), the context's eye value will be set to 0, and in the second it will be set to 1.
- The result of each evaluation of the input is made available to the shader in the corresponding inputImage parameter.
As a second example, here is a frame-blending definition that uses frame "offset" parameters:

FrameBlend : IPNodeDefinition (1)
{
    node
    {
        string evaluationType = "combine"
    }

    function
    {
        string glsl = "file://${HERE}/FrameBlend.glsl"
    }

    parameters
    {
        int offset0 = -2
        int offset1 = -1
        int offset2 = 0
        int offset3 = 1
        int offset4 = 2
    }
}
Its shader's entry point takes five inputImage parameters:

vec4 main (const in inputImage in0, const in inputImage in1, const in inputImage in2, const in inputImage in3, const in inputImage in4)
So the result is that the input to the FrameBlend node will be evaluated 5 times, and in each case the evaluation context will have a frame value that is equal to the incoming frame value plus the corresponding offset. Note that the shader doesn't know anything about this, and from its point of view it simply has 5 input images.
3.3 Alternate File URL
Language source code can be either inlined for a self contained definition or can be a modified file URL which points to an external file. An example file URL might be:
file:///Users/foo/glsl/foo_shader_source.glsl
If the node definition reader sees a file URL, it will also perform variable substitution from the environment and from any special predefined variables. For example, if the $HOME environment variable exists, the following would be equivalent on a Mac:
file://${HOME}/glsl/foo_shader_source.glsl
There is currently one special variable defined called $HERE which has the value of the directory in which the definition file lives. So if for example the node definition file lives in the filesystem like so:
/Users/foo/nodes/my_nodes.gto
/Users/foo/nodes/glsl/node1_source_code.glsl
/Users/foo/nodes/glsl/node2_source_code.glsl
/Users/foo/nodes/glsl/node3_source_code.glsl
and it references the GLSL files mentioned above, then valid file URLs for the source files would look like this:
file://${HERE}/glsl/node1_source_code.glsl
file://${HERE}/glsl/node2_source_code.glsl
file://${HERE}/glsl/node3_source_code.glsl
3.4 Augmented GLSL Syntax
GLSL source code can contain any set of functions and global static data but may not contain any uniform block definitions. Uniform block values are managed by the underlying renderer.
3.4.1 The main() Function
For each input to a node there should be a parameter of type inputImage. The parameters are applied in the order they appear. So the first node image input is assigned to the first inputImage parameter and so on.
Any additional parameters are looked up in the 'parameters' component of the node. When a node is instantiated, that component will be populated with properties corresponding to the additional parameters of the main() function.
vec4 main (const in inputImage in0, const in vec3 gamma)
{
    vec4 P = in0();
    return vec4(pow(P.rgb, gamma), P.a);
}
In this case the node can only take a single input and will have a property called parameters.gamma of type float[3]. By changing the gamma property, the user can modify the behavior of the Gamma node.
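For instance, a minimal sketch in Python of editing that property on an active instance of the Gamma node defined above (the value is arbitrary):

import rv.commands as rvc

# Address the node by its type; this affects the currently active
# Gamma node(s), as described in the chapter on the processing graph.
rvc.setFloatProperty("#Gamma.parameters.gamma", [0.6, 0.6, 0.6], True)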
3.4.2 The inputImage Type
A new type, inputImage, has been added to GLSL. This type represents the input images to the node. So a node with one image input must take a single inputImage argument. Likewise, a two-input node should take two such arguments.
There are a number of operations that can be done on an inputImage object. For the following examples, the parameter will be called i.
Use of the inputImage type as a function argument is limited to the main() function. Calls to an inputImage object should be minimized where possible; e.g., the result should be stored in a local variable and the local variable used thereafter. For example:
vec4 P = i();
return vec4(P.rgb * 0.5, P.a);
NOTE: The st value returned by an inputImage ranges from 0 to the width in X and 0 to the height in Y. So, for example, the first pixel in the image is located at (0.5, 0.5), not at (0, 0). Similarly, the last pixel in the image is located at (width-0.5, height-0.5), not (width-1, height-1) as might be expected. See ARB_texture_rectangle for information on why this is. In GLSL 1.5 and greater the rectangle coordinates are built into the language.
3.4.3 The outputImage Type
The type outputImage has also been added. This type provides information about the output framebuffer.
The main() function may have a single outputImage parameter. You cannot pass an outputImage to auxiliary functions, nor can an auxiliary function have an outputImage parameter. You can, however, pass the results of operations on the outputImage object to other functions.
3.4.4 Use of Samplers
Samplers can be used as inputs to node functions. The sampler name and type must match an existing parameter property on the node. So for example a 1D sampler would correspond to a 1D property the value of which is a scalar array. A 3D sampler would have a type like float[3,32,32,32] if it were an RGB 32^3 LUT.
In the above table, D would normally be 1, 3, or 4 for scalar, RGB, or RGBA data. A value of 2 is possible but unusual.
Use the new style texture() call instead of the non-overloaded pre GLSL 1.30 function calls like texture3D() or texture2DRect(). This should be the case even when the driver only supports 1.20.
3.5 Testing the Node Definition
Once you have a NodeDefinition GTO file that contains or references your shader code as described above, you can test the node as follows:
- Add the node definition file to the Nodes directory on your RV_SUPPORT_PATH. For example, on Linux, you can put it in $HOME/.rv/Nodes. If the GLSL code is in a separate file, it should be in the location specified by the URL in the Node Definition file. You can use the ${HERE}/myshader.glsl notation (described above) to indicate that the GLSL is to be found in the same directory.
- Start RV and from the Session Manager add a node with the “plus” button or the right-click menu (“New Viewable”) by choosing “Add Node by Type” and entering the type name of the new node (“Gamma” in the above example).
- At this point you might want to save a Session File for easy testing.
- You can now iterate by changing your shader code or the parameter values in the Session File and re-running RV to test.
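As an alternative to the Session Manager steps above, the node can also be created and exercised from a script; a rough sketch, assuming the "Gamma" definition from this chapter is installed and using standard graph commands:

import rv.commands as rvc

# Create an instance of the custom node type and view it.
node = rvc.newNode("Gamma", "testGamma")
rvc.setNodeInputs(node, [rvc.viewNode()])
rvc.setViewNode(node)

# Adjust the shader parameter and watch the image update.
rvc.setFloatProperty(node + ".parameters.gamma", [0.6, 0.6, 0.6], True)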
3.6 Publishing the Node Definition
When you have tested sufficiently in RV and would like to make the new Node Definition available to other users running RV, RVSDI, RVIO, etc, you need to:
Make the Node Definition available to users. RV will pick up Node Definition files from any Nodes sub-directory along the RV_SUPPORT_PATH. So your definitions can be distributed by simply inserting them into those directories, or by including them in an RV Package (any GTO/GLSL files in an RV Package will be added to the appropriate “Nodes” sub-directory when the Package is installed). With some new node types, you may want to distribute Python or Mu code to help the user manage the creation and parameter-editing of the new nodes, so wrapping all that up in an RV Package would be appropriate in those cases.
Chapter 4 Python
As of RV 3.12 you can use Python in RV in conjunction with Mu or in place of it. It's even possible to call Python commands from Mu and vice versa. So, in answer to the question of which language you should use to customize RV: whichever you like. At this point we recommend using Python.
There are some slight differences that need to be noted when translating code between the two languages:
In Python, the module names required by RV are the same as in Mu. As of this writing, these are commands, extra_commands, rvtypes, and rvui. However, the Python modules all live in the rv package. This package is in-memory and only available at RV's runtime; you can access these commands by writing your own custom MinorMode package. So while in Mu you can write:
use commands
require commands
to make the commands visible in the current namespace. In Python you need to include the package name:
from rv.commands import *
import rv.commands
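As a rough sketch of what such a MinorMode package module can look like (the mode name here is made up; see the package documentation for the full directory structure):

from rv import commands, rvtypes

class ExampleMode(rvtypes.MinorMode):
    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        # Register the mode with no key bindings and no menu.
        self.init("example-mode", None, None)

# RV looks for a createMode() entry point when it loads a mode module.
def createMode():
    return ExampleMode()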
4.1 Calling Mu From Python
It's possible to call Mu code from Python, but in practice you will probably not need to do this unless you need to interface with existing packages written in Mu.
To call a Mu function from Python, you need to import the MuSymbol type from the pymu module. In this example, the play function is imported and called F on the Python side. F is then executed:
from pymu import MuSymbol F = MuSymbol("commands.play") F()
If the Mu function has arguments, you supply them when calling. Return values are automatically converted between languages. The conversions are indicated in Figure 4.3.
from pymu import MuSymbol F = MuSymbol("commands.isPlaying") G = MuSymbol("commands.setWindowTitle") if F() == True: G("PLAYING")
Once a MuSymbol object has been created, the overhead to call it is minimal. All of the Mu commands module is imported on start up or reimplemented as native CPython in the Python rv.commands module so you will not need to create MuSymbol objects yourself; just import rv.commands and use the pre-existing ones.
When a Mu function parameter takes a class instance, a Python dictionary can be passed in. When a Mu function returns a class, a dictionary will be returned. Python dictionaries should have string keys which have the same names as the Mu class fields and corresponding values of the correct types.
For example, the Mu class Foo { int a; float b; } as instantiated as Foo(1, 2.0) will be converted to the Python dictionary {'a' : 1, 'b' : 2.0} and vice versa.
Existing Mu code can be leveraged with the rv.runtime.eval call to evaluate arbitrary Mu from Python. The second argument to the eval function is a list of Mu modules required for the code to execute and the result of the evaluation will be returned as a string. For example, here's a function that could be a render method on a mode; it uses the Mu gltext module to draw the name of each visible source on the image:
import rv.commands
import rv.runtime

def myRender (event):
    event.reject()

    for s in rv.commands.renderedImages():
        if rv.commands.nodeType(rv.commands.nodeGroup(s["node"])) != "RVSourceGroup":
            continue

        geom = rv.commands.imageGeometry(s["name"])

        if len(geom) == 0:
            continue

        x = geom[0][0]
        y = (geom[0][1] + geom[2][1]) / 2.0
        domain = event.domain()
        w = domain[0]
        h = domain[1]

        drawCode = """
        {
            rvui.setupProjection (%d, %d);
            gltext.color (rvtypes.Color(1.0, 1.0, 1.0, 1));
            gltext.size(14);
            gltext.writeAt(%f, %f, extra_commands.uiName("%s"));
        }
        """

        rv.runtime.eval(drawCode % (w, h, float(x), float(y), s["node"]),
                        ["rvui", "rvtypes", "extra_commands"])
4.2 Calling Python From Mu
There are two ways to call Python from Mu code: a Python function being used as a call back function from Mu or via the "python" Mu module.
In order to use a Python callable object as a callback from Mu code, simply pass the callable object to the Mu function. The callback function's arguments will be converted according to the Mu to Python value conversion rules shown in Figure 4.3. There are restrictions on which callable objects can be used; only callable objects which return values of None, Float, Int, String, Unicode, Bool, or have no return value are currently allowed. Callable objects which return unsupported values will cause a Mu exception to be thrown after the callable returns.
The Mu "python" module implements a small subset of the CPython API. You can see documentation for this module in the Mu Command API Browser under the Help menu. Here is an example of how you would call os.path.join from Python in Mu.
require python;

let pyModule = python.PyImport_Import ("os");

python.PyObject pyMethod  = python.PyObject_GetAttr (pyModule, "path");
python.PyObject pyMethod2 = python.PyObject_GetAttr (pyMethod, "join");

string result = to_string(python.PyObject_CallObject (pyMethod2,
                          ("root", "directory", "subdirectory", "file")));

print("result: %s\n" % result);
// Prints "result: root/directory/subdirectory/file"
If the method you want to call takes no arguments like os.getcwd, then you will want to call it in the following manner.
require python;

let pyModule = python.PyImport_Import ("os");

python.PyObject pyMethod = python.PyObject_GetAttr (pyModule, "getcwd");

string result = to_string(python.PyObject_CallObject (pyMethod, PyTuple_New(0)));

print("result: %s\n" % result);
// Prints "result: /var/tmp"
If the method you want to call requires the Python class instance "self" as an argument, you can get it by using the ModeManager as in the following example:
let pyModule = python.PyImport_Import ("sgtk_bootstrap");
python.PyObject pyMethod = python.PyObject_GetAttr (pyModule, "ToolkitBootstrap");
python.PyObject pyMethod2 = python.PyObject_GetAttr (pyMethod, "queue_launch_import_cut_app");
State state = data();
ModeManagerMode manager = state.modeManager;
ModeManagerMode.ModeEntry entry = manager.findModeEntry ("sgtk_bootstrap");
if (entry neq nil)
{
PyMinorMode sgtkMode = entry.mode;
python.PyObject_CallObject (pyMethod2, (sgtkMode._pymode, "no event"));
}
If you are interested in retrieving an attribute alone, here is an example of how you would retrieve sys.platform from Mu:
require python;

let pyModule = python.PyImport_Import ("sys");

python.PyObject pyAttr = python.PyObject_GetAttr (pyModule, "platform");

string result = to_string(pyAttr);

print("result: %s\n" % result);
// Prints "result: darwin"
4.3 Python Mu Type Conversions
4.4 PyQt versus PySide
RV 6 uses Qt 4.8. This version of Qt is supported by both the PySide and PyQt modules. However, from RV 6.x.4 onwards, RV ships with PySide for all platforms (OSX, Linux, Windows).
#!/Applications/RV64.app/Contents/MacOS/py-interp

# Import PySide classes
import sys
from PySide.QtCore import *
from PySide.QtGui import *

# Create a Qt application.
# IMPORTANT: RV's py-interp contains an instance of QApplication,
# so always check whether an instance already exists.
app = QApplication.instance()
if app == None:
    app = QApplication(sys.argv)

# Display the file path of the app.
print app.applicationFilePath()

# Create a Label and show it.
label = QLabel("Using RV's PySide")
label.show()

# Enter the Qt application main loop.
app.exec_()
sys.exit()
To access RV's essential session window Qt QWidgets, i.e. the main window, the GL view, the top tool bar, and the bottom tool bar, import the Python module 'rv.qtutils'.
import rv.qtutils

# Gets the current RV session window as a PySide QMainWindow.
rvSessionWindow = rv.qtutils.sessionWindow()

# Gets the current RV session GL view as a PySide QGLWidget.
rvSessionGLView = rv.qtutils.sessionGLView()

# Gets the current RV session top tool bar as a PySide QToolBar.
rvSessionTopToolBar = rv.qtutils.sessionTopToolBar()

# Gets the current RV session bottom tool bar as a PySide QToolBar.
rvSessionBottomToolBar = rv.qtutils.sessionBottomToolBar()
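As a small usage illustration (not from the manual; the dock title and label text are made up), the returned widgets are ordinary PySide objects, so standard Qt calls work on them:

from PySide.QtCore import Qt
from PySide.QtGui import QDockWidget, QLabel
import rv.qtutils

window = rv.qtutils.sessionWindow()

# Attach a custom dock widget to RV's main session window.
dock = QDockWidget("My Panel", window)
dock.setWidget(QLabel("Hello from a custom dock"))
window.addDockWidget(Qt.RightDockWidgetArea, dock)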
4.5 Shotgun Toolkit in RV
As of RV version 7.0, the standard Shotgun integration (known as “SG Review”) is supplied by Shotgun Toolkit code that is distributed with RV. In future releases, this will allow Toolkit apps to be versioned independently from RV and for the RV Toolkit engine to host user-developed apps.
- All the code comprising SG Review in RV is written in Python and is distributed with RV.
- In addition to the RV engine, three Toolkit apps are distributed with RV:
- The SG Review app (tk-rv-sgreview), which drives the core SG Review workflows in RV.
- The Import Cut app (tk-multi-importcut), which can be launched from the SG Review menu to import an EDL to Shotgun.
- A general purpose Python console (tk-multi-pythonconsole), which can be launched from the Tools menu after Toolkit has been initialized.
- A complete install of Shotgun Toolkit is not required to run RV 7 or the Toolkit components distributed with it. All the required code is installed with RV.
- As noted above, in future releases we hope to allow the Toolkit apps to be independently versioned and to allow the RV Engine to host user-created apps, but the initial RV 7 release is fairly “baked” and although all the relevant source code is supplied, we'd recommend that developers contact us first before making extensive changes (so that we can help make sure that future updates do not generate too much hassle).
- Since the “SGTK” RV Package is loaded by default, the “SG Review” menu will appear as soon as you authenticate RV with Shotgun. If you do not wish to use any SG Review features, you can uninstall the package or set the env var RV_SHOTGUN_NO_SG_REVIEW_MENU.
Chapter 5 Event Handling
Aside from rendering, the most important function of the UI is to handle events. An event can be triggered by any of the following:
- The mouse pointer moved or a button on the mouse was pressed
- A key on the keyboard was pressed or released
- The window needs to be re-rendered
- A file being watched was changed
- The user became active or inactive
- A supported device (like the apple remote control) did something
- An internal event like a new source or session being created has occurred
Each specific event has a name and may also have extra data associated with it in the form of an event object. To see the name of an event (at least for keyboard and mouse pointer events) you can select Help → Describe..., which will let you interactively see the event name as you hit keys or move the mouse. You can also use Help → Describe Key... to see what a specific key is bound to by pressing it.
Table 5.1 shows the basic event type prefixes.
When an event is generated in RV, the application will look for a matching event name in its bindings. The bindings are tables of functions which are assigned to certain event names. The tables form a stack which can be pushed and popped. Once a matching binding is found, RV will execute the function.
When receiving an event, all of the relevant information is in the Event object. This object has a number of methods which return information depending on the kind of event.
5.1 Binding an Event
In Mu (or Python) you can bind an event using any of the bind() functions. The most basic version of bind() takes the name of the event and a function to call when the event occurs as arguments. The function argument (which is called when the event occurs) should take an Event object as an argument and return nothing (void). Here's a function that prints hello in the console every time the "j" key is pressed:
If this is the first time you've seen this syntax, it's defining a Mu function. The first two characters \: indicate a function definition follows. The name comes next. The arguments and return type are contained in the parentheses. The first identifier is the return type, followed by a semicolon, followed by an argument list.
\: my_event_function (void; Event event)
{
    print("Hello!\n");
}

bind("key-down--j", my_event_function);
def my_event_function (event): print "Hello!" bind("default", "global", "key-down--j", my_event_function);
There are more complicated bind() functions to address binding functions in specific event tables (the Python example above is using the most general of these). Currently RV's user interface has one default global event table and a couple of other tables which implement the parameter edit mode and help modes.
Many events provide additional information in the event object. Our example above doesn't even use the event object, but we can change it to print out the key that was pressed by changing the function like so:
\: my_event_function (void; Event event)
{
    let c = char(event.key());
    print("Key pressed = %c\n" % c);
}
In Python:

def my_event_function (event):
    c = event.key()
    print "Key pressed = %s\n" % c
In this case, the Event object's key() function is being called to retrieve the key pressed. To use the return value as a key it must be cast to a char. In Mu, the char type holds a single unicode character. In Python a string is used.
See the section on the Event class to find out how to retrieve information from it. At this point we have not talked about where you would bind an event; that will be addressed in the customization sections.
5.2 Keyboard Events
There are two keyboard events: key-down and key-up. Normally the key-down events are bound to functions. The key-up events are necessary only in special cases.
The specific form for key-down events is key-down--something, where something uniquely identifies both the key pressed and any modifiers that were active at the time.
So if the "a" key was pressed the event would be called key-down--a. If the control key were held down while hitting the "a" key the event would be called key-down--control--a.
There are seven modifiers that may appear in the event name: alt, caplock, control, meta, numlock, scrolllock, and shift, in that order. The shift modifier is a bit different than the others. If a key is pressed with the shift modifier down and it would result in a different character being generated, then the shift modifier will not appear in the event and instead the resulting key will. This may sound complicated, but these examples should explain it:
For control + shift + A the event name would be key-down--control--A. For the "*" key (shift + 8 on American keyboards) the event would be key-down--*. Notice that the shift modifier does not appear in any of these. However, if you hold down shift and hit enter on most keyboards you will get key-down--shift--enter, since there is no character associated with that key sequence.
Some keys may have a special name (like enter above). These will typically be spelled out. For example, pressing the "home" key on most keyboards will result in the event key-down--home. The only way to make sure you have the correct event name for a key is to start RV and use the Help → Describe... facility to see the true name. Sometimes keyboards will label a key but produce an unexpected event. Some keyboards will not produce an event at all for certain keys, or will produce a unicode character sequence (which you can see via the help mechanism).
5.3 Pointer (Mouse) Events
The mouse (called pointer from here on) can produce events when it is moved, one of its buttons is pressed, an attached scroll wheel is rotated, or the pointer enters or leaves the window.
The basic pointer events are move, enter, leave, wheelup, wheeldown, push, drag, and release. All but enter and leave will also indicate any keyboard modifiers that are being pressed along with any buttons on the mouse that are being held down. The buttons are numbered 1 through 5. For example, if you hold down the left mouse button and move the mouse, the events generated are:
pointer-1--push
pointer-1--drag
pointer-1--drag
...
pointer-1--release
Pointer events involving buttons and modifiers always come in three parts: push, drag, and release. So, for example, if you press the left mouse button, move the mouse, press the shift key, move the mouse, and then release everything, you get:
pointer-1--push
pointer-1--drag
pointer-1--drag
...
pointer-1--release
pointer-1--shift--push
pointer-1--shift--drag
pointer-1--shift--drag
...
pointer-1--shift--release
Notice how the first group without the shift is released before starting the second group with the shift even though you never released the mouse button. For any combination of buttons and modifiers, there will be a push-drag-release sequence that is cleanly terminated.
It is also possible to hold multiple mouse buttons and modifiers down at the same time. When multiple buttons are held (for example, buttons 1 and 2) they are simply both included (like the modifiers), so for buttons 1 and 2 the name would be pointer-1-2--push to start the sequence.
The mouse wheel behaves more like a button: when the wheel moves you get only a wheelup or wheeldown event indicating which direction the wheel was rotated. The buttons and modifiers will be applied to the event name if they are held down. Usually the motion of the wheel on a mouse will not be smooth and the event will be emitted whenever the wheel "clicks". However, this is completely a function of the hardware, so you may need to experiment with any particular mouse.
There are three more pointer events that can be generated. When the mouse moves with no modifiers or buttons held down it will generate the event pointer--move. When the pointer enters the view pointer--enter is generated, and when it leaves, pointer--leave. Something to keep in mind: when the pointer leaves the view and the device is no longer in focus on the RV window, any modifiers or buttons the user presses will not be known to RV and will not generate events. When the pointer returns to the view it may have modifiers that became active while out of focus. Since RV cannot know about these modifiers and track them in a consistent manner (at least on X Windows), RV will assume they do not exist.
Pointer events have additional information associated with them like the coordinates of the pointer or where a push was made. These will be discussed later.
5.4 The Render Event
The UI will get a render event whenever it needs to be updated. When handling the render event, a GL context is set up and you can call any GL function to draw to the screen. The event supplies additional information about the view so you can set up a projection.
At the time the render event occurs, RV has already rendered whatever images need to be displayed. The UI is then called in order to add additional visual objects like an on-screen widget or annotation.
Here's a render function that draws a red polygon in the middle of the view right on top of your image.
\: my_render (void; Event event)
{
    let domain = event.domain(),
        w      = domain.x,
        h      = domain.y,
        margin = 100;

    use gl;
    use glu;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, w, 0, h);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Big red polygon
    glColor(Color(1,0,0,1));
    glBegin(GL_POLYGON);
    glVertex(margin, margin);
    glVertex(w-margin, margin);
    glVertex(w-margin, h-margin);
    glVertex(margin, h-margin);
    glEnd();
}
Note that for Python, you will need to use the PyOpenGL module or bind the symbols in the gl Mu module manually in order to draw in the render event.
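For reference, a rough PyOpenGL equivalent of the Mu function above might look like this (a sketch; it follows the Mu example and the domain accessors used in the earlier Python render example):

from OpenGL.GL import *
from OpenGL.GLU import *

def my_render (event):
    domain = event.domain()
    w = domain[0]
    h = domain[1]
    margin = 100

    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluOrtho2D(0.0, w, 0, h)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()

    # Big red polygon
    glColor4f(1, 0, 0, 1)
    glBegin(GL_POLYGON)
    glVertex2f(margin, margin)
    glVertex2f(w - margin, margin)
    glVertex2f(w - margin, h - margin)
    glVertex2f(margin, h - margin)
    glEnd()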
The UI code already has a function called render() bound to the render event, so binding a function like this to that event basically disables the existing UI rendering.
5.5 Remote Networking Events
RV's networking generates a number of events indicating the status of the network. In addition, once a connection has been established, the UI may generate events that are sent to remote programs, or remote programs may send events to RV. These are typically uniquely named events which are specific to the application that is generating and receiving them.
For example the sync mechanism generates a number of events which are all named remote-sync-something.
5.6 Internal Events
Some events will originate from RV itself. These include things like new-source or new-session which include information about what changed. The most useful of these is new-source which can be used to manage color and other image settings between the time a file is loaded and the time it is first displayed. (See Color Management Section). Other internal events are functional, but are placeholders which will become useful with future features.
The current internal events are listed in table 5.3.
5.6.1 File Changed Event
It is possible to watch a file from the UI. If the watched file changes in any way (modified, deleted, moved, etc) a file-changed event will be generated. The event object will contain the name of the watched file that changed. A function bound to file-changed might look something like this:
\: my_file_changed (void; Event event)
{
    let file = event.contents();
    print("%s changed on disk\n" % file);
}
In order to have a file-changed event generated, you must first have called the command function watchFile().
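A rough Python equivalent (a sketch; check the command API browser for watchFile's exact signature in your RV version, and note the watched path is illustrative):

import rv.commands as rvc

def my_file_changed (event):
    print("%s changed on disk" % event.contents())

# Ask RV to watch the file; a file-changed event fires when it changes.
rvc.watchFile("/shots/abc/render.0001.exr", True)

rvc.bind("default", "global", "file-changed", my_file_changed, "Report changed files")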
5.6.2 Incoming Source Path Event
This event is sent when the user has selected a file or sequence to load from the UI or command line. The event contains the name of the file or sequence. A function bound to this event can change the file or sequence that RV actually loads by setting the return contents of the event. For example, you can cause RV to check and see if a single file is part of a larger sequence and if so load the whole sequence like so:
\: load_whole_sequence (void; Event event)
{
    let file = event.contents(),
        (seq, frame) = sequenceOfFile(event.contents());

    if (seq != "") event.setReturnContent(seq);
}

bind("incoming-source-path", load_whole_sequence);
In Python:

def load_whole_sequence (event):
    file = event.contents()
    (seq, frame) = rv.commands.sequenceOfFile(event.contents())
    if seq != "":
        event.setReturnContent(seq)

bind("default", "global", "incoming-source-path", load_whole_sequence, "Doc string")
5.6.3 Missing Images
Sometimes an image is not available on disk when RV tries to read it. This is often the case when looking at an image sequence while a render or composite is ongoing. By default, RV will find a nearby frame to represent the missing frame if possible. The missing-image event will be sent once for each image which was expected but not found. The function bound to this event can render information on the screen indicating that the original image was missing. The default binding displays a message in the feedback area.
The missing-image event contains the domain in which rendering can occur (the window width and height) as well as a string of the form "frame;source", which can be obtained by calling the contents() function on the event object.
\: missingImage (void; Event event)
{
    let contents = event.contents(),
        parts    = contents.split(";"),
        media    = io.path.basename(sourceMedia(parts[1])._0);

    displayFeedback("MISSING: frame %s of %s" % (parts[0], media), 1, drawXGlyph);
}

bind("missing-image", missingImage);
Chapter 6 RV File Format
The RV file format (.rv) is a text GTO file. GTO is an open source file format which stores arbitrary data — mostly for use in computer graphics applications. The text GTO format is meant to be simple and human readable. It's helpful to familiarize yourself with the GTO documentation before reading this section. The documentation should come with RV, or you can read it online at the GTO web site.
6.1 How RV Uses GTO
RV defines a number of new GTO object protocols (types of objects). The GTO file is made up of objects, which contain components, which contain properties where the actual data resides. RV's use of the format is to store nodes in an image processing graph as GTO objects. How the nodes are connected is determined by RV and is not currently arbitrary so there are no connections between the objects stored in the file.
- The RVSession object (one per file) which stores information about the session. This includes the full frame range, currently marked frames, the playback FPS, and whether or not to use real time playback among other things.
- RVLayoutGroup, RVFolderGroup, RVSwitchGroup, RVSourceGroup, RVRetimeGroup, RVStackGroup, RVDisplayGroup and RVSequenceGroup nodes which form the top-level of the image processing graph.
- One or more RVFileSource objects each within an RVSourceGroup which specify all of the media (movies, audio files, image sequences) which are available in the session.
- Color correction objects like RVColor nodes which are members of RVSourceGroup objects.
- Image format objects like RVFormat or RVChannelMap which are also members of RVSourceGroup objects.
- An RVDisplayColor object (one per file) which indicates monitor gamma, any display LUT being used (and possibly the actual LUT data) which is part of the RVDisplayGroup.
- A connections object which contains connections between the top-level group nodes. The file only stores the top-level connections — connections within group nodes are determined by the group node at runtime.
Normally, RV will write out all objects to the session file, but it does not require all of them to create a session from scratch. For example, if you have a file with a single RVFileSource object in it, RV will use that and create default objects for everything else. So when creating a file without RV, it's not a bad idea to only include information that you need instead of replicating the output of RV itself. (This helps make your code future proof as well).
The order in which the objects appear in the file is not important. You can also include information that RV does not know about if you want to use the file for other programs as well.
6.2 Naming
The names of objects in the session are not visible to the user; however, they must follow certain naming conventions. There is a separate user interface name for top level nodes which the user does see. The user name can be set by creating a string property on a group node called ui.name.
- If the object is a group node type other than a source or display group its name can be anything, but it must be unique.
- If there is an RVDisplayGroup in the file it must be called displayGroup.
- If the object is a member of a group its name should have the pattern: groupName_nodeName where groupName is the name of the group the node is a member of. The nodeName can be anything, but RV will use the type name in lowercase without the “RV” at the front.
- If the object is an RVFileSourceGroup or RVImageSourceGroup it should be named sourceGroupXXXXXX where the Xs form a six digit zero padded number. RV will create source groups starting with number 000000 and increment the number for each new source group it creates. If a source group is deleted, RV may reuse its number when creating a new group.
- The connection object should be named connections.
- The RVSession object can have any name.
6.3 A Simple Example
The simplest RV file you can create is one which just causes RV to load a single movie file or image. This example loads a QuickTime file called “test.mov” from the directory RV was started in:
GTOa (3)

sourceGroup000000_source : RVFileSource (0)
{
    media
    {
        string movie = "test.mov"
    }
}
The first line is required for a text GTO file: it indicates the fact that the file is text format and that the GTO file version is 3. All of the other information (the frame ranges, etc) will be automatically generated when the file is read. By default RV will play the entire range of the movie just as if you dropped it into a blank RV session in the UI.
For this version of RV, you should name the first RVFileSource object sourceGroup000000_source, the second sourceGroup000001_source, the third sourceGroup000002_source, and so on. Eventually we'll want to make an EDL which will index the source objects, so the names matter (but not the order in which they appear in the file).
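Because the text format is so simple, a session like this can be generated with ordinary string formatting. The sketch below is plain Python with a hypothetical list of media paths and output file name; it writes one RVFileSource object per movie using the sourceGroupXXXXXX naming convention described above:

# Hypothetical example: write a minimal .rv session referencing several movies.
movies = ["shotA.mov", "shotB.mov", "shotC.mov"]   # stand-in media paths

with open("review.rv", "w") as f:
    f.write("GTOa (3)\n\n")
    for i, movie in enumerate(movies):
        f.write("sourceGroup%06d_source : RVFileSource (0)\n" % i)
        f.write("{\n")
        f.write("    media\n")
        f.write("    {\n")
        f.write('        string movie = "%s"\n' % movie)
        f.write("    }\n")
        f.write("}\n\n")

RV will create default objects for everything not written here, exactly as it does when the sources are dropped into an empty session.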
Now suppose we have an image sequence instead of a movie file. We also have an associated audio file which needs to be played with it. This is a bit more complicated, but we still only need to make a single RVFileSource object. Here test.#.dpx is an image layer and soundtrack.aiff is an audio layer:
GTOa (3)

sourceGroup000000_source : RVFileSource (0)
{
    media
    {
        string movie = [ "test.#.dpx" "soundtrack.aiff" ]
    }

    group
    {
        float fps = 24
        float volume = 0.5
        float audioOffset = 0.1
    }
}
You can have any number of audio and image sequence/movie files in the movie list. All of them together create the output of the RVFileSource object. If we were creating a stereo source, we might have left.#.dpx and right.#.dpx instead of test.#.dpx. When there are multiple image layers the first two default to the left and right eyes in the order in which they appear. You can change this behavior per-source if necessary. The formats of the various layers do not need to match.
The group component indicates how all of the media should be combined. In this case we've indicated the FPS of the image sequence, the volume of all audio for this source, and an audio slip of 0.1 (one tenth) of a second. Keep in mind that FPS here is for the image sequence(s) in the source; it has nothing to do with the playback FPS! The playback FPS is independent of the input source's frame rate.
Aside: What is the FPS of an RVFileSource Object Anyway?
If you write out an RV file from RV itself, you'll notice that the group FPS is often 0! This is a special cookie value which indicates that the FPS should be taken from the media. Movie file formats like QuickTime or AVI store this information internally. So RV will use the frame rate from the media file as the FPS for the source.
However, image sequences typically do not include this information (OpenEXR files are a notable exception). When you start RV from the command line it will use the playback FPS as a default value for any sources created. If there is no playback FPS on startup, either via the command line or preferences, it will default to 24 fps. So it's not a bad idea to include the group FPS when creating an RV file yourself if you're using image sequences. If you're using a movie file format you should either use 0 for the FPS or not include it and let RV figure it out.
What happens when you get a mismatch between the source FPS and the playback FPS? If there's no audio, you won't notice anything; RV always plays back every frame in the source regardless of the source FPS. But if you have audio layers along with your image sequence or if the media is a movie file, you will notice that the audio is either compressed or expanded in order to maintain synchronization with the images.
This is a very important thing to understand about RV: it will always play back every image no matter what the playback FPS is set to, and it will always change the audio to compensate for that and maintain synchronization with the images.
6.4 Per-Source and Display Color Settings and LUT Files
If you want to include per-source color information – such as forcing a particular LUT to be applied or converting log to linear – you can include just the additional nodes you need, with only the parameters you wish to set. For example, to apply a file LUT to the first source (e.g. sourceGroup000000_source) you can create an RVColor node similarly named sourceGroup000000_color.
sourceGroup000000_color : RVColor (1)
{
    lut
    {
        string file = "/path/to/LUTs/log2sRGB.csp"
        int active = 1
    }
}
This is a special case in the RV session file: you can refer to a LUT by file. Version 3.6 and earlier will not write a session file in this manner; instead a baked version of the LUT will be inlined directly in the session file.
If you have a new-source event bound to a function which modifies incoming color settings based on the image type, any node properties in your session file override the default values created there. To state it another way: values you omit in the session file still exist in RV and will take on whatever values the function bound to new-source made for them. To ensure that you get exactly the color you want you can specify all of the relevant color properties in the RVColor, RVLinearize, and RVDisplayColor nodes:
sourceGroup000000_colorPipeline_0 : RVColor (2)
{
    color
    {
        int invert = 0
        float[3] gamma = [ [ 1 1 1 ] ]
        string lut = "default"
        float[3] offset = [ [ 0 0 0 ] ]
        float[3] scale = [ [ 1 1 1 ] ]
        float[3] exposure = [ [ 0 0 0 ] ]
        float[3] contrast = [ [ 0 0 0 ] ]
        float saturation = 1
        int normalize = 0
        float hue = 0
        int active = 1
    }

    CDL
    {
        float[3] slope = [ [ 1 1 1 ] ]
        float[3] offset = [ [ 0 0 0 ] ]
        float[3] power = [ [ 1 1 1 ] ]
        float saturation = 1
        int noClamp = 0
    }

    luminanceLUT
    {
        float lut = [ ]
        float max = 1
        int size = 0
        string name = ""
        int active = 0
    }

    "luminanceLUT:output"
    {
        int size = 256
    }
}

sourceGroup000000_tolinPipeline_0 : RVLinearize (1)
{
    lut
    {
        float[16] inMatrix = [ [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] ]
        float[16] outMatrix = [ [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] ]
        float lut = [ ]
        float prelut = [ ]
        float scale = 1
        float offset = 0
        string type = "Luminance"
        string name = ""
        string file = ""
        int size = [ 0 0 0 ]
        int active = 0
    }

    color
    {
        string lut = "default"
        int alphaType = 0
        int logtype = 0
        int YUV = 0
        int invert = 0
        int sRGB2linear = 1
        int Rec709ToLinear = 0
        float fileGamma = 1
        int active = 1
        int ignoreChromaticities = 0
    }

    cineon
    {
        int whiteCodeValue = 0
        int blackCodeValue = 0
        int breakPointValue = 0
    }

    CDL
    {
        float[3] slope = [ [ 1 1 1 ] ]
        float[3] offset = [ [ 0 0 0 ] ]
        float[3] power = [ [ 1 1 1 ] ]
        float saturation = 1
        int noClamp = 0
    }
}

defaultOutputGroup_colorPipeline_0 : RVDisplayColor (1)
{
    lut
    {
        float[16] inMatrix = [ [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] ]
        float[16] outMatrix = [ [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] ]
        float lut = [ ]
        float prelut = [ ]
        float scale = 1
        float offset = 0
        string type = "Luminance"
        string name = ""
        string file = ""
        int size = [ 0 0 0 ]
        int active = 0
    }

    color
    {
        string lut = "default"
        string channelOrder = "RGBA"
        int channelFlood = 0
        int premult = 0
        float gamma = 1
        int sRGB = 0
        int Rec709 = 0
        float brightness = 0
        int outOfRange = 0
        int dither = 0
        int active = 1
    }

    chromaticities
    {
        int active = 0
        int adoptedNeutral = 0
        float[2] white = [ [ 0.3127 0.329 ] ]
        float[2] red = [ [ 0.64 0.33 ] ]
        float[2] green = [ [ 0.3 0.6 ] ]
        float[2] blue = [ [ 0.15 0.06 ] ]
        float[2] neutral = [ [ 0.3127 0.329 ] ]
    }
}
The above example values assume default color pipeline slots for a single source session. Please see section 12.3 to learn more about the specific color pipeline groups.
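When deciding which properties to include, it can help to inspect or change them on a live session from the Python command layer and then save the session to see what RV writes. The sketch below is only illustrative: the property access functions are from rv.commands, but the node names (sourceGroup000000_color here, matching the earlier example) depend on your RV version and session:

from rv import commands

# Read the gamma triple from the first source's color node.
gamma = commands.getFloatProperty("sourceGroup000000_color.color.gamma")
print(gamma)

# Point the same node at a file LUT and activate it, resizing properties as needed.
commands.setStringProperty("sourceGroup000000_color.lut.file",
                           ["/path/to/LUTs/log2sRGB.csp"], True)
commands.setIntProperty("sourceGroup000000_color.lut.active", [1], True)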
6.5 Information Global to the Session
Now let's add an RVSession object with in and out points. The session object should be called rv in this version. There should only be one RVSession object in the file.
From now on we're just going to show fragments of the file and assume that you can put them all together in your text editor.
rv : RVSession (1)
{
    session
    {
        string viewNode = "defaultSequence"
        int marks = [ 1 20 50 80 100 ]
        int[2] range = [ [ 1 100 ] ]
        int[2] region = [ [ 20 50 ] ]
        float fps = 24
        int realtime = 1
        int currentFrame = 30
    }
}
Assuming this was added to the top of our previous file with the source in it, the session object now indicates the frame range (1-100) and an in and out region (20-50) which is currently active. Frames 1, 20, 50, 80, and 100 are marked and the default frame is frame 30 when RV starts up. The realtime property is a flag which indicates that RV should start playback in real time mode. The view node indicates what will be viewed in the session when the file is opened.
Note that it's usually a good idea to skip the frame range boundaries unless an EDL is also specified in the file (which is not the case here). RV will figure out the correct range information from the source media. If you force the range information to be different than the source media's you may get unexpected results.
Starting in version 3.10 the marks and range can also be stored on each viewable top-level object. For example the defaultLayout and defaultSequence can have different marks and in and out points:
defaultStack : RVStackGroup (1)
{
    session
    {
        float fps = 24
        int marks = [ ]
        int[2] region = [ [ 100 200 ] ]
        int frame = 1
    }
}
If a group has a session component then its contents can provide an in/out region, marks, playback fps, and a current frame. When the user views the group node these values will be inherited by the session.
6.6 The Graph
Internally, RV holds a single image processing graph per session which is represented in the session file. The graph can have multiple nodes which determine how the sources are combined. These are the top-level nodes and are always group nodes.
Versions prior to 3.10 did not store graph connectivity in the file because the user was not allowed to change it. In 3.10, the user can create new top-level nodes (like sequences, stacks, layouts, retimings, etc). So the inputs for each node need to be stored in order to reproduce what the user created.
The connections between the top-level group nodes are stored in the connections object. In addition, in 3.10.9, a list of the top level nodes is also included. For example, this is what RV will write out for a session with a single source in it:
connections : connection (1)
{
    evaluation
    {
        string lhs = [ "sourceGroup000000" "sourceGroup000000" "sourceGroup000000" ]
        string rhs = [ "defaultLayout" "defaultSequence" "defaultStack" ]
    }

    top
    {
        string nodes = [ "sourceGroup000000" "defaultLayout" "defaultStack" "defaultSequence" ]
    }
}
The connections should be interpreted as arrows between objects. The lhs (left hand side) is the base of the arrow and the rhs (right hand side) is the tip; the bases and tips are stored in separate, parallel properties. So in this case the file has three connections:

- sourceGroup000000 → defaultLayout
- sourceGroup000000 → defaultSequence
- sourceGroup000000 → defaultStack

RV may write out a connection to the display group as well. However, that connection is redundant and may be overridden by the value of the view node property in the RVSession.
The nodes property, if it exists, will determine which nodes are considered top level nodes. Otherwise, nodes which are included in the connections and nodes which have a user interface name are considered top level.
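Since lhs and rhs are parallel arrays, turning them back into arrows (or into each node's ordered input list) is straightforward. A small sketch in plain Python using the values from the example above:

# Parallel lhs/rhs arrays as they appear in the connections object.
lhs = ["sourceGroup000000", "sourceGroup000000", "sourceGroup000000"]
rhs = ["defaultLayout", "defaultSequence", "defaultStack"]

# Each (lhs[i], rhs[i]) pair is one arrow from an input to its target.
for src, dst in zip(lhs, rhs):
    print("%s -> %s" % (src, dst))

# Grouping by target gives each top-level node's ordered input list.
inputs = {}
for src, dst in zip(lhs, rhs):
    inputs.setdefault(dst, []).append(src)
print(inputs)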
6.6.1 Default Views
There are three default views that are always created by RV: the default stack, sequence, and layout. Whenever a new source is added by the user each of these will automatically connect the new source as an input. When a new viewing node is created (a new sequence, stack, layout, retime) the default views will not add those — only sources are automatically added.
When writing a .rv file you can co-opt these views to rearrange or add inputs or generate a unique EDL but it's probably a better idea to create a new one instead; RV will never automatically edit a sequence, stack, layout, etc, that is not one of the default views.
6.7 Creating a Session File for Custom Review
One of the major reasons to create session files outside of RV is to automatically generate custom review workflows. For example, if you want to look at an old version of a sequence and a new version, you might have your pipeline output a session file with both in the session and have pre-constructed stacked views with wipes and a side-by-side layout of the two sequences.
To start with let's look at creating a session file which creates a unique sequence (not the default sequence) which plays back sources in a particular order. In this case, no EDL creation is necessary — we only need to supply the sequence with the source inputs in the correct order. This is analogous to the user reordering the inputs on a sequence in the user interface.
This file will have an RVSequenceGroup object as well as the sources. Creating sources is covered above so we'll skip to the creation of the RVSequenceGroup. For this example we'll assume there are three sources and that they all have the same FPS (so no retiming is necessary). We'll let RV handle creation of the underlying RVSequence and its EDL and only create the group:
// define sources ...

reviewSequence : RVSequenceGroup (1)
{
    ui
    {
        string name = "For Review"
    }
}

connections : connection (1)
{
    evaluation
    {
        string lhs = [ "sourceGroup000002" "sourceGroup000000" "sourceGroup000001" ]
        string rhs = [ "reviewSequence" "reviewSequence" "reviewSequence" ]
    }
}
RV will automatically connect up the default views so we can skip their inputs in the connections object for clarity. In this case, the sequence is connected up so that by default it will play sourceGroup000002 followed by sourceGroup000000 followed by sourceGroup000001 because the default EDL of a sequence just plays back the inputs in order. Note that for basic ordering of playback, no EDL creation is necessary. We could also create additional sequence groups with other inputs. Also note the use of the UI name in the sequence group.
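A pipeline script can emit this kind of fragment directly. The sketch below is plain Python; the function name and the review ordering are hypothetical, and the fragment would be appended to a file that already defines the sources:

def review_sequence_fragment(name, ui_name, ordered_sources):
    # Returns the text of an RVSequenceGroup plus the connections that feed it
    # the given sources in playback order.
    lhs = " ".join('"%s"' % s for s in ordered_sources)
    rhs = " ".join('"%s"' % name for _ in ordered_sources)
    return ("%s : RVSequenceGroup (1)\n"
            "{\n"
            "    ui\n"
            "    {\n"
            '        string name = "%s"\n'
            "    }\n"
            "}\n"
            "\n"
            "connections : connection (1)\n"
            "{\n"
            "    evaluation\n"
            "    {\n"
            "        string lhs = [ %s ]\n"
            "        string rhs = [ %s ]\n"
            "    }\n"
            "}\n" % (name, ui_name, lhs, rhs))

print(review_sequence_fragment("reviewSequence", "For Review",
      ["sourceGroup000002", "sourceGroup000000", "sourceGroup000001"]))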
Of course, the above is not typical in a production environment. Usually there are handles which need to (possibly) be edited out. There are two ways to do this with RV: either set the cut points in each source and tell the sequence to use them, or create an EDL in the sequence which excludes the handles.
To start with we'll show the first method: set the cut points. This method is easy to implement and the sequence interface has a button on it that lets the user toggle the in/out cuts on/off in realtime. If the user reorders the sequence, the cuts will be maintained. When using this method any sequence in the session can be made to use the same cut information — it propagates down from the source to the sequence instead of being stored for each sequence.
Setting the cut in/out points requires adding a property to the RVFileSource objects and specifying the in and out frames:
sourceGroup000000_source : RVFileSource (1)
{
    media
    {
        string movie = "shot00.mov"
    }

    cut
    {
        int in = 8
        int out = 55
    }
}

sourceGroup000001_source : RVFileSource (1)
{
    media
    {
        string movie = "shot01.mov"
    }

    cut
    {
        int in = 5
        int out = 102
    }
}

sourceGroup000002_source : RVFileSource (1)
{
    media
    {
        string movie = "shot02.mov"
    }

    cut
    {
        int in = 3
        int out = 22
    }
}
Finally, the most flexible way to control playback is to create an EDL. The EDL is stored in an RVSequence node which is a member of the RVSequenceGroup. Whenever an RVSequenceGroup is created, it will create a sequence node to hold the EDL. If you are not changing the default values or behavior of the sequence node it's not necessary to specify it in the file. In this case, however, we will be creating a custom EDL.
6.7.1 RVSequence
The sequence node can be in one of two modes: auto EDL creation or manual EDL creation. This is controlled by the mode.autoEDL property. If the property is set to 1 then the sequence will behave like so:
- If a new input is connected, the existing EDL is erased and a new EDL is created.
- Each input of the sequence will have a cut created for it in the order that they appear. If mode.useCutInfo is set, the sequence will use the cut information coming from the input to determine the cut in the EDL. Otherwise it will use the full range of the input.
- If cut info changes on any input to the sequence, the EDL will be adjusted automatically.
When auto EDL is not on, the sequence node behavior is not well-defined when the inputs are changed. In the future, we'd like to provide more interface for EDL modification (editing) but for the moment, a custom EDL should only be created programmatically in the session file.
For this next example, we'll use two movie files: a.mov and b.mov. They have audio so there's nothing interesting about their source definitions: just the media property with the name of the movie. They are both 24 fps and the playback will be as well:
Note that line breaks and indentation are not significant in a text GTO file, so an example with fewer (or more) line breaks than RV itself would write is still valid.
GTOa (3)

rv : RVSession (2)
{
    session
    {
        string viewNode = "mySequence"
    }
}

sourceGroup000000_source : RVFileSource (0)
{
    media
    {
        string movie = "a.mov"
    }
}

sourceGroup000001_source : RVFileSource (0)
{
    media
    {
        string movie = "b.mov"
    }
}

connections : connection (1)
{
    evaluation
    {
        string lhs = [ "sourceGroup000000" "sourceGroup000001" ]
        string rhs = [ "mySequence" "mySequence" ]
    }
}

mySequence : RVSequenceGroup (0)
{
    ui
    {
        string name = "GUI Name of My Sequence"
    }
}

mySequence_sequence : RVSequence (0)
{
    edl
    {
        int frame = [ 1 11 21 31 41 ]
        int source = [ 0 1 0 1 0 ]
        int in = [ 1 1 11 11 0 ]
        int out = [ 10 10 20 20 0 ]
    }

    mode
    {
        int autoEDL = 0
    }
}
The source property indexes the inputs to the sequence node. So index 0 refers to sourceGroup000000 and index 1 refers to sourceGroup000001. This EDL has four edits which are played sequentially as follows:

- Global frames 1-10 play source 0 (a.mov) frames 1-10
- Global frames 11-20 play source 1 (b.mov) frames 1-10
- Global frames 21-30 play source 0 (a.mov) frames 11-20
- Global frames 31-40 play source 1 (b.mov) frames 11-20
You can think of the properties in the sequence as forming a transposed matrix in which the properties are columns and edits are rows as in 6.1. Note that there are only 4 edits even though there are 5 rows in the matrix. The last edit is really just a boundary condition: it indicates how RV should handle frames past the end of the EDL. To be well formed, an RV EDL needs to include this.
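To make the interpretation concrete, here is a small sketch in plain Python that maps a global sequence frame to a (source index, source frame) pair using the same four arrays. The variable names edl_in and edl_out stand in for the in and out properties (in is a Python keyword), and the lookup function itself is hypothetical — it just assumes the usual interpretation that edit i plays source frames in[i] through out[i] starting at global frame frame[i]:

# EDL arrays from the example above.
frame   = [1, 11, 21, 31, 41]   # global frame at which each edit starts
source  = [0, 1, 0, 1, 0]       # which input plays during each edit
edl_in  = [1, 1, 11, 11, 0]     # first source frame of each edit
edl_out = [10, 10, 20, 20, 0]   # last source frame of each edit

def lookup(global_frame):
    # Find the edit whose [frame[i], frame[i+1]) interval contains the frame.
    for i in range(len(frame) - 1):
        if frame[i] <= global_frame < frame[i + 1]:
            return source[i], edl_in[i] + (global_frame - frame[i])
    raise ValueError("frame %d is past the end of the EDL" % global_frame)

print(lookup(1))    # (0, 1)  -> a.mov frame 1
print(lookup(15))   # (1, 5)  -> b.mov frame 5
print(lookup(40))   # (1, 20) -> b.mov frame 20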
6.7.2 RVLayoutGroup and RVStackGroup
The stack and layout groups can be made in a similar manner to the above. The important thing to remember is the inputs for all of these must be specified in the connections object of the file. Each of these view types uses the input ordering; in the case of the stack it determines what's on top and in the case of the layout it determines how automatic layout will be ordered.
6.7.3 RVOverlay
Burned-in metadata can be useful when creating session files. Shot status, artist, name, sequence, and other static information can be rendered on top of the source image directly by RV's renderer. Figure 6.1 shows an example of metadata rendered by the RVOverlay node.
Each RVSourceGroup can have an RVOverlay node. The RVOverlay node is used for matte rendering by the user interface, but it can do much more than that. The RVOverlay node currently supports drawing arbitrary filled rectangles and text in addition to the mattes. The text and filled rectangles are currently limited to static shapes and text; in a future version we plan on expanding this to dynamically updated text (e.g. drawing the current frame number, etc).
Text and rectangles rendered in this fashion are considered part of the image by RV. If you pass a session file with an active RVOverlay node to rvio it will render the overlay the same way RV would. This is completely independent of any rvio overlay scripts which use a different mechanism to generate overlay drawings and text.
Figure 6.2 shows an example which draws three colored boxes with text starting at each box's origin.
The session file used to create the example contains a movieproc source (white 720x480 image) with the overlay rendered on top of it. Note that the coordinates are normalized screen coordinates relative to the source image:
GTOa (3)

sourceGroup1_source : RVFileSource (1)
{
    media
    {
        string movie = "solid,red=1.0,green=1.0,blue=1.0,start=1,end=1,width=720,height=480.movieproc"
    }
}

sourceGroup1_overlay : RVOverlay (1)
{
    overlay
    {
        int show = 1
    }

    "rect:red"
    {
        float width = 0.3
        float height = 0.3
        float[4] color = [ [ 1.0 0.1 0.1 0.4 ] ]
        float[2] position = [ [ 0.1 0.1 ] ]
    }

    "rect:green"
    {
        float width = 0.6
        float height = 0.2
        float[4] color = [ [ 0.1 1.0 0.1 0.4 ] ]
        float[2] position = [ [ -0.2 -0.3 ] ]
    }

    "rect:blue"
    {
        float width = 0.2
        float height = 0.4
        float[4] color = [ [ 0.1 0.1 1.0 0.4 ] ]
        float[2] position = [ [ -0.5 -0.1 ] ]
    }

    "text:red"
    {
        float[2] position = [ [ 0.1 0.1 ] ]
        float[4] color = [ [ 0 0 0 1 ] ]
        float spacing = 0.8
        float size = 0.005
        float scale = 1
        float rotation = 0
        string font = ""
        string text = "red"
        int debug = 0
    }

    "text:green"
    {
        float[2] position = [ [ -0.2 -0.3 ] ]
        float[4] color = [ [ 0 0 0 1 ] ]
        float spacing = 0.8
        float size = 0.005
        float scale = 1
        float rotation = 0
        string font = ""
        string text = "green"
        int debug = 0
    }

    "text:blue"
    {
        float[2] position = [ [ -0.5 -0.1 ] ]
        float[4] color = [ [ 0 0 0 1 ] ]
        float spacing = 0.8
        float size = 0.005
        float scale = 1
        float rotation = 0
        string font = ""
        string text = "blue"
        int debug = 0
    }
}
Components in the RVOverlay which have names starting with “rect:” are used to render filled rectangles. Components starting with “text:” are used for text. The format is similar to that used by the RVPaint node, but the result is rendered for all frames of the source. The reference manual contains complete information about the RVOverlay node's properties and how they control rendering.
6.8 Limitations on Number of Open Files
RV does not impose any artificial limits on the number of sources which can be in an RV session file. However, the use of some file formats, namely QuickTime .mov, .avi, and .mp4, requires that the file remain open while RV is running.
Each operating system (and even shell on Unix systems) has different limits on the number of open files a process is allowed to have. For example on Linux the default is 1024 files. This means that you cannot open more than 1000 or so movie files without changing the default. RV checks the limit on startup and sets it to the maximum allowed by the system.
There are a number of operating system and shell dependent ways to change limits. Your facility may also have limits imposed by the IT department for accounting reasons.
6.9 What's the Best Way to Write a .rv (GTO) File?
GTO comes in three types: text (UTF8 or ASCII), binary, and compressed binary. RV can read all three types. RV normally writes text files unless an RVImageSource is present in the session (because an image was sent to it from another process instead of a file). In that case it will write a compressed binary GTO to save space on disk.
If you think you might want to generate binary files in addition to text files you can do so using the GTO API in C++ or Python. However, the text version is simple enough to write using only regular I/O APIs in any language. We recommend you write out .rv session files from RV and look at them in an editor to generate templates of the portions that are important to you. You can copy and paste parts of session files into source code as strings or even shell scripts as templates with variable substitution.
Chapter 7 Using Qt in Mu
Since version 3.8 RV has had limited Qt bindings in Mu. In 3.10 the number of available Qt classes has been greatly expanded. You can browse the Qt and other Mu modules with the documentation browser. RV 6 wraps the Qt 4.8 API.
Using Qt in Mu is similar to using it in C++. Each Qt class is presented as a Mu class which you can either use directly or inherit from if need be. However, there are some major differences that need to be observed:
Prior to RV 6, it was necessary to supply all parameters to a Mu function in Python even when those parameters had default values. This is no longer the case in RV 6: Python code can assume that default parameter values will be supplied if not specified.
- Not all Qt classes are wrapped in Mu. It's a good idea to look in the documentation browser to see if a class is available yet.
- Property names in C++ do not always match those in Mu. Mu collects Qt properties at runtime in order to provide limited support for unknown classes. So the set and get functions for the properties are generated at that time. Usually these names match the C++ names, but sometimes there are differences. In general, the Mu function to get a property called foo will be called foo(). The Mu function to set the foo property will be called setFoo(). (A good example of this is the QWidget property visible. In C++ the get function is isVisible() whereas the Mu function is called visible().)
- Templated classes in Qt are not available in Mu. Usually these are handled by dynamic array types or something analogous to the Qt class. In the case of template member functions (like QWidget::findChild<>) there may be an equivalent Mu version that operates slightly differently (like the Mu version QWidget.findChild).
- The QString class is not wrapped (yet). Instead, the native Mu string can be used anywhere a function takes a QString.
- You cannot control widget destruction. If you lose a reference to a QObject it will eventually be finalized (destroyed), but at an unknown time.
- Some classes cannot be inherited from. You can inherit from any QObject, QPainter, or QLayoutItem derived class except QWebFrame and QNetworkReply.
- The signal slot mechanism is slightly different in Mu than C++. It is currently not possible to make a new Qt signal, and slots do not need to be declared in a special way (but they do need to have the correct signatures to be connected). In addition, you are not required to create a QObject class to receive a signal in Mu. You can also connect a signal directly to a regular function if desired (as opposed to class member functions in C++).
- Threading is not yet available. The QThread class cannot be used in Mu yet.
- Abstract Qt classes can be instantiated. However, you can't really do anything with them.
- Protected member functions are public.
7.1 Signals and Slots
Possibly the biggest difference between the Mu and C++ Qt API is how signals and slots are handled. This discussion will assume knowledge of the C++ mechanism. See the Qt documentation if you don't know what signals and slots are.
Jumping right in, here is an example hello world MuQt program. This can be run from the mu-interp binary:
use qt;

\: clicked (void; bool checked)
{
    print("OK BYE\n");
    QCoreApplication.exit(0);
}

\: main ()
{
    let app    = QApplication(string[] {"hello.mu"}),
        window = QWidget(nil, Qt.Window),
        button = QPushButton("MuQt: HELLO WORLD!", window);

    connect(button, QPushButton.clicked, clicked);
    window.setSize(QSize(200, 50));
    window.show();
    window.raise();
    QApplication.exec();
}

main();
The main thing to notice in this example is the connect() function. A similar C++ version of this would look like this:
connect(button, SIGNAL(clicked(bool)), SLOT(myclickslot(bool)));
where myclickslot would be a slot function declared in a class. In Mu it's not necessary to create a class to receive a signal. In addition the SIGNAL and SLOT syntax is also unnecessary. However, it is necessary to exactly specify which signal is being referred to by passing its Mu function object directly. In this case QPushButton.clicked. The signal must be a function on the class of the first argument of connect().
In Mu, any function which matches the signal's signature can be used to receive the signal. The downside of this is that some functions like sender() are not available in Mu. However this is easily overcome with partial application. In the above case, if we need to know who sent the signal in our clicked function, we can change its signature to accept the sender and partially apply it in the connect call like so:
\: clicked (void; bool checked, QPushButton sender)
{
    // do something with sender
}

\: main ()
{
    ...
    connect(button, QPushButton.clicked, clicked(,button));
}
And of course additional information can be passed into the clicked function by applying more arguments.
It's also possible to connect a signal to a class method in Mu if the method signature matches. Partial application can be used in that case as well. This is frequently the case when writing a mode which uses Qt interface.
7.2 Inheriting from Qt Classes
It's possible to inherit directly from the Qt classes in Mu and override methods. Virtual functions in the C++ version of Qt are translated as class methods in Mu. Non-virtual functions are regular functions in the scope of the class. In practice this means that the Mu Qt class usage is very similar to the C++ usage.
The following example shows how to create a new widget type that implements a drop target. Drag and drop is one aspect of Qt that requires inheritance (in C++ and Mu):
use qt;

class: MyWidget : QWidget
{
    method: MyWidget (MyWidget; QObject parent, int windowFlags)
    {
        // REQUIRED: call base constructor to build Qt native object
        QWidget.QWidget(this, parent, windowFlags);
        setAcceptDrops(true);
    }

    method: dragEnterEvent (void; QDragEnterEvent event)
    {
        print("drop enter\n");
        event.acceptProposedAction();
    }

    method: dropEvent (void; QDropEvent event)
    {
        print("drop\n");

        let mimeData = event.mimeData(),
            formats  = mimeData.formats();

        print("--formats--\n");
        for_each (f; formats) print("%s\n" % f);

        if (mimeData.hasUrls())
        {
            print("--urls--\n");
            for_each (u; event.mimeData().urls()) print("%s\n" % u.toString(QUrl.None));
        }

        if (mimeData.hasText())
        {
            print("--text--\n");
            print("%s\n" % mimeData.text());
        }

        event.acceptProposedAction();
    }
}
Things to note in this example: the names of the drag and drop methods matter. These are the same names as used in C++. If you browse the documentation of a Qt class in Mu these will be the class methods. Only class methods can be overridden.
Chapter 8 Modes and Widgets
The user interface layer can augment the display and event handling in a number of different ways. For display, at the lowest level it's possible to intercept the render event, in which case you override all drawing. Similarly, for event handling you can bind functions in the global event table, possibly overwriting existing bindings and thus replacing their functions.
At a higher level, both display and event handling can be done via Modes and Widgets. A Mode is a class which manages an event table independent of the global event table and a collection of functions which are bound in that table. In addition the mode can have a render function which is automatically called at the right time to augment existing rendering instead of replacing it. The UI has code which manages modes so that they may be loaded externally only when needed and automatically turned on and off.
Modes are further classified as being minor or major. The only difference between them is that a major mode will always get precedence over any minor mode when processing events and there can be only a single major mode active at a time. There can be many minor modes active at once. Most extensions are created by creating a minor mode. RV currently has a single basic major mode.
By using a mode to implement a new feature or replace or augment an existing feature in RV you can keep your extensions separate from the portion of the UI that ships with RV. In other words, you never need to touch the shipped code and your code will remain isolated.
A further refinement of a mode is a widget. Widgets are minor modes which operate in a constrained region of the screen. When the pointer is in the region, the widget will receive events. When the pointer is outside the region it will not. Like a regular mode, a widget has a render function which can draw anywhere on the screen, but is usually constrained to its input region. For example, the image info box is a widget as is the color inspector.
Multiple modes and widgets may be active at the same time. At this time Widgets can only be programmed using Mu.
8.1 Outline of a Mode
In order to create a new mode you need to create a module for it and derive your mode class from the MinorMode class in the rvtypes module. The basic outline which we'll put in a file called new_mode.mu looks like this:
use rvtypes;

module: new_mode
{
    class: NewMode : MinorMode
    {
        method: NewMode (NewMode;)
        {
            init ("new-mode",
                  [ global bindings ... ],
                  [ local bindings ... ],
                  Menu(...) );
        }
    }

    \: createMode (Mode;)
    {
        return NewMode();
    }

} // end of new_mode module
The function createMode() is used by the mode manager to create your mode without knowing anything about it. It should be declared in the scope of the module (not your class) and simply create your mode object and initialize it if that's necessary.
When creating a mode it's necessary to call the init() function from within your constructor method. This function takes at least three arguments and as many as six. Chapter 10 goes into the structure in more detail. It's declared like this in rvtypes.mu:
method: init (void; string name, BindingList globalBindings, BindingList overrideBindings, Menu menu = nil, string sortKey = nil, int ordering = 0)
The “bindings” arguments supply event bindings for this mode. The bindings are only active when the mode is active and take precedence over any “global” bindings (bindings not associated with any mode). In your event function you can call the “reject” method on an event which will cause rv to pass it on to bindings “underneath” yours. This technique allows you to augment an existing binding instead of replacing it. The separation of the bindings into overrideBindings and globalBindings is due to backwards compatibility requirements, and is no longer meaningful.
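As a sketch of the reject pattern (shown here in Python; the handler name, key event, and message are illustrative only — event.reject() comes from the standard event object API):

def mySpeedKey(self, event):
    # Do the extra work this mode adds when the key is pressed...
    print("key handled by my mode first")
    # ...then pass the event on so any binding underneath still runs.
    event.reject()

The handler is registered in the mode's bindings list exactly like any other event function; without the reject() call it would replace the underlying binding instead of augmenting it.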
The menu argument allows you to pass in a menu structure which is merged into the main menu bar. This makes it possible to add new menus and menu items to the existing menus.
Finally the sortKey and ordering arguments allow fine control over the order in which event bindings are applied when multiple modes are active. First the ordering value is checked (default is 0 for all modes), then the sortKey (default is the mode name).
Again, see chapter 10 for more detailed information.
8.2 Outline of a Widget
A Widget looks just like a MinorMode declaration except you will derive from Widget instead of MinorMode and the base class init() function is simpler. In addition, you'll need to have a render() method (which is optional for regular modes).
use rvtypes;

module: new_widget
{
    class: NewWidget : Widget
    {
        method: NewWidget (NewWidget;)
        {
            init ("new-widget",
                  [ local bindings ... ] );
        }

        method: render (void; Event event)
        {
            ...
            updateBounds(min_point, max_point);
            ...
        }
    }

    \: createMode (Mode;)
    {
        return NewWidget();
    }

} // end of new_widget module
In the outline above, the function updateBounds() is called in the render() method. updateBounds() informs the UI about the bounding box of your widget. This function must be called by the widget at some point. If your widget can be interactively or procedurally moved, you will probably want to call it in your render() function as shown (it does not hurt to call it often). The min_point and max_point arguments are Vec2 types.
Chapter 9 Package System
With previous versions of RV we recommended directly hacking the UI code or setting up ad hoc locations in the MU_MODULE_PATH to place files.
For RV 3.6 or newer, we recommend using the new package system instead. The documentation in older versions of the reference manual is still valid, but we will no longer be using those examples. There are hardly any limitations to using the package system so no additional features are lost.
9.1 rvpkg Command Line Tool
The rvpkg command line tool makes it possible to manage packages from the shell. If you use rvpkg you do not need to use RV's preferences UI to install/uninstall or add/remove packages from the file system. We recommend using this tool instead of manually editing files to prevent the necessity of keeping abreast of how all the state is stored in new versions.
The rvpkg tool can perform a superset of the functions available in RV's packages preference user interface.
Note: many of the below commands, including install, uninstall, and remove will look for the designated packages in the paths in the RV_SUPPORT_PATH environment variable. If the package you want to operate on is not in a path listed there, that path can be added on the command line with the -include option.
9.1.1 Getting a List of Available Packages
shell> rvpkg -list
Lists all packages that are available in the RV_SUPPORT_PATH directories. Typical output from rvpkg looks like this:
I L - 1.7 "Annotation" /SupportPath/Packages/annotate-1.7.rvpkg I L - 1.1 "Documentation Browser" /SupportPath/Packages/doc_browser-1.1.rvpkg I - O 1.1 "Export Cuts" /SupportPath/Packages/export_cuts-1.1.rvpkg I - O 1.3 "Missing Frame Bling" /SupportPath/Packages/missing_frame_bling-1.3.rvpkg I - O 1.4 "OS Dependent Path Conversion" /SupportPath/Packages/os_dependent_path_conversion_mode-1.4.rvpkg I - O 1.1 "Nuke Integration" /SupportPath/Packages/rvnuke-1.1.rvpkg I - O 1.2 "Sequence From File" /SupportPath/Packages/sequence_from_file-1.2.rvpkg I L - 1.3 "Session Manager" /SupportPath/Packages/session_manager-1.3.rvpkg I L - 2.2 "RV Color/Image Management" /SupportPath/Packages/source_setup-2.2.rvpkg I L - 1.3 "Window Title" /SupportPath/Packages/window_title-1.3.rvpkg
The first three columns indicate installation status (I), load status (L), and whether or not the package is optional (O).
If you want to include a support path directory that is not in RV_SUPPORT_PATH, you can include it like this:
shell> rvpkg -list -include /path/to/other/support/area
To restrict the listing to a single support area, ignoring the directories in RV_SUPPORT_PATH, use -only:

shell> rvpkg -list -only /path/to/area
9.1.2 Getting Information About the Environment
shell> rvpkg -env
This shows the package areas, constructed from the RV_SUPPORT_PATH environment variable, to which packages may be added, removed, installed, and uninstalled. The list may differ based on the platform.
9.1.3 Getting Information About a Package
shell> rvpkg -info /path/to/file.rvpkg
Name: Window Title
Version: 1.3
Installed: YES
Loadable: YES
Directory:
Author: Tweak Software
Organization: Tweak Software
Contact: an actual email address
URL: http://www.tweaksoftware.com
Requires:
RV-Version: 3.9.11
Hidden: YES
System: YES
Optional: NO
Writable: YES
Dir-Writable: YES
Modes: window_title
Files: window_title.mu
9.1.4 Adding a Package to a Support Area
shell> rvpkg -add /path/to/area /path/to/file1.rvpkg /path/to/file2.rvpkg
9.1.5 Removing a Package from a Support Area
shell> rvpkg -remove /path/to/area/Packages/file1.rvpkg
Unlike adding, the package in this case is the one in the support area's Packages directory. You can remove multiple packages at the same time.
If the package is installed rvpkg will interactively ask for confirmation to uninstall it first. You can override that by using -force as the first argument:
shell> rvpkg -force -remove /path/to/area/Packages/file1.rvpkg
9.1.6 Installing and Uninstalling Available Packages
shell> rvpkg -install /path/to/area/Packages/file1.rvpkg
shell> rvpkg -uninstall /path/to/area/Packages/file1.rvpkg
If files are missing when uninstalling, rvpkg may complain. This can happen if multiple versions were somehow installed into the same area.
9.1.7 Combining Add and Install for Automated Installation
If you're using rvpkg from an automated installation script you will want to use the -force option to prevent the need for interaction. rvpkg will assume the answer to any questions it might ask is “yes”. This will probably be the most common usage:
shell> rvpkg -force -install -add /path/to/area /path/to/some/file1.rvpkg
Multiple packages can be specified with this command. All of the packages are installed into /path/to/area.
Similarly, an installed package can be uninstalled and removed in one non-interactive step:

shell> rvpkg -force -remove /path/to/area/Packages/file1.rvpkg
9.1.8 Overriding Default Optional Package Load Behavior
shell> rvpkg -optin /path/to/area/Packages/file1.rvpkg
In this case, rvpkg will rewrite the rvload2 file associated with the support area to indicate the package is no longer optional. The user can still unload the package if they want, but it will be loaded by default after running the command.
9.2 Package File Contents
A package file is a zip file containing at least one special file called PACKAGE along with .mu, .so, .dylib, and support files (plain text, images, icons, etc) which implement the actual package.
Creating a package requires the zip binary. The zip binary is usually part of the default install on each of the OSes that RV runs on.
The contents of the package should NOT be put in a parent directory before being zipped up. The PACKAGE manifest as well as any other files should be at the root level of the zip file.
When a package is installed, RV will place all of its contents into subdirectories in one of the RV_SUPPORT_PATH locations. If the RV_SUPPORT_PATH is not defined in the environment, it is assumed to have the value of RV_HOME/plugins followed by the home directory support area (which varies with each OS: see the user manual for more info). Files contained in one zip file will all be under the same support path directory; they will not be distributed over more than one support path location.
The install locations of files in the zip file are described in a file called PACKAGE which must be present in the zip file. The minimum package file contains two files: PACKAGE and one other file that will be installed. A package zip file must reside in the subdirectory called Packages in one of the support path locations in order to be installed. When the user adds a package in the RV package manager, this is where the file is copied to.
9.3 PACKAGE Format
The PACKAGE file is a YAML file providing information about how the package is used and installed as well as user documentation. Every package must have a PACKAGE file with an accurate description of its contents.
Each element of the modes list describes one Mu module which is implemented as either a .mu file or a .so file. Files implementing modes are assumed to be Mu module files and will be placed in the Mu subdirectory of the support path location. The other fields are used to optionally create a menu item and/or a shortcut key, either of which will toggle the mode on/off. The load field indicates when the mode should be loaded: if the value is “delay” the mode will be loaded the first time it is activated, if the value is “immediate” the mode will be loaded on start up.
package: Window Title
author: Tweak Software
organization: Tweak Software
contact: some email address of the usual form
version: 1.0
url: http://www.tweaksoftware.com
rv: 3.6
requires: ''

modes:
  - file: window_title
    load: immediate

description: |
  <p>
  This package sets the window title to something that indicates the
  currently viewed media.
  </p>

  <h2>How It Works</h2>

  <p>
  The events play-start, play-stop, and frame-changed, are bound to
  functions which call setWindowTitle().
  </p>
When the package zip file contains additional support files (which are not specified as modes) the package manager will try to install them in locations according to the file type. However, you can also directly specify where the additional files go relative to the support path root directory.
For example, if your package contains icon files for its user interface, they can be forced into the support files area of the package like this:
files:
  - file: myicon.tif
    location: SupportFiles/$PACKAGE
9.4 Package Management Configuration Files
There are two files which the package manager creates and uses: rvload2 (previous releases had a file called rvload) in the Mu subdirectory and rvinstall in the Packages subdirectory. rvload2 is used on start up to load package modes and create stubs in menus or events for toggling the modes on/off if they are lazy loaded. rvinstall lists the currently known package zip files, with an asterisk in front of each file that is installed. The rvinstall file is used only by the package manager in the preferences to keep track of which packages are which.
The rvload2 file has a one line entry for each mode that it knows about. This file is automatically generated by the package manager when the user installs a package with modes in it. The first line of the file indicates the version number of the rvload2 file itself (so we can change it in the future) followed by the one line descriptions.
3 window_title,window_title.zip,nil,nil,nil,true,true,false
- The mode name (as it appears in a require statement in Mu)
- The name of the package zip file the mode originally comes from
- An optional menu item name
- An optional menu shortcut/accelerator if the menu item exists
- An optional event to bind mode toggling to
- A boolean indicating whether the mode should be loaded immediately or not
- A boolean indicating whether the mode should be activated immediately
- A boolean indicating whether the mode is optional so it should not be loaded by default unless the user opts-in. (Added in 3.10.9. The rvload2 file version was also bumped up to version 3.)
Each field is separated by a comma and there should be no extra whitespace on the line. The rvinstall file is much simpler: it contains a single zip file name on each line, with an asterisk next to any file which is currently known to be installed. For example:
crop.zip
layer_select.zip
metadata_info.zip
sequence_from_file.zip
*window_title.zip
In this case, five modes would appear in the package manager UI, but only the window title package is actually installed. The zip files should exist in the same directory that rvinstall lives in.
9.5 Developing a New Package
In order to start a new package there is a chicken and egg problem which needs to be overcome: the package system wants to have a package file to install.
The best way to start is to create a source directory somewhere (like your source code repository) where you can build the zip file from its contents. Create a file called PACKAGE in that directory by copying and pasting from either this manual (listing 9.3) or from another package you know works, and edit the file to reflect what you will be doing (i.e. give it a name, etc).
If you are writing a Mu module implementing a mode or widget (which is also a mode) then create the .mu file in that directory also. Then build the package file with the zip command:
shell> zip new_package-0.0.rvpkg PACKAGE the_new_mode.mu
This will create the new_package-0.0.rvpkg file. At this point you're ready to install your package that doesn't do anything. Open RV's preferences and in the package manager UI add the zip file and install it (preferably in your home directory so it's visible only to you while you implement it).
Once you've done this, the rvload2 and rvinstall files will have been either created or updated automatically. You can then start hacking on the installed version of your Mu file (not the one in the directory you created the zip file in). Once you have it working the way you want, copy it back to your source directory, create the final zip file for distribution, and delete the one that was added by RV into the Packages directory.
9.5.1 Older Package Files (.zip)
RV version 3.6 used the extension .zip for its package files. This still works, but newer versions prefer the extension .rvpkg along with a preceding version indicator. So a new style package will look like: rvpackagename-X.Y.rvpkg where X.Y is the package version number that appears in the PACKAGE file. New style package files are required to have the version in the file name.
9.5.2 Using the Mode Manager While Developing
It's possible to delay making an actual package file when starting development on individual modes. You can force RV to load your mode (assuming it's in the MU_MODULE_PATH someplace) like so:
shell> rv -flags ModeManagerLoad=my_new_mode
You can get verbose information on what's being loaded and why (or why not) by setting the verbose flag:
shell> rv -flags ModeManagerVerbose
shell> rv -flags ModeManagerVerbose ModeManagerLoad=my_new_mode
If your package is installed already and you want to force it to be loaded (this overrides the user preferences) then:
shell> rv -flags ModeManagerPreload=my_already_installed_mode
Conversely, to prevent an already installed mode from being loaded:

shell> rv -flags ModeManagerReject=my_already_installed_mode
9.5.3 Using -debug mu
Normally, RV will compile Mu files to conserve space in memory. Unfortunately, that means losing a lot of information like source locations when exceptions are thrown. You can tell RV to allow debugging information by adding -debug mu to the end of the RV command line. This will consume more memory but report source file information when displaying a stack trace.
9.5.4 The Mu API Documentation Browser
The Mu modules are documented dynamically by the documentation browser. This is available under RV's help menu “Mu API Documentation Browser”.
9.6 Loading Versus Installing and User Override
The package manager allows each user to individually install and uninstall packages in support directories that they have permission in. For directories that the user does not have permission in the package manager maintains a separate list of packages which can be excluded by the user.
For example, there may be a package installed facility wide owned by an administrator. The support directory with facility wide packages only allows read permission for normal users. Packages that were installed and loaded by the administrator will be automatically loaded by all users.
In order to allow a user to override the loading of system packages, the package manager keeps a list of packages not to load. This is kept in the user's preferences file (see user manual for location details). In the package manager UI the “load” column indicates the user status for loading each package in his/her path.
9.6.1 Optional Packages
The load status of optional packages is also kept in the user's preferences, however these packages use a different preference variable to determine whether or not they should be loaded. By default optional packages are not loaded when installed. A package is made optional by setting the “optional” value in the PACKAGE file to true.
Chapter 10 A Simple Package
This first example will show how to create a package that defines some key bindings and creates a custom personal menu. You will not need to edit a .rvrc.mu file to do this as in previous versions.
We'll be creating a package intended to keep all our personal customizations. To start with we'll need to make a Mu module that implements a new mode. At first it won't do anything at all: it will just load at start up. Put the following into a file called mystuff.mu.
use rvtypes;
use extra_commands;
use commands;

module: mystuff
{
    class: MyStuffMode : MinorMode
    {
        method: MyStuffMode (MyStuffMode;)
        {
            init("mystuff-mode", nil, nil, nil);
        }
    }

    \: createMode (Mode;)
    {
        return MyStuffMode();
    }

} // end module
Now we need to create a PACKAGE file in the same directory before we can create the package zip file. It should look like this:
package: My Stuff
author: M. VFX Artiste
version: 1.0
rv: 3.6
requires: ''

modes:
  - file: mystuff
    load: immediate

description: |
  <p>M. VFX Artiste's Personal RV Customizations</p>
Assuming both files are in the same directory, we create the zip file using this command from the shell:
shell> zip mystuff-1.0.rvpkg PACKAGE mystuff.mu
The file mystuff-1.0.rvpkg should have been created. Now start RV, open the preferences package pane and add the mystuff-1.0.rvpkg package. You should now be able to install it. Make sure the package is both installed and loaded in your home directory's RV support directory so it's private to you.
At this point, we'll edit the installed Mu file directly so we can see results faster. When we have something we like, we'll copy it back to the original mystuff.mu and make the rvpkg file again with the new code. Be careful not to uninstall the mystuff package while we're working on it or our changes will be lost. Alternately, for the more paranoid (and wiser), we could edit the file elsewhere and simply copy it onto the installed file.
To start with let's add two functions on the “<” and “>” keys to speed up and slow down the playback by increasing and decreasing the FPS. There are two main things we need to do: add two methods to the class which implement speeding up and slowing down, and bind those functions to the keys.
First let's add the new methods after the class constructor MyStuffMode() along with two global bindings to the “<” and “>” keys. The class definition should now look like this:
...

class: MyStuffMode : MinorMode
{
    method: MyStuffMode (MyStuffMode;)
    {
        init("mystuff-mode",
             [("key-down-->", faster, "speed up fps"),
              ("key-down--<", slower, "slow down fps")],
             nil,
             nil);
    }

    method: faster (void; Event event)
    {
        setFPS(fps() * 1.5);
        displayFeedback("%g fps" % fps());
    }

    method: slower (void; Event event)
    {
        setFPS(fps() * 1.0/1.5);
        displayFeedback("%g fps" % fps());
    }
}
The bindings are created by passing a list of tuples to the init function. Each tuple contains three elements: the event name to bind to, the function to call when it is activated, and a single line description of what it does. In Mu a tuple is formed by putting parenthesis around comma separated elements. A list is formed by enclosing its elements in square brackets. So a list of tuples will have the form:
[ (...), (...), ... ]
("key-down-->", faster, "speed up fps")
So the event in this case is key-down--> which means the point at which the > key is pressed. The symbol faster is referring to the method we declared above. So faster will be called whenever the key is pressed. Similarly we bind slower (from above as well) to key-down--<.
("key-down--<", slower, "slow down fps")
[("key-down-->", faster, "speed up fps"), ("key-down--<", slower, "slow down fps")]
For reference, the same mode written in Python looks like this:

from rv.rvtypes import *
from rv.commands import *
from rv.extra_commands import *

class PyMyStuffMode(MinorMode):

    def __init__(self):
        MinorMode.__init__(self)
        self.init("py-mystuff-mode",
                  [ ("key-down-->", self.faster, "speed up fps"),
                    ("key-down--<", self.slower, "slow down fps") ],
                  None,
                  None)

    def faster(self, event):
        setFPS(fps() * 1.5)
        displayFeedback("%g fps" % fps(), 2.0)

    def slower(self, event):
        setFPS(fps() * 1.0/1.5)
        displayFeedback("%g fps" % fps(), 2.0)

def createMode():
    return PyMyStuffMode()
10.1 How Menus Work
Adding a menu is fairly straightforward if you understand how to create a MenuItem. There are different types of MenuItems: items that you can select in the menu and cause something to happen, and items that are themselves menus (sub-menus). The first type is constructed in Mu using this constructor (shown here in prototype form):
MenuItem(string label, (void;Event) actionHook, string key, (int;) stateHook);
("label", actionHook, "key", stateHook)
The actionHook and stateHook arguments need some explanation. The other two (the label and key) are easier: the label is the text that appears in the menu item and the key is a hot key for the menu item.
The actionHook is the purpose of the menu item: it is a function or method which will be called when the menu item is activated. This is just like the method we used with bind() — it takes an Event object. If actionHook is nil, then the menu item won't do anything when the user selects it.
The stateHook provides a way to check whether the menu item should be enabled (or greyed out): it is a function or method that returns an int. In fact, it is really returning one of the following symbolic constants: NeutralMenuState, UncheckMenuState, CheckedMenuState, MixedStateMenuState, or DisabledMenuState. If the value of stateHook is nil, the menu item is assumed to always be enabled, but not checked or in any other state.
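For example, a stateHook might report whether playback is currently running so the menu item shows a check mark. A sketch in Python: isPlaying() and the menu state constants come from the rv.commands module, while the function name playingState is hypothetical; it would be passed as the fourth element of a menu item tuple as shown in the next section:

from rv.commands import isPlaying, CheckedMenuState, UncheckedMenuState

def playingState():
    # Checked while playback is running, unchecked otherwise.
    return CheckedMenuState if isPlaying() else UncheckedMenuState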
A menu item which is itself a sub-menu is built with the second form of the constructor:

MenuItem(string label, MenuItem[] subMenu);
("label", subMenu)
Usually we'll be defining a whole menu — which is an array of MenuItems. So we can use the array initialization syntax to do something like this:
let myMenu = MenuItem {"My Menu", Menu {
                 {"Menu Item", menuItemFunc, nil, menuItemState},
                 {"Other Menu Item", menuItemFunc2, nil, menuItemState2}
             }}
MenuItem myMenu = {"My Menu", Menu {
                       {"Menu Item", menuItemFunc, nil, menuItemState},
                       {"Other Menu Item", menuItemFunc2, nil, menuItemState2},
                       {"Sub-Menu", Menu {
                           {"First Sub-Menu Item", submenuItemFunc1, nil, submenu1State}
                       }}
                   }};
("My Menu", [
    ("Menu Item", menuItemFunc, None, menuItemState),
    ("Other Menu Item", menuItemFunc2, None, menuItemState2)])
You'll see this on a bigger scale in the rvui module where most of the menu bar is declared in one large constructor call.
10.2 A Menu in MyStuffMode
Now back to our mode. Let's say we want to put our faster and slower functions on menu items in the menu bar. The fourth argument to the init() function in our constructor takes a menu representing the menu bar. You only define menus which you want to either modify or create. The contents of our main menu will be merged into the menu bar.
By "merge into" we mean that menus with the same name will share their contents. So, for example, if we add a File menu in our mode, RV will not create a second File menu on the menu bar; it will add the contents of our File menu to the existing one. On the other hand, if we call our menu MyStuff, RV will create a brand new menu for us (since presumably MyStuff doesn't already exist). This algorithm is applied recursively, so sub-menus with the same name will also be merged, and so on.
So let's add a new menu called MyStuff with two items in it to control the FPS. In this example, we're only showing the actual init() call from mystuff.mu:
init("mystuff-mode",
     [("key-down-->", faster, "speed up fps"),
      ("key-down--<", slower, "slow down fps")],
     nil,
     Menu {
         {"MyStuff", Menu {
             {"Increase FPS", faster, nil},
             {"Decrease FPS", slower, nil}
         }}
     });
If we wanted to use menu accelerators instead of (or in addition to) the regular event bindings we add those in the menu item constructor. For example, if we wanted to also use the keys - and = for slower and faster we could do this:
init("mystuff-mode",
     [("key-down-->", faster, "speed up fps"),
      ("key-down--<", slower, "slow down fps")],
     nil,
     Menu {
         {"MyStuff", Menu {
             {"Increase FPS", faster, "="},
             {"Decrease FPS", slower, "-"}
         }}
     });
The advantage of using the event bindings instead of the accelerator keys is that they can be overridden, mapped and unmapped by other modes, and "chained" together. Of course we could also use > and < for the menu accelerator keys as well (or instead of using the event bindings).
from rv.rvtypes import *
from rv.commands import *
from rv.extra_commands import *

class PyMyStuffMode(MinorMode):

    def __init__(self):
        MinorMode.__init__(self)
        self.init("py-mystuff-mode",
                  [("key-down-->", self.faster, "speed up fps"),
                   ("key-down--<", self.slower, "slow down fps")],
                  None,
                  [("MyStuff",
                      [("Increase FPS", self.faster, "=", None),
                       ("Decrease FPS", self.slower, "-", None)])])

    def faster(self, event):
        setFPS(fps() * 1.5)
        displayFeedback("%g fps" % fps(), 2.0)

    def slower(self, event):
        setFPS(fps() * 1.0/1.5)
        displayFeedback("%g fps" % fps(), 2.0)

def createMode():
    return PyMyStuffMode()
10.3 Finishing up
Finally, we'll rebuild the rvpkg package by copying mystuff.mu back to the temporary directory containing the PACKAGES file, where we originally made the rvpkg file.
Next, start RV and uninstall and remove the mystuff package so it no longer appears in the package manager UI. Once you've done this, recreate the rvpkg file from scratch with the new mystuff.mu file and the PACKAGES file:
shell> zip mystuff-1.0.rvpkg PACKAGES mystuff.mu
shell> zip mystuff-1.0.rvpkg PACKAGES mystuff.py
You can now add the latest mystuff-1.0.rvpkg file back to RV and use it. In the future, add personal customizations directly to this package and you'll always have a single file you can install to customize RV.
Chapter 11 The Custom Matte Package
Now that we've tried the simple stuff, let's do something useful. RV has a number of settings for viewing mattes. These are basically regions of the frame that are darkened or completely blackened to simulate what an audience will see when the movie is projected. The size and shape of the matte is an artistic decision and sometimes a unique matte will be required.
Previous versions of this manual presented a different approach which still works in RV 3.6, but is no longer the preferred method.
In this example we'll create a Python package that reads a file when RV starts to get a list of matte geometry and names. We'll make a custom menu out of these which will set some state in the UI.
To start with, we'll assume that the path to the file containing the mattes is located in an environment variable called RV_CUSTOM_MATTE_DEFINITIONS. We'll get the value of that variable, open and parse the file, and create a data structure holding all of the information about the mattes. If the variable is not defined, we will provide a way for the user to locate the file through an open-file dialog and then parse it.
11.1 Creating the Package
Use the same method described in Chapter 10 to begin working on the package. If you haven't read that chapter please do so first. A completed version of the package created in this chapter is included in the RV distribution. So using that as reference is a good idea.
11.2 The Custom Matte File
The file will be a very simple comma separated value (CSV) file. Each line starts with the name of the custom matte (shown in the menu) followed by four floating point values and then a text field description which will be displayed when that matte is activated. So each line will look something like this:
matte menu name, aspect ratio, fraction of image visible, center point of matte in X, center point of matte in Y, descriptive text
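For example, a definitions file might contain lines like these (the names and values here are purely hypothetical):

Full Frame 1.85, 1.85, 1.0, 0.0, 0.0, Projection matte for 1.85 theatrical
Scope 2.39, 2.39, 0.8, 0.0, 0.1, 2.39 scope matte with raised center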
11.3 Parsing the Matte File
Before we actually parse the file, we should decide what we want when we're done. In this case we're going to make our own data structure to hold the information in each line of the file. We'll hold all of the information we collect in a Python dictionary with the following keys:
"name", "ratio", "heightVisible", "centerX", "centerY", and "text"
Next we'll write a method for our mode that does the parsing and updates our internal mattes dictionary.
If you are unfamiliar with object-oriented programming you can substitute the word function for method. This manual will sometimes refer to a method as a function. It will never refer to a non-method function as a method.
def updateMattesFromFile(self, filename):

    # Make sure the definition file exists
    if (not os.path.exists(filename)):
        raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" +
                         " definition file: '%s'" % filename)

    # Walk through the lines of the definition file collecting matte
    # parameters
    order = []
    mattes = {}
    for line in open(filename).readlines():
        tokens = line.strip("\n").split(",")
        if (len(tokens) == 6):
            order.append(tokens[0])
            mattes[tokens[0]] = {
                "name"          : tokens[0],
                "ratio"         : tokens[1],
                "heightVisible" : tokens[2],
                "centerX"       : tokens[3],
                "centerY"       : tokens[4],
                "text"          : tokens[5]}

    # Make sure we got some valid mattes
    if (len(order) == 0):
        self._order = []
        self._mattes = {}
        raise KnownError("ERROR: Custom Mattes Mode: Empty mattes" +
                         " definition file: '%s'" % filename)

    self._order = order
    self._mattes = mattes
There are a number of things to note in this function. First, the order in which the definitions are read from the mattes file is stored in the “_order” Python list. The “_mattes” dictionary's keys are the same as the entries of the “_order” list, but since dictionaries are not ordered we use the list to remember the order.
We check to see if the file actually exists and, if not, simply raise a KnownError exception. So the caller of this function will have to be ready to catch a KnownError if the matte definition file cannot be found or if it is empty. KnownError is simply our own exception class. Having our own exception class allows us to raise and catch exceptions that we know about while letting ones we don't expect still reach the user. Here is the definition of our KnownError exception class:
class KnownError(Exception): pass
We use the built-in Python readlines() method to go through the mattes file contents one line at a time. Each time through the loop, the next line is split on commas, since that is how we defined the fields of each line.
If there are not exactly 6 tokens after splitting the line, that means the line is corrupt and we ignore it. Otherwise, we add a new dictionary to our “_mattes” dictionary of matte definition dictionaries.
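For instance, given the first hypothetical line shown earlier, the entry added to “_mattes” would look roughly like this (note that the values are kept as strings, leading spaces and all; they are converted with float() only when a matte is actually selected):

{
    "name"          : "Full Frame 1.85",
    "ratio"         : " 1.85",
    "heightVisible" : " 1.0",
    "centerX"       : " 0.0",
    "centerY"       : " 0.0",
    "text"          : " Projection matte for 1.85 theatrical"
}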
try:
    definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"]
except KeyError:
    definition = ""
from rv import commands, rvtypes import os class KnownError(Exception): pass class CustomMatteMinorMode(rvtypes.MinorMode): def __init__(self): rvtypes.MinorMode.__init__(self) self._order = [] self._mattes = {} self._currentMatte = "" self.init("custom-mattes-mode", None, None, None) try: definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"] except KeyError: definition = "" try: self.updateMattesFromFile(definition) except KnownError,inst: print(str(inst)) def updateMattesFromFile(self, filename): # Make sure the definition file exists if (not os.path.exists(filename)): raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" + " definition file: '%s'" % filename) # Walk through the lines of the definition file collecting matte # parameters order = [] mattes = {} for line in open(filename).readlines(): tokens = line.strip("\n").split(",") if (len(tokens) == 6): order.append(tokens[0]) mattes[tokens[0]] = { "name" : tokens[0], "ratio" : tokens[1], "heightVisible" : tokens[2], "centerX" : tokens[3], "centerY" : tokens[4], "text" : tokens[5]} # Make sure we got some valid mattes if (len(order) == 0): self._order = [] self._mattes = {} raise KnownError("ERROR: Custom Mattes Mode: Empty mattes" + " definition file: '%s'" % filename) self._order = order self._mattes = mattes def createMode(): return CustomMatteMinorMode()
11.4 Adding Bindings and Menus
The mode constructor needs to do three things: call the file parsing function, do something sensible if the matte file parsing fails, and build a menu with the items found in the matte file as well as add bindings to the menu items.
We have already gone over the parsing. Once parsing is done we either have a good list of mattes or an empty one, but either way we move on to setting up the menus. Here is the method that will build the menus and bindings.
def setMenuAndBindings(self):

    # Walk through all of the mattes adding a menu entry as well as a
    # hotkey binding for alt + index number
    # NOTE: The bindings will only matter for the first 9 mattes since you
    # can't really press "alt-10".
    matteItems = []
    bindings = []
    if (len(self._order) > 0):
        matteItems.append(("No Matte", self.selectMatte(""), "alt `",
                           self.currentMatteState("")))
        bindings.append(("key-down--alt--`", ""))
        for i,m in enumerate(self._order):
            matteItems.append((m, self.selectMatte(m), "alt %d" % (i+1),
                               self.currentMatteState(m)))
            bindings.append(("key-down--alt--%d" % (i+1), m))
    else:
        def nada():
            return commands.DisabledMenuState
        matteItems = [("RV_CUSTOM_MATTE_DEFINITIONS UNDEFINED", None, None, nada)]

    # Always add the option to choose a new definition file
    matteItems += [("_", None)]
    matteItems += [("Choose Definition File...", self.selectMattesFile, None, None)]

    # Clear the menu then add the new entries
    matteMenu = [("View", [("_", None), ("Custom Mattes", None)])]
    commands.defineModeMenu("custom-mattes-mode", matteMenu)
    matteMenu = [("View", [("_", None), ("Custom Mattes", matteItems)])]
    commands.defineModeMenu("custom-mattes-mode", matteMenu)

    # Create hotkeys for each matte
    for b in bindings:
        (event, matte) = b
        commands.bind("custom-mattes-mode", "global", event,
                      self.selectMatte(matte), "")
You can see that creating the menus and bindings walks through the contents of our “_mattes” dictionary in the order dictated by “_order”. If no valid mattes were found, then we add a menu item alerting the user that the environment variable was not defined. You can also see from the example above that each menu entry is set to trigger a call to selectMatte for the associated matte definition. This is a neat technique where we use a factory method to create the event-handling method for each valid matte we found. Here is the content of that method:
def selectMatte(self, matte):

    # Create a method that is specific to each matte for setting the
    # relevant session node properties to display the matte
    def select(event):
        self._currentMatte = matte
        if (matte == ""):
            commands.setIntProperty("#Session.matte.show", [0], True)
            extra_commands.displayFeedback("Disabling mattes", 2.0)
        else:
            m = self._mattes[matte]
            commands.setFloatProperty("#Session.matte.aspect",
                                      [float(m["ratio"])], True)
            commands.setFloatProperty("#Session.matte.heightVisible",
                                      [float(m["heightVisible"])], True)
            commands.setFloatProperty("#Session.matte.centerPoint",
                                      [float(m["centerX"]), float(m["centerY"])], True)
            commands.setIntProperty("#Session.matte.show", [1], True)
            extra_commands.displayFeedback(
                "Using '%s' matte" % matte, 2.0)

    return select
Notice that selectMatte() itself does not set anything; the returned select() function simply uses whatever matte name was passed in. Since select() is going to be called when the menu item is selected, it needs to be an event function (a function which takes an Event as an argument and returns nothing). In the case where we want no matte drawn, we'll pass in the empty string (“”).
The menu state function (which will put a check mark next to the current matte) has a similar problem. We'll solve it the same way: we'll create a method which, given a matte, returns a function, and the returned function will be our menu state function. This sounds complicated, but it's simple in use:
def currentMatteState(self, m):

    def matteState():
        if (m != "" and self._currentMatte == m):
            return commands.CheckedMenuState
        return commands.UncheckedMenuState

    return matteState

The thing to note here is that the parameter m passed into currentMatteState() is used inside the function that it returns. The m inside the matteState() function is known as a free variable. The value of this variable at the time currentMatteState() is called becomes wrapped up with the returned function. One way to think about this is that each time you call currentMatteState() with a new value for m, it returns a different copy of the matteState() function in which the internal m is replaced by the value of currentMatteState()'s m.
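If this closure behavior is unfamiliar, here is a tiny standalone illustration of the same idea outside of RV entirely (the names are hypothetical):

# Each call to makeChecker() captures its own value of m, so the two
# returned functions compare against different matte names.
def makeChecker(m):
    def check(current):
        return m == current
    return check

a = makeChecker("Scope 2.39")
b = makeChecker("Full Frame 1.85")
print(a("Scope 2.39"), b("Scope 2.39"))   # prints: True False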
Selecting mattes is not the only menu option we added in setMenuAndBindings(). We also added an option to select the matte definition file (or change the selected one) if none was found before. Here are the contents of the selectMattesFile() method:
def selectMattesFile(self, event):
    definition = commands.openFileDialog(True, False, False, None, None)[0]
    try:
        self.updateMattesFromFile(definition)
    except KnownError,inst:
        print(str(inst))
    self.setMenuAndBindings()
Notice that we basically repeat what we did before when parsing the mattes definition file named by the environment variable: we update our internal mattes structures and then set up the menus and bindings.
It is also important to clear out any existing bindings when we load a new mattes file. Therefore we should modify our parsing function to do this for us, like so:
def updateMattesFromFile(self, filename):

    # Make sure the definition file exists
    if (not os.path.exists(filename)):
        raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" +
                         " definition file: '%s'" % filename)

    # Clear existing key bindings
    for i in range(len(self._order)):
        commands.unbind("custom-mattes-mode", "global",
                        "key-down--alt--%d" % (i+1))

    ... THE REST IS AS BEFORE ...
At this point the mode's constructor looks like this:

class CustomMatteMinorMode(rvtypes.MinorMode):

    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        self._order = []
        self._mattes = {}
        self._currentMatte = ""
        self.init("custom-mattes-mode", None, None, None)
        try:
            definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"]
        except KeyError:
            definition = ""
        try:
            self.updateMattesFromFile(definition)
        except KnownError,inst:
            print(str(inst))
11.5 Handling Settings
Wouldn't it be nice to have our package remember what our last matte setting was and where the last definition file was? Let's see how to add settings. First things first: we need to write our settings in order to read them back later. Let's start by writing out the location of our mattes definition file when we parse a new one. Here is an updated version of updateMattesFromFile():
def updateMattesFromFile(self, filename): # Make sure the definition file exists if (not os.path.exists(filename)): raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" + " definition file: '%s'" % filename) # Clear existing key bindings for i in range(len(self._order)): commands.unbind( "custom-mattes-mode", "global", "key-down--alt--%d" % (i+1)) # Walk through the lines of the definition file collecting matte # parameters order = [] mattes = {} for line in open(filename).readlines(): tokens = line.strip("\n").split(",") if (len(tokens) == 6): order.append(tokens[0]) mattes[tokens[0]] = { "name" : tokens[0], "ratio" : tokens[1], "heightVisible" : tokens[2], "centerX" : tokens[3], "centerY" : tokens[4], "text" : tokens[5]} # Make sure we got some valid mattes if (len(order) == 0): self._order = [] self._mattes = {} raise KnownError("ERROR: Custom Mattes Mode: Empty mattes" + " definition file: '%s'" % filename) # Save the definition path and assign the mattes commands.writeSettings( "CUSTOM_MATTES", "customMattesDefinition", filename) self._order = order self._mattes = mattes
Notice how at the bottom of the function we are now writing the definition file path to the CUSTOM_MATTES settings. Now let's also update the selectMatte() method to remember which matte we selected.
def selectMatte(self, matte): # Create a method that is specific to each matte for setting the # relevant session node properties to display the matte def select(event): self._currentMatte = matte if (matte == ""): commands.setIntProperty("#Session.matte.show", [0], True) extra_commands.displayFeedback("Disabling mattes", 2.0) else: m = self._mattes[matte] commands.setFloatProperty("#Session.matte.aspect", [float(m["ratio"])], True) commands.setFloatProperty("#Session.matte.heightVisible", [float(m["heightVisible"])], True) commands.setFloatProperty("#Session.matte.centerPoint", [float(m["centerX"]), float(m["centerY"])], True) commands.setIntProperty("#Session.matte.show", [1], True) extra_commands.displayFeedback( "Using '%s' matte" % matte, 2.0) commands.writeSettings("CUSTOM_MATTES", "customMatteName", matte) return select
Notice the second-to-last line: we save the matte that was just selected. Lastly, let's see what we have to do to make use of these settings when we initialize our mode. Here is the final version of the constructor:
class CustomMatteMinorMode(rvtypes.MinorMode): def __init__(self): rvtypes.MinorMode.__init__(self) self._order = [] self._mattes = {} self._currentMatte = "" self.init("custom-mattes-mode", None, None, None) try: definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"] except KeyError: definition = str(commands.readSettings( "CUSTOM_MATTES", "customMattesDefinition", "")) try: self.updateMattesFromFile(definition) except KnownError,inst: print(str(inst)) self.setMenuAndBindings() lastMatte = str(commands.readSettings( "CUSTOM_MATTES", "customMatteName", "")) for matte in self._order: if matte == lastMatte: self.selectMatte(matte)(None)
Here we grab the last known location of the mattes definition file if we did not find one in the environment. We also attempt to look up the last matte that was used, and if we can find it among the mattes we parsed, then we enable that selection.
11.6 The Finished custom_mattes.py File
from rv import commands, rvtypes, extra_commands import os class KnownError(Exception): pass class CustomMatteMinorMode(rvtypes.MinorMode): def __init__(self): rvtypes.MinorMode.__init__(self) self._order = [] self._mattes = {} self._currentMatte = "" self.init("custom-mattes-mode", None, None, None) try: definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"] except KeyError: definition = str(commands.readSettings( "CUSTOM_MATTES", "customMattesDefinition", "")) try: self.updateMattesFromFile(definition) except KnownError,inst: print(str(inst)) self.setMenuAndBindings() lastMatte = str(commands.readSettings( "CUSTOM_MATTES", "customMatteName", "")) for matte in self._order: if matte == lastMatte: self.selectMatte(matte)(None) def currentMatteState(self, m): def matteState(): if (m != "" and self._currentMatte == m): return commands.CheckedMenuState return commands.UncheckedMenuState return matteState def selectMatte(self, matte): # Create a method that is specific to each matte for setting the # relevant session node properties to display the matte def select(event): self._currentMatte = matte if (matte == ""): commands.setIntProperty("#Session.matte.show", [0], True) extra_commands.displayFeedback("Disabling mattes", 2.0) else: m = self._mattes[matte] commands.setFloatProperty("#Session.matte.aspect", [float(m["ratio"])], True) commands.setFloatProperty("#Session.matte.heightVisible", [float(m["heightVisible"])], True) commands.setFloatProperty("#Session.matte.centerPoint", [float(m["centerX"]), float(m["centerY"])], True) commands.setIntProperty("#Session.matte.show", [1], True) extra_commands.displayFeedback( "Using '%s' matte" % matte, 2.0) commands.writeSettings("CUSTOM_MATTES", "customMatteName", matte) return select def selectMattesFile(self, event): definition = commands.openFileDialog(True, False, False, None, None)[0] try: self.updateMattesFromFile(definition) except KnownError,inst: print(str(inst)) self.setMenuAndBindings() def setMenuAndBindings(self): # Walk through all of the mattes adding a menu entry as well as a # hotkey binding for alt + index number # NOTE: The bindings will only matter for the first 9 mattes since you # can't really press "alt-10". 
matteItems = [] bindings = [] if (len(self._order) > 0): matteItems.append(("No Matte", self.selectMatte(""), "alt `", self.currentMatteState(""))) bindings.append(("key-down--alt--`", "")) for i,m in enumerate(self._order): matteItems.append((m, self.selectMatte(m), "alt %d" % (i+1), self.currentMatteState(m))) bindings.append(("key-down--alt--%d" % (i+1), m)) else: def nada(): return commands.DisabledMenuState matteItems = [("RV_CUSTOM_MATTE_DEFINITIONS UNDEFINED", None, None, nada)] # Always add the option to choose a new definition file matteItems += [("_", None)] matteItems += [("Choose Definition File...", self.selectMattesFile, None, None)] # Clear the menu then add the new entries matteMenu = [("View", [("_", None), ("Custom Mattes", None)])] commands.defineModeMenu("custom-mattes-mode", matteMenu) matteMenu = [("View", [("_", None), ("Custom Mattes", matteItems)])] commands.defineModeMenu("custom-mattes-mode", matteMenu) # Create hotkeys for each matte for b in bindings: (event, matte) = b commands.bind("custom-mattes-mode", "global", event, self.selectMatte(matte), "") def updateMattesFromFile(self, filename): # Make sure the definition file exists if (not os.path.exists(filename)): raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" + " definition file: '%s'" % filename) # Clear existing key bindings for i in range(len(self._order)): commands.unbind( "custom-mattes-mode", "global", "key-down--alt--%d" % (i+1)) # Walk through the lines of the definition file collecting matte # parameters order = [] mattes = {} for line in open(filename).readlines(): tokens = line.strip("\n").split(",") if (len(tokens) == 6): order.append(tokens[0]) mattes[tokens[0]] = { "name" : tokens[0], "ratio" : tokens[1], "heightVisible" : tokens[2], "centerX" : tokens[3], "centerY" : tokens[4], "text" : tokens[5]} # Make sure we got some valid mattes if (len(order) == 0): self._order = [] self._mattes = {} raise KnownError("ERROR: Custom Mattes Mode: Empty mattes" + " definition file: '%s'" % filename) # Save the definition path and assign the mattes commands.writeSettings( "CUSTOM_MATTES", "customMattesDefinition", filename) self._order = order self._mattes = mattes def createMode(): return CustomMatteMinorMode()
Chapter 12 Automated Color and Viewing Management
- Determination of the input color space
- Deciding whether the input color space should be converted to the linear working space and with what transform
- Displaying the working color space on a particular device, possibly in a manner which simulates another device (e.g., film look on an LCD monitor).
- By examining particular image attributes it's often possible to determine the color space. Some images may use a naming convention or may be located in a particular place on a file system which indicates their color space. It's even possible that a separate file or program needs to be executed to get the actual color space.
- Input spaces can be transformed to working spaces using the built-in transforms like sRGB, gamma, or Cineon log space to linear space. If RV does not have a built-in transform for the color space, a file LUT (one per input source) may be used to interpolate independent channel functions using a channel LUT, or a general function of R, G, and B channels using a 3D LUT. Values from a CDL can also be used to bring the input into the working color space.
- Ideally RV will have a fixed set of display transforms which map the linear working space to a display. This makes it possible to load multiple sets of images with differing color spaces, transform them to a common linear working space, and display them using a global display transform. RV has built-in transforms for sRGB and gamma and can also use a channel or 3D LUT if a custom function is needed.
In addition to the color issues there are a few others which might need to be detected and/or corrected:
- Unrecorded or incorrect pixel aspect ratios (e.g., DPX files with inaccurate headers)
- Special mattes which should be used with particular images
- Incorrect frame numbers
- Incorrect fps
- Specific production information which is not located in the image (e.g., shot information, tracking information)
RV lets you customize all of the above for your facility and workflow by hooking into the user interface code. The most important method of doing so is using special events generated by RV internally and setting internal state at that time.
12.1 The source-group-complete Event
The source-group-complete event is generated whenever media is added to a session; this includes when a Source is created, or when the set of media held by a Source is modified. By binding a function to this event, it's possible to configure color space handling or other image-dependent aspects of RV at the time the file is added. This can save a considerable amount of time and headache when a large number of people are using RV in differing circumstances.
See the sections below for information about creating a package which binds source-group-complete to do color management.
12.2 The default source-group-complete behavior
By default RV binds its own color management function, sourceSetup(), located in the source_setup.mu file. This is part of the source_setup system package introduced in version 3.10. As of RV 6.0 the source_setup package is implemented in Python.
It's a good idea to override or augment this package for use in production environments. For example, you may want to have certain default color behavior for technical directors using movie files which differs from how a coordinator might view them (the coordinator may be looking at movies in sRGB space instead of with a film simulation for example).
RV's default color management tries to use good defaults for incoming file formats. Here's the complete behavior shown as a set of heuristics applied in order:
- If the incoming image is a TIFF file and it has no color space attribute assume it's linear
- If the image is JPEG or a quicktime movie file (.mov) and there is no color space attribute assume it's in sRGB space
- If there is an embedded ICC profile and that profile is for sRGB space use RV's internal sRGB space transform instead (because RV does not yet handle embedded ICC profiles)
- If the image is TIFF and it was created by ifftoany, assume the pixel aspect ratio is incorrect and fix it
- If the image is JPEG, has no pixel aspect ratio attribute and no density attribute and looks like it comes from Maya, fix the pixel aspect ratio
- Use the proper built-in conversion for the color space indicated in the color space attribute of the image
- Use the sRGB display transform if any color space was successfully determined for the input image(s)
- A DPX or Cineon is loaded which is determined to be in Log space — turn on the built in log to linear converter
- A JPEG or Quicktime movie file is determined to be in sRGB space or if no space is specified assumed to be in sRGB space — apply the built-in sRGB to linear converter
- An EXR is loaded — assume it's linear
- A TIFF file with no color space indication is assumed to be linear; if it does have a color space, that is used.
- A PNG file with no color space is assumed linear, otherwise use the color space attribute in the file
- Any file with a pixel aspect ratio attribute will be assumed to be correct (unless it's determined to have come from Maya)
- The monitor's "gamma" will be accounted for automatically (because RV assumes the monitor is an sRGB device)
In addition, the default color management implements two varieties of user-level control, as examples of what you can do from the scripting level.
First, environment variables with a standard format can be used to control the linearization process for a given file type. An environment variable of the form “RV_OVERRIDE_TRANSFER_<type>” will set the linearization transform for the specified file type (and this will override the default rules described above). For example, if the environment variable “RV_OVERRIDE_TRANSFER_TIF” is set to “sRGB” then all files with extension “tif” or “TIF” will be linearized with the sRGB transform. If you want, you can also specify the bit depth. So you could set RV_OVERRIDE_TRANSFER_TIF_8 to sRGB and RV_OVERRIDE_TRANSFER_TIF_32 to Linear. The transform function name must be one of the following standard transforms. (The number following “Gamma” is arbitrary.)
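For example, in a Unix-style shell you might set (using the values mentioned above):

shell> export RV_OVERRIDE_TRANSFER_TIF_8=sRGB
shell> export RV_OVERRIDE_TRANSFER_TIF_32=Linear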
Second, any of the above-described environment variable names and standard transform names can appear on the command line following the “-flags” option. For example:
rv test.dpx -flags "RV_OVERRIDE_TRANSFER_DPX=ALEXA LogC"
12.3 Breakdown of sourceSetup() in the source_setup Package
The source_setup system package defines the default sourceSetup() function. This is where RV's default color management comes from. The function starts by parsing the event contents (which contains the name of the file, the type of source node, and the source node name) as well as setting up the regular expressions used later in the function:
RV 6.0 uses Python to implement the source_setup package. Previous versions used Mu. The actual sourceSetup() function in source_setup.py may differ from what is described here since it is constantly being refined.
args = event.contents().split(";;") group = args[0] fileSource = groupMemberOfType(group, "RVFileSource") imageSource = groupMemberOfType(group, "RVImageSource") source = fileSource if imageSource == None else imageSource linPipeNode = groupMemberOfType(group, "RVLinearizePipelineGroup") linNode = groupMemberOfType(linPipeNode, "RVLinearize") lensNode = groupMemberOfType(linPipeNode, "RVLensWarp") fmtNode = groupMemberOfType(group, "RVFormat") tformNode = groupMemberOfType(group, "RVTransform2D") lookPipeNode = groupMemberOfType(group, "RVLookPipelineGroup") lookNode = groupMemberOfType(lookPipeNode, "RVLookLUT") typeName = commands.nodeType(source) fileNames = commands.getStringProperty("%s.media.movie" % source, 0, 1000) fileName = fileNames[0] ext = fileName.split('.')[-1].upper() igPrim = self.checkIgnorePrimaries(ext) mInfo = commands.sourceMediaInfo(source, None)
sourceGroup000000;;new
The split() function is used to create a dynamic array of strings to extract the source group's name. The nodes associated with the source group are then located and the media names are taken from the source node. The source node is either an RVImageSource which stores its image data directly in the session or an RVFileSource which references media on the filesystem. Both of these node types have a media component which contains the actual media names (usually a single file in the case of an RVFileSource node).
In RV 6 pipelines were introduced. There are three pipeline group nodes in each source group node and one pipeline group in the display group. For the default source_setup, the linearize pipeline group is needed to get the default RVLinearize node it contains.
The next section of the function iterates over the image attributes and caches the ones we're interested in. The most important of these is the Colorspace attribute which is set by the file readers when the image color space is known.
srcAttrs = commands.sourceAttributes(source, fileName) attrDict = dict(zip([i[0] for i in srcAttrs],[j[1] for j in srcAttrs])) attrMap = { "ColorSpace/ICC/Description" : "ICCProfileDesc", "ColorSpace" : "ColorSpace", "ColorSpace/Transfer" : "TransferFunction", "ColorSpace/Primaries" : "ColorSpacePrimaries", "DPX-0/Transfer" : "DPX0Transfer", "ColorSpace/Conversion" : "ConversionMatrix", "JPEG/PixelAspect" : "JPEGPixelAspect", "PixelAspectRatio" : "PixelAspectRatio", "JPEG/Density" : "JPEGDensity", "TIFF/ImageDescription" : "TIFFImageDescription", "DPX/Creator" : "DPXCreator", "EXIF/Orientation" : "EXIFOrientation", "EXIF/Gamma" : "EXIFGamma", "ARRI-Image/DataSpace" : "ARRIDataSpace"} for key in attrMap.keys(): try: exec('%s = "%s"' % (attrMap[key],attrDict[key])) except KeyError: pass
The function sourceAttributes() returns the image attributes for a given file in a source. In this case we're passing in the source and file which caused the event. The return value of the function is a dynamic array of tuples of type (string,string) where the first element is the name of the attribute and the second is a string representation of the value. On each iteration of the loop, the next entry is used to assign the attribute value to a variable named for the attribute (using the attrMap translation above).
The variables ICCProfileDesc, ColorSpace, JPEGPixelAspect, etc., are all variables of type string which are defined earlier in the function.
Before getting to the meat of the function, there are two helper functions declared: setPixelAspect() and setFileColorSpace().
The next major section of the function matches the file name against the regular expressions that were declared at the beginning and against the values of some of the attributes that were cached.
# # Rules based on the extension # if (ext == 'DPX'): if (DPXCreator == "AppleComputers.libcineon" or DPXCreator == "AUTODESK"): # # Final Cut's "Color" and Maya write out bogus DPX # header info with the aspect ratio fields set # improperly (to 0s usually). Properly undefined DPX # headers do not have the value 0. # if (int(PixelAspectRatio) == 0): self.setPixelAspect(lensNode, 1.0) elif (DPXCreator == "Nuke" and (ColorSpace == "" or ColorSpace == "Other (0)") and (DPX0Transfer == "" or DPX0Transfer == "Other (0)")): # # Nuke produces identical (uninformative) dpx headers for # both Linear and Cineon files. But we expect Cineon to be # much more common, so go with that. # TransferFunction = "Cineon Log" elif (ext == 'TIF' and TransferFunction == ""): # # Assume 8bit tif files are sRGB if there's no other indication; # fall back to linear. # if (mInfo['bitsPerChannel'] == 8): TransferFunction = "sRGB" else: TransferFunction = "Linear" elif (ext == "ARI" and TransferFunction == ""): # # Assume tif files are linear if there's no other indication # TransferFunction = "ALEXA LogC" elif (ext in ['JPEG','JPG','MOV','AVI','MP4'] and TransferFunction == ""): # # Assume jpeg/mov is in sRGB space if none is specified # TransferFunction = "sRGB" elif (ext in ['J2C','J2K','JPT','JP2'] and ColorSpacePrimaries == "UNSPECIFIED"): # # If we're assuming XYZ primaries, but ignoring primaries just set # transfer to sRGB. # if (igPrim): TransferFunction = "sRGB"; if (igPrim): commands.setIntProperty(linNode + ".color.ignoreChromaticities", [1], True) if (ICCProfileDesc != ""): # # Hack -- if you see sRGB in a color profile name just use the # built-in sRGB conversion. # if ("sRGB" in ICCProfileDesc): TransferFunction = "sRGB" else: TransferFunction = "" if (TIFFImageDescription == "Image converted using ifftoany"): # # Get around maya bugs # print("WARNING: Assuming %s was created by Maya with a bad pixel aspect ratio\n" % fileName) self.setPixelAspect(lensNode, 1.0) if (JPEGPixelAspect != "" and JPEGDensity != ""): info = commands.sourceMediaInfo(source, fileName) attrPA = float(JPEGPixelAspect) imagePA = float(info['width']) / float(info['height']) testDiff = attrPA - 1.0 / imagePA if ((testDiff < 0.0001) and (testDiff > -0.0001)): # # Maya JPEG -- fix pixel aspect # print("WARNING: Assuming %s was created by Maya with a bad pixel aspect ratio\n" % fileName) self.setPixelAspect(lensNode, 1.0) if (EXIFOrientation != ""): # # Some of these tags are beyond the internal image # orientation choices so we need to possibly rotate, etc # if not self.definedInSessionFile(tformNode): rprop = tformNode + ".transform.rotate" if (EXIFOrientation == "right - top"): commands.setFloatProperty(rprop, [90.0], True) elif (EXIFOrientation == "right - bottom"): commands.setFloatProperty(rprop, [-90.0], True) elif (EXIFOrientation == "left - top"): commands.setFloatProperty(rprop, [90.0], True) elif (EXIFOrientation == "left - bottom"): commands.setFloatProperty(rprop, [-90.0], True)
At this point in the function the color space of the input image will be known or assumed to be linear. Finally, we try to set the color space (which will result in the image pixels being converted to the linear working space). If this succeeds, use sRGB display as the default.
if (not noColorChanges): # # Assume (in the absence of info to the contrary) any 8bit file will be in sRGB space. # if (TransferFunction == "" and mInfo['bitsPerChannel'] == 8): TransferFunction = "sRGB" # # Allow user to override with environment variables # TransferFunction = self.checkEnvVar(ext, mInfo['bitsPerChannel'], TransferFunction) if (self.setFileColorSpace(linNode, TransferFunction, ColorSpace)): # # The default display correction is sRGB if the # pixels can be converted to (or are already in) # linear space # # For gamma instead do this: # # setFloatProperty("#RVDisplayColor.color.gamma", float[] {2.2}, true); # # For a linear -> screen LUT do this: # # readLUT(lutfile, "#RVDisplayColor", true); # # If this is not the first source, assume that user or source_seetup # has already set the desired display transform if len(commands.sources()) == 1: self.setDisplayFromProfile()
12.4 Setting up 3D and Channel LUTs
The default source-group-complete event function does not set up any non-built-in transforms. When you need to automatically apply a LUT (as a file, look, or display LUT), you need to do the following:
readLUT(file, nodeName, True)
The nodeName will be "#RVDisplayColor" (to refer to it by type) for the display LUT. For a file or look LUT, you use the node name of the associated color node; in the default sourceSetup() function this would be the linNode variable. The file parameter to readLUT() is the name of the LUT file on disk and can be any of the LUT types that RV reads.
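For example, a custom sourceSetup() might do something like the following sketch; the LUT file paths are assumptions, and linNode is the RVLinearize node located earlier in the function:

from rv import commands

def applyShowLUTs(linNode):
    # Hypothetical per-show file LUT applied to this source's linearize node.
    commands.readLUT("/shows/myshow/luts/log_to_lin.csp", linNode, True)
    # Hypothetical display LUT applied to the display color node by type.
    commands.readLUT("/shows/myshow/luts/film_emulation.cube", "#RVDisplayColor", True)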
12.5 Setting CDL Values From File
As with using LUT files to fill in where built-in transforms do not cover your needs, you can read in CDL property values from a file. Use the following to read values from a CDL file on disk:
readCDL(file, nodeName, True)
When using readCDL the “nodeName” should be that of the targeted RVColor or RVLookLUT node to which you are applying the CDL values read from “file”. In the default RV graph you will find CDL properties to set in the RVColor and RVLookLUT nodes for each source, but there are none out-of-the-box in the display pipeline. However, you can add RVColor or RVLookLUT nodes to any pipeline where you need CDL control but which does not have them by default.
You can also add RVCDL nodes where you want CDL control, but these nodes do not require the use of readCDL. With RVCDL nodes you only need to set the node's node.file property and it will automatically load and parse the file from the path provided. Errors will be thrown if the file provided is invalid.
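Here is a minimal sketch of both approaches; the node names and CDL file path are assumptions about a particular session, not values RV guarantees:

from rv import commands

# Assumed names: an RVColor node in a source's color pipeline and an
# RVCDL node added elsewhere; the CDL file path is hypothetical.
colorNode = "sourceGroup000000_color"
cdlNode   = "cdl000000"

# Read CDL values from disk into the RVColor (or RVLookLUT) node's properties.
commands.readCDL("/shows/myshow/cdl/sq010_sh020.cdl", colorNode, True)

# An RVCDL node loads and parses the file itself once node.file is set.
commands.setStringProperty(cdlNode + ".node.file",
                           ["/shows/myshow/cdl/sq010_sh020.cdl"], True)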
12.6 Building a Package For Color Management
As of RV 3.6 the recommended way to handle all event bindings is via a package. In version 3.10 color management was made a system package. In version 3.12 the package was converted to Python. To customize color management you can either create a new package from scratch as described here, or copy, rename, and hack the existing source_setup package.
The use of source-group-complete is no different from any other event. By creating a package you can override the existing behavior or modify it. It also makes it possible to have layers of color management packages which (assuming they don't contradict each other) can collectively create a desired behavior.
from rv import rvtypes, commands, extra_commands
import os, re

class CustomColorManagementMode(rvtypes.MinorMode):

    def sourceSetup(self, event, noColorChanges=False):
        # do work on the new source here
        event.reject()

    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        self.init("Source Setup",
                  None,
                  None,
                  [("source-group-complete", self.sourceSetup,
                    "Color and Geometry Management")],
                  "source_setup",
                  20)

def createMode():
    return CustomColorManagementMode()
Note that we use the sortKey “source_setup” and the sortOrder “20”. This will ensure that our additional sourceSetup runs after the default color management.
The included optional package “ocio_source_setup” is a good example of a package that does additional source setup.
Chapter 13 Network Communication
RV can communicate with multiple external programs via its network protocol. The mechanism is designed to function like a “chat” client. Once a connection is established, messages can be sent and received including arbitrary binary data.
- Controlling RV remotely. E.g., a program which takes input from a dial and button board or a mobile device and converts the input into commands to start/stop playback or scrubbing in RV.
- Synchronizing RV sessions across a network. This is how RV's sync mode is implemented: each RV serves as a controller for the other.
- Monitoring a Running RV. For VFX theater dailies the RV session driving the dailies could be monitored by an external program. This program could then indicate to others in the facility when their shots are coming up.
- A Display Driver for a Renderer. Renderers like Pixar's RenderMan have a plug-in called a display driver which is normally used to write out rendered frames as files. Frequently this type of plug-in is also used to send pixels to an external frame buffer (like RV) to monitor the renderer's progress in real time. It's possible to write a display driver that talks to RV using the network protocol and sends it pixels as they are rendered. A more advanced version might receive feedback from RV (e.g. a selected rectangle on the image) in order to recommend areas the renderer should render sooner.
Any number of network connections can be established simultaneously, so for example it's possible to have a synchronized RV session with a remote RV and drive it with an external hardware device at the same time.
13.1 Example Code
There are two working examples that come with RV: the rvshell program and the rvNetwork.py Python example.
The rvshell program uses a C++ library included with the distribution called TwkQtChat which you can use to make interfacing easier — especially if your program will use Qt. We highly recommend using this library since this is code which RV uses internally so it will always be up-to-date. The library is only dependent on the QtCore and QtNetwork modules.
The rvNetwork example implements the network protocol using only native Python code. You can use it directly in Python programs.
13.1.1 Using rvshell
To use rvshell, start RV from the command line with the network started and a default port of 45000 (to make sure it doesn't interfere with existing RV sessions):
shell> rv -network -networkPort 45000
shell> rvshell user localhost 45000
Assuming all went well, this will start rvshell connected to the running RV. There are three things you can experiment with using rvshell: a very simple controller interface, a script editor to send portions of script or messages to RV manually, and a display driver simulator that sends stereo frames to RV.
Start by loading a sequence of images or a quicktime movie into RV. In rvshell switch to the “Playback Control” tab. You should be able to play, stop, change frames, and toggle full screen mode using the buttons on the interface. This example sends simple Mu commands to RV to control it. The feedback section of the interface shows the RETURN message sent back from RV, which contains whatever result was obtained from the command.
The “Raw Event” section of the interface lets you assemble event messages to send to RV manually. The default event message type is remote-eval which will cause the message data to be treated like a Mu script to execute. There is also a remote-pyeval event which does the same with Python (in which case you should type in Python code instead of Mu code). Messages sent this way to RV are translated into UI events. In order for the interface code to respond to the event something must have bound a function to the event type. By default RV can handle remote-eval and remote-pyeval events, but you can add new ones yourself.
When RV receives a remote-eval event it executes the code and looks for a return value. If a return value exists, it converts it to a string and sends it back. So using remote-eval it's possible to query RV's current state. For example, if you load an image into RV and then send it the command renderedImages(), it will return a Mu struct as a string with information about the rendered image. Similarly, sending a remote-pyeval with the same command will return a Python dictionary as a string with the same information.
The last tab “Pixels” can be used to emulate a display driver. Load a JPEG image into rvshell's viewer (don't try something over 2k — rvshell is using Qt's image reader). Set the number of tiles you want to send in X and Y, for example 10 in each. In RV clear the session. In rvshell hit the Send Image button. rvshell will create a new stereo image source in RV and send the image one tile at a time to it. The left eye will be the original image and the right eye will be its inverse. Try View → Stereo → Side by Side to see the results.
13.1.2 Using rvNetwork.py
13.2 TwkQtChat Library
A single Client instance is required to represent your process and to manage the Connection and Server instances. The Connection and Server classes are derived from the Qt QTcpSocket and QTcpServer classes which do the lower-level work. Once the Client instance exists you can get pointers to the Server and existing Connections to manipulate them directly or connect their signals to slots in other QObject-derived classes if needed.
The application should start by creating a Client instance with its contact name (usually a user name), application name, and port on which to create the server. The Client class uses standard Qt signals and slots to communicate with other code. It's not necessary to inherit from it.
The most important functions on the Client class are listed in Table 13.1.
13.3 The Protocol
There are two types of messages that RV can receive and send over its network socket: a standard message and a data message. Data messages can send arbitrary binary data while standard messages are used to send UTF-8 string data.
The greeting is used only once on initial contact. The standard message is used in most cases. The data message is used primarily to send binary files or blocks of pixels to/from RV.
13.3.1 Standard Messages
When an application first connects to RV over its TCP port, a greeting message is exchanged. This consists of a UTF-8 byte string composed of:
In response, the application should receive a NEWGREETING message back. At this point the application will be connected to RV.
When RV receives a standard message (MESSAGE type) it will assume the payload is a UTF-8 string and try to interpret it. The first word of the string is considered the sub-message type and is used to decide how to respond:
The EVENT and RETURNEVENT messages are the most common. When RV receives an EVENT or RETURNEVENT message it will translate it into a user interface event. The additional part of the string (after EVENT or RETURNEVENT) is composed of:
MESSAGE 34 EVENT my-event-name red green blue
The first word indicates a standard message. The next word (34) indicates the length of the rest of the data. EVENT is the message sub-type which further specifies that the next word (my-event-name) is the event to send to the UI with the rest of the string (red green blue) as the event contents.
If a UI function that receives the event sets the return value and the message was a RETURNEVENT, then a RETURN will be sent back. A RETURN will have a single string that is the return value. An EVENT message will not result in a RETURN message.
Generally, when a RETURNEVENT is sent to your application, a RETURN should be sent back because the other side may be blocked waiting. It's ok to send an empty RETURN. Normally, RV will not send EVENT or RETURNEVENT messages to other non-RV applications. However, it's possible that this could happen while connected to an RV that is also engaged in a sync session with another RV.
Finally a DISCONNECT message comes with no additional data and signals that the connection should be closed.
Ping and Pong Messages
There are three lower-level messages used to keep the status of the connection up to date. The scheme relies on each side of the connection returning a PONG message whenever it receives a PING message while ping-pong messages are active.
Whether or not it's active is controlled by sending the PINGPONGCONTROL message: when received, if the payload is the UTF-8 value “1” then PING messages should be expected and responded to. If the value is “0” then responding to a PING message is not mandatory.
For some applications, especially those that require a lot of computation (e.g. a display driver for a renderer), it can be a good idea to shut down the ping-pong notification. When it is off, both sides of the connection should assume the other side is busy but not dead in the absence of network activity.
13.3.2 Data Messages
The PIXELTILE message is used to send a block of pixels to or from RV. When received by RV the PIXELTILE message is translated into a pixel-block event (unless another event name is specified) which is sent to the user interface. This message takes a number of parameters, which should contain no whitespace characters and be separated by commas (“,”):
PIXELTILE(media=out.9.exr,layer=diffuse,view=left,w=16,h=16,x=160,y=240,f=9)
This would be parsed and used to fill fields in the Event type. The data becomes available to Mu and Python functions bound to the event. By default the Event object is sent to the insertCreatePixelBlock() function, which finds the image source associated with the media and inserts the data into the correct layer and view of the image. Each of the keywords in the PIXELTILE header is optional.
The DATAEVENT message is similar to PIXELTILE but is intended to be handled by the user. The message header takes at least three parameters, which are positional (no keywords as with PIXELTILE). RV will use only the first three parameters:
DATAEVENT(my-data-event,unused,special-data)
This would be sent to the user interface as a my-data-event with the content type “special-data”. The content type is retrievable with Event.contentType(), and the data payload is available via the Event.dataContents() method.
Chapter 14 Webkit JavaScript Integration
RV can communicate with JavaScript running in a QWebView widget. This makes it possible to serve custom RV-aware web pages which can interact with a running RV. JavaScript running in the web page can execute arbitrary Mu script strings as well as receive events from RV.
If you are not familiar with Qt's webkit integration this page can be helpful.
14.1 Executing Mu or Python from JavaScript
RV exports a JavaScript object called rvsession to the JavaScript runtime environment. Two of the functions in that namespace are evaluate() and pyevaluate(). By calling evaluate(), pyevaluate(), or pyexec() you can execute arbitrary Mu or Python code in the running RV to control it. If the executed code returns a value, the value will be converted to a string and returned by the (py)evaluate() functions. Note that pyevaluate() triggers a Python eval, which takes an expression and returns a value; pyexec(), on the other hand, takes an arbitrary block of code and triggers a Python exec call.
As an example, here is some HTML which demonstrates creating a link in a web page which causes RV to start playing when clicked:
<script type="text/javascript">
    function play ()
    {
        rvsession.evaluate("play()");
    }
</script>

<p><a href="javascript:play()">Play</a></p>
If inlining the Mu or Python code in each call back becomes onerous, you can upload function definitions and even whole classes in one evaluate call and then call the defined functions later. For complex applications this may be the sanest way to handle call back evaluation.
14.2 Getting Event Call Backs in JavaScript
RV generates events which can be converted into call backs in JavaScript. This differs slightly from how events are handled in Mu and Python.
The rvsession object contains signal objects which you can connect by supplying a call back function. In addition you need to supply the name of one or more events as a regular expression which will be matched against incoming events. For example:
function callback_string (name, contents, sender)
{
    var x = name + " " + contents + " " + sender;
    rvsession.evaluate("print(\"callback_string " + x + "\\n\");");
}

rvsession.eventString.connect(callback_string);
rvsession.bindToRegex("source-group-complete");
connects the function callback_string() to the eventString signal object and binds to the source-group-complete RV event. For each event the proper signal object type must be used. For example pointer events are not handled by eventString but by the eventPointer signal. There are four signals available: eventString, eventKey, eventPointer, and eventDragDrop. See tables describing which events generate which signals and what the signal call back arguments should be.
In the above example, any time media is loaded into RV the callback_string() function will be called. Note that there is a single callback for each type of event. In particular, if you want to handle both the “new-source” and the “frame-changed” events, your eventString handler must handle both (it can distinguish between them using the “name” parameter passed to the handler). To bind the handler to both events you can call “bindToRegex” multiple times, or specify both events in a regular expression:
rvsession.bindToRegex("source-group-complete|frame-changed");
The format of this regular expression is specified on the qt-project website.
14.3 Using the webview Example Package
This package creates one or more docked QWebView instances, configurable from the command line as described below. JavaScript code running in the webviews can execute arbitrary Mu code in RV by calling the rvsession.evaluate() function. This package is intended as an example.
These command-line options should be passed to RV after the -flags option. The webview options below are shown with their default values, and all of them can apply to any of four webviews in the Left, Right, Top, and Bottom dock locations.
shell> rv -flags ModeManagerPreload=webview
The above forces the load of the webview package, which will display an example web page. With no additional arguments this just shows the sample html/javascript file that comes with the package in a webview docked on the right; additional arguments can be supplied to load specific web pages into additional panes. To see what's happening in this example, bring up the Session Manager so you can see the Sources appearing and disappearing, or switch to the defaultLayout view. Note that you can play while reconfiguring the session with the javascript checkboxes.
The following additional arguments can be passed via the -flags mechanism. In the below, POS should be replaced by one of Left, Right, Bottom, or Top.
- ModeManagerPreload=webview
- Force loading of the webview package. The package should be installed but not loaded by default; this flag causes RV to treat the package as if it had been loaded by the user.
- webviewUrlPOS=URL
- A webview pane will be created at POS and the URL will be loaded into it. It can be something from a web server or a file:// URL. If you force the package to load but do not specify any URL, you'll get a single webview in the Right dock location rendering the sample html/javascript page that ships with the package. Note that the string "EQUALS" will be replaced by an "=" character in the URL.
- webviewTitlePOS=string
- Set the title of the webview pane to string.
- webviewShowTitlePOS=true or false
- A value of true will show and false will remove the title bar from the webview pane.
- webviewShowProgressPOS=true or false
- Show a progress bar while the web pane is loading.
- webviewSizePOS=integer
- Set the width (for right and left panes) or height (for top and bottom panes) of the web pane.
shell> rv -flags ModeManagerPreload=webview \
    webviewUrlRight=file:///foo.html \
    webviewShowTitleRight=false \
    webviewShowProgressRight=false \
    webviewSizeRight=200 \
    webviewUrlBottom=file:///bar.html \
    webviewShowTitleBottom=false \
    webviewShowProgressBottom=false \
    webviewSizeBottom=300
Chapter 15 Hierarchical Preferences
Each RV user has a Preferences file where their personal RV settings are stored. Most preferences are viewed and edited with the Preferences dialog (accessed via the RV menu), but preferences can also be read and written programmatically from custom code via the readSetting and writeSetting Mu commands. The preferences files are stored in different places on different platforms.
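For example, here is a minimal Python sketch of reading and writing a custom preference; the Python bindings expose these commands as readSettings and writeSettings (verify the exact names against the command API reference for your RV version), and the group and setting names below are purely illustrative:

from rv import commands

# Read a preference, supplying a default that is returned when the
# preference has not been written yet: (group, name, default value).
cache_mode = commands.readSettings("MyStudioTools", "cacheMode", "region")

# Write it back so the value persists into the next RV session.
commands.writeSettings("MyStudioTools", "cacheMode", "look-ahead")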
Initial values of preferences can be overridden on a site-wide or show-wide basis by setting the environment variable RV_PREFS_OVERRIDE_PATH to point to one or more directories that contain files of the name and type listed in the above table. For example, if you have your RV.conf file under $HOME/Documents/ you can set RV_PREFS_OVERRIDE_PATH=$HOME/Documents. Each of these overriding preferences files can provide default values for one or more preferences. A value from one of these overriding files will override the user's preference only if the user's preferences file has no value for that preference yet.
In the simplest case, if you want to provide overriding initial values for all preferences, you should
- Delete your preferences file.
- Start RV, go to the Preferences dialog, and adjust any preferences you want.
- Close the dialog and exit RV.
- Copy your preferences file into the RV_PREFS_OVERRIDE_PATH.
If you want to override at several levels (say, per-site and per-show), you can add preferences files to any number of directories in the path, but you'll have to edit them so that each contains only the preferences you want to override at that level. Preferences files found in directories earlier in the path override those found in later directories.
Note that this system only provides the ability to override initial settings for the preferences. Nothing prevents the user from changing those settings after initialization.
It's also possible to create show/site/whatever-specific preferences files that always clobber the user's personal preferences. This mechanism is exactly analogous to the above, except that the name of the environment variable that holds paths to clobbering prefs files is RV_PREFS_CLOBBER_PATH. Again, the user can freely change any “live” values managed in the Preferences dialog, but in the next run, the clobbering preferences will again take precedence. Note that a value from a clobbering file (at any level) will take precedence over a value from an overriding file (at any level).
Chapter 16 Node Reference
This chapter has a section for each type of node in RV's image processing graph. The properties and descriptions listed here are the default properties. Any top level node that can be seen in the session manager can have the “name” property of the “ui” component set in order to control how the node is listed.
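For example, here is a minimal Python sketch of renaming how a top level node appears in the session manager; the node name below is hypothetical and should be replaced with a node you actually hold (for example, one returned by viewNodes()):

from rv import commands

# Rename how a (hypothetical) source group is listed in the session manager.
node = "sourceGroup000000"
commands.setStringProperty(node + ".ui.name", ["Plate A - take 3"], True)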
RVCache
RVCacheLUT and RVLookLUT
The RVCacheLUT is applied in software before the image is cached and before any software resolution and bit depth changes. The RVLookLUT is applied just before the display LUT but is per-source.
RVCDL
RVChannelMap
RVColor
The color node has a large number of color controls. This node is usually evaluated on the GPU, except when normalize is 1. The CDL is applied after linearization and linear color changes.
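For example, a hedged Python sketch that adjusts a couple of the common controls on the color node of the current source; the "#RVColor" addressing follows the RVLensWarp example later in this chapter, and the property names (color.exposure, color.saturation) should be verified against the property table for this node:

from rv import commands

# Raise exposure by half a stop per channel and boost saturation slightly.
# Property names are assumptions to check against the RVColor table.
commands.setFloatProperty("#RVColor.color.exposure", [0.5, 0.5, 0.5], True)
commands.setFloatProperty("#RVColor.color.saturation", [1.2], True)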
RVDispTransform2D
RVDisplayColor
RVDisplayGroup and RVOutputGroup
The display group provides per-device display conditioning. The output group is the analogous node group for RVIO. Display groups are never saved in the session; there is only one output group, and it is saved for use by RVIO. There are no user-facing properties at this time.
RVDisplayStereo
This node governs how to handle stereo playback including controlling the placement of stereo sources.
RVFileSource
The source node controls file I/O and organizes the source media into layers (in the RV sense). It has the basic controls needed to mix the layers together.
RVFolderGroup
RVFormat
This node is used to alter geometry or color depth of an image source. It is part of an RVSourceGroup.
RVImageSource
The RV image source is a subset of what RV can handle from an external file (basically just EXR). Image sources can have multiple views, each of which has multiple layers; however, all views must have the same layers. Image sources cannot have layers within layers, orphaned channels, empty views, missing views, or other weirdnesses that EXR can have.
RVLayoutGroup
The source group contains a single chain of nodes the leaf of which is an RVFileSource or RVImageSource. It has a single property.
RVLensWarp
This node handles the pixel aspect ratio of a source group. The lens warp node can also be used to perform radial and/or tangential distortion on a frame. It implements Brown's distortion model (similar to that adopted by OpenCV or the Adobe Lens Camera Profile model) and 3DE4's Anamorphic Degree 6 model. This node can be used for operations like lens distortion correction or artistic lens warp effects.
Example use case: using OpenCV to determine lens distortion parameters for the RVLensWarp node based on GoPro footage. First capture some footage of a checkerboard with your GoPro. Then use OpenCV's camera calibration approach on this footage to solve for k1, k2, k3, p1, and p2. OpenCV reports these numbers back as follows; for example, our 1920x1440 Hero3 Black GoPro solve returned:
[ fx=829.122253   0.000000        cx=969.551819 ]
[ 0.000000        fy=829.122253   cy=687.480774 ]
[ 0.000000        0.000000        1.000000      ]

k1=-0.198361  k2=0.028252  p1=0.000092  p2=-0.000073
The OpenCV camera calibration solve output numbers are then translated/normalized to the RVLensWarp node property values as follows:
warp.model = "opencv" warp.k1 = k1 warp.k2 = k2 warp.p1 = p1 warp.p2 = p2 warp.center = [cx/1920 cy/1440] warp.fx = fx/1920 warp.fy = fy/1920
set("#RVLensWarp.warp.model", "opencv"); set("#RVLensWarp.warp.k1", -0.198361); set("#RVLensWarp.warp.k2", 0.028252); set("#RVLensWarp.warp.p1", 0.00092); set("#RVLensWarp.warp.p2", -0.00073); setFloatProperty("#RVLensWarp.warp.offset", float[]{0.505, 0.4774}, true); set("#RVLensWarp.warp.fx", 0.43185); set("#RVLensWarp.warp.fy", 0.43185);
Example use case: using Adobe LCP (Lens Camera Profile) distortion parameters with the RVLensWarp node. Adobe LCP files can be found in '/Library/Application Support/Adobe/CameraRaw/LensProfiles/1.0' under OS X. Adobe LCP file parameters map to the RVLensWarp node properties as follows:
warp.model = "adobe" warp.k1 = stCamera:RadialDistortParam1 warp.k2 = stCamera:RadialDistortParam2 warp.k3 = stCamera:RadialDistortParam3 warp.p1 = stCamera:TangentialDistortParam1 warp.p2 = stCamera:TangentialDistortParam2 warp.center = [stCamera:ImageXCenter stCamera:ImageYCenter] warp.fx = stCamera:FocalLengthX warp.fy = stCamera:FocalLengthY
RVLinearize
The linearize node has a large number of color controls. The CDL is applied before linearization occurs.
OCIO (OpenColorIO), OCIOFile, OCIOLook, and OCIODisplay
OpenColorIO nodes can be used in place of existing RV LUT pipelines. Properties in RVColorPipelineGroup, RVLinearizePipelineGroup, RVLookPipelineGroup, and RVDisplayPipelineGroup determine whether or not the OCIO nodes are used. All OCIO nodes have the same properties and function, but their location in the color pipeline is determined by their type. The exception is the generic OCIO node which can be created by the user and used in any context.
NOTE: THIS IS INCOMPLETE – SEE ACCOMPANYING OCIO INTEGRATION DOCUMENT
RVOverlay
Overlay nodes can be used with any source. They can be used to draw arbitrary rectangles and text over the source but beneath any annotations. Overlay nodes can hold any number of components of three types: rect components describe a rectangle to be rendered; text components describe a string (or an array of strings, one per frame) to be rendered; and window components describe a matted region to be indicated either by coloring the region outside the window or by outlining it. The coordinates of the corners of the window may be animated by specifying one number per frame.
In the property names below, the “id” in the component name can be any string, but it must be different for each component of the same type.
RVPaint
Paint nodes are used primarily to store per-frame annotations. In the property names below, id is the value of nextID at the time the paint command property was created, frame is the frame on which the annotation will appear, and user is the username of the user who created the property.
RVPrimaryConvert
The primary convert node can be used to perform primary colorspace conversion with illuminant adaptation on a frame that has been linearized. The input and output colorspace primaries are specified in terms of input and output chromaticities for red, green, blue and white points. Illuminant adaptation is implemented using the Bradford transform where the input and output illuminant are specified in terms of their white points. Illuminant adaptation is optional. Default values are set for D65 Rec709.
PipelineGroup, RVDisplayPipelineGroup, RVColorPipelineGroup, RVLinearizePipelineGroup, RVLookPipelineGroup and RVViewPipelineGroup
The PipelineGroup node and the RV-specific pipeline nodes are group nodes that manage a pipeline of single-input nodes. There is a single property on the node which determines the structure of the pipeline. The only difference between the various pipeline node types is the default value of that property.
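For example, a hedged Python sketch that inspects and then replaces the display pipeline's node list; this assumes the structural property is pipeline.nodes and that a working OCIO configuration is available for the OCIODisplay node:

from rv import commands

# Inspect the current display pipeline structure (a list of node type names).
nodes = commands.getStringProperty("#RVDisplayPipelineGroup.pipeline.nodes")
print(nodes)  # typically ["RVDisplayColor"] by default

# Swap the default display color node for an OCIO display node.
commands.setStringProperty("#RVDisplayPipelineGroup.pipeline.nodes",
                           ["OCIODisplay"], True)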
RVRetime
Retime nodes appear in many of the group nodes to handle any time changes necessary to match playback between sources and views with different native frame rates. You can also use them for “artistic retiming” of two varieties.
The properties in the “warp” component (see below) implement a key-framed “speed warping” variety of retiming, where the keys describe the speed at a given input frame as a multiplicative factor of the target frame rate: 1.0 implies no change, 0.5 implies half speed, and 2.0 implies double speed. Alternatively, you can provide an explicit map from output frames to input frames with the properties in the “explicit” component (see below). Note that warping will still make use of what it can of the “standard” retiming properties (in particular the output fps and the visual scale), but if you use explicit retiming, none of the standard properties has any effect. The “precedence” of the retiming types depends on the active flags: if “explicit.active” is non-zero, the other properties have no effect; if there is no explicit retiming, warping is active if “warp.active” is true. Please note that neither speed warping nor explicit mapping does any retiming of the input audio.
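For example, a hedged Python sketch of explicit retiming on the retime node in the current evaluation path; explicit.active comes from the description above, while the name of the frame-map array property (explicit.inputFrames below) is an assumption to verify against the property table:

from rv import commands

# Enable explicit retiming, then hold input frame 1 for four output frames
# before continuing with frames 2, 3, 4. "explicit.inputFrames" is an
# assumed property name; check the RVRetime property table.
commands.setIntProperty("#RVRetime.explicit.active", [1], True)
commands.setIntProperty("#RVRetime.explicit.inputFrames",
                        [1, 1, 1, 1, 2, 3, 4], True)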
RVRetimeGroup
RVSequence
Information about how to create a working EDL can be found in the User's Manual. All of the properties in the edl component should be the same size.
RVSequenceGroup
The sequence group contains a chain of nodes for each of its inputs. The input chains are connected to a single RVSequence node which controls timing and switching between the inputs.
RVSession
The session node is a convenient place to store centrally located information that can easily be accessed from any other node or location; it acts almost like a global grab bag.
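For example, a hedged Python sketch that stashes a custom string on the session node and reads it back later; the component and property names ("custom.showName") are arbitrary examples:

from rv import commands

# Locate the session node and attach an arbitrary string property to it.
session = commands.nodesOfType("RVSession")[0]
prop = session + ".custom.showName"

if not commands.propertyExists(prop):
    commands.newProperty(prop, commands.StringType, 1)
commands.setStringProperty(prop, ["MyShow"], True)

# Any other script in the session can later read it back.
print(commands.getStringProperty(prop))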
RVSoundTrack
RVSourceGroup
The source group contains a single chain of nodes the leaf of which is an RVFileSource or RVImageSource. It has a single property.
RVSourceStereo
RVStack
The stack node is part of a stack group and controls settings like how each layer is composited as well as output playback timing.
RVStackGroup
The stack group contains a chain of nodes for each of its inputs. The input chains are connected to a single RVStack node which controls compositing of the inputs as well as basic timing offsets.
RVSwitch
RVSwitchGroup
The switch group changes its behavior depending on which of its inputs is “active”. It contains a single Switch node to which all of its inputs are connected.
RVTransform2D
The 2D transform node controls the image transformations. This node is usually evaluated on the GPU.
RVViewGroup
Chapter 17 Additional GLSL Node Reference
This chapter describes the GLSL custom nodes that come bundled with RV. The nodes are grouped into five sections within this chapter based on the node's "evaluationType", i.e. color, filter, transition, merge, or combine. Each sub-section within a section describes a node and its parameters. For a complete description of the GLSL custom node mechanism itself, refer to the chapter on that topic, i.e. "Chapter 3: Writing a Custom GLSL Node".
The complete collection of GLSL custom nodes that comes with each RV distribution is stored in the following two files, located at:
Linux & Windows:
    <RV install dir>/plugins/Nodes/AdditionalNodes.gto
    <RV install dir>/plugins/Support/additional_nodes/AdditionalNodes.zip

Mac:
    <RV install dir>/Contents/PlugIns/Nodes/AdditionalNodes.gto
    <RV install dir>/Contents/PlugIns/Support/additional_nodes/AdditionalNodes.zip
The file "AdditionalNodes.gto" is a GTO formatted text file that contains the definition of all the nodes described in this chapter. All of the node definitions found in this file are signed for use by all RV4 versions. The GLSL source code that implements the node's functionality is embedded within the node definition's function block as an inlined string. In addition, the default values of the node's parameters can be found within the node definition's parameter block. The accompanying support file "AdditionalNodes.zip" is a zipped up collection of individually named node ".gto" and ".glsl" files. Users can unzip this package and refer to each node's .gto/.glsl file as examples of custom written RV GLSL nodes. Note the file "AdditionalNodes.zip" is not used by RV. Instead RV only uses "AdditionalNodes.gto" which was produced from all the files found in "AdditionalNodes.zip".
These nodes can be applied through the session manager to sources, sequences, stacks, layouts, or other nodes. For example, select a source, choose "New Node by Type" from the session manager's "+" pull-down menu, and type the name of the node into the entry field of the "New Node by Type" window.
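The same can be done from a script. Here is a hedged Python sketch that prepends the bundled Matrix3x3 node to the current source's look pipeline by editing the pipeline group's node list (see the PipelineGroup description in the previous chapter); verify the default contents of that list for your RV version:

from rv import commands

# Insert the Matrix3x3 GLSL node in front of whatever the look pipeline
# of the current source already contains.
prop = "#RVLookPipelineGroup.pipeline.nodes"
current = commands.getStringProperty(prop)
commands.setStringProperty(prop, ["Matrix3x3"] + current, True)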
17.1 Color Nodes
17.1.1 Matrix3x3
This node implements a 3x3 matrix multiplication on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.2 Matrix4x4
This node implements a 4x4 matrix multiplication on the RGBA channels of the inputImage. The inputImage alpha channel is affected by this node.
17.1.3 Premult
This node implements the "premultiply by alpha" operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.4 UnPremult
This node implements the "unpremultiply by alpha" (i.e. divide by alpha) operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.5 Gamma
This node implements the gamma (i.e. pixelColor^gamma) operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.6 CDL
This node implements the Color Decision List (CDL) operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
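For reference, the ASC CDL math that this family of nodes is based on (per-channel slope, offset, and power, followed by a saturation step) can be sketched in Python as follows; this is illustrative only and is not RV's actual shader code:

def apply_cdl(rgb, slope, offset, power, saturation):
    # Per-channel slope, offset, power; negatives are clamped before the power.
    out = [max(c * s + o, 0.0) ** p
           for c, s, o, p in zip(rgb, slope, offset, power)]
    # The saturation step uses Rec.709 luma weights.
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + saturation * (c - luma) for c in out]

# An identity CDL leaves the pixel unchanged.
print(apply_cdl([0.18, 0.18, 0.18], [1, 1, 1], [0, 0, 0], [1, 1, 1], 1.0))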
17.1.7 CDLForACESLinear
This node implements the Color Decision List operation in ACES linear colorspace on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
If the inputImage colorspace is NOT ACES linear but some other linear colorspace X, then one must set the 'toACES' property to the X-to-ACES colorspace conversion matrix and, similarly, the 'fromACES' property to the ACES-to-X colorspace conversion matrix.
17.1.8 CDLForACESLog
This node implements the Color Decision List operation in ACES Log colorspace on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
If the inputImage colorspace is NOT ACES linear but some other linear colorspace X, then one must set the 'toACES' property to the X-to-ACES colorspace conversion matrix and, similarly, the 'fromACES' property to the ACES-to-X colorspace conversion matrix.
17.1.9 SRGBToLinear
This linearizing node implements the sRGB to linear transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
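For reference, the standard sRGB transfer function that such a node applies can be sketched in Python as follows (illustrative only, not the node's GLSL):

def srgb_to_linear(c):
    # Standard sRGB electro-optical transfer function, applied per channel.
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.5))  # approximately 0.214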
17.1.10 LinearToSRGB
This node implements the linear to sRGB transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.11 Rec709ToLinear
This linearizing node implements the Rec709 to linear transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.12 LinearToRec709
This node implements the linear to Rec709 transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.13 CineonLogToLinear
This linearizing node implements the Cineon Log to linear transfer function operation on the RGB channels of the inputImage. The implementation is based on Kodak specification "The Cineon Digital Film System". The inputImage alpha channel is not affected by this node.
17.1.14 LinearToCineonLog
This node implements the linear to Cineon Log film transfer function operation on the RGB channels of the inputImage. The implementation is based on Kodak specification "The Cineon Digital Film System". The inputImage alpha channel is not affected by this node.
17.1.15 ViperLogToLinear
This linearizing node implements the Viper Log to linear transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.16 LinearToViperLog
This node implements the linear to Viper Log transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.17 RGBToYCbCr601
This node implements the RGB to YCbCr 601 conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.601 specification. The inputImage alpha channel is not affected by this node.
17.1.18 RGBToYCbCr709
This node implements the RGB to YCbCr 709 conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.709 specification. The inputImage alpha channel is not affected by this node.
17.1.19 RGBToYCgCo
This node implements the RGB to YCgCo conversion operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.20 YCbCr601ToRGB
This node implements the YCbCr 601 to RGB conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.601 specification. The inputImage alpha channel is not affected by this node.
17.1.21 YCbCr709ToRGB
This node implements the YCbCr 709 to RGB conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.709 specification. The inputImage alpha channel is not affected by this node.
17.1.22 YCgCoToRGB
This node implements the YCgCo to RGB conversion operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.1.23 YCbCr601FRToRGB
This node implements the YCbCr 601 "Full Range" to RGB conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.601 specification. The inputImage alpha channel is not affected by this node.
17.1.24 RGBToYCbCr601FR
This node implements the RGB to YCbCr 601 "Full Range" conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.601 specification. The inputImage alpha channel is not affected by this node.
17.1.25 AlexaLogCToLinear
This node implements the Alexa LogC to linear conversion operation on the RGB channels of the inputImage. Implementation is based on Alexa LogC v3 specification. The inputImage alpha channel is not affected by this node. Default values are for EI=800 and LogCBlackSignal = 0.
NB: to translate LogC to linear scene data, we use the Alexa v3 spec a, b, c, d, e, f, and cutoff coefficients in their equivalent form as Alexa file-format metadata parameters (or this node's properties); see the equations below.
For a given EI (e.g. EI=800: a=5.555556, b=0.052272, c=0.247190, d=0.385537, e=5.367655, f=0.092809, cutoff=0.010591):

LogCBlackSignal    = 0
LogCGraySignal     = 1.0 / a                     (0.18)
LogCBlackOffset    = b + a * LogCBlackSignal     (0.052272)
LogCCutPoint       = e * cutoff + f              (0.149658)   i.e. LogCLinearCutPoint in RV's imageinfo
LogCEncodingGain   = c                           (0.247190)
LogCEncodingOffset = d                           (0.385537)
LogCLinearSlope    = e / (a * c)                 (3.90864)
LogCLinearOffset   = (f - d - (e * b) / a) / c   (-1.38854)
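For reference, the decode described by these parameters can be sketched in Python using the EI=800 coefficients listed above (illustrative only, not the node's GLSL):

def logc_to_linear(t, a=5.555556, b=0.052272, c=0.247190,
                   d=0.385537, e=5.367655, f=0.092809, cutoff=0.010591):
    # Above the cut point the curve is logarithmic; below it, linear.
    if t > e * cutoff + f:
        return (10.0 ** ((t - d) / c) - b) / a
    return (t - f) / e

print(logc_to_linear(0.391007))  # approximately 0.18 (mid grey) at EI=800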
17.1.26 LinearToAlexaLogC
This node implements the linear to Alexa LogC conversion operation on the RGB channels of the inputImage. Implementation is based on Alexa LogC v3 specification. The inputImage alpha channel is not affected by this node. Default values are for EI=800 and LogCBlackSignal = 0.
NB: to translate linear to LogC data, we use the Alexa v3 spec a, b, c, d, e, f, and cutoff coefficients in their equivalent form as Alexa file-format metadata parameters (or this node's properties); see the equations below.
For a given EI (e.g. EI=800: a=5.555556, b=0.052272, c=0.247190, d=0.385537, e=5.367655, f=0.092809, cutoff=0.010591):

LogCBlackSignal    = 0
LogCGraySignal     = 1.0 / a                     (0.18)
LogCBlackOffset    = b + a * LogCBlackSignal     (0.052272)
LogCCutPoint       = a * cutoff + b              (0.111111)
LogCEncodingGain   = c                           (0.247190)
LogCEncodingOffset = d                           (0.385537)
LogCLinearSlope    = e / (a * c)                 (3.90864)
LogCLinearOffset   = (f - d - (e * b) / a) / c   (-1.38854)
17.1.27 Saturation
This node implements the saturation operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
17.2 Transition Nodes
This section describes all the GLSL nodes of evaluationType "transition" found in "AdditionalNodes.gto".
17.2.1 CrossDissolve
This node implements a simple cross dissolve transition effect on the RGBA channels of two inputImage sources beginning from startFrame until (startFrame + numFrames -1). The inputImage alpha channel is affected by this node.
17.2.2 Wipe
This node implements a simple wipe transition effect on the RGBA channels of two inputImage sources beginning from startFrame until (startFrame + numFrames -1). The inputImage alpha channel is affected by this node.
Appendix A Open Source Components
RV uses components licensed under the GNU LGPL and other open-source licenses. There is no GPL code in any of RV's binaries. LGPL code for which Tweak Software (or Tweak Films) is the copyright holder is sometimes directly compiled into RV (not as a shared library).
Tweak Software takes open source licensing seriously. Open source software can have huge social benefits and we ourselves have benefited from the work of open source developers. We have in the past contributed time, code, and funding to open source projects and will continue to do so in the future.
A.1 GTO
The session file (.rv) is a form of GTO file. The GTO file library is distributed under terms similar to the BSD license and is available from our website. The GTO format was invented and is copyrighted by Tweak Films.
The GTO source distribution includes a handful of tools to edit GTO files independently of any application. Also included is a Python module which makes editing the files extremely easy.
A.2 Libquicktime
RV can use libquicktime, which is distributed under the terms of the GNU LGPL, to read and write QuickTime, DV, MP4, and AVI movie files. The libquicktime library can be found in $RV_HOME/lib as a shared object. Libquicktime is capable of reading codecs not shipped with RV. You can read documentation at the libquicktime website to find out how to write or install new codecs. Plugin codecs can be found in $RV_HOME/plugins/lqt in the RV distribution tree. New codecs can be installed in the same location.
A.3 FFMPEG
On Linux, RV uses an LGPL-only version of FFMPEG by itself and as a libquicktime plugin to decode H.264 video and AAC audio. Source code for FFMPEG is included with RV. We build FFMPEG with the flags generated by its configure script, but we do not use its make files. RV (via the ffmpeg libquicktime plugin) uses only a small portion of FFMPEG. These portions are restricted to the codecs for which Tweak Software has a license – namely the AVC1 (H.264) video and AAC audio codecs for decoding only (the MPEG4 codecs generally).
If you are using ffmpeg directly through our ffmpeg plugin, you can find directions in $RV_HOME/src/mio_ffmpeg/README for recompiling with support for additional codecs that ffmpeg supports. The obligation to sort out licensing if you do so, however, is yours.
A.4 FreeType
A.5 FTGL
A.6 LibRaw
A.7 Libtiff
Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that (i) the above copyright notices and this permission notice appear in all copies of the software and related documentation, and (ii) the names of Sam Leffler and Silicon Graphics may not be used in any advertising or publicity relating to the software without the specific, prior written permission of Sam Leffler and Silicon Graphics.
THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
A.8 LibEXIF
A.9 Libjpeg
This software is based in part on the work of the Independent JPEG Group. RV uses the Independent JPEG Group's free JPEG software library to decode jpeg.
A.10 OpenJPEG
Copyright (c) 2002-2007, Communications and Remote Sensing Laboratory, Universite catholique de Louvain (UCL), Belgium
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS `AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
A.11 OpenEXR
RV uses the OpenEXR library. The source code for the library and tools can be found on the OpenEXR web site.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of Industrial Light & Magic nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
A.12 Minizip
RV uses the minizip package (which comes with the libz source code), Copyright (C) 1998-2005 Gilles Vollant.
A.13 Audiofile
A.14 OpenColorIO
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of Sony Pictures Imageworks nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
A.15 Yaml-CPP and libyaml
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
A.16 tinyxml
A.17 libresample
A.18 OpenImageIO
Based on BSD-licensed software Copyright 2004 NVIDIA Corp. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the software's owners nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
A.19 Atomic Ops
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
A.20 Boehm-Demers Garbage Collector
THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
Permission is hereby granted to use or copy this program for any purpose, provided the above notices are retained on all copies. Permission to modify the code and to distribute modified code is granted, provided the above notices are retained, and a notice that the code was modified is included with the above copyright notice.
A.21 mp4v2
Software distributed under the License is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License.
The Initial Developer of the Original Code is Cisco Systems Inc. Portions created by Cisco Systems Inc. are Copyright (C) Cisco Systems Inc. 2001 - 2005. All Rights Reserved.
3GPP features implementation is based on 3GPP's TS26.234-v5.60, and was contributed by Ximpo Group Ltd. Portions created by Ximpo Group Ltd. are
Dave Mackie dmackie@cisco.com Alix Marchandise-Franquet alix@cisco.com Ximpo Group Ltd. mp4v2@ximpo.com Bill May wmay@cisco.com
A.22 lcms
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
A.23 OpenCV
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistribution's of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistribution's in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* The name of Intel Corporation may not be used to endorse or promote products derived from this software without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the Intel Corporation or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.
A.24 PySide
This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License version 2.1 as published by the Free Software Foundation. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
A.25 PyOpenGL
PyOpenGL is based on PyOpenGL 1.5.5, Copyright © 1997-1998 by James Hugunin, Cambridge MA, USA, Thomas Schwaller, Munich, Germany and David Ascher, San Francisco CA, USA.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Appendix B Licensed Components
B.1 MPEG-4
THIS PRODUCT IS LICENSED UNDER THE MPEG-4 VISUAL PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER FOR (i) ENCODING VIDEO IN COMPLIANCE WITH THE MPEG-4 VISUAL STANDARD ("MPEG-4 VIDEO") AND/OR (ii) DECODING MPEG-4 VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON- COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED BY MPEG LA TO PROVIDE MPEG-4 VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION INCLUDING THAT RELATING TO PROMOTIONAL, INTERNAL AND COMMERCIAL USES AND LICENSING MAY BE OBTAINED FROM MPEG LA, LLC. SEE HTTP://WWW.MPEGLA.COM.
B.2 AVC
THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON- COMMERCIAL USE OF A CONSUMER TO (i)ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD (“AVC VIDEO”) AND/OR (ii)DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON-COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM