RVIO and rthread and tga support

Hi Guys,

Love the Zendesk.  Thanks for setting this up for your users.

Having an issue with RVIO and TGA support.  Firstly, we really, really, really need accelerated support for TGA files in RVIO and RV.  To compound the slowness of TGA support in RVIO on Windows, the -rthreads flag crashes RVIO.  Attached is a sample Python script and a screen grab of the crash:

import os

inPath = "P:/Desktop/testFootage/testFootage.#.tga"
outPath = "P:/Desktop/testFootage/testFootage.mov"
cmd = "rvio"
cmd = cmd + " -rthreads " + "2"
cmd = cmd + " " + inPath
cmd = cmd + " -o " + outPath
os.system(cmd)
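As an aside, building the command as an argument list with subprocess avoids the quoting problems that os.system has with paths containing spaces. A minimal sketch, using the same (illustrative) paths as the script above:

```python
import subprocess

# Same rvio invocation as above, assembled as an argument list so that
# paths with spaces are passed correctly without manual quoting.
# The paths below are the example paths from the script, not real files.
in_path = "P:/Desktop/testFootage/testFootage.#.tga"
out_path = "P:/Desktop/testFootage/testFootage.mov"

cmd = ["rvio", "-rthreads", "2", in_path, "-o", out_path]

# subprocess.call runs the command and returns rvio's exit code;
# commented out here since rvio may not be on PATH:
# returncode = subprocess.call(cmd)
```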
If I remove the -rthreads flag, RVIO works but it's pretty dog slow.  If I swap the footage for a DPX sequence, -rthreads works and RVIO is still slow.  I can do some performance comparisons, but honestly RVIO is unreasonably slow compared to Nuke, djv_convert, and After Effects on our Windows XP 64-bit systems.

Please Help!



3 comments

  • Jim Hourihan

    Hi Mike, we're in the process of writing a TGA reader/writer similar to the current DPX/Cineon I/O. It will have the same I/O options on all platforms. I think what's going on here is that on Windows we have an umbrella library that picks up any formats we don't have optimized readers/writers for, and it appears to have a crummy TGA implementation.



  • Mike Romey

    Looking forward to it.  We really need an optimized TGA library, and we also really need fast performance from RVIO.  Anything you guys can do to make this a reality will seat RVIO into our studio well.  Seth mentioned a long time ago that you had a hardware renderer for RVIO; would it be possible to bring this back as a command flag?  This flag would essentially require the workstation to have a graphics card, as opposed to a headless render node, and I suspect it would also make RVIO perform quite a bit faster.


  • Alan Trombla

    For the record, optimized TGA support and a GPU-accelerated version of RVIO were both added in the 3.8 release (latest version at the moment is 3.8.6).