Hi! We’re trying to develop a workflow that allows us to use Blender for realtime graphics rather than Unreal, for various reasons. Right now the big problem we’re trying to overcome is synchronous rendering across multiple nodes.
With Unreal and nDisplay, we can tell each node to render only a certain portion of the whole image (by pixel coordinates). We can do the same in Blender with render regions, but there’s no way to ensure the renders are actually synchronized. Our idea is to build a program in Unreal that reads genlock/timecode from a Blackmagic card, sends that data via OSC to Blender to trigger a render, and simultaneously shares it with Lightact. The Lightact instances would then communicate with each other via Lightsync.
Of course, all output nodes will also be synced together with Quadro Sync.
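For reference, here’s a rough sketch of what we have in mind on the Blender side: a small script that listens for OSC messages and fires a render whenever a frame trigger arrives. The address (/render_trigger), port, and message layout are just placeholders, and it assumes the python-osc package is installed into Blender’s Python; it’s only meant to illustrate the idea, not a tested implementation.

```python
# Sketch: OSC-triggered render inside Blender (placeholder address/port).
import queue
import threading

import bpy
from pythonosc import dispatcher, osc_server

frame_queue = queue.Queue()

def on_trigger(address, frame):
    # Runs on the OSC server thread; just hand the frame number over,
    # since bpy isn't safe to call from background threads.
    frame_queue.put(int(frame))

def process_queue():
    # Runs on Blender's main thread via bpy.app.timers.
    while not frame_queue.empty():
        frame = frame_queue.get()
        bpy.context.scene.frame_set(frame)
        bpy.ops.render.render(write_still=True)
    return 0.001  # poll again almost immediately

disp = dispatcher.Dispatcher()
disp.map("/render_trigger", on_trigger)

server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 9000), disp)
threading.Thread(target=server.serve_forever, daemon=True).start()

bpy.app.timers.register(process_queue)
```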
My first question is whether this approach seems valid, or whether there’s something on the Lightact side that I’m missing.
My second question is whether the Lightact plugin still works when the project is packaged into an .exe, since ideally we wouldn’t need to run the whole UE editor just to read and pass along timing information.
Thanks!