That seems to work, thanks… but following the thread, the Spout device is not sending. I’m sure there’s a way to enable it on startup? (Also, my two Audio devices were not playing and didn’t have their system devices/drivers still selected.)
When I manually try to set the Spout device to send, the log says ‘Failed to initialize device: Source not selected’.
I’m not sure I understand what you mean by the Audio devices remembering the drivers they are set to – could you elaborate? Essentially, if the Stream you chose in the Audio device remains connected to your server, the Audio device should still be connected to it when you reopen the project. Is this what you were asking about?
Regarding triggering the stream of Device nodes, as you kindly mentioned, you can use the ‘Set Device Status’ node.
Regarding the Spout sender, if I understand correctly, you had selected a Spout source as required, but when you tried sending a Stream, you got the Log message saying the device failed to initialize because the source was not selected, correct? Then, when you disabled LightNet, you stopped getting this log and were able to send data as expected? The same thing happened with NDI as well, right?
Essentially, when LightNet is enabled, you do need to include an additional step – and that is to choose the machine you want to Run the node on. In the case of NDI, you would need to change the Run on property from Everywhere to your local machine, as shown in the screenshot below:
Thanks Sara. I think the two were related - when I enabled LightNet the Audio devices forgot which driver they were set to, maybe.
When I am using LightNet, is there some equivalent to a canvas on the audio side? Or is it unneeded? Specifically, let’s say I have two servers playing stereo audio, and then also I want timecode, do I need to add a Play Audio node to the layer layout for each device?
I notice that the standard Video node has a selectable Audio device, but nothing in the node graph uses that parameter? Is using the audio embedded in a movie clip good practice? I’ve become used to the disguise convention of handling audio separately.
Nope, there is no equivalent to a Canvas on the audio side.
So you would like to play an Audio file, at a specific time, as defined by your Timecode? Is this the use case? If so, there are a couple of different ways you can do this:
By using the Get Timecode Time node and then using conditional nodes to play the Audio at the correct Time.
By placing a Marker on the Timeline at the correct time, and then using the Run on Marker node which you would connect to the Play Audio node. In this case, the Timeline would have to be synced to the Timecode.
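If it helps to reason about the first option, here is a minimal plain-Python sketch of the comparison the Get Timecode Time node plus conditional nodes would effectively perform. The function names are hypothetical illustrations; in practice this logic lives in the node graph, not in code:

```python
# Hypothetical sketch of "trigger audio once timecode reaches a target".
def timecode_to_frames(tc: str, fps: int = 30) -> int:
    # Convert "HH:MM:SS:FF" into an absolute frame count.
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def should_play(current_tc: str, trigger_tc: str, fps: int = 30) -> bool:
    # Fire once the incoming timecode reaches (or passes) the trigger time.
    return timecode_to_frames(current_tc, fps) >= timecode_to_frames(trigger_tc, fps)
```

The ">=" comparison (rather than "==") matters: if the timecode reader skips a frame, an exact-match condition could miss the trigger entirely, which is the same reason a conditional-node setup should use a threshold rather than an equality check.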
Typically the best practice is to handle Audio separately from the Video but if your Video has Audio embedded to it, you still have the option to use it.
Actually simpler – I’m just looking to have a video layer on the timeline (no timecode involved, generally) and a matching audio layer, then send that to various devices. Currently I have a layer with a ‘Get Audio’ node and multiple Play Audio nodes, which seems to work.
For timecode, I’d separately like to generate arbitrary LTC (pretty normal to use an audio file) and output that to some other selected devices.
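For context on what an LTC audio file actually contains: LTC carries one 80-bit frame per video frame (SMPTE 12M layout, with BCD time fields and a fixed sync word), which is then biphase-mark encoded into the audio signal. A hedged Python sketch of building one frame’s bit pattern (non-drop-frame, user bits zeroed; an illustration of the format, not a production generator):

```python
# Sketch: the 80-bit LTC frame word for one timecode frame (SMPTE 12M).
# BCD fields are transmitted LSB-first; user bits and flags left at zero.
def ltc_frame_bits(h: int, m: int, s: int, f: int) -> list:
    bits = [0] * 80

    def put(start: int, value: int, width: int) -> None:
        for i in range(width):
            bits[start + i] = (value >> i) & 1  # LSB first

    put(0,  f % 10, 4); put(8,  f // 10, 2)   # frame units / tens
    put(16, s % 10, 4); put(24, s // 10, 3)   # second units / tens
    put(32, m % 10, 4); put(40, m // 10, 3)   # minute units / tens
    put(48, h % 10, 4); put(56, h // 10, 2)   # hour units / tens
    # Fixed sync word, bits 64..79: 0011 1111 1111 1101
    bits[64:80] = [0,0,1,1, 1,1,1,1, 1,1,1,1, 1,1,0,1]
    return bits
```

Each of these 80-bit words would then be biphase-mark encoded and written out as audio samples; since it is ultimately just an audio signal, routing it to selected output devices with a Play Audio node is a perfectly normal approach.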
What does ‘Get Audio’ do? Is it the ‘Play Audio’ nodes that are getting the correct audio for that point on the timeline?
Hi Sara, I seem to have Audio working across LightNet but not NDI or Spout.
The NDI sender is set to run everywhere. The Spout devices don’t seem to have that option.
When I try to enable the senders, on either Primary or Secondary, or via a layer layout, I get the log error about Failed to initialize device: Source not selected
Hmm - if I set the NDI sender to only run on the Secondary, then I can start it sending. (EDIT: but I don’t see anything on the signal - I’m using a Layer Layout to send to it.) The Spout device doesn’t have that option. FYI, in my use case I want the Spout senders to run on all Secondary machines (Everywhere is fine too).
EDIT - as far as I can tell LightNet is properly set up. I can transfer files and Sync and control.
We were able to reproduce the NDI sender and receiver issues, meaning they should be fixed in the next release. Thank you very much for that!
As you kindly mentioned, for now, on the NDI Sender device node, instead of selecting Run On: Everywhere, it would be best to just manually select all the servers in the Run On list individually.
You mentioned that once you successfully enabled the NDI Sender device, you were not able to send a Texture from the Layer Layouts. May I ask, were both of these statements true when you tried achieving this:
On the Primary machine, the playhead was on top of the Layer, meaning the Layer was active and the Texture was rendering.
The Secondary LightNet machine was synced to the Primary, meaning the specific Layer was also active on the Secondary machine, through which we are trying to send the NDI Stream.
If these two requirements are fulfilled, you should have no problem sending an NDI Signal through your Secondary machine. Or is this not the case for you? If so, could you please send us your project in a packaged form, so that we could take a look?
Regarding Spout, are you encountering this issue in LightNet only, or even in a single-machine setup?
Thank you once again for all of your feedback Bruce, it is much appreciated.