Dynamic blending

Hi
How do I set up dynamic blending? I have a tracked screen surface with two projectors blending onto it.
So we have a mapped surface with blending, and we know the positions of the screen and the projectors.
I would like to feed live screen-position data into Lightact (e.g. the screen moves back, increasing its distance from the projectors). Can the canvas blending parameters somehow be recalculated, for example by using a thrower? Or am I missing this feature?

K.

Hello K.

This feature is in active development but is not ready to be shipped yet (even in beta). However, if you send us your project file and tell us exactly what you want to do, we might be able to implement a quick fix just for you. Let me know over PM or e-mail if you want.

What’s available at the moment

At the moment, I’m afraid that the only solution I can think of is to import a blend map (a texture) as an image and then play with Texture resize, Texture add and other kinds of nodes.

I don’t know what your blend map and your whole setup look like, but I am assuming you used the warp & blend tools in the Canvas setup window. If so, you can export the softedge blend map from the Projector’s layout as shown below:

The screenshots are saved to: C:\ProgramData\Lightact\Lightact3\saves
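As a rough illustration of the import-and-manipulate idea, here is a minimal Python sketch (Pillow + NumPy) of what the Texture resize and Texture add nodes amount to. The filename and the 960×540 resolution are placeholders of my own, not anything Lightact produces:

```python
import numpy as np
from PIL import Image

# Hypothetical filename -- point this at a softedge blend map exported
# from the Projector's layout (see the saves folder above).
mask = Image.open("softedge_blend.png").convert("L")  # greyscale mask

# "Texture resize" equivalent: stretch the mask to the projector resolution.
mask = mask.resize((960, 540), resample=Image.Resampling.BILINEAR)

# "Texture add" equivalent: sum two masks, clipped to the valid range.
a = np.asarray(mask, dtype=np.float32) / 255.0
b = np.zeros_like(a)  # stand-in for a second mask
combined = np.clip(a + b, 0.0, 1.0)
print(combined.shape, combined.min(), combined.max())
```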

From there you could use the layout below as a guideline on how to adapt your blend mask dynamically:

A short explanation (a code sketch follows the list):

  • The Gradient node serves as a simulation of your blend mask.
  • The Float constructor serves as a simulation of your screen-tracking data. You’ll need to transform the screen-tracking data into a float that properly represents how the blend mask should adapt based on the location of the screen.
  • The Float clamp node is there just so that we don’t go over 960 px, which is the width of the projector’s texture in the sample project I am using.
  • The Vec2 joiner creates a vec2 from two floats.
  • Texture Crop or Pad in combination with Texture Blend (multiply mode) effectively stretches the gradient so that the blend mask appears bigger or smaller.
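To make the list above concrete, here is a minimal NumPy sketch of the same chain: a gradient as the blend mask, a float derived from the tracked screen distance, a clamp to 960 px, and a multiply blend. The 960 px limit comes from the sample project; the distance-to-width mapping, the reference values and all function names are my own placeholders, not Lightact nodes or APIs:

```python
import numpy as np

PROJ_W, PROJ_H = 960, 540  # assumed projector texture size (960 px from the sample project)

def gradient_mask(blend_px: int) -> np.ndarray:
    """Gradient node stand-in: ramp 0 -> 1 over blend_px, fully opaque after."""
    mask = np.ones(PROJ_W, dtype=np.float32)
    if blend_px > 0:
        mask[:blend_px] = np.linspace(0.0, 1.0, blend_px)
    return np.tile(mask, (PROJ_H, 1))

def blend_width_from_distance(distance_m: float) -> int:
    """Float constructor stand-in: map the tracked screen distance to a blend
    width. The linear mapping and reference values (4 m -> 240 px) are
    placeholders you would tune against your real tracking data."""
    blend_px = 240.0 * (distance_m / 4.0)
    return int(np.clip(blend_px, 0, PROJ_W))  # Float clamp: never go over 960 px

def multiply_blend(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Texture Blend (multiply mode) stand-in: darken the overlap region."""
    return frame * mask[..., None]

# Usage: a flat grey frame masked for a screen tracked at 6 m.
frame = np.full((PROJ_H, PROJ_W, 3), 0.8, dtype=np.float32)
out = multiply_blend(frame, gradient_mask(blend_width_from_distance(6.0)))
print(out.shape, out[:, 0].max(), out[:, -1].max())  # left edge dark, right edge untouched
```

In the actual layout the Vec2 joiner would carry the computed size into Texture Crop or Pad; in this sketch that step collapses into passing blend_px straight through.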

Again, this might not serve your purpose, but please do let me know if it solves your problem or not.

Cheers,
Mitja


Hmmm, yes, I would like to use that approach, but I think I will use Notch with a simulation of the moving screen which will modify the blend map (right now I am trying to do this on my own, just in Notch). Of course, the problem will be the extra latency, but for smooth, slow movements it could be OK.

Thanks for showing me the path.
K.

No worries. My pleasure.

By the way, which system are you using for tracking the location of the screen? Just so we know what we should focus on… 🙂

OptiTrack + Motive 🙂

Interesting. How are you getting the data to Notch? What kind of protocol do they support?

Sorry for being lazy and asking you instead of reading their documentation 🙂

Notch has an integration with Motive (it is in beta, but it works for me now 🙂).
