Using Camera/RealSense input to trigger events

Hi Again,

I was imagining having a touch video wall. I watched the most recent video using the Optical flow node, which seems to only output a CV Mat array. Is it possible to read the output from the Optical flow node and, if there is movement in a particular location, use this to interact with another object in Lightact (or trigger a video to play)? I hope this makes sense. For example: Innovative Digital Signage: Interactive Touch Video Wall - Cellcom Digital Device Catalogue - YouTube

Thanks, Jason

Hi Jason,

Welcome to the new Answerhub.

These features are coming soon, Jason (and sorry if this seems to be a very common answer to your questions :wink:)

I’ll update this topic when this feature is improved.

Cheers,
Mitja

Good to know :slight_smile: You guys are doing a great job with the platform.

Hi Jason,

Since v3.1.1, the most straightforward approach would be to connect Optical flow (or a BW & thresholded Optical flow) to the layout below:


where the Get Pixel from Texture node extracts the color of a certain pixel (in relative coordinates), which you can then check against a condition.
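For reference, here is a rough Python/OpenCV sketch of what that node chain amounts to conceptually: compute optical flow, threshold it into a BW mask, then sample one pixel at relative coordinates and trigger if it's white. The node graph does all of this for you inside Lightact; the function names, thresholds, and hotspot position below are just illustrative assumptions.

```python
import cv2
import numpy as np

def motion_at(mask: np.ndarray, rel_x: float, rel_y: float,
              threshold: int = 128) -> bool:
    """Sample one pixel of a BW motion mask at relative coordinates
    (0.0-1.0) and report whether it exceeds the trigger threshold."""
    h, w = mask.shape[:2]
    x = int(rel_x * (w - 1))
    y = int(rel_y * (h - 1))
    return int(mask[y, x]) > threshold  # NumPy indexes row (y) first

# Example: dense optical flow on a webcam, triggering on motion at (0.25, 0.5)
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mask = cv2.threshold(mag, 2.0, 255, cv2.THRESH_BINARY)[1].astype(np.uint8)
    if motion_at(mask, 0.25, 0.5):
        print("trigger: movement detected at hotspot")  # e.g. start a video
    prev_gray = gray
```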

Also, there has been a significant upgrade in Lightact's computer vision capabilities, which you can read more about in the rather dry Computer vision user guides (I should be able to make some video tutorials later this week). :slight_smile:

These new capabilities allow you to pass the computer vision data (for example, Circle and Contour descriptions) to Unreal Engine where you can create anything you want with them. For example, create procedural meshes based on this data.
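To make the idea of "passing contour descriptions along" concrete, here is a minimal sketch of extracting contours with OpenCV and shipping them as JSON over UDP. Note this is not Lightact's actual Unreal Engine transfer mechanism; the port, filename, and JSON shape are assumptions purely for illustration.

```python
import cv2
import json
import socket

UE_ADDR = ("127.0.0.1", 7000)  # hypothetical listener in your UE project
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # any BW mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Flatten each contour to a list of [x, y] points and send it as one
# JSON datagram (fine for small payloads); on the receiving side this
# data could drive a procedural mesh.
payload = [c.reshape(-1, 2).tolist() for c in contours]
sock.sendto(json.dumps(payload).encode("utf-8"), UE_ADDR)
```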

I’ll update this post in about a week to include the new video tutorials and user guides.

Cheers,
M

Thanks! This has been really interesting to try. Look forward to your videos!

Perhaps I am not understanding Get Pixel from Texture correctly. I have built a feed from the Intel RealSense camera. I’m able to fully track the movements using Find CV Contours. I resize this into a smaller texture, like a touch bar for the top of my camera view. However, it seems that Get Pixel doesn’t recognize the (white) image. Attached are a screenshot and the Lightact file. Thanks, Jason

computer vision 2.la (33.6 KB)

Hey Jason,

Always interesting to see what you are up to! :slight_smile:

It might be worth exploring the Texture slice node instead of MOG2 or Find contours. It outputs the value of the selected channel if it is in the selected range (the node’s 0–1 range maps to pixel values 0–255). Have a look at the layout below.

When you have a depth map, it’s much more efficient to use Texture slice than the OpenCV nodes; those are usually used when you need to detect things in a video (RGB) feed, not a depth map.
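Conceptually, a Texture slice is just an in-range test on the depth values, something like the OpenCV sketch below. The band 0.4–0.6 and the filename are made up for illustration; in Lightact you'd set the equivalent Mask range on the node itself.

```python
import cv2

depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # 8-bit depth preview

# A Texture-slice-style mask: keep only pixels whose value falls inside
# the chosen band. A 0-1 node range maps to 0-255 in 8-bit pixel values,
# so a range of 0.4-0.6 becomes roughly 102-153.
lo, hi = int(0.4 * 255), int(0.6 * 255)
mask = cv2.inRange(depth, lo, hi)  # 255 inside the band, 0 elsewhere

# Sample one pixel of the sliced mask, as Get Pixel from Texture would
x, y = 150, 150
print("hand in slice:", mask[y, x] == 255)
```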

Does this help?

p.s. I used this image in the layout above.

Cheers,
Mitja

The fascinating thing is that I can make this example work when using the RealSense RGB feed, but not the Stereo (3D sensor) feed…

I see that in the Get Pixel from Texture node you’ve got 10, 10 entered as the pixel coordinates. Judging from the preview image, it’s quite possible that pixel 10, 10 is indeed a black one. Can you change the coordinates a bit until you hit a white pixel?
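If it helps, instead of guessing coordinates you can locate the lit region directly. A tiny NumPy sketch of that idea (the filename is a stand-in for whatever the stereo slice looks like saved out):

```python
import cv2
import numpy as np

mask = cv2.imread("stereo_slice.png", cv2.IMREAD_GRAYSCALE)

# Find where the mask is actually lit and suggest a sample point there.
ys, xs = np.nonzero(mask > 128)
if len(xs):
    print("try sampling around pixel", int(xs.mean()), int(ys.mean()))
else:
    print("mask is fully black - nothing to sample")
```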

Thanks!
Mitja

Yes - I’m actually moving my hand around. I can see Get Pixel trigger and change the value in the Color Reader from black to white when using the RGB feed (first image). But for some reason it doesn’t trigger when using the Stereo stream. The only difference between the two setups is the feed: RGB versus Stereo…

Hmm, that’s weird. It seems to be working on my side.

What if you set the Texture slice Mask range to 0–1, so that it will be all white? And perhaps set the Greyscale input to false, so that you’ll see some differences in grey values. Does it work then?

Cheers,
Mitja

Sadly, no. I assume it’s specific to the RealSense camera…

Hey Jason,

I looked at the video you sent. From what I see I suspect the following:

  • You are trying to get the color of pixel 10,10. That’s at the very top-left corner of the texture, considering the resolution is 640 × 360 in RGB.
  • When you switch to the Stereo stream, it seems from the video that that top-left corner never gets ‘lit’, even after changing the Mask range to 0–100. The texture previews are always aligned with the left and right edges of the node above, so if you want pixel 10,10 to read as red, the texture’s top-left corner (the corner just below the bottom-left corner of the node) should be red.

Can you do me a favour and try detecting a pixel at, say, 150, 150 or something like that?
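A toy example of why the coordinates matter, with a made-up lit region standing in for where a hand would show up in the depth slice:

```python
import numpy as np

# A 640 x 360 texture: black except for a lit region near the middle.
tex = np.zeros((360, 640), dtype=np.uint8)  # rows = y, cols = x
tex[100:260, 100:500] = 255

print(tex[10, 10])    # 0   -> pixel 10,10 sits in the unlit top-left corner
print(tex[150, 150])  # 255 -> a more central pixel lands inside the blob
```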

p.s. The textures being passed around from node to node are of exactly the same type (Texture2DRef in Cinder parlance), so the Get Pixel from Texture node should work with the Stereo texture just as well as with RGB.

Cheers,
Mitja

That did it!! Thanks very much for all the help.