I am imagining building a touch video wall. I watched the most recent video on the Optical Flow node, which seems to only output a CV Mat array. Is it possible to read the output of the Optical Flow node and, if there is movement in a particular location, use this to interact with another object in Lightact (or to trigger video playback)? I hope this makes sense - for example: Innovative Digital Signage: Interactive Touch Video Wall - Cellcom Digital Device Catalogue - YouTube
You can do this with the Get Pixel from Texture node, which extracts the color of a certain pixel (in relative coordinates) that you can then check against a condition.
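If it helps to see the logic outside the node graph, here is a minimal OpenCV sketch of what such a pixel check boils down to: sample one pixel at relative coordinates and test it against a brightness condition. The function name and the threshold value are illustrative assumptions, not Lightact internals.

```cpp
#include <opencv2/opencv.hpp>

// Illustrative helper (not a Lightact API): returns true when the pixel at
// the given relative coordinates is brighter than the threshold.
bool pixelTriggered(const cv::Mat& frame, float relX, float relY,
                    int threshold = 128)
{
    // Convert relative (0..1) coordinates to absolute pixel indices.
    int x = static_cast<int>(relX * (frame.cols - 1));
    int y = static_cast<int>(relY * (frame.rows - 1));

    // Greyscale frames carry one channel; sample it directly.
    if (frame.channels() == 1)
        return frame.at<uchar>(y, x) > threshold;

    // For BGR frames, average the channels into a rough brightness value.
    cv::Vec3b px = frame.at<cv::Vec3b>(y, x);
    return (px[0] + px[1] + px[2]) / 3 > threshold;
}
```

You could then use the resulting boolean to gate whatever should react to the touch, e.g. starting video playback.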
Also, there has been a significant upgrade in Lightact’s computer vision capabilities, which you can read more about in the (rather dry) Computer vision user guides. (I should be able to make some video tutorials later this week.)
These new capabilities allow you to pass computer vision data (for example, Circle and Contour descriptions) to Unreal Engine, where you can create anything you want with it - for example, procedural meshes based on this data, as sketched below.
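To give a rough idea of what that could look like on the Unreal side, here is a hedged C++ sketch that turns a 2D contour into a flat procedural mesh. The function, the way the contour points arrive, and the naive fan triangulation (which only works for convex contours) are all assumptions for illustration, not Lightact’s actual UE integration.

```cpp
#include "ProceduralMeshComponent.h"

// Illustrative only: build a flat mesh section from 2D contour points.
void BuildContourMesh(UProceduralMeshComponent* ProcMesh,
                      const TArray<FVector2D>& ContourPoints)
{
    TArray<FVector> Vertices;
    TArray<int32> Triangles;

    // Lift the 2D contour into the XY plane.
    for (const FVector2D& P : ContourPoints)
    {
        Vertices.Add(FVector(P.X, P.Y, 0.f));
    }

    // Naive triangle fan from vertex 0 (fine for convex contours only).
    for (int32 i = 1; i + 1 < Vertices.Num(); ++i)
    {
        Triangles.Add(0);
        Triangles.Add(i);
        Triangles.Add(i + 1);
    }

    // Normals, UVs, vertex colors and tangents are left empty for brevity.
    ProcMesh->CreateMeshSection_LinearColor(
        0, Vertices, Triangles, {}, {}, {}, {}, /*bCreateCollision=*/false);
}
```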
I’ll update this post in about a week to include the new video tutorials and user guides.
Perhaps I am not understanding the Get Pixel from Texture node correctly. I have built a feed from the Intel RealSense camera. I’m able to fully track the movements using Find CV Contours. I resize this into a smaller texture - like a touch bar for the top of my camera view. However, it seems that Get Pixel doesn’t recognize the (white) image. Attached are a screenshot and the Lightact file. Thanks, Jason
It might be worth exploring the Texture slice node instead of MOG2 or Find Contours. It outputs the value of the selected channel if it is within the mask range (the range runs from 0 = 0 to 1 = 255). Have a look at the layout below.
When you have a depth map, it’s much more efficient to use Texture slice than the OpenCV nodes. Those are usually used when you need to detect things in a video (RGB) feed, not in a depth map.
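In OpenCV terms, what the Texture slice node does with a depth map is roughly a range threshold: keep only the pixels whose value falls inside the mask range and output them as white. A minimal sketch, assuming an 8-bit single-channel depth texture (the function name and the example range are mine, not Lightact’s code):

```cpp
#include <opencv2/opencv.hpp>

// Illustrative equivalent of a texture slice: white where the depth value
// lies inside [rangeMin, rangeMax] (given in 0..1, mapped onto 0..255).
cv::Mat sliceDepth(const cv::Mat& depth8u, float rangeMin, float rangeMax)
{
    cv::Mat mask;
    cv::inRange(depth8u,
                cv::Scalar(rangeMin * 255.0f),
                cv::Scalar(rangeMax * 255.0f),
                mask);
    return mask; // binary mask: 255 inside the range, 0 elsewhere
}
```

For example, `sliceDepth(depthFrame, 0.2f, 0.4f)` would isolate everything in a ‘touch’ distance band in front of the camera.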
I see that in the Get Pixel from Texture node you’ve got 10, 10 entered as the pixel coordinates. Judging from the preview image, it’s quite possible that pixel 10, 10 is indeed a black one. Can you change the coordinates a bit until you hit a white pixel?
Yes - I’m actually moving my hand around. I can see the Get Pixel node trigger and change the value in the Color Reader from black to white when using the RGB feed (first image), but for some reason it doesn’t trigger when using the feed from the stereo stream. The only difference between the two setups is the feed - RGB feed vs. stereo feed…
What if you set the Texture slice’s Mask range to 0-1, so that the output is all white? And perhaps set the Greyscale input to false, so that you’ll see some difference in the grey values. Does it work then?
I looked at the video you sent. From what I can see, I suspect the following:
You are trying to get the color of pixel 10, 10. That’s at the very top-left corner of the texture, considering the resolution is 640 x 360 in RGB.
When you switch to the stereo stream, it seems from the video that this top-left corner never gets ‘lit’, even after changing the Mask range to 0-100. The texture previews are always aligned with the left and right edges of the node above, so if you want Get Pixel to read red at pixel 10, 10, then the texture’s top-left corner (the corner just below the bottom-left corner of the node) should be red.
Can you do me a favour and try detecting a pixel at, say, 150, 150 or something like that?
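Just to show why those coordinates matter, here is a quick back-of-the-envelope check (the 640 x 360 resolution is the one from your setup):

```cpp
#include <cstdio>

int main()
{
    const double w = 640.0, h = 360.0;
    // Pixel 10, 10 sits only a couple of percent in from the top-left edge.
    std::printf("pixel (10, 10):   %.1f%% from left, %.1f%% from top\n",
                10.0 / w * 100.0, 10.0 / h * 100.0);   // ~1.6%, ~2.8%
    // Pixel 150, 150 lands much closer to the middle of the texture.
    std::printf("pixel (150, 150): %.1f%% from left, %.1f%% from top\n",
                150.0 / w * 100.0, 150.0 / h * 100.0); // ~23.4%, ~41.7%
    return 0;
}
```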
P.S. The textures being passed from node to node are of exactly the same type (Texture2DRef in Cinder parlance), so the Get Pixel from Texture node should work with a stereo texture just as well as with an RGB one.
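A minimal Cinder sketch of that point - both feeds end up behind the same shared-pointer texture type, so a downstream node can’t tell them apart (the helper function is illustrative, not Lightact code):

```cpp
#include "cinder/gl/Texture.h"

using namespace ci;

// Illustrative: the same factory call produces a gl::Texture2dRef
// (a std::shared_ptr<gl::Texture2d>) regardless of whether the surface
// came from an RGB stream or a stereo/depth stream.
gl::Texture2dRef makeTexture(const Surface8u& surface)
{
    return gl::Texture2d::create(surface);
}
```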