getToTheChopin's comments | Hacker News

Thank you! I spent a lot of time tuning the synth parameters in tonejs to try to get something I was happy with.

The scales are based on the Japanese "in" pentatonic scale (sometimes called the Sakura scale), which I find really pretty.
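For anyone curious about the scale itself: the "in" (Sakura) scale is commonly described as root, minor second, perfect fourth, perfect fifth, and minor sixth. Here's a minimal sketch of how you might derive its frequencies before feeding them to a synth like the one in tonejs (the function names and root choice are illustrative, not taken from the project):

```javascript
// Semitone offsets of the Japanese "in" (Sakura) scale from the root:
// root, minor 2nd, perfect 4th, perfect 5th, minor 6th.
const IN_SCALE = [0, 1, 5, 7, 8];

// Equal-temperament frequency for a MIDI note number (A4 = 69 = 440 Hz).
const midiToHz = (midi) => 440 * Math.pow(2, (midi - 69) / 12);

// Build the scale's frequencies from a root MIDI note.
function inScaleFrequencies(rootMidi) {
  return IN_SCALE.map((step) => midiToHz(rootMidi + step));
}

console.log(inScaleFrequencies(69).map((f) => f.toFixed(1)));
// → [ '440.0', '466.2', '587.3', '659.3', '698.5' ]
```

In a tonejs setup, these frequency values (or note names) could be passed to a synth's triggerAttackRelease calls to keep everything the app plays inside the scale.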


Here's the demo video on YouTube as well: https://youtu.be/XBn5F72aIbg


Thank you! I thought of this on a road trip and was very keen to start building it when I got home.


This is a computer vision experiment, using voice + hand gestures to control a shader animation.

This uses a liquid Chladni pattern, with controls for color and wave frequency / amplitude.

Created using threejs, mediapipe computer vision, and web speech API.
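For reference, a classic Chladni-plate field for a square plate can be written in a few lines; a fragment shader would evaluate something like this per pixel and color by the field's magnitude. This is a generic sketch of the standard formula, not the project's actual shader; n, m, and amp stand in for the wave frequency and amplitude controls mentioned above:

```javascript
// Chladni-style standing-wave field on a unit square plate.
// n, m: mode numbers (wave frequency controls); amp: amplitude control.
function chladni(x, y, n, m, amp = 1) {
  return amp * (
    Math.cos(n * Math.PI * x) * Math.cos(m * Math.PI * y) -
    Math.cos(m * Math.PI * x) * Math.cos(n * Math.PI * y)
  );
}

// Nodal lines (where sand collects on a real vibrating plate) are where
// the field is 0; the diagonal x === y is always nodal for this form.
console.log(chladni(0.5, 0.5, 3, 5)); // → 0
```

Animating n and m smoothly over time (or from gesture input) is what makes the pattern morph between modes.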


I'll try it, thank you! I separated them into completely different interaction modes to avoid misfires, but there's definitely room to make it more efficient.


I would be surprised if sign language didn't have an efficient way to convey digits.


It's a shame that computer vision tech like Leap / Eyetoy / Kinect didn't have lasting power. So much fun to build with


I made a demo game where you need to dodge the evil bouncing DVD logo by moving your body: https://x.com/measure_plan/status/1924830500541157570

I'm working on a couple other body movement concepts and hope to share soon :)


See also Webcam Mania: https://webcam.sulat.net/

It takes a somewhat simpler approach, detecting only motion, but it works well enough for such games.


Good points, maybe a second camera (phone?) pointed downwards at the tabletop would be good for that. Then the user can rest their hands in a "normal" position.

Thank you for the feedback!


Yes, I'd love to go further with this concept so that 3D / CAD designers could easily present their models during video calls.

Thank you!


Sorry about that, the instructions need to be improved.

Does this video demo help?

https://x.com/measure_plan/status/1929900748235550912

If it makes it clearer, I'll upload it to the GitHub repo directly.


That video did help. I think I was thrown off by two things: 1) I was expecting 3D controls with more direct mapping (e.g. rotating my hand rotates the model). This is more like gesture mouse controls. 2) Some of the controls were too subtle. The scaling between my gesture size and effect on screen was smaller than I expected.

Great area to develop though. There's so much untapped potential in applying Mediapipe.
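The scaling issue raised above usually comes down to a single gain constant between gesture displacement and on-screen effect. A hypothetical sketch of the direct-mapping style the commenter expected, with a tunable sensitivity (all names here are illustrative; MediaPipe-style normalized hand positions in [0, 1] are assumed):

```javascript
// Hypothetical direct-mapping control: hand movement drives model rotation,
// with a sensitivity gain so small gestures can produce large effects.
const SENSITIVITY = 2.5; // > 1 amplifies the gesture, < 1 dampens it

// prev/curr: normalized hand positions ({x, y} in [0, 1]) from a tracker
// such as MediaPipe. Returns rotation deltas in radians for the model.
function gestureToRotation(prev, curr, sensitivity = SENSITIVITY) {
  return {
    yaw: (curr.x - prev.x) * Math.PI * sensitivity,
    pitch: (curr.y - prev.y) * Math.PI * sensitivity,
  };
}
```

Exposing the gain as a user setting lets people pick between subtle, precise control and the broader, more legible motion expected during a presentation.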


Thank you for the feedback. I'll continue to work on it!

