Ok, so back in the 90s I knew someone who was working in a research department. They had a big project, known informally as "desktop computing", that reminds me of this (you'll appreciate this was cutting-edge early-90s tech, not a patch on what we have now).
Lots of work on dynamic image capture, lots of work on interpolating what’s going on due to the low (by our standards) capabilities of the camera. Except...
...the camera was, to say the least, badly documented. A research student going through the documentation (looking for something else) finally noticed the thing had been misconfigured the entire time. In fact, the camera was significantly more capable.
The camera got reconfigured, people continued to produce research papers and no-one died. It was, however, extremely funny.
Super great, super interesting to see cheaper components used for the i/o. And bonus for me, I discovered the Recurse Center while reading and clicking: https://www.recurse.com/manual
Interesting -- I had not heard of Dynamicland previously. The posted project reminds me a bit of reacTable [1], although the latter is targeted at music/media creation. It's a 'product' now, I guess, but it started off as a research project published in the early 2000s. I gather that Dynamicland goes back quite a bit further, which is interesting.
In our group we wanted to see how easy it would be to build an interactive projection system, so we did: we just mounted a projector and a camera above a table [2] -- can't believe it's already been 10 years since then. It was surprisingly simple -- we didn't bother with AR tags, just used silhouettes extracted via simple video processing as inputs to vector field operations, which generated movement and influenced physics simulations and audio-generating algorithms. The advantage is that you literally just stick your hand in there and something happens; it was great for kids. But building the whole system was quite easy too -- it would make a great "science project" for the right age.
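The silhouette-to-vector-field idea above can be sketched in a few lines. This is a toy illustration, not their actual code: it assumes a camera frame arrives as a 2D grid of grayscale values (0-255), treats anything darker than a threshold as "silhouette" (a hand over a bright projection surface), and has each silhouette pixel push nearby particles away with an inverse-square repulsion -- one simple way such a mask can drive a physics simulation.

```python
THRESHOLD = 80  # illustrative value; would need tuning for real lighting

def silhouette_mask(frame):
    """frame: list of rows of grayscale ints -> set of (x, y) silhouette pixels."""
    return {(x, y)
            for y, row in enumerate(frame)
            for x, v in enumerate(row)
            if v < THRESHOLD}

def field_at(mask, px, py):
    """Sum of inverse-square repulsions from every silhouette pixel."""
    fx = fy = 0.0
    for (sx, sy) in mask:
        dx, dy = px - sx, py - sy
        d2 = dx * dx + dy * dy
        if d2 == 0:
            continue  # particle sits exactly on the silhouette; skip singularity
        fx += dx / d2
        fy += dy / d2
    return fx, fy

# A tiny 4x4 "frame" with one dark pixel at (1, 1):
frame = [[255] * 4 for _ in range(4)]
frame[1][1] = 0
mask = silhouette_mask(frame)
fx, fy = field_at(mask, 3, 1)  # a particle to the right of the dark pixel
# fx > 0: the particle is pushed further right, away from the silhouette
```

A real version would do the same thing per video frame (e.g. with background subtraction instead of a fixed threshold), but the control flow is this simple.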
Oh, another fun "tangible computing" project from the same era was "BeatBearing", dug up the video here... [3]
I came across Reactable a couple weeks ago, and have been playing with ReacTIVision, which is their computer vision engine. It seems pretty promising for quickly bootstrapping a table-based interface. (Although I do wish their calibration docs had a bit more detail.)
Thank you! Keeping things simple, even naive, has made it easy to tear it down and start over, which I've done a few times now.
As for RealTalk, the author is also interested to see what she comes up with! In the past week I've been playing with the notion of "claims" and "wishes", inspired by a description of RealTalk by Tabitha Yong, seen in this fantastic post by Omar Rizwan: https://rsnous.com/posts/notes-from-dynamicland-geokit/
You might also want to look at natural language datalog (https://github.com/harc/nl-datalog), which is a prototype of the query language used in Dynamicland.
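For anyone curious what "claims" and "wishes" look like mechanically, here is a toy sketch -- emphatically not RealTalk's actual semantics, and every name in it is made up: programs assert claims (facts) and make wishes (requests), and a resolver matches wishes against the claim store, with `None` as a wildcard.

```python
claims = set()
wishes = []

def claim(subject, predicate, obj):
    """Assert a fact into the shared claim store."""
    claims.add((subject, predicate, obj))

def wish(subject, predicate, obj):
    """Record a request; None acts as a wildcard slot."""
    wishes.append((subject, predicate, obj))

def granted(wish_tuple):
    """A wish is granted if some claim matches it slot-by-slot."""
    return any(all(w is None or w == c for w, c in zip(wish_tuple, some_claim))
               for some_claim in claims)

claim("page-3", "is", "on-the-table")
wish("page-3", "is", None)  # "I wish to know something about page-3"
print([granted(w) for w in wishes])  # -> [True]
```

The real system reacts continuously as claims appear and disappear; this static match is just the smallest version of the pattern.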
Kudos for the project, but I am afraid that if people keep calling random AR experiments "Dynamicland", it will quickly lose meaning - see https://paperprograms.org, a project much closer to Dynamicland, but at least still cognizant of the fact it's not a "small Dynamicland":
>Paper Programs recreates just a tiny part of what makes Dynamicland so interesting. To learn more about their system and vision, be sure to visit Dynamicland in Oakland -- and please consider donating, too!
>Paper Programs is inspired by the projector and camera setup of the 2017 iteration of Dynamicland. I liked how you could physically hold a program in your hands, and then put on any surface in the building, where it would start executing, as if by magic. [...]
>In contrast, Dynamicland is a community space designed around Realtalk. Realtalk is a research operating system (in development for several years) designed to bring computation into the physical world. It is more general than papers, projectors, and cameras. Dynamicland is intended as a new medium of human communication, and is designed to be learned and used by a community of people interacting face-to-face, not over the internet.
But also -- I realize you know about the limitations, but only being able to interact with one static program at a time, and not being able to see the source code of the components, is crucial: it's what makes it "not even a Dynamiclandish prototype" for me.
I don't want to be a killjoy and dampen your enthusiasm; I think what you've done is great! I'm just irked by namedropping Dynamicland like this.
Come on, don't do that! That's like if someone shows you their play compiler and you say:
> If people keep calling random programs 'a compiler', it will quickly lose meaning... A compiler is intended as a new medium of human-machine interaction, capable of transforming arbitrary source programs into machine code...I realize you know about the limitations, but only supporting static assignment, not being able to compile functions, it's what makes it "not even a compiler" for me.
Dynamicland was always Dynamicland, even when it was starting out as a tiny implementation.
The long term goal seems to be to get to something Dynamicland like, even if the current version has no Realtalk equivalent and is just a projector. Dynamicland itself started partially as a "random AR experiment" (http://tablaviva.org/).