Bret Victor update (worrydream.com)
351 points by dcre on July 5, 2023 | hide | past | favorite | 128 comments



His latest demo of Realtalk, Biomolecular design in Realtalk, showcases the strengths of Dynamicland. Really impressive stuff, as usual. https://youtu.be/_gXiVOmaVSo?t=865


What do you think will be the keys to any future battle between this, something like Apple Vision Pro, and traditional desktop metaphors? I have to admit, on first viewing every single component of this looks unbearably clunky but presumably it’s going somewhere, and I can’t say I want to live in a future where I have to strap on a headset either.


The tree of Spatial Computing has many branches. The use of Tangible User Interface mechanics in a super specific application like this is a perfect bottom-up, design-from-function approach, and I love it. He has all the building blocks, no pun intended, to reshape the same experience deeply customized for any number of domains, like digital-twin monitoring of manufacturing plant lines, dynamic ERP dashboards, or K-12 like the original Dynamicland work. Looking forward to whatever he is getting ready to reveal.


I have become convinced that the only mass-marketable form of spatial computing is a neural implant or contact lens.


First problem: power and connectivity - you can't have wires dangling from inside your skull or eyes.

Second, that _completely_ ignores touch, which is the largest of the senses. There's a reason car companies are going back to knobs!


Third problem, all devices produce heat. Lasers, LEDs, and cameras. There's a hard limit to how much heat you want to conduct into the cornea.


Fourth problem: it will all be used for advertising, surveillance and disinformation anyway. Beamed directly into your skull and eyes so you can't avoid it? Awesome.


The sunglasses form factor would be more mass-marketable than contact lenses, I believe.


My only concern is for people like me who already have really poor eyesight (mine is -9 on one and -11 on the other). I am envious of people who can wear sunglasses, and the only option I have is to get contacts and that doesn’t seem very pleasant. Also why I never fully came around to VR/AR because it felt like I’m wearing double goggles, but I acknowledge that there are others who wear glasses and don’t share the same concerns.


What does the poor eyesight have to do with not being able to wear sunglasses?

There are prescription sunglasses for shortsightedness, astigmatism, etc.

So the sunglasses-form VR would just have to cater for that. Apple's headset-form VR already does (offers the option to have prescription lenses in it).


With extreme corrections it gets harder to put the prescription into anything other than flat lenses, and a lot of sunglasses aren't that shape. I recently went through getting prescription safety glasses and had to get special inserts to flatten out the lens profile for my relatively minor -3.75 to -4 prescription. I was able to get some relatively normal Oakleys in my script for sunglasses, though.


Also you generally want sunglasses to be much wider and taller than prescription glasses, because otherwise you get too much light bleed around the sides. Making large lenses is when the thickness takes off, especially for nearsightedness where the thickness ends up at the edges of the lens.


Or just find a pair that fits over the prescription pair and wear them both.


Not easy to wear two pairs at the same time, and not convenient either.


They probably meant clip-on sunglasses that clip to your usual glasses.


no, I do mean just wearing two pairs of glasses, one on top of each other.


I used to absolutely dread contact lenses and not understand how anyone could choose to wear them. Then I got diagnosed with keratoconus and had to wear rigid contact lenses. It turns out it's not that bad, especially with hybrids or sclerals. I imagine entirely soft ones are even more pleasant, if that's an option for you. It's gonna be weird the first few hours or so, and it will take a few days to get comfortable handling them, but it's really convenient. You don't have to clean them throughout the day like glasses, you can wear sunglasses over them, etc. I recommend you try them for a few days and see how you actually like it. It's not as scary as it sounds.


I have a matched set of -9, so I don't really have to track left/right with contacts. Which is kind of nice.

Day to day, I wear glasses, and have no plans to change that. If I'm going to a show or a movie, or if I'm planning on spending a lot of time outside, I wear the contacts. The contacts are set up for distance, not reading, and that's becoming more important as I get older.

It can be disorienting for a few minutes, I forget how much glasses distort peripheral vision.

I get a small supply of daily disposables. I keep a pair in my laptop bag. With vision as poor as ours, you know how difficult it is if your glasses are lost or damaged. It's not life changing, but it is a genuine quality of life improvement for me.

I encourage you to get a contact exam, to verify you can wear contacts. They should teach you how to put them in and take them out. It can be stressful, but you never touch your eye. The doctor or assistant can answer any questions you have. They're not particularly expensive (in the US anyway). You don't have to wear them. Try them out, they might have some benefits you haven't realized.

That said, I don't think I'd wear contacts just for AR/VR. The headsets already seem like a hassle, and extra tooling to make my eyes use them effectively, even more of a hassle.


Don't worry. I'm blind and I'm totally fucked in a 3d first ux. :)


I've put prescription lenses in all my sunglasses. Works great, except when I go into a store and I have to lift my sunglasses and suddenly it's all blurry.


I would expect the second / third generation of sunglasses form would be showing you a high quality video display instead of seeing "through" it - so I suppose correcting for vision problems would be zooming or other effects on the video. This of course is my expectation based on no knowledge in the field and probably sounds silly to those with that knowledge.


Is it the idea of contacts that you don’t like? My wife is in the -15 to -16 range with astigmatism and has worn contacts for 25 years. She doesn’t have any abnormal trouble with VR when wearing them. At -11 you can still get glasses lenses at all the budget places, 1.74 refractive index even!


A lot of online glasses places do two-for-one deals so I just get two of the same frames, one with normal lenses and one with tinted. That way I know the lenses will fit the frame.


There are sunglasses (normally worn by older people) that are meant to be worn over your normal glasses.


Feels like that's the endgame, even if it's decades away. Being able to arbitrarily address a human's supposed 576-megapixel visual resolution trumps basically anything else. Don't think this is relevant for this next generation of UIs though (if that generation does indeed materialise).


The numbers I’m familiar with are: 120M rods and 6M cones (each eye). What’s the derivation for your number? Is it projecting the rod density at the fovea over the whole visual field?


from: https://clarkvision.com/articles/eye-resolution.html

How many megapixels equivalent does the eye have?

The eye is not a single frame snapshot camera. It is more like a video stream. The eye moves rapidly in small angular amounts and continually updates the image in one's brain to "paint" the detail. We also have two eyes, and our brains combine the signals to increase the resolution further. We also typically move our eyes around the scene to gather more information. Because of these factors, the eye plus brain assembles a higher resolution image than possible with the number of photoreceptors in the retina. So the megapixel equivalent numbers below refer to the spatial detail in an image that would be required to show what the human eye could see when you view a scene.

Based on the above data for the resolution of the human eye, let's try a "small" example first. Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene. The number of pixels would be 90 degrees * 60 arc-minutes/degree * 1/0.3 * 90 * 60 * 1/0.3 = 324,000,000 pixels (324 megapixels). At any one moment, you actually do not perceive that many pixels, but your eye moves around the scene to see all the detail you want. But the human eye really sees a larger field of view, close to 180 degrees. Let's be conservative and use 120 degrees for the field of view. Then we would see 120 * 120 * 60 * 60 / (0.3 * 0.3) = 576 megapixels. The full angle of human vision would require even more megapixels. This kind of image detail requires a large format camera to record.
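The quoted arithmetic checks out; in Python, using the article's 0.3 arc-minute figure:

```python
# Megapixel-equivalent estimate for the human eye, following the quoted
# clarkvision.com arithmetic (0.3 arc-minutes of resolvable detail).

ARCMIN_PER_DEGREE = 60
ACUITY_ARCMIN = 0.3  # per the quoted article

def megapixels(fov_degrees):
    """Pixels needed to match eye-level detail over a square field of view."""
    pixels_per_side = fov_degrees * ARCMIN_PER_DEGREE / ACUITY_ARCMIN
    return pixels_per_side ** 2 / 1e6

print(megapixels(90))   # ~324 MP for the 90-degree "window"
print(megapixels(120))  # ~576 MP for the conservative full field of view
```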


Ahem, my calculation was that I once googled it and remember the number. My point’s more about having complete control of your vision rather than the specific number, but I assume that will one day be strictly better than any possible screen.


all your resolution is clumped up in the middle (around the fovea), and generally, nobody treats vision as being a rectangular grid of pixels (for many reasons).


Going to be great to have to get a neural implant to get a job.


This looks super impressive, but at the same time, I'd really like to hear from someone actually working in molecular biology what they think of this. While I work in this field, I only deal with software, not molecular design and wetlab stuff. But I see how people there currently work, and I have a very hard time imagining them fiddling with pieces of paper and origami blocks on a desk, looking at poor-quality images and awkwardly moving stuff with a laser pointer. I like the physicality, but at the same time, it seems very limiting to me in terms of scope.


This is very impressive work and I'm glad people are doing it. For any billionaire, funding it would be a no-brainer.

That said, it makes more salient for me the dichotomy between demos and reality. This demo looks very much like molecular biology does in the movies.

But what is it to look like something "in the movies"? I don't know exactly... All tools have overhead, no matter how cleverly designed. Maybe "the movies" is what you get when you ignore this overhead.

There's a clear phenomenon in looking back on old demos and concluding we must have taken a wrong turn because our systems are still not as good. This is true of Victor's own demos from a decade ago, and many well-known demos from decades earlier.

But rewatching a Victor demo earlier this year, which had blown my mind when I saw it live, it seemed full of a kind of naivety (and a bit of sleight of hand). Of course, he made clear it was only a demo. It was meant to inspire. But now I feel that's an easy excuse for not investigating what really advances our use of tools – and what holds it back.


You can just say “Worse Is Better”.

> …what really advances our use of tools - and what holds us back.

It’s money! It’s cost! The cheaper tool wins.

“Worse Is Better”


> For any billionaire, funding it would be a no-brainer.

Billionaires still want ROI.


I have no idea what I just watched. :-( Bret Victor makes me feel inadequate.


Transcribed for accessibility. (Google Lens did a really good job but I cleaned up what I noticed.)

July 4, 2023

Hi

Dynamicland is still going, just quietly. We closed the Oakland space for covid, but Realtalk development and collaborations have continued basically as originally planned, if more slowly due to our small size.

I'm hoping to spend the summer working on bionano, and get back to the new Dynamicland website in the fall. It might be ready by the end of 2023? It'll have everything.

I'll try posting at @bret@dynamic.land (currently Mastodon until we have time to do our own thing in Realtalk). I'd appreciate if you don't ask for my opinions about things.

Thanks, Bret


This is a copy of that element, you don't need Google Lens. :-)

<img width="1056" height="792" src="July2023.jpg" alt="Hi,

Dynamicland is still going, just quietly. We closed the Oakland space for covid, but Realtalk development and collaborations have continued -- basically as originally planned, if more slowly due to our small size.

I'm hoping to spend the summer working on bionano, and get back to the new Dynamicland website in the fall. It might be ready by the end of 2023? It'll have everything.

I'll try posting at @bret@dynamic.land (currently Mastodon until we have time to do our own thing in Realtalk). (I'd appreciate if you don't ask for my opinions about things.)

Thanks, -Bret">


Semantic Web FTW xD


Quick link to his Mastodon: https://posts.dynamic.land/@bret


The image is clickable.


Right now (2023-07-05T13:35:45-0400), and for me, it is not.


I stand corrected. And I see the link to his Mastodon instance is clickable too! Wow, completely missed it.


It’s an overly cute trick with poor ergonomics :)


We don’t have enough long-term research these days. I wish Bret Victor well and am looking forward to future innovations from him and his crew.


I think it's the worst indictment of our entire industry that Victor doesn't have oodles of funding for life.


The optimist in me hopes that a UBI-like system would enable more people to tackle research, nonprofits, and other long term “social good” type projects.


Perhaps he doesn't want the stress from investors who only care about their ROI?


Grant funding, I would have thought GP meant.


I think he does? Has he indicated he doesn't have enough funding?


He has publicly lamented the lack of funding for long-term research many times, yes.


Reminds me of a demo of using MS hololens to tell a scientist where to pipette something.

Just put the plates on a liquid handling robot - stop trying to treat the person as a robot.

The other aspect - the human -> computer (as opposed to the computer -> human) interface - is interesting. It sort of reminds me of the first multi-touch demos.

i.e. exploring how best for humans to communicate intent to computers - though again, a bit like the automation example above, in my view the best UI is no UI, or at least less UI!


I assume most wet labs have robots that do part of the workflow. But they are mostly useful for automating repetitive tasks, and need a lot of setup before they can get to work.

For small experiments, when setup is 90% of doing the job, then you just don't win much by letting a robot do the remaining 10%.


From what I can tell, at least for academic wet labs, robots seem fairly unavailable/unaffordable.


The university lab I interned in had a 96 well pipetting robot that was shared between a few research groups. I never used it. I think it was mostly useful if you wanted to do an experiment many times with varying parameters. But for most of the stuff we did, the steps were just not repetitive enough that programming a robot would make sense.

Also, there are usually a lot of steps between the pipetting that the pipetting robot couldn't do like loading stuff in a centrifuge, checking DNA concentration, putting it in the PCR machine, etc.


Not your typical liquid handlers, sure - but advances in robotics, like RoboCat-type things, should mean you could build a much more general robot that operates much more like a human across a chain of tasks.


That's true - but then they won't be able to use this either...


Setup is in two parts: programming the robot, and putting in and registering the reagents and source material.

Given that they can produce instructions for humans, I don't see why they couldn't produce instructions for the robot (with perhaps some human instructions for making the reagents and source material available).

A lot of traditional liquid handling robots are large and have relatively large buffer tanks, so they need flushing etc. But I don't see why you couldn't use something like RoboCat to be much more flexible for simple container-to-container pipetting.


Interesting. What are some good examples of computer -> human interface?


Depends on what you are trying to do.

One interface I think is quite good is the spreadsheet (and I'm not talking about horrible ribbons or poor charting - I'm talking about the core concept of a reactive grid of cells).

You enter numbers and the computer immediately updates the results - the cells updating is the computer -> human interface, but it feels almost invisible as part of the task. The user has a mental model of how the spreadsheet works and treats it almost like a real world object.

You get a similar effect with the drawing/painting tools which use touch.
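The reactive core is tiny, too. A toy sketch in Python (my own names, not any real spreadsheet's API):

```python
# Toy reactive "spreadsheet" cells: reading a value always reflects the
# latest inputs, which gives the immediate-update feel described above.

class Cell:
    def __init__(self, value=0):
        self._value = value
        self._formula = None  # callable of no arguments, or None

    def set(self, value):
        self._value = value

    def define(self, formula):
        """Make this cell computed from other cells, spreadsheet-style."""
        self._formula = formula

    @property
    def value(self):
        return self._formula() if self._formula else self._value

# A1 + A2 -> A3, recomputed on every read
a1, a2, a3 = Cell(1), Cell(2), Cell()
a3.define(lambda: a1.value + a2.value)
print(a3.value)  # 3
a1.set(10)       # change an input...
print(a3.value)  # 12 -- the dependent cell updates "immediately"
```

Real spreadsheets cache results and recompute dependents in topological order; recomputing on every read is just the simplest way to get the same "it just updates" feel.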


Great to see he's still around. Bret Victor's post on abstraction and his visualization of Strogatz classic paper on networks are two of my favorite pieces posted on the internet.

http://worrydream.com/LadderOfAbstraction/

http://worrydream.com/#!/ScientificCommunicationAsSequential...


His talk on "Inventing on Principle" and Randy Pausch's "The Last Lecture" are two videos I recommend every student to watch.


Oh, thanks for bringing that up. That network visualization is just so simple, so good and so inspiring. What a great way to convey research.


Glad to hear from Dynamicland; the last video got me so excited I decided to try and build a demo for tinkering with at home: https://github.com/deosjr/elephanttalk

Only dependency is on openCV through GoCV, otherwise pure Go. Supports a webcam/beamer setup. Uses Lisp for scripting instead of Lua (see https://github.com/deosjr/elephanttalk/blob/main/cmd/elephan...).

It's a lot slower/less mature than paperprograms, but it does attempt to implement the wish/claim/when model using a homebrew datalog implementation :)
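The claim/when idea itself is small enough to sketch in a few lines of Python. This is a rough illustration of the pattern only, not Realtalk's or elephanttalk's actual API:

```python
# Minimal claim/when sketch: code makes claims (facts), and "when" rules
# fire for every fact matching a pattern. A crude echo of the model.

claims = []  # (subject, predicate, object) facts
rules = []   # (pattern, callback) pairs

def matches(pattern, fact):
    # None in the pattern acts as a wildcard
    return all(p is None or p == f for p, f in zip(pattern, fact))

def when(pattern, callback):
    rules.append((pattern, callback))
    for fact in claims:  # fire on already-known facts too
        if matches(pattern, fact):
            callback(fact)

def claim(subject, predicate, obj):
    fact = (subject, predicate, obj)
    claims.append(fact)
    for pattern, callback in rules:
        if matches(pattern, fact):
            callback(fact)

seen = []
when(("page-3", "is", None), lambda fact: seen.append(fact))
claim("page-3", "is", "highlighted")
claim("page-7", "is", "highlighted")  # doesn't match the pattern
print(seen)  # [('page-3', 'is', 'highlighted')]
```

A real datalog adds retraction and joins across multiple patterns; this only shows why the model feels so direct.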


> I’d appreciate if you don’t ask for my opinions about things.

I feel this.

Nice looking web page also.


I feel that as well. I'm at the point where I am knee deep in my PhD work, and I see that my commitment to that work makes me want to remove myself from other stuff. I just don't want to use any of my energy on anything else, and especially dealing with people in anything but an uncomplicated manner is the last thing I want to do. I am sure it must be doubly so if there is an audience for whatever you say and might end up regretting.

I have often envied how (I romanticize) it was before social media and hyper-connectivity, where you might be able to just fade out a little bit to focus on a project. It's a bit hard to get that long-term focus when people expect you to be available and able to reply within the hour. Just saying "no" drains enough as it is. I don't have the backbone to put out that message, though.


I've echoed this sentiment from time to time and someone recently asked me "Why don't you just disconnect and apologize later?"

Might not work for you, but I found this surprisingly liberating: allowing myself to just drop the ball, not respond, or just say "Sorry I can't" with no justification, so I can focus my energy where I need it


That line was my favorite.

It's challenging to be a public figure, present yourself on contemporary social media, and not be pressured (implicitly or explicitly) to speak up on everything.

Though I don't read this as being about political or social issues. How else would you have someone politely say, "I'd like to be left alone to do my work, but I'd also like to share with you what I'm working on."


What do you mean? "Though I don't read this as being about political or social issues."


ha, yeah, that reads as an argument fragment. I didn't express my whole thought.

It's the sort of statement that might get someone called out as being somewhere negative on some political spectrum, but I read it here as academic.

Dude probably gets absolutely spammed with, "can you read my paper | review my project | comment on this article?" requests.


People are more likely to ask Bret about computing issues.


I just made the mistake of reading a few recent DHH blog posts, and I really appreciate the difference between these two people. Both have a large following, but one sits down and does his work, while the other owns a company and seems physically unable to stop opining.


The crown of thought leadership is heavy and full of thorns.


In defense of DHH, a lot of the things he blogs about are completed and successful (to him) company open-source projects. He's sharing what they have done. It's not all bad.

A lot of his views are contrarian, and he presents them strongly, so it can be off-putting.


look i'm as much of a bret victor fanboi as the next guy, but that one felt a little like "whoa ok you're too cool for school now huh"


I think it conveys a sense of dedication to his work, at odds with the ever-looming pressure to pontificate, more than it conveys aloofness toward others.

I’m not familiar with him or his work beyond the cursory glance at his Wikipedia page that I made after recognizing the homepage from marginalia.nu binges, but I associated the statement with a certain level of exhaustion, or disinclination rather, from the desire to publicly comment on whatever bleeding edge or attention-grabbing topic a man of his profession and experience may encounter. I have no clue what this guy does with certainty, it sounds sort of Alan Kay-like? But based off this post it sounds like there’s a lot of ground he intends to cover with whatever he’s got going on that requires a lot of thought. Thought that he is reluctant to share at a toot’s notice.


If you're not familiar with him, you could watch his "Inventing on Principle" talk [1] from 11 years ago. Starting at around the 12 minute mark should get you right to the good stuff (though it's all worth watching and thinking about). This is the talk that made him famous.

Unfortunately, the demo he showed in that talk never really turned into a product or movement that revolutionized programming, as many of us at the time hoped it might.

[1] - https://www.youtube.com/watch?v=PUv66718DII


A pitfall of "Inventing on Principle", one which I believe Victor himself later noted, is that the talk is really a modest statement on what bringing some philosophy into your work could do, but it has an overwhelmingly engaging demo. Everyone tried to copy the demo in a kind of "wave of the future next gen programming" hype cycle instead of taking home the concept and trying to put that into work they were already doing.

And, having been one of those impressed with the demo, now I look back on it thinking "this business of software is cargo-culting the whole way down, isn't it."


Light Table was the only real attempt I'm aware of. Any other notables?


The Swift creator was also influenced, though I don't remember where I heard or read that.


Yes! I've seen first hand how much it helps (is it just me in particular? maybe not everyone needs it?) to program things I can visualize (in a literal sense of seeing, but I think the essence is really being in close contact with intermediate results). So, I just make ways to visualize things :)

A particularly simple and amazing tool for this is p5.js, with an editor that, to me, enables exactly the kind of workflow shown here: editor.p5js.org

Whatever you are going to do, you can draw alongside to help you make sense of it better. I do this in my own mind sometimes, but, for example, visualizing small neural networks (which I have been messing around with) can't be done in my imagination alone.

I think the important thing is just that you have some library that makes drawing easy, and put it alongside your work. Of course, it's extra work to make diagrams of everything. But sometimes they really help. I think it's important to simply have this possibility in mind.
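For example, even a throwaway ASCII heatmap (a quick sketch, nothing library-specific) makes a small weight matrix legible at a glance:

```python
# Throwaway "look at your intermediate results" helper: render a small
# weight matrix as an ASCII heatmap instead of squinting at raw numbers.

def ascii_heatmap(matrix):
    shades = " .:-=+*#%@"  # low -> high
    lo = min(v for row in matrix for v in row)
    hi = max(v for row in matrix for v in row)
    span = (hi - lo) or 1.0  # avoid dividing by zero for a flat matrix
    lines = []
    for row in matrix:
        idx = [int((v - lo) / span * (len(shades) - 1)) for v in row]
        lines.append("".join(shades[i] for i in idx))
    return "\n".join(lines)

weights = [[0.0, 0.2, 0.9],
           [0.5, 1.0, 0.1],
           [0.3, 0.0, 0.7]]
print(ascii_heatmap(weights))
```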

(maybe it should have a name? Visualization-enhanced programming? Print-programming? I think (e.g. jupyter) notebooks strive toward this, although tbh I prefer the p5js environment without a linear format)

It should be up to us to decide if and when this extra work is worthwhile. I think really designing (and discovering) things is what benefits the most from this style (i.e. game design, product design, scientific experimentation). So you need programmers working on a design problem and/or designers working on a programming problem; or you could have them side-by-side collaborating (and not just a designer imagining things, and a programmer following specifications to the letter).

It also requires a kind of artistic taste to make things look good, but of course that's something you can develop.

It's a really enjoyable process to be able to get a feel for what you're doing!

(See here: https://editor.p5js.org/gustavo.nramires/sketches/sLA07wpa_ for some personal experiments in visualizing graphs and sparse neural nets, https://editor.p5js.org/gustavo.nramires/sketches/1Bb5VIHi2 for designing an algorithm to efficiently make a roguelike map)

I think what you're saying is that we don't get this for 'free'. There's probably no tool (although maaaybe AI could help?) that just instantly visualizes anything you could do. Instead it's up to us to do it :) (although a good graphics library and a tool that encourages you to recompile often are almost essential imo!)

And here's the kind of art I could likely never simply imagine into existence; experimentation is essential! https://editor.p5js.org/gustavo.nramires/sketches/pVFFT_8E5


I've used this technique quite a bit in the past, but often not enough. I work in games so a lot of problems are actual visual 2d or 3d problems. Things like: finding the closest valid object I can target at my given heading. Simple enough, but as things get complicated it's often the simple things that trip you up, and visualizations can often make the problem obvious.

That p5 editor looks nice for this sort of thing. It's important to be able to get the visualizations in quickly, otherwise you risk wasting time.

A tool I've dreamed up, but never seem to have time to implement, is to send debug information to some form of database, and then be able to query and render that data as you like (from another client): bar/line charts of data, spatial visuals, more abstract graphs like in your examples, timeline scrubbers, etc. Maybe some day I'll get around to making it.
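A first prototype of that wouldn't take much code; e.g. with sqlite (a rough sketch, all names mine):

```python
# Rough sketch of "send debug info to a database, query it later":
# the program logs timestamped events; any client slices them with SQL.
import sqlite3
import time

db = sqlite3.connect(":memory:")  # use a file path to share across processes
db.execute("CREATE TABLE debug (t REAL, tag TEXT, x REAL, y REAL)")

def log(tag, x, y):
    # Game code sprinkles log() calls where it would otherwise print()
    db.execute("INSERT INTO debug VALUES (?, ?, ?, ?)",
               (time.time(), tag, x, y))

log("target", 1.0, 2.0)
log("player", 0.0, 0.0)
log("target", 3.0, 4.0)

# A viewer client queries whatever it wants to chart or scrub
# (rowid breaks ties when two events share a timestamp).
rows = db.execute(
    "SELECT x, y FROM debug WHERE tag = ? ORDER BY t, rowid", ("target",)
).fetchall()
print(rows)  # [(1.0, 2.0), (3.0, 4.0)]
```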

Anyway, thanks for your comment. I'm interested in this kind of thing.


> This is the talk that made him famous.

He was already pretty famous before that from Magic Ink, if not even earlier work.


I vaguely recall a meme somewhere about how ridiculous it is to ask, say, a sports or movie star their opinion on politics or something else that isn't their field just because they're famous.

Heck, even within one's field, Bret may have neat ideas about interfaces for programming and attained a sort of celebrity status for some of us, but that doesn't mean he can confidently answer random questions from strangers. He's a researcher, not a guru.


> I vaguely recall a meme somewhere about how ridiculous it is to ask, say, a sports or movie star their opinion on politics or something else that isn't their field just because they're famous.

Jürgen Klopp when asked about his opinion on COVID?

https://www.youtube.com/watch?v=tZ4bKnpxbYc — great answer in a time where every-fucking-body had to share their opinion on it, and still do.


Those are the whitest teeth I have ever seen, incredible.

Also the message was great, the combination rarely happens!


That's so great. Thank you for sharing it.


That last part stands out to me, him being a researcher, that is. I wonder if there is any detriment for researchers or other “knowledge workers” who use social media, at the risk of exposing new developments or ideas to people who may be uninformed or unqualified on the subject. Could/does this affect what they share, or how they and others perceive the work that is shared?


Sure does. Faraday stopped his EM experiments, not because of ignorant people, but because the "great" mathematicians of the day were mocking him for his lack of math skills. Without those experiments, no one would ever have dreamt up electromagnetism. And on the flip side, "qualified" people who couldn't grok the math of Maxwell or Heaviside would dismiss it as just too complex. This type of thing happens all the time. Social media amplifies it. But you just have to find good people and ignore the herd to make progress. And don't mistake capturing attention on social media or HN for progress.


Discoveries are always preceded by a long string of "no" and "why would you do this?"

Use good science, but run weird experiments!


Tech is extremely boring on social media because there are many more opinions than those truly dedicated to the craft.

You can argue that it shouldn’t be zero sum. But time and attention both are, and you can’t focus fully if you’re trying to “grow an audience.”


Formulating an opinion worth sharing takes work.


Haha, this reminds me of the Web 0.1 post on The Daily WTF (2006):

https://thedailywtf.com/articles/web_0_0x2e_1


I love it. Only thing I'd change is it's a little too zoomed out...we don't need to see any background.

Other than that, once zooming a hair, I found it extremely quick and easy to read. For whatever reason, more so than if it were in a standard font. Three full paragraphs with zero scrolling.


But per the Web 0.1 design style guide.

"Though it may not seem too visible in the screenshot, the designer certainly went with the preferred technique of the wooden table / digital camera."


Much love for Bret Victor! I've been working on a project inspired by http://worrydream.com/DrawingDynamicVisualizationsTalkAddend... which was released 10 years ago!


Anyone interested in Dynamicland may want to check out Paper Programs.

It's been about 5 years since I last played with it.

It’s great fun and should be much more interactive with 5+ years of browser and hardware development.

https://github.com/janpaul123/paperprograms


I had this set up on a vertical wall at my previous apartment and it was so much fun. I’m working on getting it set up in my current place, too, and I’d really like to get it projecting downwards so that I don’t have to deal with taping things to the wall.

FYI for those interested: you need a color printer, a good projector, and a good webcam. I already had the printer, got a projector as a hand-me-down, and shelled out the $ for a good webcam.

It can be a little finicky if you don’t have the perfect setup. For me, I got it to work in an hour or so, once I figured out a good mounting location for the projector. Then I had to periodically fiddle with the webcam settings as the light in the room changed. I’m considering investing in black fabric to improve the contrast.

Also, I suspect much of Dynamicland's magic is in RealTalk, which is not open source. The Paper Programs repo uses JavaScript with a Realtalk-esque mini framework. This means that Paper Programs doesn't quite achieve the goals set out by Dynamicland, i.e., that it should be approachable and eminently learnable for all ages, backgrounds, and skill levels. My partner and I are both programmers, and both patient, so we've been able to replicate a little bit of Dynamicland's social magic.


I tried setting up Paper Programs in my office, and had similar issues trying to get it to work. Glad that you were able to play with it.


I am glad he's still doing it but I am not very optimistic that a tangible computer built for children and laymen will lead anywhere good.

Even getting professional programmers to reason correctly about their program is hard. And getting them to not build "intuitive" tangles that devolve into complete spaghetti is even harder.

Everything I know tells me that the software we consider good and intuitive actually requires an enormous amount of unintuitive design and engineering to pull off. Not appreciating this, and not knowing what you don't know, is a plague in modern application development, the kind which makes people boot up Windows 3.1/NT and remark how utterly instant everything is.


A quick introduction of what this is about would be great!


Bret Victor is a programmer and programming communicator (because he makes really good videos and articles) who has, for a number of years, been working on a project involving programming using physical objects and image recognition. It's called DynamicLand. The latest demo video is linked throughout this thread. This appears to be a fairly standard progress update, but it being a photograph of paper on a table is very much in keeping with the idea of DynamicLand, which I guess is why it's been posted.


I guess it's this?

https://dynamicland.org/

Some kind of paper computing collaborative art project?


I wish more of this Dynamicland stuff was made available. I'd be happy to set up a projector and camera on my ceiling if my kids could play with some of this stuff.


If you just want the projector and the camera stuff there's https://paperprograms.org/


I have a faint memory of watching a YouTube demo video a couple of years back of a concept similar to Bret Victor's DynamicLand where you connect lego-block-like things together and they do cool stuff. Does anyone remember what that product is, and what its current status is?


I don't know if this is it, but your description reminds me of HyperCubes which was a research project using AR: https://youtu.be/GnVp4hrQ7QU?t=82


When I first saw Dynamicland I assumed it'd quietly evaporate within a month, I'm glad to hear it's still being built.


Can we all take a moment and appreciate that Bret Victor's natural handwriting is good enough to make a typeface from?


Let's remember that Bret Victor is famous for presentations, a bit like Grant Sanderson (3Blue1Brown). So it shouldn't come as a surprise that he has a pleasant voice and handwriting, and a charismatic way of expressing himself. I'm not particularly a Bret Victor cynic, but let's be clear what he is celebrated for: it is presentation.


I still don’t get what this is about


dang, now inexplicably I need his opinion on things...


Is this someone we are supposed to know about?


If you haven't yet seen this video, please do: https://www.youtube.com/watch?v=8pTEmbeENF4 (Bret Victor The Future of Programming)

This won't tell you who he is, exactly, but it does give insight into how he thinks. And it's a pretty great talk, in my opinion.


I'm afraid I don't have time to watch some video -- please provide a 3-sentence synopsis here.



Thanks for posting this. I was not inclined to respond after already providing information that he could have viewed when time afforded him. Here's hoping he has time to read a Wikipedia article instead of having people do the work for him.


I got the sinking feeling something might have happened to Bret Victor when I saw the title. Thankfully, it's just an update about his project.

Maybe it can be slightly edited to reflect that.


Me too. "Short update from Bret Victor" would be less worrying :)


I don't like this.

This reduces the scientist or designer to some small cog in the machinery, to the point where they are further alienated from their work.

Remove the scientist and replace them with an algorithm doing some poor facsimile of gradient descent for some protein energy metric. Or make it multiplayer where some crowd is doing the depth first search. No individual.

The only analogy I can think of is cooking. You can reduce all of the steps into a checklist, which can be optimized, but instinct is still the differentiating factor. Or rather, taste.

Maybe this helps designers develop taste better, but imo the interface is not the problem. It's training scientists to have a better sense of taste or smell. There's a dimension above all of this that is still yet to be tapped.


I am not in the field but I just felt trapped by the whole thing. To me it appears to be skeuomorphic design to the nth degree. Why would you create an entire system of artifacts in the real to replace something you could do more flexibly with higher resolution and better visualization in CAD?

Like if these were real processes you were doing in the lab and they were somehow being captured more accurately or efficiently, I could see that perhaps making sense.

But this just seems like extra steps and the novelty is you’re working on a table. It feels claustrophobic somehow.


In regards to the specific UI concern around it feeling claustrophobic, I can see that, but I feel like there are many use cases where focus via constraints is very useful. Turning abstract ideas into a real thing can be powerful at helping humans see the connections and hidden relationships that are sometimes difficult to see in the digital realm.

That is essentially the AR value proposition in its entirety, in my opinion: meeting the human experience halfway, so we can use our immense spatial reasoning skills in an environment that supports them.


I've seen molecular chemists and biologists (and even, for a weird set of 3d shapes, mathematicians) experiment with physical stuff before transferring the ideas elsewhere. The tactile aspect, and the ability to intuitively rotate a 3d shape while visually perceiving it directly rather than as a translation of something else, can be a really helpful tool for thought, so snap together molecule sets seem to be a pretty common possession. Perhaps it'll never catch on amongst younger scientists, but I can certainly see practitioners who're currently, say, 30-35+ getting a blast out of using it at least sometimes.


I think spatially, which I 1) did not recognize that I was doing for a long time, and 2) thought was much more common than it is.

For years I tried to figure out why my short term memory seemed so good even though I'm neuroatypical in a way that's usually associated with reduced working memory. Turns out I've been building mind palaces basically for longer than I can remember, but not with images, which is why none of the descriptions ever resonated for me. Mind palaces with a blindfold on, if you will.

Being able to rotate items in my brain sure came in handy as a bike mechanic. I'm also handy in a move when you have one box that won't fit in your car/truck. But it also led to arguments with my father about whether I was turning a bolt the wrong way, when wrenching a bolt that is on the back side of something.


I can't really visualize complex shapes in 3D and so molecular graphics (like pymol) is really important to me. Even in other areas, like math shape viz (which I do for "fun") and various other 3D modelling, it's really key that I have a quick way of assembling things and rotating them around.


I dunno, now you’ve got a bin of parts to store, and what happens if they get lost or damaged? You order more and wait for shipping? Print more and have grad students glue magnets in? The mean-time-to-kitchen-drawer seems low.

I wonder if someone needs to better introduce the SpaceMouse to this crowd.


One would anticipate the people who've already owned snap together molecule sets for years have either already figured out a good way to make sure a bin of parts stays where it's put or accepted that it's their fate to order more of things sometimes.

Similar to teaching yourself to always put tools back in the toolbox immediately after use lest they go for a walk, really.

I mean, I do see your point about -you- not wanting to deal with a bin of parts, I'm not sure I would either - but for the people who've already encountered that problem and found a way to deal with it that works for them, it doesn't seem like it'd be an obstacle.


I guess that answers it for me and seems to reinforce it is strongly skeuomorphic — it isn’t really possible to have a box of aminos/proteins in the same way you can have a box of Hs Os and Cs. They wanted a box of parts and this is the closest thing.


I have another take on this but I think we end up at the same place.

Someone once said that Civilization is defined by our ability to think in the abstract.

Scientists spend most of their time thinking in the abstract. Philosophers too. Designers and teachers are translators from the abstract to the real and back, or at least the good ones are.

Giving scientists 'tools' that reduce abstract thought is a crutch that, when overused, will lead to injury.

I don't think that's who Bret is targeting, but if it is I hope he goes back to his first principles soon and doesn't end up where many of us are, working to create something and finding we contributed to the creation of something a lot darker.



