Doom's staying power might be the most remarkable thing about this. I can't find it, sadly, but I still vaguely remember a PC Magazine article around 1994 about a Doom mod that would run on Unix and delete files in a similar manner.
'Can it run Doom' and 'Can it run Crysis' are two very famous benchmarks for entirely different goals, but they still stick around because they're easy to remember and easy to grasp. The versatility of Doom is really one of the best examples of hackery there has ever been.
Right? I think it's because DooM is so recognizable and so snappy. Fast action, clear visuals, no cutscenes or chainsaw animations to interrupt the flow. Only the Quake 1/2/3 games have similar traits and are also used for tech demos and proofs of concept.
Hm. Wired #1 probably predates psdoom by a few years. Psdoom was developed later AFAIK. 1999 or so? At least after 1997 when Id released the Doom source code.
Thinking about VR here, not just 3D: I played a bit of VRChat recently, and it surprised me how well multiplayer 3D rooms in VR work for socializing. They have benefits over normal audio and video calls: people can naturally cluster into separate conversations and move between groups as in real life; body language (hand and head positions are tracked and visible to other people) makes it easier to see who is paying attention to whom; and the sense of presence given by VR makes your mind feel more involved in the social situation. I really think this is something VR legitimately does better than traditional computer interfaces.
So if there are development or work scenarios where VR is beneficial, the benefit could come from VR being better for realtime collaboration between people in some scenarios.
I was working on a 'visual debugger' that centered around a 3D display of data structures evolving in time[0], and learned some lessons from it.
I would say the key thing is to understand the specific properties that differ when adding another dimension.
The main thing you gain, in terms of expressing information or designing an interaction scheme, is another axis along which to 'analyze' your problem: this is a generalization of the basic idea you see with a 3D data plot vs. a 2D one. The third dimension is a new element of the 'syntax' of your UI, which can be made to map to a specific concept in your program.
So then the question is: does the problem you're trying to solve have a matching structure that would be clarified by having an additional axis to map its visual representation on to?
Additionally, the property that caught my attention initially was using 3D space plus human visual processing to define an information prioritization scheme: 'depth' into a 3D scene is a natural, ready-made system for indicating priority:
Objects nearer to the camera (less 'depth') occlude farther objects. Farther objects have their (projected) size reduced as a function of their depth, and are often darkened as well. All this adds up to a convenient/flexible way of talking about priority (from the programmer's perspective) and a natural way of reading priority (from the user's perspective).
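As a rough illustration of that mapping (a hypothetical sketch, not code from my project; the names and constants are made up), here is how a priority value could be turned into depth, projected size, and a darkening factor:

    # Hypothetical sketch: map an item's priority onto depth, then derive the
    # visual cues a perspective projection gives you "for free".

    def depth_cues(priority, max_priority, near=1.0, far=10.0, focal=2.0):
        """Lower-priority items are pushed deeper into the scene."""
        p = priority / max_priority              # 1.0 = most important, stays near the camera
        z = near + (1.0 - p) * (far - near)      # depth: important => small z
        scale = focal / z                        # projected size shrinks with depth
        brightness = max(0.2, 1.0 - (z - near) / (far - near) * 0.8)  # distance fade
        return z, scale, brightness

    for prio in (3, 2, 1):
        z, scale, brightness = depth_cues(prio, max_priority=3)
        print(f"priority={prio}: depth={z:.1f}, scale={scale:.2f}, brightness={brightness:.2f}")

The point is just that occlusion, size, and brightness all fall out of a single number (depth), so the renderer gives you the prioritization cues essentially for free.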
That's the theory anyway: my experience with it was a mixed bag, and at least for my specific project, when I think about how I'd do things differently now, I'd just stick with 2D, at least to start with.
There are a lot more unsolved problems that show up when thinking about UI in 3D, possibly because the design space is larger, but certainly because it's less explored. The number of ways things can go wrong, or at least the considerations that need to be made, from both programming and design standpoints, is also significantly larger in my experience.
That's really all I ask. I doubt very much that any movement in virtual space is going to be as efficient as a keyboard and mouse, so just let me see everything and call it a day.
Honestly, that Community episode was the only thing I could think of during this discussion.
Unless someone thinks of something that increases information density in the VR environment by an order of magnitude (or several), it will be about this ridiculous.
That's only true if you don't consider the context that the code lives in. For example, coloration in editors could be considered a visualization of additional dimensions of info (syntax, types), and autocompletes are like a branching, "time-like" dimension. Besides that, code mostly tries to solve problems that can be represented and shown visually.
The need exists; how to turn that into a solution with a usable interface that I'd use for multiple hours a day is beyond me, though.
Computers have essentially 1D instructions: they execute instruction after instruction. Making a 2D programming language would add an extra layer of abstraction between the programmer and the instructions, and there would be performance and memory-use issues.
An IDE that takes advantage of depth could be kinda cool. Imagine if every level of indentation resulted in text that appeared to be further away. Best of all, it would discourage spaghetti code by making it physically uncomfortable.
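A toy sketch of that idea (purely hypothetical, not an existing editor feature): compute a z-offset and a text scale per line from its indentation level, so deeper nesting literally recedes from the viewer:

    # Hypothetical sketch: push each line of code "away" from the viewer
    # proportionally to its indentation level.

    SPACES_PER_LEVEL = 4
    DEPTH_PER_LEVEL = 0.5   # how far back each indent level moves the text
    BASE_SCALE = 1.0

    def line_placement(line):
        indent = len(line) - len(line.lstrip(" "))
        level = indent // SPACES_PER_LEVEL
        z = level * DEPTH_PER_LEVEL          # deeper nesting => further away
        scale = BASE_SCALE / (1.0 + z)       # perspective-style shrink
        return level, z, scale

    source = [
        "def handler(event):",
        "    if event.ok:",
        "        for item in event.items:",
        "            process(item)",
    ]
    for line in source:
        level, z, scale = line_placement(line)
        print(f"level={level} z={z:.1f} scale={scale:.2f} | {line}")

Deeply nested code would end up small and hard to read, which is exactly the point.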
That would be pretty cool, though. Imagine coding with VR goggles on: you move your head forward to "zoom in" to the code, then, with an eye tracker built into the goggles, move the cursor and code. Using physical head gestures for navigation, if done correctly, has great potential.
I think this might open up new ways of interacting with systems that are more accessible than they are today. Currently, system diagrams are limited by how much info you can cram into two dimensions; without some kind of abstraction they mostly become too information-dense to be useful.
What could we do with 3D? What if we could build and lay out different systems/ system designs in a 3D space? What if I could point to our MySQL cluster, our kubernetes cluster, our applications etc?
Only half formed thoughts but I am super excited by what avenues VR will open up for developers.
I personally really don’t like staring at text in VR. There’s less angular resolution and more visual artifacting in the HMD compared to a good 72dpi screen (or even better, a retina screen) at 2 feet.
Given that the most natural space for a human to operate in is 3D (given the physical world), I believe so. I think the fact that GUIs mostly show 2D data is a product of the limitation of their display medium. For example, watching videos in 360 degrees has been possible for a while now.
I think VR might have potential (forgetting motion sickness), but one thing I think we need is the ability to see our hands. Therefore I think AR is realistically the productivity interface of tomorrow.
I think ultimately AR solves some massive problems:
1. Your "display" is wherever you need it to be. No need to lug around a laptop or monitor. It's as large or small as you need it to be, even partially transparent if required.
2. The ability to integrate information overlay on reality will be really useful, especially if natural hand tracking continues to get better. Imagine a sat nav app helping you navigate some populated area or building. Imagine an app guiding you through the shopping mall based on an optimal route computed from your shopping list, performing price comparisons for you. Imagine working on a document or piece of code and being able to walk over to your colleague and share your display with them. The possibilities are potentially endless.
3. Energy usage should be much less, as you're not wasting light making your background brighter. Suddenly your only real energy concerns are computation, and with cloud computing and internet connectivity improving, maybe you can offload most of the energy usage onto dedicated machines elsewhere.
My guess would be that within 10 years we see some key breakthrough. It's really a shame that Google backed off from their Glass project.
Disagree. Humans operate in a 3D world, but act on a 2D plane. Very few places in nature let you stack one sector over another (one person over another). We perceive in 3D, but there are four cardinal directions and no up and down, because we don't fly and don't dive very often. We're like Warcraft 3 / StarCraft (2) creatures.
For this reason, even if we had flying cars, we would crash all the time. Pilots get disoriented easily, and they are the cream of the crop.
It sounds like you're considering a human moving about a mostly flat world, rather than interacting with nearby objects. Also, I would argue that humans still have a natural sense for traveling along the Z-axis; after all, we used to climb and plan paths in 3D.
VR hand tracking is already shipping. You can see your hands in VR either via capacitive sensing on the controllers, as on the Valve Index, or via pose estimation on camera data, as on the Oculus Quest.
For software development, I don't quite see it, because what we are building is not really a 2D or 3D thing but something more conceptual (not sure what the right word would be). Visualizations help to understand and troubleshoot a system, but for the actual changes or additions I see more potential in NLP to communicate intent to the computer, which it then turns into code, rather than in different GUIs. 3D GUIs, I reckon, would be similar to visual programming languages in the sense that they are really useful for certain use cases and for interacting with a lower-level system, but don't universally replace the environments we currently have.
Where I think 3D has much more potential is in using something like Google Glass or a full-on VR headset to augment what is seen in the "real world" in professional settings.
E.g. a couple of years back I had a conversation with my sister, who is an anesthetist (at the time in the heart surgery department, if I remember correctly), about them experimenting with robotically-assisted and remote surgery. If you only see what you are working on through a camera, where depth perception can be problematic, I could see it being beneficial to have real-time overlays that outline shapes or otherwise augment what you are seeing.
There might also be interesting things in manufacturing, like showing you whether you are within tolerances during assembly, or having some algorithms/AI run over the feed to highlight aspects that seem out of the ordinary.
But in either case it might well be more of a distraction than actually useful for those people.
The spatial structure is important. Right now there's nothing, not even tensorboard, that preserves spatial structure. For example, look at these two images:
You're looking at weights, followed by the bias. You can see visually that the bias lines up with the weights. If you tried visualizing that with tensorboard, it'd shrink the images to squares with padding. That would be fine if you could rearrange the squares so that one is above the other, but you can't, meaning you lose the ability to visually correlate what's going on.
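A minimal sketch of the kind of view I mean (assuming matplotlib and placeholder layer shapes; this is not something TensorBoard provides): plot the weight matrix and the bias vector as images sharing the x-axis, so each bias entry sits directly under its column of weights:

    # Hypothetical sketch: show a dense layer's weights with its bias directly
    # underneath, sharing the x-axis so each bias entry lines up with its column.
    import numpy as np
    import matplotlib.pyplot as plt

    weights = np.random.randn(64, 256)   # placeholder values, shape (inputs, outputs)
    bias = np.random.randn(256)

    fig, (ax_w, ax_b) = plt.subplots(
        2, 1, sharex=True, gridspec_kw={"height_ratios": [weights.shape[0], 4]}
    )
    ax_w.imshow(weights, aspect="auto", cmap="viridis")
    ax_w.set_ylabel("weights")
    ax_b.imshow(bias[np.newaxis, :], aspect="auto", cmap="viridis")
    ax_b.set_ylabel("bias")
    ax_b.set_yticks([])
    plt.tight_layout()
    plt.show()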
Someone please make an ML doom blaster so that I can at least reset layers of my ML model by shooting them.
When I was 10, I imagined the internet was like a first person 3D game and the police would have to hunt hackers by running after them in the virtual world.
This reminded me of being around the same age and thinking that if a friend and I were on the same webpage, we would see each other on screens somewhere in the browser window.
A 3D environment might be the most intuitive one in the long run, but it will never be faster, because you're either moving through 3D space just to manipulate a virtual object, or waving your hands around in AR/VR space to manipulate virtual objects.
Meanwhile, on the other side, I'm simply moving my fingers and hands swiftly across a small surface to perform the same action.
Well, tabs are to a window what full-size windows are to a screen.
Which is to say, yes, we do seem to have difficulties with overlapping multiple surfaces slightly differing on the (virtual) depth axis.
Curiously, DOOM was more of a 2D game with 3D graphics. Altitude was irrelevant to the gameplay except for falls and elevators; I mean that two objects weren't allowed to be at (x, y, z1) and (x, y, z2) at the same time. From about that time I also remember "Descent", which was much less playable, chiefly because it was fully 3D and its higher levels often gave no "ground" reference (i.e. the z axis was not special at all).
Depends on your problem: programming/teaching a robotic arm in all dimensions is often faster in a 3D environment or VR, but programming that arm in just two dimensions is probably done faster in normal code, i.e. 2D.
It's a bit like the GUIs vs. terminal discussion, even if both of them are 2D.
Other Doom-related news: An Amiga Doom clone is coming along nicely after years of Amiga users wanting more Doom :-) (The video has some interesting technical details)
This kind of thing is what makes me love the hacker community so much. There is probably no conceivable practical use for this (at least from what I can see right now), but it’s fun, and was a challenge, and someone just went ahead and did it.
We took Show HN out of the title because the project author appears to be someone else. Show HN is for sharing your own work: https://news.ycombinator.com/showhn.html.
If you're storax and I got that wrong, please let us know at hn@ycombinator.com and we'll put it back.
Would be great if this could be abstracted out of Kubernetes. Like killing system processes in Doom! Chrome is hanging again... let me fire up Doom and shoot its face; what a stress relief!
> This is a fork of the excellent gideonred/dockerdoomd using a slightly modified Doom, forked from https://github.com/gideonred/dockerdoom, which was forked from psdoom.
I did something similar but with Minecraft.
Your k8s resources become animals (pods are pigs, services are chickens etc) and if you kill them they get killed on the cluster.
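The mapping in projects like these tends to boil down to something small. A minimal sketch (not the actual code of the Doom project or the Minecraft one) using the official kubernetes Python client: enumerate pods as game entities and delete the pod when its entity is killed:

    # Hypothetical sketch of the pod <-> game-entity mapping. Requires the
    # official `kubernetes` Python client and a working kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def list_entities():
        """One 'monster'/'animal' per pod: (namespace, name) pairs."""
        pods = v1.list_pod_for_all_namespaces(watch=False)
        return [(p.metadata.namespace, p.metadata.name) for p in pods.items]

    def on_entity_killed(namespace, name):
        """Called by the game when the player kills the mapped entity."""
        v1.delete_namespaced_pod(name=name, namespace=namespace)
        print(f"deleted pod {namespace}/{name}")

    if __name__ == "__main__":
        for ns, name in list_entities():
            print(f"spawning entity for pod {ns}/{name}")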