Kube Doom, Kill Kubernetes Pods Using Id's Doom (github.com/storax)
257 points by nfrankel on Oct 5, 2020 | 80 comments



Doom's staying power might be the most remarkable thing about this. I can't find it, sadly, but I still vaguely remember a PC Magazine article around 1994 about a Doom mod that would run on Unix and delete files in a similar manner.


You might be thinking of psDoom, which let you kill processes. http://psdoom.sourceforge.net/


Which this Kubernetes version is a fork of a fork of.
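For the curious, the core mechanic shared by the whole lineage is tiny: map each pod to a monster, and when the monster dies, delete the pod. A rough sketch of the idea in Python (my illustration of the concept, not kubedoom's actual code, which drives a patched Doom binary):

```python
import subprocess

def list_pods(namespace="default"):
    """Return pod names via kubectl; each one becomes a monster in-game."""
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace,
         "-o", "jsonpath={.items[*].metadata.name}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def on_monster_killed(pod, namespace="default", run=subprocess.run):
    """When a monster dies in Doom, delete the pod it represents.
    `run` is injectable so the wiring can be exercised without a cluster."""
    return run(["kubectl", "delete", "pod", pod, "-n", namespace])

# Dry run: record the command instead of hitting a real cluster.
calls = []
on_monster_killed("web-7d4f9", run=lambda cmd: calls.append(cmd))
assert calls == [["kubectl", "delete", "pod", "web-7d4f9", "-n", "default"]]
```

The pod name "web-7d4f9" is made up for illustration; kubedoom itself respawns monsters as the cluster reschedules the deleted pods.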


I just went to google this, and saw it was from 2001. I'm so old.

https://www.cs.unm.edu/~dlchao/flake/doom/chi/chi.html


There was one for files too, but I remember it becoming popular closer to '96 or '97.



That's absolutely Jurassic ;)


'Can it run Doom' and 'Can it run Crysis' are two very famous benchmarks for entirely different goals, but they still stick around because they're easy to remember and easy to grasp. The versatility of Doom is really one of the best examples of hackery there has ever been.


Right? I think it's because DooM is so recognizable and so snappy. Fast action, clear visuals, no cutscenes or chainsaw animations to interrupt the flow. Only the Quake 1/2/3 games have similar traits and are also used for tech demos and proofs of concept.


I replayed it a couple years ago and it holds up really, really well. Great level design, fantastic gameplay.


Reminds me of http://psdoom.sourceforge.net/, from way back when, using 'Doom' as an interface to manage Linux processes.


Looks like it's a fork of a fork of psdoom


I seem to recall this being covered in Wired #1 way back.


Hm. Wired #1 probably predates psdoom by a few years. Psdoom was developed later AFAIK. 1999 or so? At least after 1997 when Id released the Doom source code.


Hmm, it was in 8.02

https://www.wired.com/2000/02/commando-line-interface/

Wish I still had some of these


Half-OT:

Do you think that a 3D environment can in some way be(come) a better GUI for development/operations than the 2D stuff we have now?


Thinking about VR here, not just 3D: I played a bit of VRChat recently, and it has surprised me how well multiplayer 3D rooms in VR work for socializing. It has benefits over normal audio and video calls: people can naturally cluster to have separate conversations and move between groups as in real life, there's the body language (hands and head positions are tracked and visible to other people) that make it easier to see who is paying attention to who, and the sense of presence given by VR makes your mind feel more involved in the social situation. I really think this is something that VR legitimately does better than traditional computer interfaces.

So if there are development or work scenarios where VR is beneficial, the benefit could come from VR being better for realtime collaboration between people in some scenarios.


I was working on a 'visual debugger' that centered around a 3D display of data structures evolving in time[0], and learned some lessons from it.

I would say the key thing is to understand the specific properties that differ when adding another dimension.

The main thing you gain in terms of expressing information, or in terms of designing an interaction scheme, is another axis on which you can 'analyze' your problem: this is a generalization of the basic idea you see with a 3D data plot vs 2D. This 3rd dimension is a new element of the 'syntax' of your UI, which can be made to map to a specific concept in your program.

So then the question is: does the problem you're trying to solve have a matching structure that would be clarified by having an additional axis to map its visual representation on to?

Additionally, the property that caught my attention initially was about using 3D space plus human visual processing to define an information prioritization scheme: 'depth' into a 3D scene is a natural, ready-made system for indicating priority:

Objects nearer to the camera (less 'depth') occlude farther objects. Farther objects have their (projected) size reduced as a function of their depth, and are often darkened as well. All this adds up to a convenient/flexible way of talking about priority (from the programmer's perspective) and a natural way of reading priority (from the user's perspective).
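The depth-as-priority scheme reduces to simple perspective arithmetic. A toy sketch (the function and the falloff constants are my own, not from any real toolkit):

```python
def project(size, brightness, depth):
    """Perspective projection: apparent size shrinks with depth,
    and objects are darkened the deeper they sit in the scene."""
    if depth < 1.0:
        depth = 1.0  # clamp: objects at/near the camera stay full size
    apparent_size = size / depth                     # inverse-depth scaling
    apparent_brightness = brightness / depth ** 0.5  # gentler falloff for shading
    return apparent_size, apparent_brightness

# A high-priority object sits near the camera, a low-priority one far away.
near = project(100.0, 1.0, depth=1.0)
far = project(100.0, 1.0, depth=4.0)
assert near[0] > far[0]  # nearer objects read as larger...
assert near[1] > far[1]  # ...and brighter, i.e. higher priority
```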

That's the theory anyway: my experience with it was a mixed bag, and at least for my specific project, when I've thought about how I'd do things differently now I think I'd just stick with 2D—at least to start with.

There are a lot more unsolved problems that show up when thinking about UI in 3D, possibly because the design space is larger, but certainly at least because it's less explored. And the number of ways things can go wrong, or at least the considerations that need to be made, from both programming and design standpoints, is also significantly larger in my experience.

[0] http://symbolflux.com/projects/avd


Yes, I learned gestalt theory too.

Do you happen to have any good resources on 3D visualization research?


I'd love to see some exploration of how VR could be used past the idea of "you have infinite monitor space now".


That's really all I ask. I doubt very much that any movement in virtual space is going to be as efficient as a keyboard and mouse, so just let me see everything and call it a day.

Relevant community commentary on the potential ridiculousness of VR interfaces: https://www.youtube.com/watch?v=z4FGzE4endQ


Honestly, that Community episode was the only thing I could think of during this discussion.

Unless someone thinks of something to increase information density in the VR environment by an order of magnitude (or several), it will be about this ridiculous.


I think the problem with programming is that it's essentially 1D text arranged a bit in 2D space.

So 2D programming isn't really a thing. That's probably why 3D isn't a thing either.


That's only true if you don't consider the context that the code lives in. For example, coloration in editors could be considered a visualization of additional dimensions of info (syntax, types), and autocompletes are like a branching "time-like" dimension. Besides that, code mostly tries to solve problems that can be represented and shown visually.

The need exists; how to turn that into a solution with a usable interface that I'd use for multiple hours a day is beyond me, though.

For me Bret Victor's talks point to this problem where we treat code as something we only interact with as text but it can be so much more: https://vimeo.com/36579366 https://vimeo.com/64895205


Computers have essentially 1D instructions. They execute instruction after instruction. Making a 2D programming language would cause an extra layer of abstraction between the programmer and instructions. There would be performance and memory use issues.


Yes, maybe 2D for multiple processors?


My problem with programming is that I don't even want to look at the screen, type on the keyboard or use a mouse.

Constantly having to translate my thoughts and ideas manually through my hands and eyes is annoying.


Might be an easier way to handle parallelization.


An IDE that takes advantage of depth could be kinda cool. Imagine if every level of indentation resulted in text that appeared to be further away. It might even discourage spaghetti code by making it physically uncomfortable.
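The indentation-as-depth idea could be prototyped with a trivial mapping from nesting level to a z-offset and font scale. A sketch (all names and constants are hypothetical, not from any real IDE):

```python
def depth_style(line, indent_width=4):
    """Map a line's indentation level to a z-depth and font scale,
    so deeply nested code literally recedes into the distance."""
    stripped = line.lstrip(" ")
    level = (len(line) - len(stripped)) // indent_width
    return {
        "level": level,
        "z_offset": level * 0.5,                   # distance "into" the scene per level
        "font_scale": 1.0 / (1.0 + level * 0.25),  # nested text shrinks
    }

top = depth_style("def handler():")
deep = depth_style("                return None")  # four levels in
assert top["z_offset"] == 0.0
assert deep["level"] == 4
assert deep["font_scale"] < top["font_scale"]  # spaghetti gets hard to read
```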


That would be pretty cool though. Imagine coding with VR goggles on: you move your head forward to "zoom in" to the code, an eye tracker built into the goggles moves the cursor, and you code. Using physical head gestures for navigation, if done correctly, has great potential.


Been thinking about this too.

I think this might open up new ways of interacting with systems that are more accessible than they are today. Currently, system diagrams are limited by how much info you can cram into two dimensions; without some kind of abstraction they become too information-dense to be useful.

What could we do with 3D? What if we could build and lay out different systems/ system designs in a 3D space? What if I could point to our MySQL cluster, our kubernetes cluster, our applications etc?

Only half formed thoughts but I am super excited by what avenues VR will open up for developers.


There was an interesting post that touched on that topic a while ago: https://news.ycombinator.com/item?id=24162703


I personally really don’t like staring at text in VR. There’s less angular resolution and more visual artifacting in the HMD compared to a good 72dpi screen (or even better, a retina screen) at 2 feet.


Given that the most natural space for a human to operate in is 3D (given the physical world), I believe so. I think the fact that GUIs mostly show 2D data is a product of the limitation of their display medium. For example, watching videos in 360 degrees has been possible for a while now.

I think VR might have potential (forgetting motion sickness), but one thing I think we need is the ability to see our hands. Therefore I think AR is realistically the productivity interface of tomorrow.

I think ultimately AR solves some massive problems:

1. Your "display" is wherever you need it to be. No need to lug around a laptop or monitor. It's as large or small as you need it to be, even partially transparent if required.

2. The ability to integrate information overlay on reality will be really useful, especially if natural hand tracking continues to get better. Imagine a sat nav app helping you navigate some populated area or building. Imagine an app guiding you through the shopping mall based on an optimal route computed from your shopping list, performing price comparisons for you. Imagine working on a document or piece of code and being able to walk over to your colleague and share your display with them. The possibilities are potentially endless.

3. Energy usage should be much less, as you're not wasting light making your background brighter. Suddenly your only real energy concerns are computation, and with cloud computing and internet connectivity improving, maybe you can offload most of the energy usage onto dedicated machines elsewhere.

My guess would be that within 10 years we see some key breakthrough. It's really a shame that Google backed out of their Glass project.


Disagree. Humans operate in a 3D world, but act on a 2D plane. Very few places in nature let you have one sector stacked over another (one person above another). We have 3D perception, but there are only four cardinal directions; no up and down, because we don't fly and don't dive very often. We're like Warcraft 3 / StarCraft (2) creatures.

For this reason, even if we had flying cars, we would crash all the time. Pilots get disoriented easily, and they are the cream of the crop.


It sounds like you're considering a human moving about a mostly flat world, rather than interacting with nearby objects. Also, I would argue that humans still have a natural sense for traveling along the Z-axis; after all, we used to climb and plan paths in 3D.


VR hand tracking is already shipping. You can see your hands in VR using capacitive tracking such as the Valve Index or via pose estimation on camera data such as in the Oculus Quest.


I haven't used that particular hand tracking, but I think it needs to have zero noticeable latency and no glitching effects (within reason).


Capacitive hand tracking should have sub-frame latency.


For software development, I don't quite see it, because what we are building is not really a 2D or 3D thing but something more conceptual (not sure what the right word would be). Visualizations help to understand and troubleshoot a system, but for the actual changes or additions I see more potential in NLP to communicate intent to the computer, which it then turns into code, than in different GUIs. 3D GUIs, I reckon, would be similar to visual programming languages in the sense that they are really useful for certain use cases and for interacting with a lower-level system, but don't universally replace the environments we currently have.

Where I think 3D has much more potential, is using something like google glass or full on VR headset to augment what is seen in the "real world" in professional settings.

E.g. a couple of years back I had a conversation with my sister, who is an anesthetist (at the time in the heart surgery department, if I remember correctly), about them experimenting with robotically assisted and remote surgery. If you only see what you are working on through a camera, where depth perception can be problematic, I could see it being beneficial to have real-time overlays that outline shapes or otherwise augment what you are seeing.

Also there might be interesting things in manufacturing, like showing you whether you are within tolerances during assembly, or having some algorithm/AI run over it to highlight aspects that seem out of the ordinary.

But in either case it might well be more of a distraction than actually useful for those people.


Not strictly related, but I've been wanting a 3D debugger for ML models for damn near a year now.

It seems like a slam-dunk idea. Stack images like https://battle.shawwn.com/sdb/visualizations/2020-09-16-117m... in 3D space, like a cube. Put them in order of layers. Ditto for activations.

The spatial structure is important. Right now there's nothing, not even tensorboard, that preserves spatial structure. For example, look at these two images:

https://battle.shawwn.com/sdb/visualizations/2020-09-16-117m...

https://battle.shawwn.com/sdb/visualizations/2020-09-16-117m...

You're looking at weights, followed by bias. You can see visually that the bias lines up with the weights. If you tried visualizing that with tensorboard, it'd shrink the images to squares with padding. Which would be fine if you could rearrange squares so that one is above the other, but you can't. Meaning you lose the ability to correlate visually what's going on.

Someone please make an ML doom blaster so that I can at least reset layers of my ML model by shooting them.
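The stacking idea above could be prototyped in a few lines of NumPy: keep each layer's weight matrix at its native shape and stack the layers along a new depth axis, so the spatial structure that square-with-padding resizing destroys is preserved. A sketch with toy shapes (real GPT-2-small MLP matrices would be e.g. 768x3072):

```python
import numpy as np

def stack_layers(weight_mats):
    """Stack per-layer weight matrices along a new leading 'depth' axis,
    preserving each matrix's native 2D layout instead of resizing to squares."""
    shapes = {w.shape for w in weight_mats}
    assert len(shapes) == 1, "layers must share a shape to stack cleanly"
    return np.stack(weight_mats, axis=0)  # shape: (n_layers, rows, cols)

# Toy example: 12 layers of identically shaped weights, scaled way down.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 32)) for _ in range(12)]
cube = stack_layers(layers)
assert cube.shape == (12, 8, 32)
# Slicing along axis 0 walks through layers; axes 1-2 keep spatial layout.
assert np.array_equal(cube[3], layers[3])
```

Rendering each slice of the cube at its own depth in a 3D viewer would give exactly the weights-above-bias alignment described above.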


I thought it was a pretty compelling idea when I first saw it in the original Jurassic Park movie, even if that was a metaphor for "hacking."


The "unix system" from Jurassic Park was a real application made by SGI.

https://en.wikipedia.org/wiki/Fsn_(file_manager)


All of https://en.wikipedia.org/wiki/File_manager#3D_file_managers look interesting. In particular, https://en.wikipedia.org/wiki/File_System_Visualizer reminds me of s3dfm and looks like it might be mildly useful, unlike tdfsb(6), which is a bit of fun in a large directory of images (though that takes surprisingly long to load) but has little use beyond that.


When I was 10, I imagined the internet was like a first person 3D game and the police would have to hunt hackers by running after them in the virtual world.


This reminded me of being around the same age and thinking that if a friend and I were on the same webpage, we would see each other on screens somewhere in the browser window.


> Do you think that a 3D environment can in some way be(come) a better GUI for development/operations than the 2D stuff we have now?

There's a whole new dimension to work with. Working with it is optional. In principle I'd say yes.

That assumes we've found ways to efficiently use the extra dimension, including visualizing and interacting with it.


It's all about speed and efficiency.

A 3D environment might be the most intuitive one in the long run but it will never be faster because you either are moving through 3D space just to manipulate a virtual object, or you're waving your hands around in AR/VR space to manipulate virtual objects.

While on the other side I'm simply moving my fingers and hands swiftly across a small surface to perform the same action.


We sort of already have a 3D environment: windowing systems. But every IDE I’ve used in years instead uses tabs. Perhaps something of note there.


Well, tabs are to a window what full-size windows are to a screen.

Which is to say, yes, we do seem to have difficulties with overlapping multiple surfaces slightly differing on the (virtual) depth axis.

Curiously, DOOM was more of a 2D game with 3D graphics. The altitude was irrelevant to gameplay except for falls and elevators; I mean that two objects weren't allowed at (x,y,z1) and (x,y,z2) simultaneously. Around that time I remember "Descent", and it was much less playable, chiefly because it was fully 3D and on higher levels often didn't give any "ground" reference (i.e. the z axis was not special at all).
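That 2D-with-3D-graphics point can be made concrete: Doom's actor collision check ignores z entirely, something like this simplified sketch (my approximation of the rule, not the engine's actual code):

```python
def blocks(a, b, radius=16.0):
    """Doom-style collision: two 'things' collide if their (x, y)
    footprints overlap; their z coordinates are ignored entirely."""
    ax, ay, _az = a
    bx, by, _bz = b
    return (ax - bx) ** 2 + (ay - by) ** 2 < (2 * radius) ** 2

imp = (100.0, 100.0, 0.0)          # standing on the floor
cacodemon = (110.0, 100.0, 128.0)  # floating far overhead
assert blocks(imp, cacodemon)      # they still collide: z never enters the test
```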


I think being able to stack things that would normally be a submenu might be interesting. It would need something better than a mouse, though.


Yes, for three dimensional data visualizations and especially three dimensional visualization that can be seen changing over time.


There are many 2D viz tools for IaC that fall short, because a cloud resource can have too many connections to other resources.

That is, too many for 2D, but in 3D the connections wouldn't even cross each other.
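That intuition is actually a theorem: every graph embeds in 3D with straight edges and no crossings, e.g. by placing nodes on the moment curve (t, t², t³), where no four points are coplanar. A small sketch (function names are mine):

```python
import numpy as np

def moment_curve(t):
    """Place a node at parameter t on the moment curve (t, t^2, t^3).
    No four such points are coplanar, so no two edges can ever cross."""
    return np.array([t, t * t, t ** 3], dtype=float)

def segments_can_cross(p1, p2, p3, p4):
    """Two 3D segments can only intersect if their four endpoints are
    coplanar, i.e. the scalar triple product of the spanning vectors is 0."""
    triple = np.dot(np.cross(p2 - p1, p3 - p1), p4 - p1)
    return abs(triple) < 1e-9

# Four resources on the moment curve; edges 1-2 and 3-4 are exactly the
# pair that would cross if drawn with straight lines on a 2D circle.
nodes = [moment_curve(t) for t in (1, 2, 3, 4)]
assert not segments_can_cross(nodes[0], nodes[1], nodes[2], nodes[3])
```

Whether a human can actually read such a layout is a separate question, but the geometry is on 3D's side here.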


Depends on your problem: programming/teaching a robotic arm in all dimensions is often faster in a 3D environment or VR, but programming that arm in just two dimensions is probably faster in normal code, aka 2D.

It's a bit like the discussion of GUIs vs. terminals, even if both of them are 2D.


Ooooh. Will the future have Kube Doom Eternal? Kube Crysis 3? Kube (Mortal) Kombat XI?

Infrastructure As Games to make it all fun again.


I'm not sure those are ever going to be open sourced unfortunately.


Other Doom-related news: An Amiga Doom clone is coming along nicely after years of Amiga users wanting more Doom :-) (The video has some interesting technical details)

https://www.youtube.com/watch?v=kgEpnRxx5Fc


Doom is playable on an Amiga with a 68060, or a 68040 with an accelerator board.


Thinking out loud: would anyone actually want to do work in this kind of environment?

I see attempts at remote work like Sococo https://www.sococo.com/

and I'm thinking: maybe... but there's not much to do in those environments.

On the other hand, I'm not sure I'd want to be swarmed by antagonistic kubernetes pods.


This kind of thing is what makes me love the hacker community so much. There is probably no conceivable practical use for this (at least from what I can see right now), but it’s fun, and was a challenge, and someone just went ahead and did it.


We took Show HN out of the title because the project author appears to be someone else. Show HN is for sharing your own work: https://news.ycombinator.com/showhn.html.

If you're storax and I got that wrong, please let us know at hn@ycombinator.com and we'll put it back.


Sorry if this is very minor, but if possible, please also take out an extra ‘o’ from “Dooom” in the title.


Thanks! Fixed.


Would be great if this could be abstracted out of Kubernetes. Like killing system processes in Doom! Chrome is hanging again... let me fire up Doom and shoot its face; what a stress relief!


It is :-) From the Readme:

> This is a fork of the excellent gideonred/dockerdoomd using a slightly modified Doom, forked from https://github.com/gideonred/dockerdoom, which was forked from psdoom.

Here you go: http://psdoom.sourceforge.net/screenshots.html
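That Chrome-shooting wish is close to what psdoom already does. A minimal, hypothetical sketch of the mechanic in Python (names and flow are mine, not psdoom's C code):

```python
import os
import signal

def list_monsters():
    """Each running process becomes a 'monster'. psdoom read the process
    table inside the engine; on Linux, /proc gives us the same list."""
    if not os.path.isdir("/proc"):
        return []  # non-Linux fallback: nothing to hunt
    return sorted(int(d) for d in os.listdir("/proc") if d.isdigit())

def shoot(pid, send=os.kill):
    """'Shooting' a monster sends SIGTERM to the process behind it.
    `send` is injectable so the wiring can be tested without casualties."""
    send(pid, signal.SIGTERM)
    return pid

# Dry run: record the signal instead of delivering it.
log = []
shoot(4242, send=lambda pid, sig: log.append((pid, sig)))
assert log == [(4242, signal.SIGTERM)]
```

psdoom also made processes owned by root into tougher monsters, which falls out naturally once each monster carries its pid and owner.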


How embarrassing that I missed that; thanks!


This was an actual thing and probably the original inspiration behind this article: https://www.cs.unm.edu/~dlchao/flake/doom/


This was done many, many years ago; it's called psdoom.


I did something similar but with Minecraft. Your k8s resources become animals (pods are pigs, services are chickens, etc.) and if you kill them they get killed on the cluster.

Check it out here : https://medium.com/@eric.jadi/minecraft-as-a-k8s-admin-tool-...


This reminded me of a comment from a thread ~1 week ago:

> I've slaughtered more machines than any of you.

https://news.ycombinator.com/item?id=24603393


Now we need something like Left 4 Dead for npm packages :) /s


Reminds me of Docker Doom: https://github.com/GideonRed/dockerdoomd


I'm not into DevOps, but that might have just changed.


This is awesome. I like the idea of virtual conferences in GTA; there is a lot of potential for turning work into better experiences.


As if this is not enough, star count in the repo is currently 666! Coincidence?


should add grenades that drain nodes


No grenades in Doom. Rocket launcher or BFG, though...


Call it DevOPS


Why not DeadOps?


so cool



