I think the problem can equally be framed as: we can't invest in something more abstract than "plain text" at this time. When we try, the result gets downgraded to a plain-text projection of the syntax.

The plain text encoding itself is the product of incremental, path-dependent development, from Morse code signals to Unicode, resulting in a "Gigantic Lookup Table" (GLUT, my coining) approach to symbolic comprehension. The GLUT assumption is useful - lots of features can "just work" by knowing that a particular bit pattern is always a particular symbol.
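A rough illustration of the GLUT idea in Python (the byte values are just an example): a given bit pattern always decodes to the same symbol, no context required.

    # The same bytes always decode to the same symbols - that's the
    # "gigantic lookup table" that lets tooling "just work".
    data = bytes([0x48, 0x69, 0xE2, 0x9C, 0x93])  # UTF-8 bytes
    print(data.decode("utf-8"))  # Hi✓
    print(hex(ord("✓")))         # 0x2713: one code point, one symbol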

If we push up the abstraction level, we get a different set of symbols that are better suited to the app, but no equivalent GLUT tooling. Instead we usually get parsing of plain text as a transport - CSV parsing, for example. It is sloppy; it is also good enough.
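To make that concrete, here's a minimal Python sketch (the data is made up) of structure riding a plain-text transport - the fields only come back out because a parser re-derives them, quoting rules and all:

    import csv
    import io

    # Structured data pushed through a plain-text transport: the symbols
    # we care about (fields) are recovered by parsing, not by a GLUT-style
    # one-to-one symbol mapping.
    raw = 'name,notes\nAda,"fields with commas, and ""quotes"""\n'

    for row in csv.reader(io.StringIO(raw)):
        print(row)
    # ['name', 'notes']
    # ['Ada', 'fields with commas, and "quotes"']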

Edit: XML is also a key example. It goes out of its way to respect the text-transport approach. There are dedicated XML editors, but people want to edit it as plain text, and they can't quite get there because funny-business with character escaping gets in the way, tacking a bunch of ampersands and semicolons onto the symbols they want to edit. Thus we have ended up with "the CSV of hypertext documents", Markdown.
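Here's what that funny-business looks like in a small Python sketch using the standard library:

    from xml.sax.saxutils import escape, unescape

    text = 'Fish & Chips <deluxe>'
    encoded = escape(text)
    print(encoded)            # Fish &amp; Chips &lt;deluxe&gt;
    print(unescape(encoded))  # round-trips back to the original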


You can emulate the Tandy in a web browser, e.g. https://dosee.link/#emulator

So if you evaluate it by hardware, it's true that the phone isn't giving you the same I/O capability. But the application software is there: there are far more apps for a phone, and you can access the old ones to some degree too.

If you need an actually hackable PC equivalent, we have all kinds of boards and configurations, from microcontrollers to RasPi-style computers through FPGA boards. Any of them costs a tiny fraction of what the old desktops did.


Marshall Vandruff, one of the teachers on the popular art education channel "Proko", spoke at length about working with airbrush in discussing his illustration career:

https://youtu.be/8qDI8NfCyeg

To hear him tell it, it was not particularly glamorous, and the hours of fastidious airbrushing it took to get huge, smooth gradient backgrounds were an RSI-inducer.

I'm pretty sure we can do a better digital emulation of an airbrush than what's currently in paint programs; it just needs more of the actual physics and pigments to be modelled. We've gotten a bit stuck in the RGB raster graphics paradigm, and only a few programs are really doing the work to break away from it.


Honestly, you can get like 80% of the way there just by doing a lot of smooth gradients and putting a little noise over it; it becomes trivial once you stop thinking in terms of "manipulating a virtual paint-depositing tool" and start thinking about what it looks like on the illustration board. Turning a gradient into an emulation of a more deliberately uneven, splattery, and possibly drippy application of paint is a bit more complex, though.
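A rough sketch of the gradient-plus-noise trick in Python with numpy (the sizes and noise level are arbitrary choices):

    import numpy as np

    # Smooth vertical gradient across a 256x256 grayscale "board"...
    h, w = 256, 256
    gradient = np.tile(np.linspace(0.2, 0.9, h)[:, None], (1, w))

    # ...with a little noise sprinkled on top to break up the banding.
    noise = np.random.default_rng(0).normal(0.0, 0.02, size=(h, w))
    image = np.clip(gradient + noise, 0.0, 1.0)

    # e.g. save with Pillow: Image.fromarray((image * 255).astype("uint8"))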


I don't disagree, but I think the optimization potential tends to be limited to trivial automations in practice. The idioms we use to code at a higher level are mostly wrappers on something we already know how to do at a lower level with a macro. The compiler still helps, because it provides guard rails and puts you on the path to success as you build more complex algorithms, but you have to get into very specific niches to achieve both "faster than C" and "easier than coding the idiom by hand".

As ever, hardware is an underappreciated factor. We have make-C-go-fast boxes today. That's what the engineering effort has been thrown towards. They could provide other capabilities that favor different models of computing, but they don't, because the path-dependency effect is very strong. The major exceptions come from including alternate modalities like GPUs, hardware DSPs, and FPGAs in this picture, where the basic aim of the programming model is different.


> As ever, hardware is an underappreciated factor. We have make-C-go-fast boxes today.

That is a very common misconception. There have been numerous attempts at architectures that cater to higher-level languages. Java-bytecode-interpreting CPUs have been tried and were slower than contemporary "normal" CPUs at executing bytecode; phones and smartphones were supposed to be a hot market for those, but it didn't fly, and native bytecode execution is dead nowadays. Object orientation in CPUs has been tried, as in Intel's iAPX 432. Type-tagged pointers and typed memory have been tried. HLLCAs were all the rage for some time ( https://en.wikipedia.org/wiki/High-level_language_computer_a... ). Early CISC CPUs had linked lists as a fundamental data type. Lisp machines did a ton of stuff in hardware: GC, types, tagging, polymorphism. None of it worked out; more primitive hardware with more complex interpreters and compilers always won. Not because C was all the rage at the time, but because high-level features in hardware are slow and inflexible.

What came of it was the realization that a CPU must be fast and flexible, not featureful. That's why we got RISC, and why CISC processors like x86 internally translate down to RISC-like microcode. The only things that added a little more complexity were SIMD and streaming architectures, and only in the sense that you could do more of the same in parallel - not that HLL constructs were directly implemented in the CPU.


Memory tagging is making a comeback, and "fast and flexible" primitives/APIs don't account for the ridiculous complexity that goes into a modern CPU - complexity that does indirectly help with linked lists, object orientation, and so on.

Also, C is in no way special, not even particularly low-level.


It's a question of verisimilitude, not realism: we are looking for experiences that we can believe in.

Firearms in games tend to be less real because they prioritize making you believe in the power fantasy of a gun: it looks and sounds fearsome, and enables the bearer to dispense death. Running and jumping, likewise: there's no need to explain in an empirical sense how or why Mario jumps extremely high - it's an aesthetic choice that highlights the thing the game is about.

We tend to get stuck on portrayals of physics, camera, and photorealistic rendering in games because in those instances, we have tools that are good at systematizing verisimilitude: the car can behave more like a real car by fastidiously emulating everything we know about real cars. Those simulations can be made comparable to ones used in industry.

But many aspects of games can't take that approach and have to be cartooned to some less grounded approximation: the way in which human figures move and talk, or how a national economy works, or the pacing of combat.

As makers of designed products, we're meeting players in the middle by making choices that cohere with the rest of the game's goals while staying believable to their expectations. There are lots of ways to achieve verisimilitude while destroying the overall structure of the game, and that's a classic newbie-designer pitfall: "do X but with more detail".


Tulip CC is the modern C64. The software is mostly fleshed out for audio apps right now, but feature for feature it's doing the same kinds of things, with the same eye to budget:

https://github.com/shorepine/tulipcc


The difference is that in the exploitative AAA shop, the company pays salary and benefits. In the exploitative indie shop, "something" will happen that means you are also unpaid and have no recourse because the company either doesn't actually have any money or has pulled a disappearing act and made themselves impossible to reach.

Basically, the reason to sign up for tiny companies with no reputation is to give yourself project experience. But it won't necessarily result in deeper wisdom about the process. It could just mean the boss is overconfident.

Going it alone, the obvious alternative, tends to whip game developers into a self-exploiting mode where they crunch really hard on features or assets, when they actually need to step back, make some painful cuts that throw out months of effort, and refocus their design to have better synergy. The push and pull of a team tends to mitigate those outcomes through earlier interventions, but without financing it's very hard to keep one going.

So, yes, the big companies do have advantages. The upside of the indie space is that it is more in line with the rest of the arts than a corporate career path - it allows the process to be something other than a production built off the back of a market survey. But that means a prerequisite is exposure to the arts and to processes that aren't strictly industrial design. This isn't a well-developed thing in the indie scene since the early influences they are working from all tend to be in the industrial design motif: addictive arcade games, sprawling epic RPGs, etc. Starting from these kinds of premises tends to scope the project incorrectly for the available skills, while simultaneously forgoing alternatives that no company would consider.


Peri's approach is built around his Hollywood production experience: you don't have to be the real thing, you have to be believably close to the real thing. IOW, verisimilitude.

And that's exciting, that's the path to a lot of creative answers. It's a good response to the shareholder-centric corporate idiom: don't look at the balance sheet as the game. Look at the people and the assets as elements used to stage a show, and put on a show that people want to believe in. And it's a good answer for the AI-saturated landscape we've entered: make products that are very similar to the best ones of the past.


Adding noise to the signal here, but "Don't look at the balance sheet as the game. Look at the people and the assets as elements used to stage a show, and put on a show that people want to believe in." is a quote I want to remember forever.


Typically it's done through source code generation or a runtime interpreter - state machine systems implementing a "DSL -> source code" mechanism have been around for nearly as long as high-level languages, and by taking this approach they have a lot of freedom to include compiler techniques of their choosing. If dynamism is called for, then the diagram is typically kept in memory and interpreted at runtime.
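A minimal Python sketch of the interpreted variant - the "diagram" is just data, and the interpreter walks it (the state and event names are made up for illustration):

    # The transition table is the in-memory form of the diagram.
    TRANSITIONS = {
        ("idle", "start"):    "running",
        ("running", "pause"): "paused",
        ("paused", "start"):  "running",
        ("running", "stop"):  "idle",
    }

    def step(state, event):
        # Look up the next state; unknown pairs are rejected at runtime,
        # which is exactly the check a type-driven approach would move
        # to compile time.
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"no transition from {state!r} on {event!r}")

    state = "idle"
    for event in ("start", "pause", "start", "stop"):
        state = step(state, event)
    print(state)  # idle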

Doing it through types is intellectually interesting, and it makes the result more integrated into the edit-compile loop instead of requiring a step in the build process or an invocation of an interpreter, but it might not change the practical state of the art.


Yes, but their expressiveness may vary. An important role of polystate is code reuse: it can express more complex states and still be type-safe.


Precisely the opposite. Rent-seekers eagerly invite comparison for purposes of valuation, and push the lens of art towards technical and political measurements. When a work is incomparable in the way it achieves verisimilitude, it escapes this system.


Agreed here. Every time my kids bring up tier-list rankings, I have to explain this to them again.


You are basically describing the point where the system is so broken or rent-captured that the only way out is through the bottom. Sure, but that doesn't automatically make it good.

