crq-yml's comments | Hacker News

CD and other formats create trade-offs vs MIDI event sequences - it's a simple playback method offering a lot of fidelity, but in exchange you're tied to one of: "one track at a time and the CD spins up in between" (Red Book CD), cueing uncompressed sampled tracks (feasible but memory intensive), or cueing one or more lossy-compressed streams (which added performance or hardware-specific considerations at the time, and in many formats also limits your ability to seek to a particular point during playback or do fine-grained alterations with DSP). So as a dynamic music system it tends to lend itself to brief "stings" like the Half-Life 1 soundtrack, or simple explore/combat loops that crossfade or overlay on each other. Tempo and key changes have been off the table, at least until recently (and even then, it really impacts sound quality). DJ software offers the best examples of what can be done when combining prerecorded material live, and there are some characteristic things about how DJs perform transitions and mashups which are musically compelling but won't work everywhere for all material.

MIDI isn't really that much better, though - it's a compatibility-centric protocol, so it doesn't get at the heart of the issue with dynamic audio of "how do I coordinate this". All it is responsible for is an abstract "channel, patch number, event" system, leaving the details involved in coordinating multiple MIDI sequences and triggering appropriate sounds to be worked out in implementation. An implementation that does everything a DAW does with MIDI sequences has to also implement all the DSP effects and configuration surfaces, which is out of scope for most projects, although FMOD does enable something close to that.
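
To make that concrete, here is a minimal Python sketch (made-up names, not any real middleware's API) of the kind of coordination MIDI leaves to the implementation: the protocol only carries channel/patch/event data, so deciding how multiple sequences interact at run time is entirely your problem.

```python
import heapq

class Sequence:
    def __init__(self, name, events):
        # events: (tick, channel, message) tuples, e.g. (0, 1, ("note_on", 60, 80))
        self.name = name
        self.events = sorted(events)

class Coordinator:
    """Merges several sequences into one time-ordered stream; muting, layering,
    or swapping a sequence is how 'dynamic' behaviour gets expressed."""
    def __init__(self):
        self.active = {}                     # name -> Sequence

    def play(self, seq):
        self.active[seq.name] = seq

    def stop(self, name):
        self.active.pop(name, None)

    def merged_events(self):
        # Interleave all active sequences by tick; a real system would also
        # handle tempo maps, looping, and patch assignment per channel.
        streams = [s.events for s in self.active.values()]
        return list(heapq.merge(*streams))

explore = Sequence("explore", [(0, 1, ("program_change", 48)), (0, 1, ("note_on", 60, 80))])
combat  = Sequence("combat",  [(0, 2, ("program_change", 30)), (480, 2, ("note_on", 67, 120))])

c = Coordinator()
c.play(explore)
c.play(combat)      # layering combat cues over the explore loop
print(c.merged_events())
```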

I think the best approach for exploring dynamic and interactive audio right now is really to make use of systems that allow for live coding - Pure Data, SuperCollider, etc. These untangle the principal assumption of "either audio tracks or event sequences" and allow choice, making it more straightforward to coordinate everything centrally, do some synthesis or processing, do some sequencing, and adopt novel methods of notation. The downside is that these are big runtimes with a lot of deployment footprint, so they aren't something that people just drop into game engines.


We're on the cusp of an indie software boom.

The reasons why exist in Taleb's antifragility thesis: the antifragile will gain from disorder.

The nature of the tech industry, in the decades roughly since the Cold War ended (which put to rest a certain pattern of tech focused on the military-industrial complex and moved Silicon Valley into its leadership position), has promoted fragility along several of Taleb's dimensions: it aims to scale, it aims to centralize, and it aims to intervene. The pinnacle achievement of this trend is probably the iPhone, a convergent do-everything device that promises to change your life.

But it's axiomatic (in Taleb's view, which I will defer to since his arguments are good enough for me) that this won't last, and with talk of "the end of the US empire" and a broader pessimism in tech, there seems to be popular agreement that we are done with the scale narrative. AI is a last holdout for that narrative since it justifies further semiconductor investment, stokes national security fears of an "AI race", and so on - it appeals to forces with big pocketbooks that are also big in scale, and themselves in a position of fragility. But eventually they will tap out too, for the same reasons. Whether that's a "next year" thing or a "twenty years" thing is hard to predict - the fall of the USSR was similarly hard to predict.

The things that are antifragile within software are too abstract to intentionally develop within a codebase; they are more a measure of philosophy and of how one models the world with data. CollapseOS is at the extreme end of this, where the viewpoint is bacterial - "all our computing infrastructure is doomed if it is not maintainable by solo operators" - but there are intermediate points along the spectrum where the kinds of software that are needed are in the realm of a plugin to a large app or an extension to an existing framework, and development intentionally aims not to break outside that scope. That thesis agrees with the "small niches" view of things.

Since we have succeeded in putting computers into every place we can think of, many of the things we need to do with them already have a conventional "way of doing it", but those can exist as a standard and don't need to be formalized behind a platform and business model - and LLM stuff does actually have some role in serving that by being an 80% translator of intent, while a technical specialist has work in adding finish and polish. And that side of things agrees with going deeper into the stack.

I believe one pressing issue with this transition is the inclination, brought in from the corporate dev environment, to create a generalized admixture of higher- and lower-level coding - to do everything as a JS app with WASM bits that are "rewritten in Rust" - when what indie software needs is more Turbo Pascals, HyperCards, and Visual Basics: environments that are tuned to be complete within themselves and towards the kinds of apps being written, while being "compatible enough" to deploy to a variety of end-user systems.


It's attempting to reconcile the stigmergic ideas of the Binding Chaos books (which, though not called out as such, are very present in the article) with the "Dark Enlightenment", and therefore it must labor to curse the epistemic aspects of swarms and characterize them negatively with terms like "society of average".

It may actually be intended as such; however, I think the effect is more like the esotericism of Plato's Republic - the text says one thing, but a careful reader is likely to come to opposite conclusions.


I was imagining this kind of product the other day, in the context of building content creation pipelines (I was looking into extending Blender to call out to ffmpeg and similar tools), and now here's a full example of where it could go - I'm on my phone, but I'll dig in later.

My thought is that flow graphs are a great fit for the AI wave: the weakness of the graphs is mostly around "random access" problems that need to model a broad technical vocabulary, while the weakness of the AI is that it doesn't know the boundaries of the problem space and generates endless text slop instead of being used to generate a configuration or remake a common algorithm with a small tweak. Tying the two together into a system that looks like "human oversight over automated coding details" should be a major step in the right direction.
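
As a rough sketch of what that split could look like (hypothetical names in Python, not Blender's or any product's actual API): the graph structure stays human-authored and inspectable, and the generated part is confined to filling in a node's parameters against a declared schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str        # e.g. "ffmpeg_encode", "blender_render" (illustrative kinds)
    schema: dict     # parameter name -> expected type
    params: dict = field(default_factory=dict)

    def accept_params(self, proposed: dict) -> list[str]:
        """Validate a machine-proposed configuration; reject anything outside
        the schema so free-form text slop can't leak into the pipeline."""
        errors = []
        for key, value in proposed.items():
            if key not in self.schema:
                errors.append(f"unknown parameter: {key}")
            elif not isinstance(value, self.schema[key]):
                errors.append(f"{key}: expected {self.schema[key].__name__}")
        if not errors:
            self.params = proposed
        return errors

encode = Node("ffmpeg_encode", schema={"crf": int, "preset": str})
print(encode.accept_params({"crf": 23, "preset": "slow"}))   # [] -> accepted
print(encode.accept_params({"crf": "high", "magic": True}))  # rejected with reasons
```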


The underlying issue with game engine coding is that the problem is shaped in this way:

* Everything should be random access (because you want to have novel rulesets and interactions)

* It should also be fast to iterate over per frame (since it's real-time)

* It should have some degree of late-binding so that you can reuse behaviors and assets and plug them together in various ways

* There are no ideal data structures to fulfill all of this across all types of scene, so you start hacking away at something good enough with what you have

* Pretty soon you have some notion of queries and optional caching and memory layouts to make specific iterations easier. Also it all changes when the hardware does.

* Congratulations, you are now the maintainer of a bespoke database engine

You can succeed at automating parts of it, but note that parent said "oftentimes", not "always". It's a treadmill of whack-a-mole engineering, just like every other optimizing compiler; the problem never fully generalizes into a right answer for all scenarios. And realistically, gamedevs probably haven't come close to maxing out what is possible in a systems-level sense of things since the '90s. Instead we have a few key algorithms that go really fast and then a muddle of glue for the rest of it.
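
As a toy illustration of that last step (a Python sketch, not any real engine's design): even a "simple" per-frame iteration over entities pulls in query semantics and cache invalidation almost immediately, which is the small bespoke database taking shape.

```python
class World:
    def __init__(self):
        self.components = {}      # component name -> {entity_id: data}
        self._query_cache = {}    # tuple of component names -> list of entity ids

    def add(self, entity, name, data):
        self.components.setdefault(name, {})[entity] = data
        self._query_cache.clear()            # any structural write invalidates cached queries

    def query(self, *names):
        """Entities having all the named components; cached because the same
        query runs every frame and rebuilding it is the slow part."""
        if names not in self._query_cache:
            sets = [set(self.components.get(n, {})) for n in names]
            ids = set.intersection(*sets) if sets else set()
            self._query_cache[names] = sorted(ids)
        return self._query_cache[names]

w = World()
w.add(1, "position", (0, 0)); w.add(1, "velocity", (1, 0))
w.add(2, "position", (5, 5))
for e in w.query("position", "velocity"):    # the per-frame hot loop
    x, y = w.components["position"][e]
    dx, dy = w.components["velocity"][e]
    w.components["position"][e] = (x + dx, y + dy)
```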


It's easier to appreciate math when you are disinterested in the results or applications, because the nature of academic topics near the core grouping of math/philosophy/empiricism is that they are discovered with a lot of meandering at first, and then sometime down the line they get repurposed into a direct application that can be learned by rote. School tends to instruct in some of the most directly applicable stuff first - the "three Rs" plus some civics and training aligned with national goals. And that means that school predominantly teaches associations between math and rote methods, to the disgruntlement of many mathematicians. The "meandering" part is left to self-selected professionals, so it doesn't get explored in much depth.

So I think a good motive for math study is really in games and puzzles, where the questions posed aren't about win/lose or right/wrong, but about exploring the scenario further and clarifying the constraints or finding an interesting new framing. Martin Gardner wrote a long-running column and a few books in this vein which are still highly regarded decades later.


It's a formal model that we can opt into surfacing, or subsume into convenient pre-packaged idioms. For engineering purposes you want to be aware of both.

It's way easier to make sense of why it's relevant to write towards a formalism when you are working in assembly code and what is near at hand is load and store, push and pop, compare and jump.

Likewise, if the code you are writing is actually concurrent in nature (such as the state machines written for video games, where execution is handed off cooperatively across the game's various entities to produce state changes over time), most prepackaged idioms are insufficient or address the wrong issue. Using a while loop and function calls for this assumes you can hand something off to the compiler, that it produces comparisons, jumps, and stack manipulations, and that that's what you want - but in a concurrent environment, your concerns shift towards how to "synchronize, pause and resume" computations and effects, which is a much different way of thinking about control flow, and one that makes the formal model relevant again.
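
A minimal sketch of that "pause and resume" shape, using Python generators as stand-in coroutines (illustrative only, not a claim about how any particular engine structures this): each entity's behaviour reads as an ordinary sequence of steps, but control is handed back to a scheduler at every yield instead of being buried in the compiler's jumps and stack frames.

```python
def patrol(name, waypoints):
    while True:                        # the entity's whole lifetime is one routine
        for wp in waypoints:
            print(f"{name}: walking to {wp}")
            yield                      # pause here; resume on the next frame

def blink(name, period):
    frame = 0
    while True:
        if frame % period == 0:
            print(f"{name}: blink")
        frame += 1
        yield

def run_frames(entities, frames):
    """The scheduler: one cooperative step per entity per frame."""
    for _ in range(frames):
        for e in entities:
            next(e)

run_frames([patrol("guard", ["door", "hall"]), blink("light", 2)], frames=4)
```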


What I've noticed (as someone relatively active around retro hardware communities) is that the users sort into demographics with rather conflicting goals:

* I want to run Linux, or CP/M, or a particular version of BASIC, or something else with existing software.

* I want to recompile existing C code to the new hardware.

* I want to directly control the hardware with the most minimal kind of OS layer, and probably using assembly, not C.

* The challenges of the hardware are part of my motives for using this platform.

* I want to work within a certain retro aesthetic, and I don't mind adding more hardware or leaning on other people's code to get there.

So in practice, users tend to arrive in waves according to what developments have happened on that platform previously: neo-retro micro designs released in the past few years, like the X16, Agon, or Neo6502, first attract people doing the essential hardware bring-up tasks. Then they settle into a cycle of incremental porting and toolchain construction. Gradually the platform becomes understood well enough to do some decent applications. The whole while, everything you want to do on the device could be accomplished more readily on a commodity Linux box, but part of the appeal is in having control over the details.

When we use computers to be productive, we're usually giving up a lot of the details to software packages and dependencies that are poorly understood by the larger software world and have become the de facto thing everyone uses simply because no equal to them exists. In the time when the retro stuff was actually new, there was a lot of anticipation for those tasks becoming possible someday, but they weren't yet, and that characterized computing as unrealized potential instead of a dull, opaque reality.


It's a strategy to redefine the doctrine of information warfare on the public Internet from maneuver (leveraged and coordinated usage of resources to create relatively greater effects) towards attrition (resources are poured in indiscriminately until one side capitulates).

Individual humans don't care about a proof-of-work challenge if the information is valuable to them - many web sites already load slowly through a combination of poor coding and spyware ad-tech. But companies care, because it turns scraping from a modest cost of doing business into a money pit.

In the earlier periods of the web, scraping wasn't necessarily adversarial, because search engines and aggregators were serving some public good. In the AI era it has become belligerent - a form of raiding that repackages credit. Proof of work as a deterrent was proposed to fight spam decades ago (Hashcash), but it's only now that it really needs to be weaponized.
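
For illustration, here is a toy Hashcash-style stamp in Python (the difficulty numbers are made up; real deployments tune them differently). It shows why the cost lands asymmetrically: verification is a single hash, minting is many thousands of hashes, which is negligible for a human loading one page and ruinous at crawler scale.

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int = 20) -> int:
    """Find a nonce whose SHA-256 has `bits` leading zero bits - the expensive part."""
    target = 1 << (256 - bits)
    for nonce in count():
        digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(resource: str, nonce: int, bits: int = 20) -> bool:
    """One hash to check the stamp - the cheap part the server does."""
    digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = mint("example.com/article", bits=16)          # low difficulty so the demo runs quickly
print(verify("example.com/article", nonce, bits=16))  # True
```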


I'd nominate Ugandan president Yoweri Museveni. As authoritarian "president for life" since 1986, he's demonstrated some savvy statecraft amid Africa's resource wars and ethnic violence, making the country a point of stability and economic growth on the continent.

(Of course, he's got plenty of negatives on the record too. But I think in the game of "Great Man History", he's already left a big legacy.)

