crq-yml's comments

Over generations we'll get to a place where the software tools matter relatively more, but as of right now it is almost a certainty that the quality of the software you write benefits from writing less code, and from drafting relatively more of it on paper based on carefully reading sources and documentation - not from any other kind of tools investment.

The new stuff is often fast, and many jobs require it as a way of keeping up with standard practice and as a hiring filter, but it's a supplement, not a foundational tool for thought. It can't be all that important when it's constantly shifting.


There's a quiet rumbling left over, with more background development going on, and trading is still taking place. The major impasse is really "what's the framing for this invention now" - the invention is still interesting, but it isn't a good technology until you've established a motive to depend on it at scale.

The floodgates are opening to do things with blockchains again in the US, although the motive for that is "more scams".


It feels like they are still searching for demand and a use case. Having built a lot of tech in the space, including on IPFS, I find that all the things it's good at are just more readily accepted in web2.


Yesterday, the operators of Artfight, the yearly art trading game, reported a breach that affected perhaps a few hundred people (an XSS attack through a news post comment that scooped up autofill information). Users panicked, in large part because they almost never hear about the breaches that affect them.

I believe the industry should work towards mandating communication about these incidents instead of maintaining the charade that nothing is happening. That will have to come through either legislation or labor action.


Artfight seems to be US-based, but it has a GDPR-aware privacy policy. If they follow the GDPR, they are already legally required to do this - Art. 33 covers notifying the supervisory authority, and Art. 34 covers communicating the breach to affected users.


The pipeline bottlenecks all changed in favor of brute-forcing the things that BSP had been solving with an elegant precomputed data structure - what BSP was extremely good at was eliminating overdraw and getting to the point where the scene rendered exactly the number of pixels that were needed and no more. It's optimized around small, low-detail scenes that carefully manage occlusion.

More memory, bandwidth and cache mean that more of your solutions are per-pixel instead of per-vertex, and you can tolerate overdraw if it means you get to have higher-polycount models. Likewise, the environment collision that leveraged the BSP process reduced the number of tests against walls, but it introduced edge cases and hindered general-purpose physics features. Scaling physics leads in the direction of keeping the detailed collision tests at their original per-poly detail, but using sorting or tree structures to get a broadphase that filters the majority of tests against AABB or sphere bounds, as in the sketch below.
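A minimal sketch of that broadphase idea, in Python for readability (the AABB and body types here are hypothetical, not any particular engine's API): sort bodies along one axis, then run the expensive narrow-phase tests only on pairs whose bounds actually overlap.

    from dataclasses import dataclass

    @dataclass
    class AABB:
        min_x: float; min_y: float; min_z: float
        max_x: float; max_y: float; max_z: float

    def overlaps(a, b):
        # Axis-aligned boxes overlap only if their intervals overlap on every axis.
        return (a.min_x <= b.max_x and a.max_x >= b.min_x and
                a.min_y <= b.max_y and a.max_y >= b.min_y and
                a.min_z <= b.max_z and a.max_z >= b.min_z)

    def broadphase_pairs(bodies):
        # Sweep-and-prune on the x axis: return candidate pairs whose bounds
        # overlap; only these go on to the per-poly narrow phase.
        # Each body is assumed (hypothetically) to carry a .bounds AABB.
        bodies = sorted(bodies, key=lambda b: b.bounds.min_x)
        pairs = []
        for i, a in enumerate(bodies):
            for b in bodies[i + 1:]:
                if b.bounds.min_x > a.bounds.max_x:
                    break  # everything after b starts even further along x
                if overlaps(a.bounds, b.bounds):
                    pairs.append((a, b))
        return pairs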

On a Wii (the original Wii) 3D action game I helped ship, we just rendered the whole level at once, using only the most basic frustum culling; the hardware did the lifting, mostly through the z-buffer.
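For the curious, "most basic frustum culling" amounts to something like this sketch (a hypothetical plane representation, not the shipped code): test each object's bounding sphere against the six frustum planes and skip the draw call if it lies entirely outside any of them.

    def sphere_visible(center, radius, planes):
        # planes: list of (normal, d) with normals pointing into the frustum,
        # so dot(normal, p) + d >= 0 for points inside that half-space.
        for normal, d in planes:
            dist = sum(n * c for n, c in zip(normal, center)) + d
            if dist < -radius:
                return False   # entirely outside this plane: cull the object
        return True            # inside or straddling: submit the draw call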


Adding to this, the nice thing about the BSP partitioning was that you could also leverage it to put off-screen monsters to sleep or reduce their tick rate. It was helpful for optimizing AI as well as rendering. DOOM not only had some of the first pseudo-3D but also huge numbers of enemies... something that a lot of other games still cut down on.
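A rough sketch of that throttling idea (invented names, and DOOM's actual logic differs): entities whose BSP leaf isn't in the potentially visible set tick less often, or not at all.

    def update_monsters(monsters, visible_leaves, frame):
        # m.leaf, m.think() and m.think_cheap() are hypothetical here.
        for m in monsters:
            if m.leaf in visible_leaves:
                m.think()             # full-rate AI while potentially visible
            elif frame % 8 == 0:
                m.think_cheap()       # coarse, infrequent update while asleep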


I occasionally dally with XML in hobby code as a document source format, but I think what drives me away in the end is silly stuff with syntax. It's a big spec well beyond the angle-bracket stuff; it wants to cover all the bases, do security right and do character encoding right, which means that "plain text editing" in it is some really unintuitive stuff: you can type in something like this paragraph and it might be parsed in a compatibility mode, but it won't be valid. As an interchange format, or something loaded into application software tailored for it, it has more legs - and LLMs definitely wouldn't have any trouble making sense of it, which is good. A lot of yesteryear's challenges came from programmers short on time and eager to hack in features, taking a heavily structured spec and wielding it like a blunt instrument. "XML? Sure, I can write a regex for that." Repeat that across three different programs and three different authors and you have a mess.

There is a format I've hit upon that actually does get at what I want for myself, and that's BBCode. It's a great source format for a lot of stuff - still basically a bracketed tag syntax, but with the right amount of structure and flexibility to serve as a general-purpose frontend syntax. Early implementations were of the "write a regex for that" variety, but after decades of battle-testing there are more graceful parsers around these days as well.
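As a sketch of what "more graceful than one regex" can mean (hypothetical code, not from any particular forum engine): tokenize tags versus text first, then walk the tokens with a stack, so an unbalanced tag degrades gracefully instead of mangling the whole post.

    import re

    TAG = re.compile(r"\[(/?)([a-z]+)(?:=([^\]]*))?\]", re.IGNORECASE)

    def tokenize(src):
        # Yield (kind, value, arg) tokens, where kind is "text", "open" or "close".
        pos = 0
        for m in TAG.finditer(src):
            if m.start() > pos:
                yield ("text", src[pos:m.start()], None)
            kind = "close" if m.group(1) else "open"
            yield (kind, m.group(2).lower(), m.group(3))
            pos = m.end()
        if pos < len(src):
            yield ("text", src[pos:], None)

    # e.g. list(tokenize("[b]hi[/b] [url=https://example.com]link[/url]"))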


It's built into the premise of productivity, which ultimately derives from Locke's "labor theory of property" - if you put labor into something, you have a claim on its ownership. Later philosophers studying trade and economy built this up to mean: "if we can increase the quantities of things that are legible as assets, optimizing their production on the balance sheet signals increased labor productivity". We've been awkwardly gluing that concept onto everything ever since and trying to measure things like teaching, wildlife management and "designed in California" electronics as productive activities.

Locke was thinking in agrarian terms, about how farmers work the land, or how many ships a nation has in its fleet - he was building off the mercantilism of prior centuries, which was also balance-sheet driven, but in a pure trading sense, with no connection between labor and ownership. That connection was important to ushering in coherent industrial policy, since it expressed the idea of importing raw materials and then exporting finished ones at a profit - when you come up with new categories of asset to sell, or optimizations on existing ones, the economy provides more goods and services and experiences a gain in material wealth. The real limitation is that it's too localized: the commons is persistently attacked by it because it doesn't figure into the balance sheet.


This is a great point. I'm reminded of Lakoff's metaphors. I wonder if these primitive zero-sum views of the world's economy mostly stem from folks' inability to even conceive of a less physical metaphor for the mechanism of exchange.

Locke can be forgiven, but his modern disciples less so.


I soured on games for a while, but I think there are good things awaiting them, because we're starting to get past the hurdle of "new technology usurps the old" actually being germane to the artistic processes that go into game design. Like, the hurdle still exists because the devices are so locked down, but it's stopped being a tech-driven business - there's little interest in AAA now, and the broader trends are shaken up too; there's more of a symbiotic pipeline of "make a game that helps people make video content" taking hold, one that has little relationship to recency or production values.

That said, I have been pursuing the sustainable elements of gaming for years at this point, seeing the same issues - and for me it comes down to what I summarize as "the terrarium problem": the bigger the software ecosystem you build the game over, the more of the jungle you have to port to the next platform du jour. When we approach gaming as a software problem it's just impossible; we can't support all the hardware and all the platforms.

But within that there are elements of "I can plan for this". Using tech that is already old is one way; Flash, for example, is emulated now, and if you go back to an earlier console generation or to retro computers, you can find even more accuracy and better preservation. I took the compromise of "neo retro", since there are several SBCs around that mix old chips with new stuff - those have much comfier specs to tinker with, while building on some old ideas. Tech that assumes less of a platform is another: I've taken up Forth, because Forth is the language that assumes you have to DIY everything, so it perpetuates ground-up honesty within your software, especially in a retro environment where there's no API layer to speak of and you have full control. And tech that has more of a standardized element is good: if something is "data structure portable", it's easier to recreate (this is why there are many homebrew ports of "Another World" - it's all bytecode).
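To make "data structure portable" concrete, here's a toy illustration (invented opcodes; Another World's real VM is much richer): if the game ships as a small bytecode program plus assets, a port only has to reimplement the VM, not the game.

    def run(code):
        # Tiny stack machine with invented opcodes: PUSH, ADD, PRINT, HALT.
        stack, pc = [], 0
        while pc < len(code):
            op = code[pc]; pc += 1
            if op == 0x01:                     # PUSH <byte>
                stack.append(code[pc]); pc += 1
            elif op == 0x02:                   # ADD
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == 0x03:                   # PRINT
                print(stack.pop())
            elif op == 0xFF:                   # HALT
                break

    run(bytes([0x01, 2, 0x01, 3, 0x02, 0x03, 0xFF]))   # prints 5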

The last piece of the puzzle in it is - okay, if I take things in that direction, how do I still make it fun to develop with? And that's the part I've been working on lately. I think the tools can be fun. Flash found some fun in it. But Flash as a model is too complex, too situated in just supplying every feature. PICO-8 is also fun, but very focused on a specific aesthetic. I think it's related to data models, conventions and defaults. Getting those things right clears the way.


I think the reason, if you had to pin it down to one instead of "confluence of factors", could be almost entirely demographics:

* The Boomers, an extremely large generation, enter the workforce in numbers in the 70's...

* Simultaneously, the older generations with more of the industrial and trades know-how that powered the mid-century economy are now retiring.

* The new focus of the economy in the 70's involved a restructuring around imported goods, services and high tech, with corresponding winners and losers.

* Part of this restructuring also involved a revamped military, rolling out the new "smart weapons" and running a professionalized force instead of the conscript army that fought in Vietnam.

When you add those trends together, it will look like a stagnant graph: investing in high tech doesn't come with immediate payoffs, and the new cohort is larger, so they have to compete more intensively for top positions while lacking the job experience of their predecessors. DoD spending has an outsized influence on the balance sheet, and 1994 marks the start of the Internet boom, which reflects the tech sector's move away from defense contracts (the end of the Cold War came with major layoffs in defense) and towards VC moonshots, right around the time that Gen X - a much smaller generation - is entering. The Boomers are at that point moving into more senior career roles; both cohorts are productive in the sense of facilitating deskilling, offshoring and cost-cutting, which made the balance sheets of the 90's look great and in many cases reflected genuine efficiencies that were then coming to fruition.


I think it would be more productive to define tech in terms of "constant size vs variable size". Small is relative, but constant is constant, and the lie that particularly enables bloat is that we scale between "zero and any amount" of something: any amount of data in a file, any size or color depth in an image, any number of parameters in a function call, any number of APIs, any number of processes, and so on. This is a question of both the memory/storage quantities and the amount of compute being done - when you vary both, you end up with complexity in terms of resource management.

Computers today do routinely scale to quantities so large that it seems like "any", but those are not good human-interface scales. Old systems tended toward constant sizes because the design was overtly burdened by the additional logic needed for variability, but the way we engineer and program computers now tries very hard to do variable-everything, from system power management on up to the top-level presentation.
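A small illustration of the "constant over variable" framing (a made-up example, not from any particular system): a message log whose memory cost is fixed at design time, instead of an unbounded list that quietly promises "any amount".

    class FixedLog:
        # Ring buffer with a capacity chosen up front; old entries are overwritten.
        def __init__(self, capacity=64):
            self.slots = [None] * capacity   # storage cost is constant from day one
            self.head = 0

        def push(self, msg):
            self.slots[self.head] = msg      # the oldest entry silently gives way
            self.head = (self.head + 1) % len(self.slots)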


It took me a few tries (over a few years) to properly approach the task of writing a Forth, and when I did, I made my Forth in Lua; all I really did was implement the wordlist in FORTH-83 as the spec indicated, and rewrite every time my model assumptions were off. No diving into assembly listings. Eventually I hit the metaprogramming words, and those were where I grasped the ways in which the parser and evaluator overlap in a modal way - that aspect is the beating heart of a bootstrappable Forth system, and once you have it, the rest is relatively trivial to build when starting from a high-level environment.
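Here's roughly what that modal overlap looks like, sketched in Python rather than Lua (the names are hypothetical, not my actual implementation): the text interpreter either executes or compiles each token depending on STATE, except that IMMEDIATE words always execute, which is the hook the metaprogramming words hang off.

    from dataclasses import dataclass, field

    @dataclass
    class Word:
        run: callable            # behavior when executed
        immediate: bool = False  # IMMEDIATE words run even while compiling

    @dataclass
    class VM:
        dictionary: dict = field(default_factory=dict)
        dstack: list = field(default_factory=list)
        current_def: list = field(default_factory=list)
        state: str = "interpret"                 # or "compile"

    def interpret_token(vm, token):
        word = vm.dictionary.get(token)
        if word is not None:
            if vm.state == "compile" and not word.immediate:
                vm.current_def.append(word)      # compile a reference to it
            else:
                word.run(vm)                     # execute it now
        else:
            n = int(token)                       # not a word: try it as a number
            if vm.state == "compile":
                vm.current_def.append(("lit", n))
            else:
                vm.dstack.append(n)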

The thing is, pretty much every modern high-level language tends to feel a bit clumsy as a host for a Forth, because the emphasis of the execution model is different - under everything with an Algol-like runtime there's a structured hierarchy of function calls with named parameters describing subprograms. Those are provisions of the compiler that automate a ton of bookkeeping and shape the direction of the code.

It's easier to see what's going on when starting from the metaphor of a line-number BASIC (as on most 8-bit micros) where program execution is still spatial in nature and there usually aren't function calls and sometimes not even structured loops, so GOTO and global temporaries are used heavily instead. That style of coding maps well to assembly, and the Forth interpreter adds just a bit of glue logic over it.

When I try to understand new systems now, I look for the SEE word and use it to tear things down word by word. But I still usually don't need to go down to the assembly (although some systems like GForth do print out an assembly listing if asked about their core wordset).


I understand implementing words as you think they should be. However, you need the core first, and that's where I'm working right now. I'm trying to get the central loop, dictionary, and threading model functional.

Which brings up another complication -- the threading model. There are multiple, of course. But sometimes I want to figure out, for example, what the `w` variable does. Is it different between indirect threading and subroutine threading? Maybe!
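For what it's worth, here's a hedged sketch (in Python, not any particular Forth) of a classic indirect-threaded inner interpreter, the model where W is easiest to pin down: W holds the code-field address of the word executing right now, while IP walks the thread of the colon definition that called it. In subroutine threading there's usually no explicit W or IP at all - the host CPU's call/return does that job - which is exactly why descriptions of W differ between sources. (A real NEXT is a jump rather than a nested call; the recursion below just keeps the sketch short.)

    class VM:
        def __init__(self):
            self.mem = {}        # address -> code pointer (at CFAs) or cell
            self.ip = 0          # instruction pointer into the current thread
            self.w = 0           # address of the word now executing
            self.rstack = []     # return stack for nested colon definitions
            self.dstack = []     # data stack

    def next_(vm):
        vm.w = vm.mem[vm.ip]     # fetch the next word's code-field address
        vm.ip += 1
        vm.mem[vm.w](vm)         # indirect jump "through" the code field

    def docol(vm):               # runtime for colon definitions
        vm.rstack.append(vm.ip)  # nest: remember where to resume
        vm.ip = vm.w + 1         # the body starts right after the code field
        next_(vm)

    def exit_(vm):               # EXIT / ;
        vm.ip = vm.rstack.pop()
        next_(vm)

    def dup(vm):
        vm.dstack.append(vm.dstack[-1]); next_(vm)

    def star(vm):
        b, a = vm.dstack.pop(), vm.dstack.pop()
        vm.dstack.append(a * b); next_(vm)

    def bye(vm):
        pass                     # no next_ call: execution stops here

    vm = VM()
    vm.mem = {10: dup, 20: star, 30: exit_, 35: bye,
              40: docol, 41: 10, 42: 20, 43: 30,   # : SQUARE  DUP * ;
              0: 40, 1: 35}                        # top-level thread: SQUARE BYE
    vm.dstack = [7]
    next_(vm)
    print(vm.dstack)             # [49]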

