crq-yml's comments

I think it's healthy, because it creates an undercurrent against building a higher abstraction tower. That's been a major issue: we make the stack deeper and build more of a "Swiss Army Knife" language because it lets us address something local to us, and in exchange it creates a Conway's Law problem for someone else later when they have to decipher generational "lava layers" as the trends of the marketplace shift and one new thing is abandoned for another.

The new way would be to build a disposable jig instead of a Swiss Army Knife: The LLM can be prompted into being enough of a DSL that you can stand up some placeholder code with it, supplemented with key elements that need a senior dev's touch.
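To make that concrete, here is roughly the shape of "jig" I mean - a throwaway Python script where the generated plumbing is disposable and the parts that actually matter are left as loud stubs for the senior dev (all names here are hypothetical; a sketch, not anything a model actually produced):

  import json

  def load_orders(path):
      # Disposable plumbing: fine to regenerate or throw away wholesale.
      with open(path) as f:
          return json.load(f)

  def score_order(order):
      # TODO(senior dev): the real pricing rules belong here; this
      # placeholder exists only to keep the jig runnable.
      return order.get("total", 0)

  if __name__ == "__main__":
      orders = load_orders("orders.json")
      print(sorted(orders, key=score_order, reverse=True)[:10])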

The resulting code will look primitive and behave in primitive ways, which at the outset creates myriad inconsistencies, but it is OK for maintenance over the long run: primitive code is easy to "harvest" into abstract code; the reverse is not so simple.


Not only that.

This article starts with "gaming" examples. Simplified to hell but "gaming".

How many games still look like they're done on a Gameboy because that's what the engine supports and it's too high level to customize?

How about the "big" engines, Unity and Unreal? Don't the games made with them kinda look similar?


I love writing shaders and manually shovelling arrays into the graphics card as much as anyone, and I know first hand how this will give the game very much your own style.

But is that the direction LLM coding goes? My experience is that LLMs produce code which is much more generic and boring than what skilled programmers make.


But do users care that your code is boring?


If the end result is boring, and you're doing games, they might.

Edit to add to my sibling comment:

> But some abstractions which make standard stuff easy make non-standard stuff impossible.

I swear that at some point I could tell a game was made in Unity based on the overall look of the screenshots. I didn't know it was the fault of the default shaders, but they all looked samey.


Since games are all about artistic expression and entertainment, I would say yes, it matters to the end users. The thing is, if you don't have your hands in the code details, you might not know the directions it can be taken in (and which it cannot). Seeing the possibilities of the code is one way to get the creative juices flowing for a game programmer. Just look at old games, which demonstrate extreme creativity even within the limits the old hardware put on the software. But being deep in the code, seeing the technical possibilities, is what allowed this.

I think this is what the comment above was lamenting about abstractions. I am all for abstraction when it comes to being productive. And I think new abstractions open new possibilities sometimes! But some abstractions which make standard stuff easy make non-standard stuff impossible.


I think this depends a lot on the stack. For stacks like Elixir and Phoenix, IMHO the abstraction layer is about perfect. For anyone in the Java world, however, what you say is absolutely true. Having worked in a number of different stacks, I think that some ecosystems have a huge tolerance for abstraction layers, which is a net negative for them. I would sure hate to see AI decimate something like Elixir and Phoenix, though.


Quick-dry gel formulas are the new wave in gel pens, and they are pretty easy to use as a lefty. Bic actually has my favorite of them, the Gelocity. It's very good if you like the Bic ballpoint's oily rolling action and want that as a gel.


For most pens, it's really all about the quality of the writing surface, and blame is incorrectly attributed to the pen when it's taken outside its design limits. Paper that is overly thin or rough with a hard backing (like most school desks) tends to be less forgiving, and ballpoints become more likely to clog because the ball will roll without good contact - for those surfaces, dry media, marker, or brush will do best. But smooth, heavy papers backed by kraft board will be very sympathetic to all pens.


I think I do some of this, but my framing is not explicitly about adopting monastic practices - rather, it's about having a "novelty budget" each day. Every novel stimulus is an opportunity to careen off course.

However, if the task ahead of me is great and I'm motivated, then I automatically seek less novelty to focus on it. IOW, maintaining a boring baseline of routine so that novelty is selective is important as a way of being able to "jump into action". It's good to get off the phone. It doesn't replace the intrinsic motivation.

There's an aspect to productivity advice that is about shouting down your burnout by adding more productivity hacks, taking stimulants, or flagellating yourself. Burnout's root cause has to be approached by asking the tougher questions about life and aligning with a philosophy that is truthful to that. The work itself will have moments of routine boredom, exhilaration, and heartbreak, but the motive has to endure all of it.


The majority of what we make is temporary, and a majority of software amounts to wheel reinvention. But this has been true throughout history. Crafted objects have always had their designs iterated upon and adjusted to the available dependencies.

Liberation isn't found in "fighting", as if there were some kind of ideological showdown that resolved everything. That's a way to drain your energy for living while waiting for a final resolution that never comes. Indeed, by directing it towards "enemies" you can accumulate guilt in yourself, and become increasingly unstable.

Rather, it has to be part of living and personal character, to take opportunities to "do hard things" because they are interesting, not because you are coerced. When you do this, succeed, and pass down the knowledge, the potential opens up to others. You don't have to believe right things to stumble into new ideas.


I did spend a little while evaluating what GPT could output for a fairly intensive gaming task - implement a particular novel physical dynamics model. (Yes, there are newer tools now: I haven't used them and don't want to pay to use them.) And what it did was impressive in a certain sense, in that it wrote up a large body of code with believable elements, but closer examination demonstrated that, of course, it wasn't actually going to function as-is: it made up some functions and left others stubbed. I would never trust it to manage memory or other system resources correctly, because AFAIK nobody has implemented that kind of "simulate the program lifecycle" bookkeeping into the generated output.

There's a domain that I do find some application for it in, which is "configure ffmpeg". That and tasks within game engines that are similar in nature, where the problem to be solved involves API boilerplate, do benefit.
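For example, the kind of answer I'm after is just a known-good invocation like this (a sketch from memory - the flags are all standard ffmpeg, but I'd still sanity-check the output):

  import subprocess

  # Transcode to H.264 video and AAC audio at a reasonable quality.
  subprocess.run([
      "ffmpeg", "-i", "input.mov",
      "-c:v", "libx264", "-crf", "23", "-preset", "medium",
      "-c:a", "aac", "-b:a", "128k",
      "output.mp4",
  ], check=True)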

What also works, to some degree, is to ask it to generate code that follows a certain routine spec, like a state machine. Again, very boilerplate-driven stuff, and I expect an 80% solution and am pleasantly surprised if it gets everything.
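For instance, hand it a transition table and ask for the skeleton around it - something like this sketch (the states and events are invented for illustration):

  # Table-driven state machine: (state, event) -> next state.
  TRANSITIONS = {
      ("idle", "insert_coin"): "ready",
      ("ready", "press_start"): "running",
      ("running", "finish"): "idle",
  }

  def step(state, event):
      # Unknown events leave the state unchanged; a real spec would
      # say whether that should be an error instead.
      return TRANSITIONS.get((state, event), state)

  state = "idle"
  for event in ["insert_coin", "press_start", "finish"]:
      state = step(state, event)
  print(state)  # back to "idle"

That's the 80% I mean: the table is trivially checkable against the spec, and the leftover 20% is deciding what the edge cases should do.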

And it is very willing to let me "define a DSL" on the fly and then act as a stochastic compiler, generating output patterns that largely match what the input would lead you to expect. That is something I know I could explore further.
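A toy version of what I mean, with both the DSL line and its expansion invented for illustration: after the grammar is defined in the prompt, a line like "spawn enemy at (3, 4) hp 20 drops key" comes back expanded into ordinary code, modulo the usual stochastic drift:

  def spawn_enemy(x, y, hp, drops):
      # Stub standing in for real game code, just to make the
      # expansion runnable.
      print(f"enemy at ({x},{y}) hp={hp} drops={drops!r}")

  # What I'd expect the "compiler" to emit for the DSL line above:
  spawn_enemy(x=3, y=4, hp=20, drops="key")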

And I'm OK with this degree of utility. I think I can refine my use of it, and the tools themselves can also be refined. But there's a ceiling on it as a directly impactful thing, where the full story will play out over time as a communication tech, not a "generate code" tech, with the potential to better address Conway's law development issues.


There's an aspect to Waymo's plan that means they never really have to solve highway driving to capture market share. Highways are appealing to car manufacturers because they incentivize speed, acceleration, braking, and, more nefariously, being the bigger vehicle that is more capable of surviving accidents. When automobiles were a new, growing market, manufacturers lobbied for fast streets, wide highways, and ample parking.

A viable self-driving business plan, on the other hand, has to accommodate taking final responsibility in an accident. That was what got manufacturers into the lobbying game to begin with - they needed to create a public that saw themselves as responsible owners while everyone else on the road was a menace, and they worked towards that reality through both consumer marketing and the financing and regulation systems around autos. Self-drive means that the goal changes to "every ride we provide is a safe one, and we do not serve customers that ask for danger".

And that means that some markets like regional airports and particularly sprawling, car-dependent metros may go unserved for some time, depending on how the strategists feel about their chances, but then the aspect of courting the public shifts towards strongarming governments into more intensive road safety measures, and then to only professional human drivers, and then perhaps to mandated self-drive in urban areas. Having tons of capital to throw around lets you dream very big.

In this way the problem gets redefined incrementally towards something that meets with where the engineering actually is and allows Waymo to compete while retaining its excellent record.


There's a related phenomenon in that we now have an "iPad kid" generation that gets sucked into these extremely precise digital tools without a lot of context, following the beginner's trope of overvaluing rendering and draftsmanship, to the end of making pieces that all take hundreds of hours and do very little to utilize the machine's ability to automate or dissect information.

I remember coming across a livestream of someone whose line-making process was to zoom in and scrub over a tiny area repeatedly for several minutes to create the effect of a single ink brush stroke. The effect was pleasant and had a very intentionally designed quality to it, but I came back a few hours later and he had made hardly any progress. The goal he had in mind was really better suited for vector tools, but the machine wasn't stopping him in the way that paper would give out under intensive scrubbing. I'm quite sure, extrapolating that anecdote, that there's someone out there trying to intentionally design each pixel in a 4k image.

IMHO the single most important thing digital provides is new ways to see - I'll often direct students to use the threshold filter to discover new lighting shapes in references or indicate planning problems with their value structure.
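The digital version of that exercise is only a couple of lines - here's a sketch with Pillow (the filename and threshold are placeholders; slide the threshold around to see where the big shapes break):

  from PIL import Image

  img = Image.open("reference.jpg").convert("L")  # drop color, keep value
  # Everything above the threshold reads as "light", the rest as
  # "shadow" - the two-value map exposes the big lighting shapes.
  thresholded = img.point(lambda v: 255 if v > 128 else 0)
  thresholded.show()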


> IMHO the single most important thing digital provides is new ways to see - I'll often direct students to use the threshold filter to discover new lighting shapes in references or indicate planning problems with their value structure.

Completely agree. The hidden structures of an artwork are pretty much invisible without access to some flavor or other of digital tool.

I believe that Rembrandt would have killed for the ability to photograph his work and apply desaturation in order to see its lightness map. In fact, he did do something similar: viewing his paintings in candlelight, which does indeed almost desaturate his colors.

Likewise, any decent Impressionist would have loved to be able to create a Saturation map using Photoshop's Selective Color adjustment.
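You can approximate that today without Photoshop - a sketch using Pillow's HSV conversion, which is only roughly equivalent to Selective Color but makes the same point:

  from PIL import Image

  # "painting.jpg" is a placeholder for any scanned painting.
  hue, saturation, value = Image.open("painting.jpg").convert("HSV").split()
  # Bright areas of this grayscale band are the most saturated
  # passages of the painting; dark areas are the near-grays.
  saturation.show()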


The thing that really drove the PC era was that the commodity desktop spec was rapidly gaining capability, compilers for it were getting good enough to depend on high-level languages, and the most affordable way to meet the market where it was, was not to build custom (as had been the case in the 80's when looking at stuff like video editing systems, CGI, digital audio, etc.) but to use Microsoft as a go-between. All the heralded architectures of the 80's were doing things more custom, but it amounted to a few years of advantage before that convergence machinery came along and stripped away both customers and developers.

Apple did survive in that era, though not unassisted, and the differentiation they landed on (particularly after Jobs came back) was to market a premium experience as an entry point. I think that is probably going to be the exit from today's slop.

In this era, spec is not a barrier - you can make <$100 integrated boards that are competent small computers, albeit light on I/O - and that means there's a lot more leeway to return to the kinds of specialty, task-specific boxes that the PC had converged away from. There's demand for them, at least at a hobbyist level.

For example, instead of an ST and outboard synths for music, you could now get an open-source device like the Shorepine Tulip - an ESP32 touchscreen board set up with MicroPython and some polished DSP code for synths and effects. It's not powerful enough to compete with a DAW for recording, but as an instrument for live use, it smashes the PC and its nefarious complexities.


There are movements towards making these. It's an integrated SW+HW stack problem, and Apple has had years of leadership in that space. Times are changing.

What you would see in the past was "PC or Android with a tablet manufacturer's sticker on it". Wacom has a history of occasionally licensing their stuff for a laptop. And XPPen, for example, has made a few in the "Magic Drawing Pad" series now, and they needed a few iterations to move away from being a generic OEM tablet to actually using their digitizer tech. These products don't excite tech enthusiasts - a fully integrated device, as opposed to a screen and digitizer, comes with more concerns about all-round performance and value - and so far, the premium on them makes them compete with iPads. But there is tremendous demand for it - seemingly every "art kid" sees an iPad and Procreate as a milestone, because that combination is what the content creators they watch are using.


> Times are changing.

Are they? The iPad is in its 15th year now. Apple has shown exactly what needs to be done, yet there isn't enough interest in the open source community to develop (or sponsor development of) a solution. I don't see much evidence of change.

> because that combination is what the content creators they watch are using

But also because it's the best hardware and software and can be relatively inexpensive, especially if you are willing to buy used.

