
React became popular, so companies adopted it because it was widely used and had a strong ecosystem. This led to more jobs requiring React. More developers learned React, which in turn led to more libraries, tutorials, and community contributions appearing. Increased resources and community support made it easier for new developers to start using React. Its popularity kept growing, which encouraged even more companies to adopt it.

The loop repeats.

Its popularity is self-reinforcing.

It’s also a major reason why React dominated the data that LLMs were trained on. You'll notice it's the default UI code when you prompt LLMs, unless you specify otherwise.

Personally, I prefer Svelte.


Yes, and the question is: do horses have 20 years, or less, i.e. 5 years?

Humans, their sounds and smells. Yes, I tried BJJ.

I read some analysis of this specific battery pack which suggests it may not be the bee's knees: https://www.lumafield.com/first-article/posts/whats-hiding-i...

Chances are you're not listening to it in an environment that's quiet enough for 65dB SNR to be even noticeable.

Limited space seems like a surprising thing to lean on, especially when the icon sits next to a text label, given the increase in pixel count/resolution on devices over the last 5-10-15 years.

Words create better beginners than icons alone.

UI/UX can evolve as the baseline digital literacy of users evolves (usually very slowly) in order to remain maximally inclusive.

One thing I'd consider is starting with text labels and, as they get used more, beginning to show an icon. Just reserve a bar on the left for the icons to appear in, so that learning can happen.

The screenshot where only some menu items had icons feels more memorable for that reason.


I saw a similar (I think!) paper, "Grassmannian Optimization Drives Generalization in Overparameterized DNN", at OPT-ML at NeurIPS last week [0]

This is a little outside my area, but I think the relevant part of that abstract is "Gradient-based optimization follows horizontal lifts across low-dimensional subspaces in the Grassmannian Gr(r, p), where r ≪ p is the rank of the Hessian at the optimum"

I think this question is super interesting though: why can massively overparametrised models still generalise?

[0]: https://opt-ml.org/papers/2025/paper90.pdf


On the margin, you can let anything influence your voting decision.

> Now I have to wait, first XYZ might not be on the cassette I have with me,

> There has to be some minimal amount of effort for a `thing`, when you go below it, it just becomes nothing.

I had this conversation with someone at the weekend. It's hard to find new music on Spotify because it's too easy to find stuff you already like.

I'm in my early 50s. I grew up in the 80s, in a fairly rural part of the UK with basically one music shop nearby and the next nearest a good four hours each way on the bus.

In 1988 when I was 15, a load of awesome albums came out that I really wanted and mostly couldn't afford. I bought Public Enemy - Fear of a Black Planet, Iron Maiden - Seventh Son, 808 State - Newbuild, and probably a couple of others. I'm sure I got into FLA and The Pixies round about then too.

These tapes were about a tenner each and I had to repair quite a lot of Amstrad satellite receiver power supplies in my weekend job, and if I spent it all on tapes I'd have no money left for beer.

An awful lot of my tapes were pirate copies from friends, which we swapped at school. To this day I'm convinced that Appetite For Destruction was mixed to sound "right" when copied onto a battered old TDK D90 that's been rattling around in your schoolbag for a month by your mate's big brother who bought the CD because he's got a good job earning nearly £5/hr working on a fishing boat and has a really nice stereo.

The upshot of this is that I listened to a lot of things that I simply did not like very much, because it was new and I hadn't listened to it a million times. That being said, I don't think there was much I heard and thought "yeah I don't care for this at all", but there were definitely tapes I listened to that I wouldn't have picked out by myself.

I wouldn't have listened to 10,000 Maniacs if someone my dad worked with hadn't put it on in the car and given me his copy of the tape. I might not have listened to Dire Straits so much if another of my dad's friends hadn't given me a handful of bootlegs of their concerts and a copy of Making Movies, and one of the bigger kids in high school (hi Aaron, hope you're doing well) hadn't given me a pirate copy of Brothers in Arms.

I've since bought all of these on at least one other format.

I wouldn't have listened to Suzanne Vega I don't think, if my aunt hadn't given me a copy of her eponymous first album for Christmas when I was about 12 or 13 (it hadn't been out long in the UK), and I absolutely love Suzanne Vega. Loved her stuff from the first note of "Cracking". Have you ever listened to or watched something that you wanted to play at ten times speed just so you could put it into your head faster, then play it again at one tenth speed so you could pick up all the details?

This doesn't even touch on mixtapes, where someone else puts the effort in to curate a collection of things they think you will like, that represents who you are to them. Mixtapes were beautiful.

Now, with any luck, people will get into media they can hold in their hand. Even just things like MP3s on an SD card in some homebrew Arduino blob of a player.

There's more to music than just the noise it makes.


Maybe you are hateful and intolerant to ads!

Nothing more addictive than adding more padding

Is this really true? Messaging like this will cause a lot of developers to just give up. Most places I've worked at treated accessibility as a best-effort sort of thing at most. After reading this, there will be no attempts made to improve the state of affairs.

Perhaps that will be an improvement? I don't know.


This. If I'm out looking for a "Save" button, I'm going to pattern match "ancient disk icon" without even thinking about it.

It's also the reason why some menu entries get icons and others don't.

If the icon doesn't convey information by itself (like the "move to top" icon in the example), then it's there as a visual anchor - and you don't really need to have 4 of the same "delete" icon if a menu has 4 different "delete" options next to each other. Just one is enough of an anchor to draw your attention to the "delete group", and having just one keeps the visual noise low.

Likewise, you don't need visual anchors for every single option - just the commonly used ones, the ones you expect people to be looking for, and the ones that already have established pictography.


You mean it hasn't already?

I have similar stories. I showed the Confluent consultants a projection of their Kafka quote vs Kinesis and it was like 10x; even they were confused. The ingress/egress costs are insane. I think they just do very deep discounts for certain customers. The product is good, but if you pay full ticket it probably doesn't make sense.

I agree with this. Consistency > immediate design beauty

> Not everything NEEDS to be useful

... until the not-useful becomes distracting and/or causes information overload.

In the case of Apple, I've been a user of the Accessibility menu ever since they introduced the stupid parallax wiggling of the icons. Right now I use: Reduce Motion, Bold Text, and Reduce Transparency. Because I want to see what I'm looking for when using the phone and not squint through pointless effects.


Even cut, copy, and paste are not used consistently.

In many AI chatbots, I see the "paste" icon used for the "copy" function.


a) __attribute__((warn_unused_result)) is non-standard in the first place; are you looking for [[nodiscard]]? GCC does not warn on a cast to void with that.

b) A return value that is explicitly marked like this is very different from the unused variable the GP suggested the cast-to-void idiom for. GCC does not warn about variables that are unused except for a cast to void.
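
To make both points concrete, here's a minimal sketch (my own illustration, assuming GCC with -Wall -Wextra; the function names are made up):

    [[nodiscard]] int standard_marked();                   // C++17 standard attribute
    __attribute__((warn_unused_result)) int gnu_marked();  // GNU extension

    void caller() {
        standard_marked();        // warns: ignoring a [[nodiscard]] value
        (void)standard_marked();  // no warning: GCC honours the cast-to-void opt-out here (point a)

        gnu_marked();             // warns (-Wunused-result)
        (void)gnu_marked();       // GCC still warns: the GNU attribute ignores the cast

        int unused_local = 42;    // would trigger -Wunused-variable on its own...
        (void)unused_local;       // ...but a cast to void silences it (point b)
    }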


The scenario was not about people being discriminated against. It was about the way people handle and react to the language of others.

Rather than improving testing for fallible accessibility assists, why not leverage AI to eliminate the need for them? An agent on your device can interpret the same page a sighted or otherwise unimpaired person would see, giving you, as a disabled user, the same experience they would have. Why would that not be preferable? It also puts you in control of how you want that agent to interpret pages.

This article could have been one quarter the length.

> 1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.

They have more work to do until they don't.

The number of bank tellers went up for a while after the invention of the ATM, but then it went down, because all the demand was saturated.

We still need food; farming hasn't stopped being a thing. Nevertheless, we went from 80-95% of us working in agriculture and fishing to about 1-5%, and even with only that share working in the sector, we have more people over-eating than under-eating.

As this transition happened, people were unemployed, they did move to cities to find work, there were real social problems caused by this. It happened at the same time that cottage industries were getting automated, hand looms becoming power-looms, weaving becoming programmable with punch cards. This is why communism was invented when it was invented, why it became popular when it did.

And now we have fast fashion, with clothes so fragile that they might not last one wash, and yet we still spend a lower percentage of our incomes on clothes than people in the pre-industrial age did. Even when demand is boosted by having clothes that don't last, we still make enough to supply it.

Lumberjacks still exist despite chainsaws, and are so efficient with them that the problem is we may run out of rainforests.

Are there any switchboard operators around any more, in the original sense? If I read this right, the BLS groups them together with "Answering Service", and I'm not sure how this other group then differs from a customer support line: https://www.bls.gov/oes/2023/may/oes432011.htm

> 2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.

This would be absolutely correct — I've made the analogy to Amdahl's law myself previously — if LLMs didn't also do so many of the other things. I mean, the linked blog post is about answering new-starter questions, which is also not the only thing people get paid to do.

Now, don't get me wrong, I accept the limitations of all the current models. I'm currently fairly skeptical that the line will continue to go up as it has been for very much longer… but "very much longer" in this case is 1-2 years, room for 2-4 doublings on the METR metric.

Also, I expect LLMs to be worse at project management than at writing code, because code quality can be improved by self-play and reading compiler errors, whereas PM has slower feedback. So I do expect "manage the AI" to be a job for much longer than "write code by hand".

But at the same time, you absolutely can use an LLM to be a PM. I bet all the PMs will be able to supply anecdotes about LLMs screwing up, just like all the rest of us can, but it's still a job task that this generation of AI is automating at the same time as all the other bits.


I've used Copilot in VSCode quite extensively, using both GPT-5(.1) and Sonnet 3.7 and Sonnet 4.5 as the LLM backend. (Before our company got the deals in place with OpenAI and Anthropic to use their tooling directly.)

The results are measurably worse every time vs. doing the exact same operation with Codex CLI or Claude Code.

There's something in the Microsoft AI framework that makes them worse and more "stupid" than the competition.


A synonym is a different word with the same meaning. A homonym is the same word with a different meaning.

Pulsar supports the Kafka protocol, so anything that works with Kafka should just work with Pulsar.

https://docs.streamnative.io/cloud/build/kafka-clients/kafka...
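
In practice that means pointing an ordinary Kafka client at the Pulsar broker's Kafka listener. A minimal sketch with librdkafka (assuming a cluster with the Kafka protocol handler enabled; "pulsar-broker:9092" and "my-topic" are placeholders):

    #include <cstring>
    #include <librdkafka/rdkafka.h>

    int main() {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();

        // Same client, same config key; only the address points at Pulsar.
        rd_kafka_conf_set(conf, "bootstrap.servers", "pulsar-broker:9092",
                          errstr, sizeof(errstr));

        rd_kafka_t *producer = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
        if (!producer) return 1;

        const char *payload = "hello from a plain Kafka client";
        rd_kafka_producev(producer,
                          RD_KAFKA_V_TOPIC("my-topic"),
                          RD_KAFKA_V_VALUE((void *)payload, std::strlen(payload)),
                          RD_KAFKA_V_END);

        rd_kafka_flush(producer, 10 * 1000);  // wait up to 10s for delivery
        rd_kafka_destroy(producer);
        return 0;
    }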


Full robotic servants are very costly; only AI servants are cheap enough. But I do think we're going to see more wars, and more robot use in wars.

> There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull).

The most interesting questions are the ones that assume human equivalency.

Suppose an AI can produce like a human.

Are you ok with merging that code without human review?

Are you ok with having a codebase that is effectively a black box?

Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes?

Are you ok with being dependent on the company providing this code generation?

Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them?

Will we be ok if the well of public technical discussion LLMs are feeding from dries up?

Those are the interesting debates I think.


It's "strange" because they don't like the answer. Namely, that the universe is strongly deterministic, that relativity demands a block-universe model of time, and that all observers merely trace an indexed line through the block. A so-called "worldline."

Consider (note -- I'm basically doxxing myself, but this is an example from a book I'm set to publish next year):

"One bright day you and your friend Tacitus are playing a duet. You've got a guitar, he's got a harp; you're sitting on one side of the garden, he's sitting on the other. You simultaneously hit a note. Your note is 1 and Tacitus' note is 2. Shortly thereafter, you hit note 3 and Tacitus hits note 4. Unbeknownst to you, a couple of interstellar spacefaring yachts, Dragon and Crab, have passed each other directly overhead. On those yachts, with velocity at .9c shifting their own reference frames, things flow differently. On Dragon, notes 2 and 3 appear to sound simultaneously. On Crab, which is heading in the opposite direction, it's 4 and 1 that sound at the same time. So, to Crab, prior event 1 is simultaneous with present event 4 – or, alternatively framed, future event 4 takes place simultaneously with present event 1.

"Now imagine that the garden you're sitting in is very large, and you're separated from Tacitus by about a kilometer. You'd notice a bit of time lag affecting your performance, but Crab would notice something much stranger: That note 4 takes place before note 1. In other words, that an event in your future has already taken place."

And just as Crab saw 4 played before 1, things that shall happen in our future are in principle already available to other observers.

One doesn't need to travel at a large fraction of the speed of light for this to be true. Even a sufficiently fast aircraft, moving directly overhead the duet, would have its perception of time altered somewhat, so that 2 takes place slightly before 1 – a very small effect, on the order of attoseconds – even though to the guitar player the two events are perfectly simultaneous, e.g. happen in the same Planck moment.
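
To put a formula under this (standard special relativity, nothing specific to the book example): if two notes are simultaneous in the garden frame (Δt = 0) and separated by a distance Δx, then for an observer moving at velocity v along the guitar-harp axis, the Lorentz transformation gives

    \Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right) = -\gamma\,\frac{v\,\Delta x}{c^2},
    \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

The sign of Δt' flips with the sign of v, which is exactly why Dragon and Crab, passing in opposite directions, disagree about which note came first, and why even a slow observer sees a tiny but nonzero shift.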

Science doesn't like the obvious implications of this, because determinism is considered something that poisons science by making free experimentation impossible. But there are no credible alternatives. The only ones I've seen proposed boil down to solipsism or relativism -- and those are even worse, especially when they're obfuscated to such an extent that they seem almost credible, e.g. in "cone presentism."

To be clear, solipsism in this context is: "One frame of reference is true, and every other observer is wrong about the reality of its perceptions to whatever extent they disagree with the privileged frame."

Relativism is essentially: "A future I cannot perceive and do not experience is not my future; I disavow it." (More sophisticated forms attempt to play games with semantics and redefine concepts like "present time.")


I used to code at night with a kerosene lamp sitting on my desk. I love the light spectrum of a live fire.

"Portable" (they couldn't even fit in a pocket) CD players were the worst thing imho. Too sensitive to even small shocks, which was particularly annoying while taking longer walks, and draining batteries like crazy. I switched from cassette players to MP3 players, almost completely skipping the era of CD players. I've tried it once or twice because my sister had it, and never again.


