Dylan16807's comments

Please don't repeat some guy's guess about spaces as fact, especially when that's not how Windows parses paths.

A good point. And don't believe that the debug output the AI system produced relates to what it actually did, either.

It depends on stuff.

Sometimes a URL can have a password in it.

But when it's just a sequential-ish ID number, you have to accept that people will change the ID number. If you want security, do something else. No prosecuting.
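To illustrate the distinction (the URLs below are made up for the example): a password embedded in a URL's userinfo part is an actual secret the server can check, while a sequential ID in the path is just a number anyone can increment.

    from urllib.parse import urlsplit

    # Hypothetical URL with credentials in the userinfo component (RFC 3986).
    # The password is a real secret the server can verify.
    parts = urlsplit("https://alice:s3cret@example.com/reports/2023")
    print(parts.username, parts.password)  # alice s3cret

    # Hypothetical URL whose only "protection" is a sequential-ish ID.
    # There is no secret here; changing 1042 to 1043 is trivial.
    print(urlsplit("https://example.com/invoices/1042").path)  # /invoices/1042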


Are you talking about CPU support? I installed a 32 bit program on basic linux mint just the other day. If I really need to load up a pentium 4 I can deal with it being an older kernel.

That's exactly what I mean, I wish Linux was more like NetBSD in its architecture support. It kind of sucks that it's open source but acts like a corporate entity calculating the profitability of things. There is one very important reason to support things in open source: because you committed to it, and you can. If there are practical reasons, such as a lack of willing maintainers (though I refuse to believe that out of all the devs who beg for a serious role in kernel maintenance, none are willing to support i386 - if NetBSD has people, so can Linux), that's totally understandable.

You'd expect Microsoft to drop support for things once they stop making money, or for some other calculated cost reason, but Microsoft keeps supporting old things few people use even when it costs them a performance/security edge.


Well for now the kernel still supports it. And the main barrier going forward is some memory mapping stuff that anyone could fix.

Though personally, while I care a lot about using old software on new hardware, my desire to use new software on old hardware only goes so far back, and 32 bit mainstream CPUs are outside that range.


I think eventually 32 bit hardware and software shouldn't be supported, but there are still plenty of both. We shouldn't get rid of good hardware because it's too old, that's wasteful. 16 bit had serious limits, but 32 bit is still valid for many applications and environments that don't need more than ~3GB of RAM. For example, routers shouldn't use 64 bit processors unless they're handling that much load, die size matters there, that's why they mostly use Arm, and that's why Arm has Thumb mode (less instruction width = smaller die size). I'm sure the tiny amounts of money and energy saved by not having that much register/instruction width add up when talking about billions of devices.

Open source isn't where I'd expect abandonware to happen.


> We shouldn't get rid of good hardware because it's too old, that's wasteful.

Depends on how much power it's wasting, when we're looking at 20 year old desktops/laptops.

> 32 bit is still valid for many applications and environments that don't need more than ~3GB of RAM.

Well my understanding is that if you have 1GB of RAM or less you have nothing to worry about. The major unresolved issue with 32 bit is that it needs complicated memory mapping and can't have one big mapping of all of physical memory into the kernel address space. I'm not aware of a plan to remove the entire architecture.

It's annoying for that set of systems that fit into 32 bits but not 30 bits, but any new design over a gigabyte should be fine getting a slightly different core.
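As a rough sketch of the numbers (assuming the classic i386 defaults of a 3G/1G user/kernel split with roughly 896MB of directly mapped lowmem; these figures are assumptions, not taken from this thread):

    # Assumed classic i386 defaults: 1GB of kernel address space,
    # ~128MB reserved for vmalloc/fixmap, leaving ~896MB of lowmem
    # that can be mapped directly.
    LOWMEM_MB = 1024 - 128

    for ram_mb in (256, 512, 1024, 2048, 4096):
        highmem_mb = max(0, ram_mb - LOWMEM_MB)
        if highmem_mb == 0:
            print(f"{ram_mb:>4}MB RAM: everything fits in the direct mapping")
        else:
            print(f"{ram_mb:>4}MB RAM: {highmem_mb}MB needs highmem juggling")

Systems at or below roughly a gigabyte have little or nothing above the direct mapping; the 2GB-4GB range is where the complicated mapping really kicks in.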

> For example, routers shouldn't use 64 bit processors unless they're handling that much load, die size matters there

I don't think that's right, but correct me if I missed something. A basic 64 bit core is extremely tiny and almost the same size as a 32 bit core. If you're heavy enough to run Linux, 64 bit shouldn't be a burden.


In multiple ways, / doesn't have to be one of your drives.

They're pretty hostile to it though. With normal developer mode, it actively prompts the user to factory wipe it on every boot!

Citation needed that panicking and quitting early is a "poverty mentality".

What specifically are you calling revisionism? I don't see anything in their post that's tied to these numbers.

They said it's good. They didn't say it matches the best decades of the economy.


> My point: cool benchmark, what does it matter?

And then people explained why the effects are smoothed over right now but will matter eventually and you rejected them as if they didn't understand your question. They answered it, take the answer.

> It didn’t merely say Google’s TPUs were better. It said that Nvidia can’t compete.

Can't compete at clusters of a certain size. The argument is that anyone on Nvidia simply isn't building clusters that big.


Have they successfully gotten any?

> At a certain point, paying a few dollars per month to host a decade old version of a $13 product that gets downloaded once a year actually is a problem.

The few dollars was talking about total cost, not per-version cost.

If we're talking about a single version of FileZilla that rarely gets downloaded, the hosting cost is somewhere below a penny per month, possibly actually zero. And they might need to store 25 archival versions total? It's nothing.
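As a purely illustrative back-of-the-envelope (installer size, storage price, and bandwidth price are all assumed numbers, not FileZilla's actual figures):

    # All inputs are assumptions for illustration only.
    versions = 25                  # archival versions kept
    size_gb = 0.012                # assumed ~12MB installer per version
    storage_per_gb_month = 0.023   # assumed object-storage price, USD
    egress_per_gb = 0.09           # assumed bandwidth price, USD
    downloads_per_month = 1        # "downloaded once a year", rounded way up

    storage = versions * size_gb * storage_per_gb_month
    egress = downloads_per_month * size_gb * egress_per_gb
    print(f"storage ${storage:.4f}/mo, egress ${egress:.4f}/mo")
    # ~$0.0069 + ~$0.0011 per month: well under a penny.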

