crthpl's comments | Hacker News

Why would charging for extended support hurt Microsoft? They "win" if you pay for it or if you upgrade to Windows 11. If you do neither, you'll probably get hacked down the line unless you air-gap, and Microsoft doesn't gain or lose anything (except goodwill, but that doesn't matter with the kind of monopoly they have).


I think there is a deterministic algorithm (based on GPS position?) for determining who climbs and who descends.


I'm guessing macOS, Linux, Android, Windows, iOS, and Web?


Correct


The idiom "Don't judge a book by its cover" applies to many situations, but books are not one of them.



TIL:

    #:~:text=
Are there more of these? What is this magic!


It's a "text fragment"; started as a Chrome thing, other browsers are slowly adding support.

https://developer.mozilla.org/en-US/docs/Web/Text_fragments
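
For example (example.com is just a placeholder here), appending a fragment like the one below makes a supporting browser scroll to and highlight the first match of "some phrase"; MDN documents the full syntax as #:~:text=[prefix-,]textStart[,textEnd][,-suffix].

    https://example.com/article.html#:~:text=some%20phrase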


At least he was totally right about Java not being a hacker language.


So, what actually makes Java successful despite all of these?


Two things (in the corporate world): Android and the Spring framework. It's also basically OOP-the-language, which means large enterprises are biased towards it.

Other factors are that it has very strong built-in libraries and has become a popular choice for "how to program" classes, so lots of people are familiar with it. Having a similar name to JavaScript (the web's native language) also helps.


These are reasons why an individual hacker might not take to a language. They don't factor into a large organisation's choice of language. And to be honest, quite a few of these arguments would apply to JavaScript as well.


Dawn is the WebGPU backend in Chromium, while wgpu is the WebGPU backend for Firefox, written in Rust. wgpu is seeing a lot of use outside the browser; there are some examples on its website.

https://wgpu.rs/
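
To give a flavour of the non-browser path, here's a minimal sketch of enumerating a GPU adapter with wgpu (the wgpu and pollster crates are real, but the exact constructor calls below are assumptions, since the API has shifted between wgpu releases):

    // Minimal native wgpu sketch: find a GPU adapter and print its info.
    // Assumes recent wgpu + pollster crates; API details vary between releases.
    fn main() {
        // Instance covering the available native backends (Vulkan, Metal, DX12, GL).
        let instance = wgpu::Instance::default();
        // There's no browser event loop, so block on the async adapter request.
        let adapter = pollster::block_on(
            instance.request_adapter(&wgpu::RequestAdapterOptions::default())
        )
        .expect("no suitable GPU adapter found");
        println!("Using adapter: {:?}", adapter.get_info());
    }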


The 7 million is probably how many different hues we can see. We can see many more different brightness levels.


The article that mentioned it already includes all the different components, lightness, etc.


Not if only one species came to earth.


Or perhaps the other way around: we are social because we have language.


You think language would spontaneously evolve without a disposition to socialize a priori? That doesn't make any sense.


Presumably we needed to “notice” an orange in a tree before we could point to it. Before you could communicate with signs, you could probably realize something equivalent to “that’s food”, despite not having language yet.


Their Performance section shows that the 12GB model is 50%-150% faster than the 3080 Ti (so better than the 3090 Ti!), and it also has more CUDA cores and memory than the 3070.


Yeah, but that's how new tech works. If the price of every new generation of hardware scaled linearly with the increase in performance, we'd need mortgages for a new PC/GPU.


That's how new tech worked in the Moore's Law era. Times are changing and you're observing the evidence of that.

And yes, Moore's law notionally died a long time ago, but its decline has been hidden in various ways... and things are getting past the point where it can be hidden anymore.

Node costs are way, way up these days; TSMC is expensive as hell and everyone else is at least one node behind. Samsung 5nm is worse than TSMC N7, for instance, and TSMC N7 is 5+ years old at this point.

Samsung 8nm (a 10nm-class node) used for Ampere was cheap as hell; that's why they did it. Same for Turing: 12FFN is a rebranded 16nm non-plus with a larger reticle. People booed the low-cost offering and wanted something on a competitive, performant, efficient node... and now NVIDIA has hopped to a very advanced customized N5P node (4N, not to be confused with TSMC N4; it is not N4, it is N5P with a small additional optical shrink, etc.) and people are back to whining about the cost. If they keep cost and power down by making the die size smaller... people whine that they're stonewalling progress / selling small chips at big-chip prices.

Finally we get a big chip on a competitive node, much closer to the edge of what the tech can deliver, and... people whine about the TDP and the cost.

Sadly, you don't get the competitive node, giant chip, and low prices all at the same time; that's not how the world works. Pick any two.

AMD is making some progress at shuffling the memory controller off to separate dies, which helps, but their cards will probably be fairly expensive too... they're launching the top of the stack first, which is a similar Turing-style "capture the whales while the inventory of last-gen cards and miner inventory sells through" strategy. That likely implies decently high prices too.

There's nothing that can really be done about TDP either - thermal density is creeping up on these newer nodes too, and when you make a big chip at reasonable clocks the power gets high these days. AMD is expected to come in at >400W TBP on their products too.


I think the xx70 has typically been in line with the xx90 Ti of the previous generation, so this is in fact a bigger improvement than usual. It's still dumb, though, to have two xx80s.


Their performance numbers at these events have also been highly cherry-picked for years. Real-world performance is never nearly as good as they claim.


Eye candy is one of the main reasons people make Lego builds.

