Wonder how this compares with the Consumer Data Right in Australia. The standards and discussions around them are right there on GitHub [1] - pretty surreal seeing accounts for the big banks. Of course it's mostly completely useless for anything DIY as semi-understandably access to any real data is gated behind certification requirements.
A blog post related to the Rails 8 beta 1 was posted by dhh shortly after his talk. It's not a transcription of the talk, but hits many of the key points:
Hmm, maybe they are referring to a specific branding used on recent models, e.g. a "Windows on Snapdragon" sticker similar to "Windows 8 Ready" and stuff.
Linux laptops still tend to be special order or niche brands, but there is nice positive traction in that space.
This is quintessential MSFT. I have zero doubt that there is an informal or formal agreement that they're called such. They always pull this crap. And then their OEMs realize later that the "Windows 8 Ready" sticker is almost more of a liability than a selling point.
yes the beautiful advantage of having some super-forked kernel.
the solution is to get everyone on the same kernel, which is then updatable - not hack together something that kinda works on top of a never updated snowflake
The linux kernel team understandably does not want to maintain overcomplicated shims, keep multiple versions of the same subsystem and debug opaque problems so some can keep their precious binaries secret.
You keep the binaries, you get to maintain them and solve their problems. Seems fair.
>> The linux kernel team should offer a fixed compatibility layer for drivers. […]
> You are free to implement it.
But would it be accepted?
From what I see with both NVidia's and OpenZFS's compat layers, the Linux kernel folks seem actively hostile to any such thing.
(Contrast this with, say, the FreeBSD folks where both the kernel- and user-land API/ABI stays frozen over the life of a major version release.)
I think this is more related to GPL licensing than to them not wanting to assist. If it's completely generic then maybe, but how do you argue that the closed-source Nvidia license or the dubious license for ZFS can be linked against the GPL-licensed kernel without being a violation of one of those licenses? Especially considering how vague the GPL is regarding what it even means to be using GPL-licensed code.
If I present an API anyone can use and Nvidia happens to target it, that's a different ballgame than if I implement a shim specifically targeted at Nvidia's binary blob. To my knowledge the latter is a violation of the GPL (and so would the inverse be), so an explicit exemption for the shim would need to be written into the GPL, and that is the showstopper.
The hostility doesn't seem to care if things are generic or not.
Remember when they broke ZFS by marking the "save FPU state" function as GPL-only? Telling the kernel to save registers so they can be used for scratch space is one of the most implementation-independent things you can do.
> If it's completely generic then maybe, but how do you argue that the closed source Nvidia license or the dubious license for zfs can be linked to the GPL licensed kernel without being invitation of one of those licenses.
It's having a stable foundation of the kernel's API that others can code against. As it stands, the (compat) shim layer(s) have to constantly be tweaked for new kernels:
> Upcoming Linux 6.7 release will change a couple of interfaces. Adapt or die!
And "dubious license"? Really? Given the incorporation of CDDL code of DTrace into macOS and FreeBSD, and of OpenZFS into FreeBSD (and almost into macOS), it seems that license is quite flexible and usable, and that any limitations exist on the GPL side of the equation.
> To my knowledge the latter is a violation of GPL should it be the inverse and therefore an explicit exemption would need to be put in place for the shim in the GPL and that is the showstopper.
How is coding against an API a violation of the GPL? Neither Nvidia's nor OpenZFS's code is a derivative of any GPL code: (Open)ZFS was created on a completely different operating system and was incorporated into others (e.g., FreeBSD, macOS), so I'm not sure how anyone can argue with a straight face that OpenZFS is derived from GPL code.
Similarly for Nvidia: how can it be derived from GPL Linux when the same driver is available for other operating systems (and has been for decades: I was playing RtCW on FreeBSD in 2002)?
It's one of the reasons the whole DKMS infrastructure had to be created: since API stability does not exist you have to rebuild kernel modules for every kernel—even open source ones like OpenZFS, Lustre, BeeGFS, etc.
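For context, DKMS works off a per-module `dkms.conf` that tells it how to rebuild and install the module for each kernel on the system. An illustrative example (module name and paths made up):

```shell
# dkms.conf -- illustrative; "examplefs" is a made-up module name
PACKAGE_NAME="examplefs"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="examplefs"
DEST_MODULE_LOCATION[0]="/kernel/fs/examplefs"
MAKE[0]="make KVER=${kernelver}"
CLEAN="make clean"
AUTOINSTALL="yes"   # rebuild automatically when a new kernel is installed
```

With `AUTOINSTALL` set, installing a new kernel triggers a fresh build of the module against that kernel's headers, which is exactly the rebuild-per-kernel treadmill described above.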
> I'm not sure how anyone can argue with a straight face that OpenZFS is derived from GPL
The GPL requires any code that links against it, and that fundamentally requires it to be useful (e.g., Nvidia's driver or ZFS linked into the Linux kernel to be used on Linux), to also be licensed as GPL or GPL-compatible. To my knowledge that was the major blocker for ZFS, and it's a major blocker for me using any GPL code in software I plan to distribute. IANAL, but the GPL is vague enough that it can be argued either way. The "dubious license" terminology was incorrect on my part; I was referring to it not being GPL-compatible, or at least there being skepticism about whether it is. The BSD license does not have these restrictions and is fine.
Regarding the compat shim: if it has to link to any kernel APIs, stability isn't guaranteed. If you want a reasonable guarantee of stability then, as one of the above comments mentioned and I alluded to, do the legwork to allow userspace drivers in the Linux kernel. Nobody said it's easy, and the consensus seems to be to just keep the shim up to date instead of pushing for that, but if you want a stable API to code against, that's your path. As was mentioned elsewhere in this comment section, Linus has publicly stated that userspace APIs are to be treated as sacred wherever possible.
> You keep the binaries, you get to maintain them and solve their problems. Seems fair.
And you get to use modern mobile hardware like what a PinePhone has... lose-lose. Especially since modems mostly can't be open source due to governmental regulations, so this everything-open-source fairy tale doesn't make much sense for mobile hardware.
Notice that it's only mobile where that's a problem; x86 machines have apparently been doing the impossible for ~30 years now. And no, modems aren't an excuse; the modem itself can't be FOSS (probably) but it's just a self-contained device running its own firmware; there's nothing stopping you communicating with it via FOSS drivers.
Did you miss the first, say, 25 of those 30 years? Because I have literally reset my video driver blindly from a terminal before, because it just failed after an update, and similar stuff. Also, this is pretty much due to the oligopoly of a few entities in the hardware space, making it easier to support said devices, and to standards.
Phones aren't built from a small selection of devices, and they often have patented firmware that the phone manufacturer couldn't publish even if they wanted to. As I showed, look at the best attempt at building an open-source phone. It sucks majorly.
Although I have thoughts on the matter, I'm not actually arguing about the quality of drivers, I'm arguing that x86 linux has had minimal problems with keeping all drivers in-tree, and if vendors want to support linux they've generally had no problem working with that. It is for some reason only mobile and embedded vendors that find it impossible to upstream drivers and that insist on shipping hacked-up vendor forks of ancient kernels.
And again, firmware is a separate matter; there's plenty of cases of linux shipping blobs for the device to run, so long as the driver to talk to that device is FOSS.
I'd agree with you, but ever since mobile devices took off, aren't we much worse off than before? In the last peak PC years, say, before 2011, I was under the impression that hardware vendors were starting to play ball, but now things seem super locked down and Linux seems to be falling behind in this tug of war between FOSS and binary-only.
I think the problem with mobile devices is not the software but the hardware.
These devices are locked down partly because of business interests (planned obsolescence), but another part is personal identity security.
A run-of-the-mill Android or iOS device carries more secrets inside, which have much more weight (biometric data, TOTP, serial numbers used as secure tokens/unique identifiers, etc). This situation makes them a "trusted security device," and allowing tampering with it opens an unpleasant can of worms. For example, during my short Android stint, I found out that no custom ROM can talk with the secure element in my SIM, and I'm locked out of my e-signature, which is not pleasant.
If manufacturers can find a good way to make these secure elements trustworthy without needing to close down and weld shut the platform, I think we can work around graphics and Wi-Fi drivers.
Of course, we also have the "radio security" problem. Still, I think it can be solved by moving the wireless radio to an independent IP block inside the processor with a dedicated postbox and firmware system. While I'd love to have completely open radio firmware, the wireless world is much more complex (I'm a newbie HAM operator, so I have some slight ideas).
So, the reasons for closing down a mobile device are varied, but the list indeed contains the desire for more money. If one of the hardware manufacturers decides to spend the money and pull the trigger (like AMD did with its HDMI/HDCP block), we can have secure systems that do not need locking down. Still, I'm not holding my breath, because while Apple loves to leave doors for tinkerers on their laptops, iPhone is their Fort Knox. On the other hand, I don't have the slightest confidence in the company called Broadcom to do the right thing.
Interesting details, thank you for providing them!
Regarding this:
> Still, I'm not holding my breath, because while Apple loves to leave doors for tinkerers on their laptops, iPhone is their Fort Knox.
People don't realize it, but 99% of what Apple does hinges on the iPhone. The rest of the products would pack a much lower punch if the iPhone were to vanish from the face of the Earth completely. It's the product all their customers have, and they have it with them at all times. It's the product that's probably the easiest to use and the easiest to connect to other things.
So yeah, the iPhone will probably be the last non-military device on the planet to be opened up :-)
What is your reason for believing the closed-source nature of nVidia's graphics drivers played a role in their success? AMD's and Intel's Windows drivers are also closed source, and so were AMD's Linux drivers when nVidia managed to secure the lead.
nVidia is also finally moving to an open-source kernel module, so a closed-source one doesn't seem important to them for keeping their moat.
Presumably amadeuspagel means nvidia has walked a careful line with CUDA & ML.
CUDA is much more accessible/documented/available than a lot of comparable products. FPGAs were even more closed, more expensive and had much worse documentation. Things like compute clusters were call-for-pricing, prepare to spend as much as a small house.
On the other hand, CUDA is closed enough the chips that run it aren't a commodity. If you want to download that existing ML project and run it on your AMD card - someone will have to do the leg work to port it.
That means they've been able to invest quite a lot of $$$ into CUDA, knowing the spending gets them a competitive advantage.
nVidia built this bubble by playing dirty on many fronts.
Their Windows drivers are a black box which doesn't conform to many of the standards, or behaves however they see fit, especially around memory management and data transfers. The GameWorks library actively sabotaged AMD cards (not unlike Intel's infamous compiler saga). Many nVidia-optimized games ran on either completely unoptimized or outright AMD/ATI-hostile code paths; e.g., GTA3 ran silky smooth on an nVidia GeForce MX400 (a bottom-of-the-barrel card) while thrice-as-powerful ATI cards stuttered. Only a handful of studios (kudos to Valve) optimized their engines for both and showed that a "paltry" 9600XT could do HDR at 60 FPS.
On the datacenter front, they actively undermined OpenCL and other engines by artificially performance-capping them (you can use only one DMA controller in Tesla cards, which actually have three DMA engines), slowing down memory transfers and removing the ability to stream in/out of the card. They "supported" versions of OpenCL, but made it impossible to compile for anything except OpenCL 1.1 on their hardware.
On the driver front, they have a signed-firmware mechanism, and they provide a seriously limited firmware to nouveau just to enable their hardware. You can't use any advanced features of their cards, because the full-capability firmware refuses to work with nouveau. Also, they're not really opening their kernel module: the secret sauce is moving into the firmware, leaving an open interfacing module. CUDA, GL and everything else remain closed source.
At the same time, they publicly said "the docs for the cards are in the open; we neither help nor sabotage the nouveau driver project, they're free," while cooking up a limited firmware for those guys.
They bought Mellanox, the sole InfiniBand supplier, to vertically integrate. I wonder how they will cripple the IB stack with licenses and such now.
They're the Microsoft of hardware world. They're greedy, and don't hesitate to make dirty moves to dominate the market. Because of what they did, I neither respect nor like them.
Then I guess Rhodium is the largest metal and the Hinkley Point C nuclear power station is the largest building in Britain.
If you conflate value with size, both terms become near useless. Value can only mean something in relation to something else.
Picture this: You have a large company with thousands of employees having similar revenue to their competitor, which accomplishes the same with one employee and a much smaller operation.
Which one is likely to be more valuable? Obviously the smaller one. If we however conflate value with size, as is so often done in popular economics, just pointing out this single fact becomes a complicated exercise of having to carefully employ language that we neutered for no good reason at all. Not to speak of all the misunderstandings this is going to create with people who aren't used to this imprecise use of the English language.
If you mean revenue, say revenue, if you mean value, say value, if you mean size, say size. Don't use "large" to say "valuable". Why would you do that if there's a perfectly good word already? Imprecise language is often used to either confuse or leave open an avenue to cover one's ass later... which brings us back to popular economics.
> "Size" is unitless, so I disagree with your rationale.
You're going to have to expand that a little bit.
> valuation is a very common size metric
It's not a size metric.
A world where things grow larger the more people value them might be interesting though.
> and there was no confusion about OP's meaning.
Their comment makes much less sense if you replace "biggest" with "most valuable".
It's trivially obvious that the correlation between valuation and how much a company can invest into software is incredibly weak if it exists at all. On software development spending NVidia is eclipsed by many companies with sometimes only a fraction of its valuation.
So either it's a non sequitur or we are incorrectly assuming that Nvidia became the largest company.
"Size" is not only a metric of physical dimension.
OP said "biggest", and meant "largest valuation". This happens to be incorrect -- nVidia was never the highest-valued public company -- but they were the second-largest, and came very close to first.
If you did not know what OP meant immediately from awareness of business news, you still should have considered "valuation" as one of the obvious possibilities. If you did not, then you might be lacking adequate context for this conversation, and might be better served by asking questions instead of demonstrating your confusion via misplaced pedantry.
Size is not a metric. You can measure size with a metric and you can measure value with another metric. Measuring both the same way only leads to nonsense. I think we're getting to the bottom of the confusion now.
> misplaced pedantry
Pedantry is the easiest way of dismantling comments that try to turn nonsense into an argument by being intentionally vague. Should you argue directly against vague statements, the speaker can retroactively make them mean whatever they want. You'll be chasing moving goalposts. Employ pedantry until they well and truly nail themselves down, and then explain why whatever is left is nonsense. Works like a charm.
Also, to get ahead of any further personal attacks, this pedantry absolutely is fun to me. I wouldn't be here otherwise.
You are simply wrong. Being condescending and wrong is a fatal mix.
Size is a unitless dimension. A category of metrics, if you must. OP's word of "bigger" can be applied to population, area, weight, importance, memorability, and yes, valuation.
Can be, and frequently is, among humans. Zero humans are confused.
> Pedantry is the easiest way of
... demonstrating that you're a jerk. Nothing else.
> this pedantry absolutely is fun to me
Got it. My mistake for assuming good faith.
> I wouldn't be here otherwise
That's the most disappointing thing I've read in a while.
> Size is a unitless dimension. A category of metrics, if you must.
Just make size a category of dimensions and I'd underwrite that. It certainly doesn't refer to a single dimension.
Mathematically valuation would absolutely be a size/magnitude, but we're clearly not speaking in mathematical terms, given how the terminology is being abused. Mathematically plenty of things that are a magnitude/size do not constitute a metric space, and the singular would be wrong anyways.
I'm taking metric to mean "standard of measurement", which is why size is still not a metric. Saying "size is a category of metrics" would be getting close enough I suppose, but really we're talking about the actual dimensions.
Now that we've got that out of the way, I'm still firmly grouping valuation as a measurement of value, and not a measurement of size. I'm also standing by the assertion that not having these two be disjoint sets only leads to confusion and nonsense.
> OP's word of "bigger" can be applied to population, area, weight, importance, memorability, and yes, valuation.
Nice try. They applied it to "company". You can do that, but now we're not talking about a company's value. We have the adjective valuable for that.
> A world where things grow larger the more people value them might be interesting though.
Do you personally grow larger when you have more money in the bank? When more of your friends get elected in the senate? When you hire someone? When you buy a new house?
This is not a difference between ChromeOS and Android. ChromeOS is replete with binary vendor blobs, which is why the webcams, wifi, and touchpads work correctly on Chromebooks and poorly or not at all on other laptops running vanilla Linux.
You don’t have to go anywhere near ideological debate to argue against binary blobs in the OS. The blobs could be verbatim from Stallman himself and blessed in the holy waters of EFF, and they would still be bad for dozens of technical reasons.
good. everyone should be hostile to binary drivers.
you have to not understand the first thing about kernel drivers to even consider binary drivers upstream. for starters, who provides a new binary when the API changes with every new kernel version?
The mainline kernel tends to have only basic support (if any at all) for many SoCs that actually get used in phones; support for full power management especially has been lacking.
Most PC and server hardware has FLOSS drivers (with proprietary firmware), even Qualcomm is upstreaming support for the new Snapdragon Elite (maybe it's made by a different team?).
I think phone SoCs are the odd ones out, which sadly doesn't mean they'll improve any time soon. Supporting an ABI for binary drivers in Linux might help phones, but it would give everyone else a chance to regress in their support, so I understand Linux kernel developers' position.
Disk encryption is useful if your data falls into the wrong hands. Having an unencrypted disk is useful if you need to do data recovery and have no backups.
Very few people have backups... OTOH, SSDs tend to fail as bricks with no hope of any data recovery.
Would be really nice if there were some sort of intermediate state / loading indicator, as there's a bit of a delay when clicking each option and it's a little unsettling trying to work out what's not working.
[1] https://consumerdatastandardsaustralia.github.io/standards/#...