> instead of learning Web standards, rather ships Chrome alongside their application
I am confused.
- The "shipping Chrome alongside their application" part seems to refer to Electron; but Electron is hardly guilty of what is described in the article.
- The "learning web standards" bit seems to impugn web developers; but how are they guilty of the Chrome monopoly? If anything, they are guilty of shipping React apps instead of learning web standards; but React apps work equally well (or poorly) in all major browsers.
- Finally, how is Chrome incompatible with web standards? It is one of the best implementers of them.
Devs, particularly those under pressure to ship or who don't know better, unfortunately treat 'it works in Chrome' as 'it works', even when it works only because of a Chrome quirk, or because they used Chrome-specific hacks that break compatibility with other browsers to get it working in Chrome.
- Sometimes the standards don't define some exact behavior and it is left for the browser implementer to decide. Chrome implements it one way and other browsers implement it another way. Both are compatible with the standards.
- Sometimes the app contains errors, but certain permissive behaviors of Chrome mean it works OK and the app is shipped. The developers work around the guesses that Chrome makes and cobble the app together (there may be a load of warnings in the console). Other browsers don't make the same guesses, so the app ships in a state where it only works in Chrome.
- Sometimes Chrome (or mobile Safari) specific APIs or functions are used as people don't know any better.
- Some security / WAF / anti-bot software relies on Chrome-specific JavaScript quirks (that there may be no standards for) and decides that a user on Firefox or any other browser that isn't Chrome or iOS Safari is a bot, and blocks them.
In many ways, Chrome is the new IE, through no fault of Google or the authors of other browsers.
This is a lesson in capitalism. It’s so much more profitable to ignore small user bases when you can just tell them to “try switching to Chrome”.
I think you’re wrong about Safari itself being the reason Chrome isn’t a 90%+ market owner; rather, it’s Apple’s requirement that no other browser engine can exist on iOS.
Other browser engines can exist. The JIT has to be the system’s. Others can use Apple’s JavaScriptCore to gain access to it and do whatever they want on top.
> I think you’re wrong about Safari itself being the reason Chrome isn’t a 90%+ market owner; rather, it’s Apple’s requirement that no other browser engine can exist on iOS.
It sounds like capitalism has so far saved us from a Chrome monopoly, then.
To nitpick, you mean "unfettered capitalism", as in no government involvement. That has the same problem as unfettered anarchy: coalitions form, creating governments. Since many markets have network effects (e.g. bulk purchasing gives a lower price per unit), a monopoly tends to be one of the possible steady-state solutions. And any monopoly can choose to become a governor of its market, imposing regulation through means other than government: pulling resources, poaching, lawsuits, or even operating at a loss until the competition is dead (the "Silicon Valley Strategy").
I just mention this because it's not a problem limited exclusively to capitalism. It's a problem that exists under many forms of government and economics (like socialism). It just relies on asymmetric power.
Yup. It's quite obvious that such unfettered, true capitalism quickly decays to the good ol' rule of warlords.
There should be a name for this kind of fallacy, where you look at a snapshot of a dynamic system (or worse, at its initial conditions) and reason from it as if it were fixed - when even mentally simulating that system a few time steps into the future makes it immediately apparent that the conditions mutate and the results are vastly different than expected.
No, Safari is the new IE: nothing works on it, it's full of bugs, and Apple is actively preventing web standards from moving forward.
Do you remember how much Apple prevented web apps from being a thing by blocking web push, and by breaking most things when run in PWA mode?
Apple are by far the worst offender and I can't wait for Safari to die
I made a reader app for learning languages. Wiktionary has audio for a word. Playing the file over a web URL works fine, but when I add caching to play from a cached audio blob, Safari sometimes delays the audio by 0.5-15 seconds. It works fine on every other browser.
Web features pushed by Google via Chrome aren't standards unless everyone actually agrees they are worthy of becoming one.
Shipping Electron junk strengthens Google's and Chrome's market presence. As for the reference to Web standards: why bother with them when the web is whatever Chrome is capable of?
Web devs with the worthy skills of forgotten times would rather use regular processes alongside the default system browser.
Are we really trying to argue about cross platform GUI in 2025? This was solved decades ago. Just not in ways that are trying to directly appeal to modern webdevs by jamming a browser into every desktop application.
I don't even hate Electron that much. I'm working on a toy project using Electron right now for various reasons. This was just a bizarre angle to approach from.
You can't trust the system browser to be up to date and secure, or to render things how you want. You cannot guarantee a good user experience unless you ship the browser engine with your app.
Yeah, sure, but I use most web apps through the browser either way, so I'm already in "possibly incompatible land", and you can reasonably expect any user-facing device to have an updated browser (or one specific browser, in the case of embedded). We're not in Windows XP software distribution times anymore.
- They raise request for feedback from the Mozilla and WebKit teams.
- Mozilla and WebKit find security and privacy problems.
- Google deploys their implementation anyway.
- This functionality gets listed on sites like whatpwacando.today
- Web developers complain about Safari being behind and accuse Apple of holding back the web.
- Nobody gives a shit about Firefox.
So we have two key problems, but neither of them are “Google controls the standards bodies”. The problem is that they don’t need to.
Firstly, a lot of web developers have stopped caring about the standards process. Whatever functionality Google adds is their definition of “the web”. This happened at the height of Internet Explorer dominance too. A huge number of web developers would happily write Internet Explorer-only sites and this monoculture damaged the web immensely. Chrome is the new Internet Explorer.
The second problem is that nobody cares about Firefox any more. The standards process doesn’t really work when there are only two main players. At the moment, you can honestly say “Look, the standards process is that any standard needs two interoperable implementations. If Google can’t convince anybody outside of Google to implement something, it can’t be a standard.” This makes the unsuitability of those proposals a lot plainer to see.
But now that Firefox market share has vanished, that argument is turning into “Google and Apple disagree about whether to add functionality to the web”. This hides the unsuitability of those proposals. This too has happened before – this is how the web worked when Internet Explorer was battling Netscape Navigator for dominance in the 90s, where browsers were adding all kinds of stupid things unilaterally. Again, Chrome is the new Internet Explorer.
The web standards process desperately needs either Firefox to regain standing or for a new independent rendering engine (maybe Ladybird?) to arise. And web developers need to stop treating everything that Google craps out as if it’s a done deal. Google don’t and shouldn’t control the definition of the web. We’ve seen that before, and a monoculture like that paralyses the industry.
Why not forbid them to ship any non-standard feature in their pre-installed default build of Chrome? Experimental features could be made available in a developer build, that would have to be manually installed in a non-obvious way, so that they cannot gain traction before standardization.
PWA is an antifeature anyway; it's an operating system inside a browser. This benefits companies that have market-dominant browsers and do not have operating systems; on a technical level it's just stupid.
I love PWAs when the alternative is Electron, I'd rather let one browser instance run my crapps since it improves memory sharing and other resource utilization.
I really like being able to install websites as apps too so my WM can manage them independently.
This is what Mozilla has to say about Web Bluetooth:
> This API provides access to the Generic Attribute Profile (GATT) of Bluetooth, which is not the lowest level of access that the specifications allow, but its generic nature makes it impossible to clearly evaluate. Like WebUSB there is significant uncertainty regarding how well prepared devices are to receive requests from arbitrary sites. The generic nature of the API means that this risk is difficult to manage. The Web Bluetooth CG has opted to only rely on user consent, which we believe is not sufficient protection. This proposal also uses a blocklist, which will require constant and active maintenance so that vulnerable devices aren't exploited. This model is unsustainable and presents a significant risk to users and their devices.
Which PWA features did Apple and Mozilla remove on security grounds? What was Mozilla’s justification? What’s your justification for claiming they lied about it and it wasn’t for security reasons?
> Firstly, a lot of web developers have stopped caring about the standards process. Whatever functionality Google adds is their definition of “the web”.
Businesses who hire such web developers will lose huge amounts of sales, since 90% of visitors are on mobile and half of those are on Safari.
I use Chrome on Android because it's the default browser and I'm lazy, not because I actually like it. When a phone forces me to choose one I'm not very likely to choose Chrome. It's going to be the same for iOS users.
Why do they have this in the list of examples of 'run-on sentences':
> Bad: To access your programs click the Start button.
> Improvement: To access your programs, click Start.
Sure, the improved version has added a comma, but the initial version is not a 'run-on sentence'; it does not contain 'two or more complete ideas that are joined without punctuation'. The comma here is completely intonational; it would not be needed if the word order was different, as in 'Click Start to access your programs'.
> While I'd generally agree that most companies can change their name, encroaching on a basic letter should be off limits
To me it's the other way around. If the platform had been named X from the start, then a language would have developed around it, including what its messages are called, or what verb is used to refer to posting a message. We, the public, wouldn't have known any better. With Twitter, we do know better — better name, better nouns, better verbs (even a better logo; but that's by the by). Bosses can rename their products as much as they like; it's just surprising to me that we as a public so obligingly give up this tiny bit of our language.
> like naming your company "The" or "God".
Consider Truth Social :-) I am amazed people agree to call the messages there 'truths', and reposts, 'retruths'. So embarrassing.
"Tweets" was already an embarrassing term. We used to be fine with just "posts" or "comments" instead of trying to put the company branding in every term.
> If the platform had been named X from the start, then a language would have developed around it, including what its messages are called, or what verb is used to refer to posting a message.
I'm not really sure. Some things don't compound, that's why I think a preposition for instance would make a bad name. But even if you may be right, I still want to put up a fight against corporate entities trying to take over basic concepts (X, the unknown, the letter that marks the spot, etc.). I don't want to be forced to use your name if your name is an absurdity, the same way I can't make a brand called "Trump is an idiot" (even if it's true).
> Wasm-compiled SQLite is so successful that it actually replaced a part of the web platform, causing Chrome to remove WebSQL entirely
The causal inference here is almost certainly incorrect. According to the Chrome Blog [0], WebSQL turned out to be a non-starter as early as November 2010, which is before WebAssembly was released, and before it became known that SQLite could be ported to WebAssembly to run in the browser.
You keep saying the word "teachers"; but that word does not appear in the text of the article. Why focus on the teachers in particular?
Also, there are various incentives for teachers to publish books. Money is just one of them (I wonder how much revenue books bring to the teachers). Prestige and academic recognition is another. There are probably others still. How realistic is the depiction of a deprived teacher whose livelihood depended on the books he published once every several years?
This requires good faith on the part of the crawler? So it's DOA; why even bother implementing this?
Also, what a piece of zero-trust shit the web is becoming thanks to a couple of shit heads who really need to extract monetary value out of everything. Even if this non-solution were to work, the prospect of putting every website behind Cloudsnare is not a good one anyway.
What the web needs right now, to be honest, is machetes. In ample quantity. Tell me who's running that crawler that is bothering you and I will put them to the sword. They won't even need to present a JWK in the header.
Maybe I didn't understand the proposal completely yet, but wouldn't the crawler only have to cooperate (send the right headers, implement that auth framework, etc) if they want to pay?
The standard response to a crawler is a 402 Payment Required response, probably as a result of an aggressive bot detection.
So essentially, it's turning a site's entire content into an API: Either sign up for an API key or get blocked.
The question remains though how well they will be able to distinguish bot traffic from humans - also, will they make an exception for search engines?
That is not what I understood, and it sounds terrible. What if you're not a crawler but random Joe surfing the internet? Clearly Joe should see content without payment? So they need some way to tell the crawler and Joe apart, and presumably they require the crawler to set certain request headers. The headers aren't just to issue the payment, it's to identify the crawler in the first place?
The idea behind the headers is to allow bots to bypass automatic bot filtering, not blockade all regular traffic. In other words:
- we block bots (the website owner can configure how aggressively we block)
- unless they say they're from an AI crawler we've vetted, as attested by the signature headers
- in which case we let them pay
- and then they get to access the content
(Disclosure: I wrote the web bot auth implementation Cloudflare uses for pay per crawl)
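The four bullets above can be sketched as a single decision function. This is a minimal illustrative sketch, not Cloudflare's actual implementation: the header names, the `VETTED_CRAWLERS` registry, and the bot-detection flag are all hypothetical stand-ins.

```python
# Hypothetical sketch of the pay-per-crawl decision flow described above.
# All names (VETTED_CRAWLERS, "signature-key", "crawler-max-price") are
# illustrative assumptions, not Cloudflare's real API or header names.

VETTED_CRAWLERS = {"examplebot-key"}  # stand-in for the vetted-crawler registry


def handle_request(headers, is_bot_suspect):
    """Return an (HTTP status, body) pair for one incoming request."""
    if not is_bot_suspect:
        return 200, "content"            # regular human traffic passes through
    key = headers.get("signature-key")
    if key not in VETTED_CRAWLERS:
        return 403, "blocked"            # unvetted bots are blocked outright
    if headers.get("crawler-max-price"):
        return 200, "content"            # payment intent presented: serve and charge
    return 402, "pricing"                # vetted, but no payment intent yet
```

So a vetted crawler that omits the payment-intent header gets the 402 with pricing, retries with the header, and only then receives content; everyone else is either served normally or blocked by the bot filter.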
Thanks for replying! Do you have some provision for false positives as well, like sending a captcha in the body of the 402 response? (So in case the client was a human and not a bot, they could still try to solve the captcha)
The writeup doesn't talk about actively misbehaving crawlers much, but this bit implies to me that the headers are for the "happy path", i.e. crawlers wanting to pay:
> Each time an AI crawler requests content, they either present payment intent via request headers for successful access (HTTP response code 200), or receive a 402 Payment Required response with pricing.
I don't see how it would make sense otherwise, as the requirements for crawlers include applying for a registration with Cloudflare.
Who in their right mind would jump through registration hoops only so they can be denied access to a site? This wouldn't even keep away the crawlers that are operating today.
I agree there has to be some way to distinguish crawlers from regular users, but the only way I can see how this could be done is with bot detection algorithms.
...which are imperfect and will likely flag some legitimate human users as bots. So yes, this will probably lead to web browsing becoming even more unpleasant.
It's Cloudflare. That means they are good at DoS and DDoS protection. AI crawlers are basically DoS agents. I think CF can start with an honor system that also has attached to it the implied threat to block crawlers from all CF hosted content, and that is a pretty big hammer to hit the abusers with.
So I'm cautiously optimistic. Well, I suppose pessimistic too: if this works what this will mean is that all contents will end up moving into big player hosting like CF.
Pro-diversity folks, especially in the humanities, did pick it up. There are several trends you can observe in popular literature on linguistics for the general public. One, you will see linguists express regret that languages are dying out, and argue that diversity of languages is a general good (see e.g. Language Death by David Crystal). Another, you will hear linguists argue against the dismissive attitude towards regional dialects, and for a more permissive attitude towards the norm.
The BBC admitted more and more announcers with regional accents; and in 2017, it launched an online news service in West African Pidgin English.