While I like the joke, Russia is actually making its own original microcontrollers; see for example the K1921VG015 (RISC-V, 32-bit / 50 MHz / 1 MB flash / 256 KB SRAM / 100-pin square package). The $6 price tag seems a bit expensive to me, but the developers want to eat too. There are also ARM-based microcontrollers (much more expensive), and of course the (even more expensive) famous Elbrus CPUs with their unique VLIW architecture.
I'm no hardware guy, so I have no clue how easy it is to move from TSMC's 28nm process to some other 28nm process that China supposedly might soon have, according to the news.
But honestly, I doubt China would just enable Russia to develop and produce its own CPUs. When it comes to anything Chinese, they have never localized production in Russia: heavy machinery, cars, electronics, etc. They are the main beneficiaries of Russia not having anything of its own.
"DNS tunneling" (an abnormal number of DNS requests) might actually be caused by software that doesn't use a DNS cache. I was once banned by 8.8.8.8 (Google's DNS server) for sending too many requests, because youtube-dl was making a DNS request for each tiny segment of a video (and there were thousands of them).
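The fix on the client side is a simple in-process cache in front of the resolver. A minimal sketch in Python; the resolver here is a counting stub standing in for a real lookup such as `socket.getaddrinfo`, and all names are made up for illustration:

```python
import functools

lookups = 0  # counts how often the "real" resolver is hit

def slow_resolve(host: str) -> str:
    """Stand-in for an actual DNS query (e.g. socket.getaddrinfo)."""
    global lookups
    lookups += 1
    return "203.0.113.7"  # dummy address from the documentation range

@functools.lru_cache(maxsize=256)
def cached_resolve(host: str) -> str:
    # lru_cache memoizes per host, so repeated calls for the same
    # hostname hit the resolver only once per process.
    return slow_resolve(host)

# Download 1000 "segments" from the same CDN host:
for _ in range(1000):
    cached_resolve("video-cdn.example.com")

print(lookups)  # prints 1, not 1000
```

A real implementation would also honor the record's TTL instead of caching forever, but even this naive version would have kept youtube-dl's request count three orders of magnitude lower.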
Well, maybe one shouldn't be using Google DNS server when violating ToU to download Google's video.
But an abnormal number of DNS requests AND recorded outbound data totaling 10GB, with no other obvious indication of a less-subversive means of data transfer? I'd be very surprised if youtube-dl could come close to even 10MB of DNS requests at its chattiest.
Delivery drivers on e-bikes are the worst in my country; they ride on the sidewalk between pedestrians and cross roads ignoring traffic lights and rules. It feels like you are in a developing Asian country. Why don't they want to ride on the roadway? Because with their style of driving, and that of car drivers, it's dangerous.
Delivery drivers are a completely separate problem, and they are a menace whether they ride bikes, scooters, or cars. The issue there is the gig-economy apps incentivising drivers by every means possible (including threats of termination) to go as fast as possible, rules and their own safety be damned. All so someone can get their soggy fries in 35 minutes instead of 40.
Not sure specifically, but most common chips aren’t fabbed on processes that require cutting edge machines like you hear about for nVidia or iPhone chips.
All of the little chips in everything else are fabricated on much simpler processes that require much less complex machinery.
"Reciprocal" is inaccurate; you should stop using that term. This label was chosen to obfuscate what is going on and confuse those who don't know better.
> I spend far more on restaurants, household services, and vehicle maintenance than those companies pay me. I have a massive trade imbalance with those companies.
And if, for example, a sales tax were increased, this would motivate you to buy fewer services, make food at home, and learn how to fix your car.
Sure. But at the cost of the time that I currently use to do other things.
Are you moving the argument from conflating budget and trade deficits to saying the United States’ multi-century economic focus on consumer spending is a mistake, and we need to shift to a savings-focused economy like China used to be? I also think that’s wrong, but it has nothing at all to do with the federal government’s budget deficit.
Or are you under the mistaken impression that trade income is the only income the country has?
Can we use Occam's Razor and assume that nobody knows what the optimal tariff rates would be? If you don't have a reliable mathematical model, the only choice you are left with is experimentation and A/B tests.
> The plan is to roll out to 5% of users on the Firefox 138 stable release, ramp up to 50% of users
What an awful idea. How is a web developer supposed to test a website when he and the user might get different browser behaviour? It looks like someone read about deployment at Facebook and wanted to implement the same thing without any valid reason. Firefox is not server-side software, and this style of deployment doesn't make much sense for it.
> How is a web developer supposed to test the website when he and user might have different browser behaviour?
So that is always going to be the case with a change like this, simply because people use older versions, use different browsers, etc.
If this change breaks popular-site.com, then it will continue to work fine for 95% of people and break for "only" 5%, one of whom will (hopefully) report it. This allows the Firefox people to test the waters and make sure it's not going to horribly break things for too many people.
Remember: the people who are supposed to benefit from a rollout like this are the website operators, not Firefox. The change is happening; I doubt Mozilla is going to reverse course just because a couple of websites look a little wonky (which is totally reasonable).
How would they report it? "Hey, your website looks weird. Yeah I'm running Firefox 138 stable and I'm in the 5% experimental group who received the default h1 styling change"?
What is more likely to happen is: A website might get a report, the dev goes to reproduce it, and they're in the control group, so they can't; and because it was only one or two reports, it gets closed.
Firefox also has a built-in "broken website" reporter in the Help menu, which sends a report to Mozilla. The WebCompat team analyzes those reports and can decide to file a Gecko bug if Gecko has a bug, contact the website if they have a bug, or apply a sitepatch. We're also monitoring the broken-website reports for regressions from this change in particular.
I understand that a rollout makes reproduction harder, and not everyone will be aware of the change. It's a tradeoff when deciding whether to do a rollout or ride the trains normally.
You should first investigate the cause of the bug before spamming Firefox developers with potentially invalid reports. And it is difficult to investigate when your browser and the user's browser behave differently.
Yeah that makes no sense. Versions are meaningless if they don't represent standard defaults. But then, evergreen software makes versions something like an implementation detail anyway.
They're not testing the browser, they're testing the Web. Testing in the real world with real users is the only way to adequately probe the effects of potentially breaking changes like this.
This is not new either. Many such changes have been reverted after discovering that they broke more things than expected. One example is the Object.groupBy static method, which was initially Array.prototype.group.
> They're not testing the browser, they're testing the Web.
That sounds like a pretty lame excuse.
So if a self-driving car manufacturer does testing, they are not testing the car, they are testing the environment? Sounds like a pretty neat trick, maybe marketing should adopt this attitude.
Some testing has already happened before starting the rollout.
The change has been shipping in Firefox Nightly for a year. I have analyzed the impact on affected pages in the HTTP Archive dataset (about 12,000,000 pages), twice:
You can't test against the entire web, especially internationally. Do you know what the most popular Vietnamese sites are? I don't either. Do that for >200 languages. Never mind of course there's tons of non-public stuff.
> Do you know what the most popular Vietnamese sites are? I don't either. Do that for >200 languages.
was the example given. It's true that I do not know what the most popular Vietnamese sites are, and neither does the person who posted that, nor pretty much anyone on this site, and the same goes for 200-plus other languages. But there are a few companies out there for which finding that out is pretty much child's play. One of those companies makes the Chrome browser.
Also, about testing against the entire web: obviously asking people to report problems is not really testing against the entire web, it is testing against a subset of the web. This should not really need to be pointed out to the H part of HN, but here I am. A similar thing would be if you had a big index of the web and tested against that.
>2. Better not to get salty about some downvotes
Your and my definitions of saltiness seem to vary a great deal.
on edit: Really, the thing to determine is how many sites actually use the UA styles for h1 instead of overriding them (which would also be child's play for Google), and which of the sites or pages that do so are popular. I believe the sites that do so are few and far between, not especially popular, and probably very simply styled. If that belief is true, it would be relatively simple for someone to figure out the likely side effects, if they had the data and processing power to do so. But Mozilla does not, therefore they must ask people to tell them when they have problems.
A web page’s behavior should be independent of any particular styling of the hierarchical elements used to represent the document it’s presenting.
Just like any web page should work perfectly with JavaScript disabled, it should work perfectly with a user-supplied style sheet.
If you’re making a web page that has this problem, what you’re making should not be a web page, and you should feel bad about the choices that led you to that point in your life.
It will take a while to fix this problem in our industry since we’ve waited so long on it, but the best time to start is now.
Yes, the user should ultimately have the final say about what colors, fonts, and sizes are used by their browser. We've handed way too much control over to the web developers. Web sites don't have to be pixel perfect. If I want to render text using Comic Sans by default, that shouldn't "break" anything.
> We've handed way too much control over to the web developers.
We lost this battle by 1999. And again when we started to deliver full web applications instead of documents.
I wish we had a second protocol that was more document and information focused. Something that gave zero control over programming or layout to providers.
I just want to exchange information P2P in a dense swarm approximating modern social media. I want to use my own client configured how I like it to choose what to ingest and how to flag it and present it.
Who are we kidding. With those as our choices, it's no wonder most people just use the face book.
Hyper scale businesses captured most of the internet's value and humans and turned tech into a series of walled gardens for eyeball attention doom scroll maximization. Retweeting the for you page is what some committee of product managers decided was best for us all. Who are we to question the architectures of power?
It almost sounds like a perverse weird utopia to imagine a world where we controlled all of the information flows ourselves. I can't think why we should have all the power.
the internet has become more accessible both to consumers and to people trying to sell a product. it has become crowded in "the market" you refer to. but all the good things are still there, and are doing much better than they were. i still use all of those protocols and a bunch of newer ones.
I remember when the web came out, I said "This is just a prettier gopher." Seeing the end result of 30 years of web development, I kind of wish it stayed just a prettier gopher.
> If I want to render text using Comic Sans by default, that shouldn't "break" anything
Not sure how we could expect users to switch to whatever font they want and things not to break.
Different fonts both look different and have different metrics, so what might look perfect with one font (a button with its text centered vertically and horizontally) can look massively different with another (say the font's characters are wider, so now the text either overflows or wraps onto two lines, making the button "broken").
The fix is called reset.css. It lets you skip headlines like this one and react emotionally only to the split between "but you already have an app platform, it's the web, stupid" and "you should feel bad about your choices" around your practical needs.
"you should feel bad about the choices that led you to that point in your life" - I suggest you find help to deal with the bitterness you have inside of you.
I suggest you examine the state of the industry and the garbage that’s passed off as reasonable these days and ask yourself whether what you perceive in my comments is bitterness.
The garbage that dominates the web has everything to do with centralization of power, and nothing with HTML vs JS. The former is a people problem and the latter is just tech.
In many places where native apps would previously have been accepted, users don't want native applications anymore.
For work, other than some very industry-specific high-performance software, most business software is web-based, and users (those paying the bills, anyway) want it to be web-based because it is much more portable and easy to deploy.
They actually are, though, in reality. Your point is about what they should be, not what they actually are. The truth of the situation is that web pages _actually are_ applications now, due to the new features browsers have added to facilitate that.
I actually work at Qt and many of our new products are web apps
Web pages mostly aren't applications though. Like people will sometimes post links to corporate engineering blogs here that require javascript (Uber comes to mind). It's purely a document; there's no interactivity at all. Making it an "application" is just incompetence (it's more expensive to develop and gives a worse user experience).
It might be true that applications actually are web pages now (e.g. Slack, WebEx), but I almost never encounter web pages that are actually applications.
That’s too bad, but not unexpected, given that Qt is itself a non-native toolkit anywhere other than Linux, one that developers on those other platforms shouldn’t touch.
Cross-platform frameworks are inevitably crap. If you really need to run an application on multiple platforms, write a cross-platform core and implement the human interface atop it for each platform using each platform’s native frameworks and languages.
The result is always a better product with happier users. If you don’t want to invest in that, you should at minimum be willing to admit to yourself that you’re OK with giving your users something subpar because it’s to your advantage to do so.
I got stuck trying that. To support no-JS by default, my products are selected with checkboxes and the checkout button submits the form (it is a SPA). This works well, but on navigating back the user is presented with an "error" page asking if they want to submit the form again. Is there a solution for that?
I have done this in projects with POST forms: to avoid the resubmit warning when hitting back, use the Post/Redirect/Get pattern. After a POST, redirect with a 303 to a normal GET page. The browser skips saving the POST in history, so back just takes you to the form without complaining.
The usual way to deal with it is to respond to form submissions with an HTTP redirect to another page. The user can still hit the back button, but the scary popup won't occur.
However, you run the risk of the user re-submitting the form anyway. Since this is involving orders and money, you may want the order confirmation / submit page to have a nonce or temporary ID representing the checkout session in the URL such that, upon revisiting that page, you can lookup the order status on the backend and redirect them to the order success page again.
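The redirect-plus-nonce idea above can be sketched in plain Python. The function names, routes, and data structures here are all made up for illustration, not any framework's actual API; a real app would attach these handlers to routes and persist orders in a database:

```python
import uuid

orders = {}             # token -> order record
pending_tokens = set()  # checkout sessions not yet submitted

def new_checkout_session() -> str:
    """Render the cart page with this token in a hidden form field."""
    token = uuid.uuid4().hex
    pending_tokens.add(token)
    return token

def handle_checkout_post(token: str) -> tuple[int, dict]:
    """POST /checkout: create the order once, then 303-redirect so the
    POST response never ends up in the browser history (PRG pattern)."""
    if token in pending_tokens:
        pending_tokens.discard(token)
        orders[token] = {"status": "submitted"}
    # A re-POST with a used token falls through to the same redirect,
    # so back-button resubmits cannot create a duplicate order.
    return 303, {"Location": f"/order/{token}"}

def handle_order_get(token: str) -> tuple[int, dict]:
    """GET /order/<token>: idempotent, safe to reload or revisit.
    Cache-Control: no-store forces a fresh request on back navigation,
    so the page always reflects the current order status."""
    if token not in orders:
        return 404, {}
    return 200, {"Cache-Control": "no-store"}
```

Because the token is single-use, replaying the checkout POST is harmless, and the confirmation page lives at a stable GET URL that can be bookmarked or revisited.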
This will prevent re-submission but still be confusing for the user. Why even allow submitting when navigating back? If the order has been submitted already, the submit button should be greyed out with a message saying that the order has been submitted already.
The original task was to do this without JS, so my first guess would be: Instruct the browser to re-load the page upon navigating back (cacheability headers), identify the order using an ID in the URL, then when reloading detect its already-submitted state on the server.
This is why I suggested the URL for the submission page be unique- having a session nonce / token or similar. That way, once the user checks out you invalidate the checkout session, and if the user hits the back button you redirect them to the appropriate page.
I specifically called out the issue of re-submitting certain forms and proposed the above solution. I don't think relying on cache headers is going to be sufficiently reliable.
I'm not arguing against a re-submission check. You'll need that anyway to prevent attackers from bypassing the browser and messing up your data.
But even with a nonce and a re-submission check, the cache headers are essential to make sure that when the user presses the back button, they'll see a greyed-out submit button. If the browser does not reload that page, the button will still be clickable. It won't work correctly because the re-submission check will fail, but a clickable and guaranteed non-functional button is very bad UI.
The latter is one of the main reasons we have so much JS and so many SPAs. Sure, you can build an application without them that is somewhat functional, but the UI will be low-quality -- even if this particular example might be fixable with cacheability headers.
There is no re-submission check. When the user hits the back button and requests the HTML from the server, the server responds with a redirect. The user never sees the expired cart.
> Instruct the browser to re-load the page upon navigating back (cacheability headers), identify the order using an ID in the URL, then when reloading detect its already-submitted state on the server
Re-loading the page on navigating back would be done using cacheability headers. This is the shakiest part, and I'm not sure if it is possible today. If it does indeed not work, then this would be one of the "things that JavaScript has solved that the non-JS web is still stuck with" I mentioned in my other post, i.e. one of the reasons that JS is more popular than non-JS pages today.
Identifying the order using an ID in the URL is standard practice everywhere.
When the order page gets requested, the server would take that ID, look the order up in the database and see that it is already submitted, responding with a page that has its submit button disabled.
The must-have-JS part for me starts where one can open the store in multiple tabs, add and remove things in the first two, and check out in the third.
And that’s a bad thing, and the people involved in making those sites should feel bad and do something about their mistakes rather than just shrug. And we should call them what they are: Mistakes, abuses of the web.
You have now stated that "those people should feel bad" for the second time. Personal attacks will hardly bring any change into this world. I'd instead suggest that you propose actual ways to solve the same problems that Javascript-based SPAs have solved which the non-JS web is still stuck with.
Just as a matter of precision: personal attacks won't generally improve the world on the whole. But they do generally make the world a less pleasant place to live in.
Sure, the people making these things should feel bad about it, and ideally change their choices... But if 5% of Firefox users can't use your page, and your page is important, they'll consider Firefox broken and go use something else.
Firefox isn't doing so well on market share, and appearing to be broken isn't going to help.
I made a web game that's simply not possible without JS. Is it your contention that it simply shouldn't exist? Lots of people seem to enjoy it, and I think they'd be disappointed.
Actually, the further I've gone in my career, the more I've realized we underestimate the cost of changes. Getting vendors to change a single line of code can cost incredible amounts of money.
> If you’re making a web page that has this problem, what you’re making should not be a web page, and you should feel bad about the choices that led you to that point in your life.
The web is explicitly and intentionally a software distribution platform. JavaScript is a web standard and is no less “part of the web” than HTML and CSS.
The indignation in this thread and from OP is ridiculous, especially when all the ire is aimed at Firefox, who are doing the same thing all browser makers do.
When you are changing the very fabric of the whole web, rolling things out in a gradual, controlled way is paramount. Not just because people can find and report issues before roll-out reaches 100%, but also because browsers collect telemetry on features and how they work in the wild that can be used to gauge the effect.
In case you were actually looking for an answer to your question, it's in the article:
> To test in Firefox with the new behavior, set layout.css.h1-in-section-ua-styles.enabled to false in about:config.
The article doesn't specify how to test in Chrome (probably something in chrome://flags), but you can read the deprecation warnings dumped into your console. You may need to enable them in your default log level; if so, there may be a lot of other behaviour you'll probably want to fix.
The fact that every webdev carries around/trades personal "reset" stylesheets to undo everything the browser does by default is insane. It really highlights the disconnect between W3C and reality.
I think the fact that every webdev feels the need to override the user's agent and impose their own idea of what size a H1 should be is, well, kind of imposing. I might want to make H1 be 50 point comic sans. This should not matter to the web developer.
I don't understand how one can design a website that would survive arbitrary style changes. I think that is unrealistic, so the designer should expect all default styles to be standard. And if the user has too much free time to change the font size then it is their own problem; my suggestion is that they simply use reader mode and change the styles there.
I would like to point out that some time ago, browsers allowed changing the default font size; it never worked well, so Opera started scaling the whole page instead, and other browsers followed.
Android browsers seem to repeat the same mistake by the way: they override developer's styles when the user changes font size in OS accessibility settings.
I mean, it's my computer. I should ultimately be in control of how a document renders. I might need larger text due to poor eyesight, or need to use a screen reader. Or I might just be irritated that the web developer just up and decided that light gray on dark gray text looked cool (it doesn't, it barely can be read). Or I might want to scroll with my keyboard because a mouse is painful to my RSI. If I go out of the way and set up accessibility settings, I would expect all applications on my device (including the web browser) to respect those settings.
It matters because we’re the ones who have to deal with the ticket when a user complains that their tweaked-to-within-an-inch-of-its-life system inevitably breaks the world.
The problem isn’t you per se, it’s the 5000 people who mis-follow some YouTube video because it looks cool, without actually understanding what they’re changing, how to undo it, or what the implications are.
Mozilla doesn't agree with you; if they thought that was true they wouldn't do the weird phased rollout to stable.
Although I agree with codedokode insofar as I don't see how the phased rollout in stable could possibly help. Hopefully they've thought of something I haven't otherwise it is silly.
You shouldn't, but there are many sites that assume the default font color is black and the default background is white. (I'm sure I forgot to set them myself.)
There are too many ways to set the font size. I don't even know which one is the correct choice, if there is such a thing.
Maybe not trying to control it is the best approach? How can one tell?
Bigger problem is when users run into a problem and can't find information about it because others can't reproduce it. I ran into this when they ran an A/B test on removing the search box and it drove me crazy that no one else seemingly had the same behavior, until I found and disabled the A/B test plugin.
To test in Firefox with the new behavior, set layout.css.h1-in-section-ua-styles.enabled to false in about:config.
Rolling out potentially risky changes in this way is not new, and is also a strategy that other browsers employ. It allows for course-correcting if necessary and is less disruptive than shipping to 100% of users directly.
Test what, exactly? If your test is "the page renders how I expect when using the browser's default styling", then that's a terrible test, because that can change at any time outside your control. You just have to accept whatever it is. There isn't a valid test for it.
The fix for this is to define your own h1 margin and font size, and then to test that the site looks correct with those values. Your test should be that your styling works, not that Firefox's styling works. That'd be like testing a dependency; you shouldn't be doing that.
For example, I tested something in my own private Azure Subscription, but the feature was simply missing in the customer subscription.
Microsoft was enabling features randomly without even documenting this or showing any kind of user-visible indicator of what feature set was available or not.
If you're a web developer, it's always a good idea to keep a copy of the beta release installed to test your sites on, so you can see these changes before they hit the stable release.
Even if they rolled out to 100% of users on day 1, you'd still have a heterogeneous audience, with all the folks using older versions of Firefox. It likely takes (at least) many months before 99% of Firefox users upgrade to any specific release.
I guess you could override the styles to whatever you want to keep it consistent, and then once the update rolls out to enough users you can stop overriding the styles if you want.
I get studies and experiments. But that's different from rolling out a default behavior to all users, which still requires safe-velocity guardrails so you don't Crowdstrike yourself.
Firefox dev edition is nice but when changes are rolled out gradually like this, forcing the setting in about:config is the only reliable way. Firefox runs experiments on its users in dev edition as much as it does in stable.
In practice, it's likely that not even one web developer would have a problem. The default h1 style on the web is quite ugly in general, mostly due to the extremely big font size and wide padding, which makes it very unusable on mobile devices. If you weren't overriding it, you'd already have problems.
The feedback loop is Firefox's "report broken site" in the Help menu. Success is there are no reports about broken sites due to this change, or the breakage is minor -- differently sized headings and different margins on some pages is expected and is acceptable. Sites becoming unusable is not acceptable.