Honestly, I do on a regular basis, and that doesn't sound like a lot for a browser to maintain. Imagine opening most links in a new tab. That's how it happens, and that's me.
As you can see from https://www.youtube.com/watch?v=7iwgyzX-76g, the browser becomes completely unusable with a few thousand tabs, and the experience is massively degraded well before that.
You would also run out of RAM well before opening 4,000 tabs; most consumer boards support at most 128 GB of RAM.
Before you continue digging this hole, you should consider that web browsers don't load all old tabs on startup. I have definitely had 2,000+ tabs open, and that didn't even slow down the browser, since usually fewer than 100 or so are loaded at a time. I basically use tabs as bookmarks.
Note that I use Firefox and not Chrome, and my tab usage is one of the reasons.
I don't know a way to check the exact number, but I'm 100% sure I'm over 2,000 tabs.
I'm at work now so I can't check, but my PC at home has 64 GB of RAM, and up to half of that is used by Firefox.
I've noticed it gets unresponsive over 160 windows. I currently have 156 open (it tells you the number when you exit Firefox). Each of those windows has at least 20 tabs.
Firefox is good for an excessive number of tabs because it only loads the tabs when you activate them. Chrome is a memory hog.
Don't things get moved to the HN front page even with few upvotes, and then disappear after a couple of seconds?
So with 4 upvotes, it could just be that it was its turn to be put on the front page, and 4 people had already voted by then.
Also, I thought there was some mechanism that rewarded getting a lot of votes quickly: if you got 4 upvotes within a couple of minutes, it would probably push you to the front page and keep you there long enough to get more votes?
Take all that with a grain of salt, though, because I have never actually tried to figure out how upvotes work; these are just ideas I've passively soaked up.
Here's a collection of comments from the same PR from users arguing for stabilization:
"I work on chumsky, a parser combinator crate. I've recently been experimenting with GATs internally as a way to control exactly what code Rust generates. Instead of praying to the LLVM gods that the compiler might optimise things, I use a GAT to project a particular parsing 'strategy' into the implementation of parsers. I've found that I can significantly improve the performance of the library by an order of magnitude, even beating out hand-written parsers, nom, and serde_json (with several caveats) without harming the library's expressivity (and, in fact, improving it). This all happens without the GATs themselves being exposed to library users at all."
"The first time I realized the Iterator trait was insufficient for what I wanted was before Rust 1.0 in 2014 when I wrote one of the first versions of the csv crate. All I wanted to do was write an iterator that lent out a borrow of an internal buffer in order to avoid allocating a new record on each iteration."
"I've been using GATs on nightly for a little less than a year for various experimental proc-macro crates. I can only say that GATs simplify a lot of things for me! I'm not doing things like LendingIterator, my interest is more in "DSL"s. Things like e.g. generate a struct that temporarily stores all the parameters to some function (where some of those parameters will be non-static references). The main concern will often be how much code can I avoid autogenerating i.e. is it possible to write abstractions as libraries over these things. GATs allow me to do that with ease. [...] The one I'm currently working on is unimock. The GAT stuff is only in the gat-mock branch, not released on crates.io yet. That GATified trait is MockFn."
"There is no way to use async traits in an embedded context (no_std) without GAT's or pulling in the alloc crate (to use async-trait). Pulling in alloc for most embedded platforms is not feasible, therefore we are currently locked to nightly for the embedded-hal-async crate."
"Issue #95 on the RustAudio crate for example says, "The first [solution] would be to make PortType generic over a 'a lifetime...however, this has a cascading effect, which would force all downstream users of port types to specify their lifetimes". Pythonesque made a simpler point here, "Without GATs, I ended up having to make an Hkt trait that had to be implemented for every type, define its projections, and then make everything heavily parametric and generic over the various conversions.""