I think the best vertical tabs implementation in Firefox is Sidebery. The use of "panes" to group tabs is brilliant. Older versions were buggy, but version 5 has been rock solid for me.
Couldn't agree more. I've been using Sidebery for about a month now, and I've even completely dropped Chromium, which I'd run beside Firefox for the last few years; now I run only Firefox with Sidebery and container tabs.
I've been using vertical tabs (first Tree Style Tab, now Sidebery for the last ~6 months) and I'm in the same boat.
Chrome is faster, snappier, and works better on more of the websites I commonly use, but the fact that I can't have "vertical tabs as trees" ruins the entire browser experience for me, so it's basically the only reason I've used Firefox for the last decade or so.
Add NoScript and Firefox will be much faster than Chrome. It will make you aware of how much untrusted code poorly developed sites expect you to run on their behalf.
Well, turn off JavaScript in Chrome and you're back to Chrome being faster. Turning off JS is obviously not a solution when the complaint is that (assuming the same amount of work) Chrome is faster for some JS.
NoScript doesn't turn off JavaScript. It lets you selectively disable some scripts while whitelisting others. You can't use much of the modern web without JS, but you can neuter the dozens of trackers and the ad bloat some sites insist on running on your computer.
Running uBlock Origin in “Medium mode” [1] also does wonders (i.e. blocking third-party scripts and frames).
It’s interesting to see how many websites work in this mode, and how much crap you’re not seeing. Websites load so much faster. And you can then easily whitelist specific domains (permanently or not), like content providers, while browsing.
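For anyone curious, medium mode mostly boils down to two global rules in uBlock Origin's dynamic filtering pane (rule syntax is source, destination, request type, action; the uBlock wiki has the full recipe):

    * * 3p-script block
    * * 3p-frame block

From there you add per-site "noop" rules for the few sites that genuinely need their third-party scripts to work.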
How have I not heard about this in the bajillion times I whined about tab groups?
I kinda dislike that Firefox only has one good option, which involves completely hiding each group that's not currently in use, but it functioned with their tab containers, which made it worth the hassle.
How do panes scale for many groups? Can you manage 20 or 30 panes, or does it become annoying at that scale?
Sidebery is nice, but it's missing an API allowing other addons to interact with it. This is a big benefit of Tree Style Tabs, especially as you can even exploit it as a user.
I use Sidebery, and I added some custom userChrome.css to have the sidebar collapse to only take up 36px and expand on hover. I absolutely love using it.
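It's roughly the following (a minimal sketch; the 36px collapsed width and 250px expanded width are just my numbers, and you'll need toolkit.legacyUserProfileCustomizations.stylesheets set to true in about:config for userChrome.css to load at all):

    /* keep the sidebar collapsed to a narrow strip by default */
    #sidebar-box {
      min-width: 36px !important;
      max-width: 36px !important;
      transition: min-width 0.15s ease, max-width 0.15s ease;
    }

    /* expand it while the pointer is over it */
    #sidebar-box:hover {
      min-width: 250px !important;
      max-width: 250px !important;
    }

    /* optional: drop the sidebar header to reclaim vertical space */
    #sidebar-header {
      display: none !important;
    }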
I switched to Sidebery a while back, and yeah, very much agreed: it's leagues ahead of the others in both the breadth of the base experience (container tabs and whatnot are fully integrated) and the customization options.
Their wiki also has a very simple and effective userChrome.css tweak to hide the top tab bar when the side panel is open. That's a rather crucial vertical space savings on a small laptop.
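If I remember the snippet right, it keys off the title preface Sidebery sets on the window (you configure a preface value like "[Sidebery]" in its settings), so the native tab strip collapses only while the panel is active. Roughly:

    /* hide the top tab bar whenever Sidebery's preface is in the window title */
    #main-window[titlepreface*="[Sidebery]"] #TabsToolbar {
      visibility: collapse !important;
    }

Check their wiki for the current version, since the exact selector has shifted across Firefox releases.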
I've added commands to Tridactyl that expand/collapse the tree I'm on in Tree Style Tab, using their JavaScript API (rough shape sketched below). Does Sidebery have anything like that?
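For context, the TST half of those commands is plain cross-extension messaging, which you can fire from Tridactyl's :jsb. A sketch from memory (the message names are documented in TST's API wiki, so double-check them there):

    // Tree Style Tab's extension ID; other add-ons send messages straight to it.
    const TST_ID = 'treestyletab@piro.sakura.ne.jp';

    async function collapseCurrentTree() {
      // find the active tab in the current window
      const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
      // ask TST to collapse the subtree rooted at it ('expand-tree' is the inverse)
      await browser.runtime.sendMessage(TST_ID, { type: 'collapse-tree', tab: tab.id });
    }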
Sleep is primarily preventative, as it plays a role in the immune system. Personally, I haven't been sick in many years and when other people are getting sick I just get tired and sleep a few more hours than usual.
I think it's interesting that human minds generally (though not always!) improve when exposed to the output of other human minds. It seems to be the opposite for current LLMs.
Maybe it's less about "Human VS Robot" and more about exposure to "Original thoughts VS mass-produced average thoughts".
I don't think a human mind would improve in an echo chamber with no new information. I think the reason the human mind improves is that we're exposed to new, original, and/or different thoughts that we hadn't considered or come across before.
Meanwhile, an LLM will just regurgitate the most likely token based on the previous ones, so there isn't any originality there; hence any output from an LLM cannot improve another LLM. There is nothing new to be learned, basically.
> Humans are very capable of looking around themselves and thinking "I can do better than this"
Doesn't this require at least some perspective of what "better than this" means, which you could only know with at least a bit of outside influence in one way or another?
humans haven't had the same set of all-encompassing "training experiences" that LLMs have. we each have a subset of knowledge that may overlap with others' knowledge, but is largely unique. so when we interact with each other we can learn new things, but with LLMs I imagine it's like a group of experienced but antiquated professors developing their own set of out-of-touch ideas
A sequence of AI models trained on each other's output accumulates mutations, which might help or hurt, but if there's one dominant model at any given time then it's like asexual reproduction with only one living descendant in each generation (and all the competing models being failures to reproduce). A photocopy of a photocopy of a photocopy: this also strikes me as the incorrect model that Intelligent Design proponents mistakenly think is how evolution is supposed to work.
A huge number of competing models that never rise to dominance would be more like plants spreading pollen in the wind.
A huge number of AIs that are each smart enough to decide what to include in their own training sets would be more like animal reproduction. The fittest memes survive.
Memetic mode collapses still happen in individual AIs (they still happen in humans; we're not magic), but that manifests as certain AIs ceasing to be useful and others replacing them economically.
A few mega-minds make a memetic monoculture, fragile in all the same ways as a biological monoculture.
A different biological analogy occurred to me, which I've mentioned before in a security context. It isn't model degeneration but the amplification of invisible nasties that don't become a problem until way down the line.
Natural examples are prions such as bovine spongiform encephalopathy [0] or sheep scrapie. This seems to really become a problem in systems with a strong and fast positive feedback loop with some selector. In the case of cattle it was feeding rendered bonemeal from dead cattle back to livestock. Prions are immune to high-temperature removal, so they are selected for and concentrated by the feedback process.
To really feel the horror of this, read Ken Thompson's "Reflections on Trusting Trust" [1] and ponder the ways that a trojan can be replicated iteratively (like a worm) but undetectably.
It isn't loss functions we should worry about. It's gain functions.
Have you ever heard of the telephone game? That's what is going on here. Or imagine an original story of something that really happened: if it passes through a chain of 100 people, how much do you think the story will resemble the original one?
I mean it makes sense that (even impressively functional) statistical approximations would degrade when recursed.
If anything I think this just demonstrates yet again that these aren't actually analogous to what humans think of as "minds", even if they're able to replicate more of the output than makes us comfortable.
Humans exhibit very similar behavior. Prolonged sensory deprivation can drive a single individual insane. Fully isolated/monolithic/connected communities easily become detached from reality and are susceptible to mass psychosis. Etc etc etc. Humans need some minimum amount of external data to keep them in check as well.
I searched instead for "size byte bits", and the third result has the answer. It seems like the engine gives equal weight to all words in the search, so "are", "in", and "a" throw it off.
Excellent! I'm tired of search engines that optimize for natural-language queries, because the inevitable trade-off is that they become useless for keyword/exact queries.
I think this paragraph on the difficulty of building good independent indexes should not be overlooked. What's going on with Cloudflare?
> When talking to search engine founders, I found that the biggest obstacle to growing an index is getting blocked by sites. Cloudflare is one of the worst offenders. Too many sites block perfectly well-behaved crawlers, only allowing major players like Googlebot, BingBot, and TwitterBot; this cements the current duopoly over English search and is harmful to the health of the Web as a whole.
Cloudflare isn't that bad in my experience. They were blocking me really aggressively when I started out, but there are some hoops[1] you can jump through to get them to recognize your bot. That goes a long way.
It does depend on the site's settings, though. Some are set to block all bots, and then you're kinda out of luck.
In general, I've found that like 99% of the problems you might encounter running a bot can be solved by just finding the right person and sending them an email explaining your situation. In almost all cases, they'll let you through.