


I think the best vertical tabs implementation in Firefox is Sidebery. The use of "panes" to group tabs is brilliant. Older versions were buggy, but version 5 has been rock solid for me.

https://github.com/mbnuqw/sidebery


Another former Tree Style Tabs user, now on Sidebery with no regrets.

I'm excited that Firefox is building this in by default so I don't have to keep fiddling with userChrome.css to get rid of the top tab bar.


Looks like we won't have nesting in Firefox's implementation, which makes it kinda pointless to me.


So they've copied Edge's poor implementation of vertical tabs. Blech.

Hey, look on the bright side: maybe chromium will get vertical tabs soon!


Can't agree more. I've been using Sidebery for about a month now, and I've even completely dropped Chromium, which I'd run beside Firefox for the last few years; now it's Firefox with Sidebery and container tabs only.


I've been using vertical tabs (first Tree Style Tabs, now Sidebery for the last ~6 months) and I'm in the same boat.

Chrome is faster, snappier, and works better on more of the websites I commonly use, but the fact that I can't have "vertical tabs as trees" ruins the entire browser experience for me, so it's basically the only reason I've been using Firefox for the last decade or so.


Add NoScript and Firefox will be much faster than Chrome. It will make you aware of how much untrusted code poorly developed sites expect you to run on their behalf.


Well, turn off JavaScript in Chrome and you're back to Chrome being faster. Turning off JS is obviously not a solution when the complaint is that (for the same amount of work) Chrome is faster at running JS.


NoScript doesn't turn off JavaScript. It lets you selectively disable some scripts while whitelisting others. You can't use much of the modern web without JS, but you can neuter the dozens of trackers and the ad bloat some sites insist on running on your computer.


I'm well aware of what NoScript does, I'm already using it. It seems you're missing the point of the comparison.


Running uBlock Origin in “Medium mode” [1] also does wonders (it blocks third-party scripts and frames). It's interesting to see how many websites work in this mode, and the amount of crap you're not seeing. Websites load so much faster. And you can then easily whitelist specific domains (permanently or not), like content providers, while browsing.

[1] https://github.com/gorhill/uBlock/wiki/Blocking-mode:-medium...
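
To spare a click: as I understand the wiki, medium mode boils down to two dynamic filtering rules in uBlock's "My rules" pane (check the page above for the current recommended setup):

  * * 3p-script block
  * * 3p-frame block

plus per-site noop rules like "example.com * 3p-script noop" for the domains you decide to trust.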


How have I not heard about this in the bajillion times I whined about tab groups?

I kinda dislike that Firefox only has one good option, which involves completely hiding each group currently not in use, but it worked with their tab containers, which made it worth the hassle.

If this does too, I'm switching permanently


How do panes scale for many groups? Can you manage 20 or 30 panes, or does it become annoying at that point?

Sidebery is nice, but it's missing an API allowing other addons to interact with it. This is a big benefit of Tree Style Tabs, especially as you can even exploit it as a user.


I have 20 panes and it works fine.


I use Sidebery, and I added some custom userChrome.css to have the sidebar collapse to only take up 36px and expand on hover. Absolutely love using it.
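
Roughly, the CSS looks like this -- widths and selectors are from my setup, so they may need tweaking for other Firefox versions:

  /* collapse the sidebar to a 36px strip, expand it on hover */
  #sidebar-box {
    overflow: hidden;
    min-width: 36px !important;
    max-width: 36px !important;
    transition: all 0.15s ease;
  }
  #sidebar-box:hover {
    min-width: 300px !important;
    max-width: 300px !important;
  }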


I switched to Sidebery a while back, and yeah, very much agreed: it's leagues ahead of the others in terms of base-experience breadth (container tabs and whatnot are fully integrated) and customization options.

Their wiki also has a very simple and effective userChrome.css tweak to hide the top tab bar when the side panel is open. That's a rather crucial vertical space savings on a small laptop.
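
If I remember the wiki tweak right, it hinges on Sidebery's "preface value" option, which tags the window title while the panel is open; userChrome.css can then match on that (the "[Sidebery]" marker below is whatever preface you configure):

  /* hide the native tab strip whenever the Sidebery panel is open */
  #main-window[titlepreface*="[Sidebery]"] #TabsToolbar {
    visibility: collapse !important;
  }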


I've added commands to Tridactyl that expand/collapse the tabs I'm on in Tree Style Tabs, using its JavaScript API. Does Sidebery have anything like that?
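
For reference, mine are roughly these two lines in tridactylrc (untested as pasted -- the extension id and message types are from TST's API docs, and the "current" tab alias may depend on the TST version):

  " collapse/expand the current tree via TST's external messaging API
  command tstcollapse jsb browser.runtime.sendMessage('treestyletab@piro.sakura.ne.jp', {type: 'collapse-tree', tab: 'current'})
  command tstexpand jsb browser.runtime.sendMessage('treestyletab@piro.sakura.ne.jp', {type: 'expand-tree', tab: 'current'})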


Started using Sidebery over a year ago and have not looked back since. Very good stuff.


Sidebery is amazing. I have been using it for more than a year now and I love it.


I am healthy, and intend to remain so -- I wear an N95 in all indoor spaces.


The single best prevention for viruses is an adequate amount of quality sleep.


I'd love to see some research on how quality sleep helps with, say, Ebola.


Sleep is primarily preventative, as it plays a role in the immune system. Personally, I haven't been sick in many years and when other people are getting sick I just get tired and sleep a few more hours than usual.


I think it's interesting that human minds generally (though not always!) improve when exposed to the output of other human minds. It seems to be the opposite for current LLMs.


Maybe it's less about "Human VS Robot" and more about exposure to "Original thoughts VS mass-produced average thoughts".

I don't think a human mind would improve in an echo chamber with no new information. I think the reason the human mind improves is that we're exposed to new, original, and/or different thoughts that we hadn't considered or come across before.

Meanwhile, an LLM will just regurgitate the most likely token based on the previous ones, so there isn't any originality there; hence output from one LLM cannot improve another LLM. There is nothing new to be learned, basically.


> I don't think a human mind would be improving if they're in a echo-chamber with no new information

If this were true of humans, we would have never made it this far

Humans are very capable of looking around themselves and thinking "I can do better than this", and then trying to come up with ways how

LLMs are not


> Humans are very capable of looking around themselves and thinking "I can do better than this"

Doesn't this require at least some perspective of what "better than this" means, which you could only know with at least a bit of outside influence in one way or another?


Every human has feelings and instincts, they answer what "better than this" means.

Yes, even in math and science, those were built on top of our feeling of "better than this" iterated over thousands of years.


Parsimony, explanatory power, and aesthetics. These are things that could be taught to a computer, and I think we will. We had to evolve them too.


Humans haven't had the same set of all-encompassing "training experiences" that LLMs have. We each have a subset of knowledge that may overlap with someone else's, but is largely unique, so when we interact with each other we can learn new things. With LLMs, I imagine it's more like a group of experienced but antiquated professors developing their own set of out-of-touch ideas.


Reproductive analogy:

A sequence of AI models trained on each other's output accumulates mutations, which might help or hurt, but if there's one dominant model at any given time then it's like asexual reproduction with only one living descendant in each generation (all the competing models being failures to reproduce). A photocopy of a photocopy of a photocopy; this also seems to be the incorrect model that Intelligent Design proponents mistakenly think evolution is supposed to follow.

A huge number of competing models that never rise to dominance would be more like plants spreading pollen in the wind.

A huge number of AIs that are each smart enough to decide what to include in their own training sets would be more like animal reproduction. The fittest memes survive.

Memetic mode collapse still happens in individual AIs (it still happens in humans; we're not magic), but it manifests as certain AIs ceasing to be useful and others replacing them economically.

A few mega-minds is a memetic monoculture, fragile in all the same ways as a biological monoculture.
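
The photocopy effect is easy to demo: fit a distribution to samples drawn from the previous fit, repeat, and the variance wanders down toward zero. A toy Python sketch (sample size and generation count picked arbitrarily):

  import random, statistics

  mean, stdev = 0.0, 1.0  # generation 0: the "real" data
  for gen in range(1, 21):
      # each generation trains only on the previous generation's output
      samples = [random.gauss(mean, stdev) for _ in range(10)]
      mean, stdev = statistics.fmean(samples), statistics.stdev(samples)
      print(gen, round(stdev, 3))
  # with a single "living descendant" per generation, lost diversity is
  # never replenished, so stdev tends to drift toward 0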


A different biological analogy occurred to me which I've mentioned before in a security context. It isn't model degeneration but the amplification of invisible nasties that don't become a problem until way down the line.

Natural examples are prions such as bovine spongiform encephalopathy [0] or sheep scrapie. This seems to really become a problem in systems with a strong, fast positive feedback loop and some selector. In the case of cattle it was feeding rendered bone meal from dead cattle back to livestock. Prions are immune to high-temperature removal, so they are selected for and concentrated by the feedback process.

To really feel the horror of this, read Ken Thompson's "Reflections on Trusting Trust" [1] and ponder the ways that a trojan can be replicated iteratively (like a worm) but undetectably.

It isn't loss functions we should worry about. It's gain functions.

[0] https://en.wikipedia.org/wiki/Bovine_spongiform_encephalopat...

[1] https://tebibyte.media/blog/reflections-on-trusting-trust/


I do get to choose what I read, though.


Have you ever heard of the telephone game? That's what's going on here. Or imagine an original story of something that really happened: if it passes through a chain of 100 people, how much do you think the result will resemble the original?


A more appropriate analogy would be isolating someone from the rest of the world so that they can only read their own writings from now on.

While some people can thrive in this kind of environment (think Kant, for example), many would go crazy.


Different loss function


This might be my biases speaking, but I have a hunch that there's still more potential for human-generated content to poison our minds than AI-generated content.


It's almost as if LLMs and human minds operate entirely differently from each other.


I mean it makes sense that (even impressively functional) statistical approximations would degrade when recursed.

If anything I think this just demonstrates yet again that these aren't actually analogous to what humans think of as "minds", even if they're able to replicate more of the output than makes us comfortable.


Humans exhibit very similar behavior. Prolonged sensory deprivation can drive a single individual insane. Fully isolated/monolithic/connected communities easily become detached from reality and are susceptible to mass psychosis. Etc etc etc. Humans need some minimum amount of external data to keep them in check as well.


Besides, it's not just about profit. Retail is where they abuse workers, so it makes sense not to do business with that part of the company.


From what I’ve heard from a friend in AWS, it’s not exactly all roses over there either.


> I'll continue to boycott Amazon.

You're not the only one. I haven't bought anything from Amazon in the past five years or so.


I searched instead for "size byte bits", third result has the answer. It seems like the engine gives equal weight to all words in the search, so "are", "in" and "a" throw it off.
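
The textbook fix for that is to weight query terms by rarity (inverse document frequency) rather than equally, so stopwords contribute almost nothing. A toy Python sketch with made-up document counts:

  import math

  N = 1_000_000  # hypothetical index size, in pages
  df = {"are": 900_000, "in": 950_000, "a": 980_000,
        "bits": 12_000, "byte": 8_000}
  for term, n in df.items():
      # idf ~ 0 for ubiquitous words, large for rare ones
      print(term, round(math.log(N / n), 2))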


Excellent! I'm tired of search engines that optimize for natural-language queries, because the inevitable trade-off is that they become useless for keyword/exact queries.


I think this paragraph on the difficulty of building good independent indexes should not be overlooked. What's going on with Cloudflare?

> When talking to search engine founders, I found that the biggest obstacle to growing an index is getting blocked by sites. Cloudflare is one of the worst offenders. Too many sites block perfectly well-behaved crawlers, only allowing major players like Googlebot, BingBot, and TwitterBot; this cements the current duopoly over English search and is harmful to the health of the Web as a whole.


Cloudflare isn't that bad in my experience. They were really aggressively blocking me when I started out, but there are some hoops[1] you can jump through to make them recognize your bot. Goes a long way.

It does depend on the sites' settings though. Some are set to block all bots, and then you're kinda out of luck.

In general, I've found that like 99% of the problems you might encounter running a bot can be solved by just finding the right person and sending them an email explaining your situation. In almost all cases, they'll let you through.

[1] https://blog.cloudflare.com/friendly-bots/
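
The table stakes before any of that: send a descriptive User-Agent with contact info and honor robots.txt. A minimal Python sketch (the bot name and URLs are placeholders):

  import urllib.robotparser, urllib.request

  UA = "MyBot/0.1 (+https://example.com/bot; crawler@example.com)"  # placeholder
  rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
  rp.read()

  url = "https://example.com/some/page"
  if rp.can_fetch(UA, url):
      req = urllib.request.Request(url, headers={"User-Agent": UA})
      html = urllib.request.urlopen(req).read()
  # also: rate-limit per host, back off on 429/503, respect Crawl-delay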


That's good to know -- thanks!


Not a license, but there are some terms of use: https://commoncrawl.org/terms-of-use/

