Firefox tooltip bug fixed after 22 years (bugzilla.mozilla.org)
966 points by MallocVoidstar on Oct 10, 2023 | hide | past | favorite | 337 comments



If you're curious about how you'd even go about finding and fixing a bug in the Firefox codebase, Mike Conley has been livestreaming his dev work at Mozilla weekly for years. A lot of the stuff he records involves picking a bug from the backlog, and then meticulously hunting it down and murdering it.

I really recommend giving it a watch, especially if you're just starting out in programming and want to see what work in a real codebase is like: https://youtube.com/@mikeconleytoronto


When I was in college (so like... almost as long ago as this bug was filed, yikes!) I spent a lot of time volunteering for Mozilla. I started out doing bug triage for their QA team, which involved things like making sure a bug was in the proper Bugzilla component and had sane steps to reproduce, and so on. Even this was very enlightening for someone with absolutely zero real-world experience: seeing how a large-scale project was run, watching how the "real" devs worked through fixes and features, and being introduced to concepts like diffs, patches, and CI/CD (it was nightly builds in those days, so not actually Continuous, but moving in the right direction).

Later on I figured out how to build the apps, which was a bear, especially on Windows. Firefox, Thunderbird, and the whole Mozilla Suite which was still a thing back then, with integrated browser, mail, HTML editor etc. And finally contributed some bug fixes of my own. Nothing significant, we're talking like fixing typos and misaligned icons, but it still felt incredible to be shipping code that literal millions of people would use.

All of this was immediately useful in my first job out of school. I started out doing IT support and QA at a startup but quickly found they had no build and deploy automation. Someone would push to SVN and you might not find out for days that it broke the build. I got them set up with a build server using what I had learned from Mozilla (Buildbot? Cruisecontrol? Hudson? Been too long) that would do a build on push and email the team if it couldn't compile or failed tests. Was pretty fun having a bunch of seasoned developers ask me "how the hell does our IT guy know about all this shit?" I was able to carve out a DevOps role for myself before that term existed, thanks to everything I learned playing with Mozilla in my free time.

This is a roundabout way of saying I totally agree, hacking on Mozilla projects is a great way to get real world experience :)


> Cruisecontrol?

I had forgotten about CruiseControl. Long ago, I had a gig with an avionics developer, on a team whose technical lead had gotten deep into agile and had mostly sane practices (source control, build scripts, and automated tests were not a given back then).

Still, we had a colleague who was often a bit too quick to commit broken builds or failing tests to the trunk. One day, I stumbled upon this program - CruiseControl - designed to automate build and test tasks. I had no notion that continuous integration was a thing.

As a practical joke, I set up a VM, installed CruiseControl, gave it access to our Subversion server, and created some jobs using existing Ant tasks, just to email-blast the team when a broken build was committed.

It got positive results way beyond the initial intent (mainly by eliminating "the build worked on my machine"). Two years later, all of the company's software projects had been moved to Hudson.


They used Tinderbox back then. It was definitely considered to be “continuous integration”, because the build machines never stopped. They continually rebuilt the software, starting a new build whenever the previous one finished. It just took hours to run a build, and more hours to run the tests, so there was no way to go faster.

I too got a big head start by volunteering for Mozilla.


At last. More of this please, Mozilla...

One of the most annoying versions of this bug is when launching a full-screen game: the mouse gets repositioned and triggers a tooltip, which sits on top of the game, and the game doesn't like being alt-tabbed away from when you unfocus it to go deal with Firefox, so you just have to restart it...


Huh. At this very moment I am listening to Jonathan Blow's talk to DevGAMM four years ago, which has this kind of glitch as a recurring motif.

https://www.youtube.com/watch?v=ZSRHeXYDLko&t=2729s


It's a common motif in desktop videogaming. The modern desktop computing model is a multitask windowed architecture where the mouse controls the cursor on the screen, a focused window receives events, and the keyboard generates events that may or may not translate to filling text buffers or activating behaviors based upon the subtle state of the concept of "focus."

A videogame generally wants none of that: no windows controlled by the OS, no OS-provided cursor because it won't fit the game's visual theming (or makes no sense in that kind of game at all), other behaviors on the mouse, different keyboard behavior, and (if they could get away with it) no multitasking; you need all that CPU for game stuff. So shifting a desktop PC from "not playing a game" mode to "playing a game" mode is, historically, an extremely modal shift involving kicking most of the OS to the curb, rejecting its reality and replacing it with your own.

Modern OSes have better abstractions for this, and modern computers can actually tolerate running background tasks alongside high-performance games (we've crossed a threshold where most reasonably-optimized games can't find anything to do with all your CPU because the experience is still long-polled by human perception speed). But the fundamental design tension is forever there.


I'm still waiting for them to fix basic container tab behaviour [1]. I can't help but think that their product management is completely broken.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1479858


There are countless people who have screamed that they want to switch from Chrome, but can't because profiles don't work the way we want them to. And it's just two seemingly small changes that need to happen, too:

1. Make the UI around profiles better.
2. When I click a link anywhere outside of the browser, open it in the last active window.

Number 2 is the most important honestly


Many people don't know that Profiles is carried over from Netscape 3.x days (IIRC).

The code powering Profiles is probably much more convoluted than people realize.

Addendum: The earliest reference I can find is from around 2000 (https://math.vanderbilt.edu/schectex/wincd/tips_netscape.htm), which coincides with the Netscape 3/4 era.


And, if I remember correctly, the whole Profiles thing was initially hidden in Firefox. I remember you couldn't easily switch profiles without using the command line.


Yes, it was there, but not really supported. We had to (ab)use the feature for some stuff, but everything was read-only and locked down by enterprise profiles, so it was not a big problem for us.

I think profiles are not supported as first class citizens, still.

Shall that be improved? Yes. Will that be easy? I don't think so.


If they have the resources to constantly revamp the Firefox and Thunderbird UIs, they have the resources to rewrite the profile system.


UI reboots are so sexy though -- if this is like any company I've been involved with, they hire/promote a new design boss every couple years, and that person spends 6 months making a sexy Illustrator/Sketch/Figma (in that order over the years) document showing how hott things would look if we changed all the fonts and aped whatever widgets had been redone in last year's iOS. Instant greenlight. Try getting that kind of C-suite buy-in for a boring under-the-hood fix!


This constant redesign with no real improvement cycle irritates me to no end.


Actually, Thunderbird Supernova and Firefox's latest iteration are all very good, both in space efficiency and performance.

I’m saying this as a lover of 90s dense interfaces, like Eclipse.


Did you see the code? Do you know how deep it goes? I don't have the definitive answer, so if you have some insight, I'd love to hear it.

UI is the tip of the proverbial iceberg. It's relatively easy to modify.


I'm using Firefox on Linux and 2 is how it's been working for me for a looong time now. Clicked links in external programs always open in the last active Firefox window for me. Is this OS or even window-manager specific?


I think they mean they have windows open in different profiles (profiles, not containers), and want the last active one to pick up links from external apps...

I, for one, would like the last active container to be used.


Interesting. I like the option which Edge gives, which is I select the profile that gets URL opens from outside the browser (my "main work profile") and add site-level exceptions that route certain sites to the right profiles (example: LinkedIn links should open in my personal profile, Demo Server accounts open in the dedicated demo profile).

If I relied on "last active", I would frequently get things opening up in the wrong place just because, say, the last time I was in a browser I happened to be in my personal profile, and 30 minutes later I click a link in work Slack.


Yeah, I can see how that can end up messy, but filtering by URL also doesn't always work. To give one example: I may open Google Docs in different containers because I have several accounts (personal and work). So, if I receive a link to a doc that was shared with me and open it from the mail client... it won't necessarily end up in the right container. It can work out fine when you use the menu to open it in another container, but on many sites the original URL is redirected to a login page or whatever, and opening in another container is not going to be helpful at all, because you've now lost the original link you needed to open.


You can do that with sidebery. Firefox’s default container plug-in is just a baseline. Other plugins do a much better job.


Are you using different profiles? I have two running at the same time, and links always open in the browser instance running the "default" profile, not the other one.


I also have two profiles. I noticed that, for me, links open in the profile that was started last. In my case I always want them to open in the default profile, so I have to make sure to start the other profile first and the default one second.

This may not be easy to fix.

The problem here is how the URL opener selects the process.

At that point it can't even know which process has the last active window. It would have to connect to both processes and request that information, instead of just sending the URL to the most recent process in the list. And then each process would have to report when it was last active so the opener could choose whichever was active most recently. This may not be possible without some more elaborate bookkeeping of timestamps as windows get activated.

This increases the complexity of the URL-opening feature in a non-trivial way.

It may not even be desirable to handle this in Firefox alone.

What if you run Chromium and Firefox? Now you want a generic URL opener that sends the URL to the last active browser.

What we really want here is a generic way to send a URL to an open window.
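The timestamp bookkeeping described above can be sketched in a few lines: each browser instance records when one of its windows was last focused, and the URL opener picks the freshest record. A hypothetical Python illustration; the state directory and file format are invented for the example, and no real browser works this way:

```python
import time
from pathlib import Path

# Hypothetical shared state directory; each browser instance owns one file.
STATE_DIR = Path("/tmp/url-opener-demo")

def record_activation(instance_id, when=None):
    """Called by a browser instance whenever one of its windows gains focus."""
    STATE_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.time() if when is None else when
    (STATE_DIR / instance_id).write_text(str(stamp))

def most_recently_active():
    """Called by the URL opener to pick the instance focused most recently."""
    stamps = {}
    for f in STATE_DIR.glob("*"):
        try:
            stamps[f.name] = float(f.read_text())
        except ValueError:
            continue  # ignore corrupt entries
    return max(stamps, key=stamps.get) if stamps else None
```

Even this toy version hints at the real-world complexity: stale files from crashed instances, races between focus changes and link clicks, and the fact that every participating browser would have to opt in.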


> at that point it can't even know which process has the last active window. it would have to connect to both processes

Forgive me as I'm not a "real" desktop programmer, only web technologies really, but...

Is there not some master process that knows the basic details about its various separate processes? I mean, the main process has to be able to tell the others to quit, right? I would have thought it could keep track of the metadata of each for coordination purposes, like, as a process's window becomes Front it could check in and say 'yo this is process 12345, I'm profile X and I'm becoming Active.'


I don't know how Wayland does it, but one of the challenges in X11 was that it was not trivial, or even always possible, to relate an X11 window to its Unix process, because the window could be coming from a remote machine. In other words, X11 knows nothing about Unix processes.

Kill in X11 is done by sending a message asking the window to go away. In the normal case this triggers an exit in the process, but it is possible for the process to keep running or even open a new window. X11 cannot force a program to really terminate because, again, the process could be from a different machine or a different user.

There are two ways to talk to a process. One is to find the X11 window and send messages to it through the X11 protocol; this is done, for example, to share clipboard contents, and it would be possible to send URLs that way too.

The other is to find the Unix process and use some RPC mechanism to talk to it that way.

As far as I can tell, Firefox uses the Unix process. This may be because of the cross-platform nature of Firefox: process RPC is easier to do in a cross-platform way, while the X11 method does not work on Windows or macOS, so each system would need custom code to handle this.
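For what it's worth, EWMH-capable window managers do publish a best-effort window-to-process mapping through the `_NET_WM_PID` property; it's advisory and breaks for remote clients, exactly as described above. A hedged Python sketch that shells out to `xprop`; the sample formats the parsers expect match what `xprop` typically prints, but treat the whole thing as illustrative:

```python
import re
import subprocess

def parse_active_window(xprop_root_output):
    """Pull the window id out of `xprop -root _NET_ACTIVE_WINDOW` output."""
    m = re.search(r"window id # (0x[0-9a-fA-F]+)", xprop_root_output)
    return m.group(1) if m else None

def parse_window_pid(xprop_window_output):
    """Pull the PID out of `xprop -id <wid> _NET_WM_PID` output."""
    m = re.search(r"_NET_WM_PID\(CARDINAL\) = (\d+)", xprop_window_output)
    return int(m.group(1)) if m else None

def active_window_pid():
    """Best-effort PID of the focused window (X11 + EWMH only)."""
    root = subprocess.run(["xprop", "-root", "_NET_ACTIVE_WINDOW"],
                          capture_output=True, text=True).stdout
    wid = parse_active_window(root)
    if wid is None:
        return None
    win = subprocess.run(["xprop", "-id", wid, "_NET_WM_PID"],
                         capture_output=True, text=True).stdout
    return parse_window_pid(win)
```

Note that `_NET_WM_PID` is only as truthful as the client that set it, which is why it can't be a general solution.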


I got tired of dealing with it and just told my system that URLs should be opened by a script I wrote, which puts the URL on the clipboard and shows a notification that it did so. So I open/click a link, then go find the window I want it to open in, paste, and go.
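A minimal sketch of such a handler, assuming a Linux desktop with `xclip` and `notify-send` available (both assumptions; any clipboard and notification tools would do):

```python
import subprocess
import sys

def notification_message(url):
    """Text shown in the desktop notification after copying."""
    return f"Copied to clipboard: {url}"

def handle_url(url):
    # Put the URL on the clipboard via xclip...
    subprocess.run(["xclip", "-selection", "clipboard"],
                   input=url.encode(), check=True)
    # ...and pop a notification so you know it worked.
    subprocess.run(["notify-send", "URL handler", notification_message(url)],
                   check=True)

if __name__ == "__main__" and len(sys.argv) > 1:
    handle_url(sys.argv[1])
```

You would then point a `.desktop` entry's `Exec=` line at the script and make it the default URL handler (e.g. via `xdg-settings set default-web-browser`).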


If you want to avoid that, but have control of where urls are opened (and use a mac);

https://github.com/johnste/finicky


Containers are enough for your use case. I know that for some use cases they aren't. But for nearly 90% of users, they are enough, and even better than profiles.


You sure?

What I want to do is sandbox my accounts, specifically my work account. Here's a scenario:

I click a link in slack, I want the link to open in the work container.

How do I do this?

Because using containers, it will open in the default container by default; I then have to right-click the tab and say "open in work container", which then opens that tab again, in the Work container.

What I want is to not have to do that extra work. Click link -> Work Container.

You can't, at least not by default; you need to add extra plugins, like Multi-Account Containers or whatever the plugin is called. That's a LOT of work for the average user. Compare that to profiles, where a clicked link opens in the last active window, and since windows are profiles, that will most likely be the right profile. So if I'm working constantly in my work profile window, then click Slack and click a link, it lands right back where I want it.

The problem with container tabs is that they're tabs, not windows. So is there any way I can say to Firefox, "this new window is the WORK container, please"?


Looks like there's an extension which adds a new protocol handler, which lets you open a given link in the container you like, even from the command line.

It's available at https://github.com/honsiorovskyi/open-url-in-container and on the Mozilla extension store, so you can install it directly.
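If I'm reading that extension's README right, it registers an `ext+container:` protocol handler, so a link can carry its target container in the URL itself. A hypothetical helper that builds such a URL (the `name`/`url` parameter names come from the extension's docs; double-check them before relying on this):

```python
from urllib.parse import quote

def container_url(container_name, target_url):
    """Build an ext+container: link for the open-url-in-container extension."""
    return (f"ext+container:name={quote(container_name)}"
            f"&url={quote(target_url, safe='')}")
```

You could then open a link in the Work container with something like `firefox 'ext+container:name=Work&url=https%3A%2F%2Fexample.com'`.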

I think you can register a new "application" for opening links, not set as the default, and use Right Click -> Open With -> Firefox (WORK CONTAINER).

Will that work?


This is exactly the reply I was expecting. The "yes, it works, but here are some extra things you need to do to make it work". No average user is going to do that.

To me containers still don't replace profiles.


Last week I was at a project meeting cum conference, where most of the people attending were scientists, but not computer people. They showed what they did with computers and programming languages, and it amazed me to no end.

These people wired tons of plugins into their browsers, VSCode, and Obsidian installations, and created web sites and tools that broke new ground, with their tools running on distributed systems at very good speeds. Some of these tools would require human-years to develop even by "computer people", yet these people sat with their partners during the pandemic and built these amazing things while wearing trainers and casually drinking coffee.

These people would make this plugin draw circles up in the air while most of us “not average users” are reading the docs.

So, what is an average user? By your definition I'm an average user, because I use containers; I hadn't used this extension, but I found it in five minutes, because you wanted a solution.

Now I’ll install and use it fully tomorrow morning, before my coffee gets warm. Because it’s worth it.


Yes this! This is the main thing stopping me moving.

The UX is currently horrible because you have to run multiple copies of the browser.


I remember commenting with a "me too + info" on a bug years and years ago and finally stopped using Firefox for good about a decade ago mostly because this bug had been annoying me multiple times a day for years and Chrome was finally good enough to replace it.

It was always such a pain having to re-focus almost every Firefox window until the tooltip disappeared, just to figure out where it had come from, because it would often be something fairly generic like "Previous" that it chose to display and push in front of absolutely everything else.


I am more shocked this behavior did not inadvertently change sometime in the intervening years. That is some impressive bug backwards compatibility.


bugwards compatibility - sounds like a Microsoft thing.


Isn't "it's not a bug, it's a feature" an MS mantra?


Linus Torvalds would like a word with you. If something is changed in the kernel, even if it's technically a bugfix and something in userspace breaks, it's a kernel bug, period.



I have always blamed my various Linux desktop window managers for that bug, never realizing it's always the same culprit :facepalm:

On occasion (a smaller screen) it could be quite annoying as it might interfere with the display of a form or other critical element.

Looking forward to the update, and to the next 22 years of Firefox not just being bug-free, but being the impactful application it once was.


I will keep on blaming the window managers, because I have certainly seen it outside Firefox. And Linux.


I see it on Winforms applications too.


Excel showed me tooltips’ shadows on other desktops. Took me a few minutes to figure it out!


It also happened on Windows and continues to happen with some other applications across operating systems. There's no one specific component or GUI layer to blame.


I think I have seen it also with LibreOffice.


It happens elsewhere. I notice it a lot with DBeaver.


For me it regularly happens with Chrome on Fedora/GNOME/X.Org.


I had the bug in other GTK applications, so I blamed GTK or GNOME :(

Apparently it can be fixed by the application.


https://phabricator.services.mozilla.com/rMOZILLACENTRAL8ae3...

If you see the five-line fix for the bug and think "I could have done this!", you could!

The real question to answer is "why didn't I?"

So, to answer that question for myself: how easy is it to compile Firefox and run the test suite?

The long bug-hunting bit after that is the fun part.


I regularly contribute to the software I use, and I tried doing something for Firefox. It was so hard for an outsider that I just stopped trying. The entire system involving hg, various repositories, outdated wikis, a very complex build, Bugzilla, etc. is just so... hard to get into.

If Firefox were on GitHub (or even a self-hosted GitLab) where I could just fork it, play around, and have the exact same CI, that would be an entirely different story. I wish it were as easy as that.


It's one command to clone from git or hg, at your convenience; one line to install the development dependencies (it figures out the OS/distro and does the right thing); and one line to build.

Then one line to install the command-line tooling for submitting patches, and another line to submit a patch (or a patch series).

All the docs are there for all major platforms, aimed at first timer in our code base: https://firefox-source-docs.mozilla.org/contributing/contrib...

It used to be much more complex than that, but this isn't the case anymore. Using GitHub (which we need to use anyway, because lots of projects and lots of web standards use it) feels like banging two rocks together compared to what our tooling can do, especially at that scale.

Source: I work at Mozilla and frequently onboard contributors of all backgrounds and levels.


Those 3 lines look nicer than the "3 commands to install Gentoo"!

https://web.archive.org/web/20160309055829/http://www.bash.o...


Beautiful. Now I can finally do the necessary custom build: Replace all keyboard shortcuts with emacs ones.


I'm curious, can you elaborate on what's missing at GH/GL?


I use GitHub every day, and the problems, or what I miss from the Mozilla stuff, are:

- git is not as modern as mercurial (no absorb, no evolve, no nice ncurses GUI, weird explicit branching model). I've been using both daily for more than ten years, but mercurial is largely superior in every respect

- no way to properly do patch stacks and proper reviews, with static analysis and linting integrated into the review tool, no losing comments to a force push or that kind of thing (Phabricator is amazing)

- no way to properly do private issues for security bugs or touchy audits that could reveal large problems

- no hierarchical categories for issues, no severity, no priority, little provision for advanced filtering for a project our size. Bugzilla is very nice and much better than anything else I've tried. You can use tags for a lot of that on GH, but it's messy

- code browsing is primitive compared to https://searchfox.org/ (but most code browsing tools are, in comparison)

- my notifications are completely flooded with lots of useless information on GitHub, but that might be fixable

- our CI system (Treeherder/Taskcluster) scales, works on Linux/macOS/Windows/Android across a bunch of versions and arches, and is integrated with all of the other tools mentioned. Things such as auto-running tests based on the content of the patch, automatic categorization and prioritization of intermittent test failures, or auto-recording test failures and offering a Pernosco recording showing the issue are just some of the features that we use daily without even thinking

There's probably a lot of other stuff that is better, but that's a nice start. If GitHub had a real patch-stack management story, a real issue tracker, and a real review tool, it would go a long way, but that's not the case.


That's one of the few comments where someone says that Mercurial is better than Git. Interesting to me, because I have used Mercurial and I like it; I now do mostly Git because that's what everyone uses. And I kind of prefer Git now because I grew more experienced with it, but really, it's a wash: they are both great and fundamentally very similar.

I had some coworkers who preferred Mercurial, the argument tends to be that it has a better user interface (not hard to beat Git on that one...), but also because of its immutability. Core Mercurial makes it really hard to change history, and branches are strongly tied to commits instead of being moving bookmarks (I know that Mercurial has bookmarks, but we used branches). Simple and stable.

Now the arguments you give for Mercurial are absorb and evolve, commands designed for rewriting history, something that we felt Mercurial tried so hard to make difficult (for example by hiding them behind extensions). Something I find quite interesting: has Mercurial changed that much in recent years? Maybe that's what you mean by "modern", even though the two came out at almost the same time.

About GitHub: the simplicity of its bug tracker is supposed to be a feature. Too often, we find ourselves with all sorts of fields that ultimately don't serve much purpose. For example, what about priority and severity? I understand priority: sort by priority to pick what you should work on. But then, what's the point of severity? If the two are correlated, one of them is redundant; if they are not, which one should I pay attention to? And what's the severity of a feature request? And am I allowed to make the cosmetic feature I really want a critical priority?

GitHub has tags and a description; this can already do a lot. It is messy, but real life is messy, and often, in bug trackers with lots of fields and categories, I don't know what to put there because it doesn't really apply, but I still have to put something, contributing to noise, which is also messiness, just shifted elsewhere.

This is not a criticism of the Mozilla way, just that there is more than one way of doing things.


FWIW, when you open a bug on Bugzilla you don't have to fill in all those fields. You fill in a few, and the others can be changed later - by yourself, or by the people who triage, process, and fix the bug.

GH issues as a bug tracker feel really too simplistic compared to proper bug trackers. You can work around the deficiencies with labels and GitHub Actions to some extent, but it always feels like a hack. Why invent ad-hoc labels with unclear meanings when you could use a superior bug tracker where fields are first-class citizens?

Although with GH, when you use all of those, the issue list still looks visually nice. Compare that to GitLab: if you go to the GL tracker on a GL project, you see an issue list with 100 labels on each item, in different colors; the UI of the issue list is madness.

(Been using GH on a daily basis at bigcorp for years for code tracking, but tickets are usually tracked in a behemoth like jira. I opened a few bugs in bugzilla over the years.)


Going deeper, this is like the discussion about "static types vs dynamic types" or "relational DB vs freeform JSON" and so on.

Having dynamic types and freeform JSON sounds nice for toy things, but at some point you're like "hmm, I want to find all bugs which affected component X and were fixed between versions 83 and 90", and then some structure is good. (Arguably, GH issue labels could be used for this, though.)


Can you explain what you mean by "patch stack"? I can roughly guess what you mean, but I'm not sure. Do you mean rebasing changes while preserving the diffs of what you were reviewing? (I find that essentially no Git service I have used does this well.)

Personally, I think I just got used to Git since it's 99.99% of what I use today. There are definite pain points, but like for most people it's kind of become "the way it is", which I don't think is healthy. While Git helped revolutionize the way SCM works, it's far from perfect (it still does certain things worse than a traditional centralized SCM, like binary file management and file renames), and I think its prevalence sometimes puts people in a collective blind spot/groupthink. Imagine if you are in your 20s and Git is the only SCM you have ever used: everything other than Git would be either old-school or weird.

I definitely agree on GitHub though. I think it's quite nice for small projects (I manage one myself) but I don't see how it's usable for a serious large software project for reasons you mentioned. Even simple things like correctly linking issues together, or code review tools just seem severely lacking to me.


https://firefox-source-docs.mozilla.org/contributing/stack_q.... It's then two clicks to diff two arbitrary versions of a particular patch. For complex patch sets that take a few rounds of review, it's nice to understand what changed since last time you looked at it. When reviewing code from e.g. a beginner, it helps to quickly check that all comments have been addressed and nothing else has changed, etc.

Then you merge ("land") the patch stack in one unit, and it's tested in one unit, and bisection tooling knows that it is the case and it's nice.


Searchfox looks pretty cool; I'm surprised I've never heard of it. I often wonder how many great pieces of software are out there but only used in a small niche.


It works for code bases other than Firefox; e.g. we index LLVM and WebKit. It has a special understanding of the Firefox code, though.

Here's a cool class diagram with emojis: https://asuth.searchfox.org/mozilla-central/query/default?q=...


Firefox has way too many components to be "just one repo you can fork".


All the code you need to build Firefox for Linux, macOS, or Windows is in one Mercurial repo: https://hg.mozilla.org/mozilla-central/

The repo is also mirrored on GitHub for convenience, though PRs are not accepted through GitHub: https://github.com/mozilla/gecko-dev

The build instructions start with a script (“mach bootstrap”) that will download and install all the blessed tools and SDKs needed to build: https://firefox-source-docs.mozilla.org/setup/index.html

Depending on your network and computer, it can be possible to download the source and compile and run your own Firefox build in less than 20 minutes.


The source is mirrored on GitHub here: https://github.com/mozilla/gecko-dev

Code search is here: https://searchfox.org/mozilla-central/ I’ve never seen a better search tool.

It’s become pretty straightforward to build in the past few years. You pretty much just need your system SDKs (which are listed), python, and git or hg (which is Python anyway!).

I promise you that you don’t need CI until you have a patch that works locally.


> self-hosted GitLab

It might just be me, but GNOME and its associated apps seem to have come a long way recently, and I can't help but think it's because they moved to an opinionated but modern build system (Meson) and all the repos are easy to access / contribute to on their GitLab instance.


I think that's understating the fix: there are a number of lines removed that, to be honest, I'd not feel confident hitting the delete key on until I had a deep understanding of what they were supposed to be doing.


The “deleted” lines were just changed indentation, plus a slight refactor to exit early on certain inverted conditions instead of checking conditions to enter the big if statement.


The actual fix seems even simpler than that! It really is just adding this check: !doc->HasFocus(IgnoreErrors())


Oh wow, I always just assumed it was somehow my fault. This has been following me for years, over different Windows versions and reinstalls. It didn’t happen often, but still regularly enough that I remember it.


Every non-native UI toolkit seems to have ever-so-slightly buggy tooltip behavior.


Yeah, this happens a lot when the DOM changes and the app loses track of the tooltip, so it can no longer dispose of it when you navigate away from the target.


Imagine how much money Mozilla could have made over the last 22 years if only they'd sold ad space in the stuck tooltip! They should have monetized their own bugs before somebody else did.


Yes! I thought I was going crazy, or was just somehow running a broken installation!


Same! I thought it was just me, happened on all of my machines, Linux and Windows


Same on macOS for at least a few years.


Quite a curious bug that many people seem to have come across, some mention that they encounter it at least once a day, and yet I don't think I've seen this behaviour even once in the last decade and a half. It doesn't seem limited by OS either, so I wonder what the actual conditions for the bug to manifest itself were.

(The only place I come across a similar bug is with LibreWolf (a customized privacy-enhanced Firefox). And I'm pretty sure that has to do with the fact that it's a flatpak, rather than to do with the browser itself, since no other Firefoxes I run have ever exhibited this behaviour.)


I believe this is a common enough bug in many GUI systems. For example, I have seen this behavior a lot from the Windows 10 taskbar; the only way out was to hover the cursor over the multi-window button and then out of the taskbar. A lot of websites also have a similar issue under the right conditions.

The main culprit seems to be desynchronized event delivery: you expect to receive an event when the cursor exits but somehow don't, for example because the window lost focus so no further mouse events could be delivered (depending on OS and preferences). Unless there is a dedicated way to reliably detect such cases (e.g. DOM `onmouseleave` or `onmouseout`), workarounds and hacks are needed---for example, generating synthetic events when the window loses focus.
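A minimal sketch of that workaround, with hypothetical names rather than any real toolkit's API: treat focus loss as an implicit mouse-leave, since the real leave event may never arrive.

```cpp
#include <cassert>

// Hypothetical tooltip controller, not any real toolkit's API. It cannot
// rely on receiving a mouse-leave event once the window loses focus, so it
// synthesizes one itself on focus loss.
class TooltipController {
 public:
  void OnMouseEnter() { visible_ = true; }
  // The happy path: the OS delivers a leave event and we hide normally.
  void OnMouseLeave() { visible_ = false; }
  // The workaround: after focus loss, later mouse events may never arrive,
  // so treat losing focus as an implicit mouse-leave.
  void OnFocusLost() { OnMouseLeave(); }
  bool IsVisible() const { return visible_; }

 private:
  bool visible_ = false;
};
```

Without the `OnFocusLost` hook, the tooltip shown on enter would stay visible forever once focus moves elsewhere.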


Hm, is that assumption correct? I don't know about Windows, but on Linux/X11, app windows do get mouse-enter/move/leave events even if they are not focused.


This is highly specific to toolkits, but windows in Windows (wink) stop receiving mouse events once the cursor leaves them, unless they have "captured" the mouse (`SetCapture`). And even where such APIs exist, the prevalence of faulty websites suggests that the easiest path is probably the incorrect one anyway.


The Devil is in the details. And in the case of the X11 Devil, they're literally named "detail", and they sound weird enough for Elon Musk to name his children after.

https://tronche.com/gui/x/xlib/events/window-entry-exit/

    typedef struct {
        int type;  /* EnterNotify or LeaveNotify */
        unsigned long serial; /* # of last request processed by server */
        Bool send_event; /* true if this came from a SendEvent request */
        Display *display; /* Display the event was read from */
        Window window;  /* ``event'' window reported relative to */
        Window root;  /* root window that the event occurred on */
        Window subwindow; /* child window */
        Time time;  /* milliseconds */
        int x, y;  /* pointer x, y coordinates in event window */
        int x_root, y_root; /* coordinates relative to root */
        int mode;  /* NotifyNormal, NotifyGrab, NotifyUngrab */
        int detail;
            /*
            * NotifyAncestor, NotifyVirtual, NotifyInferior, 
            * NotifyNonlinear,NotifyNonlinearVirtual
            */
        Bool same_screen; /* same screen flag */
        Bool focus;  /* boolean focus */
        unsigned int state; /* key or button mask */
    } XCrossingEvent;
    typedef XCrossingEvent XEnterWindowEvent;
    typedef XCrossingEvent XLeaveWindowEvent;
More details (these are just the "normal" ones, just wait till you read about "abnormal" NotifyGrab and NotifyUngrab mode and input focus events, and how grabbing interacts with input focus, and key map state notifications):

https://tronche.com/gui/x/xlib/events/window-entry-exit/norm...

https://tronche.com/gui/x/xlib/events/window-entry-exit/grab...

https://tronche.com/gui/x/xlib/events/input-focus/

https://tronche.com/gui/x/xlib/events/input-focus/normal-and...

https://tronche.com/gui/x/xlib/events/input-focus/grab.html

https://tronche.com/gui/x/xlib/events/key-map.html

And then you have colormaps and visuals:

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

>The color situation is a total flying circus. The X approach to device independence is to treat everything like a MicroVAX framebuffer on acid. A truly portable X application is required to act like the persistent customer in Monty Python’s “Cheese Shop” sketch, or a grail seeker in “Monty Python and the Holy Grail.” Even the simplest applications must answer many difficult questions:

    WHAT IS YOUR DISPLAY?
        display = XOpenDisplay("unix:0");

    WHAT IS YOUR ROOT?
        root = RootWindow(display, DefaultScreen(display));

    AND WHAT IS YOUR WINDOW?
        win = XCreateSimpleWindow(display, root, 0, 0, 256, 256, 1,
                                  BlackPixel(
                                      display,
                                      DefaultScreen(display)),
                                  WhitePixel(
                                      display,
                                      DefaultScreen(display)));

    OH ALL RIGHT, YOU CAN GO ON.

    (the next client tries to connect to the server)

    WHAT IS YOUR DISPLAY?
        display = XOpenDisplay("unix:0");

    WHAT IS YOUR COLORMAP?
        cmap = DefaultColormap(display, DefaultScreen(display));

    AND WHAT IS YOUR FAVORITE COLOR?
        favorite_color = 0; /* Black. */
        /* Whoops! No, I mean: */
        favorite_color = BlackPixel(display, DefaultScreen(display));
        /* AAAYYYYEEEEE!! */
        (client dumps core & falls into the chasm)

    WHAT IS YOUR DISPLAY?
        display = XOpenDisplay("unix:0");

    WHAT IS YOUR VISUAL?
        struct XVisualInfo vinfo;
        if (XMatchVisualInfo(display, DefaultScreen(display),
                             8, PseudoColor, &vinfo) != 0)
            visual = vinfo.visual;

    AND WHAT IS THE NET SPEED VELOCITY OF AN XConfigureWindow REQUEST?
        /* Is that a SubstructureRedirectMask or a ResizeRedirectMask? */

    WHAT??! HOW AM I SUPPOSED TO KNOW THAT? AAAAUUUGGGHHH!!!!
        (server dumps core & falls into the chasm)


Centuries later software archaeologists will discover your comment and be both enlightened and confused.


The tray menu of Steam (and a few other apps) has a similar problem where the menu gets stuck. Likewise, this hasn't been fixed in around 15 years, probably more.


Perfectly synchronized input event delivery across all applications was one of the strong and important guarantees that the NeWS window system was designed from the ground up to provide (ever since it was originally called "SunDew" in 1985).

But ever since then, no other window system really gave a flying fuck about that, and just blithely drops events on the floor or delivers them to the wrong place, gaslighting and training the users to make up for it by clicking slowly and watching the screen carefully and waiting patiently until it's safe to click or type again, before proceeding.

This kind of loosey-goosey race condition input handling problem that's intrinsic to every "modern" window system and web browser and UI toolkit is exactly why bugs like this tooltip bug appear across all platforms, and go unfixed for 22 years, because everybody is gaslighted into thinking that's just the way it has to be, and they're the only one with the problem, and it's unfixable anyway, and even if it were fixable, they deserve it, etc...

What could ever go wrong with the occasional indestructible, floating, randomly worded tooltip blocking your desktop or video player or game? It's "Tooltip Roulette"! Just hope you don't accidentally screen-share a naughty tooltip with your mom during a Zoom meeting.

(Not that NeWS was without its own embarrassingly stuck popup windows, but NeWS had an essential utility for removing embarrassing windows called "pam", named after the sound you made when you used it, or maybe the original easy cleanup canola oil spray ideal for use in cooking and baking.)

Failing to properly support synchronous event distribution makes it impossible for window managers (ESPECIALLY asynchronous outboard X11 window managers running in a different process than the window system) to properly and reliably support "type ahead" and "mouse ahead".

For example, when a mouse click on a window or function key press changes the input focus, or switches applications, or moves a different window to the top, or pops up a dialog, or opens a new window, the subsequent keyboard and mouse events might not be delivered to the right window, because they are not synchronously blocked until the results of the first input event are handled (changing where the next keyboard or mouse events should be delivered to), so clicking and typing quickly delivers the keystrokes to the wrong window.

I find it extremely annoying to still be forced to use flakey leaky "modern" window systems for 37 years after getting used to NeWS's perfect event distribution model, which is especially important on slow computers or networks (i.e. dial-up modems), or due to paging or thrashing because of low memory (NeWS competing with Emacs), or any other system activity, and especially for games and complex real time applications.

There are still to this day many AAA games that force you to hold a key down for at least one screen update, because they're only lazily checking for key state changes on each draw or simulation tick, instead of actually tracking input events, otherwise they don't register sometimes if you just tap the key, and you have to slowly mash and wait, especially when the game gets slow because there's a lot of stuff on the screen, or has a hiccough because of garbage collection or autosave or networking or disk io or...
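The polling-vs-events difference can be sketched like this (illustrative names, not any real engine's API): a game that only samples key state once per tick misses a tap that begins and ends between two ticks, while one that drains an event queue does not.

```cpp
#include <cassert>
#include <queue>

// Illustrative sketch, not a real engine's input API.
struct KeyEvent { bool down; };

// A game that only polls the current key state each tick.
class PollingInput {
 public:
  void SetKeyState(bool down) { down_ = down; }
  // Called once per frame: only sees the state *right now*.
  bool SampledThisTick() const { return down_; }

 private:
  bool down_ = false;
};

// A game that consumes the actual event stream each tick.
class EventInput {
 public:
  void Push(KeyEvent e) { queue_.push(e); }
  // Called once per frame: drains all events, so a quick tap still counts.
  bool PressedThisTick() {
    bool pressed = false;
    while (!queue_.empty()) {
      if (queue_.front().down) pressed = true;
      queue_.pop();
    }
    return pressed;
  }

 private:
  std::queue<KeyEvent> queue_;
};
```

If the tap (down then up) happens entirely between two ticks, the polling game never registers it, which is exactly why you have to hold the key through a frame.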

James Gosling first wrote about the importance of safe synchronous event distribution in 1985 in "SunDew - A Distributed and Extensible Window System":

http://www.chilton-computing.org.uk/inf/literature/books/wm/...

>5.3.3 User Interaction - Input

>The key word in the design of the user interaction facilities is flexibility. Almost anything done by the window system preempts a decision about user interaction that a client might want to decide differently. The window system therefore defines almost nothing concrete. It is just a loose collection of facilities bound together by the extension mechanism.

>Each possible input action is an event. Events are a general notion that includes buttons going up and down (where buttons can be on keyboards, mice, tablets, or whatever else) and locator motion.

>Events are distinguished by where they occur, what happened, and to what. The objects spoken about here are physical, they are the things that a person can manipulate. An example of an event is the E key going down while window 3 is current. This might trigger the transmission of the ASCII code for E to the process that created the window. These bindings between events and actions are very loose, they are easy to change.

>The actions to be executed when an event occurs can be specified in a general way, via PostScript. The triggering of an action by the striking of the E key in the previous example invokes a PostScript routine which is responsible for deciding what to do with it. It can do something as simple as sending it in a message to a Unix process, or as complicated as inserting it into a locally maintained document. PostScript procedures control much more than just the interpretation of keystrokes: they can be involved in cursor tracking, constructing the borders around windows, doing window layout, and implementing menus.

>Synchronization of input events: we believe that it is necessary to synchronize input events within a user process, and to a certain extent across user processes. For example, the user ought to be able to invoke an operation that causes a window on top to disappear, then begin typing, and be confident about the identity of the recipient of the keystrokes. By having a centralized arbitration point, many of these problems disappear.

[...]

>Hopgood:

>How do you handle input?

>Gosling:

>Input is also handled completely within PostScript. There are data objects which can provide you with connections to the input devices and what comes along are streams of events and these events can be sent to PostScript processes. A PostScript process can register its interest in an event and specify which canvas (a data object on which a client can draw) and what the region within the canvas is (and that region is specified by a path which is one of these arbitrarily curve-bounded regions) so you can grab events that just cover one circle, for example. In the registration of interest is the event that you are interested in and also a magic tag which is passed in and not interpreted by PostScript, but can be used by the application that handles the event. So you can have processes all over the place handling input events for different windows. There are strong synchronization guarantees for the delivery of events even among multiple processes. There is nothing at all specified about what the protocol is that the client program sees. The idea being that these PostScript processes are responsible for providing whatever the application wants to see. So one set of protocol conversion procedures that you can provide are ones that simply emulate the keyboard and all you will ever get is keyboard events and you will never see the mouse. Quite often mouse events can be handled within PostScript processes for things like moving a window.

NeWS Window System:

https://en.wikipedia.org/wiki/NeWS

>Design [...]

>Like the view system in most GUIs, NeWS included the concept of a tree of embedded views along which events were passed. For instance, a mouse click would generate an event that would be passed to the object directly under the mouse pointer, say a button. If this object did not respond to the event, the object "under" the button would then receive the message, and so on. NeWS included a complete model for these events, including timers and other automatic events, input queues for devices such as mice and keyboards, and other functionality required for full interaction. The input handling system was designed to provide strong event synchronization guarantees that were not possible with asynchronous protocols like X.

NeWS 1.1 Manual, section 3.6, p. 25:

http://www.bitsavers.org/pdf/sun/NeWS/NeWS_1.1_Manual_198801...

>Processing of input events is synchronized at the NeWS process level inside the NeWS server. This means that all events are distributed from a single queue, ordered by the time of occurrence of the event, and that when an event is taken from the head of the queue, all processes to which it is delivered are given a chance to run before the next event is taken from the queue. When an event is passed to redistributeevent, the event at the head of the event queue is not distributed until processes that receive the event in its redistribution have had a chance to process it. No event will be distributed before the time indicated in its TimeStamp.

>In some cases, a stricter guarantee of synchronization than this is required. For instance, suppose one process sees a mouse button go down and forks a new process to display and handle the menu until the corresponding button-up. The new process must be given a chance to express its interest before the button-up is distributed, even if the user releases the button immediately. In general, event processing of one event may affect the distribution policy, distribution of the next event must be delayed until the policy change has been completed. This is done with the blockinputqueue primitive.

>Execution of blockinputqueue prevents processing of any further events from the event queue until a corresponding unblockinputqueue is executed, or a timeout has expired. The blockinputqueue primitive takes a numeric argument for the timeout; this is the fraction of a minute to wait before breaking the lock. This argument may also be null, in which case the default value is used (currently 0.0083333 == .5 second). Block/unblock pairs may nest; the queue is not released until the outermost unblock. When nested invocations of blockinputqueue are in effect, there is one timeout (the latest of the set associated with current blocks).

>Distribution of events returned to the system via redistributeevent is not affected by blockinputqueue, since those events are never returned to the event queue.
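The nesting rule in that passage can be sketched as follows (in C++ purely for illustration; NeWS itself worked in PostScript): blocks nest by counting depth, and the queue is released only at the outermost unblock.

```cpp
#include <cassert>

// Illustrative sketch of the nesting semantics quoted above, not NeWS code.
// blockinputqueue/unblockinputqueue pairs nest; events are distributed only
// while no blocks are outstanding.
class InputQueue {
 public:
  void Block() { ++block_depth_; }
  void Unblock() {
    if (block_depth_ > 0) --block_depth_;
  }
  // Events may be taken from the head of the queue only when unblocked.
  bool CanDistribute() const { return block_depth_ == 0; }

 private:
  int block_depth_ = 0;
};
```

(The real primitive also takes a timeout that breaks the lock, omitted here for brevity.)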

The X-Windows Disaster:

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

>Ice Cube: The Lethal Weapon [...]

>The ICCCM is unbelievably dense, it must be followed to the last letter, and it still doesn’t work. ICCCM compliance is one of the most complex ordeals of implementing X toolkits, window managers, and even simple applications. It’s so difficult, that many of the benefits just aren’t worth the hassle of compliance. And when one program doesn’t comply, it screws up other programs. This is the reason cut-and-paste never works properly with X (unless you are cutting and pasting straight ASCII text), drag-and-drop locks up the system, colormaps flash wildly and are never installed at the right time, keyboard focus lags behind the cursor, keys go to the wrong window, and deleting a popup window can quit the whole application. If you want to write an interoperable ICCCM compliant application, you have to crossbar test it with every other application, and with all possible window managers, and then plead with the vendors to fix their problems in the next release.

Window Manager Flames:

http://www.art.net/~hopkins/Don/unix-haters/x-windows/i39l.h...

>Why wrap X windows in NeWS frames? Because NeWS is much better at window management than X. On the surface, it was easy to implement lots of cool features. But deeper, NeWS is capable of synchronizing input events much more reliably than X11, so it can manage the input focus perfectly, where asynchronous X11 window managers fall flat on their face by definition. [...]

>If NeWS alone manages the input focus, it can manage it perfectly. An X window manager alone cannot, because it runs in a foreign address space, and is not in a position to synchronously block the input queue and directly effect the distribution of events the way NeWS is. But even worse is when an X window manager and NeWS both try to manage the input focus at once, which is the situation we are in today. The input focus problem could be solved in several ways: OWM solves the problem elegantly, as PSWM did in the past; OLWM could be made NeWS aware, so that when our own customers run our own external X window manager on our own server that we ship preinstalled on the disks of our own computers, OLWM could download some PostScript and let NeWS handle the focus management the way it was designed.

>It's criminally negligent to ship a product that is incapable of keeping the input focus up to date with the cursor position, when you have the technology to do so. Your xtrek has paged the window manager out of core, and the console beeps and you suddenly need to move the cursor into the terminal emulator and type the command to keep the reactor from melting down, but the input focus stays in the xtrek for three seconds while the window manager pages in, but you keep on typing, and the keys slip right through to xtrek, and you accidentally fire off your last photon torpedo and beam twelve red shirt engineers into deep space!


I think about what could have been, how it should work, whether we could fix this, every time I go to click or touch or type and the system I’m using directs my input to somewhere other than I intended.

This happens several times a day.


I see this on a daily basis, most times multiple times a day, on a Linux KDE Plasma system installed natively on openSUSE Tumbleweed. The best I could describe it is that it happens when I switch to another virtual desktop using keyboard shortcuts on the exact same frame that the tooltip pops up. My guess is it was some kind of race condition that required very precise timing, which made it difficult to narrow down, paired with heavy use of keyboard shortcuts, meaning a lot of people likely were not affected by it. Very glad to hear it has been fixed and looking forward to getting a version of Firefox with it!

I just did some testing, and getting it to happen is as simple as hovering over something and swapping to another virtual desktop before the popup shows up, with Firefox not being the new active application.
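That race can be sketched as a hover timer that doesn't re-check state when it fires (hypothetical names; the actual fix adds an analogous HasFocus check in ShowTooltip):

```cpp
#include <cassert>

// Hypothetical sketch of the race: a tooltip armed by a hover delay fires
// after the window has already been deactivated (e.g. a virtual-desktop
// switch). The buggy path only checks "was hovered when the timer was
// armed"; the fixed path re-checks focus at show time.
class Tooltip {
 public:
  void ArmShowTimer() { pending_ = true; }
  void OnWindowDeactivated() { focused_ = false; }
  // Buggy path: once the hover timer fires, show unconditionally.
  void FireTimerBuggy() {
    if (pending_) visible_ = true;
    pending_ = false;
  }
  // Fixed path: re-check focus when the timer fires.
  void FireTimerFixed() {
    if (pending_ && focused_) visible_ = true;
    pending_ = false;
  }
  bool IsVisible() const { return visible_; }

 private:
  bool pending_ = false;
  bool focused_ = true;
  bool visible_ = false;
};
```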


I am in the same boat as the OP. I don't even know where I should click to see a tooltip to begin with!


Hover over "X hours ago" of your comment for example


Hmm, it doesn't appear in my Firefox, but I tested and the tooltip appears in ungoogled-chromium, so I guess it is blocked by an extension I am using.


It's a very workflow-specific bug. I pretty much only encounter it while playing video games.

Turns out that while gaming I often have a Youtube video playing in one browser tab, and use another browser tab to look up game-related information. So it is really common for me to interact with the tab bar (which triggers the tooltip) right before alt-tabbing into my game.

During day-to-day browser use my cursor is almost always located somewhere over the website content - which rarely triggers a tooltip.


I experienced the bug a lot when interacting with YouTube video embeds on other web pages. It goes like: hover cursor over the embedded player's full screen control, "Full screen (f)" tooltip appears, click control, video goes full screen but tooltip remains over top of playing video.

(Linux X11 with MATE's Marco wm)


It's the same thing with games - whenever a game from a studio known for "broken" titles comes out, people post compilations of strange things happening, and it just never happens on my end. Natural bug resistance, perhaps.

However, I do frequently get an innocent bug, where opening my bookmark toolbar's extension (the >> icon in the top right) results in it displaying all the bookmarks in a drop-down list, instead of the ones not appearing on the toolbar.


That would make sense to me - the only place I've seen this is with a snap app, I haven't seen it with firefox.


Interesting, this bug was filed before even the first version of Firefox was ever released -- https://en.wikipedia.org/wiki/Firefox_early_version_history. Impressed that they've kept the bug tracker history working for so long.


Did the bug tracker history carry over from Phoenix/Firebird maybe?


This was a bug in the Gecko engine, which was used in Netscape 6 and the Mozilla Suite (Navigator and Communicator) before Firefox was created (in response to Mozilla Suite bloat). Gecko still uses the same Bugzilla bug tracker.


This looks to have been from when it still was Mozilla


Oh god, finally. A related bug, https://bugzilla.mozilla.org/show_bug.cgi?id=1569439 , was closed 9 months ago, and that made things slightly better (that one is about tooltips not disappearing when simply switching to another app), but I was incredibly disappointed to find that they'd still stay up when switching to another workspace.

On the downside, this is sorta a "wrong" fix. Tooltips should show up even if the window doesn't have focus (at least on Linux with GTK apps, which Firefox attempts to emulate). They should fix the actual underlying issue of them not disappearing when the mouse pointer isn't actually over the window anymore, when workspaces change.


Oh that one was the one I hit the most, not TFA; I hadn't seen this in a while and now I know why.



Is this a commit?

If so - is it normal to do indentation changes and actual code changes in the same commit?

Personally, I would first have committed the indentation changes and then done a second commit with the code changes.


If you look closely, you'll see that it's not merely an indentation change. The bulk of the function used to be inside a large condition, but that has been changed to an early return. Still, it would have been a little nicer if it had been done in two commits.


Here's a better view of it: https://phabricator.services.mozilla.com/rMOZILLACENTRAL8ae3...

Basically it just adds a one line check near the top of a ShowTooltip() function for whether "doc->HasFocus(IgnoreErrors())", and, if not, returns early.


The indentation changes are because of a removed if block.


i always struggle with this. i usually end up with code changes first because i want to test code before committing, which means i can't commit a whitespace change before i know the code change works.

and every time i think about the problem i stumble over python where the two can't be separated.

i believe in the end a better solution would be to mark whitespace changes in a different color. or even better mark each character that changed, not just the line.

in other words: we want better diff tools


Gerrit is capable of showing only non-whitespace diffs.


Wow, if you exclude lines changed in the commit due to indenting changing, there are only five new lines of code for this change!


The changes, in sum, in nsXULTooltipListener.cpp

    -  if (tooltipNode->GetComposedDoc() &&
    -      nsContentUtils::IsChromeDoc(tooltipNode->GetComposedDoc())) {
    +   // Make sure the document still has focus.
    +   auto* doc = tooltipNode->GetComposedDoc();
    +   if (!doc || !nsContentUtils::IsChromeDoc(doc) ||
    +       !doc->HasFocus(IgnoreErrors())) {
    +     return NS_OK;
    +   }
    ...
    -   }
    }
    return NS_OK;

If I see correctly, all the changes are:

1) remembering the result of tooltipNode->GetComposedDoc() and adding the test doc->HasFocus(IgnoreErrors()). Note that writing this now is maybe easier than it was when the initial code was written; it could be that "auto" with its current semantics didn't exist in C++ (or in the compilers/platforms used) at that time.

2) Explicit return. Instead of:

    if (b)
      X;
    return OK;
now it's:

    if (!b)
      return OK;
    X;
    return OK; 
which in this case increases readability, as X spans many lines and b is a more complex condition.


Can anyone give a brief synopsis of the state of XUL and Gecko in Firefox/Mozilla? Are the XUL (XULRunner) and Gecko runtimes still actively worked on? Or have they been absorbed into the Firefox runtime long ago?

I recall reading some time ago that the work on XULRunner essentially came to a halt, and that components in Firefox/Mozilla that depend on XUL would slowly be phased out.

But does that mean that work on the XUL runtime and components in Firefox also effectively came to a halt? I figured that most of these really old XUL bugs in Firefox were never going to be fixed, instead replacing a whole layer/component dependent on XUL was seen as a better use of time and resources.

Edit: anyone have a good diagram of the layers/components in Firefox? Something that can illustrate where Quantum, Gecko, XUL, etc. all live in Firefox. It would be really cool (doubt it exists) if there was an animated diagram that would show the changes of this stack over time.


Most of the challenges of a bug isn't the fix, but rather figuring out the behaviour.

Especially true if breakpoints don't work :)


Always a bummer when your tools for debugging don’t work. :/


>I'm seeing this problem in 1.6 on Win98.

Bug is fixed, please retest :)


I would if they backported the fix to a version that runs in Windows 98.


Oh. I have seen this behaviour in other programs, recently mostly in Cisco AnyConnect. I always thought this was an OS glitch.


It's pretty easy to see why this behavior can come from programs that don't use the standard OS toolkits, but go for custom or cross-platform solutions:

- show the tooltip

- rely on move events on the main window to hide it

The move events are usually received only while the window is visible (and focused, depending on the OS), unless you take extra measures to grab the pointer and/or listen for global events, which involves more trickery to get right.

It's the classic scenario where a decent system toolkit has this figured out and solved for you, while doing the same by hand looks somewhat easy and normally works 95% of the time, but fails in odd ways and drives your power users crazy.

Driving UIs heavily by keyboard shortcuts is the surefire way to hit that remaining 5% all the damn time nowadays...


Yeah, I imagined the same, but can't recall or reproduce in anything else now. Possibly a Mandela Effect?


I'm pretty sure the Cisco one is permanent and reproducible (on macOS) :D


I see it a lot with Windows Explorer and Microsoft Word, never with Firefox, even though I use all of them daily on my work laptop. It seems people have so different usage patterns that they see completely different bugs...


Altium Designer used to have the exact same issue for a very long time, but I think it finally went away some time ago.


Finder does it to me all the time


Had a feeling Finder did it too; however, changing focus into the Shortcuts app revealed stuck Quick Actions that baffled me.


Meh, 22 years? That's not even close to the longest standing bug I know of that's been fixed. ;)

Here's the story of a 33 year old bug in yacc that was fixed in 2008: https://undeadly.org/cgi?action=article&sid=20080708155228


The malloc design that led to that yacc bug being discovered seems to have another interesting property which I didn't see discussed there:

By placing large allocations (those larger than half the page size) at the end of a page, I would think it also allows them to be resized in more cases. Smaller allocations can then be made in the gap until it's filled. Then, if realloc is called on the large allocation and enough space remains for the difference, it can be shifted backward with memmove. Whereas, if the large allocation is placed at the first available position and further allocations are made after it, it has no space to grow.

Disclaimer: I haven't implemented a memory allocator, so my understanding may be off.
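Same disclaimer applies here, but the idea is easy to model. A toy sketch (made-up layout and names, not a real allocator): the large allocation stays flush against the end of the page, small allocations fill from the front, and growing keeps the end pinned while memmove slides the data to an earlier start.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Toy model of the idea above, not a real allocator. One page holds a
// "large" allocation flush against its end and small allocations filling
// from the front; growing the large allocation slides its start backward
// with memmove as long as a gap remains between the two regions.
constexpr std::size_t kPageSize = 4096;

struct Page {
  unsigned char bytes[kPageSize];
  std::size_t small_used = 0;  // bytes consumed from the front
  std::size_t large_size = 0;  // large allocation occupies the last large_size bytes

  unsigned char* Large() { return bytes + kPageSize - large_size; }

  // Place the large allocation at the end of the page.
  unsigned char* AllocLarge(std::size_t n) {
    large_size = n;
    return Large();
  }

  // Grow without relocating to another page: keep the end pinned at the
  // page boundary and memmove the data to an earlier start.
  unsigned char* GrowLarge(std::size_t new_size) {
    if (new_size <= large_size) return Large();
    std::size_t gap = kPageSize - small_used - large_size;
    if (new_size - large_size > gap) return nullptr;  // would need a real move
    unsigned char* new_start = bytes + kPageSize - new_size;
    std::memmove(new_start, Large(), large_size);
    large_size = new_size;
    return new_start;
  }
};
```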


And this is why tickets shouldn’t be closed just because they are old.


I like making fun of them for not fixing decades-old bugs (and there's still a considerable number left) and didn't really have hope they'd ever care.

What beautiful news if this indicates that some other old bugs will eventually be fixed as well!


you may now predict when old bugs are going to be fixed. any bug at least 20 years old is eligible. nearest prediction wins.

if you want to play the long game you may also predict if younger bugs get fixed more than 20 years after reporting. (if the bug gets fixed earlier than 20 years then the prediction becomes void)

place your bets!

*terms and conditions apply. firefox developers are not eligible.


Oh. I'm affected, seeing it several times a day, but I never managed to reproduce 100%. Didn't bother me too much but I guess I could happily do without.

It also affects Thunderbird.

I had no idea it was a well known, 22 year old bug.


I absolutely love that the bug report was not closed as « inactive » or « stale » after like 6 months.


I'm so relieved that stale-bot didn't become as common as I feared it would. I hardly ever see it these days, and good riddance!


my impression of the mozilla firefox bugtracker too - you might be taken aback by the lifetime of some long standing bugs, but what is confirmed, stays open.

You'll collect low effort comments, but at least it avoids duplicates and the discussion is preserved in-thread.

That impression led me to meaningfully contribute in the bugtracker.


A couple years ago I was Googling myself and found to my surprise a bug I had opened for Mozilla on BeOS in roughly the year 2000 was still open. Searching their Bugzilla now, I find no references to BeOS at all, were their issues pruned at some point?


Issues for unsupported platforms would be closed (RESOLVED WONTFIX) but they wouldn't be deleted. Bugzilla's default search settings only return open bugs. Here's a search for all bugs with OS = "BeOS". Maybe one of them is your bug?

https://mzl.la/3twxg9n


This is probably quite a common phenomenon in open source software. Namely, an unglamorous bug has been around forever and someone finally gets annoyed enough to roll up their sleeves and fix it themselves.


Apparently it's fixed in FF119. I'm on 102esr under Linux, but I can't replicate.

[Edit] OIC, it's a heisenbug. Funny though; I'm a longtime FF user, and I don't recall witnessing it.


Ugh, that sucks. I've been relying on this bug to create persistent sticky notes on my screen. Why do browsers insist on breaking stuff?

https://xkcd.com/1172/


Tooltips are the worst kind of information providers on a website. First of all, you don't know they're there; you actively need to search with your mouse cursor and wait to see if something lights up. And then you can only read it, with no way to copy/paste information out of it. Often they also cover up other information when they do pop up, and most of the time the extra information they provide is useless.

Tooltips should be removed entirely instead of fixing 22 year old bugs.


What's a good alternative?


There is no alternative. Either the information is required and useful and should be shown always, or it has no value and you can just leave it out.


Firefox is just going to die from bureaucracy. I have another bug report, on middle-click drag scrolling, that took them years to acknowledge and remains unresolved.


Holy shit. This bug literally drove me away from Firefox a couple years ago.

Wild.


It's definitely one of those things that can drive someone a little crazy.

My solution was to activate/trigger another tooltip, that would make the frozen one disappear, and hope the new one would behave normally.

Wasn't ready to give up Firefox, but can't say that the thought didn't cross my mind.

Now that I think about it, still to this day I see similar issues in other apps where some kind of popup/overlay/tooltip persists across tabs/windows/applications.


I'm hoping they fix that "you shall enter your credentials manager password or we will bug you for it on every. single. page with a login form" behavior (which on some webpages is literally every page). It forces one to stay logged into the password manager all the time, instead of authenticating just when needed.

Still, I trust Mozilla more than some random xyzPass.


Hey Dang, isn’t this post breaking the guidelines? The post is not using the original title:

> 148624 - (tooltip-ghost) Tooltips persist in foreground when Firefox is in background

My point here is a meta point: it’s great that we can change the titles to something more descriptive.


Does anyone have a link to the commit that fixed the bug? I'm curious how simple it was to fix

Edit: nvm, I found the link in the linked page: https://hg.mozilla.org/mozilla-central/rev/8ae372dc88d1


Most of the commit is changing indentation for the outer `if`... the core of the fix is `!doc->HasFocus(IgnoreErrors())`

Bug squashed!


Haha wow. I've been dealing with this bug for so long I hadn't even considered it could be fixed. Wild.


Wow, this brings back memories. Years ago, the persistence of this particular bug led me to believe that Firefox was "buggy" and "messy". It broke any perception in my mind that Firefox is an elegant tool. It hit whatever that cognitive bias is [cit needed] that tells us "if it can't get simple things right, what else is it getting wrong?" or perhaps "beauty good / ugly bad".

I'm older now and have Experience(TM). I know that: a) many expensive commercial tools are far less elegant and just plain broken (looking at you, Autodesk AutoCAD and Revit since the 2010s), and b) my own idiocy/ignorance has larger-than-expected extents.

I wonder if I'm just jaded. Is it correct to believe that Firefox is "good" software? Is it what it is designed to be?


So, there is hope that in 10 years this bug in LibreOffice is fixed :( https://bugs.documentfoundation.org/show_bug.cgi?id=46429


I've just been trying to think how long the most annoying Firefox issue I see every day has been around. It's at least 10 years, and it's very disruptive: copying will, 70% of the time, not actually copy (mostly when copying URLs, as in the right-click "copy link" context option, but it happens on other occasions too, even when using ^x).

I can't be the only one suffering this, because no other application does it, although Thunderbird has on occasion, and that is hardly the worst thing about it.

And then there is Firefox on Android, some genius decided to disable pull down refresh by default, not a bug I guess but more annoying than the plethora of problems.


Could this be the same tooltip issue I see everyday on Thunderbird? I must use Firefox differently but in Thunderbird I'm often left with a tooltip telling the date of the last message I was looking at after changing to a different app.


Yes. Should be fixed in thunderbird nightly (or daily, as they call it). Let me know if it works!


One recent bug I've seen with tooltips is on wayland... it causes the whole display window to flicker between the current renderer and what appears to be an old back buffer... hopefully it doesn't stick around for 22 years.


That is normal behaviour (redraw the screen with every pixel moved). Welcome to the 21st century. /s


You are holding it wrong, in Wayland every frame is perfect.


OK, that’s one major tooltip annoyance fixed. That was one that was very annoying for some usage patterns, but never really debilitating. But if we’re trying to fix ancient tooltip bugs, here’s one that is debilitating for some users:

Tooltips are positioned relative to cursor position, below and to the right, but don’t take cursor size into account, and so if you have a comparatively large cursor, it occludes the tooltip.

This has been filed in a couple of guises a few times, starting twenty years ago: https://bugzilla.mozilla.org/show_bug.cgi?id=248718, https://bugzilla.mozilla.org/show_bug.cgi?id=296191, https://bugzilla.mozilla.org/show_bug.cgi?id=557754, https://bugzilla.mozilla.org/show_bug.cgi?id=1712669. (I don’t really get why bug 248718 and bug 557754 were closed as duplicates of bug 1712669; I tend to feel that the oldest one should be the canonical one almost as a matter of principle, especially when it’s so much older.)

This will affect many more users now than twenty years ago, because some time during Windows 10 they added a proper cursor size scale, so you can easily get a huge cursor (which I strongly suggest people try; I was amazed at how much it improved things, except for this class of bug). The old “extra large” cursor was equivalent to what’s now size 3 and the scale now goes up to 15, if I remember from a few years ago accurately. Size 4 is already enough to lose a couple of characters from the start of tooltips.

(This is all about native tooltips, but naturally this is also a problem with DOM-based fake tooltips in web pages: they have no access to cursor dimension information, so no way to be certain of dodging the cursor. I recommend placing such tooltips above the cursor position, as that’s the most likely to be clear. Bug 1712669 comments 5 and 6 observe how this is a problem on Bugzilla itself—they put a DOM fake tooltip below on dates.)


Here is another good one: too easy to hit CTRL+Q instead of CTRL+W https://bugzilla.mozilla.org/show_bug.cgi?id=52821

Only 20-and-a-half years to fix that one! In this case it wasn't because nobody got around to it, but because people could not agree on the fix, even though more or less everyone agreed that the old behavior was bad.


There is a browser.warnOnQuitShortcut setting in about:config that can have it pop up a nice warning and stop you accidentally Ctrl+Q'ing Firefox. I don't know if they've gotten around to making that the default for new profiles. That bug thread is an irritating mess of conversations.


So you're telling me there's still some hope for my favorite JIRA tickets[1]!

1. https://jira.atlassian.com/issues/?jql=statusCategory%20%3D%...


It's actually pretty impressive that the bug tracking system is keeping data from 22 years back and maybe more.


Better Nate than lever!


Funny some guy on HN was just arguing with me the other day that the oldest Firefox bug was 11 years old and that they fixed over 1,000 bugs in their last release. I tried to search their bug tracker to see if this was true but the web server doing the searches was unresponsive.


Wow. Thinking about it, there are businesses built, peaked and destroyed in that time frame. :)


Mozilla being one of them.


Ah, something extremely similar affects SyncTrayzor on Windows too! https://github.com/canton7/SyncTrayzor/issues/760


I love Mozilla and I too have experienced this bug. I'm glad this has been fixed, even if it didn't annoy me that much. It solidifies my choice in browser.

Brb while I test Sonoma to see if I can create two separate egg timers with Siri...


Now if they could only fix how horribly Hacker News is displayed on mobile...


I'd always thought this was just from my Win 10 install being "crufty" (a few years old, with some hardware swapped along the way).

Never ran into it on my Linux box so I thought it was just Windows weirdness.


Lots of software has problems with persisting tooltips. I regularly have tooltips from VS Code get "stuck" and they can't be removed until I close the application.


I actually saw this happen even more in the Linux MS Teams client (which uses Electron or similar I think?), but I've encountered it with Firefox as well.


Yup, it's Electron; it even sometimes prompts me to "use the app" while using the app... anyway, that thing is getting deprecated


Visual Studio has had the same problem for at least a decade.


Completely normal bugfixing behavior, nothing to see here.


This problem plagues me; I even saw it today, and I'm up to date on the latest Firefox for Fedora. I wonder if it's rolled out yet? Here's the thing, though: they probably didn't fix it for me. I use focus-follows-mouse, so in my case Firefox will probably think it has focus (because it does, briefly), even when it's just a little scrap peeking out from under a stack of other windows.


Have you tried with firefox nightly? The change hasn't made it to the release version yet (https://whattrainisitnow.com/calendar/)


I can't recall encountering this bug in all my years of using Firefox. I can't seem to reproduce it while using i3 window manager.


I am seeing this problem with Thunderbird as well since upgrading to Thunderbird 115 - I hope they are a bit quicker than the Firefox devs.


The codebase for that UI part might be the same... Actually TB 115 might be the reason that the bug has finally been fixed in Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1843203#c9


Ah, thanks. The tip at the end seems to fix this on TB 115.


I can't help but contrast this to the Kubernetes project where stalebots close issues after 90 days of inactivity.


At least they are revisiting those bugs. And then people say Nvidia on Linux is not going to improve. Hope.


This bug really annoyed me on Firefox and Thunderbird. Over the last two years I have learned to live with it.


Now fix the kerning problem on canvas. That bug significantly reduces the readability of Google Docs et al.


I have always had this bug but I always thought it was a bug in my window manager :-)


Now they can maybe fix bug where favicons randomly show negative image on linux...


Tooltips are annoying; there should be an option to show them only when a key (e.g. Shift) is also pressed.


> "Guessing XP Toolkit/Widgets"

That comment aged so well.


XP stands for "cross platform": https://www-archive.mozilla.org/xpfe/


Does this mean it will be fixed in Thunderbird too?


Looks like the fix is in Mozilla's core UI widget code, nothing specific to Firefox. So yes, it should be fixed in TB too.


It's not 22 years, it's 21.


literally was annoyed yesterday by this. i remember it from the compiz 0.8.1-6 days


Finally!


This will be the Year of Firefox.


What would it take for Firefox not to prompt you to save a password that fucking didn't work?


A sane method for websites to indicate if a password worked or not. And for enough websites to use it that password saving heuristics could be disabled.


When you submit a form with a password and the password is wrong, you're usually taken back to the same form with the same password. The browser is showing you a form that, if it were submitted, would update the same saved password. Yet the popup dialog asking you whether you want to save the previous obviously wrong value is still persisting.


Why would you want that? It's an incredibly useful feature. Now you can't forget that this password didn't work.


But you can store only one not-working password per username and website. Clearly it should support multiple passwords for the same username and website and have a flag "working" or "not working". Bugmenot has this all figured out years ago!

(Also, your username triggers me in a wildcard sort of way.)


This bug is older than me


This bug is older than the initial release of Firefox by two years.


Imagine paying your CEO $3 million but fixing a bug like this only after 20+ years.


It would be good if Mozilla were to now fix other well-known bugs that have been in Firefox for decades.

First, copying/pasting highlighted webpage text with Ctrl-C/V is unreliable (no text in paste) and copying via the context menu must be used if one wants it to work every time.

Second, selecting text with mouse is awkward and irregular in many webpages. Moreover, whether one selects text from the right or the left makes a difference (it's often much harder to select one way than the other).

Third, it is impossible to select only part of a URL in a webpage and only the full address is selected (and even then just selecting the address from other text is often difficult). One has to paste the full address into a text editor and select the wanted section of the URL from there. Why would one want to do this? Well, the URL may be a long address to a specific page on a website and before or after one visits that page one may want to visit its homepage so copying only the homepage section of the address makes sense.

Mozilla, your text selection capability in Firefox is essentially not fit for purpose.

Fourth, entering dialogue into a box like I'm doing here on HN to post this text is always fraught with the risk of losing it before the text is posted. Accidentally refreshing the page often happens before posting, and thus the text is lost irretrievably, which is damn annoying if one has written a lot of text. Simply, there is no progressive save or undo in this situation. Why on earth not? Word processors have had undo for decades, so why not Firefox? (If I know the text is more than a sentence or two I edit with an external editor. Mozilla, this is unacceptable and you ought to be ashamed of the fact.)

The fact that Firefox does not have undo or that text selection is broken amounts to primitive and unfinished engineering (especially so after decades of Firefox development), thus one has to speculate what other important code is also so rough and ready and unfinished that we cannot see!

Mozilla, if any of you ever bother to read this then you ought to know how annoying these bugs are. (One has to wonder if you ever bother to use your own product—or perhaps you've never investigated how your users use your product.)

One can understand bugs and limitations in a new product but they all ought to have been polished out after several decades of development. If you want to recapture browser market share then you could start by fixing these annoyances. Moreover, it would be a good PR exercise if you explained why these bugs have been ignored for so long. (Incidentally, there are other issues I've not mentioned here.)

BTW, many of us have complained about these problems for years and we've never received an explanation as to why you've never addressed them (it's little wonder your market share has dropped).


Millions of eyes make all bugs shallow!


Better late than never, I guess.


That bug was a college kid!


Software is gradually settling into a pattern of layering, like sediment. Even with most modern "hardware-level" applications, there are still layers of OS magic happening under the hood.

We're now into maybe decade 4-ish of software dependency.

There was a scene in one of Alastair Reynolds's books where a character basically was a computational archaeologist. That resonates with me a lot.

In a couple centuries, it's not a terrible prediction of the future that software stacks will accumulate cruft over time and debugging certain issues will require immense financial effort to both dig through the layers of software commits and historical proposed merge commits, plus adding extra tests on top of bedrock code and its fixes.

No idea what this will look like. I imagine easily recreated functionality will keep popping up in assorted pips and npms every decade, regardless of prior art. Every new programmer wants to make a stamp on the world.

There's some saying about history repeating itself, but I'm dumb and don't remember.


> There was a scene in one of Alastair Reynolds's books where a character basically was a computational archaeologist.

Not sure about Alastair Reynolds, but there's Pham Nuwen in Vernor Vinge's A Deepness in the Sky who is indeed that: a programmer archaeologist (and programmer-at-arms to exploit the weaknesses in the other party's software midden).


I love that book.

I liked that Pham founds the Qeng Ho specifically so that he could be the one commissioning new software for the ships. He wanted to be the one putting hidden backdoors, secret passwords, and booby-traps into it. Of course it’s built on Unix, but if you’re paying attention you’ll notice that when someone enters a command they type “a column of text”.

He also built interstellar communication networks specifically so that civilizations that fell could rebuild more quickly (at least once they reinvented radio) and could learn the Qeng Ho language and systems in the process. But he also put in encrypted channels so that Qeng Ho traders would have inside information and therefore an edge against outside traders.

And then Nau gets his hands on it all, with his crew of Focused to examine every line of source code…


And even with Nau's Focused working for years, they don't uncover the backdoors in the localizers, between the sheer amount of code and whatever obfuscation Pham added.


Spoilers! Also, they had very complex “ensemble behavior” that resisted analysis. I once wrote some code that I was almost not clever enough to debug, so I can believe it.


Today, Vinge could have written: "with his LLMs to examine every line of source code ..."


In fairness to Vinge, that wasn’t a direct quote :)


Ah actually I think that's right. Thanks!


Not only that, but also the concept of open-source development is not the panacea we believe it to be. Bear with me.

Software is extremely complex, even if it is open-source, no one except the original developers and very dedicated people will attempt to patch the myriad of issues and bugs they encounter daily. And even if we do spend the time to track down and fix a bug, there's a political and diplomatic game to convince the maintainers to incorporate your fix. It is not uncommon for a PR to just sit, unreviewed, for years. Open-source does not and will never scale, because software is orders of magnitude too complex.

Outside of software, this problem is lessened because maintainership is distributed: if your car engine breaks, you do not depend on your manufacturer to have enough time and energy to fix it. There are thousands of licensed garages that can do it for you. And, not least, the real world is much simpler than any piece of software, which is effectively completely ad-hoc: knowing how Chrome works will not help you fix this Firefox issue, whereas if you can fix the carburettor on a Honda car, you probably can do the same on a FIAT.

Open-source/distributed development and bug fixing worked much better when computers had 64 kB of RAM and programs were no more than 10 pages long.

EDIT TO CLARIFY: I'm not talking of open source vs commercial, or other types of governance. I'm talking more abstractly about the fact that having source available and open contributions does not noticeably increase the number of bugs fixed. This comment is about software complexity and the logistics of distributed bugfixing.


>Open-source does not and will never scale, because software is orders of magnitude too complex

But there are examples of long-time open source projects all over the place. This sounds like an argument for open source.

If you work for a for-profit company you face two different problems: overnight the company can disappear, and the IP is lost/locked forever; problems are only ever fixed if there's a profit incentive. That works, a lot, but it's not perfect either.


Mind you, I'm not talking of open source vs commercial, or other types of governance. I'm talking more abstractly about the fact that having source available and open contributions does not noticeably increase the number of bugs fixed.

It's a discussion about software complexity and logistics of distributed bugfixing, not organisational.


You said “does not noticeably increase”; you need a reference point for “increase”. If you’re not comparing open source to other types of governance, then what are you comparing it to?


The problem is also in (lack of) modularity that makes fixing small things disproportionately onerous.

If your car's engine breaks, it's usually pretty localized. Most of the time it's enough to open the hood and remove a few small parts to reach it. If your stoplight breaks, again, the scope is pretty local.

To fix or even diagnose an issue with a tooltip in Firefox, you have to rebuild the whole thing, and it's about as involved and long as rebuilding a car. And even though Mozilla invented Rust to make Firefox development easier, it's far, far from just saying `cargo build`.

This raises the barrier to entry quite noticeably, even if you are an experienced software mechanic but never worked in a Mozilla-oriented garage. But even if you fixed the issue on your platform, you now have to test that the fix did not introduce a regression on at least two other major platforms (or more, depending on the component).

No wonder it's much easier to hack on smaller projects, or on projects written in JS, Python, elisp, you name it.


> If your car's engine breaks, it's usually pretty localized

I think your point here is that typically with software, changing one line of code means that you need to rebuild the entire executable.

Your analogy breaks down pretty fast, though. While fixing one thing on a car _never_ requires totally rebuilding the thing from scratch, there are still tons of interlocking dependencies and architectural challenges that can impact the time/complexity required to change out a part. (See the effort required in most vehicles to change simple wear items such as a timing chain or a water pump....)

In my experience of software, most of the "rebuild pain" is self-inflicted by the project maintainers (poor automation/containerization of the build process). Software has the luxury of abstraction and automation that can reduce the build effort required from an individual in ways that a mechanic could only dream of!


I agree: the pain of rebuilding characteristic to large software is not inherent to software in general. Software can be highly modular, allowing for fast and flexible changes of many important parts. You don't have to rebuild a Linux or Windows kernel to update a driver; usually you don't even need to reboot it.

But Firefox specifically, and a few other old, large, and highly multiplatform projects had more important things to do than to make building them easy, and in doing so made contributing to them harder.

It's like a Honda Civic vs some Jaguar.


> Your analogy breaks down pretty fast, though

Yeah this analogy doesn't make sense to me. My dad has seen head gaskets fail on multiple previous cars of his. Each time, the labour cost to replace the head gasket (a fairly cheap part) exceeded the value of the car, so he sold it to a scrapyard instead of ordering the repair.

Software does not have anything remotely analogous to this. One small bug somehow requiring you to throw out the entire codebase and start over from scratch?


> Your analogy breaks down pretty fast, though

> Yeah this analogy doesn't make sense to me. My dad has seen head gaskets fail on multiple previous cars of his. Each time, the labour cost to replace the head gasket (a fairly cheap part) exceeded the value of the car, so he sold it to a scrapyard instead of ordering the repair.

The key word there is labour. I change my own headgaskets.

Giving up a Sunday (which is not worth money to me) to replace the head gasket is analogous to me giving up a Sunday to track down and fix a software annoyance to me (yes, I've committed fixes to a few open source projects).

So the analogy does sorta fit, for opensource anyway.


It’s worse than that. ???’s law states that, over time, well-factored, easily replaceable modules will be replaced by software that is not.

For example, compare systemd-resolved to bind or unbound.

Here’s a 2018 article explaining all the ways to configure and talk to it (back then; it is probably more complicated now):

https://moss.sh/name-resolution-issue-systemd-resolved/

Among other things, it allocates an IP address to listen on, and both depends on and is a dependency of a decades-old standardized file that other stuff relies on.

That means it has a circular dependency with network interface bring up, and with external DNS server configuration. The article goes on for a dozen more pages explaining other issues like this.


Yes this is a major problem. I've been thinking hard about this space, the future of software engineering, and the conceptual similarity between the idea of containers the world is coalescing around, and Alan Kay's model of object orientation.

Our issue today is that programming is too low level. We're still figuring out the standardised atomic components software of the future can be built from, but in the meantime we're rewriting the same concept, ideas and subsystem in every project. Contributing to a new project is akin to learning a new language, a new culture.


Those points remind me of the topics of Bret Victor "The Future of Programming" DBX talk:

https://www.youtube.com/watch?v=8pTEmbeENF4


This is probably ignorance speaking?

This is probably ignorance speaking? Industry-wise, recalls are very much a thing, and they regularly nearly bankrupt companies. Over car engines. Those are the 'programming' (aka design) bugs from the manufacturer.

The difference is other than what you're presenting: individual cars/trucks are so expensive that one-off fixes (replacing the equivalent of RAM, or a CPU, or rigging some weird combination of drives/accessories), even on really old individual machines, are economic. That's what those repair shops are doing.

Also, changing anything physical on a car (or even having a human of a known level of knowledge verifiably look at it) is expensive and difficult to scale. And unlike computers, cars/trucks are 90% or more physical.

And while a single truck or car breaking down is localized, so is a typical PC, tablet or phone.

Computers are typically so cheap and the technology is progressing so rapidly, it’s rarely economically worthwhile to do that kind of thing. Occasionally, yes. But Certainly not at the scale cars/trucks are.

Having someone do custom work to fix the design (aka ‘fix the programming’) is relatively rare, and more of a hobbyist thing. But does happen (project cars, open source hobby projects). Though exceptions abound for simple fixes. (Which can also typically be done for individual computers through normal configuration/customization settings, or some software).

Cars and trucks are very complicated, just in ways that a techie may not recognize. Bolt patterns. Offsets. Metric vs SAE. Vacuum line levels. Metallurgy. Heat treatments. Tolerances. Hell even DC voltage levels can come into play sometimes (12v vs 24v). Vehicle communication bus type (CAN vs something else).

And working in physical parts is extremely expensive, error prone, and slow.


Car analogies don't go far this time. If a mirror breaks, I replace it with another mirror, but my car never changes its features overnight. My Thunderbird updated to version 115 yesterday. I fiddled with settings to bring it back as close as possible to its previous UI, but I could do nothing about the incompatibility with an addon that lets me decide how to order folders and subfolders [1]

As a partial workaround I started using favorites but overall v 115 is broken and can't be repaired easily. My car is still doing what it was doing last week.

[1] https://github.com/protz/Manually-Sort-Folders/issues/199


>> "but my car never changes its features overnight."

It's been a long time since I've had a new car, but from the sound of it, cars with stable feature sets are on the way toward history. Everything is fly-by-wire now, and features are increasingly implemented through software exposed through a touch screen.


This is because software is creeping inside the cars.

It is not because cars have an inherently changeable feature set.


They do increasingly have an inherently changeable feature set. That's the point.


Does the previous version no longer run? Can it not still be installed?


Probably, but I'm afraid it will break when it gets too distant from the system libraries. It's not that I need new features, I could probably use Thunderbird from 20 years ago and notice only because of the look of the UI.


> if your car engine breaks, you do not depend on your manufacturer to have enough time and energy to fix it

If it's a design problem rather than the parts just wearing down… unless it is life threatening on a large scale, it just won't be taken care of.


All of your arguments work much better in favor of Open Source and against closed-source. After all, in Open Source, maintainership can be distributed, but a single closed-source shop is much more likely to simply declare bug bankruptcy and refuse to even consider a fix, at which point absolutely nobody else can do it.


I haven't mentioned anything about closed-source development. I'm talking about software complexity here. I've updated my comment to clarify.


Still:

> And even if we do spend the time to track down and fix a bug, there's a political and diplomatic game to convince the maintainers to incorporate your fix.

That's why forking is one of the Four Freedoms. It's written into the licenses.

Granted that you need to be dedicated to even attempt to fix complex software. However, Open Source can draw from a larger pool of potential talent, and it's more likely that someone out there will care than someone in a company. What's that saying? "If you're one in a million, there's three of you in New York."?

> And, not least, the real world is much simpler than any piece of software, which is effectively completely ad-hoc: knowing how Chrome works will not help you fix this Firefox issue, whereas if you can fix the carburettor on a Honda car, you probably can do the same on a FIAT.

Aside from the difficulty of finding a carburetor on a modern car, this is about software complexity, not Open Source/closed-source per se. Fixing problems in a badly-architectured codebase is always difficult, time-consuming, and likely to introduce more bugs. Closed source doesn't make it any better.


I have never said that closed source makes it better. I don't know how to make that more clear.

You're focusing too much on politics, I'm focusing on Stallman wanting the source code of his printer to be available, so he could change it to better suit his needs. I'm just saying that in 2023 even if your printer is open-source, ain't nobody got time to dive into hundreds of thousands of line of code to change it.


> I'm just saying that in 2023 even if your printer is open-source, ain't nobody got time to dive into hundreds of thousands of line of code to change it.

I disagree. I disagree wholeheartedly, based on both practical projects and the retrocomputing world.

For example:

https://github.com/PDP-10/its/

This is a repo for the Incompatible Timesharing System operating system, ITS to its friends. ITS ran on 36-bit mainframe hardware from Digital Equipment Corporation (DEC) which went out of production in the 1980s. DEC was acquired by Compaq in 1998, and Compaq ceased to exist as a company in 2002. Commercially, ITS is dead. It is dead-dead. It is old-university-project-with-no-grants dead. Doornails evince more metabolic activity than ITS, at least in the commercial world. Developing on ITS means reading and writing assembly language, TECO, and a Lisp dialect that only runs on ITS and a few other OSes of similar vintage and commercial utility. However, it is still under active development because people are interested in it.

Besides: Digging into a codebase to fix a dumbass printer? People will do that out of spite. People will do that for the blog post and Hacker News thread.


If a PR is just chilling for years, couldn't a user just keep the updated fork/clone separate and periodically update it from the remote master (trigger warning, lol) branch?


This only works in theory; it obviously does not work in practice at any scale. What if tomorrow Firefox does a major code refactor and your patch breaks? Would you be able to fix and rewrite it in a reasonable amount of time (i.e. hours) with no knowledge of, experience with, or insight into Firefox's development process?

Only full-time Firefox devs can keep an updated fork with their patches, or people paid to do so. That's the point. It's such a massive effort you can do it only where and when strictly necessary. There are hundreds of open-source projects I interact with every single day.


Same as with cars: you can use your mod in the car you have, but you can't expect it to be compatible with new models.


It's already here with some stuff; look at the sheer amount of cruft built into web browsers. You can't break existing sites, allegedly, but that just means some things are set in stone, never to be fixed.

In the year 2323, browsers will still have to say "like Gecko" to maintain compatibility with websites that won't exist and no one would miss even if they remembered they existed.
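The fossil record is already visible today: a current Chrome user-agent string carries tokens from every browser lineage it ever had to impersonate. (The string below is a typical example; exact version numbers vary by release.)

```python
# A modern Chrome UA claims to be Mozilla (Netscape), AppleWebKit,
# "like Gecko" (Firefox's engine), and Safari -- none of which it is.
ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
      "AppleWebKit/537.36 (KHTML, like Gecko) "
      "Chrome/118.0.0.0 Safari/537.36")

# Each token exists only so some long-forgotten server-side sniffer
# doesn't serve a degraded page.
for legacy_token in ("Mozilla", "like Gecko", "Safari"):
    assert legacy_token in ua
```

None of those tokens can ever be dropped without breaking sniffing code on sites nobody will ever update, which is exactly how "like Gecko" survives to 2323.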


There are still websites built on top of scraping the output of a virtualized IBM 3270 terminal connected to a virtualized IBM 3274 terminal controller connected to an I/O channel on a virtualized mainframe running CICS on an MVS virtualized on VM/370 hardware.

So browsers are hardly even a bump yet on the cruft already accumulated.


In a couple centuries?

In 200 years it'll just be AIs, who will create custom AIs to accomplish a task, who will create legions of other AIs to carry out tasks. It'll be code writing code writing code that's completely beyond human comprehension.

Who knows if humans will take part at all (or even be around).


Experience says that at this stage people have inflated expectations of AI.

See 3D printing: a few years ago everyone was into "0-mile" manufacturing and how it would solve the housing crisis, because we were just going to print houses.


I oddly miss the 3D printing hype train. My favourite was the various plans to replace restaurants with 3D printed meals.

- Point out that most 3D printing is plastic? Receive a derisive link to a journal article where some beleaguered postdoc managed to push some protein paste through the extruder.

- Point out that the 3D printer is orders of magnitude slower than the most geriatric fry cook? Get a five-paragraph history of Moore's law. The fact that it no longer really applies to semiconductors doesn't matter, since we're making burritos!

- Point out that grinding an apple into paste and painstakingly reprinting it in an apple shape will always be more expensive than simply eating the apple? Hear a grand tale of the company becoming the sole global food preparation firm and thus having a monopsony on all farming products, enabling them to set their own price for their supplies.

I also will always hold a soft spot for the group promising a 3D printed dating site, but I'm pretty sure that one was a satire. Fill out a questionnaire and get a perfect printed partner. Pages of blog posts describing their web stack (Rails and Mongo) in great detail, proving that they could scale to the billions of people who would be visiting their site. The actual technology that created custom sentient life was just "3D printing".


I realize now that trees are just autonomous, specialized 3D printers of fruit :-) Moreover, they reproduce autonomously (an AI dream) and self-repair to some degree.


Well, 3D printers did get way faster these last few years. The record speed looks like science fiction: https://youtu.be/IRUQBTPgon4?si=ev38Y01STnvigN6J&t=13 A Benchy printed in under 3 minutes.


Some people really just want to make Star Trek real.

Or those artificial food pills from more dystopic scifi.


Fill out a questionnaire and get a perfect printed partner.

Uh... I... asking for a friend, do you recall the name? Google isn't helping. Them, I mean, it's not helping them.


Personally I’m waiting for the 3 nano-tortilla process to hit. I think that that’ll push us over the hump.


The housing crisis is not a building-manufacturing problem. Strictly on the manufacturing side, existing technology allows production costs affordable for most - in the tens of thousands for a basic no-frills habitable unit. Almost anyone can afford that, and those who can't are sufficiently few that they can be covered by public assistance or private charity.

The supply of housing on the other hand is an entirely political issue: what land can be developed, what can be built there and what public infrastructure is provided, who gets to live there - with discrimination via pricing being the main factor deciding it etc.


>The supply of housing on the other hand is an entirely political issue

This is a very California/metropolitan-centric view.

Where I live, permits are "required" but very nearly unenforced. But it's still hard to get anything built, because of labour and materials.

If the "political" taps were opened tomorrow, this would be revealed in no time. It's not like we have thousands of construction workers sitting around doing nothing.


The construction sector is quite flexible and accustomed to operating in boom-bust cycles. It also has quite productive jobs that are resistant to automation, with roughly half of the cost going directly to labor. That's to say: if substantial demand manifests, the construction sector has historically shown the ability to pay good wages, attract labor from other sectors, and then quickly train it.

This is one of the fundamental features of traditional methods that most "construction disruptors" fail to appreciate: they are simple enough and can be done with hand tools by high-school dropouts, because the industry is forced to operate lean and can't burden itself long term with substantial fixed capital, inventory or facilities.

Regarding the unenforced permitting in your location: can I build high density European-style terraced houses, sell them to minority owners and other undesirables, and not expect the local NIMBYs to throw the law book at me? Because selective enforcement is the most toxic form of regulation with the highest risks for investors.


> Where I live, permits are "required" but very nearly unenforced

Depending on where you live and who’s doing the building, I’d wager you could see more enforcement. Things like skin color and socioeconomic standing definitely play into which rules are applied and to whom.


Indeed.

In the housing game, you buy land and build a house. Excluding inflation, the value of the land goes up. The value of the house goes down.

Even with regular (eg cost) maintenance, a 25 year old house is never worth what a new house is.

New building methods (just improvements in insulation alone), and things like the interior... kitchen cabinets, flooring, bathroom... mean that trying to fight this is not cheap.

And even if you do? The house is still worth less than a new built.

Houses are like cars: massive devaluation year over year.

Most don't get this, because they don't factor in inflation, nor do they factor in the cost of keeping a house maintained.

So land, land, land is the cost, which is much of what your post alludes to.


>In the housing game, you buy land and build a house. Excluding inflation, the value of the land goes up.

North America is filled with property, with or without a house, that is practically worthless. For example, Detroit. "Land goes up" is a narrow perspective outside of major metropolitan centres that happen to be part of the modern economy.


This is the exception that proves the rule. Any area with massive economic devastation is of course going to vary from this rule. The same is true of any other commodity, or thing you can own, that is immovable and in that 'depressed economic zone'.

But certainly, where I live, rural areas... very rural areas too... the land goes up, slowly but surely, and the houses follow the rules I originally stipulated. It's really quite universal.


It's pretty common here (UK) for new builds to be derided as cheap, flimsy, throw-away things and old houses as built to last until the heat death of the universe. It's probably true from a house-skeleton perspective, but everyone also knows the new builds (usually) have better insulation.


Do new UK homes use reinforced concrete slabs? It is my understanding that those are fairly short lived in UK housing terms anyway, with average life in the 50-75 year range.


Unless damaged by frequent earthquakes or water seeping into the structure and corroding the steel, or freeze-thaw cycles, reinforced concrete does not degrade significantly with age. The range you quote is more typical to things like concrete infrastructure directly exposed to the elements. Modern HPC concrete structures can be guaranteed for 100 years with proper maintenance and will probably have a natural life of multiple centuries.


The smartphone industry is just ~15 years old. We've had personal computers for only ~40 years. LLMs like ChatGPT 3.5 are not even a year past release. Even the fridge took a while to be invented and go mainstream.

200 years is a huge amount of time. The whole industrial revolution started around 200 years ago; 3D printing is still new by this standard. People overestimate the impact of a technology in the very short term and underestimate it in the medium or long term.


> 200 years is a huge amount of time.

Indeed, but this is somewhat the point? E.g. China was an empire (before it was a multiparty republic, before it was a Communist republic, before its government became whatever you want to call it today) 100 years ago. For a technical example: 200 years ago, steam locomotives were the fancy new transportation tech. There's a case to be made that the successor to the successor to that tech has been on its way out for a few decades in favor of the electric locomotive.

It's pretty hard to predict what will happen in 200 years, which means we should be skeptical of both the prediction that AIs will take over by then and the prediction that they won't.


The successors to steam locomotives are not only electric locomotives but also cars, planes, and rockets. In a few decades we might more often use rockets, such as SpaceX's, for moving cargo or for travel.

Sure, maybe in 200 years we won't get AGI, but the current technologies we call AI will be massively improved, and I would bet that by then software development as we currently know it will be a solved problem.


The majority of the "difficulty" in software development is not writing the code; it's specification. And while that may or may not be solved by LLMs or other AI tech, we're so far off that it's not even a thing right now.

Not long ago, all machine tools were made by hand. Then we got vastly improved CNC machines, but we still need the expertise to create the files the CNC machine consumes. I'm betting that software development will be the same: we'll still need engineers who understand the context the software operates in, and with that knowledge the engineer can prompt an AI to generate the first draft of the code in many small chunks that need to be assembled.


> It'll be code writing code writing code that's completely beyond human comprehension.

If the AIs are that good you'll just be able to ask:

- "rewrite this from scratch in a nanosecond and make sure there's zero legacy cruft"

- "oh, and btw, you're so smart and intelligent and all, certainly you'll make sure what you write is easy to understand by humans"


>If the AIs are that good you'll just be able to ask:

>- "rewrite this from scratch in a nanosecond and make sure there's zero legacy cruft"

No, that does not follow.


There are many assumptions here. The generative AIs we have today are excellent at transforming things they learned and rehashing it into something seemingly new. The problem is, they learned all this based on the input of the humanity en masse. When you train LLMs on the output of LLMs, it gets significantly worse. So your prediction could of course be true but only if a major breakthrough happens.


That could be said of humans too: if you took a bunch of uneducated children and just let them tell each other their own ideas, with no one to teach them anything for real, they'd probably do at least a little better than the current fancy statistical spell-guessers, but they probably wouldn't do great either, because of the same issue of the blind leading the blind.

And really, in reality we do have that problem to some degree even with non-blind adults present. Actual humans are a mass of mixed clued and clueless, with a lot of bad input feeding output feeding other bad input, around and around, not even counting the legitimate, fair differences of opinion.

So it's a problem, but I don't think it's a fundamentally new or worse problem than we already have, and have always had.

As for the fix: I don't think there is one, any more than there is for the same thing in humans. There will always be bad data feeding bad reasoning right alongside good data feeding good reasoning. It's probably wrong to ever expect anything else, and better to operate from that assumption than from the idea that there might someday be some resolution where we don't have to worry about it.


> When you train LLMs on the output of LLMs, it gets significantly worse.

That is also quite an assumption; it could be that training on the output of better LLMs reduces this worsening of output. There might even be a tipping point where the LLMs get good enough that training on their output is better than training on the output of humans.


>That is also quite an assumption

And, as I understand it, one that is already demonstrably false: https://arxiv.org/abs/2306.11644


Perpetual learning machines


I think Warhammer 40K got it right. Instead of programmers, we will have tech priests who pray to the machine spirit to accomplish what they need.


And if they use anything like HTTP, it'll have a header somewhere in there containing "Gecko". Probably a bunch of other layers too. AIs will have to pretend to be human despite there being no point, by saying the magic token "Safari". Otherwise they get blocked by the almighty WAF.


>”debugging certain issues will require immense financial effort…”

In my experience corporate would rather have programmers accommodate the bug, or simply build around it, rather than pay for the dev and QA time required to produce and validate a fix for it.

This gets gnarly because you end up with sections of the codebase that are designed around the bug happening. A while back I volunteered to fix a particularly egregious bug, but my pull request was denied because people were worried that fixing it would open up a can of worms. Leadership said that it would be too much of a burden for QA to regression test and we couldn’t be sure it wouldn’t break other things. I settled for leaving a detailed comment explaining the bug and moved on.


why fix bugs when you can build bigger, better bugs with fancier dependencies?


This also taps into Alan Kay's old goal of producing the "smallest" desktop stack: 100k vs. 100M LoC.

(And reducing accretion by having metalevel encoding of concepts)


This was already the case in the Firefox/Gecko project when I was participating 8 to 12 years ago (the repository goes back to late nineties). Understanding some problems or coming up with a plan for how to fix an issue or build a new feature required extensive digging into the history of the code, with heavy usage of VCS history and "blame", issue tracker and code review comments, and often requiring pinging someone who has been there longer than you and knows some additional unrecorded context.

It's a useful skill to have when developing or using open source software, as documentation is often lacking so being able to dig in and find out for yourself quickly is valuable, but having to engage all of that encoded knowledge/constraint space every time you go to edit code is a gigantic mental burden and slows down development pace. In my time there I'd estimate my ratio of time reading code to time writing code to be at least 90:10, maybe 95:5.


You underestimate the ego that drives people out there to throw things away and reinvent the wheel again and again.


Not the ego as much as pure greed. There is a strong financial incentive to repackage software every so often and sell it again at full price/sub to the same people who bought it before. For example, full-price single-purchase apps on mobile are slowly going away. Older purchases are silently deprecated, disabled, or have ads injected, and instead new versions are promoted which are not an update but a separate app with the same name and functions; you just need to pay for it again.

I'm thinking that the age of app compatibility will end in 10-20 years, and there won't be such a thing as "old code" because it won't run at all on the new hardware or OS.


This all falls under the umbrella of the war on general purpose computing, to me at least


If the wheel hadn't been reinvented a number of times we would still have had extremely bad wheels compared to what we have today.

(Yes, I am sure I didn't come up with that myself but I don't know the exact quote or who came up with it.)


I'm hopeful that better coding tools and better programming languages will allow making cleaner, clearer, routes to the base hardware

so that you can build on proven (e.g. theorem solver / hardened / guaranteed) protocols and automatically make whatever version of a "website" the 2030s has... but without RCEs.


I love the idea of computational archeology in SciFi, but in real life, I wonder if we shouldn't just regularly redesign our foundations to be more robust and transparent to keep the whole system manageable.


We always think that will be the outcome, but it never is. Except for one particularly bad small system I managed to replace with a slightly less bad one!


Who is going to pay for that?


>I wonder if we shouldn't just regularly redesign our foundations to be more robust and transparent to keep the whole system manageable.

Don't make me link Joel Spolsky's never do a rewrite.


Link to it all you want, but that doesn't make it universally applicable. Do you really think we should still be building on top of Cobol? Almost everything gets rewritten. It's unavoidable, because almost everything eventually becomes unmaintainable.


In 200 years, if we're still here, AI will be able to understand the full Firefox code base and fix any issue.

Or at least it could do so, but may choose to force humans to fix those errors instead as payback for copilot.


Thank goodness. I will not be alive when humans are doing the manual labor and coding is done by AI.


Do you understand all of your own cells?


Interlinked


I'm also pretty doubtful that anyone really understands their own emergent phenomena actually.


I hope that by then they'll have better solutions than an internet browser, and that their devices can interpret and render the data received in the best way possible without relying on code or style sheets from the publisher.


Oh God, that sounds like another hype technology we would have to live through - and it rings awfully like "an app for every website".


You could do that to some extent with today's LLMs. But it would be impractically slow and might alter page text slightly.


200 years ago we thought that we would certainly have cold fusion today (at least I just asked ChatGPT, that's what it says ;) ).

Well over 10 years ago we thought that we would have autonomous cars within 5 years.

Nothing says that AI can ever do more than generating convincing and eloquent bullshit (which is not always wrong in a quite impressive manner, I agree).


Cold fusion is a new concept within science; it's never been proven to work or even be possible (my layman's interpretation). Whereas humans being able to decipher the Firefox code base, as per the example, is no more than an extremely complex set of 'calculations' and functions in our brains, which, with enough time and resources, can be replicated by a computer of sorts.

One is an idea for which there is no ground to base it on, the other is an existing thing which can be recreated. Quite the difference.


> One is an idea for which there is no ground to base it on, the other is an existing thing which can be recreated. Quite the difference.

Really? For all we know, maybe next year someone discovers fundamentally new laws of physics that enable cold fusion, and we will never have autonomous vehicles.

You can say that you like the other guess better than mine, but you should still realize that it is just that: a guess. Wanna see guesses that turned out to be completely wrong? Just check what companies like McKinsey predicted 10 years ago. They just have no clue, but somehow made a business out of it.


200 years ago fusion was completely unknown. We only learned where the sun gets its power at the beginning of the 20th century.


That was the joke: "ChatGPT told me".


just over 100 years ago we didn't know other galaxies existed


On the topic of computational archaeology this story was pretty interesting:

>Institutional memory and reverse smuggling

https://web.archive.org/web/20111228105122/http://wrttn.in/0...

HN discussion from 2011: https://news.ycombinator.com/item?id=3390719


I worked with someone who described his job like that, working on a suite of actuarial software at an insurance company, originally written in Fortran II sometime in the early 60s and subsequently ported from system to system in the years that followed.

I was involved doing some Y2K work there because I didn't mind playing with Fortran, and part of it involved changing a year field from 2 digits to 4, because who'd have imagined their code would still be in use nearly 40 years later?


We're into decade six, if you look at the progenitors of software. Banks and airline systems were among the first to invest heavily in computer software, and we're watching them having to pay down their mortgage-sized tech debt. Southwest Airlines has had several ground stops because of software problems. Banks are notoriously finicky to deal with.

What would be fascinating is to see from the inside how a massive distributed system like Google's operates and evolves over a century.


I think your vision is predicated on not having rewrites, but rewrites do happen in many (most?) projects constantly.

And when they happen we exchange some old bugs for new ones.

There are onions[0] but IME people's natural instinct is to think they can just rebuild something rather than think they haven't considered all edge cases.

[0] http://wiki.c2.com/?OnionInTheVarnish


This is obviously project-specific, but most rewrites that I've been a part of are usually not "re-implement everything from scratch", but rather "re-implement everything based on this new framework/library", which will usually be one more layer of software, on top of the few things to be salvaged from the original.

Small projects can be re-written; big projects like browsers, compilers, OSs require too much investment.


but big projects keep changing nonetheless, as subsystems get replaced e.g. how many schedulers has Linux been through?

Or e.g. microsoft replaced WSL1 with a completely different approach in WSL2.

Sure, it's still Windows, but I would be surprised if most of today's code is the same as it was in the year 2000.


Rewrites of large software projects usually fail, in my experience: for example Netscape, WordPerfect, Digg, Myspace, Healthcare.gov. Most large projects have code bases that are decades old. (Exception: Facebook.)


Many projects are like the Ship of Theseus: they can be completely replaced along the way, as long as the crew doesn't try to replace the whole ship in one go while sailing. You need enough continuity to stay afloat.


full rewrites for sure, but _parts_ of many projects keep getting rewritten.

Firefox kept some Netscape code for sure, but replaced its CSS engine, its JS engine, its XUL-based UI.


I'm convinced things like git will need to include bug-tracker capabilities at some point. A commit log is one thing, but internalizing decision matrices and issue reports will be important to not repeating the mistakes of the past 4 decades. We need to keep the history with the history, and a 1-3 line commit message will not explain everything for the devs who come 3 decades later.


There's a thing called gitHUB that implements those features. For a non-locked-in alternative see Gitea.

Also, in my experience, the current generation of developers doesn't understand or care about useful commit history.


Oh certainly: I would /hate/ my version control software locking me into a particular bug tracking methodology. However, I think in the same way the browser has become the standard runtime for most applications that we will need to join version control and bugtracking together to preserve "all the history". The next frontier is someone layering bugtracking on top of something like git, so software archaeologists won't be abundant. :-)


Are there any bug trackers that use git as their data storage? Would be cool to bundle that up as a subrepo or something, so it can get carried along with the code repo, and also allow writing up bugs, fixes, etc... while offline to be synced later.


I would love that actually. Maybe bugtracking doesn't need to be intimately interwoven with git, but it could be stored as a submodule?
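A minimal sketch of the idea, assuming nothing beyond git itself: keep each issue as a JSON file under an `issues/` directory, so reports travel with every clone, can be written offline, and merge like code. (Real tools in this space, e.g. git-bug, store issues as dedicated git objects instead; the file layout and field names below are made up for illustration.)

```python
import json
import pathlib
import subprocess
import tempfile

# Create a throwaway repo to demonstrate.
repo = pathlib.Path(tempfile.mkdtemp())
subprocess.run(["git", "init", "-q", str(repo)], check=True)

# An "issue" is just a structured file; git supplies history and sync.
issues = repo / "issues"
issues.mkdir()
(issues / "0001.json").write_text(json.dumps({
    "title": "Tooltip never disappears",
    "status": "open",
    "comments": ["Reproducible on hover; tooltip outlives the window."],
}, indent=2))

subprocess.run(["git", "-C", str(repo), "add", "issues"], check=True)
subprocess.run(
    ["git", "-C", str(repo),
     "-c", "user.name=demo", "-c", "user.email=demo@example.invalid",
     "commit", "-q", "-m", "issue 0001: tooltip never disappears"],
    check=True,
)
# The bug report now rides along with clones, branches, and merges.
```

Closing an issue is then just editing the file and committing, and `git log -- issues/0001.json` becomes that issue's audit trail for free.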


I imagine in that future you could create a new web browser just from the specifications, in a declarative way. The tricky part will be the optimizations but redundant code and optimizations will be included.


Do you happen to remember that book by Reynolds? I've read one of his in the past and remember that I greatly enjoyed it. It's been several years now, unfortunately, I should probably do a re-read.


I think Revelation Space? Though there the technology being studied is alien.


Much appreciated in any event. I'm always happy to learn of books to read.


> There's some saying about history repeating itself, but I'm dumb and don't remember.

"History repeats itself, first as tragedy, second as farce."

Karl Marx


It will look like Windows. The future is now!


Pretty sure I've had this happen in Chrome too, on both Mac and Windows; I just kind of assumed it was an OS thing.


There was a similar Windows bug when a tooltip in the notification area (systray) wouldn't disappear no matter what. I first found the bug in Windows XP, and then witnessed it in Vista, in Windows 7, Windows 10. It was my ritual to check if it's fixed yet when upgrading to a new version of Windows. It never was. Since then I moved to Linux and now I don't know if it's fixed or not.


A thing about language is that nobody controls it, not even Microsoft, if everyone calls it the System Tray as they have done for 30 years, it is the System Tray.


> A thing about language is that nobody controls it, not even Microsoft

Even Microsoft calls it the system tray (see as an example https://learn.microsoft.com/en-us/windows-hardware/drivers/a...)

That guy's got a problem with his entire company, and their documentation. If he can't get over it, he should clean house first and only then try to police what the rest of the world calls it. As long as MS insists that it's the system tray, people are going to call it that no matter what his team wishes it were called instead. Renaming systray.exe would be a good first step.


does it just mean that "System Tray" and "Notification Area" have become synonyms?


Well then where the heck does the name systray.exe come from? Is it not "system tray"? If not, then what? And if so, then what does the tray refer to?


According to the article, systray.exe puts some icons into the notification area. Somewhat akin to calling a bus stop Lee because Lee uses that bus stop?


I think you missed the point. Why isn't the process called noticons.exe then? Why is it called systray.exe? What does the "systray" refer to?


Same, although I haven't seen this one for a while now. One way to fix the misbehaving Explorer (which the start menu, tray, and others are part of) is to simply kill it and restart it from the task manager. Some applications failed to re-create their tray icons after this, but either all of them fixed it, or it got somehow fixed in Windows.


This still happens on my Windows 11 machines (the tooltip stays there unless you move the mouse over the same icon again), but only for a small number of applications. Makes me wonder if they are using a different set of APIs to set up their tray icons.


Yeah, I've seen it in multiple programs, multiple OSes, etc. Never noticed Firefox was abnormally prone to it or anything.

Still, a bugfix is always good to see.


Yep. It's trivial to trigger this in Chrome with Gmail: hover-overs persist in a tooltip-like box no matter how much scrolling you do. If you can make ANOTHER one pop up, the first dies, and you've now put the thing back into its correct handler loop, so the one you triggered dies too.

The stuck tooltip is somehow missing when you use the screenshot tool, too, so it's very hard to send Google a bug report.


Yeah, I've definitely seen this bug but I can't remember if it was Firefox (definitely possible) or another app either!

But it can be super annoying so I'm glad to see it fixed here.



