Hacker News | _zamorano_'s comments

Although of course you're right, let's play devil's advocate and imagine a world without Nobel prizes.

Laypeople need a simple way to know who's who in advanced research fields; without Nobel prizes (or some other committee) we don't get to have that.

If people get to ignore such topics (more than they already do), politicians and universities will likely react accordingly and funnel funds to other enterprises.

All these prizes (I'd say literary prizes are much worse) are typically super corrupt, but at least they keep the field in people's minds.


>> Laypeople need a simple way to know who's who in advanced research fields; without Nobel prizes (or some other committee) we don't get to have that.

I think, first, you're underestimating "laypeople", which here seems to include many scientists who are not physicists; and second, you're forgetting that many of the scientists the "lay" public knows as the greatest of all time never received a Nobel, or any other famous prize: Einstein, Newton, Kepler, Copernicus, Galileo, etc.



Neither for relativity nor for mass-energy equivalence, though, which laypeople are much more likely to know about than the photoelectric effect (what the prize was actually awarded for).


Ah. My mistake. Thanks for correcting me.


Einstein received the Nobel prize in 1921, but your point is still correct.


Do laypeople know Kepler?


Depends on the quality of the '"lay" public' I guess.

Where I live, my estimate is that the educated "lay" public would probably have heard of all the names mentioned, though with even vaguer notions of Kepler's actual contributions than of the others'.


The economics of this topic have always been interesting to me, especially when compared to various other fields. What is there to incentivize people to enter STEM fields, and especially research?

As a point of comparison, there are ~540 Premier League football players, with an average salary of 3.5 million pounds. (Yes, that's average, not median, but fewer than 20 of them earn under 200k.) It's not _that_ exclusive of a club, and the remuneration is insanely disproportionate compared to academics' - I highly doubt there are hundreds of researchers earning millions.

So, yes, it's pretty odd to have some random people dish out these prizes, and they are a drop in the pond. However, I personally feel it's way too little, and that the targets of the prizes are far more deserving - even if it's a popularity contest - than random entertainers (even if they are quite entertaining). But, it's up for argument, and the markets obviously don't seem to agree with me.


Weirdly, if you sniff the XHR from [0] (when it loads a new page), it claims there are 1,171 players for 24/25. But if you look at a few of the teams individually, they have between 30 and 35 players each, which is much more in line with your ~540 than their 1,171.

> the remuneration is insanely disproportionate

I once pointed out that Kevin De Bruyne, on his own, gets paid almost half as much (~21M) as the entire salary cap of the Rugby Union Premiership (~2022, 50M) (to make the point there's much more money in football than rugby.)

[0] https://www.premierleague.com/players


"I highly doubt there are hundreds of researchers earning millions." -- by doing purely academic research, maybe not. But the number of people who have moved from academia to industry on the strength of their research and made millions is probably much larger than you think. I'd wager that in ML alone you could round up a few hundred between OpenAI, Anthropic, Google/DeepMind, Nvidia, Meta/FAIR, etc.


If physicists could split atoms with only their arms and legs and some safety equipment, I bet they would get paid even more than a 3.5-million-pound salary.


Splitting atoms? Nah, that's the easy one, you can do that yourself even if you're quadriplegic and in a coma.

Even fusion is high school science fair stuff.

Spallation, antiprotons, quark gluon plasmas? Now you're talking.


If this were true, we would find that jobs involving physical labor pay much more than they currently do.


> Laypeople need a simple way to know who's who in advanced research fields

What need of a layperson does knowing "who's who" in advanced research fields fill?

Here's another good question related to that: Who is qualified to simplify that so that the need is filled?


They need to know, to sate the egos of physicists.


Google Scholar rankings of conferences or individuals by h-index or citation count are a perfect way for both laypeople and academics to measure each other's achievements.


Some things leave a permanent mark on you. Try being a workaholic for a few years and tell me later how easy it is to disconnect and rejoin family and friends.


Yeah, working constantly for a few years leaves permanent marks. You know what makes it better in ways we'll never have? Millions of dollars, luxury yachts, and fame. Mr. Beast isn't a doctor. He isn't a teacher. He doesn't fight our wars. He makes entertainment for children. He'll be fine.


I think parent comment was referring to employees who are willing to dig in and sacrifice a few years of their lives for MrBeast productions, not to Jimmy Donaldson (though I could be interpreting it wrong).


I struggled badly in a pair-programming position.

It's not that I don't like reviews or can't work alongside another person. It's that I cannot learn while someone is talking to me or trying to make me place the cursor somewhere.

I'm all in for code review, even in pairs. In fact, I do that with a junior dev I have assigned and it's working well for us. I leave him thinking and come back to evaluate his solution.

I find that reviewing with him, paired, saves me time. I make him lead me to the right spots in the code rather than finding them on my own. I fire off three quick questions and we're aligned on the spot.

I'll never again work in a 100% pair-programming position, but I think I've found my sweet spot with the technique.

I agree that, if no other safeguards are in place, pair programming helps you avoid really bad code. But without deep thought you'll mostly converge on an average solution, since social dynamics very much take the lead.


That's funny/sad.

I've never learned so much about programming as during my years pair programming!

I got to see how other people solved problems, and surprisingly often they had a completely different approach than what I thought was the obvious way. Half the time their way was better than mine and I became a better programmer, and half the time I taught the other person something.

This went on day after day, for years!

There is a technique for pair programming well. I was lucky to join a team of PP pros, and picked it up pretty fast. Sounds like you had worse luck :(

> I cannot learn while someone is talking to me

This sounds strange. Normally people learn by listening to others talking, right?


I think that PP is generally talked about backwards. Everyone says the least experienced programmer should drive while the senior programmer talks. When I was an intern 25+ years ago, before pair programming had a name, I sat beside the greybeard who did the typing. I watched him and tried to keep up. For the first couple of weeks I contributed nothing, but once I started to understand the code and his style, I was able to start seeing simple things, like reminding him to do a null-pointer check, and then I started to see more and more, and I was able to contribute in real time.

I think this worked because 1) he was comfortable with me staring over his shoulder with neither of us talking, and 2) I was comfortable with just watching and learning. I've tried this style with my junior programmers, and I'm too anxious coding quietly while someone watches me, and most juniors are also too anxious to prove they're paying attention, so they have to talk, and that breaks the flow.


Seems like your programming partner was particularly micromanagey. When I pair program it's more of a conversation to discover different approaches. If you're having to tell someone where to position the cursor, that's missing the point!


So I suppose building an aircraft involves standard bolts, procedures, testing, and much more standardized methods.

In contrast, a spacecraft like the Shuttle faces much harsher conditions, and, since not many of them were built, I expect more manual procedures and tinkering while building the thing.

In the end, it's incredible these things didn't crash more often.


NASA has incredibly detailed records about every piece of their spacecraft down to things like:

- material used to make a bolt

- what torque was used to tighten it

- who tightened the bolt

- when it was tightened

- etc

This allows them to trace back through the history of each vehicle for debugging purposes.

They also applied this to the Space Shuttle software. This article from 1996 does an amazing job of describing the process: https://www.fastcompany.com/28121/they-write-right-stuff

It's interesting how modern some of the practices described are. And some of them (e.g., the bug-rate model), in my experience, only ever existed there.


> It's interesting how modern some of the practices described are.

It should be noted that modern project management is listed among NASA's famous inventions.


For a reusable aircraft that faces that sort of intense vibration, I'm not at all surprised that they need to track when each bolt was last checked and by whom.

(One of the things you have to watch out for is that if the torque on a nut drops for no reason, it may be a hairline crack in the bolt it's attached to)


The risk of spaceflight is still very high. Wiki [1] lists 676 people as having traveled to space, of whom 19 have died in accidents as a result of that travel, meaning that going to space has about a 3% chance of killing you.

The average age of an astronaut is 34 [2], and most are male, so a look at an actuarial table [3] tells us that going to space is approximately as likely to kill you as literally every risk an ordinary person would take in their life up to that point (at 34 years of age, about 4.3% of men have died, and a large proportion of those deaths are due [4] to accidental injury).

[1] https://en.wikipedia.org/wiki/List_of_spaceflight-related_ac...

[2] https://en.wikipedia.org/wiki/NASA_Astronaut_Corps#Qualifica...

[3] https://www.ssa.gov/oact/STATS/table4c6.html

[4] https://www.ncbi.nlm.nih.gov/books/NBK600454/table/ch2.tab4/
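The back-of-the-envelope arithmetic here, sketched out (the inputs are this comment's own estimates from the sources above, not authoritative data):

```typescript
// Rough fatality-rate arithmetic for the figures quoted above; the inputs
// (19 deaths, 676 travelers, ~4.3% cumulative male mortality by 34) are
// the comment's own estimates, not authoritative data.
function fatalityRatePercent(deaths: number, travelers: number): number {
  return (deaths / travelers) * 100;
}

const spaceflightRisk = fatalityRatePercent(19, 676);
console.log(spaceflightRisk.toFixed(1)); // "2.8", i.e. "about 3%"

// Same order of magnitude as cumulative all-cause mortality by age 34:
console.log(spaceflightRisk < 4.3); // true
```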


> The risk of spaceflight is still very high. Wiki [1] lists 676 people as having traveled to space, of whom 19 have died in accidents as a result of that travel, meaning that going to space has about a 3% chance of killing you.

But 14 of those were caused by the Shuttle alone. All the others were over 50 years ago. So far, all the spacecraft still in use today have a pretty good track record.


That 19 is a rather narrow list. It excludes, for example, the Apollo 1 astronauts, who died in the capsule during a launch rehearsal. In total, 11 died during training, including a cosmonaut in 1993 and a SpaceShipTwo test pilot in 2014. "As of 2024, there have been over 188 fatalities in incidents regarding spaceflight."

The shuttle also carried over half of all astronauts (355) on orbital missions, so if you’re excluding the shuttle it’s not that much safer.

Soyuz MS is a refined design, but Soyuz 11 killed three people and Soyuz 1 killed one. Calling it a different design isn't unreasonable, but by that token it would be limited to 22 successful crewed missions and one in progress.


That's out of the 355 astronauts who ever flew on the shuttle, which comes out to about 4%. Not that much worse.

The shuttle's lack of a launch abort mechanism is something NASA wouldn't accept in any modern human-rated spacecraft. But arguably the deadliest feature of the shuttle was that it was pushed as the single launch platform for all launches, even those that didn't require any crew. Putting crew on every single flight made many missions riskier than they had to be.


It also made missions at least 3x more expensive, and it wasn't just the on-paper launch costs. Everything had to be man-rated, meaning, among many other things: everything that operated during launch had to be triple redundant; all the pyros had to be unpowered while onboard the shuttle (so you had to design another system to power up the pyros later, and make it reliable); and you needed three full launch crews (the Cape, Johnson, plus wherever you actually ran your own ops), all of which had to support an endless series of rehearsals and launch delays. The costs kept mounting. (Source: I was in the program office of an expendable-launch comm sat; each satellite was ~$150M, launch was ~$80M. A roughly comparable shuttle-launched mission down the hall cost ~$300M/satellite, ~$500M/launch.)


Were there more Soviet accidents that we don't know about?


Probably on the ground, but we likely know or highly suspect most of the ones in the air or in space.


> ... of whom 19 have died in accidents ...

Skimming your reference [1], I see 11 more who died in accidents during testing & training. Including the https://en.wikipedia.org/wiki/Apollo_1 fire on the launch pad (during a launch rehearsal test).

Until spaceflight is "buy your ticket, show up, get in your seat, wait, exit at your destination", I'd argue that we should include the testing & training risks in the risk of human spaceflight.


Small populations make for terrible statistics.


Often true.

But when that many people die, in that many separate incidents, across a variety of nations & launch vehicles - then the "The risk of spaceflight is still very high" thesis is statistically solid.


An estimate is better than no clue at all.


To what extent is that true when the cause of the fatalities is the technical design of completely unrelated systems?

If one space agency built a rocket which always immediately exploded after launch, and another space agency built one which always worked, you could say the odds of failure for the next astronaut was 50%. But of course the two rockets are essentially unrelated. The chance of success of each rocket is a function of design, engineering process, organisational culture of that organisation.

Telling the astronaut strapped to the top of the explody rocket that there's a 50% chance of exploding is actually less help than no estimate. Because actually there's a 100% chance of them exploding. An estimate is only as valuable as the assumptions that drive it.


Or take a guided climb of Mt Everest: roughly the same numbers (compared to Aconcagua, which sits around 30%).


Aconcagua is not notoriously difficult, and around 1k people per year summit. I can't find anything that says that hundreds of people per year are dying.

Do you mean Annapurna? At one point it had a death rate above 30%, but it is now below 20%. K2 has taken over the crown for deadliest mountain, with a death rate of 24%.


>compared to Aconcagua which sits around 30%

??? Aconcagua doesn't have an especially high fatality rate.


> at 34 years of age, about 4.3% of men have died

I think what you are saying is that 34 years after being born 4.3% of male individuals are dead.

In my understanding if someone dies when they are 10 years old they will never be "34 years of age". This probably feels nitpicky but it has thrown me into a loop of trying to understand what you are saying.

(Not even mentioning that I read the table you linked as 4.2% not 4.3%)


Saying that 95.7% of men live to at least 34 would be clearer. (I have no idea if that stat is legit; I'm just writing it differently.)


That sounds even better! Thank you for that rephrasing.


This seems needlessly pedantic as the meaning is still very clear.

And you did mention it, by saying you weren't mentioning it.


> This seems needlessly pedantic as the meaning is still very clear.

As I said, it wasn't clear to me. The two meanings fighting in my mind were the one I wrote and that the percentage is the probability of a male dying in their 34th year of life. I had to consult the table to figure out which one they meant.

> And you did mention it, by saying you weren't mentioning it.

Well spotted. Exactly because the discrepancy troubles me. It either means that I don't understand how to read the table, or what to read in it (in which case I would love to be corrected), or that the commenter made a typo (which doesn't matter at all). If I were certain it was a typo, I wouldn't have mentioned it. But since I can't be certain the error is not "in my equipment", I shared the observation hoping for clarification.


That was part of the rationale behind the space shuttle in the first place. It was to create a space plane with aircraft-like operations in order to fine tune processes and technology to bring down the cost of space flight. Unfortunately, NASA never managed the operational cadence required, in part, because of the per-flight cost (which was, in turn, high, in part, because of the low flight cadence). It was a fine idea, but it didn't work out so well in practice.


It also didn't help that the design was compromised.

In order to get funding from the military, the shuttle had to be able to launch into (and return from) a polar orbit, which is why it had those stupidly large wings that serve no purpose otherwise.

If you get rid of that, you actually can design a reusable space plane.


Turns out, making something look like an airplane doesn't mean you can treat it like an airplane.


The space shuttle's appearance has little to do with its flight cadence - the airplane-like flight cadence (and thus, airplane-like reliability and cost) just never manifested. Especially after January 1986.


Also, the smaller number of vehicles, and the smaller number of flights, meant much less experience for wringing out the low-probability gotchas.

Today's extremely reliable airliners got that way on a long, long string of accidents and near accidents.


Though a large batch with a large number of flights by space standards, the Shuttle was basically all prototypes by normal manufacturing standards. The 135 STS flights wouldn't even make a dent in an airliner certification and test campaign. It's not surprising they kept encountering problems.


Even in commercial aerospace, every part is tracked like a library book. There's rarely a question about whether a particular part is the right part or more importantly a used part. Because there's a chain of custody for each one.

You also have some parts that are destined for QA purposes, and those have a tagging system that is meant to prevent them from being recycled onto a real aircraft once they've been used for stress testing.


> In the end, it's incredible these things didn't crash more often.

I think two catastrophic space shuttle failures is more than enough :-/


Lane has spoiled essays for me. I've read all his books, and he's in exactly the right place on the readability-complexity spectrum for me.

I can't find any other author, in any field, from whom I can learn so much without it feeling like actual work.


Didn't try AltWinDirStat, but did try FastWinDirStat.

The thing is, FastWinDirStat uses a licensed proprietary component. No problem for me, but the author did have some back-and-forth with another user on GitHub.

Seems FastWinDirStat's license doesn't square with using a closed-source library, or something...

As for its actual functioning, it does what it says. It works much faster than WinDirStat.


Looks like a pretty clear violation of the WinDirStat license. They took WinDirStat, which is GPL, linked it with some other proprietary code, and distributed the result.

(They could have been clear-ish (with caveats) by distributing only the source code and letting users do the compiling and linking, similar to how you can download ZFS and build it into Linux, but mustn't distribute the result further.)


Joe Studwell's How Asia Works, and everything by Nick Lane.

The first is, despite its name, a manual for countries on how to win at capitalism. A must-read for understanding what works and what doesn't in macroeconomics.

As for Nick Lane, he's an English biochemist doing cutting-edge research on the cell and the origin of life. His work is very deep and leaves me with a sense of awe at what nature and natural selection have 'built', and at why life is the way it is.


Nowadays, I'm re-reading books that left a mark on me.

It's not uncommon to discover that some clever idea I thought I came up with was, in fact, something I read in a book. Humbling.

The thing is, you cannot get to that point by following productivity hacks. It's true that most popular-science books can be summarized in two pages. But the way good authors present the same idea again and again, through different examples and viewpoints, is what stays in your brain.


What? Aren't those games Valve originals from the beginning?


In the sense that these games were conceived by teams working at Valve, using this management structure, the answer is: No, none of them are.

Most of them have their origin in community mods, some already having gained substantial traction on their own.


Sure, the original idea might already have been there, but they built the game all the same. Saying it "wasn't them" is misleading. They built it from scratch, starting from concepts the mods had established.

Maybe their ideas department isn't the greatest, but their execution on these games has been very good in the long run. They popularized hats and make absurd amounts of money from them.

Let's not forget the Steam Deck. Still early on, but an excellent product in a whole different area.


These are the little things you don't notice but make a difference.

It's a pity this kind of attention to detail is becoming obsolete in favor of flashy but inconvenient UI.

I expect this behaviour in any list I find in a Windows app. I also expect keystroke consistency, like pressing F2 to edit... but all these UI conventions seem to be considered old-fashioned.

I assume I don't get that in web apps, but "modern" Windows apps are deprecating these conveniences too.

I wonder if the Apple sphere is suffering the same kind of degradation.


The best example of this decline comes from Microsoft itself.

Here is how Microsoft recommends you do find-and-replace in their simple, user-friendly, ~20-year-old note-taking app, OneNote [0]:

1. On a blank page, type the replacement text to use, or find it on a page.

2. Select the replacement text, and press Ctrl+C (⌘+C on Mac) to copy it to the clipboard.

3. Press Ctrl+F (⌘+F on Mac) to find on the page, or Ctrl+E (⌘+Option+F on Mac) to search all open notebooks.

4. In the search box on the top left for Windows, top right for Mac, type the text to find.

5. In Windows, you can select Pin Search Results at the bottom of the results list, or press Alt+O to pin the list. Mac is already pinned.

6. In the Search Results pane on the side of your window, select a search result (a text link next to a page icon) to jump to the page where OneNote has highlighted the text it has found.

7. On the page, double-click or select each highlighted occurrence of the text, and press Ctrl+V (⌘ + V on Mac) to paste your replacement text over it.

Note: When you replace a word or phrase in a sentence, you might need to type a space after the new text is pasted.

8. Repeat steps 6-7 for each additional page in the search results list.

Tip: If you've got a lot of replacements on a single page, copy your text to Word, find and replace the text, and then paste back into OneNote.

[0] https://support.microsoft.com/en-us/office/find-and-replace-...


OneNote seems to be especially bad. I don't think there's been a day in which I've used it and not had it complain about sync errors with my Office 365 account.


I am speechless. Just wow.


> degradation

Definitely, it’s an industry-wide ailment caused by a focus on the web and neophyte users.

The irony is that this power-user functionality didn't impede new users; it's just been gradually forgotten by folks raised only on the web, where almost everything had to be reimplemented from scratch.

We gained a lot but lost a lot as well.


Every time someone on HN praises Flutter as the best way to do UIs, I shudder to think of all the conveniences I'm used to (on both Windows and macOS; I've used a lot of both) that are likely completely ignored or forgotten by these UIs as they struggle to reinvent the user interface from scratch.

The UIs of the 1990s were often designed using actual user studies, with years of hard-fought learning about how to make things discoverable and accessible, with effective shortcuts for power users. I worry we're losing all of this as we reinvent the UI without understanding what we're rewriting.


It's crazy to me that so many software companies collectively dumped millions, perhaps even billions, of dollars into R&D to learn how to do UI/UX effectively in the '80s, '90s, and '00s, and every new generation just ignores the previous one completely because it looks dated.

Relevant xkcd: https://xkcd.com/1053/

Gen Z, or Gen Alpha, or whatever you call teenagers these days, aren't born knowing how to use a desktop computer. They're born knowing how to use smartphone apps. From what I hear, many of them are terribly ill-equipped to be actually productive with a desktop.

They're no different from an office guy who had never used a computer in the '90s, or from a kid who got a PC from his parents in the '00s.

Why would the GUI have to adapt to modern users when a new user now is no wiser in practice than a new user back then?


>every new generation just ignores the previous one

One of the easiest first steps when creating your own identity as the new generation is to reject your predecessors wholesale. This goes triple for anything involving fashion, and GUIs are, to some extent, a fashion statement.


> I worry we’re losing all of this as we reinvent the UI without understanding what we’re rewriting.

It will get rediscovered in the future, the same way old ideas are recycled and become new again.


For some reason, this hasn't happened yet, and the time frame that has passed since the "loss" (e.g. from ~2010 on) is comparable to the time frame the UI patterns were originally introduced (e.g. Windows 1.0 to Windows 95).

I'm afraid the prevalence of touch-screen use and the lack of focus on power users by UX designers is making the degradation permanent.


No. Every creation and discovery is influenced by culture and individuals.

Even if you assume the same level of skill and effort will be used (it won’t) there are still many paths and forks to take.

Consider that most “ui designers” come from graphic design or psychology backgrounds, while 90s UI was designed by engineers. Those groups have different values and make different decisions.


Yea, I think a lot of us old timers do notice these "little things" and it's infuriating when they are missing from the Shiny New Web Version of our desktop apps.

And, like others said, the existence of these niceties does not detract from the experience for non-power users. It's simply a bonus for more experienced users, and these bonuses are slowly going away as more and more developers choose the "give up once the thing kind of works" strategy.


I think what makes it worse is that a lot of 'UX' people seem to actively oppose these kinds of power user conveniences because they think they know better than their users.


It’s not that they think they know better, it’s that they know power users are the minority.

Everything is about minimizing expense to an extreme degree to drive “growth” in the current tech economy, so there’s little focus put into things that do not test as an immediate boost to marketability.


I wish this was the reason. It's just mediocrity and pure, raw ignorance. A result of throwing bodies into the technology industry in a desperate attempt to get the software out.


Also power users may be using ad block or opting out of tracking and therefore being left out of the success metrics.

Everything will be optimized towards the clueless.


Considering how much money is being poured into AI development, I would not say that "everything is about minimizing expense". But it is true that user interfaces are not what sells your application to the masses.

And to be honest, trying to sell any kind of "advanced software tool" (like compilers or other very specialized tools that provide small gains over existing free ones) to "power users" is extremely hard. I was involved in something similar around 2010, and many power users think they don't need it, or that they can build it better themselves, and in any case don't want to pay for a complex tool.

Web interfaces are probably a manifestation of exactly that: many users implementing things themselves because they know better. But I prefer writing interfaces with React to writing them with Win32, so I think there is progress.


It's also a lowest-common denominator problem.

Which advanced UX do you implement? MacOS? Windows? Gnome or KDE behaviour? CDE?

One could sniff the user agent and adapt to a recognized OS's behaviour, but we all see how shallow the UX on the web is. I've given up on expecting anything advanced.


Any UI is as good as its base layer. We wouldn't have $subj if MS hadn't implemented it. But when you tell them that the browser UI model is just useless crap with smooth animations, you get hellvoted, yelled at, and have fragile css/$()/useX incantations thrown at you.

> Which advanced UX do you implement? MacOS? Windows? Gnome or KDE behaviour? CDE?

Browser advanced UX. Not current browser UX, but a hypothetical one which doesn’t suck.


Accessibility enhancements are not just meant for power users, though. Some people depend on them.


There will be no power users when you take away all the powers.


Yes, UX people do recognize it's weird to have something invisibly act a certain way that disobeys the plain inputs.


Desktop programmers got these things mostly for free, even if they gave up as soon as it compiled.

Also, I focused mostly on the web, but things like Apple/Gnome removing menus and titlebars are a problem as well.


I don't know if this is so clear-cut that it's worth making sweeping judgements based on it. I don't think it's a great idea for pressing the same letter twice to select the second item in the list starting with that letter.

From another fellow grumpy old-timer who appreciates details: this looks more like an undiscoverable alternate mode, invisible to the user and hacked in by a too-clever engineer, than a power-user nicety.


This is the way I remember it working originally. It wasn't until later that I discovered you could type the letters of something to jump to that. So for me, typing the first letter multiple times was just the intuitive and obvious way to do it.
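The behaviour being debated, prefix match with same-letter cycling as a fallback, can be sketched roughly like this. This is a guess at the semantics described in the thread, not the actual Win32 implementation:

```typescript
// Sketch of type-ahead selection in a list box: try the whole typed buffer
// as a prefix first; if nothing matches and the buffer is one repeated
// letter, cycle through the items starting with that letter.
function typeAheadIndex(items: string[], buffer: string, current: number): number {
  const lower = items.map(s => s.toLowerCase());
  const buf = buffer.toLowerCase();

  // Prefix match first: typing "ll" lands on "Llama" if such an item exists.
  const hit = lower.findIndex(s => s.startsWith(buf));
  if (hit !== -1) return hit;

  // Fallback: repeated presses of one letter cycle through items starting
  // with that letter ("ll" -> the 2nd item beginning with "l", wrapping).
  if (buf.length > 1 && [...buf].every(c => c === buf[0])) {
    const starts: number[] = [];
    lower.forEach((s, i) => { if (s.startsWith(buf[0])) starts.push(i); });
    if (starts.length > 0) return starts[(buf.length - 1) % starts.length];
  }

  return current; // no match at all: keep the current selection
}
```

With ["Apple", "Llama", "Lock"], typing "ll" prefix-matches "Llama"; with ["Apple", "Lamp", "Lock"], the same "ll" has no prefix match and cycles to "Lock".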


It is discoverable by googling "windows tips and tricks". Reading articles like that went out of fashion too, though. The thing with "power" and "advanced" features is that you have to learn them, because otherwise the info for every feature would have to be on screen, and there's only so much space.

This whole discoverability BS is what has driven UI into intuitive uselessness for everyone, when there are actually two groups: first-time (or one-time) users and just users ("power" or not). For some reason modern UX designers think everyone is in the first group, even when an app is naturally multi-time, heavy-use.


> because otherwise that info for all features had to be on screen and there’s only so much space on it.

There's practically infinite space on screen, since that space has a time dimension and can be tied to context, which also varies. Take this feature: you could show a tip during the first few searches within a list; you don't need to keep the explanation on screen permanently. Or you could hint at "ll" by giving the second "l" a different style, or something similar.

Undiscoverability is precisely one big reason these things get axed: since they're not used, they get forgotten in other contexts (the web, new UI frameworks, etc.).

And yes, needing to read random articles like this is a major failure of discoverability, as it decreases reach and increases cost for the many poor users.


I hate the "discoverability" trend. I don't want discoverable UI. I want UI that is consistent with all my other apps, which I have already learned. I should not have to have an app nudge me to "discover" its oh-so-cleverly-hidden features. I'm not Dora the fucking Explorer, I'm a computer user who expects everything on the platform to behave predictably and consistently.

And while we are at it, "lack of use" is not a good reason to axe features. Some things, by their nature, just don't get used as much but when they are needed they are needed. Product designers have become slaves to their telemetry and metrics, and are letting the tail wag the dog.


I wasn't talking about a trend, but real discoverability, so your rant is misplaced.

> expects everything on the platform to behave predictably and consistently.

and poor discoverability makes this expectation even less likely to be met

Similarly

> "lack of use" is not a good reason to axe features.

But it is a great reason not to implement features in the new framework since you're not even aware of them because they're undiscoverable! (I know, I know, not to you, you've already wasted time doing the discovery the hard way by reading some obscure blog, but then you go teach the devs of those new frameworks!)

Also, what is this imaginary telemetry that is able to track that I want ll to land on llama instead of blindly following invisible-mode#1? It would be awesome if product designers were that competently informed, we would've gotten universally great UIs!


Real discoverability is about as real as the discoverability of the feature under discussion, so we're all misplaced. That's what bugs me the most: lots of talk, crap UIs. Rationalizing it to death instead of cultivating feature cultures that already work and don't require "design".


Probably more historical. My recollection is that lists originally worked that way (same letter, many times) for a long time. Then, a decade-plus later, I realized it had become smarter.


> Definitely, it’s an industry-wide ailment caused by a focus on the web and neophyte users.

The constant praise for web UI despite the issues always irritated me deeply.

It's like you're supposed to only say nice things about where we are because other people are saying that. And the more impressionable among us believe that because the Thought Leaders are saying that, then it is truth. And the circle continues.


Because most of the time you didn't have to implement this: the UI toolkit did. When you have to do everything from scratch, most people don't.

Take a simple navbar with a menu of links on a webpage. This has existed forever. It should be possible to open the menu with your keyboard and use up/down to navigate it. But that means writing JS code, so devs who built it from scratch didn't do it, and only those who used a UI library with that already programmed in by someone else got it.
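To illustrate: even the minimal arrow-key handling a hand-rolled menu needs is code somebody has to write. A real handler would also cover Escape, type-ahead, focus trapping, and ARIA roles; the names below are illustrative, not from any framework.

```typescript
// Pure helper computing where keyboard focus should move in a menu of
// `count` items; wrapping at the ends matches common desktop menus.
function nextMenuIndex(current: number, key: string, count: number): number {
  switch (key) {
    case "ArrowDown": return (current + 1) % count;         // wrap to top
    case "ArrowUp":   return (current - 1 + count) % count; // wrap to bottom
    case "Home":      return 0;
    case "End":       return count - 1;
    default:          return current;                       // ignore other keys
  }
}

// Browser wiring (sketch; `menu`, `links`, and `focusIndex` are assumed):
// menu.addEventListener("keydown", (e) => {
//   focusIndex = nextMenuIndex(focusIndex, e.key, links.length);
//   links[focusIndex].focus();
// });
```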


But "waaaahhhh native apps are soooo hard" they cry.

IDK man, every project from a single developer who uses Win32 widgets is infinitely better, more predictable, more usable, and more professional looking than any shitty electron app.


I agree. I'm writing a desktop app based on a web stack and I'm agonizing over tiny details in how tabbing should work and how the arrow keys should interact with lists etc. I don't know if my UI is any good, but I know that good UI takes a tremendous amount of attention to detail.

