I just tested 1700 Mbit/s from my iPhone 17 PM in the next room over from my Ubiquiti E7 and I don’t even have MLO enabled. Something’s very wrong if you’re only getting 600 Mbit/s.
My theory, beyond their organizational incentive issues, is that Google’s UIs are so pathetically bad because the company is so gung ho about “web first”. The web is a wonderful thing, but it’s set UI development back by decades.
I think the decline in UI quality is real, but I don't think the web takes all of the blame. The blame that it does take is due to a sort of mixed bag of advantages and disadvantages: web technologies make it quicker and easier to get something interactive on the screen, which is helpful in many ways. On the other hand, because it lowers the effort needed to build a UI, it encourages the building of low-effort UIs.
Other forces are to blame as well, though. In the 80s and 90s there were UI research labs in industry that did structured testing of user interactions, measuring how well untutored users could accomplish assigned tasks with one UI design versus another, and there were UI-design teams that used the quantitative results of such tests to design UIs that were demonstrably easier to learn and use.
I don't know whether anyone is doing this anymore, for reasons I'll mention below.
Designing for use is one thing. Designing for sales is another. For sales you want a UI to be visually appealing and approachable. You probably also want it to make the brand memorable.
For actual use you want to hit a different set of marks: you want it to be easy to learn. You want it to be easy to gradually discover and adopt more advanced features, and easy to adapt it to your preferred and developing workflow.
None of these qualities is something that you can notice in the first couple of minutes of interacting with a UI. They require extended use and familiarization before you even know whether they exist, much less how well designed they are.
I think that there has been a general movement away from design for use and toward design for sales. I think that's perfectly understandable, but tragic. Understandable because if something doesn't sell then it doesn't matter what its features are. Tragic because optimizing for sales doesn't necessarily make a product better for use.
If a large company making a utility cares, they'll have a UX person or team, sometimes part of a design team, to make sure things are usable.
If you're really big, you can also test in production with A/B testing. But as you said, the motivation tends to be getting people to click some button that creates revenue for the company (subscribe, buy, click an ad).
Somewhat related to this, the google aistudio interface was really pushing gdrive. I think they reduced it now, but in the beginning if you wanted to just upload a single file, you had to upload it to gdrive first and then use it.
There was also some annoying banner you couldn't remove above the prompt input that tried to get you to connect to gdrive.
Yes true. It's basically form over function and it's not just limited to Web UIs.
Windows 11, iOS 7, and iOS 26 are just some examples of non-web UIs that focused first on optimizing for sales, i.e. making something look good without thinking about the usability implications.
Maybe in the past this was true, or if you’re using an inferior DB. I know first hand that a Postgres table can work great as a queue for many millions of events per day processed by thousands of workers polling for work from it concurrently. With more than a few hundred concurrent pollers you might want a service, or at least a centralized connection pool in front of it though.
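For anyone curious what that looks like in practice, here's a minimal sketch of the usual claim-and-delete polling pattern with FOR UPDATE SKIP LOCKED. The `jobs` table, columns, and connection string are hypothetical illustrations, not the setup described above:

```python
# Hypothetical Postgres-as-queue worker loop; FOR UPDATE SKIP LOCKED lets many
# workers poll the same table concurrently without blocking on claimed rows.
import time
import psycopg2

conn = psycopg2.connect("dbname=app")  # assumed connection string

CLAIM_SQL = """
    DELETE FROM jobs
    WHERE id = (
        SELECT id FROM jobs
        ORDER BY enqueued_at
        FOR UPDATE SKIP LOCKED
        LIMIT 1
    )
    RETURNING id, payload;
"""

def poll_forever(handle):
    while True:
        with conn.cursor() as cur:
            cur.execute(CLAIM_SQL)
            row = cur.fetchone()
            if row is None:
                conn.rollback()
                time.sleep(0.5)  # back off while the queue is empty
                continue
            job_id, payload = row
            handle(job_id, payload)  # process inside the claiming transaction
            conn.commit()            # commit removes the claimed row
```

Each worker claims and deletes a row inside its own transaction, which is what keeps thousands of concurrent pollers from stepping on each other.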
Millions of events per day is still in the small-queue category in my book. Postgres LISTEN/NOTIFY doesn't scale, and polling on hot databases can suddenly become more difficult, since you're constantly throwing away (and vacuuming) dead tuples.
10 messages/s is only 864k/day. But in my testing (with Postgres 16) this doesn't scale that well when you need tens to hundreds of millions per day. Redis is much better than Postgres for that (for a simple queue), and beyond that Kafka is what I would choose if you're in the low few hundred millions.
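To make "simple queue" concrete, this is roughly the shape I mean in Redis (the key name, payload format, and redis-py client are assumptions for illustration):

```python
# Rough sketch of a simple Redis list-based queue (hypothetical "events" key).
import json
import redis

r = redis.Redis()  # assumes a local Redis with default settings

def enqueue(event: dict) -> None:
    r.lpush("events", json.dumps(event))  # producers push onto the list

def worker(handle) -> None:
    while True:
        _key, raw = r.brpop("events")  # blocks until an item is available
        handle(json.loads(raw))
```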
This "per hour" and "per day" business has to end. No one cares about "per day" and it makes it much harder to see the actual talked about load on a system. The thing that matters is "per second", so why not talk about exactly that? Load is something immediate, it's not a "per day" thing.
If someone is talking about per day numbers or per month numbers they're likely doing it to have the numbers sound more impressive and to make it harder to see how few X per second they actually handled. 11 million events per day sounds a whole lot more impressive than 128 events per second, but they're the same thing and only the latter usually matters in these types of discussions.
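The conversion is just dividing by the 86,400 seconds in a day, which gives roughly the ~128/s figure above:

```python
events_per_day = 11_000_000
seconds_per_day = 24 * 60 * 60            # 86,400
print(events_per_day / seconds_per_day)   # ~127.3 events/second
```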
I listened to an interview with a researcher a while back who hypothesized that human reasoning probably evolved not mostly for the abstract logical reasoning we associate with intelligence, but to “give reasons” to motivate other humans or to explain our previous actions in a way that would make them seem acceptable…social utility basically. My experience with next token predicting LLMs aligns with human communication. We humans rarely complete a thought before we start speaking, so I think our brains are often just predicting the next 1-5 words that will be accepted by who we’re talking to based on previous knowledge of them and evaluation of their (often nonverbal) emotional reactions to what we’re saying. Our typical thought patterns may not be as different from LLMs’ as we think.
IIRC the researcher was Hugo Mercier, probably on Sean Carroll’s fantastic Mindscape podcast, but it might have been Lex Fridman before he strayed from science/tech.
"reasoning evolved not to complement individual cognition but as an argumentative device" -- and it has more positive effects at social level than at individual level
> and it has more positive effects at social level than at individual level
Now it raises the question: should we be reasoning in our heads, then? Is there a better way to solve intractable math problems, for example? Is math itself a red herring created for argumentative purposes?
We can never know, but I personally favour the rise of "handedness" and the tool-making (technological) hypothesis. To make and use tools, and to transfer the recipes and terminology, we must educate one another.
"In the physical adaptation view, one function (producing speech sounds) must have been superimposed on existing anatomical features (teeth, lips) previously used for other purposes (chewing, sucking). A similar development is believed to have taken place with human hands and some believe that manual gestures may have been a precursor of language. By about two million years ago, there is evidence that humans had developed preferential right-handedness and had become capable of making stone tools. Tool making, or the outcome of manipulating objects and changing them using both hands, is evidence of a brain at work." [1]
Not choosing exactly what words you want to use is something very different from not completing a thought, IMO. When you speak you may not know exactly which words you're going to use to communicate an idea, but you already know the idea that you're communicating. By contrast, LLMs have no such concept as an idea - only words.
And it's also important that language and words are just another invention of humanity. We achieved plenty before we had any meaningful language whatsoever. At the bare minimum we reproduced - and think about all that's involved in forming a relationship, having a child, and then successfully raising that child for them to go on and do the same, all without any sort of language. It emphasizes that ideas and language are two very different concepts, at least for humans.
Interesting. N.J. Enfield (Linguist, Anthropologist) makes a similar point about the purpose for which language evolved in "Language vs Reality". I'm paraphrasing loosely, but the core argument is that the primary role of language is to create an abstraction of reality in order to convince other people, rather than to accurately capture reality. He talks about how there are two layers of abstraction - how our senses compress information into the higher-order concepts that we consciously perceive, and how language further compresses information about these higher-order concepts we have in our minds.
Why would a human need to develop the ability to convince others if truth should be enough? One would have to make the argument that convincing others and oneself involves things that are not true to at least one party (as far as they know). I don't know why a species would develop misunderstanding if truth is always involved. If emotions/perception are the things that create misunderstanding, then I can see the argument for language as necessary to fix misunderstanding in the group. On some level, nature thought it correct to fix misunderstanding on a species level.
The problem with this line of reasoning is that “truth” is a word, and therefore not “enough” by definition of not being a physical thing in the world. Without communication there can be no lies, so truth doesn’t mean anything.
If by “truth” you mean something more like Kant’s “the thing in itself”, then the problem there is that we need abstraction. If I show you how to make an arrowhead, somehow I need to convey that you can follow the same process with your own piece of flint, and that they are both arrowheads. Without any language abstraction my arrowhead and your arrowhead are just two different rocks with no relation to each other.
I have had the same suspicion. I can propose a new kind of ongoing Turing-like test where we track how many words are suggested on our phones (or computers) as we type. On my phone it guesses the next single word pretty well, so why not the next two? Then 3... imagine it, halfway through a message, "finishing your sentence" as close friends and family often do. Then why should it wait for halfway? What are the various milestones of finishing the last word, the last 5 words, half the sentence, 80%, etc.?
There's also the whole predictive processing camp in cognitive science whose position is loosely similar to the author's, but the author makes a much stronger commitment to computationalism than other researchers in the camp.
This just doesn't explain things by itself. It doesn't explain why humans would care about reasoning in the first place. It's like explaining all life as parasitic while ignoring where the hosts get their energy from.
Think about it, if all reasoning is post-hoc rationalization, reasons are useless. Imagine a mentally ill person on the street yelling at you as you pass by: you're going to ignore those noises, not try to interpret their meaning and let them influence your beliefs.
This theory is too cynical. The real answer has got to have some element of "reasoning is useful because it somehow improves our predictions about the world"
Sometimes you're in a bubble. I'm in some niche fiction communities, and we can get some really warped perceptions about what people like to read because we are so deep in our own niche.
There’s definitely a need for this! I’ve been thinking hard recently about how I could go part time without leaving the industry. Almost all of the good SWE work situations I’ve heard of require full time. Even the contractors I’ve worked with in my career have been full time.
It’s jarring and galling to see management and science put together in a way that’s suggestive of management being a science. It reeks of stolen valor.
I think in this context Management Science is an older term that was synonymous with operations research. The flagship journal of INFORMS (the Institute for Operations Research and the Management Sciences) has the same name. It's about studying how to optimize things, with lots of statistics and math. Stanford was at the forefront of the field from George Dantzig onwards. So it's not trying to make management a “science” in this case.
Finding a good human therapist match is difficult, and it's far more difficult for people who are neurodivergent in any way. If a therapist isn't experienced in the way ADHD or autistic brains work differently they often simply don't have the mental model required to understand and help at all and they give advice that's completely inappropriate. They might have worked with a lot of OCD people, but they won't really get it unless they truly specialize in it or are afflicted themselves. Most of the popular depictions of mental differences, and even many of the perspectives in medical literature of them are wrong or use technical language that's terribly easy to misunderstand out of context. I'm terrified to think about the misconceptions that an LLM would have after ingesting all the Internet content about mental differences/illnesses!
And it's not just having a good mental or virtual-mental model of an illness; personal circumstances also make all the difference. A human therapist learns about your personal circumstances and history, and learns the ways that your individual thought patterns diverge from the norm and from the norm of people whose brains differ in the same way as yours. LLMs as they are now don't incorporate memory of past conversations and will never be able to learn about you to customize their responses appropriately!
At most I could accept this as a feature for children’s accounts that parents could choose to enable. I’m not sure I’m OK with even that, though, because slippery slopes quite often start with “think of the children”.
Audio/Video/Chat services should probably be demanded (maybe even required by law) to be dumb pipes that never filter ANYTHING.