Sadly, current "dishwasher" models are neither self-loading nor self-unloading. (Seems like they should be able to take a tray of dishes, sort them, load them, and stack them after cleaning.)
The problem is more about doing it in sufficiently little space, and with little enough water and energy. One that you feed dishes individually, and that immediately washes them and moves them to storage, should be entirely viable, but it'd be wasteful, and it'd compete with people having multiple small drawer-style dishwashers while offering relatively little convenience over that.
It seems most people aren't willing to pay for multiple dishwashers - even multiple small ones - or to set aside enough space, and that places severe constraints on trying to do better.
> ... it might take you weeks to change your page layout algorithm to accommodate it. If you don’t, customers will open their Word files in your clone and all the pages will be messed up.
Basically, people who use Office have extremely specific expectations. (I've seen people try a single keyboard shortcut, see that it doesn't work in a web-based application, and declare that whole thing "doesn't work".) Reimplementing all that stuff is really time consuming. There's also a strong network effect - if your company uses Office, you'll probably use it too.
On the other hand, people don't have extremely specific expectations for LLMs because 1) they're fairly new and 2) they're almost always nondeterministic anyway. They don't care so much about using the same one as everyone they know or work with, because there's no network aspect of the product.
"Basically, people who use Office have extremely specific expectations."
Interesting point, but to OP's point: this wasn't true when Office was first introduced, and Office still built a dominant market share. In fact, I'd argue these moat-by-idiosyncrasy features are a result of that market share. There is nothing stopping OpenAI from developing their own over time.
Does office actually have a moat? I thought the kids liked Google docs nowadays. (No opinion as to which is actually better, the actual thing people should do is write LaTeX in vim. You can even share documents! Just have everybody attach to the same tmux session and take turns).
If you're writing a spreadsheet in LaTeX, I suspect something has gone very wrong somewhere along the line.
Spreadsheets are as much a calculation environment as they are a table of figures, and if you're technical enough to be writing docs in LaTeX you should be doing the calculations in a more rigorous environment where you don't copy the formulas between cells.
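Something like this toy sketch (plain Python; the line items and figures are made up) is the kind of thing I mean: the formula is written once and applied to every row, instead of being copied cell by cell.

```python
# Toy illustration of "formula defined once" vs. copying =B2*C2 down a column.
line_items = [
    {"item": "widgets", "quantity": 12, "unit_price": 3.50},
    {"item": "gadgets", "quantity": 4,  "unit_price": 19.99},
    {"item": "gizmos",  "quantity": 7,  "unit_price": 0.99},
]

# One definition of the calculation, applied uniformly to every row; nothing
# drifts out of sync when a row is inserted or a column moves.
for row in line_items:
    row["total"] = row["quantity"] * row["unit_price"]

grand_total = sum(row["total"] for row in line_items)
print(f"Grand total: {grand_total:.2f}")
```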
> This wasn't true when Office was first introduced and Office still created a domineering market share
It was absolutely true at that time, but Microsoft was already a behemoth monopoly with deep customer connections throughout the enterprise, and was uniquely positioned to overcome the moats of its otherwise fairly secure competitors, whose users were just as loyal then as the users the GP describes now.
Even if OpenAI could establish some kind of moat, office applications make for very poor analogy as to how they might or whether they need to.
It totally was, though a bit different. Excel has Lotus keybindings available to this day; the spreadsheet was the home computer's first killer app, and Microsoft killed it and took the market share.
(I've been coding long enough that what Joel writes about there just seems obvious to me: of course it happened like that, how else would it have?)
So, a spreadsheet in the general sense — not necessarily compatible with Microsoft's, but one that works — is quite simple to code. Precisely because it's easy, that's not something you can sell directly, because anyone else can compete easily.
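To be concrete about how small that core is, here's a toy sketch of a formula-evaluating "spreadsheet" (not remotely Excel-compatible; the cell naming and the eval trick are purely illustrative):

```python
import re

class Sheet:
    def __init__(self):
        self.cells = {}              # name -> number or "=expression"

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        value = self.cells.get(name, 0)
        if isinstance(value, str) and value.startswith("="):
            expr = value[1:]
            # Resolve referenced cells recursively (no cycle detection here).
            refs = {r: self.get(r) for r in re.findall(r"[A-Z]+[0-9]+", expr)}
            return eval(expr, {"__builtins__": {}}, refs)
        return value

sheet = Sheet()
sheet.set("A1", 2)
sheet.set("A2", 3)
sheet.set("A3", "=A1 * A2 + 10")
print(sheet.get("A3"))   # prints 16
```

Everything that makes Office hard to clone is exactly what's missing from that sketch.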
And yet, Microsoft Office exists, and the Office suite is basically a cost of doing business. Microsoft got to be market-dominant enough to build all the complexity that became a moat and made it hard to build a clone. Not the core tech of a spreadsheet, but everything else surrounding that core tech.
OpenAI has a little bit of that, but not much. It's only a little because, while their API is cool, it's so easy to work with that you can (as I have) ask the original 3.5 chat model to write its own web UI. As it happens, mine is already out of date, because the real one handles markdown etc. better, so the same sorts of principles apply, even on a smaller scale like this where it's more a matter of "keeping up in real time" rather than "a 349-page PDF file just to get started".
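For a sense of what "easy to work with" means here, the whole integration is roughly one HTTP call. This is just a sketch; the model name and prompt are placeholders, and the endpoint details may have moved on since then:

```python
import os
import requests

# Minimal call to the Chat Completions endpoint: a model name and a list of
# messages in, the assistant's reply out. Check the current API docs before
# relying on this.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": "Write a minimal chat web UI for this API."}
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```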
OpenAI is iterating very effectively and very quickly with all the stuff around the LLM itself, the app, the ecosystem. But so is Anthropic, so is Apple, so is everyone — the buzz across the business world is "how are you going to integrate AI into your business?", which I suspect will go about the same as when it was "integrate the information superhighway" or "integrate apps", and what we have now in the business world is to the future of LLMs as Geocities was to the web: a glorious chaotic mess, upon which people cut their teeth in order to create the real value a decade later.
In the meantime, OpenAI is one of several companies that has a good chance of building up enough complexity over time to become an incumbent by a combination of inertia and years of cruft.
But also only a good chance. They may yet fail.
> On the other hand, people don't have extremely specific expectations for LLMs because 1) they're fairly new and 2) they're almost always nondeterministic anyway. They don't care so much about using the same one as everyone they know or work with, because there's no network aspect of the product.
For #1, I agree. That's why I don't want to bet if OpenAI is going to be to LLMs what Microsoft is to spreadsheets, or if they'll be as much a footnote to the future history of LLMs as Star Division was to spreadsheets.
For #2, network effects… I'm not sure I agree with you, but this is just anecdotal, so YMMV: in my experience, OpenAI has the public eye, much more so than the others. It's ChatGPT, not Claude, certainly not grok, that people talk about. I've installed and used Phi-3 locally, but it's not a name I hear in public. Even in business settings, it's ChatGPT first, with GitHub Copilot and Claude limited to "and also", and the other LLMs don't even get named.
The end of the article notes "A liar is someone who only says false statements." I agree that this is quite different from the colloquial definition of a liar, someone who mixes truth and lies in order to mislead.
> As a result, in order to determine if a formula is satisfiable, first convert it to conjunctive normal form, then convert the new formula into a course catalog.
I know this is a consequence of NP-completeness and so on and so forth, but I also find it a funny and charming way to phrase it. Once we've solved the fundamental problem (what courses to take), we're able to solve simple specializations and derivatives (boolean satisfiability).
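To make the "determine if a formula is satisfiable" half concrete (the course-catalog encoding is the article's, so I won't try to reproduce it), a brute-force CNF check is only a few lines; it's the exponential blow-up, not the code, that's the problem:

```python
from itertools import product

# Clauses are lists of signed variable indices:
# (x1 OR NOT x2) AND (x2 OR x3)  ->  [[1, -2], [2, 3]]
def satisfiable(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False

print(satisfiable([[1, -2], [2, 3]], 3))   # True
print(satisfiable([[1], [-1]], 1))         # False
```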
Very fun. I wonder if a heap could improve the performance of the "loop through the skyline in order to find the best candidate location" step. (I think it would improve the asymptotic time, if I'm understanding the algorithm correctly, but obviously the real performance is more complex).
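Roughly what I'm imagining, with the caveat that I'm guessing at both the segment representation and what "best candidate" means (lowest-then-leftmost here): a min-heap with lazy deletion of stale entries, so finding the best spot is O(log n) instead of a linear scan.

```python
import heapq

class Skyline:
    def __init__(self, width):
        self.segments = {0: (0, width)}   # x -> (y, length) of live segments
        self.heap = [(0, 0)]              # (y, x) candidates, possibly stale

    def best_candidate(self):
        # Pop until the top of the heap matches a segment that still exists.
        while self.heap:
            y, x = self.heap[0]
            seg = self.segments.get(x)
            if seg is not None and seg[0] == y:
                return x, y
            heapq.heappop(self.heap)      # stale entry, discard
        return None

    def place(self, x, new_y, length):
        # Raise the segment at x after placing a piece (merging etc. omitted).
        self.segments[x] = (new_y, length)
        heapq.heappush(self.heap, (new_y, x))
```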
> Besides talking about the topic, he thinks about how he appears to others, how he may be seen more favorably, how he may win, dominate, impress or escape punishment, and/or how he may avoid or mitigate a perceived attack... Such inner feelings and outward acts tend to create similarly defensive postures in others; and, if unchecked, the ensuing circular response becomes increasingly destructive.
> You only know you’ve shipped when your company’s leadership acknowledge you’ve shipped. A congratulations message in Slack from your VP is a good sign, as is an internal blog post that claims victory. For small ships, an atta-boy from your manager will do. This probably sounds circular, but I think it’s a really important point.
Matt Levine had an interesting point/counterpoint here:
> There are a lot of ways to put your money into the stock market; some of them feel mostly like “fun gambling,” while others feel like “boring sensible investment of retirement savings in the long-term growth of the economy.” A lot of retail brokerage accounts will let you do both. If you are young, carefree and confident, and don’t have very much money, you might gravitate toward the gambling stuff. And then later, as you get more money and responsibility and grow up a bit, you will get more curious about buy-and-hold investing in low-cost index funds.
> And you can do both of those things — and transition between them — in the same brokerage account. This is good! Arguably it is socially useful for brokerage apps to be competitive with, say, sports gambling apps, to be as fun and easy-to-use and exciting and risky as sports gambling. Because investing is competing with sports gambling for young, risk-seeking people’s attention. But the brokerage app is better positioned to transition those people into boring responsible investing. There is no analogous transition in sports gambling. Here is a recent paper finding that sports betting “does not displace other gambling or consumption but significantly reduces savings, as risky bets crowd out positive expected value investments.” Zero-day options are, arguably, a bit better than that.
> Again you cannot get too carried away with this theory. Is it socially beneficial to introduce riskier and zanier financial products to retail brokerages? Well, if the zanier products lure customers away from sports betting, and start them on a road to mature sensible investing, then yes. If the zanier products lure customers away from index funds, and start them on a road away from sensible investing, then no. I’m not sure there’s an a priori answer.
I'd still argue it's all gambling, even if there are many safe and boring bets. We can't guarantee gold won't be obsolete in 40 years, or that we won't find some insane vein or synthesis process that deflates its value. We can't guarantee we won't hit a depression right when you want to cash out.
Leaving your money with banks in the hope that you end up on top is pretty much gambling no matter what, even if it's safer than most traditional gambling.
It does fit the second dictionary definition of gambling: "take risky action in the hope of a desired result". However, at that point almost everything you do in life could be called 'gambling', so it kind of loses all meaning. Buying a Big Mac? Gambling that it isn't contaminated with E. coli. And so on.