I'm the same. Having a slew of expert-tuned models or submodels or whatever the right term is for each kind of problem seems like the "cheating" way (but also the way I would have expected this kind of thing to work, as you can use the right tool for the job, so to speak). And then the overall utility of the system is how well it detects and dispatches to the right submodels and synthesises the reply.
Having one massive model that you tell what you want with a whole handbook up front actually feels more impressive. Though I suppose it's essentially doing the submodels thing implicitly internally.
It would be a fascinating thing to happen though. It makes me think of the Greg Egan story Unstable Orbits in the Space of Lies. But instead of being attracted into religions based on physical position relative to a strange attractor, you're sucked in based on your location in the phase space of an AI's (for whatever definition of AI we're using today) collection of contexts.
It's also a little bit worrying because the information here isn't mysterious or ineffable, it's neatly filed in a database somewhere and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of correlating realtime sentiment analysis with actions taken got us from 2016 to here. This data has the potential to be a lot richer, and to permit not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.
I wouldn't call it fascinating. It's either sloppy engineering or failure to explain the product. Not leaking user details to other users should be a given.
It would absolutely be fascinating. Unethical in general and outright illegal in countries that enforce data protection laws, certainly. Starting hundreds of microreligions that evolve in real time, being able to track them per-individual with second-by-second timings, and being able to A-B test modifications (or Α-Ω test, if you like!) would be the most interesting thing to happen in cognitive science ever, and in theology in at least centuries.
Islam has a very similar concept in the Dajjal (deceptive Messiah) at the end times. He is explicitly described as a young man blind in his right eye, however, so at least he should be obvious when he comes! But there are also warnings about other false prophets.
(It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will).
I think all religions have built-in protections calling every other religion somehow false, or they would not have the self-reinforcement needed for multi-generational memetic transfer.
They're also quite inefficient (75% or so round trip, which is not horrible, but still a loss) as you lose energy both in pumping and in generation.
So the more you use them, the more energy you need in the first place, so you try not to use them too much. And the worst thing for a capital-intensive business is to not get used much. Even with the arbitrage advantage, it takes a long time to pay it off. The grid may pay a "retainer" to sweeten the deal, but you can't just build more and expect them all to get that benefit.
It's similar to the problem that "use excess energy to electrolyse water into hydrogen" has: no-one running a multi-billion electrolyser really wants to run it a few hours a day, only when the grid is oversupplied. And on top of that, risk being cut off at the knees decades before breakeven if someone comes along with a cheaper/more profitable way to deal with the oversupply.
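To put rough numbers on the utilisation point, here's a back-of-the-envelope sketch; every figure (prices, plant size, build cost, cycles per year) is an illustrative assumption, not data about any real facility:

```python
# Toy arbitrage economics for a pumped-storage plant.
# All numbers are illustrative assumptions.

round_trip_efficiency = 0.75   # energy out / energy in (~75% as above)
buy_price = 20.0               # EUR/MWh when the grid is oversupplied
sell_price = 80.0              # EUR/MWh at peak demand
energy_pumped_mwh = 1000.0     # energy bought per arbitrage cycle

energy_sold_mwh = energy_pumped_mwh * round_trip_efficiency
margin_per_cycle = energy_sold_mwh * sell_price - energy_pumped_mwh * buy_price
print(f"Margin per cycle: EUR {margin_per_cycle:,.0f}")   # 40,000

# Payback on a capital-intensive build hinges on how often that cycle runs.
capital_cost = 500e6           # assumed build cost, EUR
cycles_per_year = 250          # roughly one full cycle most days
years_to_payback = capital_cost / (margin_per_cycle * cycles_per_year)
print(f"Years to recover capital: {years_to_payback:.0f}")  # ~50
```

Halve the number of cycles and the payback horizon doubles, which is the "capital-intensive business that doesn't get used much" problem in a nutshell.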
I constantly see people reply to questions with "I asked ChatGPT for you and this is what it says" without a hint of the shame they should feel. The willingness to just accept plausible-sounding AI spew uncritically and without further investigation seems to be baked into some people.
At least those folks are acknowledging the source. It's the ones who ask ChatGPT and then give the answer as if it were their own that are likely to cause more of a problem.
That sort of response seems not too different from the classic "let me google that for you". It seems to me that it is a way to express that the answer to the question can be "trivially" obtained by doing the research yourself. Alternatively it can be interpreted as "I don't know anything more than Google/ChatGPT does".
What annoys me more about this type of response is that I feel there's a less rude way to express the same thing.
"Let me google that for you" is typically a sarcastic response pointing out that someone was too lazy to verify something exceptionally easy to answer.
The ChatGPT responses generally seem to be aimed at someone who has a harder question that requires a human (not googleable), and the laziness is in the answer, not the question.
In my view, the roles are reversed: it's the answerer who is wasting other people's time with laziness.
The thing is, the magic robot's output can be wrong in very surprising/misleading/superficially-convincing ways. For instance, see the article we are commenting on; you're unlikely to find _completely imaginary court cases to cite_ by googling (and in that particular case you're likely using a specialist search engine where the data _is_ somewhat dependable, anyway).
_Everything_ that the magic robot spits out needs to be fact checked. At which point, well, really, why bother? Most people who depend upon the magic robot are, of course, not fact checking, because that would usually be slower than just doing the job properly from the start.
You also see people using magic robot output for things that you _couldn't_ Google for. I recently saw, on a financial forum, someone asking about ETFs vs investment trusts vs individual stocks, with a specific example of how much they wanted to invest (the context is that ETFs are taxed weirdly in Ireland; they're allowed to accumulate dividends without taxation, but as compensation they're subject to a special gains tax which is higher than normal CGT, and that tax is assessed as if you had sold and re-bought every eight years, even if you haven't). Someone posted a ChatGPT case study of their example (without disclosing it, tsk; they owned up to it when people pointed out that it was totally wrong).
ChatGPT, in its infinite wisdom, provided what looked like a detailed comparison with worked examples... only the timescale for the individual stocks was 20 years, the ETFs 8 years (also it screwed up some of the calculations and got the marginal income tax rate a few points wrong). It _looked_ like something that someone had put some work into, if you weren't attuned to that characteristic awful LLM writing style, but it made a mistake that it's hard to imagine a human ever making. Unless you worked through it yourself, you'd come out of it thinking that individual stocks were clearly a _way_ better option; the truth is considerably less clear.
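For what it's worth, the comparison is easy to do correctly over a single shared horizon. The sketch below is only illustrative: the 41% exit tax, 33% CGT and 7% growth are assumed figures (roughly the headline Irish rates, but check current rules), and it ignores dividend income tax on the directly held shares, which is a big part of why the real answer is murkier than it looks:

```python
# Rough comparison: Irish-domiciled ETF (deemed disposal every 8 years,
# exit tax) vs directly held shares (CGT only on the actual sale),
# over the SAME horizon for both -- the thing the ChatGPT answer fumbled.
# Rates and growth are illustrative assumptions, not tax advice.

initial = 100_000.0
annual_growth = 0.07
years = 24                 # three full deemed-disposal periods
exit_tax = 0.41            # assumed ETF exit-tax rate
cgt = 0.33                 # assumed CGT rate on shares

# ETF: gains are taxed every 8 years as if sold and re-bought.
etf_value = initial
cost_basis = initial
for year in range(1, years + 1):
    etf_value *= 1 + annual_growth
    if year % 8 == 0:
        gain = etf_value - cost_basis
        etf_value -= gain * exit_tax   # tax paid out of the invested amount
        cost_basis = etf_value         # basis resets after deemed disposal

# Shares: gains compound untaxed until a single sale at the end.
# (Dividend income tax along the way is NOT modelled here.)
shares_value = initial * (1 + annual_growth) ** years
shares_after_tax = shares_value - (shares_value - initial) * cgt

print(f"ETF after {years} years:    {etf_value:,.0f}")
print(f"Shares after {years} years: {shares_after_tax:,.0f}")
```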
The issue is not truth, though.
It's the difference between completely fabricated but plausible text generated through a stochastic process versus a result pointing towards writing that at least exists somewhere on the internet and can be referenced.
Said source may have completely unhinged and bonkers content (Time Cube, anyone?), but it at least exists prior to the query.
Go look at "The Credit Card Song" from 1974. It's intended to be humorous, but the idea of uncritically accepting anything a computer said was prevalent enough then to give the song an underlying basis.
'member the moral panic when students started (often uncritically) using Wikipedia?
Ah, we didn't know just how good we had it...
(At least it is (was?) real humans doing the writing, you can look at the modification history, well-made articles have sources, and you can debate issues with the article on the Talk page and even maybe contribute directly to it...)
If I wanted ChatGPT's opinion, I'd have asked ChatGPT. If I'm asking others, it's because it's too important to be left to ChatGPT's inaccuracies and I'm hoping someone has specific knowledge. If they don't, then they don't have to contribute.
It's not constructive to copy-paste LLM slop to discussions. I've yet to see a context where that is welcome, and people should feel shame for doing that.
Watching nearly the entire software-financial complex burn to the ground when the vaunted "moats" dry up is going to be a hell of a sight. All this AI hype is just going to end up commodifying the very thing that the entire industry is built on: management of processes.
Places that understand that physical production cannot be abstracted forever will prevail.
Correction: those that don't enter a polling station. What you do in there is up to you. You can cast a vote, spoil the ballot, cast a "donkey vote" (numbering the options in the order printed), or leave the ballot empty, as long as it goes in the box.
> Going for someone's feelings is just kinda silly.
It's also extremely counterproductive, because anyone who did care about their work being any good will quickly be turned into a grey rock by phrases like "you messed up", "unacceptable" and "horrific".
And those who don't care about their work also don't care a jot what you think about it.
I think the core issue is that everyone reacts differently to different approaches to conveying a problem. Scream "You're trash, you suck!" at some people and their motivation to succeed explodes. Do it to others and they just collapse and check out. Some people screw up big, get a gentle talking-to, and walk away feeling like it's no big deal. Others are dead inside and know they can never make that mistake again.
There is no magic response; it needs to be tailored to the individual, and being able to read what kind of response is right for which individual is part of what separates shitty managers from great ones.