My view since 2016 has been that winning elections in the US is about telling a good story. Whether you're truthful or not doesn't really matter as long as people believe it.
Trump's story is pretty ridiculous; there's no way his plans for fixing the economy, the border, or the whole department of efficiency thing will work anywhere close to as well as he says. Regardless, his demographic believes it.
Kamala's story was a lot weaker: it involved a ton of hard truths and concessions about things her base cares about, such as Gaza. On top of that, her border story was mostly the same as Trump's. If you like the border story, why not go for the guy pushing it harder?
Obama had a pretty good story in 2008 (the whole hope thing). Dems need to get back to that.
It would have been pretty silly for Harris to campaign on a Hope and Change™ platform, since that would imply she is doing a very poor job as incumbent.
Not yet, but I'll probably open source it eventually! Still need to clean things up a lot and implement missing functionality. For example, I haven't even bothered to implement audio capture yet, because I wanted to try video first.
A technique I've found that works for learning languages in Anki is to generate or download a massive deck, suspend all of it, and then whenever I encounter an unfamiliar word in the wild, unsuspend it and set it up for review. Takes like 5 seconds.
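If you want to cut even those 5 seconds down, the unsuspend step can be scripted. A minimal sketch, assuming the AnkiConnect add-on is running on its default port; the deck name and search query are placeholders for whatever your deck uses:

```python
# Minimal sketch: unsuspend every card matching a word you just encountered.
# Assumes the AnkiConnect add-on is listening on its default port (8765);
# the deck name and query format are placeholders, adjust for your setup.
import json
import urllib.request

def anki(action, **params):
    payload = json.dumps({"action": action, "version": 6, "params": params}).encode()
    with urllib.request.urlopen("http://localhost:8765", payload) as resp:
        reply = json.load(resp)
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

def unsuspend_word(word, deck="Vocabulary"):
    card_ids = anki("findCards", query=f'deck:"{deck}" "{word}"')
    if card_ids:
        anki("unsuspend", cards=card_ids)
    return card_ids

if __name__ == "__main__":
    import sys
    print(unsuspend_word(sys.argv[1]))
```

Bind that to a hotkey or a shell alias and the whole "encounter word, start reviewing it" step becomes one command.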
You're in too deep if you seriously believe that this is possible currently. All these ChatGPT things have a very limited working memory and can't act without a query. That Reddit post is clearly not an AI.
We have models with context size well over 100k tokens - that's large enough to fit many full-length books. And yes, you need an input for the LLM to generate an output. Which is why setups like this just run them in a loop.
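A bare-bones version of such a loop, sketched with the OpenAI Python client; the model name, goal, and tool-handling step are placeholders, not anything from the post being discussed:

```python
# Bare-bones agent loop sketch: the model's output is fed back in as the next
# input, so it keeps "acting" without a human writing a fresh query each turn.
# Assumes the openai package and an API key in the environment; model name and
# stop condition are placeholders.
from openai import OpenAI

client = OpenAI()
history = [
    {"role": "system", "content": "You are an autonomous agent. Decide your next step."},
    {"role": "user", "content": "Goal: <whatever the operator typed in>"},
]

for step in range(10):  # cap the loop instead of letting it run forever
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    action = reply.choices[0].message.content
    print(f"step {step}: {action}")
    # In a real setup this is where the text would be parsed into tool calls
    # (search, write file, post message, ...) and the tool output appended.
    history.append({"role": "assistant", "content": action})
    history.append({"role": "user", "content": "Observation: <tool output would go here>"})
```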
I don't know if GPT-4 is smart enough to be successful at something like what OP describes, but I'm pretty sure it could cause a lot of trouble before it fails either way.
The real question here is why this is concerning, given that you can - and we already do - have humans doing this kind of stuff, in many cases with considerable success. You don't need an AI to run a cult or a terrorist movement, and there's nothing about an AI that makes it intrinsically better at it.
Sooner than even the most pessimistic among us have expected, a new, evil artificial intelligence bent on destroying humankind has arrived.
Known as Chaos-GPT, the autonomous implementation of ChatGPT is being touted as "empowering GPT with Internet and Memory to Destroy Humanity."
So how will it do that?
Each of its objectives has a well-structured plan. To destroy humanity, Chaos-GPT decided to search Google for weapons of mass destruction in order to obtain one. The results showed that the 58-megaton “Tsar bomb”—3,333 times more powerful than the Hiroshima bomb—was the best option, so it saved the result for later consideration.
It should be noted that unless Chaos-GPT knows something we don’t know, the Tsar bomb was a once-and-done Russian experiment and was never productized (if that’s what we’d call the manufacture of atomic weapons.)
There's a LOT of things AI simply doesn't have the power to do, and there's some humorous irony in the rest of the article about how knowing something is completely different from having the resources and ability to carry it out.
For a while, I have been making use of Clever Hans as a metaphor. The horse seemed smarter than it really was.
They can certainly appear to be very smart due to having the subjective (if you can call it that) experience of 2.5 million years of non-stop reading.
That's interesting, useful, and is both an economic and potential security risk all by itself.
But people keep putting these things through IQ tests; since there's always the question of "but did they memorise the answers?", I think we need to treat the lowest score as the highest they might actually have.
At first glance they can look like the first graph, with o1 having an IQ score of 120; I think the actual intelligence, as in how well it can handle genuinely novel scenarios in the context window, is upper-bounded by the final graph, where it's more like 97.
So, with your comment, I'd say the key word is: "currently".
Correct… for now.
But also:
> All these chatgpt things have a very limited working memory and can't act without a query.
It's easy to hook them up to a RAG, the "limited" working memory is longer than most humans' daily cycle, and people already do put them into a loop and let them run off unsupervised despite being told this is unwise.
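The RAG part really is just "fetch some stored notes and prepend them to the prompt". A toy sketch, with keyword overlap standing in for a real embedding search or vector store:

```python
# Toy RAG sketch: pull the most relevant stored notes and stuff them into the
# prompt before calling the model. Keyword overlap stands in for a proper
# embedding search / vector store; build_prompt's output goes to whatever
# LLM call you already have.
notes = [
    "2024-11-02: told Alice I'd review her patch by Friday",
    "2024-11-03: the staging DB password was rotated",
    "2024-11-05: prefer replying to voice messages with short text",
]

def retrieve(query, k=2):
    words = set(query.lower().split())
    scored = [(len(words & set(n.lower().split())), n) for n in notes]
    return [n for score, n in sorted(scored, reverse=True)[:k] if score > 0]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Relevant notes:\n{context}\n\nUser message: {query}\n\nReply:"

print(build_prompt("what did I promise Alice?"))
```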
I've been to a talk where someone let one of them respond autonomously in his own (cloned) voice just so people would stop annoying him with long voice messages, and the other people didn't notice he'd replaced himself with an LLM.
Anything would be better than the current system where you basically just have one source.
Independently run mirrors all over the world, along with snapshots.
Have the occasional fork or two. Say you're from a small town in Northern Illinois. If you have 2 TB of image archives from a defunct local newspaper, it might be a good fit for a photography fork even if it wouldn't make sense for the main archive.