>Isn't that what mathematical extrapolation or statistical inference does?
Obviously not, since those are just producing output based 100% on the "sum total of past experience and present (sensory) input" (i.e. the data set).
The parent's constraint is not just about the output merely reiterating parts of the dataset verbatim. It's also about not having the output be just a function of the dataset (which covers mathematical and statistical inference).
Intelligent living beings have natural, evolutionary inputs as motivation underlying every rational thought: a biological reward system in the brain, a desire to avoid pain, hunger, boredom and sadness, and drives to satisfy physiological needs, socialize, self-actualize, etc. These are the fundamental forces that drive us, even if the rational processes are capable of suppressing or delaying them to some degree.
In contrast, machine learning models have a loss function or reward system purely constructed by humans to achieve a specific goal. They have no intrinsic motivations, feelings or goals. They are statistical models that approximate some mathematical function provided by humans.
In my view, absolutely yes. Thinking is a means to an end. It's about acting upon these motivations by abstracting, recollecting past experiences, planning, exploring, innovating. Without any motivation, there is nothing novel about the process. It really is just statistical approximation, "learning" at best, but definitely not "thinking".
Again, the problem is that the definition of "thinking" is totally vague. To me, if I can ask a computer a difficult question it hasn't seen before and it can give a correct answer, it's thinking. I don't need it to have a full and colorful human life to do that.
But it's only able to answer the question because it has been trained on all text in existence written by humans, precisely with the purpose to mimic human language use. It is the humans that produced the training data and then provided feedback in the form of reinforcement that did all the "thinking".
Even if it can extrapolate to some degree (although that's where "hallucinations" tend to become obvious), it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".
> it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".
That's creativity which is a different question from thinking.
I disagree. Creativity is coming up with something out of the blue. Thinking is using what you know to come to a logical conclusion. LLMs so far are not very good at the former but getting pretty damn good at the latter.
> Thinking is using what you know to come to a logical conclusion
What LLMs do is use what they have _seen_ to come to a _statistical_ conclusion. Just like a complex statistical weather forecasting model. I have never heard anyone argue that such models would "know" about weather phenomena and reason about the implications to come to a "logical" conclusion.
I think people misunderstand when they see that it's a "statistical model". That just means that out of a range of possible answers, it picks in a humanlike way. If the logical answer is the humanlike thing to say then it will be more likely to sample it.
In the same way a human might produce a range of answers to the same question, so humans are also drawing from a theoretical statistical distribution when you talk to them.
It's just a mathematical way to describe an agent, whether it's an LLM or human.
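A toy illustration of that sampling view. The candidate answers and scores here are entirely made up, not from any real model; the point is only that the "logical" answer being the most humanlike answer makes it the most likely sample, not the only possible one:

```python
import math
import random

def softmax(scores, temperature=1.0):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates for the prompt "2 + 2 =":
candidates = ["4", "5", "four", "fish"]
scores = [5.0, 1.0, 3.0, -2.0]  # the "logical" answer scores highest

probs = softmax(scores)
choice = random.choices(candidates, weights=probs, k=1)[0]
# "4" is by far the most likely sample, but the others remain possible,
# just as a human might produce a range of answers to the same question.
```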
They're linked but they're very different. Speaking from personal experience, it's a whole different task to solve an engineering problem that's been assigned to you, where you need to break it down and reason your way to a solution, vs. coming up with something brand new like a song or a piece of art where there's no guidance. It's just a very different use of your brain.
I guess our definition of "thinking" is just very different.
Yes, humans are also capable of learning in a similar fashion and imitating, even extrapolating from a learned function. But I wouldn't call that intelligent, thinking behavior, even if performed by a human.
But no human would ever perform like that, without trying to intuitively understand the motivations of the humans they learned from, and naturally intermingling the performance with their own motivations.
Exactly. Lightning minimizes the use of the globally consistent ledger, which inevitably has to make trade-offs due to the "trilemma".
You set aside some funds on the global ledger and then you can use those funds to transact in a much more efficient, truly p2p manner, without touching the global ledger. Eventually, you net out everything and settle it all at once on-chain, utilizing its features to resolve any conflicts.
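The fund/transact/settle cycle can be sketched as follows. This is a toy model with hypothetical names, deliberately omitting signatures, HTLCs, timelocks and the on-chain dispute resolution entirely:

```python
# Minimal sketch of a two-party payment channel (simplified: no
# signatures, timelocks, or dispute resolution shown).

class PaymentChannel:
    def __init__(self, deposit_a, deposit_b):
        # One on-chain "funding" transaction locks both deposits.
        self.balance = {"A": deposit_a, "B": deposit_b}
        self.updates = 0

    def pay(self, sender, receiver, amount):
        # Off-chain: both parties just agree on a new balance state.
        if self.balance[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balance[sender] -= amount
        self.balance[receiver] += amount
        self.updates += 1

    def settle(self):
        # One on-chain "settlement" transaction nets everything out.
        return dict(self.balance)

channel = PaymentChannel(deposit_a=100, deposit_b=0)
for _ in range(60):          # sixty off-chain payments one way...
    channel.pay("A", "B", 1)
channel.pay("B", "A", 10)    # ...one refund the other way...
final = channel.settle()     # ...all settled by a single on-chain tx
print(final)                 # {'A': 50, 'B': 50}
```

Sixty-one transfers, but only two transactions ever touch the global ledger.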
I think it all comes down to relativity and the speed of light.
There is no single, universal, true ordered state (ledger/db). Participants need a conflict resolution mechanism to figure out whose truth is correct. One must rely on a localized consistent state of some authority (leader/consensus).
The fact that every human society ends up with some kind of centralized oligarchy is probably also due to this effect. Something has to resolve disputes about the state of the system.
A solution that somehow goes around these limitations could have implications beyond computing. It could enable “headless” large scale cooperation. This would be a fundamental innovation in the evolution of intelligence generally.
Proof of work is the only one we have that kind of works and it’s massively expensive. You could argue that it’s just a way to make economically irrational or short sighted collusion prohibitively expensive rather than a true solution and might only work in a domain like a currency where there is a direct mapping to cost.
Proof of work essentially allows for a periodic leader election, where the leader has sufficient time to propagate its state update along with a verifiable proof of authority in a permissionless decentralized system.
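A minimal sketch of that mechanism (toy difficulty, nothing like Bitcoin's actual block format): finding a valid nonce is expensive, while checking one takes a single hash, which is what makes the elected leader's claim cheap for everyone else to verify:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    # Brute-force a nonce so that sha256(data + nonce) falls below a
    # target, i.e. starts with `difficulty_bits` zero bits. Expensive.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Verification is a single hash: cheap for every other node.
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mine(b"state update", 16)        # ~65k hashes on average
assert verify(b"state update", nonce, 16)
```

Whoever finds a valid nonce first gets to propagate their state update, and the proof of the spent work travels with it.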
Decentralized cooperation in society likely wouldn't be permissionless, in most cases you would probably want to assign some voting power per human/citizen/share certificate/whatever. I'm also not sure if decentralized, verifiable randomness is easier to achieve outside of computer networks (for example, source some verifiable randomness from the universe based on a pre-defined algorithm).
> if we prefer to outright shut down online advertising
Yes, please. Both online and offline. Advertising is probably the most useless, annoying and wasteful industry out there.
We could have pull-only databases of businesses, products and services instead. Ideally, with independently verified, fact-checked information and authentic reviews. Realistically though, this kind of objectivity would probably be infeasible to enforce and maintain. But even if we allow for misinformation, paid rankings and whatnot, the point stands: any such database should follow a pull-only model, where users access it voluntarily to search for products and services, rather than being subjected to an unsolicited broadcast to everyone, everywhere, all the time.
Ideally, governments would provide an index of registered businesses with some basic filtering (e.g. by location or category of services), listing a name, address, phone number, and URL for each entry, presented in random order to be fair.
My state seems to have a search tool, but no list. It also only has name/address (so presumably it's more for serving legal papers or whatever).
If I want to find a plumber, I should be able to ask my government for a list of the licensed plumbers in my area.
Intelligent beings in the real world have a very complex built-in biological error function rooted in real world experiences: sensory inputs, feelings, physical and temporal limitations and so on. You feel pain, joy, fear, have a limited lifetime, etc.
"AI" on the other hand only have an external error function, usually roughly designed to minimize the difference of the output from that of an actually intelligent real world being.
Yeah, that's exactly my thinking man. We have to root intelligence in the real world, otherwise it will endlessly spin in these abstract loops.
Akin to how logic -- untethered by emotion, intuition and experience (wisdom, maybe, if you want? Understanding? Sure) -- can justify any obscene conclusion and cannot discriminate between moral and immoral ones.
Reward functions or a system of values -- these things are rooted in real world experience. Logic is required, sure, but insufficient. At least alone! Haha :)
I once watched a Nobel prize winner present. His presentation was a Word document, and he presented by scrolling down. For bonus points you could do the same with `less`.
Yeah, I've met plenty of talented people who are terrible at presenting their work. I've often thought such people would benefit from a partner, someone who understands the value of the work but is better at channelling it to the world. Like a producer is to a musician.
You can probably do images with vim, either rendered as colored ASCII or with some third-party software such as w3m (a text-based web browser that can render images in the terminal).
I think what's more relevant here is Marx's theory of alienation.
In order for people to work less hours at their job there would have to be strong local communities and a kind of socio-economic framework to support constructive, productive and satisfying activities outside of their day job. Things like improving or fixing up the neighborhood or your own household, political activism, caregiving, teaching, citizen science are all examples of things that one could do not for money but for the benefit of the community or the betterment of society. But due to the capitalist mode of production people are alienated from each other and the product of their labor so much that they think of work as a commodity to be traded for money only. And rather than doing these things as a way of finding fulfillment and building communities, people contract others to do it for them for money.
tl;dr: a network of off-chain, transitive payment channels. two on-chain transactions (to fund and settle channel) allow for ~unlimited, ~free, ~instant transactions across a p2p network of nodes off-chain
Payment channels (which the Lightning Network uses) allow for unlimited transactions, but not an unlimited number of users, because, as you say, two on-chain transactions (deposit and withdrawal) are required to initiate and finalize a transfer. Given that the Bitcoin blockchain can only handle ~20M transactions per month, this means only ~20M people can deposit to/withdraw from the Lightning Network (LN) per month.
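A back-of-envelope check of that figure, using assumed numbers (~7 on-chain transactions per second, a generous estimate for Bitcoin, and 30-day months), not measured chain data:

```python
# Rough monthly on-chain capacity under the assumptions above.
tx_per_second = 7                                 # assumed throughput
seconds_per_month = 60 * 60 * 24 * 30
onchain_tx = tx_per_second * seconds_per_month    # 18,144,000, roughly the ~20M quoted

# Each LN user needs one on-chain tx to deposit and one to withdraw,
# so a full open-and-close cycle consumes two on-chain transactions:
users_per_month = onchain_tx // 2                 # ~9M full cycles per month
print(onchain_tx, users_per_month)
```

So counting one on-chain operation per person gives the ~20M figure; counting a full open-and-close cycle per person halves it.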
To this critique, some reply that merchants will just start trading unsettled (off-chain) LN transactions to pay their suppliers, but no one has presented any model of how this would work in a real economy, with merchants needing to pay suppliers, who need to pay their suppliers, who need to pay their employees.
~20M transactions per month is an arbitrary limit. The economic majority recently decided against increasing that limit, but this doesn't mean it's not going to get increased at some later point.
Of course there are technical reasons for that limit. But whatever the correct limit is, the current one is almost certainly too low. The Ethereum blockchain is able to handle a significantly higher number of transactions without implementing all of the performance optimizations that Bitcoin has (such as compact blocks).
Sure, LN alone might not be enough to serve the daily transactions of billions of users. Still, it proves how layers built on top of the blockchain can improve the efficiency by several orders of magnitude with some reasonable trade-offs. LN is probably just the first of such layers.
Things like sidechains, cross-chain atomic swaps and various block-space usage improvements (e.g. Schnorr signatures) are also in the pipeline right now, all of which could further improve throughput.
So while LN may not be the silver bullet, it makes me hopeful that at least from a technical perspective, Bitcoin will be able to scale to become a global currency.
Bitcoin is nine-year-old tech, which is an eternity in tech. The Lightning Network has been touted as "the scaling solution" for many of those years and has yet to see any use. And even then, the scaling solution the Lightning Network provides is "don't use Bitcoin". All the solutions for Bitcoin's scaling problems boil down to "don't use Bitcoin".
So if the only way to scale Bitcoin is to not use it... again... what is the point?
I think people equate Bitcoin with the blockchain and by that understanding I think the parent poster is correct. The current blockchain doesn't scale and isn't suitable for every tiny transaction.
Isn't that what mathematical extrapolation or statistical inference does? To me, that's not even close to intelligence.