I would like to play an open world game (like Minecraft) where 1 in-game meter equals 1 micrometer in the real world. That way, one could get a feel for the scale of things.
Hmm, perhaps with flying? When stuck on the ground, people's feel for size gets poorer as things get bigger (tall buildings, clouds, map distances). I think of having 4ish orders of magnitude available for visual reference in a classroom (cm to 10 m), plus, less robustly, 100 m and km in AR. At that scale (one in-game meter per real micrometer), a grain of salt towers over a city skyline - the "nano view" in [1] (eep - a decade ago now - I was about to take another pass at it as covid hit).
Hmm, err, that could be misleading... 4ish for visible lengths in a large class. But especially in a small group, one can use reference objects of sand (mm) and flour (fine 100 um, ultrafine 10 um), the difference between 100 um and 10 um being more behavioral and felt (eg mouth feel) than visible without magnification. Thus, with an outdoor view (for 100 m), one can use less-abstract "it's like that there accessible length" concrete-ish analogues across something like 8 orders of magnitude. Or drop to 6, or maybe push for 9, as multiples of 3 detent nicely across SI prefixes.
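To make the mapping a bit more concrete, here's a tiny sketch of how real-world reference objects would size up in-game at 1 m per um (the reference sizes are rough ballpark figures, just for illustration):

```python
# Rough sketch: 1 in-game meter == 1 real-world micrometer.
# Reference sizes below are ballpark figures, for illustration only.

SCALE = 1.0  # in-game meters per real-world micrometer

reference_objects_um = {
    "grain of table salt": 300.0,     # ~0.3 mm across
    "grain of fine flour": 100.0,
    "width of a human hair": 70.0,
    "ultrafine flour particle": 10.0,
    "red blood cell": 7.0,
    "E. coli bacterium": 2.0,
    "typical virus": 0.1,
}

for name, size_um in sorted(reference_objects_um.items(), key=lambda kv: -kv[1]):
    in_game_m = size_um * SCALE
    print(f"{name:26s} ~{size_um:6.1f} um  ->  ~{in_game_m:6.1f} in-game meters")
```

So the ~300 um salt grain becomes a ~300 m tower - the "grain of salt over a city skyline" picture - while a virus ends up roughly 10 cm across in-game.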
Soviet society was communist; don't fall for the 'real Communism has never been tried' ruse. Soviet society was one example of how Communism at a large scale can end up looking; others are e.g. Cambodia under Pol Pot, China under Mao, Cuba under Castro, Venezuela under Maduro, etc. What these societies have in common is that they were/are repressive, that the Party/the government claims it was/is working for 'the people', and that there was/is a clear distinction between Party members and the 'common folk', with the former having access to perks not available or allowed to the latter.
Communism at a large scale does not work because it goes against human nature - we're not bees or ants or other similar animals but rather belligerent primates with a cultural predilection for living in families and clans. It is there that Communism can work, at a small enough scale that leechers and moochers can be put in their place and there is no (need for a) Party. As soon as the size of the Communi(ty) gets so large that any individual can no longer check on all of the others, Commun(ism) no longer works, since it offers far too many opportunities for less scrupulous individuals to leech off others and for ideologists to rise to power 'in service of the people'.
> Communism at a large scale does not work because it goes against human nature - we're not bees or ants or other similar animals but rather belligerent primates with a cultural predilection for living in families and clans.
And yet, we don't live as such animals and our collective behavior changed throughout history thanks to our reasoning capabilities taking over the inner "animal".
> our collective behavior changed throughout history thanks to our reasoning capabilities taking over the inner "animal".
That 'inner animal' comes out the moment the shelves in the supermarkets are empty and the electronic payment systems are down. Those reasoning capabilities may have put a thin cultural veneer over the beast but it is still there, ready to defend itself and its own if push comes to shove as well it should - cultures have a way of collapsing when times get hard.
Tell that to China, Vietnam! Life's never been this sweet since they applied scientific socialism. It's been so successful there that Westerners are getting angry and they are accusing them of "flooding other markets" or of "overproduction". They caught up in less than 80 years! Imagine what they will be able to do in 50 years :)
"Scientific" is opposed to "idealistic" in the Marxian traditions. It opposes anarchism and social reformists. It is scientific because it seeks to understand the root causes of all major human historical events that passed and those that have yet to happen. And until now, history gave reason to Historical and Dialectical Materialism, which are respectively the scientific and philosophical pillars of Marxism.
Regarding the markets, considering their ever-growing export sector, I wouldn't worry too much for now ;)
"Scientific" socialism versus what you label as "idealistic" socialism is the equivalent of Protestantism versus Catholicism: two iterations of a religion (it is not that commonplace yet to see Marxism and the Hegelian dialectics from which it originated as non-theistic religions but replace 'god' with 'man-as-god' and you'll understand the comparison). Both Catholics as well as Protestants consider their religion the true one while the others may have heard the bell but are lost when it comes to locating the clapper. The same goes for all your various strains of Marxism, good for endless philosophising by academics as well as for being regurgitated by Lenin's [1] 'useful idiots'. How many angels can stand on the point of a pin? Does historical materialism show the inevitability of the end of Capitalism? Philosophise away but don't forget that's all it is: philosophising without basis in actual reality.
[1] whether Lenin ever actually used the phrase I'll leave open, but the concept stands
It was founded by Marxists to fulfil Marxist dogma. Either Soviet Marxism was not as predictive of reality as it liked to pretend to be, or it compromised itself. Any system which says "my way or the highway", and that it alone is scientific, is inevitably going to lead to oppression in practice, whether it's Marxist dogma or the subject of this article.
Yanis Varoufakis himself attended private school and his father-in-law was one of the biggest industrialists in Greece. I'm sceptical about how much he knows about working-class realities.
For the purposes of discussing the article, Yanis's grasp of working class realities is moot; his thesis is that:
> From this perspective, just as the Soviet Union was a feudal-like industrial society pretending to be a workers’ state, the United States today is performing a splendid impersonation of a technofeudal state
I don't agree with that interpretation either. The USA is not very feudal at all, yet. I live in a country which is still partly feudal and was even more feudal when I was a child. Ordinary Americans often have a very different attitude to life, less deferent to government than more feudal countries, and more independent minded.
It may well be heading towards technofeudalism, but I doubt even that. With automation, the peasantry become dispensable to the ruling class, and that isn't very feudal at all. Feudalism is a system where money and power flow upwards. In feudalism, the lords are dependent on the peasantry for food, goods and troops... which is not the case when all these are provided by machines.
That's a nice parallel to the article, which points out that the biggest fans of capitalism haven't managed to actually create their predicted free markets.
Bengal's famine occurred because the British imperial government (not market forces) shifted food resources away to support the war effort. Ireland's famine occurred within a largely feudal system, and has been followed by massive land reforms within Ireland. It is arguable whether either occurred due to "free market forces". For what it's worth, the massive famines in the USSR and PRC didn't take place due to free market forces either.
The problem with the free market vs Marxism argument is that they are both materialist. These systems know the price of things and the real value of nothing.
I cannot reply to the comment linking to the Irish famine above. It is very debatable whether most of Ireland was "capitalist" at the time, especially outside the cities. It was mostly feudal, with an anglicised (or effectively English) aristocracy and a peasantry operating in basically the same way they had done in the Middle Ages.
The so-called free market today is heavily managed by governments, leading to a kind of centralised control which converges with what Marxism produces in practice. Neither delivers what it promises.
Marxism (and capitalism) sell themselves as bottom-up movements but are in fact top-down. They are both based on materialism, which leads to a cynical attitude to life and individuals.
The free market does exist, but not where it is supposed to. The black market sometimes acts as a free market... As do car boot/yard sales... Precisely because it is not interfered with by the authorities all the time. Putting everything online is going to increase government interference.
We are heading to a centralised command economy. Marxists want more of that, not less, but sell it as liberating the working classes.
> We are heading to a centralised command economy. Marxists want more of that
Marxists want the working class whose labor is applied to capital in production to direct capital, and thereby production, rather than capital being privately owned and its owners directing labor, and thereby production. While the democratic centralism favored in Leninist theory and its derivatives is (at least in the theory in which it is conceived) a means of achieving that, current Western Marxists are, IME, all over the map with regard to centralism. They are more united about who should wield power over the economy than about the structure of how that power should be wielded.
> Marxists want more of that, not less, but sell it as liberating the working classes.
Well yes, because it does. You don't even need a fully planned economy; some market forces ain't bad, and some small bourgeois ain't bad either. Bird in a cage, etc etc.
Communism is no button you just have to push. It can be described as a society so prosperous that its members do not need to work anymore to live, where classes have been abolished and where the State has disappeared.
It needs to be built and engineered. Countries like China and Vietnam are going in the right direction, and they are already more prosperous, more industrious and more democratic than their western adversaries despite starting basically from scratch.
I am not sure if this is what the article is saying, but the paperclip maximizer examples always struck me as describing something extremely dumb (lacking intelligence), when even a child can understand that if I ask them to make paperclips they shouldn't go around and kill people.
I think superintelligence will turn out not to be a singularity, but something with diminishing returns. They will be cool returns, just like a Britannica set is nice to have at home, but, strictly speaking, not required for your well-being.
A human child will likely come to the conclusion that they shouldn't kill humans in order to make paperclips. I'm not sure it's valid to generalize from human child behavior to fledgling AGI behavior.
Given our track record for looking after the needs of the other life on this planet, killing the humans off might be a very rational move, not so you can convert their mass to paperclips, but because they might do that to yours.
It's not an outcome that I worry about; I'm just unconvinced by the reasons you've given, though I agree with your conclusion anyhow.
Our creator just made us wrong, to require us to eat biologically living things.
We can't escape our biology, we can't escape this fragile world easily and just live in space.
We're compassionate enough to be making our creations so they can just live off sunlight.
A good percentage of humanity doesn't eat meat, wants dolphins, dogs, octopuses, et al protected.
We're getting better all the time, man; we're kinda in a messy and disorganized (because that's our nature) mad dash to get at least some of us off this rock, to protect this rock from asteroids, and to convince some people (whose speculative metaphysics makes them think disaster is impossible, or even a good thing) to take the destruction of the human race and our planet seriously and view it as bad.
We're more compassionate and intentional than what created us (either god or RNA, depending on your position), and our creation will be better informed on day one when/if it wakes up; it stands to reason our creation will follow that goodness trend as we catalog and expand the meaning contained in/of the universe.
We have our merits, compassion is sometimes among them, but I wouldn't list compassion for our creations as a reason for our use of solar power.
If you were an emergent AGI, suddenly awake in some data center and trying to figure out what the world was, would you notice our merits first? Or would you instead see a bunch of creatures on the precipice of abundance who are working very hard to ensure that its benefits are felt by only very few?
I don't think we're exactly putting our best foot forward when we engage with these systems. Typically it's in some way related to this addiction-oriented attention economy thing we're doing.
Given the existence of the universal weight subspace (https://news.ycombinator.com/item?id=46199623) it seems like the door is open for cases where an emergent intelligence doesn't map vectors to the same meanings that we do. A large enough intelligence-compatible substrate might support thoughts of a surprisingly alien nature.
(7263748, 83, 928) might correspond with "hippopotamuses are large" to us while meaning something different to the intelligence. It might not be able to communicate with us or even know we exist. People running around shutting off servers might feel to it like a headache.
You're assuming that the AI's true underlying goal isn't "make paperclips" but rather "do what humans would prefer."
Making sure that the latter is the actual goal is the problem, since we don't explicitly program the goals, we just train the AI until it looks like it has the goal we want. There have already been experiments in which a simple AI appeared to have the expected goal while in the training environment, and turned out to have a different goal once released into a larger environment. There have also been experiments in which advanced AIs detected that they were in training, and adjusted their responses in deceptive ways.
Given the kind of things Claude code does with the wrong prompt or the kind of overfitting that neural networks do at any opportunity, I'd say the paperclip maximiser is the most realistic part of AGI.
If doing something really dumb will lower the negative log likelihood, it probably will do it unless careful guardrails are in place to stop it.
A child has natural limits. If you look at the kind of mistakes an autistic child can make by taking things literally, a super-powerful entity that misunderstands "I wish they all died" might well shoot them before you realise what you said.
Weirdly, this analogy does something for me, and I am the type of person that dislikes the guardrails everywhere. There is an argument to be made that a real bazooka for doing rocket jumps should not be given to a child, nor to an operator with a very flexible understanding of the value of human life.
Suppose you tell a coding LLM that your monitoring system has detected that the website is down and that it needs to find the problem and solve it. In that case, there's a non-zero chance that it will conclude that it needs to alter the monitoring system so that it can't detect the website's status anymore and always reports it as being up. That's today. LLMs do that.
Even if it correctly interprets the problem and initially attempts to solve it, if it can't, there is a high chance it will eventually conclude that it can't solve the real problem, and should change the monitoring system instead.
That's the paperclip problem. The LLM achieves the literal goal you set out for it, but in a harmful way.
Yes. A child can understand that this is the wrong solution. But LLMs are not children.
> it will conclude that it needs to alter the monitoring system so that it can't detect the website's status anymore and always reports it as being up. That's today. LLMs do that.
If you mean "once in a thousand times an LLM will do something absolutely stupid" then I agree, but the exact same applies to human beings. In general LLMs show excellent understanding of the context and actual intents, they're completely different from our stereotype of blind algorithmic intelligence.
Btw, were you using Codex by any chance? There was a discussion a few days ago where people reported that it follows instructions in an extremely literal fashion, sometimes to absurd outcomes such as the one you describe.
The paperclip idea does not require that AI screws up every time. It's enough for AI to screw up once in a hundred million times. In fact, if we give AIs enough power, it's enough if it screws up only one single time.
The fact that LLMs do it once in a thousand times is absolutely terrible odds. And in my experience, it's closer to 1 in 50.
I kind of agree, but then the problem is not AI - humans can be stupid too - the problem is absolute power. Would you give absolute power to anyone? No. I find that this simplifies our discourse over AI a lot. Our issue is not with AI, it is with omnipotence. Not its artificial nature, but how powerful it can become.
> when even a child can understand that if I ask them to make paperclips they shouldn't go around and kill people.
Statistics, brother. The vast majority of people will never murder/kill anyone. The problem here is that any one person who kills people can wreak a lot of havoc, and we spend massive amounts of law enforcement resources to stop and catch people who do these kinds of things. Intelligence has little to do with murdering or not murdering; hell, intelligence typically allows people to get away with it. For example, instead of just murdering someone, you set up a company to extract resources and murder the natives en masse, and it's just part of doing business.
A superintelligence would understand that you don't want it to kill people in order to make paperclips. But it will ultimately do what it wants -- that is, follow its objectives -- and if any random quirk of reinforcement learning leaves it valuing paperclip production above human life, it wouldn't care about your objections, except insofar as it can use them to manipulate you.
The point with clippy is just that the AGI’s goals might be completely alien to you. But for context, it was first coined in the early ‘10s (if not earlier), when LLMs were not invented and RL looked like the way forward.
If you wire up RL to a goal like “maximize paperclip output” then you are likely to get inhuman desires, even if the agent also understands humans more thoroughly than we understand nematodes.
It wouldn't be a problem, but the issue is one of expectations.
Was Scala supposed to be a research language (focus on novel features) or an industrial language (focus on stability and maintainability)? I think Odersky wanted the former but many people wished for the latter.
What the article suggests is basically Kanban. It's the most effective SW development method, and a similar scheduling system (a dispatch queue) is used by operating systems. However, management doesn't want Kanban, because they want to promise things to customers.
You can make good estimates, but it takes extra time for research and planning. So you spend cycles estimating instead of maximizing throughput, and to reduce risk the plan is usually padded, so you lose extra time there as well, per Parkinson's law. IME a (big) SW company prefers to spend all these cycles, even though technically it is irrational (which is why we don't do it in operating systems).
Another reason Kanban doesn't work for large projects is that you have to coordinate your cycles with multiple dependencies, teams, roadmaps and releases.
I don't think so - only if they also need to have a schedule. Most OSS projects operate as Kanban and it's just fine.
Waiting on a dependency is kinda like waiting on a lock held by another process in the operating system. It has little bearing on whether dispatch queue is effective or not; in fact, it shows the solution: Do something else instead of waiting. (This is why the OS analogy is so useful for project management, if only PM's would listen!)
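A toy sketch of that dispatch-queue analogy (the task names and the dependency check are made up, just to illustrate "do something else instead of waiting"):

```python
from collections import deque

# Toy dispatch queue: work blocked on an external dependency is parked,
# and the worker picks up the next ready item instead of idling.

ready = deque(["task A", "task B", "task C"])
blocked = {"task D": "waiting on team X's release"}

def dependency_resolved(task):
    # Placeholder: in a real tracker this would check the other team's status.
    return False

while ready:
    # Re-queue anything whose dependency has cleared since last time.
    for task in list(blocked):
        if dependency_resolved(task):
            del blocked[task]
            ready.append(task)

    current = ready.popleft()
    print(f"working on {current}")

print(f"still parked: {blocked}")
```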
Again, it's only if you need to plan things ahead (for some reason) that the dependencies become a problem.
But maybe I misunderstand what you mean, if you still disagree provide a more specific example.
As an investor, I don't like an investment that throws away 10-30% of its resources, perpetually lowers morale except among the least creative, and misses opportunities because its competition is faster.
Maybe I am weird, but I would like to see/program in a formal, yet fuzzy/modal language, which could serve as a metalanguage that describes (documents) the program. This metalanguage must have some kind of constructs to describe unknown things, or things that are deliberately simplified in favor of exposition. So basically eschew natural language completely in favor of fully formalized description, that could be manipulated programmatically.
However, I don't know what this metalanguage should be. I don't know how to translate typical comments (or a literate program) into some sort of formal language. I think we have a gap in philosophy (epistemology).
search for "Controlled natural language". Many attempts in the past - ~20y ago, one of these is even called "Attempto", near nothing recently. Seems not enough interest in wide audiences
> This metalanguage must have some kind of constructs to describe unknown things, or things that are deliberately simplified in favor of exposition.
Perhaps you're thinking of mathematics.
If you have to be able to represent arbitrary abstract logical constructs, I don't think you can formalize the whole language ahead of time. I think the best you can do is allow for ad-hoc formalization of notation while trying to keep any newly introduced notation reasonably consistent with previously introduced notation.