Cost is relative. LIDAR may be expensive relative to a camera or two, but it's very inexpensive compared to hiring a full-time driver. Crashes aren't particularly cheap either. Neither are insurance premiums.
Percent utilization for most operating systems is the fraction of time the idle task is not scheduled. So for both workloads the idle task was never scheduled, hence 100% "utilization".
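Just as a rough, Linux-specific sketch of that definition: utilization computed as the fraction of time the idle task didn't run, using the standard /proc/stat layout (the helper names and the one-second sample window here are purely illustrative):

    # Sketch: "utilization" as the fraction of time NOT spent in the idle task.
    # /proc/stat "cpu" line fields: user nice system idle iowait irq softirq ...
    import time

    def read_cpu_times():
        with open("/proc/stat") as f:
            values = [int(v) for v in f.readline().split()[1:]]  # drop the "cpu" label
        idle = values[3] + values[4]   # idle + iowait: time accounted to the idle task
        return idle, sum(values)

    def utilization(interval=1.0):
        idle1, total1 = read_cpu_times()
        time.sleep(interval)
        idle2, total2 = read_cpu_times()
        return 100.0 * (1.0 - (idle2 - idle1) / (total2 - total1))

    # A compute-bound loop and a memory-bound loop both keep the idle task off
    # the CPU, so both report ~100% here even though one is starved for bandwidth.
    print(f"{utilization():.1f}% utilization")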
Junior engineers are getting hired but there are definitely fewer positions available. It’s scary for sure but it’s also a normal part of the boom & bust cycle that is inherent to the tech industry.
If anything, AI investment is propping us up as some money is still getting invested even though money is expensive at the moment.
In 2002 friends of mine who graduated top 10 at UC Berkeley were struggling to get interviews, never mind jobs. That was the worst dry spell I’ve seen in my career. But they stayed busy, were underemployed for a bit, and eventually got their first jobs. One even got picked up by a growing startup named Google.
I think most industries are like this. In hardware engineering we definitely get clobbered roughly once a decade for one reason or another.
The main issue is we’re not there today and it’s not obvious what that world looks like.
We all had junk drawers of useless charging cables, and everyone agreed it was stupid; hence a universal charging-connector standard, along with the promise that the charger junk drawers would finally be freed.
Even if we mandate the “POSIX of smart phones”, for lack of a better term, what problem today, for everyday users, does it solve? It might even make interactions with various government technology worse as that API will likely only be begrudgingly supported, which won’t win any hearts or minds.
Basically, until you have a one-line slogan that most people can relate to, and which describes a problem they have today, movement will be very slow.
Also, in the short term, if these various sites are AI coded, and thus follow existing software patterns, expect this to get worse.
The power dynamic between the gifter and the giftee isn't that simple. Even the dynamics of bribes change a lot depending on who gives them and in what amount.
There is a whole anthropological field around that, but to keep it short: if you pay for your palace and all your expenses with money funneled to you as gifts, you're not the one in control.
Plenty of companies have attempted this over the years, but it’s not obvious that a big enough customer base exists to support the tremendous number of engineering hours it takes to make a phone. Making a decent smart phone is really hard. And the operations needed to support production aren’t cheap either.
Maybe, rather than legislating big companies' stores, government could instead back smaller open HW/SW vendors? It seems we gave up on increasing competition in HW, and what is left is the app store level...
> I'm not sure it's that our job is the most automatable
I don't know. It seems pretty friendly to automation to me.
When was the last time you wrote assembly? When was the last time you had to map memory? Thought about blitting memory to a screen buffer to draw a square on the screen? Scheduled processes and threads yourself?
These are things that I routinely did as a junior engineer writing software a long time ago. Most people at that time did. For the most part, the computer does them all now. People still do them, but only when it really counts and applications are niche.
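To make the "blitting a square" example concrete, here's a minimal sketch of the kind of thing that used to be done by hand, assuming a bare 640x480 linear RGBA framebuffer (the names and geometry are made up, not any particular platform's API):

    # Fill a square by writing pixel bytes straight into a linear framebuffer.
    WIDTH, HEIGHT, BPP = 640, 480, 4              # assumed 640x480, 4 bytes per pixel (RGBA)
    framebuffer = bytearray(WIDTH * HEIGHT * BPP)

    def blit_square(x0, y0, size, rgba):
        row = bytes(rgba) * size                  # one pre-built scanline of the square
        for y in range(y0, y0 + size):
            offset = (y * WIDTH + x0) * BPP       # byte offset of this row's first pixel
            framebuffer[offset:offset + len(row)] = row

    blit_square(100, 100, 50, (255, 0, 0, 255))   # a 50x50 red square at (100, 100)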
Think about how large code bases are now and how complicated software systems are. How many layers they have. Complexity on this scale was unthinkable not so long ago.
It's all possible because the computer manages much of the complexity through various forms of automation.
Expect more automation. Maybe LLMs are the vehicle that delivers it, maybe not. But more automation in software is the rule, not the exception.
RAD programming held the same promise, as did UML and flow/low/no-code platforms.
Inevitably, people remember that the hard part of programming isn't so much the code as it is putting requirements into maintainable code that can respond to future requirements.
LLMs basically only automate the easiest part of the job today. Time will tell if they get better, but my money is on me fixing people's broken LLM generated businesses rather than being replaced by one.
Indeed. Capacity to do the hard parts of software engineering well may well be our best indicator of AGI.
I don't think LLMs alone are going to get there. They might be a key component in a more powerful system, but they might also be a very impressive dead end.
Sometimes I think we’re like cats that stumbled upon the ability to make mirrors. Many cats react like there’s another cat in the mirror, and I wonder if AGI is just us believing we can make more cats if we make the perfect mirror.
This has been my argument as well. We've been climbing the abstraction ladder for years. Assembly -> C -> OOP ->... this just seems like another layer of abstraction. "Programmers" are going to become "architects".
The labor cost of implementing a given feature is going to dramatically drop. Jevons paradox will hopefully still mean that the labor pool will just be used to create '10x' the output (or whatever the number actually is).
If the cost of a line of code / feature / app becomes basically '0', will we still hit a limit in terms of how much software can be consumed? Or do consumers have an infinite hunger for new software? It feels like the answer has to be 'it's finite'. We have a limited attention span of (say) 8hrs/person * 8 billion.
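Back-of-the-envelope, using the comment's own assumed figures, that attention ceiling works out to something like this:

    # Rough ceiling on human attention available to consume software, per day.
    people = 8_000_000_000     # assumed world population from the comment above
    hours_per_person = 8       # assumed daily attention budget from the comment above
    print(f"{people * hours_per_person:,} person-hours/day")   # 64,000,000,000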
That seemed amazing to me because it would’ve meant Google found the comment, integrated it into their index, and then made that index available, all within three hours. I know Google is good, but are they that good?
I googled “xml startup business example”; their AI summarized an “xml startup” as “a business using XML as a core technology” and gave the business below as an example startup.
Google is usually pretty on top of fast-changing sources like HN. My mind was blown more when I saw that ChatGPT seemed to ingest and regurgitate an HN comment of mine as an answer within 10-15 minutes of my posting (see the thread in https://news.ycombinator.com/item?id=42649774). Sadly this is no longer verifiable as the answer does not match my comment, but at least it correctly answers the original request, which it did not prior to my response.
How interesting. I also tried “XML Startup” with and without personalization and got nothing from hacker news on the first three pages of links. I had no idea there was so much variance on returned results.
We probably do need some kind of regulation in this space because, for better or worse (and I think it’s worse), it’s hard to be a participant in modern society without a smart phone. (In my mind it would be something more akin to the Communications Act of 1934, but for apps: mandating a certain amount of “interoperability” across operating systems, whatever that may mean. But I digress.)
On the other hand, it wasn’t all that long ago that we had many smart phone makers and operating systems, all with different strategies. It’s possible that the market did decide…
I would argue that there was more than a duopoly. We had Windows Phone, webOS, BlackBerry, Palm, etc. The market voted and we're left with 2.
Equally, pretty much no iPhone user (outside of tech circles) cares about the App Store monopoly for iPhone. The policy is well known, and hasn't changed in 15 years.
Indeed many (not all) tech folk who complain about the App Store still went out and bought an iPhone.
The raw truth is that the market did decide. And no we don't need regulation. Apple and Google have different enough policies for there to be choice. In some countries Android has dominant market share.
At this point, multiple independent legal decisions have found that both of these companies engaged in repeated, sustained, illegal anti-competitive behavior, so the extent to which there was a "market" that voted is highly arguable.
Whatever may or may not have happened in the past is not especially relevant. Today it's essentially impossible to enter the market, as you would need to develop a solid iOS or Android compatibility layer, because the best phone in the world is useless without software (apps) you want to run. This is also a major reason several of those platforms you mentioned didn't work out, by the way.
Sailfish OS does exactly that, but it has a number of limitations and added friction due to legal and technical reasons.
And choosing between two systems is really not much of a "choice". Right now there is a story on the front page about Android requiring devs to validate their ID, even for side-loaded apps.
I only use my smartphone for the basics. I want the tightly controlled app store, hence I buy an iPhone. I don’t want apps to be able to roll their own third-party subscription and payment options, because they will inevitably abuse it at the cost of less savvy users.
Epic Games is a developer and publisher making billions off tricking children into paying for worthless virtual goods. Fuck ‘em if they can’t make a living with the 30% Apple cut.
Apple has no issue with tricking children into paying for worthless virtual goods, they will happily host your Coin-Dosing-Clash-Of-Shadow-Fortnights if they get their 30% cut.
Only if we need to classify things near the boundary. If we make something that’s better at every test we can devise than any human we can find, I think we can say that no reasonable definition of AGI would exclude it, without actually arriving at a definition.
We don’t need such a definition of general intelligence to conclude that biological humans have it, so I’m not sure why we’d need such a definition for AGI.
I disagree. We claim that biological humans have general intelligence because we are biased and arrogant, and experience hubris. I'm not saying we aren't generally intelligent, but a big part of believing we are is because not believing so would be psychologically and culturally disastrous.
I fully expect that, as our attempts at AGI become more and more sophisticated, there will be a long period where there are intensely polarizing arguments as to whether or not what we've built is AGI or not. This feels so obvious and self-evident to me that I can't imagine a world where we achieve anything approaching consensus on this quickly.
If we could come up with a widely-accepted definition of general intelligence, I think there'd be less argument, but it wouldn't preclude people from interpreting both the definition and its manifestation in different ways.
I can say it. Humans are not "generally intelligent". We are intelligent in a distribution of environments that are similar enough to the ones we are used to. There's no way to be intelligent with no priors on the environment, basically by information theory (you can make an environment adversarial to the learning efficiency that "intelligent" beings get from their priors).
> We claim that biological humans have general intelligence because we are biased and arrogant, and experience hubris.
No, we say it because - in this context - we are the definition of general intelligence.
Approximately nobody talking about AGI takes the "G" to stand for "most general possible intelligence that could ever exist." All it means is "as general as an average human." So it doesn't matter if humans are "really general intelligence" or not, we are the benchmark being discussed here.
If you don't believe me, go back to the introduction of the term[1]:
By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be "conscious" or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
It's pretty clear here that the notion of "artificial general intelligence" is being defined as relative to human intelligence.
Or see what Ben Goertzel - probably the one person most responsible for bringing the term into mainstream usage - had to say on the issue[2]:
“Artificial General Intelligence”, AGI for short, is a term adopted by some researchers to refer to their research field. Though not a precisely defined technical term, the term is used to stress the “general” nature of the desired capabilities of the systems being researched -- as compared to the bulk of mainstream Artificial Intelligence (AI) work, which focuses on systems with very specialized “intelligent” capabilities. While most existing AI projects aim at a certain aspect or application of intelligence, an AGI project aims at “intelligence” as a whole, which has many aspects, and can be used in various situations. There is a loose relationship between “general intelligence” as meant in the term AGI and the notion of “g-factor” in psychology [1]: the g-factor is an attempt to measure general intelligence, intelligence across various domains, in humans.
Note the reference to "general intelligence" as a contrast to specialized AIs (what people used to call "narrow AI", even though he doesn't use the term here). And the rest of that paragraph shows that the whole notion is clearly framed in terms of comparison to human intelligence.
That point is made even more clear when the paper goes on to say:
Modern learning theory has made clear that the only way to achieve maximally general problem-solving ability is to utilize infinite computing power. Intelligence given limited computational resources is always going to have limits to its generality. The human mind/brain, while possessing extremely general capability, is best at solving the types of problems which it has specialized circuitry to handle (e.g. face recognition, social learning, language learning; …)
Note that they chose to specifically use the more precise term "maximally general problem-solving ability" when referring to something beyond the range of human intelligence, and then continued to clearly show that the overall idea is - again - framed in terms of human intelligence.
One could also consult Marvin Minsky's words[3] from back around the founding of the overall field of "Artificial Intelligence" altogether:
“In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”
Simply put, with a few exceptions, the vast majority of people working in this space simply take AGI to mean something approximately like "human like intelligence". That's all. No arrogance or hubris needed.
Well general intelligence in humans already exists, whereas general intelligence doesn't yet exist in machines. How do we know when we have it? You can't even simply compare it to humans and ask "is it able to do the same things?" because your answer depends on what you define those things to be. Surely you wouldn't say that someone who can't remember names or navigate without GPS lacks general intelligence, so it's necessary to define what criteria are absolutely required.
> You can't even simply compare it to humans and ask "is it able to do the same things?" because your answer depends on what you define those things to be.
Right, but you can’t compare two different humans either. You don’t test each new human to see if they have it. Somehow we conclude that humans have it without doing either of those things.
> You don’t test each new human to see if they have it
We do, it's called school, and we label some humans with different learning disabilities. Some of those learning disabilities are grave enough that they can't learn to do tasks we expect humans to be able to learn; such humans can be argued to not possess the general intelligence we expect from humans.
Interacting with an LLM today is like interacting with an Alzheimer's patient: they can do things they already learned well, but poke at it and it all falls apart, and they start repeating themselves. They can't learn.
Yes, there are diseases, injuries, etc. which can impair a human’s cognitive abilities. Sometimes those impairments are so severe that we don’t consider the human to be intelligent (or even alive!). But note that we still make this distinction without anything close to a rigorous formal definition of general intelligence.