Please don't mention AI again (mataroa.blog)
968 points by ludicity 4 months ago | 563 comments



One thing I’ve noticed with the AI topic is how there is no discussion on how the name of a thing ends up shaping how we think about it. There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product. Not because it’s actually AI in any rigorous or historical sense of the word, but because it’s trendy and helps you get investment dollars.

I think one of the results of this is that the concept of AI itself increasingly becomes more muddled until it becomes indistinguishable from a word like “technology” and therefore useless for describing a particular phenomenon. You can already see this with the usage of “AGI” and “super intelligence” which from the definitions I’ve been reading, are not the same thing at all. AGI is/was supposed to be about achieving results of the average human being, not about a sci-fi AI god, and yet it seems like everyone is using them interchangeably. It’s very sloppy thinking.

Instead I think the term AI is going to slowly become less marketing trendy, and will fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.


Couldn't agree more. We are seeing a gradual degradation of the term AI. LLMs have stolen the attention of the media and the general population, and I notice that people equate ChatGPT with all of AI. It is not, but an average person doesn't know the difference. In a way, genAI is the worst thing that happened to AI. The ML part is amazing: it allows us to understand the world we live in, which is what we have evolved to seek and value, because there is an evolutionary benefit to it--survival; the generative side is not even a solution to any particular problem, it is a problem that we are forced to pay to make bigger. I wrote "forced", because companies like Adobe use their dominant position to override legal frameworks developed to protect intellectual property and client-creator contracts in order to grab content that is not theirs to train models they resell back to the people they stole content from and to subject us to unsupervised, unaccountable policing of content.


> I wrote "forced", because companies like Adobe use their dominant position to override legal frameworks developed to protect intellectual property and client-creator contracts in order to grab content that is not theirs to train models they resell back to the people they stole content from and to subject us to unsupervised, unaccountable policing of content.

Adobe is an obnoxious company that does a lot of bad things, but it's weird to me seeing them cast in a negative light like this for their approach to model training copyright. As far as I know they were the first to have a genAI product that was trained exclusively on data that they had rights to under existing copyright law, rather than relying on a fair use argument or just hoping the law would change around them. Out of all the companies who've built AI tooling they're the last one I'd expect to see dragged out as an example of copyright misbehavior.


> Out of all the companies who've built AI tooling they're *the* last one I'd expect to see dragged out as an example of copyright misbehavior.

I was surprised, too: I thought Adobe's main problem was their abusive pricing, and I was actually a little impressed by their take on this hype wave. And yet. https://news.ycombinator.com/item?id=40607442


Here's a good explanation of what they are doing https://www.youtube.com/watch?v=tpbbschYKjE


That video is about privacy and CSAM scanning, with a very brief reference at the end to the possibility that Adobe might be using customer photos to train the CSAM scanner.

I can see how that checks the box of "policing", but you also made this claim:

> in order to grab content that is not theirs to train models they resell back to the people they stole content from

Did you not mean that to imply that Adobe is using images from customers to train generative AI? Because that's sure what it sounds like.


Very well written and thoughtful comment. Thank you.


+1.

What happened to ML? With the relatively recent craze precipitated by ChatGPT, the term AI (perhaps in no small part due to "OpenAI") has completely taken over. ML is a more apt description of the current wave.


OpenAI was betting on AI, AGI, superintelligence.

Look at the Google engineer who thought they had an AI locked up in the basement... https://www.theverge.com/2022/6/13/23165535/google-suspends-...

MS paper on sparks of AGI: https://www.microsoft.com/en-us/research/publication/sparks-...

The rumor was that OpenAI's deal with MS would give them everything until they got to AGI... a perpetual license to all new development.

All the "Safety people" have left the OpenAi building. Even musk isnt talking about safety any more.

I think the bet was that if you fed an LLM enough and got it big enough, it would hit a tipping point and become AGI, or sentient, or sapient. That lines up nicely with the MS terms, and with MS's paper.

I think they figured out that the math doesn't work that way (and never was going to). A prediction of the next token being better isn't intelligence any more than weather prediction will become weather.


The "next token" thing is literally true, but it might turn out to be a red herring, because emergence is a real phenomenon. Like how with enough NAND-gates daisy-chained together you can build any logic function you like.

Gradually, as these LLM next-token predictors are set up recursively, constructively, dynamically, and with the right inputs and feedback loops, the limitations of the fundamental building blocks become less important. Might take a long time, though.


> Like how with enough NAND-gates daisy-chained together you can build any logic function you like.

The version of emergence that AI hypists cling to isn't real, though, in the same way that adding more NAND gates won't magically make the logic function you're thinking about. How you add the NAND gates matters, to such a degree that people who know what they're doing don't even think about the NAND gates.
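
To make the point concrete, here is a minimal Python sketch (mine, not from the thread): NAND is functionally complete, but only a specific wiring of four gates yields XOR; the same gates arranged arbitrarily yield nothing useful.

    # Illustrative sketch: NAND is universal, but the arrangement does the work.
    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    def xor(a: int, b: int) -> int:
        # the one "right" wiring of four NAND gates
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor(a, b))  # prints the XOR truth table: 0, 1, 1, 0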


Exactly this, a thousand times.

There are infinitely more wrong and useless circuits than there are the ones that provide the function you want/need.


But isn't that what the training algorithm does? (Genuinely asking since I'm not very familiar with this.) I thought it tries anything, including wrong things, as it gradually finds better results from the right things.


Better results, yes, but that doesn't mean good results. It can only find local optima in a predetermined state space. Training a neural network involves (1) finding the right state space, and (2) choosing a suitable gradient function. If the Correct Solution isn't in the state space, or isn't reachable via gradual improvement, the neural network will never find it.

An algorithm that can reason about the meaning of text probably isn't in the state space of GPT. Thanks to the https://en.wikipedia.org/wiki/Universal_approximation_theore..., we can get something that looks pretty close when interpolating, but that doesn't mean it can extrapolate sensibly. (See https://xkcd.com/2048/, bottom right.) As they say, neural networks "want" to work, but that doesn't mean they can.

That's the hard part of machine learning. Your average algorithm will fail obviously, if you've implemented it wrong. A neural network will just not perform as well as you expect it to (a problem that usually goes away if you stir it enough https://xkcd.com/1838/), without a nice failure that points you at the problem. For example, Evan Miller reckons that there's an off-by-one error in everyone's transformers. https://www.evanmiller.org/attention-is-off-by-one.html

If you add enough redundant dimensions, the global optimum of a real-world gradient function seems to become the local optimum you land in (most of the time), so it's often useful to train a larger model than you theoretically need, then produce a smaller model from that.
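
As a toy illustration of the local-optimum point (my own example, with a made-up one-dimensional loss), plain gradient descent converges to whichever minimum sits in the basin it starts in, and never escapes to the better one:

    # Toy sketch: gradient descent finds a local minimum, not necessarily the global one.
    def loss(x: float) -> float:
        return x**4 - 3 * x**2 + x   # two basins: global minimum near x ~ -1.30, shallower one near x ~ +1.13

    def grad(x: float) -> float:
        return 4 * x**3 - 6 * x + 1  # derivative of the loss above

    def descend(x: float, lr: float = 0.01, steps: int = 2000) -> float:
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    print(descend(-2.0))  # ~ -1.30: lucky starting basin, lands in the deeper minimum
    print(descend(+2.0))  # ~ +1.13: stuck in the shallower local minimum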


Responding to this:

> But isn't that what the training algorithm does?

It's true that training and other methods can iteratively trend towards a particular function/result. But in this case the training is on next token prediction which is not the same as training on non-verbal abstract problem solving (for example).

There are many things humans do that are very different from next token prediction, and those things we do all combine together to produce human level intelligence.


> There are many things humans do that are very different from next token prediction, and those things we do all combine together to produce human level intelligence.

Exactly. LLMs didn't solve the knowledge representation problem. We still don't know how it works in our brains, but at least we know we may be doing internal symbolic knowledge representation and reasoning. LLMs don't. We need a different kind of math for ANNs, a new convolution but for text, where layers extract features through lexical analysis and ontology utilisation, and then train the network.


This presupposes that conscious, self-directed intelligence is at all what you're thinking it is, which it might not be (probably isn't). Given that, perhaps no amount of predictors in any arrangement or with any amount of dynamism will ever create an emergent phenomenon of real intelligence.

You say emergence is a real thing, and it is, but we have not one single example of it taking the form of sentience in any human-created thing of complexity.


When my friends talked about how AGI is just creating a huge enough neural network & feeding it enough data, I have always compared it to: imagine locking a mouse in a library with all the knowledge in the world & expecting it to come out super intelligent.


The mouse would go mad, because libraries preserve more than just knowledge: they preserve the evolution of it. That evolution is ongoing as we discover more about ourselves and the world we live in, refine our knowledge, disprove old assumptions and theories and, on occasion, admit that we were wrong to dismiss them. Also, over time, we place different levels of importance on knowledge from the past.

For example, an old alchemy manual from the middle ages, used to record recipes for a cure for some nasty disease, was important because it helped whoever had access to it quickly prepare some ointment that sometimes worked. Today we know that most of those recipes were random, non-scientific attempts at coming up with a solution to a medical problem, and we have proven that those medicines do not work. Therefore, the importance of the old alchemist's recipe book as a source of scientific truth has gone to zero, but its historic importance has grown a lot, because it helps us understand how our knowledge of chemistry and its applications in health care has evolved.

LLMs treat all text as equal unless given hints. But those hints are provided by humans, so there is an inherent bias, and the best we can hope for is that those hints are correct at the time of training. We are not pursuing AGI; we are pursuing the goal of automating the production of answers that look like the right answers to the given question, without much attention to factual, logical, or contextual correctness.


No. The mouse would just be a mouse. It wouldn't learn anything, because it's a mouse. It might chew on some of the books. Meanwhile, transformers do learn things, so there is obviously more to it than just the quantity of data.

(Why spend a mouse? Just sit a strawberry in a library, and if the hypothesis that the quantity of data is the only thing that matters holds, you'll have a super intelligent strawberry)


> Meanwhile, transformers do learn things

That's the question though, do they? One way of looking at gen AI is as a highly efficient compression and search. WinRAR doesn't learn, neither does Google - regardless of the volume of input data. Just because the process of feeding more data into gen AI is named "learning" doesn't mean that it's the same process that our brains undergo.


To the extent that we know what learning is (not very!) yes they do.


No need to waste a strawberry. Just test the furniture in the library. Either you have super intelligent chairs and tables or not.


Or a pebble; for a super intelligent pebble.

“God sleeps in the rock, dreams in the plant, stirs in the animal, and awakens in man.” ― Ibn Arabi


GPT-3 is about the same complexity as a mouse brain; it did better than I expected…

… but it's still a brain the size of a mouse's.


Mice are far more impressive though.


And they can cook. I know it, I have seen it in a movie /s


I've yet to see a mouse write even mediocre python, let alone a rap song about life in ancient Athens written in Latin.

Don't get me wrong, organic brains learn from far fewer examples than AI, and there's a lot organic brains can do that AI can't (yet), but I don't really find the intellectual capacity of mice to be particularly interesting.

On the other hand, the question of if mice have qualia, that is something I find interesting.


>but I don't really find the intellectual capacity of mice to be particularly interesting.

But you should find their self-direction capacity incredible and their ability to instinctively behave in ways that help them survive and propagate themselves. There isn't a machine or algorithm on earth that can do the same, much less with the same minuscule energy resources that a mouse's brain and nervous system use to achieve all of that.

This isn't to even mention the vast cellular complexity that lets the mouse physically act on all these instructions from its brain and nervous system and continue to do so while self-recharging for up to 3 years and fighting off tiny, lethal external invaders 24/7, among other things it does to stay alive.

All of that in just a mouse.


> But you should find their self-direction capacity incredible

No, why would I?

Depending on what you mean by self-direction, that's either an evolved trait (with evolution rather than the mouse itself as the intelligence) for the bigger picture what-even-is-good, or it's fairly easy to replicate even for a much simpler AI.

The hard part has been getting them to be able to distinguish between different images, not this kind of thing.

> and their ability to instinctively behave in ways that help them survive and propagate themselves. There isn't a machine or algorithm on earth that can do the same,

https://en.wikipedia.org/wiki/Evolutionary_algorithm

> much less with the same minuscule energy resources that a mouse's brain and nervous system use to achieve all of that.

That's nice, but again, this is mixing up the intelligence of the animal with the intelligence of the evolutionary process which created that instance.

I as a human have no knowledge of the evolutionary process which lets me enjoy the flavour of coriander, and my understanding of the Krebs cycle is "something about vitamin C?" rather than anything functional, and while my body knows these things, it would be a stretch to claim that my body knowing them means that I know them.


I think you're completely missing the wider picture in your insistence on giving equivalency to the mouse with any modern AI, LLM or machine learning system.

The evolutionary processes behind the mouse being capable of all that run from the long distant past up to the present, and their results are manifest in the physiology and cognitive abilities (such as they are) of the mouse. But this means that these abilities, conscious, instinctive and evolutionary, exist only in the physical body of that mouse and nowhere else. No man-made algorithm or machine is capable of anything remotely comparable, and its capacity for navigating the world is nowhere near as good. Once again, this especially applies when you consider that the mouse does all it does using absurdly tiny energy resources, far below what any LLM would need for anything similar.


Evolution is an abstract concept, and abstract concepts cannot be “intelligent” (whatever that means). This is like saying that gravity or thermodynamics are “intelligent”.


Evolution is not an abstract concept. It's a concrete algorithm with some abstract steps.


You might be surprised with what we've done by implementing the same thing as software.

https://en.wikipedia.org/wiki/Evolutionary_algorithm


I have yet to see a machine that would survive a single day in a mouse's natural habitat. And I doubt I'll see one in my lifetime.

Mediocre, or even excellent, Python and rap lyrics in Latin are easy stuff, just like chess and arithmetic. Humans are just really bad at them.


It doesn't say specifically, but I think these lasted more than a day, assuming you'll accept random predator species as a sufficient proof-of-concept substitute for mice, which have to do many similar things, just at a smaller scale:

https://www.televisual.com/news/behind-the-scenes-spy-in-the....


Those robots aren't acquiring their own food, and they're not edible by the creatures that surround them. They're playing on easy mode.


Still passes the "a machine that would survive a single day" test, and given that machines run off electricity and we already have PV, food isn't a big deal here.


> I've yet to see a mouse write even mediocre python, let alone a rap song about life in ancient Athens written in Latin.

Isn't this distinction more about "language" than "intelligence"? There are some fantastically intelligent animals, but none of them can do the tasks you mention because they're not built to process human languages.


Prior to LLMs, language was what "proved" humans were "more intelligent" than animals.

But this is besides the point; I have no doubt that if one were to make a mouse immortal and give it 50,000 years experience of reading the internet via a tokeniser that turned it into sensory nerve stimulation and it getting rewards depending on how well it can guess the response, it would probably get this good sooner simply because organic minds seem to be better at learning than AI.

But mice aren't immortal and nobody's actually given one that kind of experience, whereas we can do that for machines.

Machines can do this because they can (in some senses but not all) compensate for the sample-inefficiency by being so much faster than organic synapses.


I agree with the general sentiment but want to add: Dogs certainly process human language very well. From anecdotal experience of our dogs:

In terms of spoken language they are limited, but they surprise me all the time with terms they have picked up over the years. They can definitely associate a lot of words correctly (if it interests them) that we didn't train them with at all, just by mere observation.

A LLM associates bytes with other bytes very well. But it has no notion of emotion, real world actions and reactions and so on in relation to those words.

A thing that dogs are often way better at than even humans is reading body language and communicating through body language. They are hyper aware of the smallest changes in posture, movement and so on. And they are extremely good at communicating intent or manipulating (in a neutral sense) others with their body language.

This is a huge, complex topic that I don't think we really fully understand, in part because every dog also has individual character traits that influence their way of communicating very much.

Here's an example of how complex their communication is. Just from yesterday:

One of our dogs is for some reason afraid of wind. I've observed how she gets spooked by sudden movements (for example curtains at an open window).

Yesterday it was windy and we went outside (off leash in our yard); she was wary, showed subtle fear and hesitated to move around much. The other dog saw that and then calmly got closer to her, posturing in the same direction she seemed to want to go. He made very small steps forward, waited a bit, let her catch up, and then she let go of the fear and went sniffing around.

This all happened in a very short amount of time, a few seconds, there is a lot more to the communication that would be difficult and wordy to explain. But since I got more aware of these tiny movements (from head to tail!) I started noticing more and more extremely subtle clues of communication, that can't even be processed in isolation but typically require the full context of all movements, the pacing and so on.

Now think about what the above example all entails. What these dogs have to process, know and feel. The specificity of it, the motivations behind it. How quickly they do that and how subtle their ways of communications are.

Body language is a large part of _human_ language as well. More often than not it gives a lot of context to what we speak or write. How often are statements misunderstood because they are only consumed via text? The tone, rhythm and general body language can make all the difference.


I've yet to see ChatGPT run away from a cat.


To be fair, it’s the horsepower of a mouse, but all devoted to a single task, so not 100% comparable to the capabilities of a mouse, and language is too distributed to make a good comparison of what milestone is human-like. But it’s indeed surprising how much that little bit of horsepower can do.


Weather prediction is pretty on point for 5 days out. Also the state of the art is a graph neural network transformer.


> A prediction of the next token being better isn't intelligence

Why? I think it absolutely can be intelligence.


I think it depends on how you define intelligence, but _I_ mostly agree with Francois Collet's stance that intelligence is the ability to find novel solutions and adapt to new challenges. He feels that memorisation is an important facet, but it is not enough for true intelligence, and that these systems excel at System 1 thinking but have huge gaps at System 2.

The alternative I'm considering is that it might just be a dataset problem: feeding these LLMs on words alone makes them lack a huge facet of embodied existence that is needed to get context.

I am a nobody though, so who knows....


I agree, LLM are interesting to me only to the extent that they are doing more than memorisation.

They do seem to do generalisation, to at least some degree.

If it was literal memorisation, we do literally have internet search already.


And who says that LLMs won't be able to adapt to new conditions?

Right now they are limited by the context, but that's probably a temporary limitation.


*Chollet


Intelligence implies a critical evaluation of the statement under examination, before stating it, on considerations over content.

("You think before you speak". That thinking of course does not stop at "sounding" proper - it has to be proper in content...)


A LLM has to do an accurate simulation of someone critically evaluating their statement in order to predict a next word.

If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing. They might not be doing a critical evaluation at all.


> If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing

Well certainly: in the mind ideas can be connected tentatively by affinity, and they become hypotheses of plausible ideas, but then in the "intelligent" process they are evaluated to see if they are sound (truthful, useful, productive etc.) or not.

Intelligent people perform critical evaluation, others just embrace immature ideas passing by. Some "think about it", some don't (they may be deficient in will or resources - lack of time, of instruments, of discipline etc.).


> Intelligence implies a critical evaluation of the statement under examination, before stating it, on considerations over content.

And who says LLM are not able to do that (eventually)?


The poster wrote that prediction of the next token seems like intelligence to him. The reply was that consideration of content is required. You are now stating that it is not proven that it will not happen. But the point was that prediction of the next token is not the intelligence sought, and if and when the intelligence sought does happen, that will be a new stage - the current stage is not what we call intelligence.


I have some experience with LLMs, and they definitely do consider the question. They even seem to do simple logical inference.

They are not _good_ at it right now, and they are totally bad at making generalizations. But who says it's not just an artifact of the limited context?


After the AI Winter, people started calling it ML to avoid the stigma.

Eventually ML got pretty good and a lot of the industry forgot the AI winter, so we're calling it AI again because it sounds cooler.


There’s a little more to it. Learning-based systems didn’t prove themselves until the ImageNet moment around 2013. Machine learning wasn’t used in previous AI hype cycles because those weren't built on learning systems but on what we now call good old-fashioned AI - GOFAI - hard-coded features, semantic web, etc.


Are classical machine learning techniques that don't involve neural networks / DL not considered "learning-based systems" anymore? I would argue that even something as simple as linear regression can and should be considered a learning-based system, even ignoring more sophisticated algorithms such as SVMs or boosted tree regression models. And these were in use for quite a while before the ImageNet moment, albeit not with the same level of visibility.


I've just accepted that these broad terms are really products of their time, and use of them just means people want to communicate at a higher level above the details. (Whether that's because they don't know the details, or because the details aren't important for the context.) It's a bit better than "magic" but not that different; the only real concern I have is vague terms making it into laws or regulations. I agree on linear regression, and I remember being excited by things like random forests in the ML sphere that no one seems to talk about anymore. I think under current standards of vagueness, even basic things like a color bucket-fill operation in a paint program count as "AI techniques". In the vein of slap a gear on it and call it steampunk, slap a search step on it and call it AI, or slap a past-data storage step on it and call it learning.


Interesting opinion, but no.

AI is a science; ML is just one of the areas of that science.


>What happened to ML?

It's cyclical. The term "ML" was popular in the 1990s/early 2000s because "AI" had a bad odor of quackdom due to the failures of things like expert systems in the 1980s. The point was to reduce the hype and say "we're not interested in creating AGI; we just want computers to run classification tasks". We'll probably come up with a new term in the future to describe LLMs in the niches where they are actually useful after the current hype passes.


Is there a good reference available that describes what happened with expert systems in the 80's? I'm only vaguely aware about such things but in my mind they seem to have some utility.


H.P. Newquist's "The Brain Makers: Genius, Ego, and Greed In The Search For Machines That Think" (1994) is a good book that centers on the second AI bubble (1980-1987) and its collapse. It is maybe a bit too focused on the companies and CEOs themselves rather than the technology, but it does cover the main problem of expert systems -- their brittleness when faced with a situation they haven't been designed for. One of the things that we learned with expert systems is it is better to have probabilistic weights for things rather than the IF THEN ELSE branches of a traditional expert system -- this led to things like Bayesian models which were popular before deep learning took over.


I make it a point to use ML in every work meeting or interaction where AI is the subject. It has not gained traction.


> What happened to ML?

Well for one, five years ago, anyone doing machine learning was doing their own training -- you know, overseeing the actual learning part. Now, although there is learning involved in LLMs, you as the consumer aren't involved in that part. You're given API access (or at best a set of pre-trained weights) as a black box, and you do what you can with it.


Yep, the difference is that ML doesn’t have the cultural legacy of the AI concept and thus isn’t as marketable.


> the term AI (...) has completely taken over

Maybe if you go by article titles it has. If you look at job titles, there are many more ML engineers than AI engineers.


AI -> Machine Learning -> Deep Learning -> LLMs -> LMMs -> ?


AI -> Machine Learning -> Deep Learning -> LLMs -> LMMs

new math for knowledge-based systems <- ANN <- ML <- KRR <- LMMs


Hallucinations!

A generative tool can’t Hallucinate! It isn’t misperceiving its base reality and data.

Humans Hallucinate!

ARGH. At least it’s becoming easier to point this out, compared to when ChatGPT came out.


Great example. People will say, “oh that’s just how the word is used now,” but its misuse betrays a real lack of rigorous thought about the subject. And as you point out, it leads one to make false assumptions about the nature of the data’s origin.


If "hallucination" refers to mistaking the product of internal processes for external perception, then generative AI can only hallucinate, as all of its output comes from inference against internal, hard-coded statistical models with zero reconciliation against external reality.

Humans sometimes hallucinate, but still have direct sensory input against which to evaluate inferential conclusions in real time. So we can refine ideas, whatever their origin, against external criteria of correctness -- this is something LLMs totally lack.


Calculators compute; they have to compute reliably; humans are limited and can make computing mistakes.

We want reliable tools - they have to give reliable results; humans are limited and can be unreliable.

That is why we need the tools - humans are limited, we want tools that overcome human limitation.

I really do not see where you intended to go with your post.


Not the poster you’re replying to, but -

I took his point to mean that hallucinate is an inaccurate verb to describe the phenomenon of AI creating fake data, because the word hallucination implies something that is separate from the “real world.”

This term is thus not an accurate label, because that’s not how LLMs work. There is no distinction between “real” and “imagined” data to an LLM - it’s all just data. And so this metaphor is one that is misleading and inaccurate.


This. We have trained generations of people to trust computers to give correct answers. LLM peddlers use that trust to sell systems that provide unreliable answers.


I liken current LLMs to that one uncle that can answer and has an opinion on everything even though he has actually no knowledge of the thing in question. Never says "I don't know", and never learns anything for more than a few moments before forgetting.


Amen. But if you do, you'll still be attacked by apologists. Per the norm for any Internet forum... this one being a prime example.


I say LLM when Im talking about LLMs, I say Generative ML when I'm talking about Generative ML, and I say ML when I'm talking about everything else.

I don't know what AI is, and nobody else does, that's why they're selling you it.


> AGI is/was supposed to be about achieving results of the average human being, not about a sci-fi AI god

When we build something we do not intend to build something that just achieves results «of the average human being», and a slow car, a weak crane, a vague clock are built provisionally in the process of achieving the superior aid intended... So AGI expects human level results provisionally, while the goal remains to go beyond them. The clash you see is only apparent.

> I think the term AI is going to ... will fade out over time

Are you aware that we have been using that term for at least 60 years?

And that the Brownian minds of the masses very typically try to interfere while we proceed focusedly and regarding it as noise? Today they decide that the name is Anna, tomorrow Susie: childplay should remain undeterminant.


Building something that replicates the abilities of the average human being in no way implies that this eventually leads to a superintelligent entity. And my broader point was that many people are using the term AGI as synonymous with that superintelligent entity. The concepts are very poorly defined and thrown around without much deeper thought.

> Are you aware that we have been using that term for at least 60 years?

Yes, and for the first ±55 of those years, it was largely limited to science fiction stories and niche areas of computer science. In the last ±5 years, it's being added to everything. I can order groceries with AI, optimize my emails with AI, on and on. It's become exceptionally more widespread of a term recently.

https://trends.google.com/trends/explore?date=today%205-y&q=...

> And that the Brownian minds of the masses very typically try to interfere while we proceed focusedly and regarding it as noise? Today they decide that the name is Anna, tomorrow Susie: childplay should remain undeterminant.

You're going to have to rephrase this sentence, because it's unclear what point you're trying to make other than "the masses are stupid." I'm not sure "the masses" are even relevant here, as I'm talking about individuals leading/working at AI companies.


I honestly never understood AGI as a simulation of Average Joe: it makes no sense to me. Either we go for the implementation of a high degree of intelligence, or why should we give an "important" name to a "petty project" (however complicated, that can only be an effort that does not have an end in itself). Is it possible that the terminological confusion you see is because we are individually very rooted in our assumptions (e.g. "I want AGI as a primary module in Decision Support Systems")?

> In the last ±5 years, it's being added to everything // I'm not sure "the masses" are even relevant here, as I'm talking about individuals leading/working at AI companies

Who has «added to everything» the term AI? The «individuals leading/working at AI companies»? I would have said, the onlookers, or relatively marginal actors (e.g. marketing) who have an interest in the buzzword. So my point was: we will go on using the term in «niche [and not so niche] areas of computer science» regardless of the outside noise.


The grifter is a nihilist. Nothing is holy to a grifter and if you let them they will rob every last word of its original meaning.

The problem with AI, as clearly outlined in this article, is the same as the problem with the blockchain or with other esoteric grifts: It drains needed resources from often already crumbling systems¹.

The people falling for the hype beyond the actual usefulness of the hyped object are wishing for magical solutions that they imagine will solve all their problems. Problems that can't be fixed by wishful thinking, but by not fooling yourself and making technological choices that adequately address the problem.

I am not saying that LLMs are never going to be a good choice to adequately address the problem. What I am saying is that people blinded by blockchain/AI/quantum/snakesoil hype are the wrong people to make that choice, as for them every problem needs to be tackled using the current hype.

Meanwhile a true expert will weigh all available technological choices and carefully test them against the problem. So many things can be optimized and improved using hard, honest work, careful choices and a group of people trying hard not to fool themselves, this is how humanity managed to reach the moon. The people who stand in the way of our achievements are those who lost touch with reality, while actively making fools of themselves.

Again: It is not about being "against" LLMs, it is about leaders admitting they don't know, when they do in fact not know. And a sure way to realize you don't know is to try yourself and fail.

¹ I had to think about my childhood friend, whose mother, a believer in the esoteric, died of a preventable disease because she fooled herself into believing in magical cures and gurus until the fatal end.


That is funny, because of all the problems with LLMs, the biggest one is that they will lie/hallucinate/confabulate to your face before saying I don't know, much like those leaders.


Is this inherent to LLMs by the way, or is it a training choice? I would love for an LLM to talk more slowly when it is unsure.

This topic needs careful consideration and I should use more brain cycles on it. Please insert another coin.


It's fairly inherent. Talking more slowly wouldn't make it more accurate, since it's a next-token predictor: you'd have to somehow make it produce more tokens before "making up its mind" (i.e., outputting something that's sufficiently correlated with a particular answer that it's a point of no return), and even that is only useful to the extent it's memorised a productive algorithm.

You could make the user interface display the output more slowly "when it is unsure", but that'd show you the wrong thing: a tie between "brilliant" and "excellent" is just as uncertain as a tie between "yes" and "no".


Telling the LLM to walk through the response, step by step, is a prompt engineering thing though.


It is. Studied in the literature under the name "chain of thought" (CoT), I believe. It's still subject to the limitations I mentioned. (Though the output is more persuasive to a human even when the answer is the same, so you should be careful.)
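
For what it's worth, here is a minimal sketch of the idea (the model name and the question are just placeholders; assumes the openai Python package and an API key in the environment), contrasting a direct question with the same question plus a step-by-step instruction:

    # Hypothetical chain-of-thought prompting sketch; not anyone's production code.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    question = ("A bat and a ball cost $1.10 together; the bat costs $1.00 "
                "more than the ball. How much is the ball?")

    direct = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    cot = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": question + " Think through it step by step before giving the final answer."}],
    )

    print(direct.choices[0].message.content)  # answer with whatever reasoning the model chooses to show
    print(cot.choices[0].message.content)     # answer preceded by explicit step-by-step reasoning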


In the beginning I think some, if not many, people did genuinely think it was "AI". It was the closest we've ever gotten to a natural language interface and that genuinely felt really different than anything before, even to an extreme cynic. And I also think there's many people that want us to develop AI and so were trying to actively convince themselves that e.g. GPT or whatever was sentient. Maybe that Google engineer who claimed an early version of Bard was "sentient" even believed himself (though I still suspect that was probably just a marketing hoax).

It's only now that everybody's used to natural language interfaces that I think we're becoming far less forgiving of things like this nonsense:

---

- "How do I do (x)."

- "You do (A)."

- "No, that's wrong because reasons."

- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (B)."

- "No that's also wrong because reasons."

- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (A)."

- #$%^#$!!#$!

---


> In the beginning I think some, if not many, people did genuinely think it was "AI".

Most people don't know what the academic discipline of AI involves; I've noticed a bit of that even on this website. LLMs, even the worst of them, are objectively members of the set of things termed AI by those working in that field.

> (though I still suspect that was probably just a marketing hoax).

He was fired for saying that it was sentient, and they didn't release the model externally at the time, and they only made Bard available in response to the competition from OpenAI/ChatGPT.


Yes, it's a language model, not a reasoning model. But people got confused at the start.


Or they are adherents to the hypothesis of Linguistic Determinism which preexisted LLMs by quite some time.

https://en.wikipedia.org/wiki/Linguistic_determinism


Thank you. Few people really seem to acknowledge the name.


>It was the closest we've ever gotten to a natural language interface and that genuinely felt really different

Agreed, but I think we'll soon start to discover that interacting with systems as if they were text adventure games of the 80's is also going to get pretty weird and frustrating, not to mention probably inefficient.


Even worse, because the text adventure games of the '70s and '80s (at least, the ones that were reasonably successful) had consistent internal state. If you told it to move to the Back of Fountain, you were in the Back of Fountain and would stay there until you told it to move somewhere else. If you told it to wave the black rod with the star at the chasm, a bridge appeared and would stay there.

If you tell ChatGPT that your name is Dan, and give it some rules to follow, after enough exchanges, it will simply forget these things. And sometimes it won't accurately follow the rules, even when they're clearly and simply specified, even the first prompt after you give them. (At least, this has been my experience.)

I don't think anyone really wants to play "text adventure game with advanced Alzheimer's".


There was a great post here about this specific possibility, that the NN crowd has effectively convinced itself of the supernatural abilities of its NNs:

https://softwarecrisis.dev/letters/llmentalist/


> It's only now that everybody's used to natural language interfaces that I think we're becoming far less forgiving (...)

Sort of, kind of. Those natural language interfaces that Actually Work aren't even 2 years old yet, and in that time there weren't many non-shit integrations released. So at best, some small subset of the population is used to the failure modes of ChatGPT app - but at least the topic is there, so future users will have better aligned expectations.


> Those natural language interfaces that Actually Work aren't even 2 years old yet

Siri. Introduced 2011.

Before that...

Q&A Database, the product that Symantec was founded to sell. Released 1985.

https://en.wikipedia.org/wiki/Q%26A_(Symantec)

Before that...

The Infocom adventure game parser.

https://www.ifwiki.org/Infocom-type_parser

Infocom was founded in 1979.

https://en.wikipedia.org/wiki/Infocom

Before that...

SHRDLU, 1968.

https://en.wikipedia.org/wiki/SHRDLU

Before that...

ELIZA? (1964)

https://en.wikipedia.org/wiki/ELIZA


> Siri. Introduced 2011.

Not working well to this day. Siri is mostly a joke and a meme, much like Alexa, Cortana and Google whatever-they-call-it-now.

Other stuff: yeah, you can get away with a lot using some formal grammar, a random number generator, and a lot of effort railroading the user so they won't even think to say anything that'll break the illusion. I've had people not realize for weeks that my IRC bot is a bot, even if all it did was apply a bunch of regular expressions to input (the trick was that the bot would occasionally speak unprompted and react to other people's conversations, and replied to common emoticons and words indicating emotion).

No, only in the last two years we can say that there exist Speech-to-Text that works reliably, Text-to-Speech that sounds natural, and a ML model that can parse arbitrary natural language text and reliably infer what you want, even if you never directly stated what you want, and handling for it was never explicitly coded.


I disagree.

I think you are presenting a gradual incremental change as some sort of binary transition, and it isn't. This is at best disingenuous and misleading and at worst it's a flat out lie.

Text-only natural language interfaces were working in the 1960s and working well by the 1980s.

Live real-time speech recognition with training was working by the turn of the century, and following a hand injury, I was dictating my work into freeware for a while by 2000 or so. It was bundled with later versions of IBM OS/2 Warp.

Real-time speaker-independent speech recognition started working usefully well some 15 years ago, and even as a sceptic who dislikes such things, I was using it and demonstrating it a decade ago. It's been a standard feature of mainstream commercial desktop OSes as well as smartphones for about 8-9 years. Windows 10 (2015) included Cortana; macOS Sierra (2016) included Siri.

In fact, after I posted my previous comment, this morning Facebook reminded me that 8Y ago today I was installing Win10 on my Core 2 Duo Thinkpad.

I don't allow any of these devices in my home, but they're a multi-billion dollar market in domestic voice-controlled hardware.

This is mainstream used by a double-digit percentage of humanity.

You seem to have been deceived by the LLM bot fakery of "intelligence" that it's achieved some quantum leap in smarts recently. This is illusory.


I must have really bad luck for voice control systems, because I've tried most of them, and there were only two that reliably worked for me:

1) Most recent breed of ML models, backed in part by LLMs;

2) A voice control system I hacked together some 17 years ago, with Microsoft Speech API and a cheap microphone I soldered to a long cable and stuck to the side of the wardrobe. It had the benefit of using a fixed grammar tree, and it was in the saner time when the vendor actually allowed you to train the system on your own texts.

Everything else - especially current voice assistants and dictation software on the couple flagship phones I've used over the years - was garbage compared to that.

So again, I must be the unluckiest person in the world when it comes to voice recognition, because what you wrote is entirely counter to my experience.


I am not talking about my own experience, though.

I'm talking about what I see in the world. The people who own Apple watches and iPhones who rarely even take the phone out of their pocket/bag any more. The folks whose light switches are never used or who are even getting them removed. The people who don't use things like kitchen timers any more. The disappearance of physical or local music/video collections.

I think these trends are observable and widespread.

Me? I don't go near any of them, myself.


> The folks whose light switches are never used or who are even getting them removed.

Eh, I know an anecdote isn't a statistic, but my experience with voice controls is so bad that I've mostly gotten rid of the "smart" parts and gone back to the light switches.

Back when we had Alexa, we'd say "Alexa, Küche aus" ("kitchen off") and it would reply "Ich kann nicht 'Küche' im Spotify finden" ("I can't find 'Küche' on Spotify"; worse, we didn't have Spotify).

Siri is less bad, but it still just randomly fails. I've had two devices in the same room try to respond to the same voice command, one succeeds and the other spins around for a bit and responds with a spoken generic error.

For the voice input keyboard, the error rate is much worse, bad enough that in my experience I might as well have typed one real word and let autosuggest write the remainder of whatever I was not-typing.


I find that easy to believe.

I have watched my friends with this kit demonstrate it to me, many times, and to me it seems clumsy and difficult and error-prone, as well as being expensive, vastly horribly insecure and not so much violating privacy as gang-raping it.

I do not understand why they like it or think it better.

The only friend I know with a valid use case is non-light-sensing blind. It's useful for him to just tell the room "turn the lights off" rather than hunt for the switch. Or ask if they're on.

(Yes he has lights. He's married with a kid.)


I love this. I want a t-shirt with this


"So, when will you be updating yourself to not provide the same wrong answer in future, or when I ask it later in a new session"

"No."


> One thing I’ve noticed with the AI topic is how there is no discussion on how the name of a thing ends up shaping how we think about it. There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product. Not because it’s actually AI in any rigorous or historical sense of the word, but because it’s trendy and helps you get investment dollars.

I was reading one famous book about investing some time ago (I don't remember which one exactly; I think it was A Random Walk Down Wall Street, but don't quote me on that), and one chapter at the beginning of the book talks about the .com bubble and how companies, even ones that had nothing to do with the web, started to put .com or www in their name and saw an immediate bump in their stock price (until it all burst, as we know now).

And every hype cycle / bubble is like that. We saw something similar with cryptocurrencies. For a while, every tech demo at a dev convention had to have some relation to the "blockchain". We saw every variation of names ending in -coin. And a lot of companies, some not even in tech, had dumb projects related to the blockchain, which anyone slightly knowledgeable about the tech could tell was complete BS, and they were almost always quietly killed off after a few months.

To a much lesser extent, we saw the same with "BigData" (who even uses this word anymore?) and AR/VR/XR.

And now it's AI, until the next recession and/or the next shiny thing that makes for amazing demos pops out.

It is not to say that it is all fake. There are always some genuine businesses that have actual use cases for the tech and will probably survive the burst (or get bought up and live on as MS/Google/AWS Thingamajig). But you have to be pretty naïve to think 99% of the current AI companies will still be around in 5 years, or to believe their marketing material. But it doesn't matter if you manage to sell before the bubble pops, and so the cycle continues.


Long Island Iced Tea renamed itself Long Blockchain Corp for a stock price bump.


Hah, I forgot about that, and their subsequent delisting. That was about as blatantly criminal as you could get and in that particular hype train I remember some noise about that but not nearly enough.


Yeah, happens every time. Remember when people were promising blockchain but had nothing to show for it (sometimes, not even an idea)? Or "cloud powered" for apps that barely made API calls? Remember when anything and everything needed an app, even if it was just some static food menu?

It's obvious BS from anyone in tech, but the people throwing money aren't in tech.

>I think the term AI is going to slowly become less marketing trendy, and will fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.

It'll die down, but the marketing tends to stick, sadly. We'll have to deal with whether AI means machine learning or LLMs or video game pathfinding for decades to come.


> Remember when people were promising blockchain but had nothing to show for it (sometimes, not even an idea)?

They're still on r/Metaverse_blockchain. Every day, a new "meme coin". Just in:

"XXX - The AI-Powered Blockchain Project Revolutionizing the Crypto World! ... Where AI Meets Web3"

"XXX is an advanced AI infrastructure that develops AI-powered technologies for the Web3, Blockchain, and Crypto space. We aim to improve the Web3 space for retail users & startups by developing AI-powered solutions designed explicitly for Web3. From LLMs to Web3 AI Tools, XXX is the go-to place to boost your Web3 flow with Artificial Intelligence."


They too are a redlisted species now. Just prompt ChatGPT to business-speak with maximum buzzword bullshit and be amazed. When they came for the Kool-Aid visionaries I was not afraid, because I was not a grifter.

This is a synergistic, paradigm-shifting piece of content that leverages cutting-edge, scalable innovation to deliver a robust, value-added user experience. It epitomizes next-gen, mission-critical insights and exemplifies best-in-class thought leadership within the dynamic ecosystem


The 'intelligence' label has been applied to computers since the beginning and it always misleads people into expecting way more than they can deliver. The very first computers were called 'electronic brains' by newspapers.

And this delay between people's mental images of what an 'intelligent' product can do and the actual benefits they get for their money once a new generation reaches the market creates this bullwhip effect in mood. Hence the 'AI winters'. And guess what, another one is brewing because tech people tend to think history is bunk and pay no attention to it.


Newspapers often mislead people about stuff in general...


> There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product.

This isn't really a new phenomenon, though. The only thing new about it is that the marketing buzzword of the day is "AI". For a little while prior it was "machine learning". History is littered with examples of marketers and salespeople latching onto whatever is popular and trendy, and using it to sell, regardless if their product actually has anything to do with it.


Typically at this point in the hype cycle a new term emerges so companies can differentiate their hype from the pack.

Next up: Synthetic Consciousness, "SC"

Prediction: We will see this press release within 24 months:

"Introducing the Acme Juice Squeezer with full Synthetic Consciousness ("SC"). It will not only squeeze your juice in the morning but will help you gently transition into the working day with an empathetic personality that is both supportive and a little spunky! Sold exclusively at these fine stores..."


« A new generation of Sirius Cybernetics Corporation robots and computers, with the new GPP feature.”

“GPP feature?” said Arthur. “What's that?”

“Oh, it says Genuine People Personalities.”

“Oh,” said Arthur, “sounds ghastly.”

A voice behind them said, “It is.” The voice was low and hopeless and accompanied by a slight clanking sound. They span round and saw an abject steel man standing hunched in the doorway.

“What?” they said.

“Ghastly,” continued Marvin, “it all is. Absolutely ghastly. Just don't even talk about it. Look at this door,” he said, stepping through it. The irony circuits cut into his voice modulator as he mimicked the style of the sales brochure. “All the doors in this spaceship have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done.”

»


:)


I'd like to think that AI right now is basically a placeholder term, like a search keyword or hot topic and people are riding the wave to get attention and clicks.

Everything that is magic will be labeled AI for now, until it settles into its proper terms and is only closely discussed by those who are actually driving innovation in the space or are just casually using the applications in business or private.


The term "artificial intelligence" was marketing from its creation. It means "your plastic pal who's fun to be with, especially if you don't have to pay him." Multiple disparate technologies all called "AI", because the term exists to sell you the prospect of magic.


Names carry powerful connotations and significantly shape how we think about and interact with technology


AI should be re-acronymed to mean Algorithmic Insights. Artificial intelligence is akin to a ziplock bag doing algebra with no external interactions.


Turbo


I worked for an AI startup that got bought by a big tech company and I've seen the hype up close. In the inner tech circles it's not exactly a big lie. The tech is good enough to make incredible demos but not good enough to generalize into reliable tools. The gulf between demo and useful tool is much wider than we thought.


I work at Microsoft, though not in AI. This describes Copilot to a T. The demos are spectacular and get you so excited to go use it, but the reality is so underwhelming.


Copilot isn't underwhelming, it's shit. What's impressive is how Microsoft managed to gut GPT-4 to the point of near-uselessness. It refuses to do work even more than OpenAI models refuse to advise on criminal behavior. In my experience, the only thing it does well is scan documents on corporate SharePoint. For anything else, it's better to copy-paste to a proper GPT-4 yourself.

(Ask Office Copilot in PowerPoint to create you a slide. I dare you! I double dare you!!)

The problem with demos is that they're staged, they showcase integrations that are never delivered, and probably never existed. But you know what's not hype and fluff? The models themselves. You could hack a more useful Copilot with AutoHotkey, today.

I have GPT-4o hooked up as a voice assistant via Home Assistant, and what a breeze that is. Sure, every interaction costs me some $0.03 due to inefficient use of context (HA generates too much noise by default in its map of available devices and their state), but I can walk around the house and turn devices on and off by casually chatting with my watch, and it works, works well, and works faster than it takes to turn on Google Assistant.

So no, I honestly don't think AI advances are oversold. It's just that companies large and small race to deploy "AI-enabled" features, no matter how badly made they are.


Basically, functional AI interactions are prohibitively resource intensive and expensive. Microsoft's non-coding Copilots are shit due to resource constraints.


Basically, yes. My last 4 days of playing with this voice assistant cost me some $3.60 for 215 requests to GPT-4o, amounting to a little under 700 000 tokens. It's something I can afford[0], but with costs like this, you can't exactly give out GPT-4 access to people for free. This cost structure doesn't work. It doesn't work with GPT-4o, and it worked even less with earlier model iterations, which cost more than twice as much. And yet, that is what you need if you want a general-purpose Copilot or Assistant-like system. GPT-3.5-Turbo ain't gonna cut it. Llamas ain't gonna cut it either[1].

In a large sense, Microsoft lied. But they didn't lie about capability of the technology itself - they just lied about being able to afford to deliver it for free.

--

[0] - Extrapolated to a hypothetical subscription, this would be ~$27 per month. I've seen more expensive and worse subscriptions. Still, it's a big motivator to go dig into the code of that integration and make it use ~2-4x fewer tokens by encoding "exposed entities" differently, and much more concisely.

[1] - Maybe Llama 3 could, but IIRC license prevents it, plus it's how many days old now?


> they just lied about being able to afford to deliver it for free.

But they never said it'll be free - I'm pretty sure it was always advertised as a paid add-on subscription. With that being the case, why would they not just offer multiple tiers to Copilot, using different models or credit limits?


Contrary to what the corporations want you to believe -- no, you can't buy your way out of every problem. Most of the modern AI tools are mostly oversold and underwhelming, sadly.


whoa that's very cool. can you share some info about how you set up the integration in ha? would love to explore doing something like this for myself


With the most recent update, it's actually very simple. You need three things:

1) Add OpenAI Conversation integration - https://www.home-assistant.io/integrations/openai_conversati... - and configure it with your OpenAI API key. In there, you can control part of the system prompt (HA will add some stuff around it) and configure model to use. With the newest HA, there's now an option to enable "Assist" mode (under "Control Home Assistant" header). Enable this.

2) Go to "Settings/Voice assistants". Under "Assist", you can add a new assistant. You'll be asked to pick a name, language to use, then choose a conversation model - here you pick the one you configured in step 1) - and Speech-to-Text and Text-to-Speech models. I have a subscription to Home Assistant Cloud, so I can choose "Home Assistant Cloud" models for STT and TTS; it would be great to integrate third party ones here, but I'm not sure if and how.

3) Still in "Settings/Voice assistants", look for a line saying "${some number} entities exposed", under "Add assistant" button. Click that, and curate the list of devices and sensors you want "exposed" to the assistant - "exposed" here means that HA will make a large YAML dump out of selected entities and paste that into the conversation for you[0]. There's also other stuff (I heard docs mentioning "intents") that you can expose, but I haven't look into it yet[1].

That's it. You can press the Assist button and start typing. Or, for much better experience, install HA's mobile app (and if you have a smartwatch, the watch companion app), and configure Home Assistant as your voice assistant on the device(s). That's how you get the full experience of randomly talking to your watch, "oh hey, make the home feel more like a Borg cube", and witnessing lights turning green and climate control pumping heat.

I really recommend everyone who can to try that. It's a night-and-day difference compared to Siri, Alexa or Google Now. It finally fulfills those promises of voice-activated interfaces.

(I'm seriously considering making a Home Assistant to Tasker bridge via HA app notification, just to enable the assistant to do things on my phone - experience is just that good, that I bet it'll, out of the box, work better than Google stuff.)

--

[0] - That's the inefficient token waster I mentioned in the previous comment. I have some 60 entities exposed, and best I can tell, it generates a couple thousand tokens' worth of YAML, most of which is noise like entity IDs and YAML structure. This could be cut down significantly if you named your devices and entities cleverly (and concisely), but I think my best bet is to dig into the code and trim it down. And/or create synthetic entities that stand for multiple entities representing a single device or device group, like e.g. one "A/C" entity that combines multiple sensor entities from all A/C units.

[1] - Outside the YAML dump that goes with each message (and a preamble with current date/time), which is how the Assistant knows the current state of every exposed entity, there's also an extra schema exposing controls via the "function calling" mechanism of the OpenAI API, which is how the assistant is able to control devices at home. I assume those "intents" go there. I'll be looking into it today, because there's a bunch of interactions I could simplify if I could expose automation scripts to the assistant.
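
For anyone curious what that "function calling" mechanism looks like, here's a minimal sketch with the OpenAI Python client. This is not the actual Home Assistant integration code - the tool name, entity IDs and prompts are made up - but it's the same pattern: the exposed state goes into the prompt, and the model answers with a structured tool call instead of free text.

  # Minimal sketch of the function-calling pattern (not HA's actual schema;
  # the tool name, entity IDs and prompts below are made up).
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  tools = [{
      "type": "function",
      "function": {
          "name": "turn_on_entity",  # illustrative tool
          "description": "Turn on a light, switch or other controllable entity",
          "parameters": {
              "type": "object",
              "properties": {"entity_id": {"type": "string"}},
              "required": ["entity_id"],
          },
      },
  }]

  messages = [
      {"role": "system", "content": "You control a smart home. Exposed entities:\n"
                                    "light.kitchen: off\nlight.office: on"},
      {"role": "user", "content": "Make the kitchen brighter, please."},
  ]

  resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

  # If the model decides to act, it answers with a structured tool call instead
  # of free text; the integration then executes it against the real device.
  for call in resp.choices[0].message.tool_calls or []:
      print(call.function.name, call.function.arguments)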


lol I can’t help but assume that people who think copilot is shit have no idea what they are doing.


I have it enabled company-wide at enterprise level, so I know what it can and can't do in day-to-day practice.

Here's an example: I mentioned PowerPoint in my earlier comment. You know what's the correct way to use AI to make PowerPoint slides for you? A way that works? It's to not use the O365 Copilot inside PowerPoint, but rather to ask GPT-4o in the ChatGPT app to use Python and pandoc to make you a PowerPoint.

I literally demoed that to a colleague the other day. The difference is like night and day.
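
The kind of script GPT-4o spits out for this is tiny - roughly the sketch below. The slide contents and filenames are placeholders; the only real dependency is having pandoc installed.

  # Sketch of the "Python + pandoc" route: write a Markdown deck, convert to .pptx.
  import subprocess
  from pathlib import Path

  slides = [
      "# Highlights",
      "- Revenue up",
      "- Costs down",
      "",
      "# Next steps",
      "- Ship the thing",
  ]
  Path("slides.md").write_text("\n".join(slides) + "\n")

  # pandoc turns the level-1 headings into individual slides for .pptx output
  subprocess.run(["pandoc", "slides.md", "-o", "slides.pptx"], check=True)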


I've gone back to using GitHub Copilot with reveal.js [0]. It's much nicer to work with, and I'd recommend it unless you specifically need something from PowerPoint's advanced features.

[0] https://revealjs.com/


GitHub (which is owned by Microsoft) Copilot or Microsoft Copilot?


It's a lot like AR before Vision Pro. The demo and the reality didn't meet. I'm not trying to claim Vision Pro is perfect but it seems to do AR in the real world without the circumstances needing to be absolutely ideal.


The Vision Pro is not doing well. Apple has cancelled the next version.[1] As Carmack says, AR/VR will be a small niche until the headgear gets down to swim goggle size, and will not go mainstream until it gets down to eyeglass size.

[1] https://www.msn.com/en-us/lifestyle/shopping/apple-shelves-n...


It was always the plan for Apple to release a cheaper version of the Vision Pro next. That the next version of the PRO has been postponed isn't a huge sign. It just seems that the technology isn't evolving quickly enough to warrant a new version any time soon.


> swim goggle size

The "Bigscreen Beyond" [0] is quite close, but doesn't have cameras - so at this stage it's only really good for watching movies and the like.

[0] https://store.bigscreenvr.com/products/bigscreen-beyond


That one does have 6DoF tracking, it's just based on the Valve Lighthouse system. Upside of that system is it's more privacy respecting.


Which it probably won't, because real-life physics isn't aware of roadmaps and corporate ads.


What physics are you talking about? Limits on power? Display? Sensor size? I ask because I’ve had similar feelings about things like high speed mobile Internet or mobile device screen size (over a couple of decades) and lived to see all my intuition blown away, so I really don’t believe in limits that don’t have explicit physical constraints behind them.


Lens diffraction limits. VR needs lenses that are small and thin enough while still being powerful enough to bend the needed light towards the eyes. Modern lenses need more distance between the screen and the eyes and they’re quite thick.

Theoretically future lenses may make it possible, but the visible light metamaterials needed are still very early research stage.


Apple approved ALVR a few days ago too; clearly they're having issues, at least wrt getting developer attention.

1: https://apps.apple.com/us/app/alvr/id6479728026


Your article states this differently. The development has not been canceled fully but refocused.

“and now hopes to release a more standard headset with fewer abilities by the end of next year.”


That's marketing-speak for "cancelled".


I think both hardware and software in AR have to become unobtrusive for people to adopt it. And then it will be a specialized tool for stuff like maintenance. Keeping large amounts of information in context without requiring frequent changes in context. But I also think that the information overload will put a premium on non-AR time. Once it becomes a common work tool, people using it will be very keen to touch grass and watch clouds afterwards.

I don't think it will ever become the mainstream everyday carry proponents want it to be. But only time will tell...


Until there is an interface for it that allows you to effectively touch type (or equivalent) then 99% of jobs won't be able to use it away from a desk anyway. Speech to text would be good enough for writing (non technical) documentation but probably not for things like filling spreadsheets or programming.


But does what Apple has shown in its demos of the Vision Pro actually meet reality? Does it provide any value at all?

In my eyes, it's exactly the same as AI. The demos work. You can play around with it, and it's impressive for an hour. But there's just very little value.


The value would come if it was something you would feel comfortable wearing all day. So it would need perfect pass through, be much much lighter and more comfortable. If they achieved that and can do multiple high resolution virtual displays then people would use it.

The R&D required to get to that point is vast though.


> can do multiple high resolution virtual displays

In most applications, it then would need to compete on price with multiple high resolution displays, and undercut them quite significantly to break the inertia of the old tech (and other various advantages - like not wearing something all day and being able to allow other people to look at what you have on your screen).


I take your point but living in a London flat I don't have the room for multiple high resolution displays. Nor are they very portable, I have a MBP rather than an iMac because mobility is important.

I do think we're 4+ years away from it getting to the 'iPhone 1' level of utility though, so we'll see how committed Apple are to it.


That's what all these companies are peddling though. The question is - do humans actually NEED a display before their eyes for all awake time? Or even most of it? Maybe, but today I have some doubts.


Given how we as a society are now having significant second thoughts as to the net utility for everybody having a display in their pocket for all awake time, I also have some doubts.


it's very sad because it's sort of a so-near-but-so-far kind of situation

It would be valuable if it could do multimonitor, but it can't. It would be valuable if it could run real apps but it only runs iPad apps. It would be valuable if Apple opened up the ecosystem, and let it easily and openly run existing VR apps, including controllers - but they won't.

In fact the hardware itself crosses the threshold to where the value could be had, which is something that couldn't be said before. But Apple deliberately crimped it based on their ideology, so we are still waiting. There is light at the end of the tunnel though.


> But Apple deliberately crimped it based on their ideology

It's in a strange place, because Apple definitely also crimped it by not even writing enough software for it inhouse.

Why can't it run Mac apps? Why can't you share your "screen configuration" and its contents with other people wearing a Vision Pro in the same room as you?


It is not really AR. Reality is not just augmented but captured first with a camera. It can make someone dizzy.


It's the opposite of AR, it's VR augmented with real imagery.


I never considered this angle. (Yeah, I am a sucker -- I know.) Are you saying that they cherry pick the best samples for the demo? Damn. I _still_ have high hopes for something like Copilot. I work on CRUD apps. There are so many cases where I want Copilot to provide some sample code to do X.


Sorry I didn’t mean GitHub Copilot. Code generation is definitely one of the better use cases for AI. I meant the “Copilot” brand that Microsoft has trotted out into pretty much every one of its products and rolled together in this generic “Copilot” app on Windows.


They absolutely do. Check out this video https://youtu.be/tNmgmwEtoWE


I just used Groq / llama-7b to classify 20,000 rows of Google sheets data (Sidebar archive links) that would have taken me way longer... Every one I've spot checked right now has been correct, and I might write another checker to scan the results just in case.

Even w/ a 20% failure it's better than not having the classifications
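
For anyone wanting to try the same pattern, it looks roughly like the sketch below with any OpenAI-compatible client (Groq exposes one). The endpoint, model name, categories and CSV columns are placeholders, not what I actually used.

  # Rough sketch of "LLM as a bulk classifier" over rows of a spreadsheet export.
  import csv
  from openai import OpenAI

  # Groq's OpenAI-compatible endpoint; any compatible provider works the same way.
  client = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="YOUR_KEY")

  CATEGORIES = ["article", "tool", "video", "paper", "other"]  # illustrative

  def classify(title: str, url: str) -> str:
      prompt = (
          f"Classify this link into exactly one of {CATEGORIES}.\n"
          f"Title: {title}\nURL: {url}\n"
          "Answer with the category name only."
      )
      resp = client.chat.completions.create(
          model="llama-3.1-8b-instant",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],
          temperature=0,
      )
      return resp.choices[0].message.content.strip()

  with open("links.csv") as f:  # exported from the sheet
      for row in csv.DictReader(f):
          print(row["url"], "->", classify(row["title"], row["url"]))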


The problem isn't that it's not useful for self driven tasks like that, it's that you can't really integrate that into a product that does task X because when someone buys a system to do task X, they want it to be more reliable than 80%.


Stick a slick UI on it that lets the end user quickly fix up just the bits it got wrong and flip through documents quickly, and 80% correct can still be a massive timesaver.


I think that can kind of work for B2C things, but is much less likely to do so for B2B. Just as an example, I work on industrial maintenance software, and customers expect us to catch problems with their machinery 100% of the time, and in time to prevent it. Sometimes faults start and progress to failure faster than they're able to send data to us, but they still are upset that we didn't catch it.

It doesn't matter whether that's reasonable or not, there are a lot of people who expect software systems to be totally reliable at what they do, and don't want to accept less.


We're thinking about adding AI to the product and that's the path I'd like to take. View AI as an intern who can make mistakes, and provide a UI where the user can review what the AI is planning to do.


Except that people hate monitoring an intern all day, regardless of whether it is a human or a machine.


I think this is going to be a heavy lift, and one of the reasons I think a chat bot is not the right UX. Every time someone says “all you need to do to get ChatGPT working is provide it explicit requirements and iterate”, I think that for a lot of coding tasks it’s much easier to just hack on code for a while than to be a manager to an 80%-right intern.


I classified ~1000 GBA game ROM files by using their file names to put each in a category folder. It worked like 90% correctly. I used GPT-3.5 and therefore it didn't adhere to my provided list of categories, but they were mostly not wrong otherwise.

https://gist.github.com/SMUsamaShah/20f24e80cfe962d26af5315e...


Sorry this actually sounds like a real use case. What was the classification? (I tried google “sidebar archive”). I assume somehow you visited 20,000 web pages and it classified the text on the page? How was that achieved ? You ran a local llama?


We had ChatGPT look at 200,000 products, and make a navigation structure in 3 tiers based on the attributes of each product. The validation took 2% of the time it would have taken to manually create the hierarchy ourselves.

I think that even the simpler LLMs are very well suited for classification tasks, where very little prompting is needed.


Sorry to harp on.

So you had a list of products (what sort - I am thinking like widgets from a wholesaler and you want to have a three tier menu for an e-commerce site?)

I am guessing each product has a description - like from Amazon - and ChatGPT read the description and said “aha, this is a Television/LCD/50inch or Underwear/flimsy/bra”

I assume you sent in 200,000 different queries - but how did you get it to return three tiers? (Maybe I need to read one of those “become a ChatGPT expert” blogs)


I'm not this person; but, I've been working on LLMs pretty aggressively for the last 6ish months and I have some ideas of how this __could__ be done.

You could plainly ask the LLM something like this as the query goes on:

"Please provide 3 categories that this product could exist under, with increasing specificity in the following format:

  {
     "broad category": "a broad category that would encompass this product, as well as others, for example 'televisions' for a 50" OLED LG with Roku integration",
     "category": "a narrower category that describes this product more aggressively, for example 'Smart Televisions'",
     "narrow category": "an even narrower category that describes this product and its direct competitors, for example OLED Smart televisions"
  }
A next question you'll have pretty quick is, "Well, what if sometimes it returns 'Smart televisions' and other times it returns 'Smart TVs', won't that result in multiple of the same category?" And that's a good and valid question, so you then have another query that takes the categories that have been provided to you and asks for synonyms, alternative spellings, etc, such as:

"Given a product categorization of a specific level of specificity, please provide a list of words and phrases that mean the same thing".

In OpenAI's backend - and many others, I think - you can have the API run the query multiple times and get back multiple answers. Enumerate over those answers, build the graph, and you have all that data in an easy-to-read-and-follow format!

It might not be perfect, but it should be pretty good!
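
To make that concrete, here's a rough sketch of the per-product query with the OpenAI Python client. JSON mode and the n parameter (several completions per request) do exist in the API; the prompt wording, model choice and key names are just illustrative.

  # Hedged sketch: ask for three tiers as JSON, sampling several candidates.
  import json
  from openai import OpenAI

  client = OpenAI()

  PROMPT = (
      "Categorize the following product as a JSON object with keys "
      "'broad_category', 'category' and 'narrow_category', each more specific "
      "than the last. Product: {description}"
  )

  def categorize(description: str, samples: int = 3) -> list[dict]:
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model choice
          messages=[{"role": "user", "content": PROMPT.format(description=description)}],
          response_format={"type": "json_object"},  # force valid JSON back
          n=samples,  # several answers per product, as suggested above
          temperature=0.7,
      )
      return [json.loads(choice.message.content) for choice in resp.choices]

  print(categorize("50-inch OLED smart TV with built-in streaming apps"))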


> Well, what if sometimes it returns 'Smart televisions' and other times it returns 'Smart TVs', won't that result in multiple of the same category

Text similarity works well in this case. You can just use cosine similarity and merge ones that are very close or ask GPT to compare for those on the edge
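
A rough sketch of that merge step, assuming OpenAI embeddings and an arbitrary 0.9 similarity threshold (tune on real data):

  # Merge near-duplicate category names via embeddings + cosine similarity.
  import numpy as np
  from openai import OpenAI

  client = OpenAI()

  def embed(texts):
      resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
      return np.array([d.embedding for d in resp.data])

  def merge_similar(categories, threshold=0.9):
      vecs = embed(categories)
      vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # unit vectors
      kept = []      # indices of canonical names seen so far
      mapping = {}
      for i, name in enumerate(categories):
          match = next((k for k in kept if vecs[i] @ vecs[k] >= threshold), None)
          if match is None:
              kept.append(i)
              mapping[name] = name
          else:
              mapping[name] = categories[match]
      return mapping

  print(merge_similar(["Smart televisions", "Smart TVs", "Soundbars"]))

Pairs that land just under the threshold are the ones worth sending back to the model for a yes/no comparison, as above.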


It sounds like a real use case, but possibly quite overkill to use an LLM.

Unless you need to have some "reasoning" to classify the documents correctly, a much more lightweight BERT-like model (RoBERTa or DistilBERT) will perform on par in accuracy while being a lot faster.
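
As a sketch of how low the barrier is, here's the zero-shot variant with Hugging Face transformers. The default pipeline model is an MNLI-tuned BART, bigger than DistilBERT but still far cheaper per row than an LLM call; a fine-tuned DistilBERT/RoBERTa would be faster still. Labels and input text are illustrative.

  # Zero-shot classification as a lighter-weight alternative to an LLM per row.
  from transformers import pipeline

  classifier = pipeline("zero-shot-classification")

  labels = ["televisions", "audio", "kitchen appliances"]  # illustrative labels
  result = classifier("50-inch OLED smart TV with built-in streaming apps", labels)
  print(result["labels"][0], round(result["scores"][0], 3))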


"while being a lot faster", yes; but something that LLMs do that those other tools don't is being hilariously approachable.

LLMs can operate as a very, very *very* approachable natural language processing model without needing to know all the gritty details of NLP.


> Every one I've spot checked right now has been correct, and I might write another checker to scan the results just in case.

If you already have the answers to verify the LLM output against why not just use those to begin with?


Not GP, but I would imagine "another checker to scan the results" would be another NN classifier.

Thinking being that you'd compare outputs of the two, and under the assumption that the results are statistically independent of each other and of similar quality, a 1% difference between the two in that comparison would suggest a ~0.5% error rate from "ground truth".
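
Back-of-the-envelope for that last step (assuming equal, independent error rates and that simultaneous errors rarely land on the same wrong label):

  # If both classifiers err independently at rate p, they disagree at ~2*p*(1-p).
  # Invert an observed 1% disagreement to estimate p.
  import math

  disagreement = 0.01
  p = (1 - math.sqrt(1 - 2 * disagreement)) / 2  # solve 2p(1-p) = disagreement
  print(f"implied per-classifier error rate ~ {p:.4f}")  # ~0.005, i.e. ~0.5%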


Maybe their problem is using LLM to solve f:X→Y, where the validation, V:{X,Y}→{0,1}, is trivial to compute?


Looks like this is another application of the ninety-ninety rule: getting to the stage where you can make incredible demos has required 90% of the effort, and the last 10% to make it actually reliable will require the other 90% (https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule).


80/20, 90/10 etc. are all just https://en.wikipedia.org/wiki/Zipf%27s_law


Excellent find, I’d never heard of Zipf’s law.

GP was talking about something else though: the 90:90 rule relates to an extremely common planning-optimism fallacy about the work required to demo versus the work required to productise.


It's not just demos though. It's that the final 10% of any project, which largely consists of polishing, implementing feedback, ironing out bugs or edge cases, and finalization and getting to a point where it's "done" can end up taking as much effort as what you needed to complete the first 90% of the project.


Can you elaborate? I am curious. In my line of work, the 80/20 rule is often thrown about, that being "to do 80% of the work, you only need 20% of the knowledge." I thought the other reply was talking about the same diminutive axiom, but now I am not sure.


The sibling post gives a good account of the 90:90 challenge.

The last part of any semi-difficult project nearly always takes much longer than the officially difficult “main problem” to solve.

It leads to the last 10% of the deliverables costing at least 90% of the total effort for the project (not the planned amount; the total as calculated after completion, if that ever occurs)

This seems to endlessly surprise people in tech, but also many other semi-professional project domains (home renovations are a classic)


Isn't it a similar situation with self-driving cars?


I'm going to copy my answer from zellyn in a thread some time ago:

  "It’s been obvious even to casual observers like myself for years that Waymo/Google was one of the only groups taking the problem seriously and trying to actually solve it, as opposed to pretending you could add self-driving with just cameras in an over-the-air update (Tesla), or trying to move fast and break things (Uber), or pretending you could gradually improve lane-keeping all the way into autonomous driving (car manufacturers). That’s why it’s working for them. (IIUC, Cruise has pretty much also always been legit?)"
https://news.ycombinator.com/item?id=40516532


> The tech is good enough to make incredible demos but not good enough to generalize into reliable tools. The gulf between demo and useful tool is much wider than we thought.

One thing it is good at is scaring people into paying to feed it all the data they have for a promise of an unquantifiable improvement.


> The gulf between demo and useful tool is much wider than we thought.

This is _always_ the problem with these things. Voice transcription was a great tech demo in the 1990s (remember DragonDictate?), and there was much hype for a couple of years that, by the early noughties, speech would be the main way that people use computers. In the real world, 30 years on, it has finally reached the point where you might be able to use it for things provided that accuracy doesn't matter at all.


Assuming it works perfectly, speech still couldn't possibly be the main way to use a computer:

- hearing people next to you speaking to the computer would be tiring and annoying. Though remote work might be a partial solution to this

- hello voice extinction after days of using a computer :-)


Same here, but I'm hoping it takes off for other people.

I get requests all the time from colleagues to have discussions via telephone instead of chat because they are bad at typing.


Oh, yeah, I mean, it would've been awful had it actually happened, even if it worked properly. But try telling that to Microsoft's marketing department circa 1998.

(MS had a bit of a fetish for alternate interfaces at the time; before voice they spent a few years desperately trying to make Windows for Pen Computing a thing).


So a large cluster of nvidia cards cannot predict the future, generate correct http links, rotate around objects with only a picture at source with the right lighting or program a million lines of code from 3 lines of prompt ?

Color me surprised. Maybe we should ask Mira Murati to step aside from her inspiring essays about the future of poetry and help us figure out why the world spent trillions on nvidia equity and how to unwind this pending disaster...


it also can't reliably add two numbers to each other.

> help us figure out why the world spent trillions on nvidia equity and how to unwind this pending disaster..

There are many documented examples of the market being irrational.


This is just what happens, though. We were promised computer proliferation, and got locked-down squares with (barely) free internet access and little else to get excited for besides new ways to serve API requests. The future of programming isn't happening locally. Crypto, AI, shitty short-form entertainment, all of it is dripping from the spigot of an endless content pipeline. Of course people aren't changing the world on their cell-phone, all it's designed to do is sign up for email mailing lists and watch YouTube ads.

So I really don't actually know what the OP wants to do, besides brutalize idiots searching for a golden calf to worship. AI will progress regardless of how you gatekeep the public from perceiving it, and manipulative thought-leaders will continue to swindle idiots in hopes of turning a quick buck. These cycles will operate independently of one another, and those overeager idiots will move on to the next fad like Metaverse agriculture or whatever the fuck.


The jump to AI capabilities from data illiterate leadership is of such a pattern...

It reminds me of every past generation of focusing on the technology, not the underlying hard work + literacy needed to make it real.

Decades ago I saw this - I worked at a hardware company that tried to suddenly be a software company. Not at all internalizing - at every level - what software actually takes to build well. That leading, managing, executing software can't just be done by applying your institutional hardware knowledge to a different craft. It will at best be a half effort as the software craftspeople find themselves attracted to the places that truly understand and respect their craft.

There's a similar thing happening with data literacy, where the non-data-literate hire the data literate but don't actually internalize those practices or learn from them. They want to continue operating like they always have, but just "plug in AI" (or whatever new thing) without changing fundamentally how they do anything.

People want to have AI, but those companies' leaders struggle with a basic understanding of statistical significance and the basic fundamentals of experimentation, and thus essentially destroy any culture needed to build the AI-thing.


Do they struggle with the basics, or do they just not care?

I'm in a similar situation with my own 'C-suite' and it's impossible to try and make them understand, they just don't care. I can't make them care. It's a clash of cultures, I guess.


> 'C-suite' and it's impossible to try and make them understand,

I think we should do a HN backed project, crowd funded style.

1. Identify the best P-hackers in current science with solid uptake on their content (citations).

2. Pay them to conduct a study proving that C levels who eat crayons have higher something... revenue, pay, job satisfaction, all three.

3. buy stock in crayons

4. Publish and hype, profit.

5. Short crayons and out your publication as fraud.

6. Profit

Continue to work out of spite, always with a crayon on hand for whenever someone from the C-suite demands something stupid, so you can offer it to them.

A man can dream... I feel like this is the plot to a Hunter S Thompson writes Brave new World set in the universe of Silicon Valley.

I should be a prompt engineer.


TL;DR: Yes, and I think that's why some of these comments are so hostile to OP.

> it's impossible to try and make them understand, they just don't care. I can't make them care. It's a clash of cultures, I guess.

That seems to be what OP's cathartic humor is about. It's also (probably) a deliberate provocation since that sub-culture doesn't deal well with this sort of humor.

If that's the case, you can see it working in this thread. Some of the commenters with the clearest C-suite aspirations are reacting with unironic vitriol as if the post is about them personally.

I think most of those comments already got flagged, but some seemed genuine in accusing OP of being a dangerously ill menace to society, e.g. "...Is OP threatening us?"

In a sense, OP is threatening them, but not with literal violence. He's making fun of their aspirations, and he's doing so with some pretty vicious humor.


I think it's a bit reductive to flatten the conversation so much. While I don't have as much of an extreme reaction as the people you talk about, the post left a bit of a sour taste in my mouth. Not because I'm one of "those people" - I agree with the core of the post, and appreciate that the person writing it has actual experience in the field.

It's that the whole conversation around machine learning has become "tainted" - mention AI, and the average person will envision that exact type of an evil MBA this post is rallying against. And I don't want to be associated with them, even implicitly.

I shouldn't feel ashamed for taking some interest and studying machine learning. I shouldn't feel ashamed for having some degree of cautious optimism - the kind that sees a slightly better world, and not dollar signs. And yet.

The author here draws pretty clear lines in what they're talking about - but most readers won't care or even read that far. And the degree of how emotionally charged it is does lead me to think that there's a degree of further discontent, not just the C-suite rhetoric that almost everyone but the actual C-suites can get behind.


> the post left a bit of a sour taste in my mouth [...] And the degree of how emotionally charged it is does lead me to think that there's a degree of further discontent

I think part of the problem is that it's generally futile to judge the mental state or hidden motivations of some random person on the internet based solely on something they've written about a particular topic. And yet, we keep trying to do that, over and over and over, and make our own (usually incredibly flawed) judgments about authors based on that.

The post left a bit of a sour taste in my mouth too, mainly because as I've gotten older I don't really enjoy "violence humor" all that much anymore. I think a big part of that is experience: experiencing violence myself (to a fairly minor degree, even), and knowing people who have experienced violence makes joking about violence just not feel particularly funny to me.

But if I step back a bit, my (probably flawed) judgment is pretty mild: I don't think the author is a violent person or would ever actually threaten or bring violence upon colleagues. I'm not even sure the author is even anywhere near as angry about the topic as the post might lead us to believe. Violence humor is just a rhetorical device. And just like any rhetorical device, it will resonate with some readers but not with others.


> I think it's a bit reductive to flatten the conversation so much.

Is that because I added a TL;DR line, or my entire post?

> I shouldn't feel ashamed for taking some interest and studying machine learning. I shouldn't feel ashamed for having some degree of cautious optimism - the kind that sees a slightly better world, and not dollar signs. And yet.

I agree with this in general. I didn't mean to criticize having interest in it.

> And the degree of how emotionally charged it is does lead me to think that there's a degree of further discontent

Do you mean the discontent outside the C-suite? If so, yes, I agree with that too. But if we start discussing that, we'll be discussing the larger context of economic policy, what it means to be human, what art is, etc.


> Is that because I added a TL;DR line, or my entire post?

The TL;DR was a fine summary of the post, I was talking about the whole of it. Though, now that I re-read it, I see that you were cautious to not make complete generalizations - so my reply was more of a knee-jerk reaction to the implication that most people who oppose the author's style are just "temporarily embarrassed C-suites", unlike the sane people who didn't feel uncomfortable about it.

> I didn't mean to criticize having interest in it.

I don't think you personally did - I was talking about the original post there, not about yours. The sentiment in many communities now is that machine learning itself (or generative AI specifically) is an overhyped, useless well that's basically run dry - and there's no doubt that the dislike of financial grifters is what started their disdain for the whole field.

> Do you mean the discontent outside the C-suite?

Yes.


> the post is about them personally.

There is a decent chance that, yes, this rant is quite literally aimed at the people that frequent Hacker News. Where else are you going to find a more concentrated bunch of people peddling AI hype, creating AI startups, and generally over-selling their capabilities than here?


on linkedin, for starters


Senior management's skill set is fundamentally not technical competence, business competence, financial competence, or even leadership competence. It's politics and social skills (or less charitably, schmoozing). Executives haven't cared about the "how" of anything to do with their business since the last generation of managers from before the cult of the MBA aged out


The issue is that it's not just with technology, but with absolutely anything that could be loosely defined as an expert-client relationship. Management always budgets less time and money than what anyone with expertise in the subject would feel is necessary. So most things are compromised from the outset, and if they are successful it's miraculous that the uncredited experts who moved heaven and earth overcame such an obstinate manager. It's no wonder most businesses fail.


This is a common problem across all fields. A classic example is that you don't change SAP to suit your particular business, but instead you change your business to suit SAP.


I swear this particular rant style is derived from earlier writers who I've read, probably many times, but don't remember. It feels completely familiar, way more so than someone who started working in 2019 could possibly have invented ex nihilo. They're good at it though! And somebody has to keep the traditions going.


I get what you mean but I have to disagree about this one being good. Epic rants are fun when there's a cold hard point being made, and the author uses the rant format to make the point irrefutably clear and drive it cruelly home.

Here, if you strip away the guff about how smart the author is and how badly he wants to beat up people who disagree, I have no idea what he's trying to say. The rest reads like "these companies who want to use AI are bad, and instead of using AI they should try not being bad", and such. Ok?


I was trying to be nice but yeah, "i'm smarter" and "i crush your skull" are not witty. There are nice twists of phrase in there though. The kid has potential!


In short, get him an editor


To be honest I'm not 100% sure it was meant to be satire - but if it was then I agree with you :D


Maddox was the originator of this style afaik. He's been at it for over 20 years.

https://maddox.xmission.com/


Hunter S. Thompson.


Reminded me of this too


It feels a little bit like a James Mickens rant, but not as funny (not a jab at the author of this, James Mickens is just really funny).


Definitely reminded me of the classic Mickens usenix login;logout columns! Just overall a very engaging communication style for adding some entertainment value to what might otherwise be pretty dry to read in one sitting.


A little bit of Maddox, a little bit of BOFH.


This is just conversational australian english


It has a kind of seanbaby, somethingawful vibe.


Maybe Wilhelm Reich with his book 'Listen, Little Man!'?


Ed Zitron maybe ? Writes on crypto, AI, musk the same way.


Tone aside, the post contains some hard truths. I'm curious to see what the HN audience think of the point the author makes.


I love his point and I love his exuberantly colorful tone.


The audience isn't getting to see it because it's getting flagged as it shows up.


We know organic interest when we see it, so I merged all the comments into this one (the first) and re-upped it. Meta explanations here if it's the kind of thing you like:

https://news.ycombinator.com/item?id=40734677

https://news.ycombinator.com/item?id=40734700


IMO this should be flagged. This is outrage porn, and comments overwhelmingly fall into two camps: ones discussing the author and writing style, completely devoid of intellectual curiosity; and ones venting their own pent up energy based on the title/topic. Very few are interested in the actual points of the article, because it’s written in a way that maximally discourages civil discussion.

I know occasionally a “discuss the title” thread is allowed, but this one is almost strictly worse than just the title without the link, since we don’t get useless comments on the author in the latter case.

Oh, and I say this as someone who vaguely agree with the sentiment.


You're right, but this seems to be a rare boundary condition where some users think it's fun/interesting and others are triggered by it and the two groups are the same order of magnitude.

In such cases the story is going to keep showing up no matter what we do. If we don't yield after 14 submissions we're destined to yield after 140! I'd rather yield after 14. It's just one thread, and easy enough to move on from.


It's been flagged.

Seven or eight or more prior submissions have been flagged. They've been vouched for, flagged again, marked dead, and resubmitted... by many different, unaligned, non-bot people.

As dang noted above the HN community wants to thrash this out .. some want it deader than a parrot, others want to comment that much of the AI hype appears to have no clothes and this opinion range comes from 10 year old active accounts (and more recent ones).


This is better than the Monty Python dead parrot sketch. Which, now that I think about it, could be a metaphor for AI.


I skipped over the blog post and thread several times here, as the shortened title didn't really sound appealing.

It's only after I read it via twitter that I dug back in to see the thread. I agree that this might be seen as "outrage porn"... but who really doesn't love a good rant? It's good to discharge the thundercloud of disquiet through at least one good thread, such as this one.

As for me, and my perspective, I took the Stanford AI course back when it showed up here, and others... just to learn the mechanics of it, and it's fascinating. The hype cycle on this though, is off the charts.

I came to see if there were any others who wanted to get actual legislation passed to help set up "Thought Leader Jail". ;-)


This is what great moderation looks like. Thank you, dang.


and of course you had to lie about title, because that's how pitiful you are.


Pitiful as I am, I need to ask you to stop breaking HN's guidelines. Your account has been doing this repeatedly, and in fact is way over the line at which we ban people.

I'm not banning you at the moment because it wouldn't feel sporting to do so in response to a personal remark. But if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules going forward, that would be good.


This post has an unnecessarily aggressive style but has some very good points about the technology hype cycle we're in. Companies are desperate to use "AI" for the sake of using it, and that's likely not a good thing.

I remember ~6 years ago wondering if I was going to be able to remain relevant as a software engineer if I didn't learn about neural networks and get good with TensorFlow. Everyone seemed to be trying to learn this skill at the same time and every app was jamming in some ML-powered feature. I'm glad I skipped that hype train, turns out only a minority of programmers really need to do that stuff and the rest of us can keep on doing what we were doing before. In the same way, I think LLMs are massively powerful but also not something we all need to jump into so breathlessly.


I empathize with it, but ultimately it's fruitless. This happens with every big tech hype. They very much want people to keep talking about it. It's part of the marketing, and tech puts a lotta money into marketing.

But that's all it is, hype. It'll die down like web3, Big Data, cloud, mobile, etc. It'll probably help out some tooling but it's not taking our jobs for decades (it will inevitably cost some jobs from executives who don't know better and ignore their talent, though. The truly sad part).


> It'll die down like web3, Big Data, cloud, mobile, etc

At least half of those the promise was realised though - mobile is substantially bigger than the market for computers and cloud turned out to be pretty amazing. AWS is not necessarily cost effective but it is everywhere and turned out to be a massive deal.

Big Data and AI are largely overlapping, so that is still to play. Only web3 hasn't had a big win - assuming web3 means a serious online use case for crypto.

"Die down" in this context means that the hype will come, go and then turn out to be mostly correct 10 years later. That was largely what happened in the first internet boom - everyone could see where it was going, the first wave of enthusiasm was just early. I don't think any technology exists right now that will take my job, but I doubt that job will exist in 20 years because it looks like AI will be doing it. There are a lot of hardware generations still to land.


"The proportion of global companies planning to increase spending on AI over the next 12 months has slipped to 63% from 93% a year earlier, according to a recent survey of 2,500 business leaders by software company Lucidworks Inc. Meanwhile, just 5% of companies in the US are using AI, according to the Census Bureau."

https://archive.ph/h6QW7#selection-2069.305-2077.1

https://www.bloomberg.com/opinion/articles/2024-06-20/google...


To a first approximation, I expect companies to spend nothing on AI and get put out of business if they are in a sector where AI does well. Over the medium-long term the disruption looks so intense that it'll be cheaper to rebuild processes from the ground up than graft AI onto existing businesses.


Which sectors are these?


> At least half of those the promise was realised though

I dunno, I think there might be different sets of "promises" here.

For example, "cloud infrastructure" is now a real thing which is useful to some people, so one could claim that "the promise of cloud infrastructure" was fulfilled.

However, those aren't really the same promises as when consultants preached that a company needed to be Ready For The Cloud, or when marketing was slapping "Cloud" onto existing products, or the unnecessary/failed attempts to rewrite core business logic into AWS lambda functions, etc.


AI and 'Big Data' (as trends) aren't really overlapping in my view. Of course training these LLM models requires a huge amount of data but that's very different from the prospect of spinning up a Spark cluster and writing really badly performing Python code to process something that could have easily been done in a reasonable time anyway on a decent workstation with 128gb of RAM and a large hard drive/SSD, which was a large part of what the hype train was a few years ago.


Go back further:

At a point in time the database was a bleeding edge technology.

Ingres (Postgres)... (the offspring of Stonebraker), Oracle, ... Db2? MSSQL? (Heavily used but not common)... So many failed DBs along the way; people keep trying to make "new ones" and they seem to fade off.

When was the last time you heard someone starting a new project with Mongo, or Hadoop? Postgres and Maria are the go to for a reason.


There's a team at my company that chose Mongo for a new data transform project about a year ago. They didn't create a schema for their output (haven't to this day) and I'm convinced they chose it purely because they could just not handle any edge cases and hope nobody would notice until it was already deployed, which is what happened. For example maybe one in a thousand of the records are just error messages - like they were calling an API and got rate limited or a 404 or whatever and just took that response and shoved it into the collection with everything else.


I... Well dev/null is an option...

https://www.youtube.com/watch?v=b2F-DItXtZs

It's all just history repeating.



Still, I would have put that mess in jsonb in Postgres :D


Postgres is awesome and part of its charm is the extensibility it offers enabling the adoption of innovative features introduced by competing DB's.

Postgres adopted a lot of mongos features when it released the JSON data type and support for path expressions


Mongo is still very big, including for greenfield.


It's tiny compared to 10 years ago when it was all the rage and DBs were dead...



In the last few years I have come to think of AI as transformative in the same way as relational databases. Yes, right now there's a lot of fad noise around AI. That will fade. And not everyone in IT will be swimming in AI. Just like not everyone today is neck deep in databases. But databases are still pretty fundamental to a lot of occupations.

Front-end web devs might not write SQL all day, but they probably won't get very far without some comprehension. I see AI/ML becoming something similarly common. Maybe you need to know some outline of what gradient descent is. Maybe you just need some understanding of prompt engineering. But a reasonable grasp of the principles is still going to be useful to a lot of people after all the hype moves to other topics.


I agree that the world isn't changing tomorrow like so much of the hype makes it out to be. I think I disagree that engineers can skip this hype train. I think it's like the internet - it will be utterly fundamental to the future of software, but it will take a decade plus for it to be truly integrated everywhere. But I think many companies will be utterly replaced if they don't adapt to the LLM world. Engineers likewise.

Worth noting that I don't think you need to train the models or even touch the PyTorch level, but you do need to understand how LLMs work and learn how (if?) they can be applied to what you work on. There are big swaths of technology that are becoming obsolete with generative AI (most obviously/immediately in the visual creation and editing space) and IMO AI is going to continue to eat more and more domains over time.


I’ve been doing just fine ignoring AI altogether and focusing on my thing. I only have one life. Fridman had a guy on his podcast a while ago, I don’t remember his name, but he studies human languages, and the way he put it was the best summary of the actual capabilities I’ve heard so far. Very refreshing.


Who was that?


Could it be Edward Gibson [1]?

> I work on all aspects of human language: the words, the structures, across lots of languages. I mostly works [sic] with behavioral data: what people say, and how they perform in simple experiments.

(I find it ironic to see a grammatical error in his bio. Probably because of a mass find/replace from "He" to "I" but still...)

[1] http://tedlab.mit.edu/ted.html


I think #426


> This post has an unnecessarily aggressive style

Im not sure its "unnecessary".

He is, very clearly, venting into an open mic. He starts with his bona fides (a Masters, he's built the tools, not just been an API user). He adds more throughout the article (talking about peers).

His rants are backed by "anecdotes"... I can smell the "consulting" business oozing off them. He can't really lay it out, just speak in generalities... And where he can, his concrete examples and data are on point.

I don't know when anger became socially unacceptable in any form. But he is just that. He might have a right to be. You might have the right to be as well, in light of the NONSENSE our industry is experiencing.

Maybe it's time to let the AI hate flow through you...


As someone who spent an inordinate amount of time trying very hard to be less angry despite having a lot of good reasons to be, a chunk of which overlap with this piece, I get a lot of dismissal from people who seem to think any expression of any negative emotion, especially anger, deeply discredits a person on its own. It's so pervasive that I find even addressing it to be refreshing, so thank you


Be angry more. I work in China but I'm French, so people assume (and I nudge them to think) that it's a culture thing for me to express anger publicly at injustice or idiocy.

But it's liberating to be angry at bullshit (and God knows China is the bullshit Mecca), and AI is the top bullshit these days. We're not anti innovation because we say chatgpt is not gonna maintain our trading systems or whatever you work on. It's a funny silly statistical text generator that uses hundreds of thousands of video cards to output dead http links.

We're far from anything intelligent but it's at least indeed very artificial.


As someone who was in academia at the right time to really contextualize the seismic shift in the capabilities of automated natural language processing models that came of both attention and the feasible compute scale increase that allowed for them to use long enough context windows to outpace recurrent models in this regard, I really didn't think I'd end up having to roll my eyes at people falling over themselves to exaggerate the implications, but in retrospect it's clear that this was more me being unconscionably naive then than it being that unpredictable


Thanks I will get down voted to oblivion for it.

Cause getting angry at a problem and spending 2 days coding to completely fix a long-standing issue isn't something that happens...

People need to be less precious. You cant be happy all the time. In fact you probably should not be (life without contrast is boring). A little anger about the indignities and bullshit in the world is a good thing. As long as you're still rational and receptive, it can be an effective tool for communicating a point.


Or just communicating the emotion! I think aligning on an emotional layer of perception is important for shaking people out of automated behaviors when it's necessary to, and I dislike this cultural shift toward punishing doing any of that from the standpoint of its mechanism design implications almost as much as I hate it on vibes


Not only that, but the thing is that it's all fake in our industry and the companies that we work for. People seem to be very sensitive today to showing any kind of actual emotion or feelings, be it anger or frustration. Everyone puts on the fake American service-industry smile and says words like "I hear you", "we're a team", "we must be constructive". Then in the background they all do the most insane political backstabbing, shit-talk other teams and projects, and walk over the careers and futures of others just to advance themselves. But as long as you put a smile on your face in the meetings and in public, none of that matters.


I mean you make some very good points but you sound like you could be kind of mildly upset if I squint at it right so I think you should really be more mindful and adopt a positive attitude before I will even consider listening to anything you have to say


If you want a theory; a man who isn't in control of his emotions can present anything up to an immediate mortal danger to the people around him (particularly if they are female).

Being able to control negative emotions isn't a nice-to-have trait or something that can be handled later. There is an urgent social pressure that men only get angry about things that justify it - a class of issues which includes arguably nothing in tech. Maybe a few topics, but not many.

Anger isn't a bad thing in itself (and can be an effective motivator in the short term). But people get very, very uncomfortable around angry people for this obvious reason.


> If you want a theory; a man who isn't in control of his emotions can present anything up to an immediate mortal danger to the people around him (particularly if they are female).

What emotions do you really control?

We expect men to suppress this emotion. And there's 400k years of survival and reproductive success tied up with that emotion. We didn't get half a percent of the population with Genghis Khan's Y chromosome with a smile, balloons and a cake.

It's not like violence doesn't exist. But we seem to think that we can remove it, just like the murder in the meat case. Are we supposed to put anger on a foam tray, wrap it in plastic and store it away like a steak because the reality of it upsets people?

It's to the point where the words murder, suicide, rape and porn are "forbidden words"... we're saying unalive, grape and corn, so as not to offend advertisers and people's precious sensibilities. Failing to see this behavior is a major plot point in 1984.

I think we all need to get back to the reality of the world being "gritty" and having to live in it.


1) If you are comparing people's behaviour to Genghis Khan, don't expect positive social reinforcement. The man was a calamity clothed in flesh, we could do without anything like him happening ever again.

2) Violence != anger [0]. I don't know much about him, but Genghis Khan could have been an extremely calm person. It is hard for someone prone to clouded thinking to build an empire that large and win that many campaigns, which is a point in favour of him being fairly calculating.

> What emotions do you really control?

3) In terms of what gets expressed? Nearly all of them. Especially in a written setting, there is more than enough time to take a deep breath and settle.

> We expect men to suppress this emotion.

4) As an aside, I advise against suppressing negative emotions if that means trying to hold them back or something. That tends to lead to explosions sooner or later. It is better to take a soft touch, let the emotion play out but disconnect it from your actions unless it leads to doing something productive. Reflect on it and think about it; that sort of thing.

[0] Although maybe I should note that angry violence is a lot more dangerous than thoughtful violence; angry violence tends to be harder to predict and leads to worse outcomes.


> If you want a theory; a man who isn't in control of his emotions can present anything up to an immediate mortal danger to the people around him

You can't posit this and then go on to try and claim violence != anger.

> The man was a calamity clothed in flesh

Nice, well said!!! He was also likely brilliant. It's rare that stupid people make it to the top!

I hope that Genghis Khan NEVER happens again... But I think society is just a thin veil between us and those monsters. The whole idea of pushing down anger is just moving us one more step from that reality!


Sure I can posit both. They're both true.

There is a Venn diagram here. One circle is violence and one is anger.


Okay but we're not talking anger that's expressed by violent behavior or even clear significant loss of control, I'm talking people on the internet can pick up the mildest hint of anger from your tone or even subject matter. As a woman and a pretty scrawny one at that, as well as being, well, obviously very opinionated and belligerent, I have experienced every flavor of the threatening behavior you're invoking and I can assure you this has nothing to do with why people reflexively dismiss people who they think are being "emotional". More and more, the accusation of being angry specifically seems to be all people think they need to say to smugly claim to be speaking from a "rational" high ground, often despite having contributed nothing of substance to the topic at hand. Like pointing out that this person's blog post aimed at no one particular person did not really have to contend with the perception that this person was going to actually become violent at anyone, although actually I could see getting that impression from this post more than most, since it frequently explained the anger as cartoonish threats of hypothetical violence. I'm not exaggerating. When I see this in person and can make better assumptions about the genders of the people involved, this seems disproportionately likely to be leveraged against women, as are most arguments to "obvious" or "apparent" disqualifying irrationality, and this is not a shock because we are within living memory of much of work culture treating it as conventional wisdom that this should be assumed of all women by default. People really be trying to win epistemic pissing contests by posting something that looks like running "u mad" through google translate and back once, unironically, just as surely as you're trying to do that obnoxious thing of trying to invoke the gravity of situations in which people genuinely fear for their safety, hoping that gravity will somehow make it harder to question what you said for fear of seeming chauvanistically oblivious or whatever that's supposed to do

I propose the alternate theory that, as in-person interaction becomes a smaller portion of most people's social experience, many have gotten worse at handling even mild interpersonal conflict without the kind of impersonal mediating forces that are omnipresent online, and that this kneejerk aversion reaction rationalizes itself with the aid of this whole weird gilded-age-revivalist-ass cartoon notion of "rationality" that's become popular among a certain flavor of influential person of late - and which, especially in a certain kind of conversation with a certain kind of smug, obnoxious person, seems kind of like classic Orwellian doublespeak.

Also, this position that "arguably almost nothing" in tech warrants anger seems super tonedeaf in a context where most of the world has become a panopticon in the name of targeting ads, where you need a mobile phone owned by a duopoly to authenticate yourself to your bank, and where large swaths of previously functional infrastructure are being privatized and strip-mined to function as poorly as the companies that own them can get away with - shedding the ancillary benefit of providing employees with subsistence and purpose wherever possible, while still managing to nickel-and-dime you for the privilege with all manner of junk fees and offering poorly designed phone trees in place of any meaningful documentation or customer service.


Just going through your last paragraph; the logical implication of getting angry about any of that is either living in a state of ignorance or getting angry all the time. Either of those options is far inferior to just taking note of what is happening and calmly suggesting some improvements or working to make things better when the opportunity arises.

And these issues are just minor compared to all the terrible stuff that happens routinely. If we're ranking issues from most to least important things like "you need a mobile phone owned by a duopoly to authenticate yourself to your bank" are just so far down it is laughable (the wry type, like "why do I even care"). The fact that you need a bank at all is a far more crippling issue. Let alone all the war, death, cruelty and disinterest in suffering that is just another day in a big world.


Two things can be true at once. We live in a big world and in that world, there are many things that warrant our anger, some of which are more important or urgent than others. Yes, it's probably more important that there are two wars going on or that the rich country that I live in has become a police state that jails millions of people on dubious and often bigoted pretenses or that the capital that owns the industrial capacity that won the last major era of technological progress is hell-bent on continuing business as usual in a way that we're now pretty sure will drastically harm the ecological infrastructure we depend on to survive, and has been engaged in decades of attacking the scientific and political capacity to dismantle them. Also, many of these problems are directly aided and abetted by the owners of the current wave of technological advances, who have also created and continue to iteratively worsen a pervasive network of surveillance and control, as well as an experiential environment that reliably produces apathy and learned helplessness, while destroying significant hard-won freedoms and infrastructure in the process (including uber rolling back labor rights gains, amazon crippling public delivery infrastructure it views as competition, etc)

Epictetus wrote of concerning oneself more with that which one can control than that which one can't, and people who aren't familiar with the Enchiridion have nonetheless internalized this wisdom. It pops up in lots of places, like in various schools of therapy, or in the serenity prayer. My career is in computers, and this website is a nexus wherein people who do computers for a living gather to discuss articles. Therefore, the shared context we have is disproportionately about issues surrounding computers. We are, all of us, likely better positioned to enact or at least advocate for change in how computer things are done in the world; in each of the last seven decades this has become a larger share of the problems affecting the world; and anger is difficult to mask when talking about problems, precisely because one of the major ways we detect anger in text conversations devoid of body language or vocal tone is someone expressing the belief that something is unacceptable and needs to be changed.


It turns out Ludicity has written about this precise phenomenon too - the relentless positivity of toxic environments: https://news.ycombinator.com/item?id=39702323


I know someone that got bullied for taking leave to care for their special needs kids, and the positivity people came pouring out of the woodwork to accuse her of being confrontational when she was understandably upset. Not that the commenters above have indicated that they'd do anything like that, but yeah, it's WILD out there.


There is at least one German comedian who made an entire career solely out of being angry...



Yup, exactly this guy! Known and beloved in his stage persona Herr Hassknecht...


He did what some of us want to do in meetings with clients. I hear and read all those BS arguments he's used as headings every week. It's insane.


I take it as humorous hyperbole, in the style of McSweeney's, like It’s Decorative Gourd Season, Motherfuckers https://www.mcsweeneys.net/articles/its-decorative-gourd-sea...


Author here. I spent Thursday evening with my vocal teacher, where we mostly giggled because I couldn't hit the F required to start Billy Joel's "Movin' Out". So yes, it was most decidedly humorous hyperbole, and it has been taken this way by like, everyone that's not on Hackernews. Genuinely astonished that it isn't as obvious to other people as it was to you, but something I learned quickly is that you simply can't control how your words are perceived, and it isn't possible to waterproof sufficient long-form content.

To be honest, this is kind of nice, because my girlfriend recently told me that I'm "extremely unthreatening" because of all the improv theater, fencing, music, reading, and writing. Now I know at least a few people on the internet are threatened by me. I'm a loose cannon, on the verge of totally unrestrained violence. I'm two steps removed from a dinosaur, and if someone looks at me funny, who KNOWS what'll happen.


Improv theater and fencing, you say?

Have you, perchance, ever been involved in an Insult Sword Fight?


I agree. A bunch of his other posts have a similar style. I like it. It is witty, mixed in with serious technical subjects!


[flagged]


“Is your issue with my tone or what I am saying? Because I will only apologize for the former”


Honestly, I couldn't get past the violent language. Why do we give people who speak like this any respect? It's completely inappropriate.


I'm curious to know if you avoid movies/TV-shows in a similar manner?


I didn't find this article refreshing. If anything, it's just the same dismissive attitude that's dominating this forum, where AI is perceived as the new blockchain. An actually refreshing perspective would be one that's optimistic.


Is it possible that this forum may be dismissive of the AI bubble because the people on HN tend to have better understanding of the technology, its limitations, and the deceptive narrative around it?


We like to think we're above everyone else but in reality our technical expertise is mostly irrelevant when evaluating the utility. I've been programming for 20 years and I can't think of a single thing I know that puts me in a better position to predict the future of AI.


Compared to other populations, the HN group will know about AI tools being released, use them, and have some vague understanding of their fundamental basis.

The tools are impressive but have their limitations. I think the demos are more for investors looking for the next unicorn. Like self-driving cars, it's a hard problem.


I was not speaking about the future however. And especially not about the distant future. AI is such a wide term that it’s hard to even discuss the topic.

It is already used for surveillance and racial profiling (China etc.) and that’s already a disruption in oppression and control.

Will likely disrupt driving.

Image recognition and generation are clearly influential.

Probabilistic text generation tech is making huge promises and getting a lot of investment, but application is lacking - it seems like a technology in search of an application.


> I've been programming for 20 years and I can't think of a single thing I know that puts me in a better position to predict the future of AI.

I don't think that's the argument, though. OP (and many commenters here) are talking about the state of AI tools right now, and how executive/marketing/sales people already either lie about or genuinely (incorrectly) believe fantastical things about current capabilities.


That still requires you to have tight blinders on, the same kind that were needed to not be dismissive of the blockchain.

Or maybe we're confusing two things. AI and blockchain are complete opposites in real terms; the former represents an unexpected and qualitative jump in technological capabilities of humanity, and already delivers spectacular results, while the latter is just a purposefully inefficient database, neat on mathematical grounds, but useless in the real world. However, in business terms, both are in fact the same - a way for grifters to get rich on peddling bullshit.

To this I can only say: please remember that the nature of a thing is not affected by how much the fraudsters can lie to you about it. Hustlers gonna hustle. So sure, it's a business-level hype - but that doesn't affect the merit of the underlying tech.


Did you hear about HN's opinion of Dropbox?


The one infamous comment you mean? Because otherwise people in that thread thought it was pretty cool! I see this often mentioned here but it really just seemed like one comment in a whole thread.


It was also the most upvoted comment.


That's far from what the article actually tries to say. Did you read the full thing?

HN is disproportionately dismissive, where one comment in late 2023 was this:

    Ruby is so much better than Python, and Python is only pumped up by AI hype, and the AI hype will die down soon. Ruby will regain the throne again.
Imagine that!

This article is not that. This article just tells you to get your basics correct as a company, and not to think about using AI before you are absolutely sure where and how you will use it. And non-technical people are the main drivers of AI hype (which is, incidentally, true).


Can you give a link to that quote? A google search and hn search doesn't surface it.


Just a quick search with string "ruby python hype" on HN yielded a response to this on the second page.

> "Sadly, I don't expect this situation to reverse in the future. Maybe like blockchain, once the AI/ML fantasy hype dies down the other comparatively-unpleasant language will lose general appeal and Ruby will gain more attention." [0]

[0]: https://news.ycombinator.com/item?id=34243498


I see. I thought it was a direct quote. I'm not sure the linked comment matches the sentiment in your original statement.


I am.


It reads a bit like someone at the turn of the 20th century describing the shortcomings of newfangled automobiles.


Author makes good points but suffers from “i am genius and you are an idiot” syndrome which makes it seem mostly the ranting of an asshole vs a coherent article about the state of AI.


I've recently read through many of the author's articles and also through his LinkedIn content, and came to the opposite conclusion. The intentional "In-Your-Face-Trolling" style is intended as a cover for "Impostor Syndrome in Overdrive", which lots of us suffer. Yet he was able to fool so many! Just check the "Compliments" section on his website :)

I made the following comment about him in a conversation with a coworker: " The guy who authored the article is mad. Certifiably mad. Just spewing around pure unadulterated truth. (LinkedIn link goes here) What does it tell about me (or anybody) who so far hadn't found anything in his writings to disagree about? "

Krazam (search YouTube) is the other example of largely the same. But because it is visual it is a bit more obvious.


> What does it tell about me (or anybody) who so far hadn't found anything in his writings to disagree about?

It tells us they're too excited by the delivery and aren't thinking about the merit of what's being said.

Same mistake people made with all those "tells it as it is" vloggers and pundits.


LOL nah, mate’s just Australian


Ah-ha, "Boys just havin' fun"™. Being Australian is not incompatible with impostors' or being a good troll!


Tall poppy syndrome and a HUGE piling spoonful of cultural cringe.


Just bloody shootin for bronze… lookin for a Steven Bradbury.


Australian dream.


Author sounds like a young person who feels like he's a god among men just for the fact that he's implemented the algorithms and understands the math and engineering behind the libraries most DS's just pip install.

Which is weird coming from a generation of devs, where actually doing this work yourself was the norm.

As for DS, from what little I've experienced from the field, he sounds right. Most people come in without a mathematically rigorous education, they talk fancy, but what they end up doing is pulling in dependencies from a pre-written library and using those without understanding the theory behind them.

They also ignore the fact that 99% of the value in data science is created by taking good data and understanding the domain, in which case fancy algorithms are unnecessary. And acquiring those things needs good data engineering, not data science.

But more often than not, the credit and prestige goes to folks who pull in fancy ML algorithms and run extensive experiments and build massive ML pipelines, feeding in truckloads of tangentially relevant data.


> Author sounds like a young person

I almost laughed out loud when he said he started working as a data scientist in 2019. Five years is not a very long time. And he claims he already had identified the entire field as full of fraud in the first two years of that!

I agree with a lot of the article's points, but the author took a serious credibility hit with me after asserting that two years of from-scratch experience is enough time to evaluate an entire subfield of computer science.


You start quite condescending but then basically acknowledge what the author is saying. Most DS's, even "from your generation" probably don't write their own tools. I bet you are even guilty of this too. No need to do some implicit grandstanding.


Americans not getting Aussies is the best part of this thread


Please don't take HN threads into nationalistic flamewar. I know you didn't intend to but it's what this kind of internet comment leads to (in the general case), and we don't want that here.

In fact, since your comment is a putdown both of a nationality and of the community, it might be good to quote this from https://news.ycombinator.com/newsguidelines.html: "Please don't sneer, including at the rest of the community."


You realize that this comment was a very minor tease? This community is so Us-centric that the pro-US-on-anything bias drives basically every voting trend, and the issue is just teasing how Americans don’t understand other nationalities’ humor and cultural nuances?


Unfortunately people tend to underestimate their own provocations by orders of magnitude. Even when we get full out flamewars the instigator inevitably says things like 'but I was only mildly teasing' or what have you. In any case, what matters is effects, not intent, and that's what we have to moderate by (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...). The effects of snark, flamebait, and so on are predictable, so commenters are responsible for avoiding them.

By the way, this has been a problem in the past: https://news.ycombinator.com/item?id=20657986 (Aug 2019), and we've had to ask you several times not to break HN's guidelines in other ways too. Could you please review https://news.ycombinator.com/newsguidelines.html and recalibrate how you're posting to this site?

https://news.ycombinator.com/item?id=38286445 (Nov 2023)

https://news.ycombinator.com/item?id=35619557 (April 2023)

https://news.ycombinator.com/item?id=33885502 (Dec 2022)

https://news.ycombinator.com/item?id=15127589 (Aug 2017)


I'm Australian and concur that OP sounds full of himself.

Telling self-righteous... friends, to wind their neck in is far more Australian than OP's behaviour.

[0] https://en.wikipedia.org/wiki/Tall_poppy_syndrome

Edit:

To clarify, I have no problem with his style of writing, which is great, but "I am clearly better than most of my competition"? Lord, get a grip.


Not sure about tall-poppy syndrome, but I think it's somewhat justified (this could be argued though) that success most often doesn't look like what we think it should look like.

In most people's minds success should come from a combination of talent and hard work. We think people who work hard and come up with good ideas should become successful. But usually working 'within the system' limits your ability to be successful. If you save the day at your current job, you might get a 20% raise if you're lucky. If you are mediocre but change jobs often, you will probably beat that.

In software, getting a high paying job usually hinges on your ability to get someone willing to pay you a lot of money.

I'm sure there are people who are getting paid 10x more or less for doing work that is fundamentally the same, just with different presentation.

For example I know a guy who's a mediocre PHP dev, but managed to snag a couple of high paying clients, and got into OE over covid, and brings in a ton of money, despite the fact that somehow he still doesn't seem to be working that hard.

Does he deserve that money? Is he someone we should look up to? I don't wanna say no, but I also don't wanna say yes.


> We think people who work hard and come up with good ideas should become successful.

I think that's some sort of platonic ideal that hasn't really been all that true for a long time, though. What brings success is coming up with valuable[0] ideas, and then executing well on them. There are many ideas that are good that are unfortunately not so valuable. And there are many people who work hard but just aren't all that talented or effective or productive, and their work ends up not amounting to much.

> Does [someone who doesn't work that hard but has high income] deserve that money? Is he someone we should look up to? I don't wanna say no, but I also don't wanna say yes.

Maybe we should step back and consider that this is the wrong question. "Looking up to" someone is an emotional thing; IMO we should only look up to people for intangible "virtuous" reasons, not because e.g. they've managed to make a bunch of money. Look up to people because they are honest, have integrity, are kind, and help people.

"This guy makes a lot of money despite not working very hard" should be viewed dispassionately. Evaluate the work itself, and the representation and selling of that work. If it's done with integrity, the product of the work is as promised, and no one is harmed, then it may be worth emulating.

I personally think that the social conditioning we've all gotten that suggests that hard work is good and virtuous is garbage, and is an attitude and message that has acted as a tool of oppressors. I hesitate to repeat the "work smarter, not harder" buzz-phrase, but I think there's a lot of truth there.

[0] I don't even necessarily mean "valuable" in the monetary sense, though that too-often is a big driver.


I think he was mentioning winning a specific competition?


I have Australian friends and they are not like this.

Sorry, but being Australian doesn't get you a free pass to banter everywhere and still expect to be taken seriously. Let alone spill self-diagnosed superiority in the form of text.


He grew up in Penang, moved to Australia in 2013 according to his blog.

I'm not a fan of the "I'll break your neck" theme. He doesn't want people talking about AI but his own business website says he'll talk to you about AI in exchange for money.

Does he want to be Louis CK Live at the Beacon Theater AND a data scientist consultant? I don't think it's possible to be both.


Silicon Valley is not "Americans." Otherwise, I agree.


Only a small portion of HN users are in Silicon Valley. Last time I looked at the numbers (a few years ago) it was around 10%. About 50% were in the US overall, a third or so in Europe, and so on.


I think he's polarizing because he's right about the industry and everybody on both sides knows it. Many people in the industry are just selling snake oil. There's also a ton of idiots such as the people who misconfigure cloud software to waste half a million dollars of company money. A truth teller comes off as an a-hole to people who don't want particular truths to be told.


> A truth teller comes off as an a-hole to people who don't want particular truths to be told

If someone is repeatedly threatening physical violence, as the author of this post is, that also tends to come off as an a-hole to some people even if the threats are not genuine.

I agree the author of this post is saying accurate things, and that will piss people off, too.

So we have two completely separate ways in which someone might think the author is an a-hole. They aren't all trying to hide some truth, like you imply.


It is not serious. You shouldn't see it as a literal threat. It's a writing tool. Just like adding "fucking" to something doesn't literally mean that that thing is copulating.


I'm aware that it's a writing tool, not a literal threat. That's why I pointed out the threats are not genuine. Thank you for your explanation.

The choice to use the writing tool in question makes the author come off like an a-hole.


Yeah but it makes you sound like an asshole, same as constantly talking about punching people.


> If someone is repeatedly threatening physical violence, as the author of this post is

Really? I think most people will agree that it's a writing style (not that I enjoy it) rather than the author really threatening actual violence.


Yes, including myself, which is why I ended that sentence with

> even if the threats are not genuine

My point is that using that writing style makes the author come off as an a-hole.


I have read only a few sentences, so it can't be the hard truths that give off this vibe. Saying you are one of the greats, based on things someone inexperienced would list, and underscored by a very short time in the industry, comes off as arrogant.


>Many people in the industry are just selling snake oil

We have always been selling snake oil - it's just the inexperienced, and those who have never shipped anything of any value to the world, who feel that the snake oil is where the buck stops. Those of us who have shipped tons of snake oil know that eventually that oil congeals and becomes an essential substance in the grinding wheels of industry.

Which this wanker (Disclaimer: Australian, can use it if I wanna, since I know a lot about snakes, too..) seems to not have fully understood yet, as there is a great deal of evidence to support the fact that their experience is mostly academic, and hasn't actually resulted in anything being shipped.

Academics seem too often to forget that software is a service industry and in such a context, snakes and oil are very definitely par for the course.

Nobody cares if you implemented the important bits all by yourself - what are your USERS doing with it? Oh, you don't have actual users? Then please STFU and let the snake wranglers get on with it ..


“ Nobody cares if you implemented the important bits all by yourself - what are your USERS doing with it? Oh, you don't have actual users? Then please STFU and let the snake wranglers get on with it ..”

I got the opposite impression of the article: that it was mostly about the fact that companies thought they needed to be too theoretical and academic, when in fact taking advantage of AI should be looked at very practically. Granted, it's a long article and he makes lots of points, but I felt like most of section 4 was that you don't need to implement it yourself and gluing libraries together was probably the right tack, and that most companies were ignoring this in the gold rush of "AI good".


It's a polarizing style of communication but the message is on point.


Author has a writing style that he likes to use, which gives him the ability to speak about certain topics with less struggle.

It's getting into the art/performance category of code blogs.


And that makes it enjoyable to read if you are inclined to that style


I don't disagree entirely, but there is a pretty strong hint of the self-aware dry humour typical of Australians. I think he believes what he's saying, but they're probably not taking themselves that seriously or literally.


I am talking about people, not politics. Unless you think individual Australians are well known for their personal authoritarianism?

I don't find myself conducting much authoritarianism but admittedly I do keep a pretty tight grip on the movements of my budgies. It's for their own good you see.


As a prominent sporter of budgie smugglers myself, yes, I do in fact think that Australian identity involves a great deal of authoritarianism. Its how the country was built, after all.


I will genuinely think this through, as I had never considered it could be part of a collective identity. I think in general Australians are quite disconnected from politics. Complacency is likely how we ended up with our authoritarian leanings in politics, rather than Australians having a desire for a more authoritarian government. But I do live in a bit of a bubble, and I do know that when issues of security come up, Australians seem pretty happy to give the government more control.


Authoritarianism is not just a political ideology, it is also a psychological personality which can be triggered by a multitude of factors. Australians have had a century for this memetic mental virus to be inculcated into their society - the easiest way to see this is to leave, learn another language, and then come back...


>humour

The word you should have used is authoritarianism, which this writer has, alas, in spades.

Your users are more important than your sense of self worth, in this industry.

Nobody ships ego. We ship working software: to users who find it valuable.


> We ship working software: to users who find it valuable.

I agree that's what's most likely to bring you financial and reputational success, but I also think there are a lot of things people can and do sell that are various incarnations of snake oil, at best.

This perhaps gets a little philosophical, but: is it ethical to sell someone something they don't need, and doesn't actually help them, even if they believe they need it, and over time even believe they've been helped by it?

I think a lot of the applications of "AI" today can fall under that umbrella, given the "right" customer.


> is it ethical to sell someone something they don't need

It would only be unethical to remove their agency over the decision, in my opinion.


Apologies, I posted my reply on the parent comment.


Especially since most "data scientists" turn spreadsheets into reports for the C suite, I'd argue that his entire role fits into the same arguments he makes against AI. Like he says, unless you're doing things on the cutting edge, I don't think most businesses have seen positive outcomes from employing data "scientists" or "engineers". They just take people's excel spreadsheets and make them prettier, taking approximately one quarter to implement each into Power BI.

Also being 5 years into his career thinking he actually groks how it all works is adorable. I get the impression he has the idea that work is supposed to be a rewarding passion project rather than getting shit done for your boss. Give me all the cushy bullshit AI projects please. I can play with the toys for 6 months and come back with whatever and it will be perfectly acceptable. Either "this is great, super helpful for the company" or "welp, the tech isn't there yet, but at least we tried". That's called riding the gravy train.


> “i am genius and you are an idiot” syndrome

which is a weird thing, since I think in fields where most people can be assumed to be smart, there's usually not that much differentiation in cognitive ability.

Just for reference, if we take IQ as a proxy measure for intelligence, then in an average group of people (say, a high school class or a council meeting) the worst 10% will have an IQ below 80 while the best 10% will have an IQ above 120.

That's a difference of 40 points, and it's a common enough scenario for most people to have a feel for what it's like.

In contrast, let's say you have a room of professionals who have been screened to be in the top 10% of the population (not a huge stretch). In this scenario, you'd need something like 100k people in that hypothetical room to get a similarly large IQ gap.
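As a rough back-of-envelope check (a minimal sketch of the conditional tail math, assuming IQ is normally distributed with mean 100 and SD 15, and reading "similarly large gap" as one person sitting a full 40 points above the screening cutoff - both readings are mine, not part of the original claim):

    # Sketch: how rare a 40-point gap is once the room is pre-screened.
    from scipy.stats import norm

    MEAN, SD = 100, 15

    p10 = norm.ppf(0.10, MEAN, SD)   # ~80.8, bottom decile of the general population
    p90 = norm.ppf(0.90, MEAN, SD)   # ~119.2, top decile cutoff used for screening

    # P(IQ > cutoff + 40 | IQ > cutoff): chance a screened room member
    # sits a full 40 points above the screening cutoff.
    p_gap = norm.sf(p90 + 40, MEAN, SD) / norm.sf(p90, MEAN, SD)

    print(f"general-population decile gap: {p90 - p10:.0f} points")
    print(f"P(40-point gap above the cutoff): {p_gap:.1e}")
    print(f"expected room size to see one such person: {1 / p_gap:,.0f}")

The exact head-count depends on how you define a "similarly large gap", but either way it takes orders of magnitude more people than a random classroom before that kind of spread shows up.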

While I think the author might be a sharp guy, and probably studied his field deeper than most, to say there's an insurmountable chasm between him and the rest of his readers might be a bit of a stretch.

But hey, if you want to sell your unique genius as your upscale consulting brand, I guess this is how you market yourself.


I don't think that is the goal of this blog. I think he is bragging a bit because he is afraid people will ask: "Who is this nobody that talks about AI as if he knows anything about it?" He is trying to qualify himself to give this opinion. He uses a hyperbolic style. I for one like it. But I like style and care less about the background of people to be honest. I think a good analysis is a good analysis regardless of who makes it.


> suffers from "i am genius and you are an idiot" syndrome which makes it seem mostly the ranting of an asshole vs a coherent

Very true. A thread about it here is a hat on a hat.


It seems like "I am genius and you are an idiot" syndrome and "ranting of an asshole" are on opposite ends of the spectrum, directly in opposition to each other.

The very sort of hypemongers and grifters the author complains about often hide unsustainable claims behind complicated language and opaque terminology, with the intent of portraying themselves as experts and making clear-headed criticism seem uneducated or uninformed in comparison.

The author here is making a deliberate choice to use a ranty tone to cut through that sort of bullshit, and in doing so, successfully expressed his frustration with the pervasive level of hype in AI discussions.


When I see people getting hyped up about AI, I just roll my eyes and move on. But now, quite a few people in the anti-AI camp are getting more hyped up than its promoters, jumping at the throats of anyone who dares mention the term. The recent harassment campaign from Mastodon folks toward the iTerm2 dev was a particularly disturbing one.

I personally would stay far far away from either of these two camps.


I feel his frustration with the grifters, the kool aid drinkers and makers.

For me the writing felt authentic and entertaining. Emotionally charged but rightfully so. It is incredibly disturbing to see people lying with a straight face and getting insane investments.


TL;DR: This is intentional hyperbole and satire

1. "ludic" means playful[1].

2. The blog's tagline implies this is satire:

> "Wow, if I was the leader of this person's company I would immediately terminate them." [2]

It seems like most of the comment thread failed to pick up on this.

That's understandable. The post's humor is a style which won't make sense if you're not fluent in both English and online culture.

Even if you understand the style, you also might not like it.

1. https://www.merriam-webster.com/dictionary/ludic

2. https://ludic.mataroa.blog/


> It seems like most of the comment thread failed to pick up on this.

At that point, is it a problem of most of the comment thread, or the way it was written?

I may say something that comes across really snarky to my coworker. Just because I didn't mean it to be snarky does not mean that it won't be interpreted that way.

Also, I have a feeling a lot of the comment thread are fluent in both English and online culture. This doesn't come across as a good-faith argument.

It's like I say something that comes across as snarky, my coworker confronts me about it, and I say "oh don't worry about it, if you came from low-context culture you would understand." It's very demeaning. Not to mention unsympathetic.


> At that point, is it a problem of most of the comment thread, or the way it was written?

Maybe OP's fault for posting it here, then. It's angry cathartic humor, so you're right not everyone will appreciate it.

> This doesn't come across as a good-faith argument.

It was meant to be, but you're also right that it no longer seems to be true:

* the aggressive knee-jerk stuff is getting flagged quickly

* more comments have been posted

> It's very demeaning.

I see your point given the way the thread is evolving. However, the posts I was referring to were implying OP is a schizophrenic[1] or bipolar.

[1]: https://news.ycombinator.com/item?id=40734705


I take your points here, and I agree.


Many people think satire or similar humor is serious, it happens in real life as you mentioned. There are many times the Onion has been quoted as a source.

However, since we are talking about people being rational, the points and links above that show this is satirical should lead people to make their own decision.

Hopefully, this doesn’t become a place for people to draw lines in the sand.


> I may say something that comes across really snarky to my coworker. Just because I didn't mean it to be snarky does not mean that it won't be interpreted that way.

I think the difference is knowing your audience. You probably have a decent handle on which of your coworkers will appreciate and understand your snark, and which won't. You'll change your tone and what you say accordingly. Sometimes you'll get it wrong, because you're human, and we all get things wrong sometimes. In those instances you might briefly apologize for creating confusion or causing offense, and you both quickly move on with your day.

But if you're writing a blog post, you don't really know your audience. If you have a regular audience, that audience probably exists because they "get you" and like what you write and how you write it. So in a way you do know your audience: by definition, they've self-selected to be people who get your writing.

But then you decide to submit one of your blog posts to a community of varied individuals like HN. Some people on HN are like your existing audience, and will like it. Some people on HN are not going to get it, or not going to like it.

That's... just life. So I think "is it a problem [with] the way it was written?" is the wrong question. There really can't be a problem with how it was written. Certainly there are (mostly subjective) standards for how well something is written, regardless of the way it's written, but you can't really say the author was wrong to write in the combative, extreme style that they've chosen as their entire online shtick. Because it's meaningless to be "right" or "wrong" about that; those terms aren't defined for that. It's only what someone may like or dislike, and the author should (rightly, IMO) not be particularly concerned about that in this context.

I personally didn't enjoy the article that much; I don't find joking about violence to be funny, and it even makes me a little uncomfortable. I read through the whole thing because I found the topic interesting and his opinions on it worth reading. But that's just my own personal subjective take, and it's both fine for me to feel that way, and fine for others to enjoy the humor more than I did.


There's so much about this guy's work that just flies over most peoples' heads, but that's fine by me. Most people don't get it, regardless of what _it_ is.

This is the only blog that I actively look forward to reading.


> won't make sense if you're not fluent in both English and online culture.

I am not fluent in either and I'm in love with his style and substance!


I also thought it was a good read.


> The post's humor is a style which won't make sense if you're not fluent in both English and online culture.

Oh, please. That's like saying that only native speakers with a university degree can understand a 6 year old's fart jokes.

The humor in this article is juvenile shock-jocking. It starts from the trashy clickbait headline, and is never elevated past that. There's no particular sophistication needed to understand it. It's just not particularly funny or insightful; it's just taking some rote complaints about AI and the hype cycle, and threatening to kill people in various graphic ways. Hilarious.


At least you understood that it was an attempt at humor. Humor is subjective.


It's called irony/satire, actually. And irony can't be understood without shared knowledge about a thing. So understanding the irony and humor in this means the thinking expressed in the post is aligned with the reader's. If one can understand it, one already shares the same thinking patterns about the topic (which corresponds to having the same knowledge).


"'t ain't funny, McGee." The author isn't talented enough to pull off the humor angle so it just comes across as what kids these day refer to as "cringe".


> but suffers from “i am genius and you are an idiot” syndrome

But he's still nowhere near as unhinged as the rabid AI bullshittery shitting up the airwaves for the past year

Not a lot of room for nuance when the subject matter is this polluted. Typical HN convention of preferring nuance to outright dismissal is bad at filtering BS


Wow, a thoroughly entertaining (though slightly deranged) read. Did not expect to see an R.A. Salvatore reference in this; that brought me back.


I deleted my comment about the author's chutzpah. It had a bit of an "IamTheMainCharacter" and "IamAveryStableGenius" quality. The thing is, I agree with them, even if I want to say "dial the ego down, mate" - and after all, if you cannot parade your worth on your own blog, where can you?

Still. I prefer self-deprecating. Maybe you don't get seniority in his space if you don't sell.


May I point you to the "Compliments" section of his blog, in your search for self-deprecation.


Fair. But he's unrepentant. It's like the book author who adds a bad-review blurb to their book on the basis that controversy simply feeds readership.

But yes. Clearly self aware. Just doesn't care. Which is fine too: it's his blog.


Started a career in ML/AI years before ChatGPT changed everything.

At the time, we only used the term AI if we were referring to more than just machine/deep learning techniques to create models or research something (think operations research, Monte Carlo simulations, etc.). But that was already starting to change.

I think startups and others will realise that to make a product successful, you need clean data and data engineers; the rest will follow. Fundamentals first.

All the startups trying to sell "AI" to traditional industries: good luck!

I've worked as an AI engineer for a big insurance company and as a contractor with a bank, and oh gosh!


I bet the old guard will refuse to change course and new companies using the new tools will displace the old ones. Like what is happening with online banks and similar. Like what happened with low-cost airlines.


I actually think the example of banks is a good one, because what's happened in places with a competitive banking landscape is just that the big players have upped their game and the benefit of the challenger banks has diminished, with them struggling to become profitable.

Monzo is the biggest new player in the UK and it's not making much of a profit. Revolut doesn't have a banking license because it can't comply with the regulatory requirements. Starling has taken a much more conservative path and is being led by an ex-Barclays person, but even it is being investigated by the FCA for having poor controls around financial crime. All of those giving loans have an unacceptably high percentage of defaults from an investor perspective.


In Europe there are other more successful cases like Wise. Or big banks starting whole subsidiaries from the ground up.


Again, though, Wise aren't a bank; they don't have a European or UK banking license.


I didn't say it was. I was saying they are displacing the old guard.


I think hot new upstarts will find that a combination of intricate regulation, physical reality, and institutional inertia will only allow them to make a tiny dent over years.


I remember the hype around Big Data. I was in those meetings where vendors pitched their products. Our director would ask, "Do you do Big Data?" Any vendor who said no was immediately dismissed.

I still don't know what the answer to that question was supposed to be. We scraped coupons from our competitors then displayed them on our websites.


I probably agree with a lot of the points the author makes but abhor the style and tone this is written in.


Love the tone and the comments, personally.


I abhor the style and disagree with most of their comments.


You deserve credit for saying that so even-keeledly. Usually people just do the same thing back.


Did you shadowban me for asking a question about frontpage ranking? That's a bit ridiculous, don't you think?


It would indeed be ridiculous; also stupid and counterproductive. Those could be clues to the fact that we didn't do it.

Some of your comments are getting killed by software filters. Those are tuned more strictly for new accounts, for reasons which aren't hard to figure out.

I don't think any mods even saw your comments (before now)


Well, thank you, software filters. Are you sure they're working correctly? Because all my comments except my first are getting killed. What did I do? This is no fun.


TL;DR: Do you have a stance on HN's change in culture?

It seems like HN comments are shifting from technical focus toward:

* Early Reddit's low-effort but tame "snark"

* Aggressively moralizing posts dismissing sardonic criticism as dangerous mental illness

I haven't seen much of OP's style lately, especially since n-gate[1] went inactive.

I'm wondering whether that's a bad thing. Although the tone is hostile on the surface, there's usually some aspiration toward competence associated with it.

[1]: http://n-gate.com/


I don't think it's changed much. I think perceptions of the kind you're describing (HN is turning into reddit, comments are getting worse, etc.) are more a statement about the perceiver than about HN itself, which to me seems same-as-it-ever-was. I don't know, however.


TL;DR: I think you're right. Ty for maintaining HN!

> are more a statement about the perceiver than about HN itself

I may have some rose tint to my oldest memories of lurking HN. Not only was I younger, but I was also seeing hours-old threads instead of comments arriving in real-time before any sorting or flagging.

In other words, thank you for maintaining the site all these years.


Yes exactly! These perceptions are strongly conditioned by each generation's "rose tint".

It's ok though, because the perception "HN isn't what it used to be" is somehow part of the overall system.


n-gate was the best. Referencing it was always worth the downvotes.

I get we are semi-autistic nerds and can't appreciate comedy/satire/sarcasm, but comedy/satire/sarcasm is a potent means of criticism and analysis, especially since we are in an ever-increasing torrent of bullshit.


HN is a bit too serious I think. More comedy would be better than the usual cynicism which can bleed over into your mindset even after visiting the site. You are what you read, after all. Maybe even this comment comes off as cynical; no doubt I’ve been infected too.


What the world needs very badly in the generative spam world is a healthy dose of skepticism.

What qualifies as skepticism versus cynicism is often in the eye of the beholder.


Already [flagged] [dead] at least 8 times, as linked by @greyface3- https://news.ycombinator.com/item?id=40733576

Which is a pity. The style is excellent and so wonderful; it is a critical relief after suffering through insane, out-of-this-world hype bordering on religion. At least to me, he doesn't read as menacing; he reads as being on a justifiably distraught polemic against total madness that's allowed to pointlessly suck up all the oxygen in the room.

We should be flipping our shit (if not each other) that we have to put up with this endless exuberant hucksterism, which robs us of agency and pollutes our noosphere with inauthentic bullshitting.



This guy keeps threatening violence in his blog posts; has anyone ever had to fight him? How tough is he actually?


Author here. Someone sent me this comment this morning (along with a note saying: "I am not afraid of you") and it absolutely sent me. Probably because he knows I am of exactly average height and exactly average build, and I quit Muay Thai after one day of getting conditioned against knees-to-the-stomach.

But if you don't tell anyone, I won't tell anyone.


I found the perfect use case for AI, it might interest you:

https://news.ycombinator.com/item?id=40736501

Let me know if you need any further help with the subject.


I think it's lame.


You had me at "I think".


Too flat.


Fortunately, it was cheap to get a tone modulation, and the results literally made me smile... I think this is now going to be my new thing to do with articles, since it's kind of hilarious to get the human emotions altered while leaving the content relatively intact.

(My prompt: "okay, do it again, but this time change the tone to be cheerful and encouraging and not so flat.")

Ludicity: Embracing AI Innovations with Caution and Care

Published on June 19, 2024

The recent advancements in AI, particularly with developments like GPT-4, are undeniably reshaping our world. These innovations have the potential to eliminate mundane tasks, revolutionize industries, and even pose existential questions about our future.

As someone who has formal training as a data scientist, having excelled in a competitive machine learning event at one of Australia's top universities and written a Master's thesis with custom libraries in MATLAB, I have a unique perspective. I may not be the best, but I am certainly experienced and knowledgeable. So, it's with a mix of concern and optimism that I urge caution in our enthusiasm for rolling out AI everywhere.

Balancing Enthusiasm with Reality

I began my journey as a data scientist in 2019, and by 2021, I realized that while the field is vast and promising, it's also filled with misconceptions and overhype. Many leaders in the industry seemed to push AI initiatives without fully understanding them. The market saw a surge of companies jumping on the AI bandwagon, often without clear use cases, driven by hype rather than genuine need.

Despite the financial rewards, I transitioned to data and software engineering. Unlike those who chase trends for short-term gains, professionals like us value job stability, meaningful relationships, and genuine contributions to our field. The transient nature of hype-driven careers contrasts sharply with the steady, rewarding path of continuous learning and improvement.

Practical Advice for AI Implementation

Instead of blindly adopting AI, focus on the fundamentals. Many companies struggle with basic applications, yet they aim to implement complex AI systems. It's essential to ensure that your organization has the necessary infrastructure, skills, and culture before diving into advanced technologies.

For example, many businesses haven't mastered basic IT operations like regular database backups. Improving these fundamental practices will better position your company to leverage AI effectively when it's truly needed. Emphasize collaboration, clear communication, and mutual respect in problem-solving. This approach will yield more reliable and sustainable results than any rushed AI implementation.

The Future of AI: Opportunities and Challenges

Generative AI, like ChatGPT, has shown remarkable potential, but it's important to distinguish between practical applications and unrealistic expectations. AI can indeed revolutionize industries, but only if used thoughtfully and strategically.

If your company has a clear, well-defined use case for AI, by all means, pursue it. However, for many businesses, the priority should be on improving existing operations and culture. AI will naturally integrate into your processes as it becomes relevant and beneficial. Remember, technology should serve to enhance your capabilities, not distract from core objectives.

Encouraging Thoughtful AI Adoption

Let's approach AI with both enthusiasm and caution. Recognize the incredible potential while also being aware of the limitations and challenges. Embrace AI where it makes sense, but always prioritize the fundamentals of good business practice.

In conclusion, the journey with AI is exciting and full of possibilities. By focusing on solid foundations, continuous learning, and thoughtful implementation, we can harness the power of AI to create a brighter future. Let's work together to make sure our advancements are meaningful, sustainable, and truly beneficial.

Thank you for reading, and let's continue this conversation with positivity and a shared commitment to excellence! If you have any interesting work or thoughts to share, feel free to reach out at ludicity.hackernews@gmail.com. Stay tuned for more insightful discussions, ambitious projects, and exciting developments in the world of AI and beyond.


No one that has fought him is here to tell the tale.


Judging by what he puts in the open in his other articles, an accomplished fencer he is, so there must be a few who had to fight!


> you do not need AI for anything

I'd so have preferred this to be true, and to ignore the AI thing (mainly to avoid any effort to change any of my habits in any way). But as an end user I can say that this is wrong. I definitely need LLMs for one critical thing: search that works.

Google has become clogged with outright spam and endless layers of indirection (useless sites that point to things that point to things that point to things, never getting me to the information that actually fucking matters), but I can ask the best LLMs queries like "what's the abc that does xyz in the context of ijk" and get meaningful answers. It only works well when the subject has a lot of "coverage" (a lot of well-trodden ground, nothing cutting-edge) but that's 80% of what I need.

I still have to check that the LLM found a real needle in the haystack rather than making up a bullshit one. (Ironically, Google works great for that once you know what the candidate needle actually is—it just sucks at finding any needle, even a hallucinated one, in the first place.) For shortest path from question to answer, LLMs are state of the art right now. They're not only kicking Google's ass, they're the first major improvement in search since Google showed up 20+ years ago.

Therefore I think this author is high on his own fumes. It reminds me of the dotcom period: yeah there was endless stupid hype and cringey grifters and yeah there were excellent rants about how stupid and craven it all was—but the internet really did change everything in the end. The ranters were right about most of the battles but ended up wrong about the war, and in retrospect don't look smart at all.


> I definitely need LLMs for one critical thing: search that works.

Hmm, that seems weird.

> Google has become clogged with outright spam and endless layers of indirection

So... the problem isn't search, it's that Google's ad business has destroyed Google's search and created spam to try to get ad clicks.

I switched to Kagi a few months ago and I'm much happier. Most of the blog spam is just gone, and the things that slip through I can nuke.


yeah I should try it, I'm just a laggard.

Still though, the way that LLMs can give customized answers to complex specific questions, questions that probably don't have any specific page on the web, feels like a leveling up beyond traditional search engines.

here's a recent example: I asked how to call a particular function in some library from a little-used language, and it not only told me exactly how to do it but wrote the FFI wrapper. I could definitely have dug that information up but it would have taken a lot longer and then I'd still have had to pore over documentation to write that tedious code.


"Can give" seems not super relevant honestly.

If you ask an AI for historical pictures of Hitler shaking hands with Zulu fighters, you might get them, but you'd be a fool to think that's facts. Text is exactly the same. The tech is exactly the same. The result is exactly the same. People are just so used to text being about facts via school that they are fooled, but they know about paintings from before kindergarten.

Your example isn't about facts or search though. In the case of code we can easily check the result. It's not a case of facts really.


I wholeheartedly agree (both that you first ask ChatGPT and then double-check with whatever, and that Google results have become junk - mostly). Alas, I can totally see how AI companies will monetize this by allowing advertisers to influence the answers. And not in an ad-in-a-sidebar fashion, but by baking the advertiser's product etc. into the answers. Personally, I find this somewhat disturbing.

Indeed, I think this is a key area where regulation is needed. If an answer from an AI is influenced by 'something unexpected' - whatever that might be - then it has to be clearly highlighted in the answer.


I think you missed the point of the article.

It's not meant as a criticism of ChatGPT and some LLMs; it's more a criticism of corporate drones and "thought leaders" that latch onto the latest hype instead of fixing their core problems.


Oh, I probably did, because none of that is interesting to me. Every 5 years another crop of "omg, corporate drones are so awful" consciousness appears, but once you've been through 3 or 4 cycles, everything new is old again. The rants aren't more interesting than the suits are. (Well, the rants at least have their fun side.)

What interests me is that there is new technology here that does something relatively magic - that just doesn't happen often. Corporate crap and MBAs and suits and founder hucksters are forever with us. I'm sure it was equally true decades before me and you showed up.

I'm rather annoyed that this is so, because now I have to slightly get off my ass and learn something, so as to reach a better state of lethargy later.

I'm a ranter by character, but you know who I'd pick between the ranters and the drones? The drones. The drones are the 100 million sperm racing to fertilize the egg. They're all losers (save one infinitesimal winner) and it's easy to point that out. The ranters stand on the side saying "those fucking idiots. those morons. what losers they are. as opposed to me, who sees this." But the ranters are the dead end, smart as they may be. The sperm are at least in the race.


It's incredible if you keep in mind it's just gonna find the most "common" words that come alongside your query in some jumbled-together answer.


Machine learning isn't just a word frequency machine. If that's all it was, we'd have this technology decades ago.


I'm really wondering about this. Is it just "recycling things people have said before", or is it genuinely generative?

"Recycling things people have said before" is basically search, and that is hugely valuable and quite enough for me. If it's genuinely generative, that's a cosmic leap beyond search.

My guess is that it's not genuinely generative, but rather that the long tail of "everything that everyone has said before" is so vast that it feels like magic when it's retrieved.

But LLMs have shocked me enough on the search front that I'm no longer smugly confident about this.


Have you seen the Othello paper? [1] To me it really puts paid to the idea that LLMs could just be stochastic parrots, in the sense of just rearranging things they've seen before. They at least can apply world models they devised during training (though whether or not one does for some given prompt can be a different question).

[1] https://thegradient.pub/othello/


I mean, if the question is if they act as just pure, 100% search and nothing else, the answer is pretty self-evident.

I'm not much of an LLM user, but the few times that I did turn to it for programming advice was in rare and obscure situations that weren't really discussed anywhere on the internet (usually because they contained multiple issues in one). The LLM tended to produce something I'd call a reasonable answer, especially on topics that weren't completely obscure.

But we don't even need to go that deep to answer the question. For example, if an LLM was pure search, you couldn't make one generate text in some specific style or with specific constraints, unless that exact answer already existed somewhere on the internet. They can mash up ideas or topics, and still output good or reasonable data.

The billion dollar question isn't whether it's generative - it's whether the generative capabilities are "enough". Machine learning is about finding patterns, and a complex enough pattern finder will be very good at approximating answers accurately. LLMs don't actually have an accurate "model of the world" learned - but they have something that's just close enough on certain topics to make people use it as if it does.


The issue with Google is real. That being said, AI isn't the answer in my experience. Wikipedia is a better source of casual information, I find. And if I want to dig further I look for the real sources if I can - not the slop written by journalists, but the actual material people in the field are engaging with. The primary sources. The documentation. If I can't find or understand that material, then that's that and I move on, considering anything 'accessible' is going to be an example of Gell-Mann amnesia and not worth my time.


> I myself have formal training as a data scientist, going so far as to dominate a competitive machine learning event at one of Australia's top universities and writing a Master's thesis where I wrote all my own libraries from scratch. I'm not God's gift to the field, but I am clearly better than most of my competition - that is, practitioners who haven't put in the reps to build their own C libraries in a cave with scraps, but can read textbooks and use libraries written by elite institutions.

I really didn't have any illusions on the article after reading this - apparently the author believes that anyone who hasn't written a C library is below him.

And also, this author is known for writing articles that are full of ranting and have rage titles, for example https://news.ycombinator.com/item?id=34968457


Glad I'm not the only one put off by that. Though it's nice of him to signal so early on that his words aren't worth reading


Maybe I'm misinterpreting, but it seems like he considers himself part of the "practitioners who haven't put in the reps to build their own C libraries in a cave with scraps, but can read textbooks and use libraries written by elite institutions", not above them.


Except that in the sentence right before that, he says that he did write his own C libraries from scratch, which I think means that the only reasonable interpretation of the "practitioners..." clause is that those are the people he describes as his "competition", whom he is "clearly better than".

I'd really like to read it that way, but I'm afraid he actually did come across that arrogant.


I'm not seeing where he says he wrote libraries in C in a cave with scraps. That sounds like a few steps beyond writing libraries from scratch (non-specified language). One's competition is their equals, no? It's not a competition otherwise.


It's just an Iron Man(2008) movie reference. (Though it was the Big Bad saying that a problem wasn't that hard if Our Hero didn't need any resources - which doesn't exactly fit what the author seems to be saying here - I read it mostly as lightening the mood a bit regardless.)


"I am clearly better than most of my competition"?


Still not seeing where he said he himself wrote C in a cave with scraps. Not suggesting he's humble, but I still think he sees himself as one of them (he doesn't say he's "better than everyone [at that level]")


The entire sentence is:

“I'm not God's gift to the field, but I am clearly better than most of my competition - that is, practitioners like myself who haven't put in the reps to build their own C libraries in a cave with scraps, but can read textbooks and use libraries written by elite institutions.“

It’s quite poorly written, but they note they’re clearly better than most of their competition.


> they note they’re clearly better than most of their competition.

Yup, I've noted that three times. Are we still claiming he wrote libraries in C in a cave with scraps? Or just moving on?

Poorly written? Maybe. Poorly read? Equally likely. Maybe we should just ask the guy, he'll know.

Did the downvoter see me as their "competition"? "Clearly better"? Both? I don't have enough karma yet to respond in kind, so they're right about something at least.

lol they did it again. mods!


The section starts with him bragging about having written libraries from scratch. Given that, I really don't understand how you can arrive at them considering themselves one of the people who hasn't written libraries.


> I really don't understand how you can arrive at them considering themselves one of the people who hasn't written libraries.

There seems to be a comprehension issue on one side or the other. Can someone point me to where he says his competition has never written a single library? I only see that he says they haven't written a library "in C in a cave with scraps". Where does he say he's written libraries in C? I don't see that either. Maybe the words "scraps" and "scratch" are too close together? If a subset of readers are inclined to dismiss him out of hand for this perceived slight, nothing I write will convince them otherwise, but that doesn't make their uncharitable interpretation of his words the correct one.


Yes, there is a comprehension issue here. Everyone else understood this as a discussion on whether the author is being arrogant and dismissive in this section. You seem to be looking for a discussion on whether anyone really writes C libraries in a cave with scraps.

Nobody here but you is taking the "cave" and "scraps" literally. It'd be total nonsense if taken literally. Like, what would it even mean? It's obviously the author trying to make their writing punchy. You should not take it any more seriously than their threats to snap people's necks for talking about AI.

If you want to ignore the actual discussion and steer it toward an interpretation of the text that's so literal the text doesn't even make sense, you should probably be very explicit about it.


No one's discussing whether he's arrogant or not, they've all made up their minds. I gave an alternate, more charitable interpretation of his words, that would offer an otherwise offended reader a way to reframe the blog, and not dismiss it out-of-hand if they took his words as a slight. I believe this interpretation.

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

:)


Please note that Ludic is partly writing for himself, to vent about the massive disappointment of working in tech after all he's been through, and partly as a rallying cry for others who can relate to being primarily surrounded by well-spoken incompetents.

If you subdue the urge to write him off based on his emotionally expressive writing, you'll find a lot of poignant observations you wouldn't get from more civilized venues (e.g. HN, famous tech influencers, thinkers, and execs, etc.).

Ludic's blog is a one-man 4chan board, minus the racism, sexism, and so on.


It worked for DHH. The substance is significantly better than I expected from the opener.


> and have rage titles, for example https://news.ycombinator.com/item?id=34968457

Uh yeah that's a bit like telling the same joke over and over. In fact it is the same joke.


I'm a Data Scientist currently consulting for a project in the Real Estate space (utilizing LLMs). I understand the article is hyperbole, perhaps for comedic purposes, and I actually align maybe 80% with the author's views, but it's a bit much.

There is industry-changing tech which has become available, and many orgs are starting to grasp it. I won't deny that there's probably a large percentage of projects which fall under what the author describes, but these claims are doing a bit of a disservice to the legitimately amazing projects being worked on (and the competent people performing that work).


> I'm a Data Scientist currently consulting for a project in the Real Estate space (utilizing LLMs).

Consultants are obviously making huge amounts of money implementing LLMs for companies. The question is whether the company profits from it afterwards.


Time will tell, but I would cautiously say yes.

Note that I don't usually work in that particular space (I prefer simple solutions and don't follow the hype), didn't sell myself using 'AI' (I was referred), and also would always tell a client if I believe there isn't much sense in a particular ask.

This particular project really uniquely benefits from this technology and would be much harder, if possible at all, otherwise.


Would you recommend to still get into freelance consulting (with a ML background) at this point in time? Or will the very technology you're consulting about, replace you very soon? AutoML, LLMs etc..


I'd say it depends on what your other options are. I don't think the technology will replace me soon, even at the rate I see it improving. At this point it's still a tool we can use to deliver faster, if we use it wisely.

Especially about ChatGPT et al. - I use it daily, but having the proper foundation to discern and verify its output shows me that it's still very far from being a competent programmer for any but the 'easy' tasks which have been solved hundreds of times over.

Like I hinted, I also view all of this hype sceptically. I dislike the 'we need AI in our org now!' types and am not planning on taking on projects if I don't see their viability. But there's obviously still a lot of demand and people offering services like those in TFA who're just looking to cash in, and that seems to work.

If you can find projects you believe you can make a difference in with your background, why not give it a shot?


Thank you!

> I'd say it depends on what your other options are.

> why not give it a shot?

You're right. If it fails because of automation by ML, most other career paths in the tech sphere would, too.


If you're interviewing for job at a company you're not familiar with, what are some good heuristics (and/or questions to ask) to politely get a sense of whether it's run by buzzword bingo enthusiasts?


If it's tech and they want a 30-45 minute interview with live coding on something like HackerRank - especially if whatever brain teaser they've chosen has absolutely nothing to do with the field they operate in - I'd put the chances around 80%.


Easy. Is the interview process a dehumanizing process, or is it not a process at all and you are treated as a potential friend and colleague? Are they trying to sell you on the team and project or are they merely hazing their applicants? That will tell you more than any heuristic about the culture of working at this company.


If you ask how they handle hard problems they'll tell you about AI solving it soon


A lot of this is on point. I work in tech diligence, talking to companies raising money. The amount of pointless AI hand waving is unreal, and the majority have not ever tested their disaster recovery plan.


The commentary on Scale's "2024 AI readiness chart" is so spot on


Interesting analysis wrapped in satire


> How stupid do you have to be to believe that only 8% of companies have seen failed AI projects?

That’s because most of them are still in progress. Enterprise moves slow and only started on this recently. They still think it’s going great because they’re riding the high of imagination.


That is not true unfortunately.

ML has been around for decades, DL for more than a decade.

In 2019, I had to explain to executives that 95% of AI projects fail (based on some other survey); the top reason is bad or missing data, and the second is misaligned internal processes. I probably still have the slides somewhere.

One project I worked on was impossible because the data was so bad that after cleaning, we went from 4M rows to 10k usable rows in 4 languages. We could have salvaged a lot more if we restricted the use case, but then the benefits of the project would not have been so interesting anymore. The internal sponsor gave up and understood the problem. Instead, they decided to train everyone on how to improve data entry and quality! In just 6 months I could see the data was getting better indeed. But I had to leave this company; the IT department was too toxic.

So I think the author is right. According to Scale, we'd have gone from 95% failures to 95% successes in just 4-5 years just thanks to LLMs? This is of course ridiculous, knowing the problem was never poor models.


No, it's because the numbers are made the f** up. That same chart, as the author notes, also points out that allegedly a third of companies have outsourced "strategic decision making" to AI. That is so offensive to any person who has a brain that the author's much-criticized tone is completely warranted if you have any love for truth at all.

I mean, contempt is literally the only sane and appropriate emotional reaction to the amount of lies, and it is intentional lies, that are being marketed to people.


>That’s because most of them are still in progress.

What? You know AI has been around since before ChatGPT.


The number of projects increased a lot since that started getting big press, though. It kicked off a much bigger craze than before.


> by 2021 I had realized ... it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes ... Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders.

This was true of web tech in the 2000s, social media and mobile apps in the 2010s, crypto, now AI...

There's substance behind most of these topics, they're intellectually interesting, useful to people etc., but they tend to be largely grift, hype, office politics, narcissists tryina narcissist.


Tech as an industry is addicted to hype bubbles, and addiction in people and in large groups of people fosters bad decision making, skewed priorities, and over the long haul does great harm.


I feel like a simpler explanation is that the money involved attracts lots of shallow people trying to make a buck, who do not care about the substance or technology.


> GPT-4 can't even write coherent Elixir, presumably because the dataset was too small to get it to the level that it's at for Python

Yes, but it does save me a whole bunch of time writing boilerplate command-line entries in argparse. I can give it a table definition and ask it to write a bunch of CRUD methods instantly. I can do all of this myself, but why?

Of course the stuff it produces doesn't work all the time but then I'm not asking it to write my entire app. I'm asking it to spare me the tedium of iterating over the building blocks so that I can get to the main part - building things.
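To make that concrete, here is roughly the kind of boilerplate being described: an argparse entry point plus a CRUD insert derived from a table definition. This is only a minimal sketch - the table name, columns, and sqlite3 backend are assumptions for illustration, not the commenter's actual setup:

  import argparse
  import sqlite3

  TABLE = "tasks"                # hypothetical table name
  COLUMNS = ["title", "status"]  # hypothetical column list

  def insert_row(conn, values):
      # Build a parameterised INSERT from the column list.
      cols = ", ".join(COLUMNS)
      placeholders = ", ".join("?" for _ in COLUMNS)
      conn.execute(f"INSERT INTO {TABLE} ({cols}) VALUES ({placeholders})",
                   [values[c] for c in COLUMNS])
      conn.commit()

  def main():
      parser = argparse.ArgumentParser(description="Tiny CRUD CLI")
      sub = parser.add_subparsers(dest="command", required=True)
      add = sub.add_parser("add", help="insert a row")
      for col in COLUMNS:
          add.add_argument(f"--{col}", required=True)
      args = parser.parse_args()

      conn = sqlite3.connect("example.db")
      conn.execute(f"CREATE TABLE IF NOT EXISTS {TABLE} ({', '.join(COLUMNS)})")
      if args.command == "add":
          insert_row(conn, {c: getattr(args, c) for c in COLUMNS})

  if __name__ == "__main__":
      main()

None of this is hard to write by hand, which is exactly the point: it's tedium worth delegating.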


Awesome. Best rant I’ve read since Zed Shaw’s “Rails is a Ghetto” back in 2008.


If anyone wants to know the definition of "trenchant", read this blog post.


At first I was thinking that "AI" is the "HD" or "digital quality" of this decade. But it's a broader fraud than that. It reminds me of this IBM commercial, which made me laugh out loud at the time: https://www.youtube.com/watch?v=IvDCk3pY4qo


When I asked the innocuous question -

https://www.reveddit.com/v/singularity/comments/1bdjf7u/is_t... - The mods deleted the content of the question and the whole question itself.

Go figure!!

This is the original question

=====

Is the power of AI companies basically their ability to outbid for AI accelerator chips?

It seems to me that all the hype AI companies are getting is primarily due to the ability to outbid competitors for scarce AI chips, or even design and/or build their own. This also includes their ability to build their own infrastructure around them.

IMHO this scarcity is the main source of their high valuations, and it seems that in the long term when chipmaking capacity builds up this advantage will eventually wane.

Having the ability to train on information provided by users and customers is also a factor, but that doesn't look like it will last either

=====


This rings true and is also hilarious. I am tired of the AI hype/grift that seems to pervade the industry - funnily enough some of these same promoters were on the blockchain/crypto hype that was going to revolutionize every industry on earth.

There is meat to recent advances in Machine Learning…but the revenue / savings for actual businesses will need to start coming in if this hype is going to be sustainable for the next 16-24 months.


I feel like OpenAI took this thing out of the oven too soon, or packaged it in a way that misrepresents its nature. The power of AI isn't to answer your technical support questions or be your AI personal assistant; it's in the manic creativity that convinced people they were talking with a human (albeit a slightly unhinged one). But that energy had to be nerfed for business use cases, and is in and of itself problematic for monetization. So what we're left with after they've been nerfed is basically an iteration on what we already had, not the world-changing technology that we were promised.


I honestly don't think OpenAI anticipated it becoming as big of a deal, and they had to lean into grandiose expectations once it did become a big deal. They didn't have a grandiose marketing campaign to my memory - the only reason their tech exploded overnight was because it was something that made an average person interested. News of previous GPT versions and primitive image generators were passed around among techy people for years before that point. Maybe there's a threshold of generation quality, at which a user perceives it as something that can rudimentarily "understand" the input, and isn't just a generic chatbot.


So what about:

1. vector search
2. RAG pipelines with your data
3. Semantic, image, video, object, detection
4. Robotics
5. Code generation /Review
6. AI language translation
7. AI research agents
8. ChatGPT

Clickbaity article aside, there are tons of legitimate uses of LLMs for corporations of all sizes.

Now blockchain on the other hand…


>> RAG pipelines with your data

FTA

"Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving. Fix. Your. Shit."

> 5. Code generation /Review

FTA

"If another stupid motherfucker asks me to try and implement LLM-based code review to "raise standards" instead of actually teaching people a shred of discipline, I am going to study enough judo to throw them into the goddamn sun.

I cannot emphasize this enough. You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense unless you actually work in the rare field where your industry is being totally disrupted right now."

The man trots out his bona fides at the start and in the article. He's an insider, and he's backing up that rage.


"Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving. Fix. Your. Shit."

I read up on Copilot as part of some internal research, and absolutely the first things to do for Copilot (I'll copy-paste just the line-item headings from the section entitled "Prepare your data for Copilot for M365 searches") are:

  - Clean out redundant, outdated, and trivial (ROT) content.   
  - Organize content into logical folders and sites.
  - Tag files with keywords.
  - Standardize file names.
  - Consolidate multiple versions.
  - Promote data hygiene habits.
Sigh. If I could do all this at an organizational level, I wouldn't need copilot at all.


What the hell does that mean, "fix your docs"?

Companies have tons of document libraries and documentation that need sifting through, and they're generating more content regularly, so RAG and vector search is a game changer with real value there.

Eg we implemented RAG + vector search at a manufacturing company and it changed their workflows entirely

And as for coding with LLMs: say what you will about AI coding, but code review/linting and LLM-created unit tests are themselves as game-changing as IDE IntelliSense. This value is worth at least one junior developer on the team - that's a 70k yearly salary in benefit alone.
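For readers unfamiliar with the pattern: a RAG + vector search pipeline boils down to embedding documents, retrieving the ones nearest to a query, and feeding them to a model as context. The sketch below is a toy illustration only - embed() and generate() are placeholders standing in for a real embedding model and LLM, and the sample documents are invented, not from the manufacturing project mentioned above:

  import math

  def embed(text):
      # Placeholder embedding: crude letter-frequency vector, normalised.
      # A real pipeline would call an embedding model here.
      vec = [0.0] * 26
      for ch in text.lower():
          if ch.isalpha():
              vec[ord(ch) - ord("a")] += 1.0
      norm = math.sqrt(sum(x * x for x in vec)) or 1.0
      return [x / norm for x in vec]

  def cosine(a, b):
      return sum(x * y for x, y in zip(a, b))

  def retrieve(query, docs, k=2):
      # Vector search: rank documents by similarity to the query embedding.
      q = embed(query)
      return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

  def generate(prompt):
      # Placeholder for the LLM call.
      return f"[model answer based on a prompt of {len(prompt)} chars]"

  docs = [
      "Line 3 changeover requires a purge cycle of 20 minutes.",
      "Quality holds are logged in the QMS before release.",
      "Forklift battery swaps happen at shift change.",
  ]
  context = "\n".join(retrieve("How long is the purge cycle on line 3?", docs))
  print(generate(f"Answer using only this context:\n{context}\n\nQuestion: ..."))

The retrieval step is the part that depends on having documentation worth retrieving in the first place, which is exactly the article's complaint.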


> What the hell does that mean, "fix your docs"?

If you asked a company to run for a week based on what its docs said, and nothing else, I suspect it would be bankrupt before Friday.

Knowledge is tribal, human, and adaptive.

AI did this once already: harvesting the data from professionals for expert systems... The problem is you need to keep feeding it data... the people don't go away; they aren't doing the job any more, they are just documenting the job at that point.


A measly 70k saved today to make fewer positions for junior talent that you will so desperately need to be senior when your current crop retires. You might as well burn the office furniture for heat this winter and save on HVAC.


In manufacturing, maybe you haven't done ISO9000 but you've heard of it and have all sorts of regulated documentation as a baseline - a "bias for writing process details down" that is absent in a bunch of other industries, with software at the top of the list. "Documentation // it is written by strangers // throw it in the trash" is a software haiku that I keep running into (not as a policy or anything, just a recurring meme about how bad/uninformative/absent software docs generally are.)


I'm working at a hardware shop; we make what is essentially a special kind of network router/switch for analog networks (DOCSIS). We also got hit by the NN craze: half of the automation team is developing an "analyzer" based on some neural network model for our lab tests, which classifies failures by type. A completely useless activity, because even if that code guesses correctly, we still need to go look at the logs manually and make decisions manually, one by one. Exactly as if the NN didn't exist. But after trying to say that in the team chat I was confronted with doubt and rejection. And so this development continues.


> so I'm going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you're an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)

And (theseFeets) in (yourNutz).

> I'm going to ask ChatGPT how to prepare a garotte and then I am going to strangle you with it, and you will simply have to pray that I roll the 10% chance that it freaks out and tells me that a garotte should consist entirely of paper mache and malice.

Sadly it’s got a much greater than 10% chance of getting a garotte wrong.


A garotte is a device or weapon used for strangulation. It typically consists of a cord or wire, sometimes with handles at each end to provide better grip and leverage. The garotte is used by wrapping it around a person's neck and tightening it, cutting off the air supply and causing asphyxiation. Historically, garottes have been used for execution and assassination due to their silent and deadly nature. In modern contexts, they are often associated with covert or criminal activities.

welp


Zero interest rates are gone, so dying companies are throwing themselves onto the AI bandwagon as a last ditch attempt to bring in cash and keep going.


This is how I feel. GPT and similar LLMs demo extremely well, but collapse as soon as they're put through any kind of real-world problem. However, we absolutely are living through press-release-driven business administration, so it's likely that the realistic approach will be discarded for things that are sexier for the media. Companies will pay for this.


Done right, the rant is mostly a style of expression easily adapted to making clear contrasts. If you can make it work, I say go for it.


>That is to say, it turns out that the core competency of smiling and promising people things that you can't actually deliver is highly transferable.

There's a lot of value in this statement, regardless of how you feel about AI, etc.

We really should fully fund the FTC and anyone else who can help fill the cells in "Thought Leader Jail".


Fun to read. The style of the article reminded me of https://scholar.harvard.edu/files/mickens/files/thenightwatc...


> There you have it - what are you most interested in, dear leader? Artificial intelligence, the blockchain, or quantum computing?6 They know exactly what their target market is - people who have been given power of other people's money because they've learned how to smile at everything, and know that you can print money by hitching yourself to the next speculative bandwagon.

Nailed it.


I don't see how AI, quantum and block chain are at all equivalent.

Block chain has no use.

AI and quantum have obvious uses if they work.

Quantum is not close to working now. It's where AI was at in the 80s/90s.

AI may not be perfect but there is no denying that the GPTs were a dramatic shift.


> GPTs were a dramatic shift

It was a massive leap forward for a 50-year-old idea.

If it takes another 50 years to make a leap of equal size, we might get to AGI before the heat death of the universe.

> I don't see how AI, quantum and block chain are at all equivalent.

If we shut them off tomorrow what do YOU need to replace in your life without them?


> If we shut them off tomorrow what do YOU need to replace in your life without them?

A translator and editor would be nice? I use GPT to give feedback on my writing. It works well, especially suggesting other ways to word things. This is something I didn't even know I needed.

There are all kinds of other uses. It has offered advice to me on programming language design, approaches to math proofs, etc.

Again, I don't NEED these things, but yes it makes it a lot easier to have that there.

Just like I don't NEED google. I could scan through hundreds of thousands of websites or hire someone else to do it. But it's just easier to do that myself.


> I don't see how AI, quantum and block chain are at all equivalent.

It's not that there is any claim to equivalency; it's that these are the technology trends that are most useful - the trend itself, never mind any sort of usable technology - for those who grift.


It's not an uncommon argument to try and draw parallels between these - X was a fairly useless, yet extremely speculative new tech scam, and Y is speculative new tech. Therefore Y is also a useless scam, QED.

There's a difference between something being an extremely hyped development and it being an actual grift down to the core. The internet was an extremely overhyped development, but ultimately not a grift. Cryptocurrency was, to a large extent, both. Whether generative AI is one or the other won't be apparent until a bubble truly starts growing.


Something can be both a good development and an overhyped grift. You have to keep in mind that for a lot of people and their businesses, the grift is literally the entire angle. Not building a technology. Not building a 100-year-old business. But getting rich as fast as you can while you have the opportunity, tech and business be damned. Ironically, both the real technologists and the grifters benefit from this preaching of misleading overstatements and half-truths from the rooftop. It's therefore tolerated, almost as a funding mechanism for the industry at large. macOS 15: same as it ever was, now with AI under the hood. Sounds like a good seller to me.


Blockchain is used as a stand-in financial system for those who don't or can't use the incumbent ones (for banking, commerce, gambling, and so on).

AI is nowhere close to delivering on its promises. But it's pretty useful for many tasks.

Quantum computers are vaporware. Quantum sensors are already here and game-changing. But just like AI, they have failed to live up to the hype.

---

Crypto is already here and makes good on its promises: decentralized finance.

Full disclosure: I do not own any crypto besides negligible sums in forgotten wallets.


Yeah, this is pretty spot on, unfortunately.

Most people don't really comprehend how much money there is in the hands of people at the top that just... falls down to whatever random stuff those people are getting worked up about at the moment. The vast majority of it ends up being nonproductive and it really does get allocated based on what those people see in their Twitter feeds. This is a much more pronounced problem than it was 20 years ago because of all the money printing governments have done, in general if you are connected to the government and banks, you will be the largest beneficiary of that type of action. None of this stuff is really subjected to market economics, it either flows through some kind of government/NGO bureaucracy or from someone who controls a monopoly or something similar to one. The waste and inefficiency in this modern pseudo-command-economy is mind blowing to behold.


And of course it's not only people getting worked up. There are grifters and plenty of nepotism. Sometimes people in positions of power expand the system solely to make contracts for people connected to them, ostensibly for some benefit, but with the amount of waste and unaccounted-for money once it leaves one org and enters another, intentions are sufficiently masked. And even if people's intentions were plainly unmasked, everyone at the top of the org is probably just as leveraged and isn't going to stop the music just to appease the little man that propaganda defanged 100 years ago in this country.


I'm just going to say it.

The author is an idiot who is using insults as a crutch to make his case.


The author’s Australian so I couldn’t help but read it in the voice of Robert Hughes, to narrate this new shock of the new.

“‘This is the real thing, this deserves respect!’. It isn’t and it doesn’t, and no one cares.”


This article is a great use of rhetorical delivery to drive home these messages.


this is so refreshing to see


What is AI? Artificial? Yes. Intelligence? Hell NO!!


This writeup is pure comedy (both in content and subject matter). Should be recommended reading for all those curious about the AI buzzword.


Probably the most entertaining thing about the article (aside from the article itself) is seeing a comment section full of Americans who have precisely zero understanding of Australian humour or writing style.

As a former-PhD-Data-Scientist-who-quit-the-industry-because-it-was-full-of-fraud-and-went-back-into-software-engineering-and-is-now-an-Australian-consulting-to-Americans, this is even more hurty. Someone sent this to me and I thought I'd dreamposted it.

Great article, the anecdotes physically pain me in the same way that watching Utopia does.


Similarly, as a Brit, the tone and humour is not unusual at all. Feels like someone blowing off steam down the pub with their mates.

It is totally lost on many here, some of whom equate the rant with serious threats of violence.

Personally I found it very funny and an accurate portrayal of industry trends.


It's mainly midwest Americans that like to tone police. People from the east coast, especially northern east coast, read this as comfort literature.


The editorialising of the title is amusing. The writer comes across like a teenager still developing his style.


The writer? Or the person editorializing the headline? :P


Yes, the writer :)


From what I have been seeing there are a few real use cases:

1. Customer service - seems like natural language processing by AI could be a better offering than someone manually trying to resolve problems. I have been in front of many CS agents who couldn't do what I wanted them to do. Untrained CS agents, non native speakers, or people who just don't care enough to help.

2. Internet search - I don't have to search through arbitrary articles and text to get the answer I want. Now it's not always accurate or the latest, but it's still better than scrolling through Google search (I feel bad for the publishers and writers, though - they aren't getting the same ad views as before, and that's clearly not ideal).

3. Summarizing - AI does a fabulous job here - TLDR and more.

4. Rewriting things to a better tone - AI is doing an amazing job here; every time I get stuck on how to write something, OpenAI has helped me. Now I don't use the output as it is, but it gives me an idea of how to write my own message.

5. NLP interface to devices / tools - I think this is a really valuable use case.

Almost everything I suggested here points to a $20 or a fixed monthly subscription for an individual user. I don't know if it's an "enterprise" need, except for the customer service use case.


Even if it’s meant as tongue-in-cheek, it comes off as dismissive and carries a certain smugness that I find particularly irritating.

And the smugness feels especially unfounded because the author clearly has decided to double down on their views even though it’s not at all evident that they’ve taken the time to interact with the tech as an end user (which millions upon millions have, last time I checked). They’ve just not gotten it, or: yes, it is you, not me.

This is just another form of rote, unthinking (anti-)hype. It reminds me of the smugness that Ballmer had towards the iPhone.


> Even if it’s meant as tongue-in-cheek,

It is not this.

There is actual anger there, he's pointing out WHY he's pissed.

> it’s not at all evident that they’ve taken the time to interact with the tech as an end user (which millions upon millions have, last time I checked)

He covers this point, in depth in the article. From a few angles.


This article makes me want to increase my AI usage hundred-fold.


Would be perfect if the article was written by gpt.


Nikhil is the Hambini of the AI industry.


https://www.linkedin.com/jobs/view/sc-dv-cleared-head-of-kno...

Don't know what to do with AI? Ah, fuck it, let's just shove it in with FOI and GDPR.


"With God as my witness, you grotesque simpleton, if you don't personally write machine learning systems and you open your mouth about AI one more time, I am going to mail you a brick .."

I really dislike articles that rage against (x) and try to appear smart and authoritative, telling you that everyone else is stupid and they know better, when deep down they are just anti-everything, because contrarianism is the real target here, not AI or anything else that articles like this talk about. The idea is to just say "no, it's not like that".


To me it sounds like a typical neckbeard internet power-trip.


Strangely, the most boring people are the ones that seem to hate the article.


You mean you prefer haters and jihadists so as not to be bored? Ok for game of thrones


Jihadists? Haters?

Are you stuck in 2005?


Hate is a human emotion and adjective in the English language. What are you talking about? Just trying to make the conversation ridiculous? Sorry for you


You said "haters" -- it's right there, even unedited.

Don't be daft. It's boring.


Wow! This article sure has had a roller coaster ride 8-/

While the title seems to express a widespread annoyance at the overwhelming prevalence of this subject in the tech news, the body of the article then goes on to dedicate itself to the exact topic the title claims it doesn't want to hear about.

I for one really do wish someone/something would piledrive this topic, but this article was just more of the same tripe...


The AI hype happened in close proximity to the decline of the crypto mania and so many mediocre grifters from crypto latched onto AI as a fraud life raft.


Brilliant


While I agree with what he's said, it's really a very violent read; I wish it weren't so.

Also, I attend an online Microsoft meeting every week where they inform the attending community about all the new changes and technology coming from MS.

I'm utterly sick and tired of hearing about "AI" and Co-Pilot from them. every. single. meeting. is 90% AI/Co-Pilot.

I'm over it. I'm burnt out on it as a product.


>I assume that anyone who has ever typed in import tensorflow is a scumbag

That's true, decent people write import torch.


Sometimes things really are as significant as they seem.


This person has significantly less ML experience than I do. I guess it's fine for me to totally dismiss their argument in the same way they dismiss anyone who has less experience than they do.


Thank you!


>started working as a data scientist in 2019

>Just Use Postgres, You Nerd. You Dweeb.

I'm really sick and tired of kids coming in and shitting on what we had to do to search a TB worth of data in 2009 (or 2004).

A computer in 2019 (or 2024) has enough power to run Postgres queries to extract statistics from columns.

Yeah, great.

Now try running that stack on an iPhone 7 and report your results back. We didn't create all that complexity for shits and giggles; we did it because it was at the edge of what was possible, and the companies that got it right made billions.
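For anyone wondering what "statistics from columns" means in practice, it's the kind of query below - trivial on today's hardware, genuinely hard at scale back then. This is just a sketch: the table, columns, and connection string are hypothetical, and psycopg2 is simply one common way to run it from Python:

  import psycopg2

  conn = psycopg2.connect("dbname=analytics")  # assumed connection string
  with conn, conn.cursor() as cur:
      # Count, mean, and p95 over a hypothetical events table.
      cur.execute("""
          SELECT count(*),
                 avg(duration_ms),
                 percentile_cont(0.95) WITHIN GROUP (ORDER BY duration_ms)
          FROM events
          WHERE created_at > now() - interval '30 days'
      """)
      total, mean_ms, p95_ms = cur.fetchone()
      print(total, mean_ms, p95_ms)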


> try running that stack on an iPhone 7

Why?


Because that's the average machine we had to use back then.


I think one of us misinterpreted this part. Isn't the author echoing other people's sentiments here?


I think the author disproves his own point by ridiculing the stupidity of human actors. AI doesn't need to become much more powerful to displace the idiots in management, product design, and marketing. Just wait until the keys are actually handed over. Dismissing things such as Copilot seems crazy to me; give it time. ChatGPT made Stack Overflow obsolete overnight. AI isn't going away anytime soon.


Have not read the article.

But the title (as phrased here on HN) is exactly how I felt at Google IO this year.


Woof. Lots of strong opinions and misinformation in this thread.

backs away slowly


good stuff


What if we mean “Apple” Intelligence LOL


ChatGPT, can you remove the unnecessary violence expressed in this article please?

Fucking internet.


it is refreshing, whether you agree 100% or not; the runaway hype of AI entraps many.

edit: those knocking his style, i think, may miss the point that he's acting. it's a show. no one is like that. (i hope)


What a shit blog in general.


Lol he fell for the data science meme


> Please don't mention AI again

Editorialized & not the title of the post.


You're right, but "I Will Fucking Piledrive You If You Mention AI Again" is definitely linkbait, so we had to change it by the site guideline "Please use the original title, unless it is misleading or linkbait; don't editorialize." - https://news.ycombinator.com/newsguidelines.html

I confess to being a bit self-indulgent in coming up with the edit—I thought it would be hilarious to flip it to something meek and passive aggressive.

Normally when we edit a title according to that guideline, we search earnestly for a representative and neutral phrase from the article itself. In this case "earnest" and "neutral" aren't much of a fit, so I felt it more in the authorial spirit to troll a little.


Ah didn't realize you modified it. That makes sense.

Since the original author submitted it, I thought it was misleading.


How did you decide that the original title is linkbait?


I decided it by spontaneous reflex. But we can unpack it reflectively like this:

> I Will

Ok in principle but already pattern matches to dodginess

> Fucking

That's a noise amplifier—can be ok if the rest of the title is whimsical, but in most cases it's just ponderous

> Piledrive

Wtf? that's extremely aggressive. "I will do to you what professional wrestlers do to those whom they are violently defeating, except without the training not to be terminally injured by it"

Hot language like that may be ok if it gets balanced by other things (but in this case there are no other things)

> You

"You" is a linkbait trope. In this case it doubles down on the aggression—not just "i will piledrive" but "I will piledrive you". What did you have to do with it?

> If You Mention

The superfluous you again, plus the word "mention"—what's wrong with mentioning things? This is a rhetorical trick to drive up the menace: "don't you dare mention $thing you piece of shit"

> AI

The commonest hot topic du jour. Ok, but there'd better be something substantive to balance the buzzword. Is there?

> Again

Rhetorical escalation. You are mentioning AI AGAIN? I will fucking piledrive you.

Conclusion: not a word in that title isn't linkbait. Take it all out and you end up with the empty string.


Ahahaha, please add this comment to the highlights.


um, excuse you, it is all about ANI/AGI/ASI now!

/s

on a serious note, to me this field was already in a quagmire of bs marketing the moment they named it "AI" almost a decade ago. in retrospect, it seems like naming it "data science" was the move that started it all. but then, political sciences get away with it - though i should not be picking multiple battles at the same time.

low-key grifting has been part of the r&d space in the corporate world for as long as i can remember. but lately i am afraid that the bubble will ruin it for all of us.


Deleted because it was unnecessarily inflammatory.


Funny how those academics and professionals like François Chollet and Yann LeCun are on the skeptic side, and a vast number of vocal proponents are old NFT pushers.


Or Geoffrey Hinton, or Ilya Sutskever, or Yoshua Bengio - oh look, those are already the three most cited researchers in deep learning. François and LeCun shouldn't be categorised together anyway: François is a reasonable person, and LeCun was the second-best head of AI at Meta until the best head of AI at Meta used theories Yann LeCun has railed against for years to release some of the best AI research Meta has ever released, only to get replaced as head by Yann because Yann was better at corporate politics.


No, actually, it isn't cute to threaten to break someone's neck if they mention a topic you don't like.


As opposed to the rampant _passive aggression_ that permeates every discussion and meeting of every workplace.

This entire comments section is exemplary of the industry at large. We're doing fucking _engineering work_ "professionally" and the bitching is 50:1 on his tone versus his actual ideas.

The entire professional class has been brought to its knees by a culture that demands we tiptoe around everyone's insecurities.


Things would be substantially more productive if boardroom meetings allowed more definitive responses than "We'll circle back to that later". This is the other extreme.


I hear you, the tone might be too much for some. But it's also part of what makes it so fresh, almost a McSweeney's article.


You're correct.

It would be so much bolder and more mature if he threatened to run with scissors or hold his breath until he turned blue ;)

He may have some good points, but the tone may detract more than it helps them stick.


Some of us won't ever know about his good points, since we tuned out (left his web page) due to the author's juvenile tone.


> In the case that the technology continues to make incremental gains like this, your company does not need generative AI for the sake of it. You will know exactly why you need it if you do, indeed, need it. An example of something that has actually benefited me is that I keep track of my life administration via Todoist, and Todoist has a feature that allows you to convert filters on your tasks from natural language into their in-house filtering language. Tremendous! It saved me learning a system that I'll use once every five years. I was actually happy about this, and it's a real edge over other applications. But if you don't have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving. Fix. Your. Shit.

This is purely wishful thinking. We're good at making rigorous well-organised stuff and we hope that will somehow continue to be a useful skill. But actually there's no evidence that a half-baked bullshit generator isn't going to outperform carefully written documentation, to the point that carefully writing documentation becomes a waste of resources.

> I swear to God, I am going to study, write, network, and otherwise apply force to the problem until those resources are going to a place where they'll accomplish something for society instead of some grinning clown's wallet.

Bet. Go on, actually have a go at doing this. You'll find it's much harder than you think and what looks like success will usually turn out to be actively counterproductive.


If you write the documentation as you write the code, it's not very time-consuming, since you kind of have to write up good notes anyhow to roadmap what the heck you are doing next. One issue is people are lazy and often code first, document later, which becomes a slog for a bad end product that ChatGPT might outperform.


i agree with all of the points just wish it wasn't couched in a "maddox / best page in the universe" writing style, circa 2007. can't be writing like that in 2024


Quite vulgar for a topic that doesn’t require any vulgarity. I was put off by the f bombs.


It doesn't require it, but it sure feels good!

Mostly the world has to sit back and suffer through this clown car of over-enthused business types promising us that their AI will make all our lives better. Mostly the world has to let this weirdly quasi-cultish hype run unchecked.

I'm frelling pissed about it. I frelling miss personal computing being an aspiration; I wish we were actually improving systems. Instead we're investing billions to make machines good bullshitters. It's enormously relieving seeing such rancorous disgust on display here.


Disgruntled aspy with no social awareness vibes.


Or just Australian.


Someone's angry and resentful at software developers and business people interested in AI. Ok. The author also takes some kind of authority stance as a "data scientist" being uniquely qualified to talk about the subject.

The article follows the format of sections titled with interrupted quotes from people "mentioning AI", with the author shutting them up. Let's see what amazing arguments they offer for "not talking about AI".

" "III. We've Already Seen Extensive Gains From-" When I was younger, I read R.A Salvatore's classic fantasy novel, The Crystal Shard. There is a scene in it where the young protagonist, Wulfgar, challenges a barbarian chieftain.... "

Just at a glance, we have business people on one side talking about income and how to increase it. On the other we have an enraged nerd talking about his favourite high-fantasy fake world. Gee, I wonder whose side I should be on, is this an ad for AI hype?

" Well this is me. Begging you. To stop lying. I don't want to crush your skull, I really don't.

But I will if you make me."

... Is OP threatening us?

I say this unironically now, OP: this sounds schizophrenic. I won't suggest you "take your meds", but at least consult a psychiatrist if you haven't already.


oh jeez stop it, its embarrassing.


> So business people are seeing increase income related to AI

They sure are spending a lot of money on graphics cards!



