The jump to AI capabilities from data-illiterate leadership follows such a familiar pattern...
It reminds me of every past generation's focus on the technology, not the underlying hard work + literacy needed to make it real.
Decades ago I saw this first-hand - I worked at a hardware company that tried to suddenly become a software company, without internalizing - at any level - what it actually takes to build software well. Leading, managing, and executing software can't be done just by applying your institutional hardware knowledge to a different craft. It will be a half-hearted effort at best, as the software craftspeople find themselves drawn to the places that truly understand and respect their craft.
There's a similar thing happening with data literacy, where the non-data-literate hire the data-literate but don't actually internalize those practices or learn from them. They want to continue operating like they always have and just "plug in AI" (or whatever the new thing is) without fundamentally changing how they do anything.
People want to have AI, but those companies' leaders struggle with a basic understanding of statistical significance and the fundamentals of experimentation, and thus essentially destroy any culture needed to build the AI-thing.
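(A concrete example of the kind of "basic fundamentals" I mean - the sanity check any experiment review should start with. This is a minimal sketch; the function name and the numbers are made up for illustration, not taken from any real codebase.)

```python
# Minimal sketch of a two-proportion z-test for an A/B experiment.
# Assumes Python 3.8+ (statistics.NormalDist). Illustrative only.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did variant B convert differently from A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return z, p_value

# 120/2400 vs 150/2400 conversions looks like a "25% lift", but:
z, p = two_proportion_z_test(120, 2400, 150, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.06: not significant at 0.05
```

The point isn't this particular test - it's that leaders who've never internalized why that p-value matters will happily ship the "25% lift."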
Do they struggle with the basics, or do they just not care?
I'm in a similar situation with my own 'C-suite' and it's impossible to try and make them understand, they just don't care. I can't make them care. It's a clash of cultures, I guess.
TL;DR: Yes, and I think that's why some of these comments are so hostile to OP.
> it's impossible to try and make them understand, they just don't care. I can't make them care. It's a clash of cultures, I guess.
That seems to be what OP's cathartic humor is about. It's also (probably) a deliberate provocation since that sub-culture doesn't deal well with this sort of humor.
If that's the case, you can see it working in this thread. Some of the commenters with the clearest C-suite aspirations are reacting with unironic vitriol as if the post is about them personally.
I think most of those comments already got flagged, but some seemed genuine in accusing OP of being a dangerously ill menace to society, e.g. "...Is OP threatening us?"
In a sense, OP is threatening them, but not with literal violence. He's making fun of their aspirations, and he's doing so with some pretty vicious humor.
I think it's a bit reductive to flatten the conversation so much. While I don't have as extreme a reaction as the people you're talking about, the post left a bit of a sour taste in my mouth. Not because I'm one of "those people" - I agree with the core of the post, and appreciate that the person writing it has actual experience in the field.
It's that the whole conversation around machine learning has become "tainted" - mention AI, and the average person will envision exactly the type of evil MBA this post is railing against. And I don't want to be associated with them, even implicitly.
I shouldn't feel ashamed for taking some interest and studying machine learning. I shouldn't feel ashamed for having some degree of cautious optimism - the kind that sees a slightly better world, and not dollar signs. And yet.
The author here draws pretty clear lines around who they're talking about - but most readers won't care, or even read that far. And how emotionally charged it is leads me to think there's some further discontent here, not just the anti-C-suite rhetoric that almost everyone but the actual C-suites can get behind.
> the post left a bit of a sour taste in my mouth [...] And how emotionally charged it is leads me to think there's some further discontent here
I think part of the problem is that it's generally futile to judge the mental state or hidden motivations of some random person on the internet based solely on something they've written about a particular topic. And yet, we keep trying to do that, over and over and over, and make our own (usually incredibly flawed) judgments about authors based on that.
The post left a bit of a sour taste in my mouth too, mainly because as I've gotten older I don't really enjoy "violence humor" all that much anymore. I think a big part of that is experience: experiencing violence myself (to a fairly minor degree, even), and knowing people who have experienced violence makes joking about violence just not feel particularly funny to me.
But if I step back a bit, my (probably flawed) judgment is pretty mild: I don't think the author is a violent person or would ever actually threaten or bring violence upon colleagues. I'm not even sure the author is anywhere near as angry about the topic as the post might lead us to believe. Violence humor is just a rhetorical device. And just like any rhetorical device, it will resonate with some readers but not with others.
> I think it's a bit reductive to flatten the conversation so much.
Is that because I added a TL;DR line, or my entire post?
> I shouldn't feel ashamed for taking some interest and studying machine learning. I shouldn't feel ashamed for having some degree of cautious optimism - the kind that sees a slightly better world, and not dollar signs. And yet.
I agree with this in general. I didn't mean to criticize having interest in it.
> And how emotionally charged it is leads me to think there's some further discontent here
Do you mean the discontent outside the C-suite? If so, yes, I agree with that too. But if we start discussing that, we'll be discussing the larger context of economic policy, what it means to be human, what art is, etc.
> Is that because I added a TL;DR line, or my entire post?
The TL;DR was a fine summary of the post; I was talking about the whole of it. Though, now that I re-read it, I see that you were careful not to make blanket generalizations - so my reply was more of a knee-jerk reaction to the implication that most people who oppose the author's style are just "temporarily embarrassed C-suites", unlike the sane people who didn't feel uncomfortable about it.
> I didn't mean to criticize having interest in it.
I don't think you personally did - I was talking about the original post there, not about yours. The sentiment in many communities now is that machine learning itself (or generative AI specifically) is overhyped and useless - a well that's basically run dry - and there's no doubt that dislike of the financial grifters is what started their disdain for the whole field.
There is a decent chance that, yes, this rant is quite literally aimed at the people that frequent Hacker News. Where else are you going to find a more concentrated bunch of people peddling AI hype, creating AI startups, and generally over-selling their capabilities than here?
Senior management's skill set is fundamentally not technical competence, business competence, financial competence, or even leadership competence. It's politics and social skills (or, less charitably, schmoozing). Executives haven't cared about the "how" of anything to do with their business since the last generation of managers from before the cult of the MBA aged out.
The issue is that it's not just technology, but absolutely anything that could loosely be described as an expert-client relationship. Management always budgets less time and money than anyone with expertise in the subject would consider necessary. So most things are compromised from the outset, and when they succeed anyway, it's only because uncredited experts moved heaven and earth to overcome such an obstinate manager. It's no wonder most businesses fail.
This is a common problem across all fields. A classic example is that you don't change SAP to suit your particular business, but instead you change your business to suit SAP.