
I see lots of passages that scream AI. Some selections:

> Retailing for $40,000 (over $100,000 today), it pushed the boundaries of what CRTs could achieve, offering professional-grade performance.

"Professional-grade" huh? There are professional TV watchers? It's not a studio reference monitor. It's just a regular TV but bigger.

> The urgency was palpable.

Where does one palpate urgency?

> Against the odds, Abebe found the CRT still in place, fully operational, and confirmed that the restaurant owner was looking for a way to get rid of it.

We establish later that it wasn't fully operational at all. And what odds? We didn't establish any. The TV is rare, and we later establish that the original owner knew it.

> What follows is a race against time to coordinate the TV's extraction, involving logistics experts, a moving team, and a mountain of paperwork.

> Abebe, the man who made the rescue possible, turned out to be the director of Bayonetta Origins: Cereza and the Lost Demon. His selfless dedication during the final months of the game’s development exemplifies the power of shared passions.

Cool detail, but irrelevant, even if followed by breathless admiration fluff.

> This story isn’t just about a TV; it’s about preserving history and celebrating the people who make it possible.

I don't recall anybody being celebrated. They got a cool TV. Cool.


> I don't recall anybody being celebrated. They got a cool TV. Cool.

The original video gives plenty of appreciation to the people who made moving it possible, the shop owner, and the people who restored it to perfect working condition.


The TV was bought and used by a business; it doesn't get much more "pro" than that (someone should remind Apple's marketing team). But we could argue about semantics all day; humans make vaguely inaccurate statements all the time.

> Where does one palpate urgency?

Most frequently in metaphors. https://books.google.com/ngrams/graph?content=urgency+was+pa...


> Where does one palpate urgency?

Ask a urologist!


I didn't notice that this was AI myself. I tend to start skimming when the interesting bits are spread out.

There are two variations of this that are very common:

* Watering down - the interesting details are spread apart by lots of flowery language, lots of backstory, rehashing and retelling already established points. It's a way of spreading a cup of content into a gallon of text, the same way a cup of oatmeal can be thinned.

* High fiber - Lots of long-form essays are like this. They start with describing the person being interviewed or the place visited as though the article were a novel and the author were paid by the word. Every person has some backstory that takes a few paragraphs. There is some philosophizing at some point. The essay is effectively the story of how the essay was written and all the backstory interviews rather than a treatise on the supposed topic. It's basically loading up your beef stew with cabbage; it is nutritive but not particularly dense or enjoyable.

Both are pretty tedious. AI can produce either one, but it can only hallucinate or fluff to produce more content than its inputs. As such, AI writing is a bit like a reverse-compression algorithm.


Not saying that's impossible. Certainly a ghost gun offers an advantage to corrupt government agents just as it does to criminals.

But it's going to be difficult to make it stick against a patsy.

What will be very suspicious is if he dies in custody before trial. Otherwise, at trial, I expect:

* He has no alibi.

* The ballistic evidence matches the gun.

* He has possession of additional ammunition from the same lot.

* He was in fact a client of UHC with a substantial claim denial history.

* His computers/phone show that he was cyber-stalking the victim.

If they have all that, I don't think a reasonable person could believe that the FBI crime lab and Google could be coerced into a grand conspiracy to fabricate evidence.

If it turns out that the gun is a "2nd ghost gun", and the prosecution claims that he ditched the 1st gun and the ammo, and his alibi is "weak", and he cleared his digital history, that would be a much weaker, more suspicious case.


> he was in fact a client of UHC with a substantial claim denial history

It doesn't appear he was [1].

[1] https://apnews.com/article/luigi-mangione-united-healthcare-...


If you are really sweating the problem, you are sweating only one problem.

But an organization usually has more than one problem that needs to be addressed concurrently, unless it wants to constantly be "fire fighting" problems that could have received partial attention earlier but instead have become emergencies.

So this guy is going to want to sweat more than one problem, which really just means doing a normal job of prioritizing things, but with more sweat. Just remove the L & B from WLB, permanently. Sort of like this boss imagines he does by writing emails at his family's events, but without the extreme upside potential afforded to executives.


I feel Google, Facebook, etc. all need to set up actual phone numbers and chat rooms, and make them rank highly on searches for "Google support phone number", "Google fraud department", "Google account recovery department", "Google Live Support Chat", etc.

Then those numbers should simply play a message that this is the only official phone number, that no human will ever call from or answer this number, and that the company does not offer customer support or an appeals process for account problems.

They also need to make searching for fraud phone numbers return anti-fraud messaging rather than what it currently does. Seems like the entire 844-906 exchange is fraudulent [1].

I had a family member who just got scammed because they panicked after their Facebook account got banned, basically exactly like [2].

[1] https://www.google.com/search?q=844-906

[2] https://www.npr.org/sections/alltechconsidered/2017/01/31/51...


“If you receive a call from Google, ask them for your issue ID, hang up, and call 1-1-GOO”

1-1-GOO: “Google never contacts customers directly and has no way to let customers contact them. Have a nice day”


Or, hear me out: provide actual customer support.

To 4+ billion customers. Not possible at any realistic company size.

If you or any person figured out how to do such a thing you’d be the next trillion $ company.


If your scaling requires you to ignore some laws and regulations, maybe your scaling is just a wet dream that should not become reality, and attempting it anyway should be punished. That's just a cost of doing business.

No laws or regulations were broken. If your desire to punish things which anger you means you ignore laws, maybe the problem isn’t the company.

Considering their profit is on the order of 100 billion, providing proper customer support does actually seem entirely realistic.

Meta net profit 2023 was $40B. Revenue is not profit.

AT&T has 100M customers, 40k customer service reps, avg wage $20/hr full time, i.e., $40k/year each.

If you simply scale this, 5B customers would need 2M customer service reps. That’s $80B in wages. With no legal need, and since people use their services by choice more than just about any other company’s product on the planet, it would seem they’re doing just fine.
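A quick sanity check of that arithmetic, as a minimal Python sketch (the AT&T figures are the ones quoted above, taken at face value, not audited numbers):

    # Scale AT&T-style support staffing to a 5B-user service.
    # Inputs are this thread's figures, taken at face value.
    att_customers = 100_000_000
    att_reps = 40_000
    wage_per_rep = 40_000  # $20/hr full time ~= $40k/year

    customers_per_rep = att_customers / att_reps  # 2,500

    target_customers = 5_000_000_000
    reps_needed = target_customers / customers_per_rep  # 2,000,000 reps
    wage_bill = reps_needed * wage_per_rep              # $80B/year

    print(f"{reps_needed:,.0f} reps, ${wage_bill / 1e9:.0f}B/year in wages")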


Are they really customers if they don’t pay?

If they're not customers then there's really no need for customer support, right? So why would they complain? :)

The corps want you to believe that but it's not true.

India requires direct customer support by law.


It’s math. They don’t need me to believe anything.

They provide support in India to meet local laws.


Nonsense. It's (moderately) expensive; it's a cost. It's far from impossible, the proof being that huge companies did and do provide customer support.

Big tech loves "stripping unnecessary fluff" and "being efficient". Turns out the "unnecessary" stuff is there for a reason. The automatic management + zero customer support is dystopian to say the least.


Do the math (which I did for another in this post).

Saying nonsense to basic arithmetic is ignorant.

Don’t like the service, don’t use it. Same as every company.


How many billions do they make an hour? How many human hours of Indian and Philippine wages can they pay?

Users in low-wage countries with minimal profit per customer don’t preclude US/Canadian tech support, where they get 20+x the revenue per user.

They are making $10+/month per user for a few hundred million users, and have a large profit margin that easily pays for basic tech support.


That's a consequence of growth they should have thought of and a basic part of running any business.

At least in the US, attorneys general are being forced to do this work for them. It's essentially the only way to get a hacked Facebook/Instagram account recovered.

https://www.engadget.com/41-state-attorneys-general-tell-met...


No, attorneys general chase things that raise their political stature. They’re political. Every single one of the 50 is a Democrat or Republican: 0 independents.

They’ll make noise about this because it riles a loud minority. If they really wanted it fixed, they’d pass laws. They don’t, because they also like business.


"We’re the search company. We don’t care ; we don’t have to".

Where do you think Google would rank its own support, help, etc., contact pages and info if not at the top of searches like the ones you mentioned?

The problem is the subjunctive here.

It's not where they _would_ rank ... it's where they currently _do_ rank.

In my test, the AI Overview produced accurate information ("Google does not offer phone support for account recovery") but none of the other hits on the first page say anything like "Phone support calls are always fraud. Google will not call you."


I think the point they are making is that Google will let the fraudsters pay to place higher than the warnings because it's profitable to do so.

If there is only one time they would honor their fair market obligations and not raise their own rankings, it would be on a cost center like free tech support to consumers.

I have some sympathy for your argument, but I think you are fundamentally misunderstanding the power dynamic between citizens and criminals.

Some of the petty thieves will think twice if they hear about other thieves getting beat up. Many of them will simply respond with violence.

Look at Latin American countries where thieves will shoot you dead for an iPhone.

The bicycle thieves are going to steal no matter what. They have to score their next hit. Better that they can do that armed only with an angle grinder rather than a pistol, too.

And if someone decides to turn a bicycle theft into a murder, well, the bicycle thief can usually "live off the land" much more easily than you can. When you are used to living on the street and all you need is your next hit, it's much harder to catch you for murder, even if you can be identified.

In a fight where you have more to lose, are an order of magnitude more likely to be held accountable, and your opponent is irrational, effectively anonymous, and probably more practiced in violence than you, escalation seems unfavorable even if it leaves you with a shitty feeling.


This is a pretty cynical take, but I would think that having AI management would be highly undesirable for companies, and not because it would be bad at managing.

Even in good, reputable companies, there is a certain amount of legally/ethically dubious behavior that is nonetheless desirable.

An H1B candidate for a position has been found, but it must be demonstrated that there is no local candidate for that position. Every local candidate must fail the interview, whether or not that is fair.

You have a small team. You've hired someone good at their job, but over lunch, they've mentioned they plan to have 10 children, so they will be on parental and FMLA leave for 3+ months a year indefinitely. You need to find a problem with this person's performance.

You have a team of developers. One of them has done a great job this past year, but the project they are working on and their specialization is no longer needed. It would not be fair to them to give them a middling performance review, but it's in the company's interest that the limited compensation budget goes towards retaining someone with skills aligned to the future direction.

An AI would have any unethical or illegal prompting exposed for any court to examine. Likewise, there would be little reason not to maintain a complete record of everything the management AI is told or does. One could design an AI that leadership talks to off the record, which then manifests its instructions in its state, and could then lie about (or be unable to prove) its instructions later. That would then be similar to a human manager.

But I don't think any court would accept such an off the record lying AI. So an AI probably can't keep any secrets, can't lie for the company's benefit in depositions or court, and can't take the fall for leadership.


You know… all the things you mention are actually bad. I want them to stop, for the sake of our society. If the price for that is getting rid of human managers with a broken moral compass such as yours, I’m all for it.

Here's the thing. You assert confidently that GP is acting on a "broken moral compass". But you can also make the case that it is moral to act in the interest of the company: after all, if the company fails, a potentially large number of people are at risk of losing their household income (and, in broken economic systems, also stuff like health insurance).

That's just the slippery slope of neoliberalism. The ends do not justify the means, no matter how you spin them: a company will not fail if you continue to employ parents of many children, employ a local candidate, or write fair performance reviews regardless of strategic goals. If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failure in the first place.

A company is literally a group of people working towards the same goal. The people are just as important as the goal itself, it's not a company otherwise.


Why are you switching between corporations and companies as if they're the same?

I actually do know of a small company that was quite badly screwed over by a vindictive employee who hated her boss, deliberately did not quit because she knew she was about to have another child, got pregnant, and then disappeared for a year. Local law makes her unfireable almost regardless of reason (including not actually doing any work), and then gives her three months' maternity leave too. So she basically just didn't work for a year. She said specifically that she did it to get back at her boss; she didn't care about the company or its employees at all.

For a company of that size, something like that can put it in serious financial jeopardy, as they're now responsible for paying a year's salary for someone who isn't there. Also, they can't hire a replacement, because the law also guarantees the job is still there after maternity leave ends.

> If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failing in the first place.

This kind of thinking has caused ruin throughout history. Companies - regardless of size - aren't actually piñatas you can treat like an unlimited cash machine. Every such law pushes a few more small companies over the edge every year, and then not only does everyone depending on that company lose, but it never gets the chance to grow into a big corporation at all.


Where did this happen? Typically the government covers some or all of the parental leave costs where it is mandated, and while a company can't fire her, they are allowed to hire someone to do the job in the meantime with the money they would have paid her. It's obviously not ideal, but it's hard to imagine it is screwing the company over all THAT badly.

In Finland parental leave is not fully covered by the government. So you get to pay both the original worker and their temporary replacement.

It's okay for unprofitable companies to fail. Desirable, in fact.

No, it's desirable for them to become profitable and successful again, especially if the only reason they're unprofitable is people abusing the rules to extract capital from them unsustainably.

> especially if the only reason they're unprofitable is people abusing the rules to extract capital from them unsustainably

Employees don't extract capital from companies, especially unsustainably.

Executives and Boards of Directors do though


Sure they do. Unions, abuse of other worker-rights laws, and voting in socialist parties that raise corporate tax rates to unsustainable levels are all exactly that, and have a long history of extracting so much that the companies or even entire economies fail. Argentina is an extreme example of this over the past 100 years, but obviously there are many others.

You don't think AIs can be trained to lie? Odd, given that a major research area right now is preventing AI from lying. They do it so confidently now that nobody can tell.

I don't think that an AI would be interrogated in court.

I think that it would be hard to hide all the inputs and outputs of the AI from scrutiny by the court and then have the company or senior leadership be held accountable for them.

Even if you had a retention policy for the inputs and outputs, the AI would be made available to the plaintiff and the company would be asked to provide inputs that produce the observed actions of the AI. If they can't do that without telling the AI to do illegal things, it would probably result in a negative finding.

----

Having thought a bit more, I think the model that we'd actually see in practice at first is that the AI assists management with certain tasks, and the tasks themselves are not morally charged.

So the manager might ask the AI to produce performance reviews for all employees, basing them on observable performance metrics, and additionally for each employee, come up with a rationale for both promoting them and dismissing them.

The morally dubious choices are then performed by a human, who reviews the AI output and collects or discards it as the situation requires.


I think you're suggesting there is some cognitive dissonance there. I think there's some truth to that, but it's also ignoring a true difference.

Personal finances can be viewed (somewhat incorrectly) as not being zero-sum. Me making more for my work or investments seems like it doesn't take from anyone else.

While a CEO deciding that AI should handle as much of the labor in a company as possible seems like a decision that benefits the company and its shareholders directly at the expense of its workers.

I think in actual fact both sides here are zero-sum, but when the worker makes more personally, there are only diffuse and marginally-affected losers (the company, its shareholders, consumers and customers experiencing higher prices, etc.). The company's actions would hurt people who can be directly named and are terribly affected.

It's unfortunately the difference between stealing 5¢ from 10,000,000 people and stealing $100k from 5 people: the same $500k total either way.


I don't think I am. You put the facts pretty succinctly. I just evaluate them differently.

Having thousands of smart humans optimizing for their personal goals in their individual ways, at the expense of company goals, is an issue that exists in every company, bankrupts companies, and is super frightening to deal with as a CEO.

People are not obviously more noble when working towards personal goals instead of company goals, and a lot of people working towards their personal goals instead of company goals is a serious issue for any company. Not having a single entity and one big number to deal with actually makes it much more powerful and scary.


Then you might think they'd be scared of handing over the keys to their company to an inscrutable AI working towards OpenAI's goals, but I guess the money is too good.

> While a CEO deciding that AI should handle as much of the labor in a company as possible seems like a decision that benefits the company and its shareholders directly at the expense of its workers.

Many businesses are low profit margin with very price sensitive customers. There is reasonable concern that if they don’t follow competitors’ efforts to reduce pricing, the whole business might fail.

See outsourcing textile manufacturing and other manufacturing to Asia. See grocery stores that source dairy and meat from local producers only rather than national operations with economies of scale. See insurance companies where the only concern is almost always lowest premium, not quality of customer service or claims resolution.



The sun is damaging because its light contains ionizing radiation (radiation that is powerful enough to directly dissociate a molecule into ions). This is the UV portion of sunlight.

UV starts at roughly 800,000 GHz (800 THz).

The 6 GHz being discussed here is completely non-ionizing, not even comparable to UV.

The only concern with 6 GHz is that it can also cause dielectric heating, the same mechanism a microwave oven uses. But again, at 25 mW, you can't even feel the heat from direct contact with the antenna, let alone a few meters away. Your exposure follows the inverse-square law [1], which means that it drops with the square of the distance. So if it's not a problem at 10 cm, it's 100x less of a problem at 1 m.

[1] https://en.wikipedia.org/wiki/Inverse-square_law
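To put numbers on that falloff, here's a minimal Python sketch, assuming the 25 mW radio is an ideal isotropic point source (a simplification; real antennas have directional gain, but the inverse-square scaling is the point):

    import math

    TX_POWER_W = 0.025  # 25 mW, as discussed above

    def power_density(distance_m):
        # Transmit power spread over a sphere of radius distance_m (W/m^2)
        return TX_POWER_W / (4 * math.pi * distance_m ** 2)

    for d in (0.1, 1.0, 3.0):
        print(f"{d:>4} m: {power_density(d) * 1000:.3f} mW/m^2")
    # 0.1 m: ~198.944 mW/m^2
    # 1.0 m: ~1.989 mW/m^2  (100x less at 10x the distance)
    # 3.0 m: ~0.221 mW/m^2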

