Hacker News | jxjnskkzxxhx's comments

There's no such thing as a robotaxi, it's just called a taxi. People who want to take taxis already take taxis.

Making money by selling products is a lot harder than by selling hype. Throw this Friday in jail.

If BYD and China are dominant in the EV (hardware) war, then the long term strategic play is to dominate the FSD (software) war. Whether or not this is a bold strategy or massive mistake is yet to be determined.

The only deployed fully autonomous passenger vehicles are robotaxis with lots of sensors operating in carefully managed service areas. There is no "FSD war."

This actually reinforces my point. Getting real-world momentum on FSD software is exactly what Elon wants. He wants to be ahead on this technology curve before anyone else attempts to scale it outside of taxis.

What he "wants" is for the hype to continue to prop up a pumped share price that looks increasingly vulnerable, never mind his own experts saying it is financially incoherent.

They already hobbled themselves by limiting their FSD to computer vision; there's no chance of dominating the FSD market when competitors are already level with them or superior.

I agree that a purely visual approach is weaker; however, the new vehicles are expected to have many more cameras and to use structured lighting to identify objects in near real time. I won't count it out, as that approach is an order of magnitude cheaper to scale if it gets approved.

BYD already have a FSD competitor. It's not that clear that Tesla will win that one.

Here's some funky footage of a driverless thing mucking around on a race track: https://youtu.be/J_c2gsxImjA?t=53. Not sure how genuine it is, but they are definitely developing stuff. Also, re their B system:

>It has been tested on real roads for more than a year and has driven across China with no human intervention in testing using a BYD Denza sedan.

Compare to Musk:

>One of the more wild claims Elon Musk has made — regarding Tesla, that is — was back in 2016 when he said a self-driving Tesla would be able to go from Los Angeles to New York City “in ~2 years” (in early 2018)

which I think still hasn't happened?


You're assuming that the media is selecting what news they're reporting based on how they want you to think. Maybe they're just reporting something that they believe people would be interested in.

Quantity has a quality of its own.

We have a formal description of this; it's called Bayes' theorem.

The Jordanian doctors just had poor priors.


In this particular case, "when you see stripes with hoofbeats, think zebra, not horse with a paintjob"
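The priors point above can be sketched numerically. A minimal Bayes'-theorem calculation, with made-up test figures (all numbers here are hypothetical, purely to show how the prior dominates the posterior):

```python
# Bayes' theorem for a diagnostic test: P(disease | positive test).
# sensitivity = P(positive | disease), fpr = P(positive | no disease).
def posterior(prior, sensitivity, fpr):
    p_positive = sensitivity * prior + fpr * (1 - prior)
    return sensitivity * prior / p_positive

# A near-zero prior ("it's never a zebra here") suppresses even strong
# evidence; a realistic prior lets the same evidence dominate.
print(posterior(0.0001, 0.9, 0.05))  # tiny prior: posterior stays small
print(posterior(0.05, 0.9, 0.05))    # realistic prior: posterior is large
```

Same test, same evidence; only the prior changed, and the conclusion flips.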

What if it was really reliable? Would you still be against it?

The question doesn't make sense, IMO, because it (meaning a neural network or other ML computer-vision classifier) doesn't have a mechanism to be trustworthy. It's just looking for shortcuts with predictive power; it's not reasoning, it doesn't have a world model, it's just an equation that mostly works, and so on: all the stuff we know about ML. It's not just about validation-set performance, either. You could change the lighting or some camera characteristic, or encounter an unusual mole shape, and suddenly get completely different performance. It can't be "trusted" the way a person can, even if the person is less accurate.

These limitations are often acceptable, but I think that as long as it works the way it does, denying someone a person looking at them in favor of a statistical stereotype should be the last thing we do.

I could see it if this were in a third-world country and the alternative were nothing, but in the developed world the alternative is less profit or fewer administrators. We should strongly reject outsourcing the actual medical-care part of healthcare to AI as an efficiency measure.
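The "shortcut with predictive power" failure mode described above can be shown with a toy sketch. All data here is made up: a classifier that latched onto image brightness scores perfectly on its validation distribution, then collapses when the lighting changes.

```python
# Hypothetical validation set: (brightness, label). By accident of the
# training data, benign lesions were photographed under bright light and
# malignant ones under dim light.
val_set = [(0.8, "benign"), (0.9, "benign"), (0.2, "malignant"), (0.3, "malignant")]

def classify(brightness):
    # The "model" only ever learned the lighting shortcut.
    return "benign" if brightness > 0.5 else "malignant"

val_acc = sum(classify(b) == y for b, y in val_set) / len(val_set)

# Same lesions, new clinic with dimmer lamps: every image is 0.5 darker.
shifted = [(max(0.0, b - 0.5), y) for b, y in val_set]
shift_acc = sum(classify(b) == y for b, y in shifted) / len(shifted)

print(val_acc, shift_acc)  # 1.0 on validation, 0.5 after the shift
```

Validation metrics alone can't distinguish this shortcut learner from a model that actually looked at the lesion.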


I understood that you don't believe it can be made reliable. But my question was: what if it were?

Let me put it differently. Suppose I don't tell you it's ML. It's a machine whose inner workings you don't know, but I let you run all the tests you want, and the results turn out to be great. Would you then still be against it?


If my grandmother was a tractor, would she have wheels?

How trustworthy really are humans?

If this is a concern you have, you should use software that works for you.

A lot of people who disagree with me also happen to be profoundly immature children. I didn't say that one follows from the other; you added that.

We have laws, yes.

Yes, we have laws. When should the law intrude on the private transaction of two parties? Typically, the law holds both parties to their contractual agreement. If those two parties have contracted to abide by the output of an algorithm, can the law distinguish good faith manipulation of algorithmic inputs to benefit oneself from bad faith manipulation of algorithmic inputs to benefit oneself? Given that the whole point of a smart contract is to encode the terms of the agreement as code, when is it appropriate to step in and alter that agreement?
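The point about the law being unable to read intent out of algorithmic inputs can be sketched concretely. A toy settlement function (hypothetical, not any real protocol or contract language) executes identically whether the number it receives was an honest market price or a briefly manipulated one:

```python
# A "smart contract" payout encoded as code: pay the difference above a
# strike price, scaled by a notional amount. The contract sees only the
# reported number; nothing in the code can distinguish good-faith from
# bad-faith inputs.
def settle(reported_price: float, strike: float, notional: float) -> float:
    return max(0.0, reported_price - strike) * notional

honest = settle(105.0, 100.0, 10.0)       # honest market price
manipulated = settle(140.0, 100.0, 10.0)  # briefly pumped oracle price
print(honest, manipulated)  # 50.0 400.0 -- same code path, different payout
```

Any distinction between the two cases has to come from outside the code, which is exactly where the question of legal intervention arises.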

It always blows my mind to see the anti-intellectualism of HN.


