
One of the things this election, and the last four years, has shown me is the impact of technology on our political discourse. It hasn't been a "good" insight.

Federal Election records indicate that the leadership of technology companies overwhelmingly supported a set of values that their employees appear to just as overwhelmingly reject. That dissonance is really quite remarkable.

One of the fundamental challenges facing the new administration, as I see it, is how to address this. I am not a fan of government regulation; I'm more of a "let the markets decide" kind of guy. I recognize, however, that there is a systemic risk: the technology's "owner" can exploit it to harm society in a way that was previously impossible.

While the country has focused on disinformation and hate speech, consider a company that controls, in real time, the self-driving software in a car. Such a technology, if weaponized, could kill hundreds of thousands of people. Or it could be weaponized to kill select people who happen to be both riding in a self-driven car and in disfavor with the company that really controls that car.

Early in my career I chose not to work for military contractors, who were the "big" employers in Los Angeles when I graduated from college. They were clearly engaged in finding clever, or at least more effective, ways to kill people, and that wasn't where I wanted to spend my time. But what about someone working on self-driving? It saves lives by avoiding some of the things that end up resulting in crashes. But it also provides an opportunity for great evil.

Do we trust the person in charge to not be evil? Companies change, and while they might declare their intentions when they are growing, what happens when they are on top? Sometimes those quaint notions get cast aside when being a little bit evil makes what you're trying to do that much easier or maybe that much more profitable.

What is the appropriate response, in a democracy, to a small number of people controlling a potential weapon that can destroy that democracy? Sadly this is no longer an idle question.

I don't have any answers here, just more questions.




> Such a technology, if weaponized, could kill hundreds of thousands of people.

Try and drive a Tesla through a red light or through a person that it clearly sees. Try and drive a Tesla fast without being in it. It just doesn't work.

On the other hand, nothing stops someone from attaching a simple remote control system to a regular truck and driving through a crowd of people.

Nothing stops someone from making an aimbot for a machine gun. That can kill a lot of people fast.

https://www.youtube.com/watch?v=6QcfZGDvHU8


Interestingly, I avoided injury from a Tesla that was approaching an intersection I was crossing while its driver was distracted. I heard the tires screech and looked over to see a driver who was clearly panicked, both because they realized they had nearly hit someone and because their phone was now nowhere to be found.

That is a great thing.

But what if someone inside Tesla, the folks who write the software and send it over the air to your car, decided to add their own "feature" such that they could override the safety system with a network packet, or a text message to the car? That person would be in a position to tell the car to ignore its safety systems and kill its occupant.

It is kind of a staple of dystopian fiction, but now that we're getting closer to having that software out there in real time, the question becomes less fiction and more "What would have to be true for that to happen?"

The change here is cars that are always connected to the Internet and have command authority over all of the systems in the car. Before, if you wanted to alter the software controlling a car, you took it to a dealer or a mod shop; now it just shows up in your car. What sort of insider threat programs does Tesla have? What sort of controls are there on releases? Do third parties have any opportunity to audit everything in the code? What happens if Elon orders an employee to put some code in? Do they do it? Do they report it?
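To make the "controls on releases" question concrete: one standard mitigation for the insider-threat case is to require that more than one independently held release key sign a build before a vehicle will accept it, a two-person rule applied to OTA updates. The sketch below is purely illustrative Python (using the cryptography package); it is not Tesla's actual process, and the quorum policy and key handling are assumptions made for the example.

    # Hypothetical OTA gate: install an image only if a quorum of
    # independent release authorities have signed it, so no single
    # insider can push code alone. Not any vendor's real process.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def update_is_authorized(image: bytes,
                             approvals: list[tuple[bytes, bytes]],
                             quorum: int = 2) -> bool:
        """approvals is a list of (public_key_bytes, signature) pairs,
        one per release authority that signed this exact image."""
        valid_keys = set()
        for pub_bytes, sig in approvals:
            key = ed25519.Ed25519PublicKey.from_public_bytes(pub_bytes)
            try:
                key.verify(sig, image)
                valid_keys.add(pub_bytes)  # count each key at most once
            except InvalidSignature:
                continue
        return len(valid_keys) >= quorum

A real system would also pin the trusted keys in hardware, log every install decision, and stage rollouts, but the point is that "what controls are there on releases?" can have concrete, auditable answers.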

And not to pick on Tesla here; the same goes for GM/Cruise and Waymo, right? ALL self-driving systems currently in development have full control of the car, an always-connected component, and a dynamic software update capability. Should they be required to also have a mechanical switch that forces manual control, without any means of circumventing the switch in software? That would have to come as a government regulation, right? And then what happens when someone steals the Waymo van by throwing the switch and driving away with it?

Hopefully that gives you a sense of where my head is here in these questions.


Our society is built on Trust. You have to trust your bank to keep your money in your name and allow you to withdraw it when you want. You have to trust that the fire truck will come when you have a fire. You have to trust that your neighbours won’t randomly wake up one day as zombies thirsty for your blood.

In the same way, you ought to trust, but not blindly. Trust, but verify.

I think the self-driving game will be won by someone who has a transparency mindset. Here’s the car. Here’s the software. Here’s the guarantee that it hasn’t been modified, and here are the extensive tests that we’ve run on it, verified by third parties vouching for its safety and quality.
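One concrete form that third-party verification could take is a reproducible-build check: an auditor rebuilds the published source and confirms the result matches, byte for byte, the image that was actually shipped to cars. The build command and file layout below are invented for illustration; this is a sketch of the idea, not any vendor's real tooling.

    # Hypothetical reproducible-build audit: the shipped image should be
    # bit-identical to what an independent party can build from the
    # published source. Paths and build command are made up.
    import hashlib
    import subprocess

    def sha256_of(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def release_matches_source(source_dir: str, shipped_image: str) -> bool:
        # Rebuild from the audited source tree (illustrative command).
        subprocess.run(["make", "-C", source_dir, "image"], check=True)
        rebuilt = f"{source_dir}/build/image.bin"
        return sha256_of(rebuilt) == sha256_of(shipped_image)

If the hashes match, the auditor's review of the source applies to the binary in the car; if they don't, "trust us" is all that's left.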

Same with any technology company. You win by making solid technology backed by real, rigorous testing, vouched for by trusted third parties and the general public.


I agree with you. For folks reading along, there is an entire discipline around this stuff; it is called vulnerability analysis and surety systems. There are a lot of good papers published by Sandia National Laboratories on these topics. They were responsible for developing the US surety system around access to nuclear weapons[1]. When I worked at Sun, they gave a talk at an e-commerce payment processing forum that Sun was participating in, discussing how you approach the problem of securing something in the presence of known and unknown bad actors.

In particular, they discussed the systems around banking which prevent your bank from stealing your money from you, a topic that I found quite interesting.

But one of the things that has always stuck with me was the discussion of the trade-off between the "cost of effort" to "actualize" a vulnerability and its payoff. It is the difference between something that could be done in theory and something with a high enough payoff that someone actually does it. When you look at things like dye packs in money, silent alarms, and time-locked safes, those are parts of a system that minimizes the amount of money you can expect to make off with in a bank robbery. They are part of a surety system that is protecting the money in the bank. They don't make it impossible to rob the bank; they make the likelihood that you'll profit enough to justify the risk low enough that people don't do it.
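That trade-off is easy to make concrete with a toy expected-value model: the dye packs and time-locked safes don't make robbery impossible, they push the expected profit below zero. The numbers here are invented purely for illustration.

    # Toy model of the "cost of effort" vs. payoff trade-off described above.
    # All figures are made up; the point is the sign of the result.
    def expected_payoff(haul, p_caught, penalty_cost):
        """Expected profit = (1 - p_caught) * haul - p_caught * penalty_cost."""
        return (1 - p_caught) * haul - p_caught * penalty_cost

    # No controls: a large accessible haul and modest odds of capture.
    print(expected_payoff(haul=200_000, p_caught=0.2, penalty_cost=500_000))  # +60,000
    # Time-locked safe and dye packs: tiny usable haul, better odds of capture.
    print(expected_payoff(haul=5_000, p_caught=0.6, penalty_cost=500_000))    # -298,000

The surety system doesn't have to stop a determined robber; it just has to make the expected value of trying negative for almost everyone.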

[1] They are designed, in part, to prevent anyone from detonating a US nuclear device without specific authorization from the President.


I don’t think this is a good argument. Taking actions that influence people is sketchy but legal (e.g., FB allowing disinformation), but actually turning self-driving cars into death machines is just ... not something I see as being possible, and goes into crazy-conspiracy territory for me. There are a ton of regulations around the safety of passengers, which are likely to be expanded to cover self-driving tech as well.


All of the self driving car research will eventually be used to make self-driving tanks and other automated weapons of war. All technological innovation ends up being used for war eventually; it's just a question of whether the military or the public gets to use it first.


Do you also worry about Boeing flight control software, nuclear reactor software, or X-ray machine software?


Generally I don't. That said, reading the various reports on the 737-MAX suggests that Boeing has some work to do in this regard.

While all of the examples you give could conceivably be attached to the Internet, none of them update their core software over the Internet. That would be a non-starter for the agencies that regulate them. When I see things like "autopilot updates" delivered to a Tesla over the air, it feels to me that self-driving is an example of a technology that is 'running ahead' of the regulators at the moment.

Consider Facebook, which is a popular bad guy here[1]. There have been "crank" bulletin board systems since the '70s at least, and before that ham radio operators riling each other up over various perceived slights and insults. In 2016 we got to see that, with a precision advertisement-targeting system and way more insight into people than ever existed before, companies like Cambridge Analytica could precisely target individuals who were susceptible to a particular line of reasoning. CA essentially offered Radicalization as a Service. We are still dealing with that to this day. Was there any reason to worry about BBS software? Not when it was a self-selecting group. But when the BBS became a billion BBSes all hosted on the same platform, with clear insights into the demographics, fears, and hopes of all of those BBSes at once, that was something new. It changed the risk profile and was weaponized against people.

It has always been desirable for somewhat like-minded people to find each other in order to form a group for collective action. That is the 'demand' side of the question, but before Facebook it wasn't economical to do that at scale. With Facebook it did become economical, and that gave anyone a new capability. Not all of those people have (or had) people's best interests at heart, and so we get the bad as well as the good.

Prior to self-driving cars with over-the-air updates, continuous network connectivity, and full access to all systems on a car, has there been the capability (actualized or not) for someone anywhere in the world to target a specific car and take control away from the driver? The previous poster child, for vehicles anyway, was cars with computers that controlled things like engine starting/stopping and brakes. One issue was that you had to be near the car to know which one you wanted to attack. Self-driving cars finesse that by giving you an interior camera view. It is a different realm of problem.

[1] I don't consider Facebook a bad actor per se; I believe that the forces that motivate them (page engagement, ad clicks, ad sales, etc.) find ways to be serviced. And those ways are not constrained by ethics.



