Elon Musk has been pretty successful at getting top talent to push the limits of engineering, but with neuroscience, it seems the research isn't there yet. What I'm really interested to see is how long Karpathy will stay at Tesla as other self-driving projects drop like flies (Uber ATG, Zoox, and most recently Lyft's Level 5).
I dunno, self-driving is an interesting computer-vision / machine-learning problem and it is far more interesting than 90% of the stuff a lot of your FAANG companies are spending R&D dollars on. For anyone working in the field, the end goal is extremely rewarding in terms of the long-term beneficial impact on society it would have and the numerous lives it would save.
I don't think Tesla is going to drop FSD R&D anytime soon. They've also built out amazing software infrastructure under Karpathy's leadership to make use of their fleet of hundreds of thousands of cars to run shadow experiments with newer models in the background and continuously collect more difficult and interesting training data so they can improve their models and their internal test-sets for tracking metrics. They are also a relatively lean team from what I've heard, so it's not like they are bankrolling hundreds of very expensive ML PhDs at the moment. It is debatable if they (or anyone else) will "solve" FSD with current approaches, but I think the technology is already pretty fascinating and useful.
If you want to do fundamental research, the jobs are very scarce and, in many cases, not particularly good in terms of salary, stability, or location. A postdoc in the life sciences makes $50-65k/year, often either in a) a high cost-of-living area (Boston, SF) or b) a land-grant institution in the middle of nowhere, which is tough with a partner. These are usually short contracts too--mine is renewed annually. Faculty and Pharma jobs pay better, obviously, but are also pretty thin on the ground.
I like doing research that makes the world better, but doing so is an incredible luxury, even coming from a decently middle-class background. If my family were even slightly poorer, there's no way this would be possible and if one of them were to get sick or hurt, I can't imagine how I'd be able to stick it out.
It doesn't have to be this way, obviously. We could fund more stable positions--and I think it'd probably work out to more/better science per dollar spent. But right now...we definitely don't.
I'd change "the best minds" to "some of the most educated and technically prepared minds".
And it's not only to click ads; there are a whole lot of engineers (even more in poorer countries such as Brazil) who end up working in banks and finance, mostly to deal with complex spreadsheets.
If by "best" we mean, most educated, most capable of solving engineering problems, "highest IQ" etc. (Which is how we generally define best in this context) Then yes.
There used to be a big incentive for smart, scientifically and engineering-minded types to go into academia, but now the majority go into either financial services or tech work, because that is where the high salaries and actual problem solving are.
Well, the idea is that they demand the highest salary and those who pay the highest salary are companies interested in improving ad-clicking. It's not "abstractly best" but "empirically best". If you get the most money, you are by some definition the best.
But indirectly, it becomes an interesting problem (NLP research). If we didn't have people playing and making video games, then we wouldn't have figured out GPU acceleration to use with deep networks (backpropagation is fast when you can do matrix multiplication fast).
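To make that parenthetical concrete, here's a minimal NumPy sketch (sizes are arbitrary, made up purely for illustration) of why training a dense layer is dominated by matrix multiplications, i.e. exactly the workload GPUs accelerate:

    import numpy as np

    # One dense layer: the forward and backward passes are each just matmuls.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((256, 784))   # a batch of inputs (hypothetical sizes)
    W = rng.standard_normal((784, 128))   # layer weights

    Y = X @ W                             # forward pass: one matrix multiplication
    dY = rng.standard_normal(Y.shape)     # gradient arriving from the loss
    dW = X.T @ dY                         # gradient w.r.t. weights: another matmul
    dX = dY @ W.T                         # gradient w.r.t. inputs: another matmul

Stack a few hundred layers like this and the matmuls are essentially all of the compute, which is why hardware built for rendering games turned out to be ideal for backprop.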
I’ve heard it also described as “making global-scale dopamine delivery systems” when you bring in social media KPIs and other addiction-driven engagement like modern gaming monetization models.
This is indeed a problem, but Denmark's highways right now have 1/6 of the death rate of US highways, so if saving lives actually were the goal, AI isn't exactly the low-hanging fruit.
When I was in Sweden on vacation, I was surprised that almost all except the most minor roads have a central divider. To allow overtaking, they have 2+1 lanes, and then after a few km they switch to 1+2 lanes (extra lane in the other direction).
This eliminates the most deadly accidents (head on collisions) and makes overtaking completely stress free.
It's a simple intervention that you can do almost anywhere, and it saves lives immediately. You only need to invest a bit in concrete barriers and paint.
They set standards for what a street, a road, and a highway are, and they do not mix them. As a result, their roads are safer and cheaper, and the land use generates higher tax revenue.
Sweden has the lowest rate of traffic-related deaths of any country [1]. There must be something to it; however, it's strange how lackluster the government response to COVID was in comparison.
Edit: You think our COVID restrictions were successful?
I was making an off-topic comment, but it was related to the information I gave comparing the wide differences in effort, and in turn success, on these independent issues. Maybe they weren't interested in that.
I think it would be a lot easier to develop full self-driving than to get Americans to accept and implement the road safety measures that are in place in Denmark. And that's not even counting whether there are cultural factors that make Danes better drivers, other things being equal. I can't even imagine how many billions of dollars it would take.
Cars won't save us from cars. Having widely available self-driving cars would save some lives, but it would also set off another front in the culture wars and won't lead to significant changes in our infrastructure or how the burdens of mass driving are distributed.
The common point that you won't see huge structural improvements until human-driven cars are gone is true, and it's going to be a WHILE.
Even if you had perfect self-driving technology, the cultural and political capital you'd have to expend to get rid of recreational cars is just astronomical. At that point you may as well ban all of the fucking things; the electric ones aren't better, they just shovel the suffering around a bit.
Just because L5 autonomy doesn’t solve every possible externality of driving doesn’t mean it isn’t a watershed moment in human technological innovation, and also massively life-saving and harm-reducing.
The externalities of driving are huge (both positive and negative) but the negative externality of driving accidents is nearly a trillion dollars per year in the US.
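As a hedged back-of-envelope (my own round, assumed numbers, not an official estimate), the order of magnitude checks out even before you count property damage, congestion, and lost productivity:

    # Rough sketch with assumed round figures (not official statistics):
    deaths_per_year = 40_000                 # approximate US road deaths per year
    value_of_statistical_life = 10_000_000   # USD, rough DOT-style figure
    injuries_per_year = 2_500_000            # roughly, medically treated injuries
    avg_cost_per_injury = 100_000            # USD, very rough blend of severities

    fatality_cost = deaths_per_year * value_of_statistical_life   # ~$400B
    injury_cost = injuries_per_year * avg_cost_per_injury         # ~$250B
    print(f"~${(fatality_cost + injury_cost) / 1e9:.0f}B before property damage etc.")

Add property damage, congestion, and lost productivity on top of that ~$650B and you're in the neighborhood of the trillion-dollar figure.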
So it's easier for us to teach a computer to think than to teach people to think and act differently? I'm not sure I agree with that statement; they require very different solutions, one technical and the other social. Just because the social solution is hard doesn't mean the technical one is easier...
There is at least a glimmer of a suggestion that the technical solution will be workable. Waymo seems to be doing pretty well, and for all Tesla's fumbles, their autopilot seems to do better than humans on freeways in good conditions, which may someday save a nontrivial fraction of the lives lost in car accidents.
On the other hand, nobody has any idea how to get Americans to change their minds on guns, cars, or red meat, and there is no foreseeable course of action that would work. That's not to say it's impossible, but I don't see how to get from here to there. This isn't a case of people just not knowing that if they drove more carefully they'd be at less risk of dying. Public education is not even remotely sufficient to the task of accomplishing the behavior changes we're talking about. Hell, I bet you could spend a hundred billion dollars and not even get to the point of a solid majority agreeing there's a problem to be solved, much less make any progress on solving it.
Changing minds on guns, cars, and red meat is exactly as easy as setting up and constantly reinforcing the original (toxic) narratives around guns, cars, red meat, sugar, and the rest.
It isn't difficult or expensive in its own terms. You don't do it with "public education"; you do it by consistently dramatising the results you want in popular media, and demonising the results you don't.
Give it ten years of consistent messaging from multiple seemingly independent sources and it's done. Give it twenty five and it's so done the alternatives are no longer thinkable.
The difficult and expensive part comes from the enclosed nature of political power in the US, which has a choke point on the kinds of messages that are allowed to appear in popular media.
Cool, now you just need to convince popular media to include narratives about better road design and use in a way that is subtle enough to avoid pushback but obvious enough to have the intended shaping effect. This sounds both incredibly difficult and incredibly expensive, but you say it's not, and we have no way to test which one of us is right. Agree to disagree. :)
I think Americans are less of an outlier on that subject compared to other countries in the world. In fact, Denmark drinks more per capita than the US. But in any case it wasn't my goal to exhaustively list all the things Americans are stubborn about, just some examples that are ready to hand. ;)
Go look up the attempt to change to the metric system in the US. A failed effort that cost billions of dollars. I definitely agree with the prior commenter's statement. The technical solution already seems to be out in the wild, enhancing driving capability...
Maybe not objectively, but probably for some people. While some could be, many expert engineers wouldn't be expert social scientists or public servants.
Agree. I expect 2050, easily, before there's even the remotest hope of even parity with human drivers.
I'm always astonished at how those most skilled at software turn around and explain how software is better than humans at task $x. Yet I've never used a piece of software without bugs. I've never seen a piece of software, never mind one that's real-world capable, which comes even remotely close to dealing with things as well as humans do.
And the more complex the software, the more bugs. Bugs abound in complex software. Full self-driving is as complex as the Linux kernel, easily.
How many bugs are in the Linux kernel? Right now?
To even begin to hope that things will be mostly bug-free (bug-free beyond human "bugs" when driving), we'd need:
- To 100% freeze the platform, so the car has zero changes. Ever. Like a Volvo: 10+, 20+ years with only fixes for flaws.
No change in the drivetrain. No change in how the car handles, etc.
- A 100% feature freeze for years and years: only bug fixes, no feature additions.
It's just not happening overnight. And when it does, cars are going to require a massive array of additional sensors. It's not going to be camera or radar; it's going to be camera, radar, lidar, motion, wind, and 50 other environmental sensors.
It's going to be mainline computing power in triplicate. Connectivity and sensors in triplicate. And 100%, no way is the AI CPU even slightly network connected. No over-the-air updates.
Bugs cannot happen. Bugs kill. Bad hardware kills.
I do get that having a car self-drive in the desert, in constantly warm temperatures, is an easy thing. Try it at -40C, in a snowstorm, with a snow-filled road, in the country with no ditches, no lines to see (due to snow), in the dark, in the wind, with periodic white-outs, and more.
Heck, I'll say there's a chance when AI can drive on a frozen lake. And yes, humans can and do.
So far, all AI driving is a monoculture.
Not to mention, how about a 20-year-old car? Or even 10? "Buy new?" Now you hate the environment! And 20-year-old cars are driven daily by humans. No, they aren't the primary cause of accidents, because when driving you notice issues in the feel of how the car drives.
Can AI "feel" that? Sure! But that means loads and loads of sensors! And cost! And repair bills!
To be honest, my 2050 thing is likely mostly due to this. Cost.
What’s the point of comparing the US to countries that are as populous as the DC metro area? There are so many factors at play the comparison is completely meaningless.
Death rate is calculated as deaths per billion miles driven, so I don't see how the respective populations matter here. One could make the argument that population density is important to consider, but in the US the less populated states have higher death rates IIRC.
Denmark (like most countries) is definitely better at not getting killed on the road, but it's closer to a factor of two by miles driven and four per capita.
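The factor-of-two vs factor-of-four gap falls straight out of how much more each American drives. A toy calculation with made-up round numbers (assumptions, not real statistics) shows the mechanism:

    # Illustrative only: assumed round numbers, not real statistics.
    us_deaths_per_billion_miles = 11.0   # assumed per-mile rate
    dk_deaths_per_billion_miles = 5.5    # assumed: factor ~2 better by miles driven
    us_miles_per_person = 10_000         # assumed annual miles driven per capita
    dk_miles_per_person = 5_000          # assumed annual miles driven per capita

    us_per_capita = us_deaths_per_billion_miles * us_miles_per_person / 1e9
    dk_per_capita = dk_deaths_per_billion_miles * dk_miles_per_person / 1e9
    print(us_per_capita / dk_per_capita)   # -> 4.0: factor ~4 per capita

Same per-mile gap, but doubled per capita, because the per-capita rate is just the per-mile rate times miles driven per person.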
> I don't see how the respective populations matter here.
They don’t matter per se. It is just a stat I had in my head that I could provide without googling anything to show how different both countries are. There are so many factors to consider with regards to driving infrastructure and habits, even density is not enough.
I fully agree, it's really hard to compare things like these across national boundaries. The actual countries don't really matter for the point I tried to make anyway: there are large differences between countries right now, so it seems prudent to look into those if one wants to decrease traffic deaths.
Waiting for L5, even if it actually is achieved in the current AI cycle, will introduce a whole new class of problems once self driving cars become a meaningful share of traffic, e.g. coordination/cooperation between cars and so on.
No, it is a typical response of a guy who spent two years and countless nights studying econometrics and the issues of specification and identification. But thank you for wrongly assuming my nationality and disregarding my response based on that.
Not really. If you spend a dollar now on mosquito nets you might save a life. If you order a Tesla, you have yourself a fancy car that few people in the world can afford.
You can execute a vastly cheaper intervention now, or you can speculate on something that helps in the future at some indeterminate time, probably decades away, that will always cost more.
And to make another point against FSD as a humanitarian intervention: looking at the highest vehicle deaths per capita (or per number of vehicles), the list is dominated by developing nations. What's common to them is poor infrastructure, unpaved roads, older vehicles with fewer safety features, lack of government-enforced safety standards, lack of road rules and enforcement, etc. Worldwide road deaths will be more affected by regulatory interventions, capital spending, etc. FSD won't make a real dent in this number.
If we just look at deaths then sure, this could be right. However malaria is something we know how to tackle and isn’t that expensive. It’d be much more efficient to cure malaria than to eliminate all motor accidents.
I'm sure there will be a ton of residual effects from FSD. One off the top of my head: emergency services will be able to respond to emergencies faster because of minimized road congestion.
And? Does that matter if we have better capacity/congestion planning because all the cars are self-driving? There's so much wasted time in traffic and excess congestion because humans are at the wheel creating chaotic traffic fluctuations with their driving.
Yes. That does matter, a lot. Look up induced demand. Also consider that full-self driving will likely mean more cars on the road that don't even have a person in them.
Traffic backups and delays aren't just created by people crashing; it's that when you take a dozen tiny humans and put each of them in mostly >15-foot-long cars, you now have 12 people managing to take up over 180 feet of space.
Speeding up emergency services is a solved problem. You take a lane that is filled with single-occupant vehicles and dedicate it to efficient forms of transit, whether a high-speed bus lane or a bike lane. Emergency services will not only opt to use that lane the majority of the time, they will also be less likely to crash into another vehicle because they won't need to weave from lane to lane.
That’s definitely not true. The smartest (clearly not the best, if you take conscience into account) minds of today work either in ad farms or as money movers. You can sugarcoat it however you want, but if you work at Google or Facebook you are just farming ads, with extra steps.
Not directly, but their work needs to benefit an ad company to be allowed to continue.
Or else it exists in some intersection of corporate benevolence and the work not actively harming the ad company. Which is just not an arrangement I trust, sorry.
Plus, I mean, that's two people. How many people work at Google and Facebook, and how many of them get to decide they're going to do something more important than ads?
Or nearly any other field, e.g., mechanical engineering, electrical engineering, chemistry/chemical engineering, or nearly any of the sciences involved in the medical field (from pharmacology to psychology, to name a few). Even "softer" fields like law and literature have present-day geniuses pioneering new ground.
I understand that Hacker News is tech-centered, but sometimes when I read "the best minds of my generation are doing X" I wonder if the commenter can see the flashes of brilliance shining beyond their field or what they know as "tech."
I just flicked through a few job ads on the career page of Google Research. All of them describe fundamental research oriented towards improving the commercial services of Google. It is very far from pursuing whatever one is interested in.
If you are one of the smartest people on the planet, you can work on whatever you're interested in. You can, if you want to, work on fundamental research at Google without any direct connections to commercial services of Google. Just look at their publications.
> it is far more interesting than 90% of the stuff a lot of your FAANG companies are spending R&D dollars on
It’s not really about what’s interesting for most people; it’s about whether there is a market for this and, ultimately, whether I can pay my mortgage with it. I’m all for noble pursuits, but don’t put down the trillion-dollar smartphone, social media, and PC markets that have long established themselves in favor of a product we don’t even know is possible yet.
> It’s not really about what’s interesting for most people
This is an absolutely ludicrous statement. A very large number of people absolutely do prioritize what is interesting to them when it comes to their career. No one is going to be dirt poor, busking on the streets to make ends meet, because they decided to work on self-driving tech. Nor is Tesla in such a dire situation that they can't afford to fund their well-paid team of 20-25 ML engineers.
> It’s not really about what’s interesting for most people; it’s about whether there is a market for this and, ultimately, whether I can pay my mortgage with it.
This is so asinine. Just because some R&D project is the most profitable thing FAANG companies can invest in doesn't mean it's actually the best allocation of resources for the people working on it.
I think Karpathy at Tesla has basically unlimited funding, and Tesla is developing custom hardware. They've already deployed said hardware in the wild, and they're probably about as close as Waymo to true full self-driving. We also don't know what his pay situation is, but I wouldn't be surprised if he had multimillion-dollar bonuses tied to various milestones. I know if I were Elon Musk, I wouldn't want to lose the guy.
So sure, he could leave, but to do what? To start building something similar from scratch somewhere else? He could go to Waymo to find that he went from being the project leader to an underling, and that Waymo is not using the approach he wants to use (he designed the Tesla project, and it's close to pure deep learning). IMO, unless something goes really wrong at Tesla, he will stay: they're so close to success that surely he wants to see what happens, because the payout will be huge.
> I think Karpathy at Tesla has basically unlimited funding,
Tesla's R&D spending in their earnings report was ~1.5 billion. That's a small number for "unlimited". By contrast, their competitor GM, with a significantly smaller market cap, is spending 6.2 billion.
> They've already deployed said hardware in the wild, and they're probably about as close as Waymo to true full self-driving
> and they're probably about as close as Waymo to true full self-driving
Absolutely no one actually working on the autonomous side of the automotive industry, myself included, believes this. Tesla is around the middle-bottom of the pack in terms of actual driverless operating capabilities. They're at the top of the pack only in marketing and reckless deployment before the tech is ready.
If you work for a competitor, you are clearly biased. Show us some evidence, some uncut videos of GM's Cruise or whoever besides Waymo you believe is ahead of Tesla.
I'd actually say they're closer than Waymo to FSD, given that Waymo's solution relies on much more expensive tech and high fidelity 3D mapping. To my mind you're not true FSD until your car can drive literally anywhere an average human could, and I'm not sure that's possible with Waymo's solution until they've mapped the entire world.
You don’t get to be closer to FSD just because you have a grander vision of driving everywhere a human could. They have to actually demonstrate it. It’s been years, and Tesla still doesn’t have the confidence to let drivers take their hands off the wheel... you know, the true test of self-driving. So how exactly are they close?
I didn't say "close", I said closer. Full self-driving (e.g. "Level 5") is driving everywhere a human can. I have doubts that Waymo will ever get to Level 5 without 3D mapping the entire world, which (to me) sounds like a somewhat intractable problem. There is a lot of world, and it is constantly changing.
The problem here is the "level" system everyone has been told is the standard, which heavily implies that if you're at "3" while someone else is at "2", then you're closer to "5". This isn't really true though because solutions at Level X don't necessarily scale/work at Level Y. I dislike using the level system for this reason.
I think Tesla is closer because they are certainly getting there[1], and I believe their model (relying entirely on CV) is the one that has the greatest likelihood of stretching to FSD (Level 5 autonomy). Mostly because there is only one system we're aware of that can achieve FSD (humans), and the sensory package there is entirely visual.
> I have doubts that Waymo will ever get to Level 5 without 3D mapping the entire world
Waymo is not trying to get to L5 at all. They are a strictly L4 technology and their focus is clearly on servicing where the market is, metropolitan cities.
L5 is a pipe dream for anyone; it's a moving target no one can ever reach. It essentially means the car can drive anywhere, in all conditions and without the driver needing to take over at all. So it really means Tesla can't declare FSD as "done" until it works for everyone, everywhere and in every condition. How close do you think they are for that to happen? Does that also mean if it encounters a unique situation and gives up, their L5 claims fall flat? Given their current poor performance [1], they have a long way to go to even let drivers take their hands off the wheel, so I'm not holding my breath here (vision stack or not).
The real end game for self driving is L4 because that's what you can reasonably promise to your users. That it works in a defined, tested area and when required conditions are met. This is why literally every SDC company is L4-only, they never promise L5. Either Tesla is focusing on the wrong problem instead of providing value to its paying customers or they are blatantly misleading customers by promising "L5 by the end of the year" every year.
That's a reasonable stance, but we're talking about FSD here. Your opinion about whether that's possible at all isn't particularly germane to whether or not Tesla or Waymo is closer.
And yes, I would say my bar is a little lower than yours seems to be, as hinted at in my first comment. As long as the system can respond with a solution that is equivalent to or better than what a human driver would do the vast majority of the time, I see that as FSD.
Your lower bar is reasonable (though hard to quantify), but that's not what Tesla is promising and marketing when they say L5. They can't say L5 and then qualify it with "works only vast majority of the time", at which point it's no longer L5.
It seems like your bar is lower for Tesla (you're okay if it doesn't work always, yet you consider it true self driving), but extremely high for Waymo - who are actually fully driverless - just because it's geofenced. Sounds like you're not applying the same standards to both.
It's the same standard: Successful FSD is measured by automated driving (without any human intervention) that is equal to or better than humans (as measured by accidents/fatalities per mile) anywhere humans typically drive (e.g. where infrastructure for driving cars has been deliberately constructed).
Maybe it looks like my standard is tailored to fit Tesla, but it's actually not, it is merely the behavior I would want if I ever purchased a self-driving car. Specifically, I want a self-driving RV where I can program in anywhere I want to go and it will take me there.
I think Tesla's approach can easily get us there in my lifetime. I'm skeptical of Waymo's.
I agree that Tesla dropping FSD would take a while. In addition to the huge sunk cost and investor expectations, they have an entire strategy (via 3-year vehicle leases, which currently offer customers no buyout at termination) of monetizing leased vehicles as robo-taxis.
Not saying they will succeed here, but just saying that to give up on FSD would be corporate suicide at this point.
Ever since I learned that Musk didn't found Tesla [0] or PayPal [1], I get really uncomfortable when reading news like this. It's clearly Musk doing another hostile takeover of a company someone else started.
Hostile takeover seems like a stretch. Generally you use that phrasing when someone takes over a company in order to kill it and loot its corpse, not "invest heavily in the company and make it one of the most successful in the world".
A borderline nonsensical accusation w.r.t. PayPal, since he definitely didn't take over PayPal; he founded X.com and got a payout through that after the merger and the subsequent IPO + acquisition by eBay.
For Tesla, he was the first investor and took an active role in the company, starting in 2004 – less than a year after its inception and four years before they produced their first car (the Roadster). By the time the Roadster actually hit production, he was the CEO. Was the Roadster (a Li-ion sports car) Eberhard's idea? Absolutely. But it seems very unclear that it actually would have existed if not for Musk. And the rest of Tesla's vision/direction (that has made it the world's most valuable tech company) seems to have come almost entirely from Musk.
He wholly founded SpaceX and they're doing pretty well.
> Generally you use that phrasing when someone takes over a company in order to kill it and loot its corpse,
You may be thinking of a leveraged buyout or LBO.
A hostile takeover is one where the management of the company is excluded from acquisition talks. The potential acquirer goes over their heads and proposes a deal with the shareholders directly.
An LBO is a type of hostile takeover. In either case, the motivation from the "aggressor" is not typically "gee I just really want this company to succeed".
In any case, I certainly don't think Musk's relationship with Tesla qualifies.
I mean, Tesla without Elon is not really Tesla. I never really thought he FOUNDED PayPal, but rather that he was part of the so-called PayPal mafia that was successful at starting other companies. I never heard that he didn’t pull his weight from Peter Thiel or the others.
I can't find a source right now, but I remember reading that Elon Musk tried to fight for PayPal to use Microsoft software instead of Linux when he became the CEO after the X.com merger. Since PayPal wanted to continue using Linux (who wouldn't?), they instead switched CEOs.
Edit:
> I never heard that he didn’t pull his weight from Peter Thiel or the others
> At the time, he was CEO of X.com, and company executives were not pleased with his leadership. While Musk was on the plane with Justine, executives delivered a letter of no-confidence to the company’s board, pushing Musk out as CEO, and replacing him with Peter Thiel.
2000 was the year of Windows 2000 and Windows ME. Did you ever use those in a professional capacity at the time? I think if you did, you wouldn't say you're a big Linux fan _and_ advise a company to use Windows. Or maybe we simply have different definitions of what a "fan" means.
I was more thinking in terms of what you could readily hire programmers for around that time. Definitely did not mean to imply that Windows was the actual better tech stack, just that for various business reasons going with Microsoft may well have been a better choice.
Though in regard to tech stacks, I also thought X.com might've been using .NET and (if that had been the case) then to my mind it was the better option, compared to what many people were using for scrappy Linux web backends in that era – PHP (which I say as someone who wrote his first web backend in PHP). But after looking up the timelines, I realize that this wasn't the case. Musk was replaced as CEO in 2000 and .NET was released in 2002.
How does that change the fact that the statement was "he didn’t pull his weight from Peter Thiel"? It seems that he pointlessly fought the existing culture and was also ousted, which seems to indicate that he did in fact not pull his weight, according to the board.
I only wish I had been slightly older during the dot-com boom. I had to finish college and do it with flying colors, being from a poor family. If you had the means and some padding to take fairly reasonable risks, you walked away with riches. These people were at the right place and the right time; let's not lionize them as investment geniuses. They had money to put down on some horses, that is ALL.
That doesn’t line up with what I’ve read about that time period. A lot of the companies that were well funded by such bets went bankrupt. In all likelihood you would have lost money during the bust. The companies that weathered the storm were actually delivering good products and had cash flow. That takes a lot of work on the tech and business sides.
Maybe you could have timed the bubble and exited with a pile of cash, who knows. If you want to make those kinds of bets you could make them right now.
The far more common scenario in the dot-bomb era was that you lost your job (which almost certainly wasn't paying the kind of salary many developers expect today), were left with a pile of worthless stock options, and maybe spent a couple years asking people if they'd like fries with that. (That's only a slight exaggeration. I was very very lucky to almost immediately land another position--which then went through its own rough times but not to the point of being unemployed.)
Some people made out of course, but in general ~2001-2004 was not a great time to be in tech.
For this specifically, Musk was so immature and inexperienced that he was either kept away from or removed from higher posts, but since the ideas themselves materialized and were successful (see Zip2, X.com), he walked away with millions. He was the definition of failing upwards. And once you have 200 bars of walkaway money, it's really, really hard to fail.
Per Wikipedia: "Within the merged company [PayPal], Musk returned as CEO. Musk's preference for Microsoft software over Linux created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in September 2000."
Having been there as an adult, I can say right now is a far better time to walk away with riches. There were a lot of scams that made scammers money, but very few made real money if they were legitimately trying to build a business.
I agree with that. Certainly there were lottery winners who got in on the ground floor of ultimately very successful companies. And there were others who had disposable income who bought into companies like Yahoo early and got out in time.
But, in general, salaries were mostly unexceptional in tech even in Silicon Valley, a lot of companies went bust, and even most of the surviving firms--including the big systems/storage/chip/networking/etc. players--went through some very lean years with a bunch of layoffs.
It was certainly no sure thing, and it’s easy to say that in hindsight. There were definitely a lot of people who walked away from the dot-com era with worthless stock options and wounded egos. Plenty of people ended up taking the “safe” route too and getting a job at IBM.
We’re in the same place with crypto right now. People will look back and say “oh jeez if only”. The early bazillionaires are already minted but there are a lot of crypto millionaires made every day. It’s just a lot of people dumping their life savings into BTC/ETH and enjoying the ride.
There's a strong survivorship bias in what you hear about. A few hundred folks made solid money, but many, many more had to go hunting for jobs after the bubble burst.
I really wouldn't go too far based on that list. It's mostly from the Waitbutwhy article from years ago, when they were still basically a research/brainstorming group and hadn't even figured out what they wanted to do or what approach they wanted to take.
For comparison to Musk's other companies, it's analogous to Bill Cantrell or Mike Griffin or that guy from Sea Launch at SpaceX.
Some in this group were very briefly there, like Tim Hanson.
Some I think are misreported as not being there, unless anyone has any extra information. For example, Paul Merolla was there at the last conference, even though he's reported as not being with the company.
Some I think are also advisors, like Sabes, unless something has changed.
One of them is just a neurosurgeon, but they already have a whole surgery department with a lead now.
Yeah but in a lot of cases it's because money (especially AI fundraising money) is hella cheap and it's far more profitable (like, 10-100x) to co-found their own AI startup than to play senior scientist for someone else.
Nothing in AI would pay a cent to a worker coop, because revenues are very small right now; it's all "potential".
So even though the expected value is (correctly, in the medium term, IMO) super high, everyone would get nothing based on current or near term revenues, which is exactly the situation VC exists to rectify. What you're seeing is the free market working correctly as more founders jump in to take VC money and make bets on their ability to execute.
Well, a "mostly worker coop" in the sense of a company that takes capital from elsewhere but where shares are otherwise spread out among the workers could still work as a VC funded company as long as they addressed the issue of how to make it possible for that VC to get an exit. It becomes easier to get to a middle ground the cheaper it becomes to take capital.
Interestingly, one of the things Marx and Proudhon violently agreed on was that cheap credit is one of the most radical things you can create. The reasoning is simple: the cheaper credit is, the less power employers have over employees. To take a hypothetical extreme: if credit is free and guaranteed, an employee can walk out at any point and take whatever time they need to figure out what to do, at which point employer power over employees is nullified.
As such, at some point along a gradient of cost and access - and it may well be an impossible point to reach - capitalism becomes indistinguishable from socialism, in that capital ownership becomes a matter of choice.
TLDR: This technology is in its infancy and very much still in the R&D phase. He needs to retain top scientists and avoid overpromising.
The research isn't there yet, which is why Elon should be trying to retain people who have deeply studied the hippocampus and the brain. What the heck is he doing? Find the best people in this domain. The potential is there with the electrode technology he's built. It's an AI problem now, to make sense of the data. Supervised learning mostly. You can measure what the brain does when certain things happen. He needs more researchers in the fundamentals. I think if they've built the robot that inserts the electrodes non-invasively, they can continue to add additional electrodes and get an even more accurate measurement of what's happening elsewhere in the brain. This tech will take years and years to master. Neuralink is a science company. Or at least, it should be at this point if I'm taking it seriously. God knows there are hundreds of tech companies selling SaaS software for domains that have negligible research behind them. Neuralink shouldn't be like that. Don't overpromise and underdeliver just yet, causing the best people to leave. This company has tons of potential.
> The potential is there with the electrode technology he's built. It's an AI problem now, to make sense of the data. Supervised learning mostly. You can measure what the brain does when certain things happen.
This is an unproven hypothesis. Since we do not know how computation actually happens in the brain, we can't say for sure that electrical signals can be matched to the actual computations happening, even if they are correlated. It is still possible that the electrical signals we are measuring are just byproducts, and that the real computation happens at a chemical level, or inside each individual neuron, or even in microtubules as Penrose believed.
The nice part is that this means that neuralink's attempts could be extremely useful to our understanding of the brain even if they fail - it would actually be an amazing discovery if they could show that you CAN'T deduce the brain's intent from observing electrical activities.
We do know for sure, though, that you can use electrical signals to interface with motor neurons, so at least improvements in prosthetics should be something that Neuralink can realistically deliver.
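For what "supervised learning on the recorded signals" might look like in the simplest case, here's a toy sketch with entirely synthetic data (made-up sizes and names): fit a linear ridge decoder from binned spike counts to a measured behavioral variable like cursor velocity.

    import numpy as np

    # Synthetic stand-in for recorded data: spike counts per time bin per channel.
    rng = np.random.default_rng(0)
    n_bins, n_channels = 5000, 96
    spikes = rng.poisson(3.0, size=(n_bins, n_channels)).astype(float)
    true_weights = rng.standard_normal((n_channels, 2))
    velocity = spikes @ true_weights + rng.standard_normal((n_bins, 2))  # x/y "behavior"

    # Ridge regression: closed-form linear decoder from spike counts to velocity.
    lam = 1.0
    W = np.linalg.solve(spikes.T @ spikes + lam * np.eye(n_channels),
                        spikes.T @ velocity)
    decoded = spikes @ W   # in a real system this would drive a cursor or prosthetic

Real neural data is far messier and nonstationary, but linear decoders of roughly this shape are a standard baseline in the BCI literature, which is why the "supervised learning mostly" framing isn't crazy for motor applications, even if it says nothing about reading thoughts.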
Even if the electrical signals that are measured are only echoes of the triggering signal, they can be utilised as a control signal.
The problem comes when you want to send a signal back into the brain - but even then there's a chance that the brain could rewire itself to take care of redirecting and reprocessing the fuzzy and inaccurate input.
Reasonably accurate output has the potential to revolutionise prosthetics (feedback aside).
> Even if the electrical signals that are measured are only echoes of the triggering signal, they can be utilised as a control signal.
I'm not thinking of something like echoes, but more of auxiliary signals - e.g. perhaps they are signals to increase the energy supply to a particular area of the brain, or to control 'peripherals', while the main computation happens in other channels.
It's also an interesting question to see whether the analysis techniques could work on a simpler and clearer machine, one where we know for sure that is controlled only by electrical signals - could we infer what a CPU is doing by just measuring electric currents in various areas, without any knowledge of the code running there? This is an experiment that is eminently doable, especially if we design a core specifically for this purpose (making a large and slow core in order to focus purely on the problem of analysis, not difficulties in probing and data collection).
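As a toy version of that CPU experiment (entirely synthetic, with made-up "instruction classes" and power signatures), you can at least see how the analysis would go: record a current trace, then try to classify each sample back to the operation that produced it.

    import numpy as np

    # Each assumed instruction class draws a characteristic current, plus noise.
    rng = np.random.default_rng(0)
    templates = {"add": 1.0, "mul": 1.6, "load": 2.3, "nop": 0.4}  # assumed mean currents
    ops = rng.choice(list(templates), size=1000)                   # hidden ground truth
    trace = np.array([templates[o] for o in ops]) + rng.normal(0, 0.2, size=1000)

    # Nearest-template classifier: guess the operation from the current alone.
    names = list(templates)
    means = np.array([templates[n] for n in names])
    guessed = [names[np.argmin(np.abs(means - sample))] for sample in trace]
    accuracy = np.mean([g == o for g, o in zip(guessed, ops)])
    print(f"recovered {accuracy:.0%} of operations from the trace alone")

A real core is vastly harder (parallelism, pipelining, caches, measurement bandwidth), but power side-channel work suggests that recovering activity from current draw isn't hopeless, which is roughly the analogy to reading "byproduct" signals from the brain.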
> It's an AI problem now, to make sense of the data.
Not a neuroscientist, but it seems like the problem is more involved than just training a DL model on the data. We need detailed maps of the brain before we put any electrodes in it. Meanwhile, a full mouse brain map is likely a decade away [0]. Humans have a bigger brain; imagine how long it’d take. It’s just a lot of manual labor, which takes an incredibly long time. Keep in mind that you also need to map thousands, if not millions, of brains to model population heterogeneity.
Getting top talent or investing in top talent? He has the money, and he has the guts to take big risks, which is remarkable for an investor. I think any scientist with a "revolutionary" idea that requires lots of capital will proactively approach Elon Musk.
>but with neuroscience, it seems the research isn't there yet.
The interesting aspect of your comment is, in fact: what are the capital outlay and time needed for the science to actually get there?
Elon has a track record of severely underestimating both, but he is also smart enough (via PR / government grants / leveraging past successes, etc.) to stretch both beyond anything reasonable, to the point of actually reaching success (e.g. Tesla, SpaceX).
Good to know, I may have to give Neuralink another try. My previous experience with the original 8 a few years ago during the hiring process was overwhelmingly negative. I'm hoping their culture has changed to be more in line with Musk's public persona since then.
Musk's public persona is a nightmare, even if you only look at his public professional persona -- encouraging overwork, getting himself knowledgeable enough about a subject to fool laymen and investors but not deeply understanding it enough to provide reasonable estimates, a total disregard for safety of both his customers and his workers, and developing an unhealthy cult of personality.
I own a Tesla, I love SpaceX, I think Elon Musk is insufferable.
> getting himself knowledgeable enough about a subject to fool laymen and investors but not deeply understanding it enough to provide reasonable estimates
This is both a strength and weakness. How many times have people wished CEOs actually knew what they were talking about? As long as Elon knows enough to build a decent team and lead it well, he doesn't need to know everything.
From the outside it looks like he sees himself as 'one of the engineers', which is something I deeply respect, even if there are major issues as well (FSD and Hyperloop being the big ones).
Yeah, I appreciate any engineering firm that at least attempts to have some engineering leadership in charge.
> major issues as well (FSD and Hyperloop being the big ones).
That Vegas loop too... That's going to be a truly awful tunnel fire someday, and the safety systems there seem seriously lacking. (Also, the whole project feels like a grift.)
It’s in a conceptual prototyping stage, jeez. Criticism is fine, but you gotta let the bud flower and come to fruition, and then put regulations in place. That’s how literally every single world-changing technology is invented. If Elon’s ideas don’t work out, fine. It was a boondoggle on his own dime; what business is it of anyone’s to slam the effort?
Risk taking is amazing. I’m not sure I want to live in a risk-averse world where ideas are nipped in the bud by the CEOs of GE and Honeywell.
"Tunnels" is not something that is at a "conceptual prototyping stage". We've built tunnels for hundreds of years. We know what the dangers are, we don't need a prototype to kill a bunch of people to find that out.
I'm not sure what down-voters are disagreeing with: electric cars have been in existence for over 100 years; there are very few unknowns there. Self-driving cars are a very recent development, and not a single developer has achieved widespread Level 5 autonomy. I thought that'd be pretty uncontroversial, but this is HN, I guess.
Those that work for him call it "the impossible ask": Musk will basically spin through the engineering floors of SpaceX and push the team for something that's functionally impossible. Whether this strategy motivates brilliance to emerge is probably not _terribly_ in dispute: they do get results. The "at what cost" element and the burnout and other impacts... those are more tricky to weigh.
I don’t get Elon hate just the way I don’t get fanboyism and cult following. Neither side provides an objective and truthful representation of reality.
No single personality can be 100% perfect. Let’s not ignore pretty much the entire world-changing revolution of EVs and reusable rocketry while focusing on the guy’s public persona and how he behaves in the ways apparent to us, and projecting what he must be like in general. That seems deeply and profoundly unfair.
Why can’t someone have flaws, which are fine to point out, without us ignoring their accomplishments? The opposite also stands: fanboys of Elon think he is some kind of messiah who will save the world from global collapse, but turn a blind eye to his rather rude behavior.
It’s like populism has a reverse backside of the coin.
IMO the silent majority has a pretty balanced view of him and just doesn’t care to participate in a holy war praising or hating the guy. Social media and regular media are biased toward the polar opinions, as with anything else.
History respects Edison, Jobs, Tesla, Carnegie, Rockefeller, and Hughes for what they accomplished while acknowledging their individual flaws. I think Musk will be remembered similarly. Depending on how successful his ventures are over the coming century, he might even be remembered more than any of them.
It's an interesting observation. Personally, when I look back at giants and read their biographies, I find the most interesting bits are usually about how they were motivated, what drove them, how they overcame challenges and hopelessness, and how they forged business relations. I think the least interesting part is their personal life and what they did to piss someone off or what they tweeted/said publicly. The latter is just a typical soap opera of details that provide little value to me. The latest Steve Jobs movie primarily focused on the melodrama of his personal relationships and offered literally nothing about what made Apple successful.
Can I say, it's really refreshing to see your list of very apt comparisons to very smart and successful people. Many tech-heavy news sites (looking at you ArsTechnica) compare Musk to Newton and Einstein.
To me, Musk's flaws are much smaller than some of the others you mention. Namely: overly optimistic (some would say fraudulent but eh) timelines, a puerile sense of humor, and calling that diver a pedo.
The diver absolutely did start it, and if I were a gambling man, "sexpat" seems like a reasonable bet. But calling someone a pedo without evidence is a very unwarranted escalation.
It was in response to a post in which the author said they wanted to revisit joining Neuralink because hopefully their culture is now more in line with Musk’s public persona. And the post ended with praise for Musk’s projects.
That doesn’t read as “haters gonna hate” to me, but rather as a narrow criticism of just the one aspect of Musk’s businesses that was brought up.
> Let’s not ignore pretty much the entire world-changing revolution of EVs
Elon sure did. He decided, apparently out of pure greed, to invest big in Bitcoin, driving the price up and with it Bitcoin's carbon emissions, enough to quickly wipe out basically all the carbon savings Tesla has worked to achieve over its entire company lifetime.
Building the plant and mining the uranium, doing upkeep on the plant, and decommissioning it and processing the fuel are not non-polluting.
There is always pollution. Some forms of energy have more and some have less. It is never good to just waste energy, that will always cause excess pollution, no matter the type of energy used.
" a total disregard for safety of both his customers and his workers"
At least in the case of SpaceX, they do not seem to be as reckless as you paint them. After 20 years in space industry, they have 0 fatalities on record, which is uncommon for a corporation of this size and history.
And they scrub launches whenever they are concerned about weather or anything off-nominal that their sensors indicate. Which led to an excellent safety record for Falcon 9s once the technology matured.
> not deeply understanding it enough to provide reasonable estimates,
Reasonable estimates like what? If your company works on something that has never been done before, there is not much data at hand to make good estimates, is there?
I have so many colleagues with that sentiment who give terrible plans because ¯\_(ツ)_/¯ it’s research. They mostly talk about timelines for the big end goal pie in the sky, and they’re generally wrong. That was me when I was more junior.
I also have colleagues who promise small concrete incremental deliverables in predictable timelines. They rarely talk concretely about the big end goal pie in the sky’s timeline, but generally deliver the big thing sooner anyways.
The answer isn’t making reasonable estimates of an inestimable thing, it’s providing reasonable estimates for the reasonably estimable pieces. Instead of “we’ll have ultra mega science in three to five years!” it’s, “we’ll have this specific piece of the stack built by 2022 and we hope it’ll enable these other slightly less concrete things towards ultra mega science.”
Everything from self-driving, to shipping cars, to Moon and Mars missions. It's one thing to be an optimist. It's another to promise something you have no path to deliver.
Musk said he'd be launching a mission to Mars at every possible window starting in 2024. Including manned missions. I believe he was going to send an experimental mission to Mars before that.
My point isn't that these companies don't get there; it's that they take much, much, much longer than estimated.
The first one won’t be crewed, and there’s no way that’s going to be ready by 2022. So 2024 will be unmanned. Maybe 2026 will be the first crewed test mission, but 2028 is more likely. But we’re still talking aspirational Elon-time. If I were betting money, I’d say the 2030s are more realistic.
Note that they have done zero work on building anything other than the rocket body to get there. Making the interior volume habitable for 2 years of interplanetary travel is going to be non-trivial.
> but not deeply understanding it enough to provide reasonable estimates
There are enough senior software engineers in this world who will have a crazy hard time making a good estimate. There was an HN comment once that mentioned that learning how to estimate work is an entire discipline of its own.
It can be done and trained, according to that comment, but the comment also seemed to imply that doing so is very unusual.
I can't find the comment, unfortunately; I read it years ago here.
Musk’s public persona is: the guy without whom nobody would know the names Tesla and SpaceX. As noxious as his personality seems to be, he does deliver.
Having had some very minimal observation of Max up close at another venue, I was intrigued to see him in the position.
Smart guy. Is it possible to be too self-assured? One of his best attributes, having a bold disregard for the limits most people set, in some ways did make him a great match for the initial phase, IMHO.
Not sure if speculation is frowned on here, but let me label this clearly as speculation: I’m guessing the company is ready in the next phase to have a CEO who is more of a cultural fit with the medical community. Meaning a little more of a conventional, conservative (not in the politics sense), mild person. Max is a bit more of a take charge and get it done type, not a soothing type. Again though this is based on very little anecdata.
I have no idea if the same is the case here, but Musk had to purge the senior leadership of both Tesla and SpaceX (and multiple times at that, I believe?) in order to achieve the innovation velocity he aimed for.
On each occasion, the press called it a setback when it amounted to losing deadweight (people too invested in legacy ways of doing things), and the companies went on to far surpass the most optimistic expectations.
> but Musk had to purge the senior leadership of both Tesla and SpaceX (and multiple times at that, I believe?) in order to achieve the innovation velocity he aimed for.
From the ex-Tesla people I worked with, I think the public narrative of Musk “purging” staff isn’t entirely true. He desperately needs top talent.
It’s more the case that his companies focus on hungry employees who want to get some success and a big company name on their resume. He can pay them mediocre compensation and work them to death with the promise of a big career win.
Once they’ve had a few years in the upper ranks of a Musk company, they can use their resume to walk into any number of easier jobs that pay better, so they leave. They are then replaced with a new generation of career climbers who can be worked to death in exchange for the resume building experience.
After having been at KSC and working around Muskrats, almost being one of them myself, this is also the impression I took away. Kind of like IB for the tech industry.
Feels kind of like he had to purge people who wouldn't go along with embezzlement and fraud, like using NASA money at SpaceX to buy his own SolarCity bonds as part of a rescue with a high conflict of interest, or using Tesla to bail out SolarCity with a fake demo on the set of Desperate Housewives with non-functional solar shingles.
> Musk unveiled the Tesla solar roof in 2016 on the set of Desperate Housewives. At the time, he was trying to acquire Solar City, a solar energy company formed by his cousins. Musk was also chairman of Solar City at the time.
> The roofs he showed off at the event weren’t fully working, Fast Company later reported, and Musk allegedly had said prototypes of the tiles were a “piece of shit.”
>As SpaceX's chairman, CEO, CTO, and majority stockholder, Musk caused SpaceX to purchase $90 million in SolarCity bonds in March 2015, $75 million in June 2015, and another $90 million in March 2016. These bond purchases violated SpaceX's own internal policy, and SolarCity was the only public company in which SpaceX made any investments.
The SEC has bought into his novel legal theory that if enforcement actions could hurt shareholder value (hurting the stock price by targeting the cult of personality that is also propping it up), then they shouldn't be undertaken. I can dig up the tweet where he said this if you want.
I mostly agree but primarily because departures are inherently non-news.
I acknowledge that there is little transparency in the private sector (I'm not suggesting that there should be) and for companies of public interest the public must grasp at straws, such as departures.
I can also acknowledge that executive and governance "challenges" can have a high correlation with corporate distress, but it's probably not much different than the general distribution of companies.
People's expectations of what others' careers should look like are warped. I think it's mostly because people wish they were founders or valued personnel for a project they would get married to. But there is little utility in being married to a forever project; they should appreciate the utility that was provided for the time it was relevant.
Both companies are leeches for government money; that's all they do.
The best product from Tesla is the hype they sell to public officials to take government money, and the hype they sell to retail investors to take their money and pump the stock.
People compare it to Amazon; they are not even in the same league, not even the same ballpark.
I'm not sure why this really matters. As someone who is personally indifferent to Elon, this is not very interesting. To Elon detractors and people who want to see him fail, it just seems like an opportunity to 'dunk' on him in the media. People come and go from companies for various reasons; it doesn't mean they are doomed.
I'm not an expert on Neuralink, but from the little bit I've seen of them, it seems like they are in very early exploratory stages. I imagine this is an area of science that's very well understood. So they are likely blazing a path into uncharted territory. If the least they do is make meaningful contributions or discoveries in this field, I still think that's pretty awesome.
As far as delivering a product? I'm not aware that they've promised anything as far as that. Sure like most Musk companies they talk big game and throw around lofty concepts, but that's always been their strategy to churn up interest in the media.
Presumably they intend to start out with some type of device to help paralyzed folks interact with computers/phones. If they do achieve this, that would be remarkable. But this is realistically still what, just a guess, 3-5 years away?
To me, it's a very, very bad sign if a co-founder leaves before there is a product.
The founders are supposed to be the biggest believers, and the biggest champions, devoting themselves to a cause. If THEY can't believe in the company, it's not a good sign. Their vision and leadership is integral to keeping a company moving towards a goal.
It's not a mark of death (it could be a good thing, if there is conflict, for example), but overall I'd say it's not a good omen of how things are going at the moment.
By the way, I meant to say it’s an area of science that is not very well understood. Sure the concepts of connecting computers to our brains have been around for a long time, but these guys are actually trying to do it, and pioneering modern techniques and research.
Kudos to that, I say. At the very least they are pushing the research in this field forward. At best, if they are lucky, they will achieve things we've only dreamed of. Literally making science fiction a reality.
And honestly, isn't that the point of every company Elon creates? Yeah, I get it, he's a weird, sometimes annoying personality. But I'm glad he exists. His creations are literally pushing us into areas of technology we never thought possible. And it's happening right before our eyes.
I think the hate on Elon has gone too far. We need to learn how to separate ourselves from our own personal opinions, and appreciate what his companies are actually attempting, and more often than not making reality.
It is because Musk is making claims that are not technically possible, and may never be technically possible. Experts have warned him against making these claims, but he holds them in contempt. So it is no wonder that people are leaving.
Knowing how Silicon Valley operates, it would be likely that he would copy somebody's groundbreaking research (without giving proper credit) and make a few changes (to commercialize it, with gimmicky features) and have his claim to fame.
I'm so sick of this overly negative and defeatist way of thinking. You don't have to go back very far in time to find people who would have thought most things in our day-to-day lives were impossible to build.
Maybe you don't realize this, but Elon knows he isn't setting easy goals. He knows they'll miss sometimes, but having a deadline and all eyes on you helps progress. But yeah sure, let's just keep selling ads back and forth forever and push down anyone who tries to do anything good for society. Great input.
Dude, I am an electrical engineering graduate student, who is proficient in controls, among other things. I also am very aware of what is technologically possible, versus what is a marketing gimmick.
Musk has been in trouble several times with the NTSB, which is not an honorable thing to be on your record.
It is unsafe to drive any self-driving car without full human attention, and it is going to stay that way for quite a while. It was irresponsible of Musk not to integrate technologies to ensure that the human at the wheel was paying attention to the road (eye-gaze analysis, posture, facial emotion recognition, hands-on-wheel detection, etc.) for every single second that the driver was operating the car in a mode such as Autopilot.
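To make the idea concrete, here is a minimal sketch of what such an attention gate could look like. Every name and threshold below is a hypothetical stand-in for illustration, not anything Tesla actually exposes:

    # Hypothetical sketch of gating driver assistance on attention signals.
    # All names below are illustrative stand-ins, not real Tesla APIs.
    from dataclasses import dataclass

    @dataclass
    class AttentionSignals:
        eyes_on_road: bool      # from an assumed cabin-camera gaze estimator
        hands_on_wheel: bool    # from an assumed steering-torque/capacitive sensor
        upright_posture: bool   # from an assumed seat/camera posture check

    def assist_may_stay_engaged(s: AttentionSignals, consecutive_failures: int) -> bool:
        """Keep driver assist engaged only while every attention check passes.
        A real system would debounce, chime, then slow down and pull over."""
        attentive = s.eyes_on_road and s.hands_on_wheel and s.upright_posture
        return attentive and consecutive_failures < 3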
It's already there: no seat belt, hands not on the wheel, etc., and the car slows down and parks on the side of the road. I suggest you research and verify this yourself.
It was the experts you disdain with quotation marks (the rest of us call them scientists and engineers) who made Tesla and SpaceX, not Musk. It was not scientists and engineers who scoffed at the ideas of Tesla and SpaceX, but businessmen and other non-creators.
The ideas that Musk is promulgating through Neuralink are decades old; the technology he is using is hardly anything special. What scientists are pushing back on is the unrealistic timeframe within which Musk promises to achieve his sci-fi fantasies, the consequences and repercussions of those technologies, and the short-term effects of overselling and propaganda around these ideas, which are otherwise in fact valuable.
> It was the experts you disdain with quotation marks (the rest of us call them scientists and engineers) who made Tesla and SpaceX, not Musk. It was not scientists and engineers who scoffed at the ideas of Tesla and SpaceX,
This is definitely some rewriting of history. Plenty of engineers gave a long list of reasons why reusable rockets couldn't work: difficulty of station-keeping for the drone ships, cost of refurbishment, safety margins, etc.
> Tesla's little driver assist would not be called 'autopilot' and ending in 4 hour long lithium fueled human barbecues
The fire did NOT last "four hours". It was put out in 2-3 minutes.
“With respect to the fire fight, unfortunately, those rumors grew out way of control. It did not take us four hours to put out the blaze. Our guys got there and put down the fire within two to three minutes, enough to see the vehicle had occupants,”
Hyperbole or not, it sounds like you're now backpedaling.
Moreover, it's the same false narrative (that the fire lasted 3-4 hours) that many media outlets ran, even though there was never an official report or account of the firefighting measures taken.
Fewer people have died from Autopilot than from eating lettuce contaminated with E. coli over the past decade. Everyone still eats lettuce.
Stupid people doing stupid things on Autopilot (watching a film [Florida], jumping in the back seat [Texas]) would’ve died doing stupid things eventually. Can’t blame Autopilot for when the monkey tries to defeat the safety system. No system is foolproof.
That doesn’t make Musk any less of a liar (with his successes amplifying the hopium), but let’s not pretend people aren’t culpable for their failings. The nuance is important.
> Fewer people have died from Autopilot than from eating lettuce contaminated with E. coli over the past decade. Everyone still eats lettuce.
OTOH, more people have died as a result of Autopilot/FSD failures than in the entire rest of the self-driving vehicle industry combined.
And the worst part is, Tesla doesn't even know what caused the failures, since at least one of those deaths was caused by a regression to Autopilot (for example, the Bay Area incident where the car swerved into the divider).
You can blame the system when it misleads its users. To this day, the Model S configurator calls the system "Autopilot" and uses "Full Self-Driving Capability", "Navigate on Autopilot", and "Full Self-Driving Computer" as bullet points of things it can currently do.
The fine print isn't a justification when you're saying as loud as possible that this is a magic machine that does everything. Describing it as such is misleading.
For comparison, here is how Cadillac describes Super Cruise, a comparably-rated driver assist suite:
"Super Cruise" drive assistance feature. That's it. That's all you get when picking out the car. It frames it as an assistance, not a replacement or autopilot. If you go into details, you get "A driver assistance feature that allows hands-free driving under compatible highway driving conditions"
If you search out their detailed marketing materials, the message is consistent:
"Hands off the wheel. Eyes on the road." "Adaptive cruise control". "Stay centered". "Lane change on demand". Note that none of these promise that the computer takes over everything. The closest they come is "the first true hands-free driving-assistance", and that word assistance is absolutely key in framing this as not a replacement for all driving.
He didn't start Tesla. And you can thank NASA for funding SpaceX. In both cases you can thank generous government subsidies for keeping his businesses afloat until they started making money.
Let's also thank the thousands of engineers who made both Tesla and SpaceX work. We need to stop giving one guy all the credit.
One of the goals of Neuralink is increasing the bandwidth between the brain and a computer.
What if a brain is just inherently slow when transmitting? Apparently all languages average around 39 bits per second of information transmission. Perhaps that's because it's how fast we can collect our thoughts?
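If it helps make that 39 bits/s figure concrete, here is a toy back-of-envelope calculation in the spirit of the cross-language study the number comes from. The per-language values below are rough illustrative approximations, not the study's actual data:

    # Information rate ~= (syllables per second) x (bits per syllable).
    # Values are illustrative approximations only.
    examples = {
        # language: (syllables/second, bits/syllable)
        "Japanese":   (7.8, 5.0),  # fast speech, low-information syllables
        "English":    (6.2, 6.1),
        "Vietnamese": (5.2, 7.5),  # slower speech, information-dense syllables
    }

    for lang, (rate, bits) in examples.items():
        print(f"{lang}: ~{rate * bits:.0f} bits/s")
    # All three land in the same ~35-40 bits/s ballpark, which is the point:
    # languages seem to trade speaking speed against per-syllable information.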
I go back and forth on this myself. To some extent, I expect it involves the whole 'thinking fast and slow' set of tradeoffs.
However, there are states of mind I've both read about and experienced, where it seems like you can fit a much more complete, rapid and specific understanding of a complex situation's interconnections in thought than you can get across in the bandwidth of speech. This is part of why we invented slide shows, haha. But in all seriousness, one of the more interesting outcomes here could be the ability to 'project' a complex thought/feeling experience in a way that speeds up the propagation of knowledge & understanding.
Something like this could revolutionize everything from education to mental health. It's super uncertain if we'll ever achieve that, but we could learn some useful things on the way. For me, the biggest questions are, can we significantly speed up the learning process, what types of learning work best or worst, and how can we use this to better understand the brain?
If you haven't seen it yet, here [https://waitbutwhy.com/2017/04/neuralink.html] is an extremely long, cartoon-filled, fanboy description of Neuralink's potential which somewhat discusses these ideas and a lot more.
Language is an abstraction over brain function. Neurons fire electrical signals within milliseconds, in some cases even faster; words come second, as a result of the firing. I'd think that Neuralink is trying to figure out right now what fires where, and at what time, in the brain in order to effectively capture it and transfer it into bits, imho.
That's not the only goal though, the more immediate goal is to restore mobility/communication/freedom for disabled people. Neuralink can be a huge success even if it doesn't bring about a cyberpunk future (which, honestly, do we even want that?)
I think tapping the brain directly could theoretically spare us the need to "collect our thoughts" and let us work with the unsupervised data directly.
I predict the biggest obstacle will remain not having even the most basic understanding of the brain, with reductionist tech optimism remaining as prevalent as ever. Ultimately, I fear understanding the brain will be about as achievable as predicting the weather in silico. Complex chaotic or evolutionary principles give birth to emergent states, and we are stuck with a leaf's understanding of a tree, while spawning little complex black boxes of our own creation, celebrated while they slowly, and unbeknownst to us, eat away at our existential foundation (see: The Economy).
I like the Kahneman "Thinking, Fast and Slow" systems concept and I would extend the metaphor using CPU (slow) and GPU (fast). I would agree that the brain is inherently slow, using the CPU metaphor it has an embarrassingly low clock rate. But each tick is supported by brilliant hardware optimization (GPU) so the data throughput and processing per tick (and watt) is remarkable. So yes I agree it is slow, but then the art of a BCI will be interpreting the state changes between these slow ticks that actually represent tremendous amounts of data.
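A quick back-of-envelope version of that "slow clock, huge per-tick throughput" claim, using rough, commonly cited orders of magnitude rather than measurements:

    # Rough orders of magnitude only; not measured values.
    neurons  = 86e9   # ~86 billion neurons
    synapses = 1e14   # ~100 trillion synapses
    tick_hz  = 100    # peak firing rates are on the order of 10-100 Hz

    # Even at a crude ~1 bit per synapse per "tick", aggregate throughput is
    # enormous despite the comically low clock rate compared to silicon.
    print(f"clock:     ~{tick_hz} Hz")
    print(f"per tick:  ~{synapses:.0e} bits")
    print(f"aggregate: ~{synapses * tick_hz:.0e} bits/s")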
It appears that internally its transmission rate is faster.
For example, try driving a car, vs having two people collaborate to drive a car, with one on the steering and the other on the brakes.
The two person setup is theoretically at an advantage because you have double the sensory input and double the 'brain power', yet the results are far worse.
This suggests speech is a limiting factor for performance for the latter setup.
The User Illusion made the point that your consciousness can only ingest about 40 bits/s. So the reason language is "slow" is that there is no reason to deliver information at a higher rate than people can ingest it.
I don't really blame anyone for leaving one of Elon Musk's portfolio of companies. Reviews of Muskian company culture on sites like Reddit and Glassdoor make it seem like a very poor work-life balance.
You must be over-enthusiastic about the product and willing to work brutal overtime hours for little more than a market rate engineering salary. Hard pass.
> You must be over-enthusiastic about the product and willing to work brutal overtime hours for little more than a market rate engineering salary.
That sums it up.
These environments can produce a lot faster, but they require a steady stream of new hires to replace all of the departures.
Most hires know what they’re getting into. Spending a few years at a Musk company is a career boost, and they know it. Once you’ve had the resume boost it’s easy to pivot into another company as “the person who worked with Elon” where you can get paid more and work less.
I am not sure that invasive brain-interface tech is the best route forward in this space. There is work on non-invasive alternatives that promise sooner results for some use cases, e.g. https://www.mindportal.com
How many other high-profile startups have survived a cofounder split like this?
It doesn't seem disastrous, necessarily – Dropbox probably would have happened with just Drew, but not without him – but it's ... not exactly a happy thing.
It's hard to imagine Stripe without one of the Collisons, or OpenAI without gdb.
I think it's a much smaller signaling risk, especially given how much time has passed and the number of founders (9 total, including Musk).
The Paypal Mafia tends to structure their companies this way... a large, tiered system of founders and junior founders. This is more like losing an early team member than, for example, a Collison.
(Also, it's ironic you cited OpenAI, since it did lose a founder... Musk)
You can always choose not to use a phone or the internet? The difference is that life becomes hard when you want to opt out of a mainstream technology that we do not fully understand.
Indeed, if you want to make a living, opting out of certain technologies is impossible despite their invasive effects. Imagine competing with a programmer who can solve leetcode / code an entire system in a few seconds using such a brain interface. It would be somewhat difficult to attain that kind of productivity with old I/O devices.
This might be true but if something is possible, made acceptable by the majority (even if it’s detrimental to them in the long term), and is capitalizable for commercial gain I doubt you or anyone can stop it from happening. Technology will progress to places wherever it can, the only way to at least slow it is to make it illegal.
I think that Facebook, with their acquisition of CTRL-labs, is further ahead in HCI than most people think, and might even be ahead of Neuralink, especially as Facebook has significantly more money behind their effort.
Yeah, CTRL-labs has been flying under the radar, but they have a ton of top talent and have already demonstrated insane results. I will go further and say they are easily ahead of Neuralink in terms of bringing a higher-bandwidth interface to market; of course, for the more ambitious projects Musk has tweeted about, like mental health disorders, you must go through the brain, but there's no indication the Neuralink approach will scale to this either.
I'm not so interested in the code - unless I have a very expensive custom silicon fab, I probably can't create my own version.
I would like to see them publish extensive dumps of the data their hardware collects from the brain. That seems like an obvious thing to do, since the likely outcome is that researchers will find new ways to analyze and use the data.
Obviously with human subjects there could be severe privacy implications (you might be literally publishing your inner thoughts, even though nobody knows how to analyze them yet). But for animals this should be fine.
I wonder how much ethical concerns contributed to that decision.
I am conflicted about Neuralink. On one hand, if successful, it will improve the lives of many disabled people. On the other hand, fiddling with nature in this manner has never been good for anyone involved. I mean, poking things into your brain to control external devices, does humanity really need such tech?
This is not the first time I've seen something like this. What I don't get is what these people propose to do. Suppose many health troubles are caused by living in cities and by an agricultural diet (which is highly speculative). Is there a way to feed 7, 8, or 9 billion people on a hunter-gatherer diet?
Imagine if you could record your memories and play them back to yourself or others. Consider having an entire generation of family memories accessible to those who have dementia/alzheimer's.
This technology will be life changing, consumer applications aside.
You are really oversimplifying how memories and the brain work. We (IT workers) tend to think of memories as just some information that can be written down, stored, and replayed, but the mechanism is very different. You would need the specific brain to make any sense of it. You can't just upload some memories to a different brain for playback; these memories need to be formed in a long and slow process, just as you can't upload muscle mass to your body with a device. The same goes for retrieving memories: you can't just ask the brain to dump all its memories into a device. These memories would need to be "relived" (retrieved) by the brain at normal speed, and this would be a tedious process, unless we could scan the whole brain at a single point in time, capture the state of every cell, and then also build an interpreter device. There is a lot of biology, whole-body interaction, and chemistry behind memories. You can't rip memories out of the body, because they are part of the whole system and they don't make sense outside of it.
For any new technology it is important to ask, who will have access and how will we treat those without access.
Sometimes new tech enriches all our lives, nearly everyone benefits from renewable energy for instance. In other cases you risk furthering the divide between haves and have nots. When a person experiencing homelessness has their neural device go bad, how do we ensure they aren't just left in the street suffering? Are we comfortable living in a world where the elderly are permitted to suffer from Alzheimer's when we have a cure like you describe, just because they were too poor to afford treatment?
Secondly, it is important to ask, how could the technology be used if greed or maliciousness are the main motivators. Do those downsides outweigh the positives?
I'm moderately bullish on brain computer interfaces, but I caution or challenge people to put on their engineering hats and really think deeply about both the positives and negatives of new technology.
Isn't it already happening now? Many people simply don't go to the doctor because of the associated costs. But this doesn't mean we shouldn't do research into curing diseases. A better solution to the scenario you provided above would be a nationalised healthcare system, or at least something close to it, where people can get a basic amount of care without having to think about money.
Yes, it's exactly what's happening now. It's very much what's happening now. If you don't act deliberately to address your assumptions about access when thinking about the future, you'll end up in the same place as now.
I'm saying looking at the current model and asking yourself, "Is this really the right model?" is a key part of building something new.
You already can; it's called a camera. There's nothing stopping someone from getting a camera implanted in their head, or even just wearing a Google Glass or something like it.
And what of the court-ordered memory extraction by invasive brain surgery? Perhaps not even of alleged perpetrators, but of bystanders. It sounds far-fetched, but we've done worse. The choices we make today dictate the future we live in tomorrow.
Is it really that different from court-ordered email dumps? FBI raids taking people's computers? Before email and the widespread use of computers these things weren't done, but I don't hear anybody saying we should go back to "the good old days" before computers and email because at least then we wouldn't have all this invasive government snooping.
Email dumps don't involve literal brain surgery with the risk of death.
Also, unlike emails, memories are fuzzy, subject to change, and to being dreamt up, drug induced, or all manner of other things we probably aren't ready to cope with, given all the other ethical problems we have in our justice system.
Well, since you asked: yes, I've never been a fan of keeping many pictures or memories from the distant past. It's about living in the present; those moments are gone, you already experienced them, and there's no reason to cling to them, just let them go, forever. We'll all be dead soon anyway and none of your childhood memories will matter in the slightest; circling around the past only burdens you unnecessarily. You'd be better off always looking forward and never looking back.
I like having a window into the distant past because it's interesting. I don't see how that could be a bad thing. Sure, we should live in the present, but doing so isn't necessarily at odds with looking back. In fact, even if you're focusing on looking forward, you still need to consider the past and learn from it.
Probably not at all. These people knew what they were getting into and obviously never shared your worldview on this matter. Why would they have founded such a company otherwise?
I bet the reason for leaving has more to do with a vision they disagree with, some internal interpersonal issues, or something similar. Or possibly there are competitors offering attractive salaries. Who knows? Ethics are very unlikely to be a factor here.
Anyway, fiddling with nature is our very nature. It's what we do. Modern society is the result of thousands of years of us fiddling with all sorts of stuff. I'd argue the net result isn't all bad.
With the hindsight of Facebook and the like in social media, which went from "oh cute, I can be in touch with all my high school friends" to greedy, evil, manipulative corporations with unlimited money and captive audiences, I, for one, am off the cyborg bandwagon.
I really love technology, what is possible with it, and the positive changes it can bring about. But history, even very recent history, has shown how quickly technology can be pivoted to suit vested interests.
I wonder how the Borg episodes would have turned out if they had been written in the FB era. The Borg are basically a neuralinked FB with ships and guns, acting like a horde of Huns.
Probably since a lot of us spent the middle parts of our career watching our democratic institutions get dismantled in front of us using the tools we built early in our careers.
what's the definition of "democratic institutions" and what exactly has been dismantled?
I still go stand in line to vote just like I have the last 25 years. The elections results are announced, the winners take office, the losers go home. Same as it ever was.
After the Covid fiasco, I'd fully expect overcautious bioethics to become controversial itself. We're still counting the extra hundreds of thousands dead because regulatory agencies refused to even consider challenge trials, not to mention dropping the ball in almost every possible way: actively discouraging masks, forbidding rapid tests, and lately hurting trust in vaccines with a purely reflexive banning of AZ and J&J that had a clear outcome from the beginning.
Since Nazi Germany and Mao, I don't think there has been a category of people that caused as much death and suffering as bioethicists. Moderna was clearly ready to bring a product to market, but it took a pandemic for us to get mRNA vaccines. God knows how many people died needlessly because people built a whole career path by chanting "thalidomide".
The pandemic grows exponentially. Starting vaccinations even a few weeks earlier would have saved many lives by bending the curve earlier. Starting them three months earlier might have... actually, probably would have skipped the winter wave completely, at least in the developed countries. Which would have freed extra doses for developing countries as well; it's an interlinked chain that works better the earlier you start, because that's how vaccines work. They're a preventive measure.
And the cost? Infecting 100 or 1000 volunteers in low-risk categories, with an expected death count of under 1. There were websites where people registered in the thousands for this.
You can't test only low risk individuals; that is an absurdly skewed sample set. You'd completely miss the safety and efficacy data on exactly the people you want to vaccinate first.
I feel there's a paradigm difference here. The classical scientific method says "make hypotheses, test, confirm or infirm". That was the gold standard for 400 years. It works very well, but it has weaknesses (what doesn't). The first is that it doesn't specify how you should select the hypotheses on which you want to perform (costly) experiments, and the second is that it's very bad at processing partial data. You can confirm or infirm, with some complicated math on how good the experiment was.
Nowadays we have much better protocols, mostly based on the Bayesian method, which is much more robust at incorporating small pieces of information. One of its principles is exactly this: no matter how small and seemingly useless an experimental result is, it should still budge your understanding of the phenomenon, even if only by a tiny bit. There are some caveats here as well, but overall it's a whole different ballgame from the big, old, clunky scientific method.
So how does this apply to this situation? You have a cycle of 2 weeks for covid infections. A completely-pulled-out-of-my-ass protocol would involve 100 very-low-risk individuals, split into vaccinated and control arms and infected deliberately with various doses. You get some data from this: how well is the vaccine working? How does the disease progress depending on the initial viral dose?
For the next 2 weeks you switch to slightly more vulnerable populations, again split into control/vaccine.
For the next 2 weeks you ditch the control to lower the risk, and add another decade to the age of the population.
Now you have good enough data to start vaccinating the vulnerable population at large.
You keep testing on volunteers, because you're about to vaccinate millions and you NEED extra data, like what's the fastest protocol to vaccinate everybody. Except now you have enough efficacy data to know that you won't kill your vaccinated volunteers, so you start using more mixed populations with higher ages - though the bulk will still be low-risk. And play with whatever variables you need adjusting, like vaccine dosage or booster timing, and adjust the population-at-large protocols based on that.
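As a toy illustration of the round-by-round updating I mean (all the numbers below are invented, and a real protocol would also model dose, age strata, severity, and much more), the shape of it is something like this:

    # Toy Beta-Binomial sketch of incremental Bayesian updating across rounds of
    # a hypothetical challenge trial. All data is made up for illustration.
    control = {"a": 1, "b": 1}   # Beta(1,1) = uniform prior on infection probability
    vaccine = {"a": 1, "b": 1}

    def update(arm, infected, total):
        # Conjugate update: infections count as "successes" for the Beta prior.
        arm["a"] += infected
        arm["b"] += total - infected

    def mean(arm):
        return arm["a"] / (arm["a"] + arm["b"])

    rounds = [
        # (control infected, control total, vaccine infected, vaccine total)
        (9, 10, 1, 10),   # round 1: very-low-risk volunteers
        (8, 10, 0, 10),   # round 2: slightly older cohort
        (7, 10, 1, 10),   # round 3
    ]

    for i, (ci, cn, vi, vn) in enumerate(rounds, 1):
        update(control, ci, cn)
        update(vaccine, vi, vn)
        efficacy = 1 - mean(vaccine) / mean(control)
        print(f"after round {i}: estimated efficacy ~ {efficacy:.0%}")

Every small batch nudges the posterior; you never have to wait for one monolithic trial to confirm or infirm anything.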
Compare this with what we did, which is literally last-century methodology. We pulled a protocol out of our ass: vaccine dose of X, booster timing of 3 weeks. Then we tested it in a large-scale, long-term trial. We confirmed it worked. We used it as-is, and declared that any change would require another large-scale, long-term trial. Look at the definition of the scientific method at the beginning of my comment. It's just not common sense in a pandemic, except for bureaucrats minimizing their personal risk. Which is the reason for my rather justified anger with the system.
Phase three clinical trials for the Pfizer vaccine ran ~16 weeks.
You need a minimum of 5 weeks before you can begin your challenge trials, starting from the same point, for the 3 weeks between doses and the 2 weeks after the second dose where immunity is being tested.
If you run three 2-week rounds of challenge trials after that, you're already up to 11 weeks to finish your challenge trials and begin administering the vaccines to the population at large, versus the 16 weeks for a regular-old phase 3 trial.
Now, five weeks is nothing to sneeze at -- given the other parameters, it's not unreasonable to think that on the order of a thousand deaths could have been prevented by using what vaccine supply we had available earlier -- but it's also patently not the case that we would be five weeks ahead of where we are now if we had started vaccinating earlier: we spent most of the last five months in the US supply constrained, and we didn't wait to start building out our capacity, so we'd probably be at the same point we are now in terms of total vaccinated today regardless of whether we started vaccinating 5 weeks earlier or 5 weeks later.
I get a strong "agile vs waterfall" vibe here, but I guess there's a good point. Russia did just that, i.e. skipped phase 3 clinical trials and progressively vaccinated volunteers. The EU bashed them for "experimenting on their own citizens", but then had to acknowledge they should have been more aggressive with the trials.
At the same time, despite Russia's early start, they've now administered ~20 million doses, to a population of ~145 million, compared to ~50 million doses administered in the UK, to a population of ~67 million.
We have a natural experiment between countries that waited for standard clinical trials before beginning mass vaccinations, and countries that didn't. It's too early to take the "final score" [accurate, globally accepted final death tolls by country probably won't be available for a year if not years] to see who handled the pandemic best, from beginning to end, but right now it doesn't seem like vaccinating first made for a lasting lead in the vaccination race.