You first have to define the opposing party. Are you speaking of the slaves or of the other humans who oppose slavery? Because remember, part of the justification is that slaves and unborn babies are not fully human, and thus cannot be the opposing party.
You're assuming it's like killing. By saying that, you're making a moral judgement. A significant part of the population disagrees with you. That means your judgement is not universal and shouldn't be forced on others.
There are moral principles the vast majority of the population agrees with, but abortion=killing is not one of them. Don't force others to live according to your religion or moral conviction.
He is implying that the algorithms used by insurance companies are optimizing for profit instead of solely assessing the risk of payout. In other words, Tesla drivers are willing to pay a higher premium to mitigate the same risk.
Tesla is a super risky piece of kit for an insurer.
They are a sole source provider of sorts and maintain a tight lock on authorized body shops. Even a car with lots of low volume high cost parts like a Benz has a competitive market of resellers, dealers and service providers.
Tesla is the problem here. Having them be an insurance provider just makes them a bigger problem.
And all 50+ BMW authorized body shops can work on BMW's aluminum-bodied cars? Only a modest % of BMW's cars are aluminum-bodied.
A little googling shows sentences like "Any body shop you are considering to repair your BMW 5 and 6 series, should be specifically equipped to perform special aluminum welding." and "As part of our CERTIFICATIONS by various manufacturers, we are required to purchase state of the art tools specifically for aluminum repair."
That's not what the parent is saying. The parent is saying that, in the same way that retailers might act on the knowledge that OSX users are willing to pay higher prices for goods, insurers act on the knowledge that Tesla drivers are more risk-averse.
It's a pretty bold accusation. Car insurance is probably the most competitive market out there.
Why would dozens of car insurers in multiple states collude to hurt poor Tesla and put themselves at risk of Federal and State sanction? If they were doing that for a tiny niche automaker like Tesla, why wouldn’t they screw over BMW or Lexus?
This whole controversy is just a distraction to draw attention from Tesla's service practices and support for an integrated "car as a service" model where Tesla owns financing, sales and insurance.
So did Volvo for decades. You didn’t need magic Volvo insurance.
When you have a highly competitive market, and most participants make an identical business decision that isn't in their competitive interest (i.e., charge a high margin for a commodity), that's generally accepted as evidence of collusion.
Have you noticed the massive amount of money the competing car companies spend on advertising? Have you ever suspected collusion when you shopped for your own car insurance?
Why do you think people are reluctant to switch? Perhaps because the market is so competitive that there really isn't much difference between the offerings?
I don't switch because it's a pain to even try. I've had experiences in which a price I was quoted turned out to be lower than what I was actually billed, so I don't have a lot of confidence that I'll get a straight answer if I ask for quotes from new companies. I'm not sure I have accurate answers for the questions they might ask. So I keep paying whatever, even though it's more than I'd like.
For all I know, I could be paying half as much elsewhere. This is just the company I've been with for many, many years.
Yes, it is a multidimensional competition. Price is not the only variable.
So is service when you make a claim (I've been "screwed over" twice when making car insurance claims). And so is obfuscation of services provided. So is money spent on advertising. And lobbying.
Every insurance company competes with its own strategy across that mix of variables.
That a human agrees that the product works correctly. These can be silly things like observing that buttons overlap, or observing misspellings.
"What is it that you're testing manually that you can't automate?" I can't automate what I can't predict.
Simple example: Would you get in a robotic car that only had automated testing? What about a robotic flying car?
Real example: 20 years ago, I bought a modem that had an "obvious" problem when calling a dial-up BBS. It didn't flush the buffer appropriately, leaving me with an incomplete screen of text.
I'm sure the modem passed all of their automated tests! But, did whoever wrote the automated tests predict that the buffer should flush if no data came within a very small amount of time? Nope!
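The missing behavior can be sketched as an idle-timeout flush: if no new byte arrives within a small window, deliver whatever is buffered instead of waiting for the buffer to fill. A minimal sketch, assuming nothing about the actual firmware (the class, the 50 ms threshold, and the "screen" list are all invented for illustration):

```python
import time

FLUSH_IDLE_SECONDS = 0.05  # assumed idle threshold before a partial flush


class ReceiveBuffer:
    """Toy model of a receive path that flushes on idle, not just on full."""

    def __init__(self):
        self.buffer = bytearray()
        self.last_rx = time.monotonic()
        self.flushed = []  # chunks that have been delivered to the screen

    def on_byte(self, b):
        # Each incoming byte resets the idle clock.
        self.buffer.append(b)
        self.last_rx = time.monotonic()

    def poll(self):
        # Called periodically: if the line has gone idle, push out the
        # partial buffer so the user sees a complete screen of text.
        idle = time.monotonic() - self.last_rx
        if self.buffer and idle >= FLUSH_IDLE_SECONDS:
            self.flushed.append(bytes(self.buffer))
            self.buffer.clear()
```

A test that only ever sent full buffers would pass without the `poll` branch; it takes the idle case (a prompt like `login: ` followed by silence) to expose the bug.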
Another example: Have you recently used Netflix on Android with Chromecast? It breaks in a lot of corner cases when you put your phone in your pocket. Netflix proudly laid off their manual testers and only does automated testing. Can you really automate every aspect of the Chromecast in a test harness? I bet you can come close, but not achieve 100%.
Manual testing can discover bugs. But the first step toward debugging should be writing an automated test to exercise the manually discovered bug, imho.
That said, bug-driven testing is a separate issue from TDD.
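For example, the button-overlap case mentioned earlier could be captured as a regression test once a human spots it. This is a hypothetical sketch: the rectangle helper and the coordinates are invented for illustration, not taken from any real UI toolkit.

```python
def rects_overlap(a, b):
    """Axis-aligned rectangles given as (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah


def test_ok_and_cancel_buttons_do_not_overlap():
    # Coordinates from the corrected layout. The manually discovered bug
    # had cancel_button at (60, 10, 80, 30), overlapping ok_button; this
    # test would have failed then, and now guards against regression.
    ok_button = (10, 10, 80, 30)
    cancel_button = (100, 10, 80, 30)
    assert not rects_overlap(ok_button, cancel_button)
```

The point is the workflow: a human finds the bug, and the check gets encoded so it can never silently come back.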
What cloud platform services have they shut down? Project churn in other parts of the business doesn't mean I shouldn't trust their container service or load balancers.
I have that one, and it's very nice; the keys feel similar to a regular keyboard's. It also feels pretty solid mechanically when you fold it out. However, it's not that small. The feature on the MS one that allows easy pairing to more than one device is nice. The TextBlade looked interesting because it's very small, but I would be very skeptical without some third-party reviews from people who actually used the device.
Things actually broke at 90nm. Gate leakage went up enough that most analog/RF circuits scaled their transistors back up to 130-150nm dimensions while the digital guys cashed in the density increase one last time.
65nm was the first node where static RAM cells didn't scale with the rest of the digital circuitry. RAM cells are more sensitive to leakage since they have a "writability constraint" where you have to be able to shove enough electrons from outside the cell, through a transistor, with enough oomph to change the state inside the RAM cell.
40nm was where RAM scaling really broke. Designers had to start jumping through amazing hoops to support tricks for the manufacturing guys to eke out the last jump even for standard digital circuits. Most technologies started trading off multiple gate oxide thicknesses to manage leakage current.
28nm was where everything basically went to hell. The strong form of Moore's Law (twice the transistors for same cost) broke. RAM cells are way off the scaling curve. Leakage is everywhere. Multiple gate oxide thicknesses are the rule, not the exception. Designers are jumping through tremendous hoops for manufacturing (aligning all gates in the same direction over the entire chip, for example).
Below 28nm has been a disaster, and, as pointed out, a lot of the sub-28nm stuff is more marketing than actual physical dimensions.
We fixed that problem by dropping back to under 2GHz; the focus on clock rate rather than IPC and energy efficiency was just a stupid detour for Intel and the industry.