Insurance firm to replace human workers with AI system (mainichi.jp)
127 points by sjreese on Dec 30, 2016 | 69 comments



I have no information on this, but the cynic in me thinks this is likely a total non-story/spin. I find it unlikely anyone would be so confident in an AI system they are "planning to introduce" that they'd schedule staff cuts around it.

What's more likely is that staff cuts were already planned. This puts a great spin on a (most likely free or cheap, I'd guess) experimental deployment of Watson.


I only skimmed, but it indicated that 34% of a pool of ("primarily") 47 workers would be replaced.

So ... 16 employees. I suppose that's interesting, but it's not as big as the headline made me believe.


It sounds like they're replacing the adjusters, which I thought was a pretty unstructured aspect of insurance claim management. _That_ was the big story to me. Seems we're all closer to becoming redundant than we thought.


That's not what the article is saying. It says 34 employees (not 34%), primarily from a pool of 47 workers on 5-year contracts, will be "made redundant" (the rest may not have their contracts renewed). That's more than 30% of the 131 employees.

The article also cites other companies doing the same thing, even if no staff cuts are involved for now.


This is reporter math we are talking about here. Expect the numbers to be surrounded by good prose, but accuracy to be dubious.


You are correct -- I misread it.


This is a non story because insurance (and banking) replaced human workers with expert systems a looong time ago.


That's just not true. Source: my sister is a commercial underwriter at a large US insurance company. And while they do have software that gives a suggestion for a premium, it does not know enough to be accurate, so she always has to adjust it.

This is definitely an industry where more automation could easily be done, but the big insurers are a conservative, risk-averse group.


I don't dispute your retort because the post you are replying to was worded to sound very absolute, but...

AI doesn't have to replace 100% of all humans to have a huge impact on unemployment. If you introduce a system that lets 4 people do the job 5 used to do, you're setting the stage for 20% unemployment, which is a huge deal at scale, and the 4-to-5 ratio is very conservative for a lot of modern automation projects.

Are there many fields where AI/robots will be doing 100% of the work in the near future? No... next to none, I'd think... but there are LOTS of fields where they will be doing a huge amount of the work while being supervised by a relative skeleton crew of humans sanity-checking their work.


This is true, in the UK at least. Insurance is the most trailing-edge industry I've worked in... incumbent firms are full of servers under desks, proprietary bloatware with no APIs, spreadsheets-as-databases. In my experience tech could make large swathes of the insurance sector redundant without even needing to resort to AI.

Insurance doesn't suffer the same degree of competition as other parts of the economy... it has a triple-walled garden of hefty regulation, significant capital requirements, and the chicken & egg problem that you already need to have relationships and experience in the insurance sector to do business there... or spend time and money buying them in. Even the banks white-label their insurance products from insurers.


Not to marginalize your sister's job, but couldn't that essentially be training for an "AI system" (NN or otherwise), and over time it will reach similar conclusions with increasing accuracy? Maybe they're doing that at her work already behind the scenes.
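
A minimal sketch of what I mean, assuming you log the software's suggestion alongside the underwriter's final figure (all names and numbers here are invented):

    # Hypothetical: learn the underwriter's adjustment on top of the
    # rating software's suggested premium. All values are invented.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Each row: [suggested_premium, building_age, prior_claims]
    X = np.array([
        [12000, 30, 1],
        [8000, 10, 0],
        [15000, 55, 2],
        [9000, 20, 1],
    ])
    y = np.array([13500, 7800, 18000, 9600])  # premium after human adjustment

    model = LinearRegression().fit(X, y)
    print(model.predict([[11000, 40, 1]]))  # predicted adjusted premium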


An AI system could probably get close, but the act of capturing all the variables might cost the insurance companies more than the current system, which involves a lot of 'gut feelings' about what is important or not for a policy.

There is also licensing involved; in my sister's case, she had to earn a CPCU before she could do her job on her own.


>which involves a lot of 'gut feelings' about what is important or not for a policy.

Edit: I see you answered this same question below. Whoops.

Can you give some examples?

I thought it's pretty well understood that the "gut feelings" of experts have been and will continue to be outperformed by algorithms for these sorts of tasks. My imagination fails when I try to come up with something data-based that an agent would see and a computer couldn't.


Thanks for the info, and yeah, licensing was something I hadn't even considered, but it could be a big liability for a company relying on machines (side note: I wonder how self-driving cars handle this?).


You fundamentally overestimate the current and near future capabilities of machine learning.


My naive view from being a customer of various insurance companies is that there's a series of lookup tables and charts that does things like "male age 18-25, no history of smoking, here's your rate", with some room for variance based on other factors (some linear or log scale for family history or blood test results, for example). On the surface, those all seem like logical computer functions to do, and as an extension things that would work well training a NN off of after starting with the basic lookup tables. Can you go into more detail on what I'm overestimating?
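
For concreteness, this toy sketch is roughly my mental model (bands and rates entirely invented):

    # Toy sketch of the lookup-table view of rating. All numbers invented.
    RATE_TABLE = {
        ("M", "18-25", "non-smoker"): 95.0,   # base monthly rate
        ("M", "26-35", "non-smoker"): 60.0,
        ("M", "18-25", "smoker"): 150.0,
    }

    def age_band(age):
        return "18-25" if age <= 25 else "26-35"

    def quote(sex, age, smoker, family_history_factor=1.0):
        key = (sex, age_band(age), "smoker" if smoker else "non-smoker")
        # room for variance based on other factors
        return RATE_TABLE[key] * family_history_factor

    print(quote("M", 22, smoker=False, family_history_factor=1.1))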


That's the most trivial form of term life insurance that you're referring to.

What table would you consult to come up with a rate for E&O and liability insurance for the CEO of Uber? Remember your goal is to make money.


That's a great point. Even if it were, say, 99% accurate, that 1% would likely be the outliers that make or cost the most.


Agreed that 1% could cost or make the most... now, do you trust a human or a computer to make the final decision? My guess: humans will end up costing more.


> it does not know enough to be accurate, so she has to always adjust it

What kind of stuff does it miss?


Market stuff, like how the company's entire book is doing (what losses they've had so far that year), regional balancing, past payouts to the insured, raising or lowering the overall risk of the book, and just plain sales (sometimes you have to take on more risk than you'd like to meet a sales quota, but you can balance that by making other lower-risk clients pay more).

It's also complicated because it's hard to model buildings correctly. My sister likes to tell a story about a total loss she had on a standard masonry building with sprinklers. From her point of view, this is the perfect type of building. They're hard to catch on fire, and if there is a fire, the building puts it out. At worst, your damage is limited to the one room or section where the fire was, since the internal masonry walls keep it from spreading. This building burnt down because the solar panels on the roof caught fire, and the fire spread across all the panels over the entire roof. The sprinklers never got a chance to go off because the roof collapsed. It's certainly possible to model this one case, but the problem is there are a million one-off cases like this, and we don't know about them until there's a loss. Right now, human intuition from the underwriters and inspectors is what they use to try to cover the gap.


But you're giving an example of how human intuition failed, since she insured the building and had a total loss.


I'm not in insurance but I assume that it is the nature of pooled risk that there will be a million one-off cases (or N cases where N is the number of members in the pool). Given that, is it possible or even desirable to make changes based on what happened to this specific insured business? Would the cost of that change, be it increased premiums or an outright refusal to insure, outweigh the cost of a smaller pool? I guess that's something for the AI to decide.


Why would it matter how the entire book is doing? Why can't the AI system take all of this into account, assuming it has access to said data? Further, why should the AI system be used to run the entire company, as you suggest? That is, it doesn't need to take into account every aspect of the business to replace a large portion of the workforce.

> This building burnt down because the solar panels on the roof caught fire, which spread across all the panel over the entire roof. The sprinklers never got a chance to go off because the roof collapsed.

Type 3 construction that burned to the ground? It doesn't take a human to realize or intuit that even Type 1 buildings burn eventually (ask a firefighter!). If there is a way to truly model every possible variable when insuring a building, I will put money on humans doing a worse job than expert AI systems. You're asking the AI to not only predict but _know_ the future, while not asking that of the human... seems silly.


A friend told me about a Russian bank that tried to use a true AI system. After initially precise results on the rate of defaults, after one year they had to scrap it and went back to using logistic regression with heavy human review. The biggest problem was that when the AI erred, the errors were huge, with no way to know what caused them.


Having seen how these expert systems are created, replacement is way overdue. For example, a team of 10 "senior mortgage professionals" spends weeks figuring out what difference it makes if the co-applicant's age is 60 or 62. They argue, and arrive at the conclusion that it increases the risk by 0.01 (whatever).

Even the simplest logistic regression training + evaluation will provide value to most insurance, mortgage, or other money-related decision/"expert" systems.
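
For example, a minimal train-and-evaluate sketch on synthetic data (the feature names and the risk rule are made up):

    # Minimal logistic regression for a default/no-default decision.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    # Invented features: applicant age, co-applicant age, loan-to-value
    X = np.column_stack([
        rng.integers(25, 75, n),
        rng.integers(25, 75, n),
        rng.uniform(0.3, 1.0, n),
    ])
    # Invented ground truth: risk is mostly about LTV; ages barely matter
    y = (X[:, 2] + rng.normal(0, 0.15, n) > 0.85).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    # The fitted coefficients answer the 60-vs-62 committee question directly
    print("coefficients:", clf.coef_)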


This is a very ambitious attempt to replace complex and messy human powered business workflows with an AI system. Most current business uses of "AI" tackle tightly defined problems, like assigning invoices to a certain group, filtering out spam etc. Even if the implementation is successful, they will still have to retain quite a few people to deal with exceptions generated by the automated process.


I would love a 6-month or 1-year follow-up story on the success or failure of such initiatives. Hearing about the plans of a company to implement a massive software system that affects core business processes is akin to hearing about someone's plans to change a habit: things often work out as planned, but they can also fail miserably.


Cf. blockchain initiatives. All the announcements get wide publicity; their failure or quiet scrapping much more rarely does.


Yes, but it is like a VC portfolio: the payoff on the few that make it to prime time is huge.


I thought the studies showed that changing habits and implementing massive software systems tend to fail?


Ha, I guess I was giving the benefit of the doubt.

Certainly you hear more about the failures, but that may be because they get better press. Not sure about any studies; I'd love to see a link or two if you have them lying around.


Probably just my having soaked in the conventional wisdom. I looked around quickly and found the following: http://www.umsl.edu/~sauterv/analysis/6840_f03_papers/frese/

quotes:

"At companies that aren't among the top 25% of technology users, three out of 10 IT projects fail on average."

AND

"On average, about 70% of all IT-related projects fail to meet their objectives." In this case Lewis includes not only projects that were abandoned (failed), but also those that were defectively completed due to cost overruns, time overruns, or not providing all of the functionality that was originally promised.

The difference between the two quotes is that the first seems to count a project as a failure only if it was abandoned completely as unachievable, whereas the second also counts projects that did not achieve all their goals.

It seems to me that if the project is big and central enough to a company's processes, it might be worth betting against the survival of that company.


The bottom 50% have no idea because they have no consistent objectives.


Thanks for the info!


Changing habits works when it is done in lots of small steps, like running 5 minutes longer each day. Enterprise systems are like saying, "Next year I will run a marathon."


And halfway through, it's suddenly swimming and not a marathon, you need to wear a tie, and whether you'll be racing in water or in lava is to be decided in the future.


Maybe, if the software system relies on humans to some degree...


Such studies are inevitably biased to find failures, though.

Successes rarely get written up, so finding details on them is much harder.

It also depends on what you mean by "fail".

If "fail" means "ran over budget and hit loads of unexpected problems", then yeah, most probably do "fail".

If "fail" means "shut down prematurely and abandoned without hope", then AFAIK you're really only talking about stuff by Google and Microsoft; most other software houses would fail along with their software, and plenty are still around from the 90s.

Intel, IBM, Apple... not so much real failure, for example.


With AI starting to actually supplant jobs, something sad has just occurred to me:

I foresee our (the US) government (and probably others) restricting the development and deployment of AI systems that would supplant human jobs, merely for the sake of ensuring people are employed.

I think it's sad because AI would present a real opportunity to advance our society significantly.


Based on our past history, it seems unlikely to happen. Any country which institutes such policies risks falling behind. Also, who wants to be branded a Luddite?


The lesson you should be taking away from the Luddites is not just that they didn't foresee the long-term benefits of the technology, but that they violently opposed the progress. The potential for violence is the lesson. Get rid of a large portion of the population's means of making a living by replacing their jobs with AI, and there will be violence.

The process of replacing jobs with AI needs to go slowly enough that the risk of large-scale violence is minimized.


I have the feeling that before we put the support systems in place, like universal healthcare, you'll need a PhD to get a janitor's job, and be thankful at that.


My wife was working for an insurance company until September. Despite being one of the most efficient workers, and having full time coworkers retire, she remained a temp her whole time there. In fact, they stopped hiring full time for many positions because new software systems were in the works. Software devs occasionally came and shadowed my wife to see how things worked. I secretly hope their profits have been hurt by treating employees like they're disposable, but I doubt it...


I'm wondering what jobs, according to people on HN, are most likely to be replaced by AI in the coming years.


Almost everything. I work in digitization in the public sector, and there are almost no business processes which can't be improved or turned into a self-service with minimal supervision; we've been doing it for a while now.

The reason it has been going on without anyone really noticing is that very few people get fired because of it. The real effect has been a slowdown in new hires as people retire.

The reason for this is that whatever efficiency you free up isn't directly tied to a single job function. Say you build a self-service system for handling employee transport costs. This might free up an entire job function's worth of hours in an HR department, but those hours are coming from 6 different employees. It doesn't lead to anyone being fired, but eventually you'll automate enough systems that someone retiring won't need to be replaced.

Programming isn't even a safe zone. I mean, think about how much time you save by using things like modern frameworks and the interconnectivity of everything and then compare that to how it was 25 years ago.


Drivers, factory/warehouse workers (ongoing), clerks (ongoing), and secretaries.

And I'm in the crowd that doesn't think we'll see it replace all jobs in a field -- just 60-90% of them, which causes major labor problems when we're talking about common jobs.


But some jobs can perhaps be replaced by "lower-education" jobs.

As a (perhaps contrived) example, family doctors could be replaced by lab workers, who take simple measurements, feed them into a computer, and the AI does the rest (i.e., correlating conditions to a large number of existing patient files, and hence referring patients to specialists).


To use your example:

Suppose right now, we have 1 doctor, 1 nurse, and 3 lab techs per 50 patients per day. I think technology generally lets us do the same job with just 2 nurses and 1 lab tech. So we lose 40% of the jobs from the higher paying side and probably more like 50-75% of the pay.
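
Back-of-envelope, with invented round-number salaries:

    # Salaries are made-up round numbers, just to illustrate the claim
    doctor, nurse, tech = 200_000, 70_000, 50_000
    before = 1 * doctor + 1 * nurse + 3 * tech   # $420,000 across 5 jobs
    after = 2 * nurse + 1 * tech                 # $190,000 across 3 jobs
    print(1 - 3 / 5)             # 0.40 -> 40% of the jobs gone
    print(1 - after / before)    # ~0.55 -> ~55% of the pay gone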

In less contrived examples, I think we lose a lot of the jobs in the 25th-75th percentile range, which is the middle classes.

So it's not that we see no jobs, it's that we see bad jobs and the elites. The middle gets automated out, and it's starting to be faster than people can retrain.


> Suppose right now, we have 1 doctor, 1 nurse, and 3 lab techs per 50 patients per day. I think technology generally lets us do the same job with just 2 nurses and 1 lab tech. So we lose 40% of the jobs from the higher paying side and probably more like 50-75% of the pay.

You're making the extreme assumption that the amount of medical care demanded remains constant despite the fall in prices (e.g. employees: 5->3, patients: 50->50). An alternative extreme is that employment remains fixed while falling prices improve accessibility (e.g. employees: 5->5, patients: 50->90).

In reality we may easily end up somewhere in between (e.g. employees: 5->4, patients: 50->70). This also highlights two aspects of automation: on the dark side, it reduces demand for work, on the bright side it improves availability (here, of medical care). If as a society we're able to deal with the former (e.g. by conjuring up new occupations) we stand to improve our future significantly through the latter.


I agree automation increases availability. I never implied it didn't have benefits -- just that we're likely to see the disappearance of middle class jobs because we'll be able to fill new ones with computers faster than with people.

Even if it increases employment and availability (4 nurses, 2 techs, 100 customers), we're seeing a decrease in the total income provided -- we traded 1 doctor and 1 tech for 3 nurses. Less pay spread across more people.


Unless we're positing a singularity, I don't think the public's demand for better treatment is remotely close to its satiation point, or that computerised efficiency and accuracy will reduce the demand for nice, qualified middle-class people to explain what the computer is recommending. That's even before we've started considering whole new classes of middle-class job that mass adoption of technologies like gene sequencing could entail, or the largely justifiable layers of regulation and respect that give medical professionals a lot more power to keep their jobs relevant than the average union member has.

I don't think "surplus of trained doctors" is a real problem I'm likely to see in my lifetime, never mind a likely consequence of the foreseeable future improvements in medical data collection and diagnosis.


Expect a lot of doctors to be replaced, especially for disease diagnosis.


On what time scale? Eventually, all of them.



That's not AI; that's automation and improving their systems, combined with cutting the number of branches (as people do more online vs. needing to go to a branch).


Ten years ago we put a fully automated subprime mortgage loan underwriting system on the market. Guess what the most frequently requested feature was: manual overrides (exceptions)!


Feels like many insurance companies have replaced humans with a simple printf("No payout"). They only pay in the rare case when someone manages to raise a social media shitstorm.


This is believed by many people on HN. It's false.

A thing you might find useful to look at: loss ratios. Loss ratio is industry jargon for claims paid plus claims-related expense over premium income. GEICO's, for example, is 82.1: for every $1 in premium they take in, they pay 82 cents in claims.
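
In code terms (the split between claims paid and claims-related expense below is invented; only the total matters):

    # Loss ratio = (claims paid + claims-related expense) / premium income
    premiums = 1_000_000
    claims_paid = 760_000
    claims_expense = 61_000
    print(f"{(claims_paid + claims_expense) / premiums:.1%}")  # 82.1%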

The industry is regulated to a degree that few are, in both the US and Japan (and many, many other countries). If your loss ratio is too low, your friendly neighborhood insurance regulator will not look favorably upon that fact.


Did you know the original model for insurance was to get a loan big enough to cover your clients' losses and charge them premiums sufficient to cover the cases where a payout is required, plus the interest on the loan? To make a profit you would invest a portion of that capital, since not everyone's house will burn down at the same time. That all changed in 1945, when the McCarran-Ferguson Act exempted insurance companies from antitrust law; companies then began sharing information to set prices, and now they make around 20% profit on the premiums, consistent with the loss ratio you cited.


I have had good luck with complaining to my local health insurance regulator.


Not according to the loss runs I see at the insurance firm I work for.


Watson doesn't work. This is more IBM spin coming out via clients who want to prove to themselves that their millions of dollars weren't wasted.


Training output set:

    Denied
    Denied
    Denied
    Denied
    Denied
    Denied
    ...


hello 2017


Great news for the stockholders!


I wonder how long before somebody in the US Senate introduces a law designed to regulate the impact of AI on human employment.


Why can't we get AI to replace some of our government? Maybe our elected officials won't say stupid things. Maybe lobbyists won't be able to buy influence.


We'll use Bernie Sanders and Ted Cruz as the training set.



