It's worth noting that highly regulated industries are not necessarily bad for startups to enter, exactly because they throw up particularly high barriers to large companies. Google can't enter health without accepting the full baggage that comes with that - people worried about their privacy, every random nutter looking to sue them and get a payday, FDA breathing down their neck every step of the way, etc. A small startup can be much more nimble and targeted and address a very specific niche in a clever way and be very successful.
Most people don't worry about their data privacy, or else they wouldn't be on FB, take those buzzfeed quizzes, etc.
The P in HIPAA stands for Portability. At its heart, the act was supposed to guarantee patients access to their health information, not bring health data liquidity to its knees.
This is Jonathan Bush, of Athena, testifying (read: ranting) a couple weeks ago about regulations and innovation in healthcare. The big takeaway is that healthcare specifically sets these rules with incredibly high barriers to entry, and then at the last minute does a complete 180. We've seen it every step of the way with the EHR incentive program, CEHRT, ICD-10, payment reimbursement, etc.
https://www.youtube.com/watch?v=CekfvGDiab8
How many people do you see posting all of their conditions and the medications they're taking on Facebook? I can't think of any friend who is THAT open.
Also, whether or not people care about their privacy doesn't mean it shouldn't be protected. Not just for themselves, but for their family as well. Let's say I don't allow my medical information to be used, but my brother does. If he has a genetic disease and a potential employer finds out about it, they might decide not to hire me because there's a chance I may have it as well, which could cause problems if it ended up needing treatment. Laws that prevent discrimination are all well and good, but the problem can be proving the reason they decided not to hire you.
Never said privacy shouldn't be protected, only that it's not exactly valued by BOTH sides of the equation (and of course, YMMV). Up until recently (Omnibus rule), HIPAA had little practical power in that department from both an audit perspective and a fine/mediation perspective. The largest fine levied? It was for inadequate patient access to their own health information, not a security breach.
And even with the new rule, there are currently no regulations surrounding de-identified PHI being used for marketing purposes, research, or sold for whatever other purposes. So now you have data warehousers like IMS spinning up software dev depts with the specific goal of harvesting patient data.
As far as identity vs membership vs attribute disclosure, I linked to a good study below.
I find it interesting that there are more comments in the average HN healthcare-related thread than on any of the recent NPRM. Hell, there are more comments here than people who actually showed up for FDASIA.
I support regulation in a lot of cases, and feel that the FDA took a reasonable approach to the recent mobile medical device guidelines. What I, and pretty much everyone else (other than the AMA), rail against is the indiscriminate flip-flopping on what regulations, standards, etc. will be required, and on what time horizon.
Frankly I'm okay with health data being illiquid. Everyone should be absolutely terrified of this data getting into the hands of the same people that try to predict if you are pregnant to sell you crap, or use your credit history or Facebook posts to deny you a job. The future in that direction is "Google Gattaca."
I'm not. The illiquidity is why different healthcare specialists can't share data about me without resorting to a ten finger interface that leads to transcription errors. I want my general practitioner and my spine specialist to be using the same database for records, test, and scans. I'm okay with being embarrassed if it means living longer and better.
I wish I believed we could make a database shared between my GP and my spine specialist without my records also being shared with all insurers, employers, marketing companies, security services, medical researchers, credit rating agencies, and anyone who slips any hospital employee a hundred bucks.
So do I, but given the world as it is, wouldn't you much prefer some idiot marketing guy spamming you on the basis of your medical records, than a screwup in the chain of communication between your GP and spine specialist leaving you crippled or dead?
Yes, of course. But "idiot marketing guy" isn't the worst case scenario, nor is it even the worst plausible scenario. Job loss and inability to get health insurance aren't hypothetical concerns... laws have been written about this because they happen, at scale. While I'm inclined to think the regulations as they stand today are heavy-handed and more expensive than they need to be to get the job done, that doesn't negate the fact that they exist for a reason, a reason that isn't just hypothetical but happened a lot.
Job loss and inability to get health insurance are serious issues, granted. I will suggest the root causes of those need to be tackled for other reasons anyway, starting with the utterly insane practice of having employers involved in health insurance.
It seems almost like the real issue is the insurance schema that makes medical care inaccessible without third-party money.
This notion suggests that the right place to start the kind of big-data medical disruption that could work would be a nation with a weaker or nonexistent medical insurance framework.
It's not just insurance. Companies these days are using credit history as a reason to deny people employment. The credit card companies will hand out this information to almost anyone. Imagine what these folks will do with medical data.
...and the hospitals are using credit card data in their population management models. Oh, you've stopped by the liquor store 3 times this week and now presenting with pancreatitis? Sorry, you are now in our "at risk" billing class.
Just cause I'm feeling particularly paranoid today.
I don't have anything requiring regular medical treatment, but my medical records identify me as someone who has suffered mental health problems, who regularly drinks to excess, who habitually uses cocaine, and who caught an STD in a nazi-themed prostitution orgy while I was a sex tourist in a deprived country.
I'd prefer to retain my privacy and take my chances on the medical miscommunication front, thanks.
I can't help but think that the nazi-themed prostitution orgy part doesn't need to be in the medical records. You should probably talk to your practitioner about logging discretion. ;)
It's the asymmetry of it. A person might have only one GP and one specialist. That is fairly easily managed. It's not good I agree. But it's MANAGEABLE.
Once there is a single large integrated database it's a HUGE target for people to creatively re-interpret the rules such that they can sell access to it. It's also a hacking target too since doctors tend to be a real pain in the ass about collecting all kinds of information that's not medically necessary but perhaps necessary for billing or in case you try not to pay your bill.
Right now this information is federated meaning that there's no one single point of failure. Hospital X's systems might go down, but Hospital Y's systems are still up. That means that unless something REALLY BAD happens across all the hospitals you're not going to die because a computer crashes.
I am far more on-board with good interchange protocols (Diaspora) than with one large centrally managed database (Facebook).
This is a false dichotomy. Can't we have secure, somewhat non-portable EHRs with super strong "won't release without auth" procedures, or perhaps, as someone else implied, the data should be transferred via sneakernet on USB or similar?
And how common, as a ratio, are crippling medical screwups related to multi-practice miscommunication? I'm sure the absolute number is non-zero, but risks must be weighed. If one person having a crippling issue saves 100,000 people from having their personal data released against their will...
Highly secure systems are possible in theory; we just don't have them today, and we aren't likely to have them tomorrow either.
Crippling medical screwups that could have been prevented by having the right information available at the right time are actually shockingly common. I don't remember the specifics, but I've seen claims to the effect of a five digit annual death toll in the US alone.
I'd like for that database to be something that I control. That is, something that I carry with me, like a USB stick, and that I have the software/tools to view. Then I could actually read through any notes and maybe take a more active role in my health.
I would like that as well, but do you seriously think that would work for most people? Would you want your less technical loved ones to be responsible for the physical security of their data and carry it with them at all times?
I think that's over-thinking it. Medical alert bracelets already exist; I can't imagine it'd be too much challenge to embed a ruggedized USB stick in one, and people generally don't worry about less technical people failing to remember to wear their bracelet.
That's a really good point. It wouldn't take much more miniaturization than what we already have to put that in an earring or something else people wouldn't mind having all the time.
It wouldn't be that much different from paper medical records. Sure, it would enable some interesting attack vectors, but I don't see that as a compelling reason not to do it. It also wouldn't have to be mandatory. People who are comfortable with it can use it, and those who aren't don't have to. Much like banks: there were (are) lots of people who don't trust banks and choose not to use them. The same would be true for something like this.
I don't think rayiner is worried about being embarrassed, he's worried about (for example) not getting a job one day because his private health information has made its way into the hands of a potential employer.
Can't we just make it illegal for an employer to use this information? While not perfect, ask any Black person, it seems preferable to the mess we have today.
Great... now your doctor can only read your patient file once you're in the room. I'm sure that won't affect his or her bill rate.
As it is, I only get to see my doctor for three and a half minutes when I need help, after 5 minutes with a PA, and I don't know if the PA has even had a chance to communicate any of what I told her to the physician, so I have to write everything down lest I forget to repeat something important. Now it sounds like you want to remove the chance they might have actually reviewed my history before I get there, by having me carry it around in my pocket with me?
> "Most people don't worry about their data privacy, or else they wouldn't be on FB, take those buzzfeed quizzes, etc."
The first part of the sentence is flawed, so the latter doesn't follow. It implicitly assumes that people even understand how things work (they don't, imho) and therefore can make a sound judgment, based on that knowledge.
For example, I could argue that people simply don't value their future selves (ie 30+yrs), otherwise they wouldn't be eating all this junk food now and never exercising. In some sense that's true, but it's mainly driven by ignorance.
Most people don't understand how data privacy works. A huge number of people don't even realize their Facebook posts can be viewable to the public let alone how that data can be collected, analyzed and shared with third parties. Besides, even the most completely oblivious Facebook users generally don't throw their entire medical history on their wall.
>It's worth noting that highly regulated industries are not necessarily bad for startups to enter, exactly because they throw up particularly high barriers to large companies.
That doesn't make any sense at all. Large companies are either A) already past any barriers to entry or B) have enough money to bust through any barriers to entry. There are probably very few real world situations where being "nimble" is enough to overcome onerous regulations. The reality of the matter is that you just need a lot of money to pay lawyers.
It sounds very contradictory. In most heavily regulated industries, such as healthcare or insurance, one of the key economic advantages is a better understanding of the complex law; startups cannot turn that inefficiency in their favor.
I'm not saying you don't need to understand the regulations and law - you absolutely do. But you do have an advantage in being able to be less conservative and more innovative in how you interpret them. For examples, take Uber and AirBnb - they are both pushing the boundaries of highly regulated industries. An existing hotel chain couldn't do it because they have too much to lose, and target such broad markets. But a scrappy startup can.
I'd argue they are at a disadvantage. Small startups can't eat the cost of spending 2 years developing for a particular standard just to have it changed on the due date.
That's a fair argument, but in practice it isn't true. The pace at the larger health IT shops is FUCKING GLACIAL. I've been through getting an EMR through two meaningful use stages in a fifth of the combined time a typical larger shop would take, with a tenth the manpower.
In verticals like this don't dismiss the advantage of being lean, nimble, and wholly stocked with incredible and enthusiastic people.
I wish more people saw healthcare this way, and I guess now I'll probably use Uber and AirBnb as an analogy to help them too. THIS is the way to think about health, and how to treat its risks and limitations.
Maybe sometimes, but that's putting it strongly. Just avoiding groupthink, or the weight of ecosystem apathy that tends to grow out of these depressing verticals that hire incestuously from other shops that have been building shitty products really slowly for years, goes a long way.
I don't mean to downplay the ridiculous bureaucratic and regulatory crap, but I don't think it's fair to lay the blame at its feet either.
Honestly, health is just as Ripe For Disruption™ as any startup-interesting vertical. It takes more stomach and open-mindedness than anything.
It's a bummer. Page/Google are being limp and lazy on this. With opportunities as broad as they have, they can afford to Do Hard Things, where Hard is something a little bit out of their wheelhouse. That's okay I guess, but it's a big shame because health IT could really use some shops with lots of leverage and actual engineering talent to help move the needle when it comes to the constantly backward standards.
[FWIW - I'm a decade and a half long healthcare startup hacker]
Well-loved YCombinator startup Stripe spent a bunch of its early time wrangling with the complexity of the payments industry precisely so that their customers wouldn't have to.
Legal frameworks are definitely tractable, and investing the time to navigate them more effectively can give you the same sort of competitive advantage as mastering a technology.
This is a great point. Competitive advantages come from many places. Working in 1) an unsexy area, 2) a complex area, or 3) a regulated area can all be great advantages if you can manage it.
This may very well be the usual "we currently have no plans to enter the market of...", stated a few months or a year before some product/service is launched, in precisely that market.
> Maybe: 'One way of misrepresenting what you said..'
Then.
> Allow me to put words in your mouth... I don't want you people talking about other ideas related to this article.
Consistency?
> I (Multics) would like to talk about how regulating industries like medicine prevents Google and others from investing in them and potentially benefiting society.
I think you understand my position. Except, what I said was framed in the context of the parent comment. And it was an assertion, not a request.
Basically, he's saying that it's too heavily regulated for them to want to dip their toe in. Seems reasonable, it's the same reason I never bothered with the healthcare ideas I've been interested in. Too many landmines, and not enough latitude to try creative things.
As a kind of note and response to all of your other responses, the issue isn't actually anonymization, but being able to correlate it with other data.
That is, if health data were anonymized, and was done right, and was made unable to be correlated with any other data, it likely would be sufficient. It's when you start allowing it to be correlated with personally identifiable things that it ceases to be anonymous.
That is, sure, let's take a case where you have a super rare genetic disorder. That, combined with the time in 2005 where you broke your leg, is sufficient to distinguish you from every other person in the country. In short, you have a unique health profile.
So what? Unless there is further information, that can't be traced to you. As an example, it's when we start saying "Ah, and the person is receiving treatment at (facility)" that we now know where you live. It's when we start correlating it with usernames that we start getting an internet trail. It's when we start correlating those with forum profiles that we get a real name, and now we know who you are.
The only other way someone could match that profile with you, is to have access to the profile, and to know you personally. Otherwise it links nowhere.
I agree the risk is huge; people don't do it right. But anonymous health profiles are -not- in and of themselves dangerous; it's when details linking them to further information leaks out that it's a problem.
But, pragmatically, while yes it would be incredibly hard...has anyone here read the rights they're signing away when they go to the doctor? Does everyone here trust every system a doctor uses, every system a health insurer uses, and every system used by marketers and researchers that the feds -do- allow to have access to this data? The real risk of Google would be that they could correlate it with so many other things about you; but the health insurers still have your medical history combined with all your PII.
Although your health records may have some legal protections, health care is only one determinant of health.
Other determinants of health, like your gender, food choices, lifestyle, income, driving history, family history, physical environment, education, social network, etc. have all been heavily mined.
Much legally protected stuff can largely be inferred anyway. There aren't too many people without peanut allergies that haven't bought anything containing peanuts for the past 5 years.
There are only two changes that would make it very easy for me to accept revealing my medical data, for science, research or just about anything else.
1. No insurance companies involved as health care gatekeepers. At the moment, they are very much an adversary to me.
2. Strong, enforced laws against employers discriminating for health. I'm sure the letter of the law currently sounds strong, but I'm assuming you have to sue to right any wrongs. Advantage employer.
Neither one of these will happen in my life time, because insurance companies make huge profits on throttling our healthcare, employers will always like flexibility to do what they want with the law, and both camps fund Congress.
The anonymization rules for PHI are strict enough that it would make a lot of the interesting mining you could do difficult if not impossible. Specifically the restrictions related to dates and locations.
It's a lot trickier than this. Suppose you have a rare genetic condition that affects 0.005% of the population. It takes very little additional information to single out a person when the first thing you do is rule out 99.995% of the possibilities.
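The arithmetic here is worth making concrete. A minimal sketch (with hypothetical, illustrative frequencies, and naively assuming the attributes are independent) of how quickly a few "anonymous" attributes single someone out:

```python
# Toy re-identification arithmetic: under the (hypothetical) assumption
# that each attribute occurs independently, the expected number of people
# matching a full profile is the population times the product of the
# individual attribute frequencies.

def expected_matches(population, attribute_rates):
    """Expected number of people matching every attribute in a profile."""
    pool = float(population)
    for rate in attribute_rates:
        pool *= rate
    return pool

US_POPULATION = 320_000_000

# A condition affecting 0.005% of people already narrows 320M to ~16,000.
rare_condition_only = expected_matches(US_POPULATION, [0.00005])

# Add two mundane facts -- say, a leg fracture in a given year (~1% of
# people, an assumed figure) and a 5-digit ZIP code (roughly 1/40,000 of
# the country) -- and the expected match count drops below one: the
# "anonymous" record now describes a unique individual.
with_mundane_facts = expected_matches(
    US_POPULATION, [0.00005, 0.01, 1 / 40_000]
)

print(rare_condition_only)   # 16000.0
print(with_mundane_facts)    # ~0.004 -- effectively one person
```

The point the parent makes falls out directly: once you start by ruling out 99.995% of the population, even coarse, individually harmless attributes finish the job.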
Another possibility is they thought it was a cool idea, then found out how regulated it was, how it would be nearly impossible to add any features without massive government oversight, and got out.
I was an intern on the Google Health team and witnessed many problems unrelated to external barriers.
1) The original codebase was a nightmarish mess of Java/GWT code that did very little (my first real-life encounter with a FactoryFactory). Most of the developers from this first version drifted away from the project, and by the time I arrived there was a second team who were talented but, unfortunately, spent much of their time slowly refactoring other people's crappy code.
2) I didn't observe very much clear product vision. Instead, we had a paranoid obsession with matching the features of Microsoft Health Vault (which was equally meandering & useless).
3) There was a huge top-down pressure from Marissa M and other high level managers to make Google Health into something astoundingly successful, suffocating any possibility of incremental progress and disempowering the actual developers.
4) We had one MD on staff and very little other experience with medicine or healthcare. The developers were very far from the problem domain and relying on a game of managerial telephone to ascertain what the current state of medical record management is and what improvements are possible.
Anyway, in short, my experience was that Google screwed up.
Healthcare is heavily regulated, and I understand why some companies would be reluctant to branch out into it.
But that doesn't mean the privacy and legal concerns surrounding HIPAA regulations are unwarranted. Yea, it would be nice if we lived in an ethical utopia where we wouldn't have to feel worried about people looking through our health records. But we don't. I would not feel comfortable with my health records being easily accessible, even if that would lead to better data-mining opportunities.
Guy made a company Retractable Technologies and made medical devices that prevented infections and saved lives. Hospitals won't buy it because of the system.
> Hospitals won't buy it because of the system.

Which happened partly due to regulation.
If you actually read the article, it happened due to the absence of regulation. Hospitals formed an entity (the GPO) to bargain collectively for lower prices from suppliers. This is a classic example of collusion. The GPO started negotiating with the suppliers for a cut of the contracts that they entered into on behalf of the hospitals. This is a classic example of the principal-agent problem. As a result, Retractable Technologies couldn't break into the market.
Collusion and principal-agent problems arise naturally in free markets. Indeed, the usual response to them is regulation. Antitrust enforcement would've prevented the hospitals from colluding with respect to purchasing supplies, and as the article points out, Medicare's anti-kickback provision, had it been applied to the GPO, would have reduced the principal-agent problem.
The problem with HIPAA is that it's supposed to look after my interests as a patient, but if I feel that it isn't, there's no way for me to opt out of it.
> The problem with HIPAA is that it's supposed to look after my interests as a patient, but if I feel that it isn't, there's no way for me to opt out of it.
To the extent that the first part is true [1], the second part is not true in any substantive way -- the privacy protections restrict what can be disclosed without your consent. So, yes, you can effectively "opt out" of any of the restrictions by consenting to disclosures.
[1] HIPAA -- the Health Insurance Portability and Accountability Act -- exists, in terms of direct goals, mainly to look after your interests as a potential purchaser of health insurance, the as-a-patient privacy protections are secondary to that, and were put in place in HIPAA, and subsequent revisions, to mitigate political opposition to the incentives for automation and related standardization of electronic transactions provisions designed to make the health insurance system -- both in terms of enrollment and claims processing -- more efficient.
I should have been more clear- What I mean is, I can't sign a form saying "I hereby waive all my HIPAA protections". All I can do is sign a form saying "I allow entity X to get access to my records", but there is no provision for a global, generalized waiver. (Of course if I signed a document waiving all protections I'd also want that document to contain reference to an alternative set of protections.)
I think the most illuminating thing about HIPAA is the fact that it lays bare just how poorly doctors and lawyers and healthcare administrators actually understand logical security in the computer science sense. I will point to the use of fax machines as a superb example. The law essentially considers PGP and a fax machine to be security equivalents.
I think that ignores a lot of the technology involved. A fax can be intercepted, but an email is guaranteed to be recorded by intermediate servers. In most cases, email will be data-mined in a webmail system. So while PGP is clearly better, I think it's reasonable to say fax is, in practice, better than email. Unless you want patients suddenly getting Valtrex ads because someone sent their health records over webmail.
Not sure who downvoted you, but this is indeed the essence of one of the counterpoints against regulation-- that if we regulate an industry, the innovation in that industry will simply occur elsewhere in the world, outside of US regulation.
That is generally true for things like minimum wage laws or regulations on manufacturing. If it is expensive to manufacture in the US then companies can easily manufacture in China.
It doesn't really work for healthcare because you can't remove it from the jurisdiction. It isn't practical to fly to another country to receive emergency medical services or if all you need is to fill a prescription. Meanwhile the customers with the wealth to sustain research into novel health products are in the countries that impose heavy regulatory burdens on anyone who wants to service them.
Actually a lot of people do ... I know people who get their eyes lasered in Turkey for example (in this case it's because the treatment is cheap).
Larry Page suggested the Island idea in last year's google IO.
I also don't really understand... especially since Larry Page suggested this. I also don't get the other comment about why it would not work. Imagine Google buys a couple of islands and founds its own state; they could do whatever they wanted within its borders. It's a bit scary, yet not so far fetched.
There's a whole lot of complicated issues of international law around gaining sovereignty, which is highly biased against the creation of new states. And beyond that there is the issue of gaining recognition from other states. Getting the recognition of even a number of the "micro states" that like to recognise other small states, because it helps validate their own existence, would be hard enough. Getting the recognition of larger countries, which with corporate involvement would instantly suspect this was a tax avoidance scheme or otherwise something that would not be beneficial to them, would be much harder. (E.g. many countries would start looking at their maps and worrying about which companies important to them could be coaxed by a poorer neighbouring country to buy territory.)
To answer your question seriously, what makes you think your new country will be able to trade freely with every other country (e.g. USA & EU)? There's no point having an island with your own rules, if the only people you can get as customers are the inhabitants of your island.
Also, all land (more or less) is already in one country. Trying to split off a piece of one country into a new country is often quite deadly. Literally.
A lot of the regulation has allowed people to die when there was a chance they would have lived without it. Regulations on advertising and OTC drugs are absolutely needed, but stopping a doctor from prescribing a risky drug to a terminally ill patient who will die without it anyway is counterproductive.
The main reason people aren't healthy has nothing to do with the medical system or regulation. It has everything to do with lifestyle. Technology will not help at all in that regard. The mobile-device generation will be less healthy and even more physically disconnected than the current one, which is a horrifying thought given how bad the state of affairs is right now.
Man is a physical animal, and movement will always beat analysis when it comes to improving health outcomes. We already know how to improve health, we choose not to do it.
> stopping a doctor from prescribing a risky drug to a terminally ill patient who will die without it anyway is counterproductive.
And if the patient survives, but has a debilitating condition that was a side-effect of that risky drug, then claims he was going to survive anyway and now his life has a shattered quality because the doctor prescribed a drug that hadn't been fully cleared yet? It's not as black and white as you're painting it, and drugs are not always silver bullets that save your life and send you back to playing the violin like the virtuoso you once were.
Conversely, if you didn't have that regulation, you'd have medication with a much lower quality - more people dying, and more negative side-effects for the ones who survived. Plenty of drugs look promising at the outset, then turn out to have serious issues.
> stopping a doctor from prescribing a risky drug to a terminally ill patient who will die without it anyway is counterproductive.
It's also about balancing incentives. If there were no such regulations whatsoever, you'd find yourself in a situation, where a patient with a mild cancer and a broken arm is potentially "terminally ill" and needs the New Risky Drug. It could degenerate to regular, systematic experimentation on humans in the guise of "doing everything we can".
The regulations more exist to keep you from getting people discriminated against.
I mean, yes, we don't want folks to die. Nor do we want folks making bad decisions based on information they don't understand. However, to say that it is just "people could die" ignores the fact that they have technology entering vehicles and whatnot.
The first time a Google car crashes and Google lawyers have to sit across from a now paralyzed child on the witness stand, Google will quickly think twice about the Vehicle market.
This was a huge problem we faced in Automotive. All big automotive companies are VERY cautious about safety, to what would seem like absurd levels.
The rest of the world will welcome the Google car with open arms.
The US will have to make do with their automobile deathtraps and continue to live with suboptimal lifespans (35th worldwide) for a country with such a high GDP per capita (6th).
This is the USA, it's about 40-50 years behind even my country (the UK) on discrimination-related law and practice, and law related to societal issues in general. One could argue that this is just the set of trade-offs that the US has made for itself, but the real-world results are looking grim.
Obama has done great things with healthcare. If Congress were not so deadlocked, he would have done more. So far, he got rid of denial of coverage due to preexisting conditions. He started a government-run program, which will greatly reduce prices in the long term.
HIPAA laws protect patient rights very well. I think we are on the right track.
The reference to HIPAA and how it restricts things they'd like to do with health information -- and the fact that Goolge is investing in Calico, which is more about health technology -- suggests that the concern is more with the regulations that are designed to prevent financial fraud, exploitation of health information for scams and discrimination purposes, etc., that affect the market for health information technology, and less about the kind of regulations that are designed around the safety of health care technology.
Right, but unless you fuck up pretty badly software is not going to kill a patient directly (barring, say, pacemaker software).
We had to do a risk analysis review recently, and figured that unless you physically dropped one of our servers onto a patient you couldn't directly cause harm.
Anyways, you'd be a lot less concerned with regulation if you knew how brainfucked and unscientific the whole field of medicine seems to be--it's not as far along as you might expect/hope.
Yeah, I'm familiar with that--note again that that was a combination of hardware and software, and that the perhaps leading cause of actual damage was the omission of a mechanical safety interlock that existed on earlier models; with that interlock, the buggy software wouldn't have mattered.
There is a difference between embedded systems or devices (pacemakers, imaging devices, etc.) and EMR/records/data mining software.
The regulations are all calibrated to defend against a Therac-25 (well, sort of) and seemingly not to deal with modern software development or deployment.
Bringing a healthcare product to market is really hard. A friend of mine just watched a good one fail -- a proven product with sales -- simply because institutional hospitals take too long to implement a purchasing decision to the point where they will actually pay for a piece of software they want to use, even when it isn't something regulated.
Google certainly has the cash to sit this sort of thing out, hire sales and support people etc. Microsoft and others are doing so, see Microsoft Amalga for example.
However, a more intriguing area, to me, is doing some more basic research without becoming a health company. Google is doing this with the glucose-sensing contacts and similar projects. These are novel ideas with significant IP that could be spun off into an independent company and/or licensed to an existing drug company to push through clinical trials and bring to market.
> Bringing a healthcare product to market is really hard.
I think this is the place where Google might enter and play well. If Google can develop a very high-level, FDA-certified development tool or operating system, such that developing FDA-certified products becomes much easier, they could have a very big win on their hands.
I've seen some research on such systems, so it's a clear possibility. And since it's a new thing, it might need some changes at the FDA, which Google has the resources to push for.
Rightly so. If I recall correctly, instead of just providing the results to you, they also included recommendations meant to prevent conditions their tests had shown you were at risk of. Their tests have never been vetted or proven accurate by anyone other than themselves, so providing health advice based on them is reckless. If they were allowed to continue, anyone could start their own 'DNA testing service', create bogus reports and bogus recommendations, and people could get hurt.
... which is not nearly as big a risk these days as it was in the days of snake-oil salesmen; there isn't a "new town to set up shop in" for a bad DNA testing service in the days of the Internet.
I'm in favor of protecting people from bad medicine, but I think the current regulatory structure is an overly restrictive tool for the job. Either that, or (given that it steps on the neck of 23andme but lets GNC and homeopathic practitioners continue to operate) it's mis-tuned for modern technologies and tools.
So what? I want to see my results. They make it clear it's not an exact science and they link to the relevant studies so you can make your own decisions. Why should the FDA prevent them from sharing that information with me? I'm not even American!
Did you read my comment? It wasn't about the results. You want your results - you got them. It was that they were providing recommendations based on the results which haven't been verified as accurate by anyone but 23andme.
I'd love to see them take an altruistic approach and shift the focus to regions like sub-Saharan Africa, where health projects would provide a lot more utility and face a lot less regulatory burden. Once a product is visibly working in one country, it's more difficult to make cases against it elsewhere.
I've heard Bill Gates state this as one of his philanthropic goals: to fund things whose cost/benefit analysis doesn't make sense in the developed world but does elsewhere.
I've worked with people involved in HIV vaccine trials overseas and, in fact, things really don't change as much as you might think.
The basic tenet of "first, do no harm" is ingrained at so many different ethical, institutional, and legal levels that you can't, say, justify a riskier vaccine in an area with a higher risk of HIV or whatever.
In fact, trials have been cut short and research into entire vectors (i.e., the cold virus used as a transport for the HIV-related material) cut off when trials in Africa started to appear (statistically) to be even slightly harmful.
I feel this is a good thing. Scientists and medical people holding themselves to this high standard is the reason the anti-vaccine crowd really doesn't have a leg to stand on.
> to fund things with a cost/benefit analysis that doesn't make sense in the developed world but does elsewhere.
While it seems to go against "do no harm", in reality many low-cost products can start at low quality but, with time and experience, improve while still offering much lower costs. So the logic can make sense.
Not sure it works for vaccines, though.
Also, I wonder: what were the benefits of the vaccine you described? Were they weighed against the slight harm?
Unfortunately, the same doesn't apply to unethical treatment. More and more patients get certain treatments or recommendations by doctors or hospitals not because they need them but because they're profitable for the doctors or hospitals.
This is the insurance/financial product salesman's spirit at work, and it needs to be stopped. Right now.
I think part of the problem is the "insurance" mentality - if you don't pay for the treatment yourself, but the insurance just pays for everything, you aren't interested in an economical solution (and maybe the minimally-invasive treatment), but you take what's recommended. If you have to pay for treatment yourself, you begin asking questions. (Disclaimer: I live in Germany, a country with "free" health insurance, which I pay for with an effective 15% tax on my income.)
I tend to believe that Goog won't do it. Imagine going from kingpin of the Internet, making the rules as it suits you, to an area where you're subject to laws made by others, just like everyone else. They wouldn't be that special any more. Methinks, they're too spoilt for that.
Google seems to avoid enterprise/gov, and especially highly regulated, business. One can see the advantage of this: it allows the company and its products to avoid the bloat and culture corruption endemic to doing enterprise/gov/highly regulated business.