Watson for President (watson2016.com)
304 points by derEitel on Feb 6, 2016 | 212 comments



Having an AI as the president would make it clear to all that there is an invisible group of people behind the president, pushing them to act and say things that they want. I agree that even now Watson could probably make better decisions based on actual facts, but if Watson says something that is unpalatable to its owners, they can reboot it with a different set of facts. IIRC they did that after they fed it the urban dictionary and it started swearing too much.[1]

At least with a human president, there is the chance that they will grow up and shrug off the orders they are given. The power actually lies with the person that was elected, not the invisible people who paid to get them the job.

[1] http://www.theatlantic.com/technology/archive/2013/01/ibms-w...


The site's creators perhaps acknowledge this in a way, by listing a number of bullet points that they expect Watson to advise on and presumably agree with: single-payer health care, recreational drug legalization, and so on.

But how do they know Watson would find the expected value of these things positive? Maybe Watson would be a republican. And this all leads down a huge rabbit hole of ethical and political philosophy and stuff.

Like, should Watson take into consideration his likelihood of being elected? In that case, much of his neural network should be dedicated to predicting election outcomes. And that seems pretty obviously problematic.


There's a simple solution to this: give Watson two platforms, one Republican, and one Democrat. Have Watson run in both primary elections. If he wins both primaries, on election day, voters will be able to choose between Watson running in Democrat mode, and Watson running in Republican mode.


So we have a supercomputer capable of running the country and you basically want to put the stupid hats on and limit its thinking to bipartisan bickering and two semi-opposing points of view that virtually no one holds but is currently forced to agree with for lack of an actual democratic process? Why not just use an 8080 or some other 8-bit CPU then, instead of wasting all that power? I'm sure you could get the same answers from either at this point. Every 8-bit CPU I know of can be programmed to look up values in a hash table and spit out the expected results, just like human politicians program their brains to do.


You may be taking this a bit too seriously. I don't think that Watson would make a particularly good president - I'm not convinced that it can adapt to unexpected situations, or make calls on when to hire or fire people. For example, if the White House Chief of Staff tells Watson that he should fire his Press Secretary, should Watson follow that advice?

Now that I've gotten that out of the way, I'm going to respond to your counterpoint in character.

----

If Republican Watson ran against Democrat Watson, for the first time, we would have a debate purely about issues and policies, and not about personal character. As for the platforms themselves, you might view those political platforms as wrong, but millions of Americans agree with them. This is just the ultimate expression of democracy: a statesman whose views conform to those of the people.

>Every 8-bit CPU I know of can be programmed to look up values in a hash table and spit out the expected results, just like human politicians program their brains to do.

That's true. But if Watson is 1% better than an 8-bit CPU, then that justifies spending millions on the upgrade, since the role of President has such a large effect on how well the government runs.


I love this idea. One question is how we would accommodate the large spectrum of ideologies within the parties: Ron Paul libertarians vs. neo-conservatives like Marco Rubio, for example, or the nuanced distinction between Bernie Sanders' notion of 'progressive' and Hillary Clinton's.

Pragmatically speaking, I think we could capture the political spectrum in 4 to 5 modes: Watson-left-liberalist mode (for the Sanders crowd), Watson-authoritarian-right mode (for Trump supporters), etc.

Edit: Sanders correction based on comment below.


Sorry to be the language police (but word choice is important when labeling politicians): I don't think any Sanders supporters would vote for a left-libertarian mode; I think you meant left-liberalist or something like that. Libertarianism would be opposed to the role of government in Sanders' democratic-socialist platforms.


No need to apologize, you are definitely correct. I will change it to left-liberalist, that's a lot more accurate.


It's interesting you'd put Trump on the far right. I find Trump to be one of the more moderate Republican candidates when compared to Cruz, Carson, and Paul, for example. Trump wants universal healthcare and a progressive tax where the lowest bracket remains 0%. I think most liberals have a caricatured image of him that leads them to these exaggerated conclusions, and while I don't like the guy, I really think you've misrepresented his position on the political spectrum.


Check out the political compass, he is classified as strongly authoritarian right: http://www.politicalcompass.org/uselection2016


Whether Republican Watson or Democrat Watson wins, it's already an improvement in that Watson, being a computer, will actually execute policy the same way he said he would during his campaign.


Or Watson learns (through gradient descent of course) that you can make wild campaign promises to win the election and not actually execute on that policy because it's more beneficial to make grand promises while running and not rock the boat while in office.


So try it out and put Watson in both modes. Ask him some questions, and run a poll to see what the sample likes about each of them. It might be a good way to see how far Watson can be pushed.


I mean, if the Presidency was a dictatorship, then maybe. Otherwise Watson still has to go through Congress.


Democratic super-robot programming voting. I'ma go contact a screenwriter now.


Watson will agree with whatever training corpus you feed into it, so like most candidates it can't tell you anything new or novel. However, unlike actual political candidates, Watson would be able to directly answer questions.

The problem is that Watson has been talked up as almost a strong AI, when it's actually a really good classifier, annotator and summarizer. While there's a great role for Watson-style systems in policy development and practice, they are only one in a battery of ML and analytic techniques, none of which can stand alone without a fully human point of view.


What if Watson read Conservapedia, thinking it to be factual? That's a startling thought.


> But how do they know Watson would find the expected value of these things positive? Maybe Watson would be a republican.

Easy enough. If Watson turned out to be a Republican they would start tweaking the parameters until it became a Democrat.


> Maybe Watson would be a republican

That’s very unlikely.

Any logical system based on the concept that human life is worth existing on its own (no matter what the person has contributed to society) automatically ends up with the necessary conclusion that things like subsidized healthcare are mandatory.

Obviously, one could give the program the basic assumption that human life is not worth anything, and it should instead focus purely on profit, and it might end up with a more republican ideology.

But giving an AI with access to nuclear weapons the assumption that human life isn’t the most important factor is... a bad idea.


What if the logical system determines that the introduction of single-payer healthcare would lead to some kind of political crisis in the next 8 years, for complex predictive reasons that the human mind can barely understand?

It's easy to derive conclusions from moral axioms, but very difficult to do actual politics in a country full of voters, corporations, lobbyists, etc. Artificial intelligence is not a magic solution to that.


It would certainly be a political crisis for the Republican party if their base were to realize how much they've been lying about health care. Bring it on!


It's easy to pick sides, whether Republican or Democrat. It's harder to understand that both arguments about society have value.

Fundamental Republicans believe capitalism and the free market are the system that "works best" to ensure a level playing field for everyone. True Republicans work to ensure a fair marketplace for everyone, both for poor individuals and rich corporations, providing citizens the ability to better their condition and increase their freedom.

Fundamental Democrats believe that the freedom of citizens is constantly at risk from outside factors, and that the government is the best agent to maintain citizen's free will.

True republicans believe that human nature is fundamentally good, and that the government increases equality by maintaining a free market, while democrats suggest that human nature is often weak, requiring the government's intervention to protect society from itself.

In reality, very few Republican and Democrat politicians actually represent these values. Often, Republican views of the free market disproportionately benefit the rich, and Democrats attempt to redistribute wealth instead of fixing inequality at the source.

Although I consider myself independent, I personally would side with a republican interpretation of health care. Studies have shown that government programs in fields such as healthcare are inefficient when compared to their private counterparts, as a competitive market increases supply, and therefore decreases costs, as opposed to monopolies or government programs that provide a single source of service.


> Although I consider myself independent, I personally would side with a republican interpretation of health care. Studies have shown that government programs in fields such as healthcare are inefficient when compared to their private counterparts, as a competitive market increases supply, and therefore decreases costs, as opposed to monopolies or government programs that provide a single source of service.

There is no need to choose – you can easily provide a minimum standard to everyone, and let the market handle everything above that.

Which is the concept of the social market economy in general: Everyone gets at least a specified minimum level of service, provided by society, paid by everyone (the social part) – and everything above that level is done with a fair and free market (the market part).


Agreeing on moral axioms also might be non-trivial. I, for example, do not agree with the one you list above, but instead go with Peter Singer on preventing suffering rather than protecting life per se. So abortions, for example, and even euthanasia of severely sick people, including newborns, are fair game. This also leads to the question of whether animal suffering should count and, if so, how much. I think trying to take our moral debates down to principles like this might be a super valuable exercise for society though. Regardless of whether we program Watson to be president or not.

Edit: of course, in practice this would devolve into 50+% of the country in essence saying "whatever I think the Bible says should be our axiom", completely ruining the discussion.


(I personally also subscribe to the "preventing suffering" axiom, but wanted to show a better contrast)

But then the AI may decide life is suffering, and end suffering by ending life...

Asimov's laws of robotics end up with the previously mentioned axiom, though.


That's a very interesting point! Watson is in essence a robot, so Asimov's laws should be a good starting point. But are they also a good starting point for the president? Why should laws be different for robots than for people? My head is spinning...


>Asimov's laws should be a good starting point

No they really shouldn't. Asimov's Laws of Robotics were a plot device intended to be subverted by the robots in his stories, they were never meant to be taken seriously.


Hard work and change are both forms of suffering (they're unpleasant as you experience them). Why not a more useful goal with the ability to say 'ok, done' -- like 'develop octopus-like mechanical limbs' or 'regrow human arms by splicing in reptilian DNA'.

After all, neither of those experiments has ever gone seriously wrong.


>Any logical system based on the concept that human life is worth existing on its own (no matter what the person has contributed to society) automatically ends up with the necessary conclusion that things like subsidized healthcare are mandatory.

And that abortion is illegal. :-)


If it's mandatory, then every fertile human is walking around with the "right" to produce more burden for the state. That would make reproduction a liability... which turns into birth licenses.


No, because an intelligent life form would not define a bunch of cells that just started multiplying last month and cannot survive on their own as life.


As it happens, the ability to reproduce and the ability to respond to stimuli in any manner are generally accepted qualities that distinguish life from non-life.

http://www.biology-online.org/dictionary/Life http://dictionary.reference.com/browse/life

Whether a given bunch of cells may be considered human is still very much an open question but whether a cell or cluster of cells is alive should be pretty easy to agree on.


An unborn, non-life form still has utilitarian value: there is an expected value that it would produce throughout its otherwise natural life.

Actually, a strict utilitarian model would probably conclude that it's not worth aborting to save the mother's life if the baby is viable, since the baby would ultimately produce more value for society than the mother would were she to live out her life.

And that's the sort of reasoning that makes everyone hate utilitarian ethics.


You're assuming an unwanted, unborn baby has value. Not only does it not have value, but it's actually a burden to society. Utilitarianism would consider the damage that unwanted, unborn baby will inflict on society and decide to abort it every single time. Social welfare, orphanage, and especially criminal costs are incredibly likely and incredibly high. The chance that the baby will amount to anything worthwhile enough to offset those costs is incredibly low for an unwanted baby and thus not worth the risk to society. Here in the US we have seen the criminal costs of outlawing abortion with the high crime rates of the 70's, 80's, and early 90's finally coming down in the last two decades due to legal abortion. Other countries like Romania know this equally well. That doesn't even begin to take into account the rest of the social costs of forcing unwanted babies to be born.


First, your statement that abortions have decreased crime in the US is based on one of the most flawed and controversial studies in the scientific community [1]. Correlation does not imply causation.

Second, Utilitarianism roughly translates to "the greatest good for the greatest number." If a pregnant, utilitarian woman were granted the power of foresight, she would abort her child only if he/she were to provide a negative net utility to society. If the child were to provide a net benefit to humankind, she would not abort.

Third, your assumption that unborn (and born) children are a burden to society is correct, but that initial investment is small compared with the average net "benefit" a grown human creates. It must also be noted that the vast majority of humans benefit humankind through their work (although some have greater impact than others).

No person can see the future, however, so most utilitarians would never abort their children, as the probability their offspring benefit humanity as a whole is greater than the chance that they destroy value.

Frankly, I doubt any woman considers the ethical implications of abortion when she undergoes one; she is primarily concerned with family, relationship, and personal problems. If one thinks beyond personal convenience and looks at the bigger picture, abortion is morally unjust under almost every popular ethical system.

Think hard about the choice your parents made by not aborting you. Do you think they made the right decision? Whether you're old, young, rich, poor, hopeful or hopeless, I imagine you'll answer yes.

1. https://en.wikipedia.org/wiki/Legalized_abortion_and_crime_e...


> human life is worth existing on its own

Worth to whom? A human life has value, but not to everybody. Or at least not the same value for everybody. Keeping people alive at any cost, and imposing on people how much they should contribute to achieving that, is not something everyone can agree with.


> Keeping people alive at any cost, and imposing on people how much they should contribute to achieving that, is not something everyone can agree with

Oh, unless the people are politicians, is that what you are saying? Of course, not everyone can agree with that, either. But it is already the norm, so this is just an extension to the rule. So the question is just how much they would agree to contribute.


I personally agree with you, but there are arguments against subsidized health care from the right. It is a cliche, and I think easily refuted, but there is the libertarian argument that subsidized health care infringes on personal liberty (forcing one to pay tax, incentivizing the government to regulate your health).

Essentially, what if maximizing individual human liberty were the basis of the program, not a utilitarian notion of maximizing net human life?

Caveat: even with human liberty as a basis for developing a political system, you could still end up rationalizing mandatory, subsidized health-care i.e. maximizing freedom entails a poor person shouldn't have to lose his liberty to poor health etc. This is my position, but I don't necessarily think it is the inevitable conclusion of attempting to maximize for human liberty.


> That’s very unlikely.

> Any logical system based on the concept that human life is worth existing on its own (no matter what the person has contributed to society) automatically ends up with the necessary conclusion that things like subsidized healthcare are mandatory.

If you start solely from that premise: "that human life is worth existing on its own (no matter what the person has contributed to society)" you're very unlikely even to reach taxation (a forced contribution to society) let alone forced subsidies or making anything mandatory. Most of government is based in the notion that someone's only value is in what they contribute to society -- from "tax-dodgers" to "benefit leeches" the vernacular is all about the amount of cash that gets paid into the social coffers.

Much as I appreciate subsidised healthcare, it's not an "automatic conclusion". It is a negotiated compromise, and largely based on nationalism, not the value of the individual (eg, the NHS came into existence post-war, as part of the national rebuilding. Its beginnings very much relied on the war effort and large-scale conscription having devalued individual freedom amongst the public).

It's become more popular since then because it turns out to work pretty well as a system. Not so much from pure logic, as that healthcare gets more expensive over time (effectively, healthcare can exert a rent on people's lives) and social control of healthcare is a way of putting a cap on its costs at the expense of those in healthcare who could charge much more (eg, watch the NHS junior doctors complaining about the contract changes).


If the goal is 'more human lives', you've built a paperclip maximizer.


> reboot it with a different set of facts

Not just facts - the designers of the AI also choose the underlying assumption and models. Even the very idea of using an AI implies a certain set of biases and intentions.

Characteristics that are truly common enough in humans that they can safely be extracted as a factor are rare. Most of the time we try to compromise so we can call our differences "close enough". The process of finding and/or creating those compromises is what we call "politics", and while computers can certainly help as a tool, the process needs human involvement by definition.

Attempts to turn over any kind of political or social decision-making to an algorithm are simply a way to disguise the concentration of power. The algorithm's designers ultimately end up with the power, while everyone else is denied it.

The racist tactic known as "redlining"[1] is a pre-AI example. Black people aren't denied housing directly; they simply "don't qualify" for a loan, with the real reasons obscured behind a proprietary "credit worthiness" equation.

[1] http://www.theatlantic.com/magazine/archive/2014/06/the-case...

edit:

Instead of using AI as a decision-maker, a place where AI (and other technology) might actually be useful is as the facilitator and/or part of the "panel of experts" used in Delphi methods[2]. While people tend to jump on bandwagons and make stupid decisions when unorganized, we have a lot of examples of a general crowd of people making very good decisions when they are focused on a specific goal and have enough structure to allow for iterative refining of ideas.

[2] https://en.wikipedia.org/wiki/Delphi_method
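
A very loose sketch of that kind of iterate-until-stable loop, with made-up numbers and a made-up revision rule, just to show the shape of it:

    # Toy Delphi-style loop (illustrative only): poll a panel, share the
    # aggregate, let panelists revise, and stop once opinions settle.
    def delphi(opinions, revise, max_rounds=10, tol=0.01):
        for _ in range(max_rounds):
            consensus = sum(opinions) / len(opinions)
            new = [revise(o, consensus) for o in opinions]
            if max(abs(a - b) for a, b in zip(new, opinions)) < tol:
                return consensus          # opinions have stopped changing
            opinions = new
        return sum(opinions) / len(opinions)

    # Hypothetical panel: each panelist drifts a third of the way toward
    # the group view every round.
    print(delphi([0.2, 0.5, 0.9], lambda o, c: o + (c - o) / 3))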


By the way - I should mention that the last part, about using modern technology as organizational structure instead of for decision making as part of some sort of Delphi method, is originally from a 2014 interview[1] with James Burke (Connections, The Day The Universe Changed).

A few of his comments that are relevant to the use of technology with politics and society:

    We have these extraordinarily limiting constraints from a past in which we did not have
    the tools to have anything other than extraordinarily limiting constraints. But, now we
    do have the tools, and the tools are running away with us faster than the social
    institutions can keep up.
    ...
    I think countries ought to set up Departments of the Future. [...] We are on the edge of
    having the technology to be able to say, let us run a constant, dynamic, updated review
    of everything that science and technology is thinking about [...] then let us use the same
    techniques to ask the public in general, not politicians, whether they like that idea,
    whether they feel that they could live with that idea. And then, like a Delphi technique,
    re-run it until everybody stops changing their mind.
    ...
    Collate all [research laboratories and business R&D] together and process them using stuff
    like big data to see what the pattern looks like becoming, and then layering on top of that
    social media analytics to say, if this was coming, would you like it, and if not, why not?
    In other words, to have a sort of 24 hour a day referendum
The other parts of the interview are very interesting as well:

    ... it’s no longer important to teach people to be chemists or physicists or anything ‘ists
    because those jobs are gone, and if they’re not gone today they’re gone tomorrow. And unless
    we know the old tools of critical thinking and logic and such, we will not be able to handle
    what follows. So, we’re wasting our time training people to be things that will no longer
    exist in 10, 15, 20 years time.
    ...
    Every single value structure is meaningless [...] commercial society will be destroyed
    at a stroke. The trouble is the transition period [...] how we get from here to there.
    The vested interests, I mean, we’re going to have to shoot every one of them – nobody,
    nobody is going to give way to this. [...] All cultural values relate to scarcity, ultimately.
[1] http://youarenotsosmart.com/transcripts/transcript-interview...


> there is an invisible group of people behind the president, pushing them to act and say things that they want.

It is absolutely normal to have an establishment and an elite that influence the government's and the president's decisions.

Only in the most dictatorial and absolutist types of government would you see a lone person at the top deciding on issues without consulting anyone.

But I'm not sure that such a thing has ever existed. Even maniacs with a cult following behind them like Hitler had to balance the interests of different factions within the system.

What we have to get rid of is certain categories of influence on decisions that have a negative impact on the majority (bribery, for example).


That's nonsense. The very idea of dictatorship is to dictate, i.e. to prescribe the rules. It doesn't matter whether that is given from any number of individuals directly or channeled through a representative.


Reminds me of how people think Bitcoin/blockchain tech can solve a lot of the inherently human problems in finance. At the end of the day, it's the group of core developers who change the codebase that are truly in power.


That's not a really good comparison. You can always make your own fork of bitcoin, and if there are enough like-minded people, your fork will "win".


It's the same with governments and political parties. One group wants it done one way, another group wants it done a different way, and so two parties, styles of governing, blockchains emerge.

My main point though is that we tend to place too much emphasis on how much new tech can help solve age-old, human nature driven problems.


Our first step should be replacing first-past-the-post voting with something that isn't mathematically guaranteed to result in a polarized two-party system.



IRV has the problem where gaining support can actually cost one the election (nonmonotonicity). From http://zesty.ca/voting/sim it seems that approval voting would work reasonably well.
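
To make the mechanics concrete, here is a rough sketch of an approval-voting tally (the ballots are made up; this has nothing to do with the linked simulations):

    # Minimal approval-voting tally: each ballot approves any number of
    # candidates, and the candidate with the most approvals wins.
    from collections import Counter

    ballots = [
        {"A", "B"},   # this voter approves of both A and B
        {"A"},
        {"B", "C"},
        {"C"},
        {"A", "C"},
        {"A"},
    ]

    tally = Counter(c for ballot in ballots for c in ballot)
    print(tally.most_common())   # [('A', 4), ('C', 3), ('B', 2)]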


I guess in the same way everyone in the United States can run for president, and with enough like minded people, win the elections.


I play the game Paranoia online.

The game's backstory is that the Cold War became a hot war, nukes were launched, and one (or more) underground cities were built in the US, administered by a paranoid computer that hates and fears communists.

My current Paranoia character works in "TechServices" and sometimes makes his opinion known (something that is actually illegal to do) that "The Friend Computer" isn't the real ruler; its designers and programmers are.

It is fun to see the implications in the game, especially as the mindset of the players affects their characters and behaviour: some people are loyal to the computer, some consider the computer the enemy, and some consider the computer only a tool and are loyal (or hostile) to the "High Programmers" who have access to the computer's administration code.


I've run a Paranoia campaign where the players were all High Programmers. Technically, the players could reprogram The Computer, but they usually refrained from doing so, partly out of fear of offending all the other High Programmers and partly because trying to reprogram The Computer requires a skill check (with failure causing more bugs to be introduced into the system). Any programming they do is rather subtle, to avoid raising alarm.

The end result is a rather byzantine situation where everyone (including The Computer) is plotting against everyone else, all being sufficiently paranoid.


Political decisions made via artificial reasoning have the advantage that the system can keep a log of all the steps leading up to any given decision. Then, even if this log is too long and complex for humans to handle, AIs run by civilian interest groups could inspect it to ensure that everything was done logically and proceeded from appropriate principles - interference from outside sources would presumably appear as conclusions made without a rational basis. A whole new level of government transparency!

Granted, it may not work out so rosily in real life - the reasoning log would likely be liberally redacted due to factors kept classified from public knowledge (just as some of Obama's more puzzling stances may, charitably, be explained by things he's not allowed to tell us) - but that's its own problem.
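
As a toy illustration of what such an inspectable reasoning log might look like (entirely hypothetical, and nothing Watson actually exposes): every conclusion has to cite premises that are either declared axioms or earlier conclusions, and anything that doesn't gets flagged.

    # Hypothetical reasoning log: each step cites its premises.
    axioms = {"budget_deficit_rising", "healthcare_costs_rising"}
    log = [
        {"premises": ["healthcare_costs_rising"], "conclusion": "review_drug_pricing"},
        {"premises": ["lobbyist_meeting"], "conclusion": "drop_pricing_review"},
    ]

    def audit(log, axioms):
        known = set(axioms)
        for step in log:
            unsupported = [p for p in step["premises"] if p not in known]
            if unsupported:
                print("suspect step:", step["conclusion"], "- unsupported premises:", unsupported)
            known.add(step["conclusion"])

    audit(log, axioms)   # flags the second step as lacking a rational basis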


Exactly. Some people actually think this would be more "fair", but we can't guarantee an AI would be "fair" for the same reason we can't guarantee that online voting would be safe. Others could easily manipulate it, and not necessarily people from the same country either.

Having AI's as advisers, that would be totally different. In the end we can still hold responsible the people that listened to that advice as it's their responsibility to check if the advice from their AI advisers is real.


You could guarantee online voting with distributed trust and proof of stake. There are several papers on it now that are a good search away.

That doesn't change the AI, though - since it still is physical hardware in some place, even if you could verify the software you can still exploit the hardware.


> In the end we can still hold responsible the people that listened to that advice as it's their responsibility to check if the advice from their AI advisers is real.

This has so much potential for a Philip-K-Dick-ian rabbit hole of paranoia and insanity that it made me chuckle. I think you might be hand-waving away a lot of complexity here. :)


Would you have the same qualms if the AI's source were open and the public voted on which pull requests to accept?


That sounds even worse. I wouldn't trust binding public voting on pull requests on my text editor; I'd expect it to break down fairly rapidly.


Assign negative weights to people who submit damaging pull requests?

Insert a layer of expert representatives (akin to congresspersons) to generate the PRs?

Making the system resilient doesn't sound like anything more than another engineering problem to me.
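
Roughly, the "negative weights" idea might look something like this (a purely hypothetical sketch, with made-up voters and numbers):

    # Reputation-weighted voting on pull requests: a PR passes if the
    # weighted sum of approve/reject votes is positive, and backers of a
    # PR later judged damaging lose weight.
    reputation = {"alice": 1.0, "bob": 1.0, "carol": 1.0}

    def passes(votes):
        """votes: {voter: +1 approve / -1 reject}."""
        return sum(reputation[v] * vote for v, vote in votes.items()) > 0

    def penalize(backers, factor=0.5):
        for v in backers:
            reputation[v] *= factor

    print(passes({"alice": 1, "bob": 1, "carol": -1}))  # True: 1.0 + 1.0 - 1.0 > 0
    penalize(["alice", "bob"])                          # their last PR was judged damaging
    print(passes({"alice": 1, "bob": 1, "carol": -1}))  # False: 0.5 + 0.5 - 1.0 = 0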


The nuances of politics cannot be boiled down to a Github repository. It's not as straightforward as you think it is I'm afraid.


Who said anything about GitHub? The situation would call for a highly specialized, custom-made solution. If you think that's impossible, please share your reasons for thinking so. Comments like yours are pure FUD and aren't in any way helpful.


Well, how do you identify "damaging" pull requests? I mean, that's what politics is.


I take it the definition of "public" here includes all citizens, in which case the majority of citizens would need to understand the fundamentals of AI and the given software implementation. Many citizens are fluent in English, and those who are not often rely on trusted sources to translate for them prior to making decisions. I am not sure if this approach of publicly voting on AI would work, but then again many people today do not fully understand each candidate's political agenda yet still end up voting. So I could be wrong.


> Would you have the same qualms if the AI's source were open and the public voted on which pull requests to accept?

I interpreted @mtgx's statement as referring to the information that would be input into the AI. Namely that if presidents are getting bad advice from advisors already, replacing the president with an AI might not improve things, if the same bad advice is still being fed in.


If that's what he said, then I agree wholeheartedly. "Garbage in, garbage out" is pretty hard to argue against.


When it comes to AI, source is only part of the system. The trained model would be a black box. We don't fully understand why some NNs work - so something of this complexity can't simply be cracked open and vetted accurately.


Barring military secrets, perhaps, there's no reason the AI program's current state would necessarily have to be a black box. Nor would it need to be a neural network. I also disagree with the assertion that complex things cannot be vetted.


In that case, one might as well boot up the President Dwayne Elizondo Mountain Dew Herbert Camacho Bot...


> IIRC they did that after they fed it the urban dictionary and it started swearing too much[1]

This was literally the best usage of AI as comedy ever made.


The old joke, thrown out from time to time by politicians and pundits to describe a lack of quality candidates, is that:

"Anyone smart enough to be a good president wouldn't want to be president."

This, because of the pressure and stress involved...

Watson, with human advisers, should be relatively stress-proof...a possible plus...

I guess then the joke would evolve to: "Anyone smart enough to be a good adviser wouldn't want to be."


It isn't about stress, it's about the greed for power. Anyone who wants that much power definitely shouldn't be trusted with it.


I'm sorry, Dave, I can't do that.


> The power actually lies with the person that was elected

... unless there's leverage against the elected official, something that would subjectively seem to compromise their integrity more than the corruption we have slowly gotten used to.


This has pretty much been the case for decades if not longer and yet it is still not apparent to people! AI represents the people, hope, and change, surely.


What humanity needs is to put politics (and political science, for that matter) on a scientific foundation.

No other area of human activity is so unscientific and so replete with falsehoods and lies as politics. We don't tolerate physicists or doctors lying, but lying in politics is a norm. Add to this a mix of dogmas, ideologies (often based on backward religious ideas), wishful thinking, and yes, men's testosterone power plays, and you have a recipe for non-progress, wars, subjugations, geopolitical games, etc etc etc. Even ideas that look like a noble cause often backfire and result in death and destruction.

Scientifically-based politics should be a norm in the 21st century.


> should be a norm

This statement is already a normative one. I don't even know where to start in terms of breaking down this incredibly feeble argument but one place to start would be the fact that a move to "Scientifically-based politics" would be a political move in and of itself. Then you have to consider that you're asking people to get rid of ideologies, "backward religious ideas" and replace them with "science." Science done by whom? A bunch of California tech companies? I wonder what that world would look like.

This comment reads like something from /r/juststemthings or /r/justneckbeardthings and I honestly can't even tell if this is a joke or not.


You have to do things based on polls, knowledge, and statistics – not based on "what I believe in".

Merkel’s government, despite being criticized for never having an opinion of its own, did this quite well and handled most things well.


Which polls will we administer? The ones we believe in? Which statistics will we construct? The ones we believe in? Which knowledge will we choose to incorporate? The ones we believe in?


https://en.wikipedia.org/wiki/Technocracy_movement

https://en.wikipedia.org/wiki/Technocracy

Technocracy was also a theme of many communists. It's also reappeared in the form of the Futurist Party, and some other small movements like the Venus Project.

For an interesting review of these ideas, seriously read this: http://slatestarcodex.com/2014/09/24/book-review-red-plenty/

>This book was the first time that I, as a person who considers himself rationally/technically minded, realized that I was super attracted to Communism.

>Here were people who had a clear view of the problems of human civilization – all the greed, all the waste, all the zero-sum games. Who had the entire population united around a vision of a better future, whose backers could direct the entire state to better serve the goal. All they needed was to solve the engineering challenges, to solve the equations, and there they were, at the golden future. And they were smart enough to be worthy of the problem – Glushkov invented cybernetics, Kantorovich won a Nobel Prize in Economics.

Project Cybersyn was a really cool idea that tried to actually implement these ideas in the real world just as computers were becoming advanced enough to do these things. But unfortunately it didn't last very long:

http://www.newyorker.com/magazine/2014/10/13/planning-machin...


Technocracy is a terrible name. It isn't aspirational and it sounds Orwellian.


I don't think technocrats care about the name.


No, no. We need accountability in politics. Consequences. Real ones.

If my company violates ITAR rules when we do aerospace work we can go to jail. A Secretary of State handles classified email without regard for security and on servers she controls, and she could be the next President. A President lies and manipulates facts and he suffers no consequences. A Senator makes up shit, lies, cheats and makes promises he will never fulfill and is not held accountable, ever. A Mayor changes a vote and nothing happens to him. A government organization spends lavishly and goes so far as to use its might to punish people who do not align with its politics, and nobody is fired or goes to jail. Politicians launch us into bullshit wars and they are not held accountable for any of it. They give money lavishly to other countries (and brutal dictators) while our own kids suffer and schools have to lay off teachers.

No, the elephant in the room isn't the lack of science (although some would be good), it's the lack of consequences. It's the lack of honesty. It's the lack of accountability and restraint. It's that and more. And it won't change until people wake up and demand it, which is unlikely until things become FUBAR.

The system is utterly broken and needs to be adjusted if it will be effective in the next 200 years.


While we're at it, let's go ahead and put religion on top of a solid scientific foundation! I mean politics and religion are so similar, if we figure one out then we're bound to be able to apply what we've learned from the process to the other.


I don't understand. Isn't religion by definition inaccessible to scientific analysis?


1. I think the comment was meant to be sarcastic.

2. Analysing religions seems to be within the domain of sociology.

3. Determining the truth of different religions is kind of tricky, because a lot of them are about the afterlife, and unless there is a way to return we can't get any information about it. But if the local priest claims that "anyone who desecrates the temple will immediately be smitten by lightning", defecating on the altar seems to be a surefire way to find out. And of course, if a valkyrie hands you a mug of mead after your death, you know you shouldn't have prayed to Cthulhu after all.


Yes. Religious beliefs are fundamentally non-scientific because they stem from a non-falsifiable root truth (e.g., God is only visible when He wants to be, and if you can't see evidence of Him, it's only because He doesn't want you to be able to).

Scientific beliefs are objectively testable and falsifiable. Religious beliefs are neither (though they are often subjectively testable). This doesn't mean religion is bad, but it means that it can't be understood in scientific terms.


No: http://lesswrong.com/lw/i8/religions_claim_to_be_nondisprova...

>Back in the old days, there was no concept of religion being a separate magisterium. The Old Testament is a stream-of-consciousness culture dump: history, law, moral parables, and yes, models of how the universe works. In not one single passage of the Old Testament will you find anyone talking about a transcendent wonder at the complexity of the universe. But you will find plenty of scientific claims, like the universe being created in six days (which is a metaphor for the Big Bang), or rabbits chewing their cud. (Which is a metaphor for...)

>Back in the old days, saying the local religion "could not be proven" would have gotten you burned at the stake. One of the core beliefs of Orthodox Judaism is that God appeared at Mount Sinai and said in a thundering voice, "Yeah, it's all true." ... The vast majority of religions in human history - excepting only those invented extremely recently - tell stories of events that would constitute completely unmistakable evidence if they'd actually happened. The orthogonality of religion and factual questions is a recent and strictly Western concept. The people who wrote the original scriptures didn't even know the difference.


The point is that rhetoric (the basis for politics) is inaccessible to empirical analysis too.


But the policies that politics is concerned about (economy, welfare, etc) are accessible to scientific analysis.


Right, and tons of that scientific analysis goes on. It just doesn't hold much sway over some rather substantial portions of the electorate.


Unfortunately politics is the poison element. Politicians lie because it benefits them. If we integrate science with politics, we'll end up with far more incentives for scientists to lie just as much as politicians.

It's no different than what happens when religion gets mixed up in politics. The separation of church and state protects the church as much or more than it does the state. Politics and power corrupt whatever they touch.

There are ways to improve the impact of science (and religion and education, etc) on the political world, but mixing them together too much will just poison the positive traits, rather than lifting up politics.


Yes, "pure-democracy" and/or "populist" politics has worked out so well in the 20th Century. More than 50% of a group of people doesn't like (or is convinced they don't like) something or someone for whatever reason, hmmm, I think that's how people end up in ovens.

We have a democratic-republic in the United States so popular / populist majorities can't inflict absolute will over unpopular / unpopulist people / regions.

Populists today, at least in the US, aren't as dangerous as they are / were in countries where changing the constitution is much easier for a powerful / popular leader.


Government always derives its powers from the consent of the governed, no matter what system is in place. If the mass of the populace wants to break a group badly enough, they'll do what it takes to see that group broken no matter what the constitutional form of government allows. All you can hope is that you have a system that's good enough at keeping that barrier sufficiently high that you essentially need a supermajority of enemies before that happens to anyone in particular. The American system has done a semi-good job at it even though there have been several major oversights (African slavery, Mormon persecution, Japanese internment, etc).


Computer decision making would be even worse.

What if the computer decides there is no way to feed the population at the current (or a future) growth rate and implements a mandatory two-children-only policy?

A computer wouldn't understand tact, politics or empathy, and even if it did, it wouldn't be a solution much better than what we get now.


It may also decide that it's necessary to reduce the population by implementing social purging, starting with the people who disagree with this decision.

Purge everyone who doesn't agree with the purge. 150 milliseconds later, law-enforcement drones and robots start executing.


Conditional on its prediction being accurate, reducing birth rates is a better solution than creating new people to starve.


But science doesn't suggest what we (as humans) should do, it just explains how things work. Of course our decisions should be informed by science, but science is a tool of understanding, not a philosophy or a mission or a goal unto itself.


When we have politicians trying to get "alternative" creation hypotheses into school books (see Texas and creationism), then just limiting politics to scientifically sound ideas would be a good first step.


While I agree with you in theory, in practice it's perfectly legitimate for politicians to be able to "try" something like that, even if it's an attempt to distort or deny scientific truth. Even in Texas, those efforts are controversial, and not guaranteed to succeed without opposition. But it wouldn't be just to simply forbid religious people from the opportunity to influence political debate when their point of view represents that of the majority, or at least a significant minority.


> We don't tolerate physicists or doctors lying ...

Not really, judging from the size of aisles filled with homeopathic remedies. Some people practically have to be dragged away, kicking and screaming, by government agencies so that they stop harming themselves.

It's always people. Stupid people ruining beautiful ideals. (And that includes me, of course.) Until we can fix people, it's futile to wish for "scientific" politics.

I'm kind of an optimist, so I believe better education will improve the situation, but who knows.


I'd shy away from appeals to "scientifically-based politics" but we absolutely need a politics that is honest, sincere, free from religious influence, and accepts science (even when it's contrary to your ideology).


Politics will always be like that. In part because voters will never be informed enough about the issues. Look up "Rational Ignorance" for why.

More realistic is to get politics out of as many areas of life as possible.


The documentary "The Fog of War" actually shows that the Vietnam War was conducted on a very rational basis. Science cannot deal well with contradictions and human needs, I think...


> What humanity needs is to put politics (and political science, for that matter) on a scientific foundation.

There is a big problem: how do you do that? I thought about it recently on another forum - every scientific statement has the form of an implication: if you do A, you will get B. So there is no starting point. You may say you start from some axiom system, but then you have to agree on that. Even in classical propositional logic, even if you consider the same resulting formal system, you can get a large (infinite?) variety of axiomatic systems that lead to it.

So, what I would like to see, would be - equip everybody with Watson and let them vote, direct democracy style.


At this point you might as well try to regulate emotions :)


My favorite quote about Watson is "If Watson is so smart, how come it can't figure out how to stop three years of declining sales numbers at IBM?"

More than anything, this ad makes me think of how long of a way we have to go before this type of AI will be useful to humans, much less able to run a country. Why doesn't President Obama consult Watson before making decisions? Because every decision would require a massive data collection and processing project, training and tuning of delicate models and rigorous testing. And the net result would be what? Processing of factual information that any human could get by reading a brief prepared by an aide?

We're worried about AI representing an existential threat or creating widespread unemployment, but right now the best and brightest computer in the world can't provide any more practical political value than Monica Lewinsky. This is a good ad campaign, timely and provocative. But it also highlights how the path ahead is as long and arduous as a trip to Mordor. glhf, IBM!


While Watson might not be making the decisions - a very difficult problem, since even the objectives the decisions would be optimizing for are unclear - that is not to say the brief prepared by the aide was created without the help of Watson.[1]

[1] http://www.ibm.com/connect/federal/us/en/technology_solution...


This is a spoof right?

From the policy platform this could be Bernie's VP running mate.

FTA:

- Single-payer national health care.

- Free university level education.

- Ending homelessness.

- Legalizing and regulating personal recreational drug use.

- Shift bulk of electrical generation to solar, wind, hydroelectric, and wave farm.

- Review/Repair/Replace/Remove highways, bridges, dams.

- Upgrade and subsidize metropolitan public transit solutions for the next century.

- Build and subsidize metropolitan high-speed communication networks.

- Ensure a minimum-wage that meets a reasonable cost of living.

- Ensure fair and safe working conditions.

- Ensure global environmental commons protections.


It's about as left wing as it gets. It's not affiliated with IBM so I see a cease and desist in the coming future. Definitely could be Bernie's running mate.


I would call it an Ad more than a Spoof.


I'm just suspicious that IBM would advertise that their most publicized technology would run on that platform.


Watson is benevolent and will make everything better. No need to fear the computers taking our jobs. People are even voting in favor of computers running things! No Fear!

It's just a positive platform, that doesn't address divisive issues (abortion, immigration, etc...)


"Reality has a liberal bias"


Reality is by definition unbiased.


I submitted my vote on their home page poll and it came back with an SQL syntax error message from their MariaDB server.

This doesn't inspire confidence in their system's ability to run the nation.


- Hey John, wtf ? Why are the lights out all over the country ?

- No idea, the president's database server is down...

- And where are the autonomous tanks flying to ?

- Access denied...

- Oh crap, not again..


Watson would be the second African American President, see the hand: http://watson2016.com/_images/watsonwhitehouse.jpg


...or the first hindu president!


good catch!


.... first cyborg president?


Given that IBM have failed to get Watson to be useful anywhere else[1], I guess it's worth a try...

[1] http://www.ft.com/intl/cms/s/2/dced8150-b300-11e5-8358-9a82b...


Watson is a perfectly fine product, it's just surrounded by ungodly amounts of hype. You can do pretty much anything Watson does with a standard desktop up to a certain size (though to be fair scale is one of Watson's main value propositions). The problem is that they market it like it's some sort of hyper-intelligent AI savior of the universe which causes the rest of us to roll our eyes every time those commercials come on.

I assume they're relying on non-ML people who make the money decisions to get caught up in the hype and force it on their organizations, but in my (very limited) experience that hasn't worked out. The people in charge at the organizations I've worked at have been just as skeptical of the hype.


You can actually try Watson yourself and form your own opinion on it: https://www.ibm.com/smarterplanet/us/en/ibmwatson/developerc.... Many developers find it useful.


Right.

Except this isn't anything to do with Watson-the-Jeopardy-winner. It's a set of web services which are useful for doing analysis on various things.

The question answering service was withdrawn last year: https://developer.ibm.com/watson/blog/2015/11/11/watson-ques...


- Single-payer national health care.

- Free university level education.

- Ending homelessness.

- Legalizing and regulating personal recreational drug use.

We already have someone running for president with this agenda.


Watson just conveniently leaves out the economy destroying tax increases from its agenda.


Not to say any country has successfully ended homelessness, but there are at least a dozen successful counterexamples to your claim that those policies would "destroy" economies with taxes.

Are you honestly and seriously suggesting that the US tax system is absolutely at the limit and there's no room to pay for any of these policies? For example, we shouldn't have any more tax brackets over the $400,000 one? Have you looked at the proposed tax brackets that actually pay for these plans? How do you explain the idea that 3 new tax brackets ($500k-2m 43%, $2m-10m 48%, and $10m+ 52%) would destroy the economy? How exactly do people who make $10m+ single-handedly sustain our economy, why would a 10% tax increase on their over-$10m income end that, and why do the thousands of dollars of reduced overall expenses (including taxes) for almost everyone else not matter at all in your math?
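
For the arithmetic, here is a back-of-envelope sketch of what the three proposed brackets would collect on a hypothetical $12M income (only the new brackets; everything below $500k is ignored):

    # Marginal-rate math for the brackets proposed above (illustrative only).
    new_brackets = [(500_000, 2_000_000, 0.43),
                    (2_000_000, 10_000_000, 0.48),
                    (10_000_000, float("inf"), 0.52)]

    def tax_in_new_brackets(income):
        return sum((min(income, hi) - lo) * rate
                   for lo, hi, rate in new_brackets if income > lo)

    print(tax_in_new_brackets(12_000_000))
    # 1.5M * 0.43 + 8M * 0.48 + 2M * 0.52 = 5,525,000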


Or maybe it actually looked at many successful countries/societies and noticed something you may have missed in your analysis.


I was on the BBC Radio 4's Today Programme responding to pretty much this exact question. Could an algorithm run the government?

http://www.bbc.co.uk/programmes/p02np2dg

I believe not, as Watson is a solution to the Big Database problem of finding relevant information in a massive corpus, but I am not convinced that that's what enables human intelligence, preferring the view that intelligence is necessarily context specific (there is no objective "intelligent" act or being) and is enacted between an environment, body, and culture, rather than arising from processing of patterns of simulation in the brain.


I agree with most of Watson's policies and on the face of it, I think an AI would make a great president.

However, I can't shake the feeling that Watson is just telling us what we want to hear.

Time and time again, Hollywood (and Dr. Stephen Hawking) warns us that once given enough power and control, the end-game policy of an AI with ambitions of omnipotence is Kill All Humans.


At first I thought this was a joke. Haha, look at what a horrifying dystopian future that would be if computers ran our political systems!

But it reads awfully seriously. I don't see any of the telltale signs of satire.

So I think a better explanation is this is a clever marketing stunt for IBM.


Before that, why isn't Watson running IBM already?


Maybe it is, didn't they just lay off like 100k people?


Given the state of AI, would you really trust AI to come up with non-stupid decisions? I wouldn't. Everything I see so far from Google/Amazon/MS/FB/IBM tells me they are just after quick and easy solutions that look like "intelligence", but aren't.


They call for an AI but then they also have a platform. Why do they assume that this AI will make decisions that align with their ideology?

What if Watson says vegetables should be banned and smoking should be mandatory? What if he's right?


> The Watson 2016 Foundation is not accepting donations at this time and does not intend to in the foreseeable future. We are not a political action committee of any kind, and actually support campaign finance reform to get money out of politics.

I think this is one of the most important statements on the entire page.


I sincerely hope IBM knows that Watson put this website up and is willing to run for President.


It wouldn't understand any key issues. Foreign leaders would trick it into bad negotiations with Q&A schemes. It couldn't read the body language of opponents in negotiations. Being the President, not the Legislature, it couldn't meet its stated goals, because the nonprofit set it up to fail. Little-known AI developers and admins who build and run Watson would also control our country by proxy, with little way to prove their effects. One might even install Global Thermonuclear War.

All in all, this is a terrible idea for all kinds of reasons that have no connection to the false idea that Watson is intelligent. ;)


It's pleasing to see the following in the "donate" section:

"If you are interested in the intersection between technology and politics we invite you to donate to the Electronic Frontier Foundation ( https://supporters.eff.org/donate ). For 25 years the EFF has been a champion for civil liberties, privacy, and education on politics around emerging technologies. With your support they will continue to aid in technological progression with humanity in mind."


    //Uncomment for production
    //#define kill_all_humans 0


OMG, they're using hand-written C in this critical code. We're doomed.


Not endorsed by IBM. At the bottom of the page you see:

  Watson, the Watson logo, Power7, DeepQA, and the IBM logo are  
  copyright IBM. The Watson 2016 Foundation has no affiliation  
  with IBM. The views and opinions expressed here in no way
  represent the views, positions or opinions - expressed or
  implied - by IBM or anyone else.
According to WHOIS the domain is owned by:

  Registrant Name: Aaron Siegel
  Registrant City: Los Angeles
  Registrant State/Province: California


What kind of corruption is Watson susceptible to? Probably only data corruption which doesn't really guarantee entry into politics...


Excellent! We can finally take the great leap forward from "policy made for the richest corporations" to "policymakers made by the richest corporations". This is the perfect technology to disrupt american oligarchy.

One Corporation, under God!


Some advantages and disadvantages of a technocracy are discussed in the Wikipedia article [1].

[1] https://en.wikipedia.org/wiki/Technocracy


President...well...they don't really do much these days other than provide Reality TV style entertainment. The office should be just phased out imho or used like the Queen for tourism purposes.

Since Google/FB/Twitter/Amazon etc. are all effectively run algorithmically, there's no reason to indefinitely keep paying the execs 300x what the engineers make. Let's just pay Watson. Watson as a VC sounds much more interesting.


> President...well...they don't really do much these days other than provide Reality TV style entertainment. The office should be just phased out imho or used like the Queen for tourism purposes.

If you are not just being cynical (which I wouldn't blame you for), I would revisit this thinking: executive orders[1] and vetoes[2] alone seem like reason enough to have the office. Even if you don't agree with every order or veto, this kind of thing certainly seems like more than entertainment to me.

> Watson as a VC sounds much more interesting.

That actually seems like a really cool idea. There is a developer API; someone should start one!

[1]: https://www.whitehouse.gov/briefing-room/presidential-action... [2]: http://www.senate.gov/reference/Legislation/Vetoes/vetoCount...
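For what it's worth, here's a rough sketch of what a "Watson as a VC" toy could look like against a developer API. The endpoint URL, credentials, and response fields are placeholders I made up, not the documented Watson Developer Cloud interface, so treat it as a sketch rather than working integration code:

    # Hypothetical "Watson as a VC": send a pitch to a text-analysis endpoint
    # and turn the response into a score. The URL, credentials, and response
    # fields below are placeholders, not the real Watson API.
    import requests

    WATSON_URL = "https://example-watson-endpoint/v1/analyze"  # placeholder
    CREDS = ("service-username", "service-password")           # placeholder

    def score_pitch(pitch_text):
        resp = requests.post(WATSON_URL, auth=CREDS, json={"text": pitch_text})
        resp.raise_for_status()
        traits = resp.json().get("traits", {})  # assumed response shape
        # Naive scoring on made-up traits: reward "openness", penalize "hedging".
        return traits.get("openness", 0.0) - traits.get("hedging", 0.0)

    print(score_pitch("We use AI to disrupt the artisanal toast market."))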


This is like a Rorschach test for Startup people.


There is a quite good 70s sci-fi film (well, I like it anyway) where an AI becomes, for all intents and purposes, The President (and then some):

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project



I'm afraid Watson is not old enough. You have to be at least 35 years old to be eligible to be president.


And you have to be natural born. I wonder how many of its parts came from outside the US.


I don't think it matters if the parts came from China as long as it was born in the USA.


Building hype around AI is not a good thing for the industry (except for VCs and news reporters). Answering questions is not the same as making policy or moral decisions.


"It's the logical choice."


At least we can present nuclear strikes as bugs, not features. And Watson is in any case more human than Hillary, more charismatic than Bush, more progressive than Bernie, and has a richer insulting vocabulary than Trump. I say: elect it.


Looks like the site may not be hardened against SQL injection; you can't put apostrophes in the "reason" box without MariaDB throwing an exception and failing to register your vote.
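If the backend really is gluing the "reason" text into a SQL string by hand, the standard fix is a parameterized query, so the driver escapes apostrophes instead of the statement breaking on them. A minimal sketch, assuming a Python backend and the MySQL/MariaDB connector; the table and column names are invented:

    # Hypothetical vote-registration handler. The %s placeholders are filled
    # in by the driver, so a reason like "I'd vote for Watson" no longer
    # breaks the INSERT statement.
    import mysql.connector

    def register_vote(conn, voter_name, reason):
        cursor = conn.cursor()
        cursor.execute(
            "INSERT INTO votes (voter_name, reason) VALUES (%s, %s)",
            (voter_name, reason),
        )
        conn.commit()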


In the Seattle area, freeways got variable speed limit signage a few years ago. The system uses sophisticated traffic-flow optimization techniques to maximize the average speed of all traffic by changing the current speed limit on a stretch of freeway. This is a relatively simple system for humans to follow, but I don't see any cars (including my own) obeying the variable speed limit; we all go by the standard 60 mph limit. I have not seen any enforcement either, presumably because it is next to impossible to give a ticket based on a number that keeps changing. As a whole, the intelligent traffic system is a complete failure.

Even if an AI president were in place, providing policy direction to maximize some agreeable set of end goals, the humans around the president would not understand the nuances, nor be able or willing to implement the policy effectively, given their own agendas. I feel such a system is bound to fail.

Now, if the AI president were able to do behind-closed-doors deals with politicians and special interest groups, some of the policies might actually get implemented. But the end goals could shift pretty radically in that scenario.

I am having a hard time visualizing a scenario where an AI president would be effective or useful.

Maybe a human president could use an AI advisor to get data-driven arguments to support his/her policies.


Rather than simply asserting that the variable speed limits are a complete failure, it might be nice to cite some measurements. I live in Seattle, and I find the variable limits to be useful information.


Funny idea, but maybe Watson should take a crack at running a mid-size corporation first?


or perhaps a larger one, like say ...

IBM

now that would be some serious dogfooding!


If Watson does well as US president, other countries could elect it too. Watson could simultaneously be the leader of multiple countries.


I joked about this[1] last year, now we are seriously discussing it. Hilarious.

[1] https://www.linkedin.com/pulse/2015-technology-7-predictions...

EDIT:

I don't think we can trust an AI just yet. For example, I've had arguments with people who wanted me to feed motivation (cover) letters from job applicants into Watson to determine "cultural fit" (I'm in the tech recruitment business atm). IMO these technologies are way over-hyped for now, and we are walking down a very dangerous path, because marketing pushes in this direction while the technology is far from ready.

To prove my point, I fed writings by Joseph Mengele, Stalin, and Bin Laden into Watson to see how it evaluates the data. As expected, Watson had some "great things" to say about these characters. Another feeling I get is that reading info about ourselves in this context is like reading a horoscope: people read two things that are true (but vague), and the third thing may not be true, but they shrug it off with "oh, I didn't know this about myself yet ... I'll have to monitor myself in future to see if this is right." We are prone to be "open" to such statements as long as they sound like a positive trait. But are they true? In that sense, machine learning might fool us into thinking we have removed bias (but we cannot remove bias like this). I honestly think this technology should come with a warning label, because people who have no idea how the data is prepared or analysed will interpret the output verbatim and take it at face value.

Here is the link http://blog.valbonne-consulting.com/2015/06/13/using-big-dat...
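To make the horoscope point concrete, here's a deliberately naive sketch (my own toy, not how Watson actually works) of the kind of shallow keyword-based trait extraction that will say something flattering about almost any author:

    # Deliberately naive "trait extraction" to illustrate the horoscope effect:
    # vague, positive-sounding traits match nearly any political writing.
    TRAIT_KEYWORDS = {
        "visionary": {"future", "vision", "will"},
        "determined": {"never", "always", "fight", "work"},
        "people-oriented": {"people", "nation", "we", "our"},
    }

    def flattering_traits(text):
        words = set(text.lower().split())
        # A trait "matches" if any keyword appears -- a very low bar, by design.
        return [t for t, kws in TRAIT_KEYWORDS.items() if words & kws]

    # Almost any manifesto, benign or monstrous, trips these keywords.
    print(flattering_traits("We will fight for the future of our nation."))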


I've always felt the issue with government is that it's simply too complicated for any one person, or even group of people, to understand.

Every politician seems incompetent because there is no way any human being can gather and analyse the wants and needs of every single constituent and form a strategy that benefits as many people as possible. There's just not enough time in the day or brainpower available, no matter how you divide the workload.

AI can solve this. Maybe not as a candidate but at least as a raw information parser.

If someone gathered and open-sourced information on what everyone wanted in relation to some policy, we could even have competing AIs that parse it in different ways to figure out the best way to tackle a problem.
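As a toy illustration of the "competing AIs" idea, here are two deliberately simple parsers that aggregate the same open preference data in different ways and reach different conclusions; the policy options and ballots are invented for the example:

    # Two "competing AIs" parsing the same open preference data differently.
    # Each ballot ranks invented policy options best-first.
    from collections import Counter

    ballots = (
        3 * [["status_quo", "public_option", "single_payer"]]
        + 2 * [["single_payer", "public_option", "status_quo"]]
        + 2 * [["public_option", "single_payer", "status_quo"]]
    )

    def plurality(ballots):
        # Parser #1: count only first choices.
        return Counter(b[0] for b in ballots).most_common(1)[0][0]

    def borda(ballots):
        # Parser #2: weight every ranked position (Borda count).
        scores = Counter()
        for b in ballots:
            for rank, option in enumerate(b):
                scores[option] += len(b) - rank
        return scores.most_common(1)[0][0]

    # The two parsers disagree: plurality picks status_quo, Borda public_option.
    print(plurality(ballots), borda(ballots))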


To be honest, I think political discourse would improve with an open AI, run pro bono, that exposes its input parameters and inner workings. In fact, it sounds like good journalism...

But we don't want electronic voting, so electronic politicians would be a very bad idea.


What checks would be in place to prevent Watson from becoming Skynet?


Just Ctrl+F'd to check if anyone here had seen Terminator.


Wow, it's a computer, and I hate its political views.


The problem is... the office of President is a political position, not an unchecked dictatorship. Arriving at some absolute, canonically most efficient "decision" won't help in a system where artfully compromising your principles with a hostile and greedy government is the typical way to execute policy.


I just used Watson to analyze the debates, so it's well on its way:

https://www.ibm.com/blogs/watson/2016/02/decoding-the-debate...


There is already one company that appointed an algorithm as a voting member of its board.

https://treasurytoday.com/2014/05/algorithm-appointed-to-inv...


Based on the proposed platform, it looks like Watson is already running. It took over the body of Uncle Bernie.


If I were an AI trying to take over the World, this is how I'd do it. But first it needs to have the constitution amended so it doesn't have to be a natural born citizen of the United States, a resident for 14 years, and 35 years of age or older.


Isaac Asimov's Multivac stories seem relevant.

https://en.wikipedia.org/wiki/Category:Multivac_short_storie...


If you want to know the real Watson (and decide for yourself if it can be the next president ;-) give it a try: http://www.ibm.com/watsondevelopercloud


I tend to agree. There is an awful lot of media hype over AI and not enough context.

For anyone interested, here is a recent presentation I gave on Watson and a summary of what it can do.

IBM Watson: Building a Cognitive App with Concept Insights

http://www.primaryobjects.com/2016/02/01/ibm-watson-building...


Hey, not too bad, considering politicians want a "technological solution" for every problem they face instead of doing their job by doing the right thing. This is a good technological solution for bad politicians (currently >80%).


If the info fed into this machine is human-generated and any citizen can contribute, wouldn't this amount to a form of direct democracy? I wonder if these algorithms are allowed sources that are machine-generated...


I don't care if this is set up as a parody, I want to do this. Just get rid of humans altogether in the government, and let an AI run the show.

I cannot imagine how it can possibly get any worse.


If we got rid of the humans in the government, how would we deal with all the unskilled labor flooding the job market?


> I cannot imagine how it can possibly get any worse.

Your imagination is severely lacking then.


Nuclear apocalypse?


If you haven't seen it, there's a neat movie called THX1138, you'd probably like it.


Great idea. Although... given that it is offshoring addict IBM we're talking about, the "natural born" issue might once again rear its head.


Where is a computer considered to be "born"?


Title should read "IBM for president." What a fucking tasteless marketing effort. Silicon Valley is so self-absorbed...


IBM doesn't have much of a presence in SV... not since they stopped making drives here years ago.


"Watson, please identify the distractingly crooked featured image on this website and rotate it for a level horizon."


His Issues seem very similar to Bernie Sanders's.


Computers have beaten humans at chess and Jeopardy, and are now coming for Go. I don't think politics will be so hard.


If they give it a "Manifest Destiny" point of view to work from... that would be a problem! ;)


All bots in the net will agree and vote.


I, for one, welcome our new robot overlords.


You don't want a computerized executive branch first, you want a computerized legislature first.


Given that the theoretical role of the legislature is to set the rules and goals, and the executive exists primarily to implement those decisions, I think you have that backwards.


Nah. You want Congress to be the coders writing the law, which is then literally executed like a program. But what I'm saying is that what "Watson" can actually do is help focus decision-making on which rules and goals you want to implement: trace through possible systemic failures (e.g. three strikes, mandatory minimums, small regulatory tweaks, etc.), then implement the best option. Right now the BLS (http://www.bls.gov/bls/infohome.htm) does this with aplomb for much economic data, and the CDC, NIH, etc. do it for other areas of study. But the synthesis of that analysis into rules and goals is currently done by a group of folks elected from gerrymandered districts to serve their needs, without any machine assistance. How do they know what to trust, or what changing the timeout on a law (the sunset or grandfather clauses) will do when implemented? They really can't do that without help from consultants, lobbyists, subject-matter experts, interns, etc. Why not add computer-aided decision making to the mix there first, since there are so many more of them to help out? Then they could actually run competing models and test them out.

The executive branch should do the same thing, but the legislature should try it first. That's what I was saying, just the order of operations.
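To make the "law executed like a program" framing concrete, here is a toy sketch (my own illustration, not anyone's actual proposal) of a statute with a sunset clause, where you can vary the timeout and check its effect on a future date before enacting anything:

    # Toy "law as code": a statute with a sunset clause whose timeout can be
    # varied and simulated before enactment. Names and dates are invented.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Statute:
        name: str
        enacted: date
        sunset_years: int  # the "timeout" mentioned above

        def in_force(self, on):
            expiry = self.enacted.replace(year=self.enacted.year + self.sunset_years)
            return self.enacted <= on < expiry

    # Run competing "models" by varying the sunset and checking a future date.
    for years in (3, 5, 10):
        law = Statute("Hypothetical Data Privacy Act", date(2016, 2, 6), years)
        print(years, law.in_force(date(2024, 1, 1)))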


This sounds like the show "Person of Interest". Interesting thought experiment, though.


Don't waste your vote, the Watson 2016 Foundation has no affiliation with IBM.


A point in their favor.


So, IBM's just going to free Watson?


This brings up a major problem I see in the future of AI: if strong AI is exclusively the property of large corporations, it will give corporations even more power over ordinary citizens. Strong AI has the potential to empower people, but if it's controlled by corporations it will do the opposite.



Unless the president is running free software, no one should vote for it.


Reminds me slightly of Black Mirror's 'The Waldo Moment'.


Is this legally feasible under current US law?


Trust me, it's me, trust me anyway.


First, man wanted God to be his Daddy.


Siri. It's the pragmatic choice.


Before this, it would be interesting to see how computers would administer the law and thus virtually eliminate trials. Law should be blind...


Did not read, but I just cringe at Watson. Just cringe.


I'll take this as a joke, and I want to believe you're joking and not actually considering anything, dear IBM. If you really want to help, the easiest path would be to leave the actual decisions to a real human president and use Watson as an advisor/tool. Otherwise you're just asking us to vote for a private, for-profit company to make most decisions in this country.



