I am reminded of Thoreau's quote: "There are a thousand hacking at the branches of evil to one who is striking at the root."
The root of the problem is not that we have bots, but that we have normalised lying and deception as part of everyday business. We allow companies to pretend that bots are human beings, and allow call-center employees in third-world countries to pretend (sometimes even through elaborate lying) that they are located in the same country as you. We allow companies to tell outrageous untruths in their advertising - see the Samsung ad which they're currently being hauled over the coals for in Australia.
That's the real problem here, and the one we need to fix on a general level, not by band-aid regulations over whichever dishonesty has managed to irritate enough state representatives.
> see the Samsung ad which they're currently being hauled over the coals for in Australia.
For people not in the know: Samsung has apparently been marketing its latest phones as waterproof, with ads showing surfers and people fully clothed under water. The ACCC does not approve of this, as the phones aren't IP rated for use in salt water.
If someone were to post on Twitter about how they would murder Samsung executives, no amount of small-print disclaimers saying that 'they were talking about killing the expectations of the executives by showing them an enjoyable time like no other' would save them from legal prosecution. Advertisements should be treated the same. The way normal people read the implications in an ad constitutes the actual ad, and no amount of fine print can change that.
Samsung isn’t the only one. Apple is similar in the carve-outs for water damage in its warranty. I’m guessing Samsung weasels out of it by saying water ‘resistance’ vs. ‘proof’, but still marketing it with surfers and such?
To give a personal example, the screen of my water-resistant Series 3 Apple Watch died the day after I went swimming in a pool.
> To give a personal example, the screen of my water-resistant Series 3 Apple Watch died the day after I went swimming in a pool.
That's... surprising. To be clear, I'm not saying I don't believe you. But I'm definitely surprised - I've got a Series 2 which I get wet every day and even use while swimming and I haven't had any issues. My mother has a Series 2 or 3 which she also regularly uses while swimming.
Per Apple's documentation[0]
> Apple Watch Series 2, Apple Watch Series 3, and Apple Watch Series 4 may be used for shallow water activities like swimming in a pool or ocean.
My wife's favorite watch is the Timex Women's Ironman Triathlon 50. The watch is labeled as water resistant to 50 meters. Most complaints about this watch are that the band is crap and breaks after about a year of ordinary use. The other is that after minimal contact with water, the inside of the watch face steams up. These aren't expensive watches, but they aren't cheap either.
Not sure how a watch intended for Triathlon use has a reputation for the band breaking at the slightest stress and the inside of the watch fogging after an ordinary dip in 3 feet of water. How does a watch like this pass QA and not get investigated (or sued) for false advertising?
I would assume that it's not a QA issue, but rather it's designed to perform well enough that 95% of customers don't complain. If 95% of customers don't get it wet or don't get particularly angry when they do, then designing it to be more waterproof would under-optimize the company's resources.
A lot of the things we take for granted about the benefits of living in a capitalist society are side effects of capitalists traditionally not being very good at their jobs and giving customers stuff/benefits unintentionally. That seems to be going away thanks to IT and increased testing of assumptions.
My theory is that since the screen is held to the watch with glue [0], heat exposure will cause the glue to unset slightly and break the waterproof seal. So over time your watch may lose its seal just from being out on a hot day.
The malfunction happened to me after owning the watch for 14 months, and I swam with it in the ocean on month 4 with no problems. It was also fine on the first day, the issue started happening the day after. My theory is a very small amount of water got in, but it was enough to damage the oled screen inside.
The solution would be not using glue, but screws or a bayonet mount on the back - but then it won't look like a seamless object. Maybe in the new non-Jony era of Apple they will make their watches properly waterproof? I doubt it, though.
I got a series 4 watch as a father's day gift and went swimming the following weekend. My watch went insane (it tried to dial 911 about 5 times and wouldn't accept any input to try to stop it) and then died. I took it to my local Apple store and they were rather stern about too much water and it probably being my fault, but sent it off to the repair center anyway. But... Apple sent a replacement via FedEx that arrived less than 48 hours later.
Fine print you have to look up, without having any real reason to look for it in the first place, is not being clear at all. It's like saying, it's in the EULA, they're pretty clear about it!
Unless it's front and center as part of setup, it's not being upfront.
Really? You think this rather clear and short document detailing to which extent the Apple Watch is water resistant is basically the same thing as iTunes' EULA?
It's up front, and the fact that it takes more than a bullet point on the back of the box to explain the details and limits of the feature doesn't change that.
Here's the description from Apple's official Watch marketing page:
> Sweat, surf, and swim proof.
> Apple Watch Series 4 is water resistant to 50 meters and tracks both pool and open-water workouts. Turn the Digital Crown to eject water from the speaker using a burst of sound.
I don't see this contradicted by the additional support document.
Just because an EULA is well written doesn't mean that more than 0.1% of customers will read it. If they put this document inside their EULA, nobody would read it, and I would argue it's even more hidden than an EULA, because you have to think to look it up on the internet. If people look it up at all, they will do it after their watch breaks, and by then it's too late.
They also don't disclose that they have water damage carve-outs, which shows they know their water resistance wears out quickly enough that they need the carve-out for financial reasons.
"Swim proof" the watch is not. Apple's advertising, whats in the box & paper manual, what copy is shown on the store, and what text that shows up on the watch when you set it up does not warn of any of this.
Also go ask a cross section of the population what water resistant to 50 meters means and they probably will give you the more common understanding of waterproof.
My Apple Watch series 0 (a watch that wasn't marketed as waterproof) had a dozen+ days in the ocean and 50+ dunks in a pool and suffered no malfunction. You must have had a bad quality item.
It may not be officially recommended, but I know people who have gone surfing with recent model iPhones and Apple watches. They seem to do fine, but maybe that's just luck?
They can claim it all they want. When I go to a water-involved activity, my phone goes into a zipper-lock plastic bag that is ostensibly for pills, but is exactly the size required for my phone. Then that bag goes opening-end first into another zippered plastic bag. A capacitive touch screen still works through two layers of plastic water resistance. It's usually the direct sunlight that makes it unusable.
Even if I wasn't worried about the water, what about beach sand?
If an advertisement told me my phone was bulletproof, and showed a video of one getting shot, I would take extra precautions to ensure that it never came into contact with loose bullets, and to store it well away from ammunition or containers that previously held ammunition. That is how much I proactively mistrust ad content.
Correct - my phone died after I briefly exposed it to salt water on a beach. I was misled by the ads I saw; there was no indication, except in very fine print, that salt water would immediately fry the phone.
Well, frankly, fuck them. Last year on vacation I was this close to taking my phone to the sea with me, because obviously it's "waterproof" (as their marketing copy tells me). Only my wife reminded me that just because they say it is, doesn't mean it actually is, and especially doesn't mean I should risk my expensive phone to test that theory on vacation.
I didn't want to post yet another "reason #651235 why I loathe adtech" comment, but that's really it. Bots pretending to be people wouldn't be of interest at this level if it was just criminals trying to scam people - it would be just another type of crime. This is a problem because of the almost-fraud tactics of sales and marketing that, for some reason, happen to be on the right side of the legal line. I suggest we move that line to solve this problem.
2pt font on page 14: Except this page of exclusions, and conflicting exclusions, and the fact the company can add any of their choosing at any time without notice.
The real problem then, is that corporations have unchecked power. No one acting behind them is personally liable for their illegal behavior.
Corporations can't go to jail. They only pay fines that are a fraction of their profits from the illegal behavior, which is a mathematical incentive for them to break the law before their competitors.
Even in silicon valley, we laud AirBNB, Uber, Facebook, Google, et al.
Do you want corporations to be better or not care?
I’m a fan of forcing corporations to issue shares to the victims.
Do something minor? 10% dilution. Do something worthy of the corporate death penalty? Issue 100 shares for every share outstanding.
Of course, the issued shares would have as many votes as the maximum currently issued share (so if the founders get 10 votes per share that is what the victims get too).
Note that you can set precedents based on percentage ownership, so this naturally scales to large (and small) companies.
If this were common practice, I guarantee you that companies would be a lot more careful to obey the law: the CEO, founders, and investors would all have a bigger personal financial incentive to obey the law than to break it.
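To put rough numbers on it, here is a small sketch of the arithmetic. The share counts, the founders' 40% stake, and the reading of "10% dilution" as the victims ending up with 10% of the company are my own illustrative assumptions, not part of the proposal above.

```python
# Hypothetical numbers to illustrate how issuing shares to victims dilutes insiders.
# "Minor" penalty: victims end up owning 10% of the post-issuance company.
# "Corporate death penalty": 100 new shares issued for every share outstanding.

shares_outstanding = 1_000_000
founder_shares = 400_000  # assume founders own 40% before any penalty

def shares_for_victims(outstanding, victim_fraction):
    """New shares to issue so victims own `victim_fraction` of the post-issuance total."""
    return outstanding * victim_fraction / (1 - victim_fraction)

minor_total = shares_outstanding + shares_for_victims(shares_outstanding, 0.10)
death_total = shares_outstanding + 100 * shares_outstanding

print(f"Founders after the 10% penalty:   {founder_shares / minor_total:.1%}")  # ~36.0%
print(f"Founders after the 100:1 penalty: {founder_shares / death_total:.2%}")  # ~0.40%
```

The point of the sketch is just that the penalty scales with ownership, so the people with the most control also bear the most dilution.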
"I’m a fan of forcing corporations to issue shares to the victims."
Typically, the victims of corporate malfeasance are considered to be the (existing) shareholders, in the American system. Like, there's a lawsuit on behalf of the shareholders when pretty much anything goes wrong that affects the stock price. There's quite a contrast to the common opinion on HN and elsewhere, where the shareholders are identified with the corporate malefactors.
Fascinating suggestion. I'm trying to think of consequences.
Among those: if lawyers get 30% of settlements, then large consumer-rights law firms could end up as major shareholders. That could get very interesting in several ways.
Brings up one of the largest scams out there. Investment managers of mutual and 401(k) funds retain the voting rights of the shares under management, effectively locking most Americans out of corporate governance. Ever notice that when shills talk about the holy, God-given rights of shareholders, they never talk about that?
Why shares instead of cash? How will issuing shares in Airbnb help people struggling to find affordable accommodations in the city where they live/work?
Airbnb isn't even public, so the shares are essentially worthless unless someone takes on the expense and hassle of pooling them together and finding a private buyer. And even then, there's a major imbalance of power because private buyers are usually wealthy individuals/funds and know that the seller is desperate for cash.
Corporations can, however, have sanctions imposed on them, like Huawei or ZTE. Perhaps we ought to explore placing companies that routinely lie to consumers on the BIS Entity List, and bar other American companies from doing business with them until they change their ways.
How hard would it be to write laws that punish companies in ways that will shape their behavior, like we do with people? All you have to do is take away their ability to do business in your jurisdiction. That's corporate jail.
Not disagreeing with you but I believe Thoreau said that about philanthropy.
> There are a thousand hacking at the branches of evil to one who is striking at the root, and it may be that he who bestows the largest amount of time and money on the needy is doing the most by his mode of life to produce that misery which he strives in vain to relieve.
Perhaps, but I think Thoreau is talking primarily about the risk of producing a systemic dependency, i.e. the "teach a man to fish" philosophy. He was an ardent believer in self-sufficiency and prioritized individual autonomy over maximizing group benefit. He has a lot of great things to say, but you could argue their relevance and context are often misrepresented.
> allow call-center employees in third-world countries to pretend
I agree companies should be forced to tell the truth to their customers, but does that hold in every case where they're asking for details about the customer service rep? That could go weird places.
If a caller demanded to know the rep's HIV status, we wouldn't insist the company disclose it. We'd probably demand they didn't. Honesty is important, ok, but the customer has no reasonable need to know that. So, ok, it's not as simple as "tell the customer everything." There are some judgment calls the company has to make about what's important to disclose and what is irrelevant. The service rep's race, religion, medical history, or sexual orientation are probably not up for discussion. Why not national origin?
This isn't just about fairness to the rep, but the majority of customers need the company to hold this line too. If some portion of the population strongly believes that foreign call centers are lower quality, are you going to get the best possible answers on a survey if you say, "this was a foreign call center, how was the quality?"
Disclosing irrelevant details can sabotage collection of unbiased information on call quality.
If customers as a whole want improved service quality, they'll want the company to be able to collect unbiased post-call survey results.
If call centers in country X are all terrible, unbiased surveys will reveal that. If it's really about training, and there are good and bad call centers in several different countries, unbiased surveys will reveal that too.
I know passions run deep on this one, and it's a hard case, but this one seems a little more complicated than the others.
"Hello, it's [fake name they can't quite pronounce chosen to be a common UK name] calling you from [company they don't work for but who ordered a phone advertising service], it's lovely weather here in [small UK town they mispronounced badly and where it's terrible weather today], how are you today? ... Great, we are calling about your [account with company you don't have an account with] ..."
It's basically all lies, and all unnecessary, trying to win some trust as part of a marketing exercise. Bleurgh.
>If a caller demanded to know the rep's HIV status, we wouldn't insist the company disclose it.
Because it has no bearing.
I think a more interesting question is if they wanted to know health information that would be relevant, such as specific mental abnormalities that would influence a person's ability to lie. Then again, even that question is moot because when you ask someone if they are a liar, the truth tellers and liars both answer no.
Current location matters because of laws. If the other individual is in the US, I have a better idea of what recourse I have if my information is misused.
And I don't think this can be dismissed outright as contrived and unreasonable, especially in some cases like doing business in China where IP is treated very differently
But if we use the excuse of the customer having a right to know the laws that bind whom they are talking to, gender and race become an interesting case as well. In a fair and just world, they shouldn't matter one bit. But given our current world, we know that the legal system is not applied to them equally. Racial and gender disparities in sentences can be off by a factor of 27 (this was a back of the napkin estimate I did some years ago based on sentencing data, and was based on the same crime, so it does not include any disparity in what charges are brought). Does a customer have a right to know that the person they are talking to is privileged when it comes to laws being applied and thus has less disincentive to engage in illegal behavior?
>If some portion of the population strongly believes that foreign call centers are lower quality, are you going to get the best possible answers on a survey if you say, "this was a foreign call center, how was the quality?"
Self-reports are one of the worst forms of data collection, so it is a bit sad that so many companies still depend solely on them when rating their employees. More objective metrics have massive problems with being gamed, but at least they are more objective.
Declining to tell someone something is not the same as lying to them. And declining to tell the customer "I am HIV positive" is not the same as declining to tell them impersonal information.
You may or may not be aware but there are companies that actively instruct their international call center personnel to tell customers "My name is Sally and I live in Fort Worth Texas, right near you" when none of that is true.
> The service rep's race, religion, medical history, or sexual orientation are probably not up for discussion. Why not national origin?
National origin is different than where the call center is located. A company is under no legal or (in my opinion) moral obligation not to "discriminate" when it comes to choosing which country to locate its call center in.
Some part of the population is always going to use deception and exploit the weakness of others.
I don't think there is a "general fix". You just have to constantly show them where the boundaries are and the cost of crossing them.
Taking advantage of the expectation that a general fix is possible is what gets charlatans and panderers into power.
That said, even though it has become possible to take advantage of the weaknesses in the population faster and at scales never seen before (imagine a shark that, overnight, grows more efficient at hunting, doubles its kill count and hunting grounds, and that trait spreading to all sharks by the next day), that same pace causes natural born predators to bump into each other more, clashing more frequently, expending more resources/energy in empire defense.
There is no free lunch, even to the mindlessly ambitious douchebags of society. They put up an appearance that there is.
> that same pace causes natural born predators to bump into each other more, clashing more frequently, expending more resources/energy in empire defense.
And that's a problem in this case, because it's not like any of the predators in the ad industry actually dies. They just deliver less money to the parties that employed/contracted them.
Advertising has a different dynamic than regular predation, because in a saturated market the effort of any party serves only to cancel out the efforts of every other party. It's a zero-sum game that can consume a near-infinite amount of energy and resources. Now think of all the man-hours, electricity, fossil fuels, paper, paint, toxic chemicals and human dignity - all wasted in a zero-sum game - and tell me again that advertisers "expending more resources/energy in empire defense" is a good thing. It's the opposite - it'll eat our economy and kill us through the side effects of all that resource wastage.
I agree that this is a widespread problem in many areas, but I do think there’s a strong argument that bots are worth singling out in the context of social networking, where they are often used to steer public opinion in non-obvious ways with no clear ownership. It’s one thing if Samsung’s support chat-bot is misleading, but that at least is obviously linked to the company in question; whereas, say, the fake Twitter accounts complaining about Nike using photos stolen from real people aren’t easily associated with whoever is pushing that campaign, and the intention is clearly to shift public opinion. There are good arguments for preserving anonymity, but banning misrepresentation seems more defensible.
(I mentioned the Nike thing not because this is limited to conservatives but because that was the most recent example which came to mind: there were a bunch of well-promoted tweets from accounts using profile pictures of attractive young women which a quick tineye search showed were long-running Instagram users under different names)
The other issue with bots is whether a company's support is bot-only or human.
Free choice and all, but without disclosure (pre-purchase) a company shouldn't be able to "cost manage" their support function by using an algorithm fronted by chatbots.
Yeah, I'd also back laws banning a lot of the no-recourse situations companies cost-minimize you into but that'd also want to account for, say, having you talk to people who are under strict instructions not to help you or tell you why.
(Once Verizon tried to charge me an ETF despite being a couple of years out of contract. The phone support was horrible and I thought it might have just been an English fluency issue but then I called the executive office and the person who fixed my bill just casually volunteered that the people in the first-tier call centers cannot in any way reverse charges or escalate to people who can, and aren't allowed to say that. They understood the question but their jobs were literally on the line if they told you that they couldn't help.)
Should they have to disclose that their phone system has an IVR before every purchase as well? That they use a third-party logistics provider to handle returns?
Utilizing algorithmic-only customer support seems far more impactful than those technologies, from a customer standpoint.
As Google and PayPal have shown, it's generally a cluster$&#* (at best), even with state of the art technologies.
Furthermore, it provides a fundamental shift in the power dynamic by removing the possibility of whistleblowers and ethical counterpressure. You literally have management as the sole arbiter of algorithmic settings, with (currently) no legal disclosure requirements as to any internals.
That seems pretty screwed up from a free-market, transparency perspective.
Disclosing IVRs before purchase would be a good start. Preferably with a legally mandated recognizable logo that customers can quickly learn to associate with terrible support.
The problem with norms: bad actors don't care about them.
Norms and regulation/law can work together. Either alone is insufficient, I think.
Strong international norms and regulations are probably needed as well. As we clean up our domestic affairs, we create more space for hostile state actors to fill.
>> The problem with norms: bad actors don't care about them.
This is a very important observation. The world most of us want is one influenced by norms we work together to define & change over time. Some see this as an opportunity to exploit for personal gain and we eventually get imperfect regulation that tries to reflect what we originally intended, or worse, is enacted by the same bad actors to coerce the bad behaviours we resisted in the first place.
I see this all the time in growing companies:
1. culture dictates expected behaviour
2. company grows
3. culture weakens
4. norms get violated (intentionally or not)
5. process gets decreed to address transgressions
6. everyone loses the shared benefit/responsibility of autonomy
It seems, then, that it starts with the infrastructure / process. If the infrastructure supports abuse freely, it will be abused. There will always be those who try to find a way regardless.
The "root" of the problem is a tricky metaphor. It can be defined different ways, and the most abstract (ie, You can't swing at it) is often what people come up with.
How would you fix, at a general level, the overall human & corporate tendency to play loose with the truth? I mean, we're subtly dishonest all the time.
In the US, at the core of that problem is the deception that corporations are "people" at a fundamental level. That fundamental untruth allows and essentially permits all the others.
I'm having trouble seeing the connection between these two issues. If we didn't treat corporations as people, how would that stop companies from using bots?
How would that change the status quo for using bots? Without free speech would corporations that aren't also people just be completely disallowed from conveying information, or is there some particular law you think should be passed but is prevented by the first amendment?
Could corporations get around the restrictions you have in mind by subcontracting speech to people?
So it's okay if a person uses bots to deceptively call people, it just shouldn't be okay for corporations to do it, because they shouldn't be treated as people?
Basically they've warrantied their phones for chlorine or salt water immersion now here in Australia. They won't be refusing warranty claims for related damage.
Minor PSA: If you ever immerse your IP68 rated device in anything other than water, make sure to give it a rinse in water ASAP. Preferably not too high pressure water either.
The root problem is the identity problem on the internet.
Having anon accounts is good for HN but as soon as money is involved we need a structure that solves the identity for all people inside the transaction.
Technology would have to reach 1984 levels to ID users at all times. Better to have an identity relationship with a structure or body that you can meet with in real life, like a bank. A mix of tech and human relations is required for a sane identity relationship with this governing body, and therefore with everybody you do business with on the net.
Lying goes back to manageable levels once the identity is linked to someone's real life reputation.
>allow call-center employees in third-world countries to pretend (even sometimes though elaborate lying) that they are located in the same country as you.
I have a friend who worked at a call centre located in Asia for an American company. They were threatened with termination if customers heard them speaking in their native language, and the training for the job consisted mostly of faking an American way of speaking as much as possible.
We also “allow” the government to use deception. This entire thread appears to be focused on mostly inconsequential annoyances like adtech as if this is somehow the worst thing in the world, and the only savior can be the state finding the “root” to regulate and fix as if that wouldn’t include fixing itself somehow.
I agree, and I believe things like massive boycotts, worker-owned cooperatives, and stronger unions are the answer, along with an independent news media.
I know the normal reaction is more fines and regulation, but then you're using bureaucracy and court cases to fight people who are masters of bureaucracy and court cases.
Advertising is at the root of many of today's problems. Marketers could argue that it is just about helping people find the right product or service, but this is another lie. Maybe a lie you have to tell yourself if you are working in that field.
True. I would like to add, however, that just because they're not striking at the root doesn't mean they should stop hacking at the branch. It's still an evil branch, to use Thoreau's metaphor.
You're absolutely right about normalizing lies and deception. You know what normalizes lying faster than anything else? Requiring that people lie about the evidence of their own eyes if they're to be accepted into polite company. We've built a society in which lying is compulsory in certain domains. Why on Earth would anyone expect the lying to stay within those domains once the taboo against dishonesty in general is broken?
I felt stupid, but I fell victim to this on dpd.com while tracking a package. A helpful chat popup appeared where I could request assistance from a support agent, but it was not disclosed that the agent was a bot.
Needless to say, I spent a couple of minutes repeatedly asking a question, even rephrasing it, while getting frustrated that this 'person' did not seem to grasp my issue.
My recent experience with a credit card customer support agent might as well have been me conversing with a bot. It was actually more frustrating for both me and the agent, that they were so strictly bound to a script, with no room for their creative human brain to actually adapt their response to the unique context of my issue.
When customer service is nothing more than an agent choosing from various scripted responses, what is the point?
Escalating my issue up through 2 senior agents, I finally reached a human who was allowed by their employer to actively apply context to their support tasks.
The use of programmed constraints on a system, and how that system is sold to customers - this is a wicked problem of our times.
They may save on average, but that's just shifting the expense to the customer in the form of wasted time. If there is anything I've learned over 30 years of running businesses, it is that being disrespectful of your customers' time will end up losing you those customers.
The saving grace for these online services seems to basically just be that the potential customer base is so ridiculously huge that it still pays vs effort required. Same for the big social giants etc.
No matter how many people know better, there are plenty more that don't.
Another way they save is by discouraging their use. Since people who've called with real issues have had to waste a whole lot of their time, they're more reluctant to do so the next time they have a problem.
A few months ago, I contacted Microsoft support regarding an old Hotmail account of mine. I couldn't get access due to my own fault in the end, as I had set it up as a kid and 10 years later I could barely remember any details. However, the conversation with the support agent was by far the best. It felt natural, without the "copy/paste" ready-to-use responses they usually have.
AliExpress does this as well and in an annoying manner that I consider insulting. Support chat will start with a useless bot that just looks for keywords and then replies with FAQ links. "Escalate to a service agent" will make you wait for 30-45 seconds to then be greeted by "Anna" or some other human name. It's another bot, with typing-delay and will follow a simple script to keep you busy, low quality and no apparent option to actually get to a human support agent.
I'm curious. Are you complaining about support not being able to solve your issue, or about talking to bots instead of humans? Assume a strong AI with enough authority were handling support. Would you still prefer humans?
I'd have a problem with both in the support case. The alternative to human contact is a natural language interface that just wastes my time, and/or laziness in other parts of the business. If my problem is covered by your FAQ, improve search. If my problem can be solved by a bot, let me do it in your UI. If handling my problem through a human requires more information, prepend a form. Provide a clear path to skip/escalate.
To be fair though, I'm annoyed by human first-level support for similar reasons most of the time, but at least escalation/problem solving with them is easier than with a bot that explodes when you go off-script. I think, personally, that making me talk to your fancy answering machine is just devaluing. If a company wants my business, they should have human support and pay them enough to work with the customer.
> If my problem is covered by your FAQ, improve search. If my problem can be solved by a bot, let me do it in your UI.
This assumes that people who have a question they need answering will consider searching for an answer or getting to grips with the UI first, rather than heading straight to ask customer support. That's often not the case. Long before support chatbots became widespread, telephone trees made a point of telling every single user in the queue the help section of their website existed...
Agree bots should often make it easier to escalate and should default to escalation rather than blowing up[1], but if a large proportion of your queries are actually answered by telling people the FAQ page is a thing, it's a bit difficult to justify paying humans to do it.
[1] If escalation takes a couple of minutes for a response, escalation and blowing up might be indistinguishable...
Yeah, I can totally agree with that under the constraints you mention. I might be conflating my annoying experiences with marketing-type bots with the actual requirements for a support one. There are certainly dark patterns in how bots are implemented and integrated these days, but if they become more widespread I guess we'll figure out how to tackle those as well.
If it could, I wouldn't mind. It didn't, and then went into a loop allowing me to "escalate" it to a real human which turned out to be the same bot, only with a different name.
I've been assuming that it's a pre-determined initial message, but that a real human in a low-cost country would immediately be involved if I bothered to respond.
What portion of the chat pop up windows do you think are purely bot? Might any of them be purely human?
It varies from build to build. The standard approach is one of 4:
1) The bot initiates the conversation and your initial message gets sent directly to an agent. This is the older model that's in most common use today.
2) The bot initiates contact and based on your responses does some simple keyword matching and delivers help article links where possible or asks for more information IVR style, then when it hits an "I don't know" point or if the agent option is selected, offloads to an agent.
3) This is my favorite style, honestly: The bot initiates the interaction, and does some machine learning backed AI chat, all the while the interaction is monitored by an agent who can take over at any time. Similar to #2, if the bot hits a sticking point, it'll just queue to an agent. This unfortunately is the least common of the implementations.
4) This is the most modern and is becoming the industry leader: Fully AI bot trained against a veritable Everest of chat conversations for that entity/industry, only offloads to a human when you shout "HUMAN" at it enough times or if it gets really stuck and confidence intervals start falling rapidly.
NOTE/DISCLAIMER: I design and implement these systems for a living, and we don't often get much say in the customer-side UX, so I'm sorry if you've gotten stuck with an arguably bad build!
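To make #2 above concrete, here is a minimal sketch of what that style of bot boils down to. The keywords, FAQ links and escalate_to_agent() hook are hypothetical placeholders, not any vendor's actual implementation:

```python
# Sketch of a keyword-matching support bot: serve FAQ links where possible,
# ask for more detail when unsure, and hand off to a human when stuck.

FAQ = {
    "refund": "https://example.com/help/refunds",
    "shipping": "https://example.com/help/shipping",
    "password": "https://example.com/help/account-recovery",
}

def escalate_to_agent(message):
    # In a real build this would queue the conversation for a live agent.
    return "Connecting you to a human agent..."

def respond(message, unknown_count=0):
    text = message.lower()
    if "human" in text or "agent" in text:
        return escalate_to_agent(message)
    for keyword, link in FAQ.items():
        if keyword in text:
            return f"This article might help: {link}"
    if unknown_count >= 2:  # repeated misses: stop guessing and hand off
        return escalate_to_agent(message)
    return "Could you tell me a bit more about the problem?"

print(respond("I never got my refund"))
print(respond("my parcel is lost", unknown_count=2))
```

Real builds obviously layer NLU, session state and analytics on top, but the escalate-when-stuck skeleton tends to be about this simple.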
#3 reminds me of self-serve retail checkout kiosks: when I get stuck, or the kiosk gets stuck, a nearby live human approaches and resolves the issue (except when there's no live human nearby in which case I just get to stand there feeling awkward and useless. it's pretty humbling.)
It really depends on the site. Some go to bots, some to script-followers, some to good dedicated support (shout out to datadog here). Some vary by time of the day.
But yes, some of them are contact boxes for real people.
Or you get a human that assumes the customer they are helping is abusive or problematic based on the prior messages, in which case they may be very unmotivated to help you.
Currently they're fairly easy to spot -- if you work in tech, anyhow. A lot of money will be put into making them harder and harder to spot. I'm glad to see this law come at the beginning of that process, so there isn't an established industry that can put money into fighting it.
> Or, as you did, just say something that puts them off script.
I don't know about that. Does Amazon's support use bots? Or are they just in a remote call center using auto translate to communicate with customers?
Because in general, they appear like bots to me: they will parse for key words and reply to those, ignoring context and subtle differences that make their replies sound weird.
My support experience with Amazon Fulfillment involved several months of going back and forth with what I think were humans whose native language was not English, bound by a script, until I learnt that the shortcut to getting an actual human who is allowed to deviate from a script is sending an email to Bezos.
I doubt it would be better than their email reply bot, which is transparently inept. I wouldn't mind an email bot if it got me what I needed, but it was useless.
I guess you didn't watch the Google I/O presentation earlier this year (which I would guess is probably what prompted this law):
https://www.youtube.com/watch?v=D5VN56jQMWM
I run an IRC bot using Markov chains to generate text. It's a very crude method, so usually it's pure nonsense.
But sometimes it says a bunch of sentences that make sense in a row to a new user, who then thinks it's a human.
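For anyone curious, the technique itself is only a few lines. A minimal word-level sketch of the general idea (not the actual bot described above) looks something like this:

```python
import random
from collections import defaultdict

# Crude word-level Markov chain text generator.

def build_chain(corpus, order=1):
    """Map each `order`-word prefix to the list of words that follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Random-walk the chain to produce a sentence-ish string."""
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the bot says things that sometimes make sense and sometimes do not make sense at all"
print(generate(build_chain(corpus)))
```

Trained on real chat logs instead of a toy corpus, output like this is mostly nonsense, but every so often it strings together something plausible enough to fool a newcomer.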
If individual states each enact individual laws governing the internet, then only large companies will have the resources to follow them.
We'll see a balkanization of the web wherein it's no longer very world wide. Small internet businesses will become harder and harder to start. Big monopolies will become entrenched.
This worst-case web would still support all of the original use-cases of the web, where ordinary folks and organizations were posting ordinary documents that anyone anywhere in the world could view.
It's small-business webapps that have problems. But I would be very glad for those to stop doing so many terribly sketchy things that have become the norm on this consumer-unfriendly internet we're dealing with.
So like I'm super afraid of a "regulated internet", but I'm also super sad by what's happened while it's been unregulated. Businesses, governments, even ICANN have done terrible things to the internet. I'm not very optimistic for any outcome anymore.
Not really. If you have a personal website with analytics (even your own), suddenly you need to display some cookie warning. If you allow comments, all of a sudden you need to follow the GDPR. Understanding it may require more time than you were willing to spend on sharing your ideas with the world. So if you really want to share, you move to some big platform that handles it all for you.
The law is already so complex that even within a single country or state, people who study it need specialisations. Regulators have no grasp of existing laws or of the full extent of the laws they are voting for. It's madness.
A small business that the government or a big company doesn't like can already be killed by a thousand cuts, because it doesn't have an infinite amount of money to spend on lawyers.
If I had a website about programming and it had a forum section, would it be purely personal? What if it is just a comment section? What about when I have ads to cover server costs versus now, when I don't? I closed the website because I don't know. And I don't think I would say I know until I hired a lawyer.
Re the analytics part from another comment: I only care how visitors interact with my website, without sharing that info with anybody. You could avoid cookies if you really wanted to, but then you'd actually collect more data about your visitors.
A law that applies to a large swath of regular, benign behavior of people and makes it illegal is not a good law. If that benign behavior is not targeted and the law is used to good effect, it may be beneficial and useful in the short term, but it is not good, because it gives the state extra leverage against normal people that it can apply unevenly.
A decade from now, the state might target certain sites with forums that say things it doesn't approve of, and use these laws to do so. Is giving the state more tools to circumvent freedoms like freedom of speech on technicalities a good thing? How about, instead of making a law that blankets many benign interactions and relying on the enforcers' opinion to limit enforcement to actual problems, we try to target the actual problem behavior as tightly as possible, and then identify the edges where problem behavior still exists and pass new legislation to apply to that?
> If you have a personal website with analytics (even your own), suddenly you need to display some cookie warning. If you allow comments all of a sudden you need to follow GDPR.
Not having both of those is a good thing, so I see no downside.
First of all, California isn't just any state, it's the largest one. It even has stricter auto emissions standards pushing forwards the rest of the country -- so this isn't so much a dangerous precedent, as a "special California thing" which is historically more protective of the consumer than the federal government. (I personally see this as a positive thing.)
And as long as state laws are limiting abusive behaviors, I don't see what the problem is -- then we all follow the new "floor" and the whole country benefits. A small start-up can choose not to lie about a bot being a person just as easily as a tech giant can.
The problem is if local laws conflict with each other or impose a significant burden. But then the federal government generally steps in quickly and shuts things down thanks to the interstate commerce clause.
So I don't think there's anything to worry about here.
If you know how to create a bot, then you know how to make it tell you that it is a bot.
I accept the general premise of your point, but this particular law isn't asking anything onerous. The societal benefits that come from limiting the ways in which political and commercial bots can deceive the public are just too great to claim this is a bad thing.
1) There is already balkanization based on countries. Does California change that situation so much? Enough that it shouldn't perform its role in the U.S. as a laboratory for legislation (a role of all states)?
2) Commerce and communication restrictions should not be considered without also considering the purpose and benefit the legislation intends to cause. First and foremost is whether it actually helps more than it harms. For example, it could have been (and I'm sure was) argued that abolishing slavery was problematic from the standpoint of groups of people and businesses interacting with those states and what it meant to use slaves in them or take slaves to them. The point is not to equate this issue with the abolitionist movement, but to point out that how the law affects interactions between states may have very little (or very great) bearing on whether it should be ratified, and it depends entirely on the legislation in question. Great leaps in both positive and negative directions both cause friction between (nation)states, so friction itself is a poor indicator of whether legislation is good or bad.
Please keep in mind that this is not California's first internet specific law. It has already passed laws about phishing, sexual exploitation, cyberbullying etc. See https://oag.ca.gov/privacy/privacy-laws
If you want to sell something in a place, you abide by that place's laws. If you want to sell in a zillion different jurisdictions, you're big enough to comply with each jurisdiction's laws.
If the alternative is to wait for Congress to pass a law, then the precedent is long since set. Congress sets the bar for passing new legislation very high (two separate houses, one of which has a 41% veto, plus the chief executive).
The "laboratory of the states" thing is a legal nightmare in a networked world, but it's inevitable under the current organization. Well-crafted nation rules would be better, but a lot of people seem to want it this way.
For generations people who wanted to do business in another jurisdiction had to follow the law of that place, even if it meant physically traveling there or opening a store.
Internet businesses would still have things much easier, as a change in software can be done by someone working from home or in Chennai or what have you- without any capital expenditure.
GDPR is absolutely insane. I was working on a European website where a user would upload product info along with a photo of the product. I had to abandon it, because I'd have to somehow filter all uploaded content for copyright and other violations, as I'd be liable under this new law.
That will be a fun one to interpret. So if I use autocorrect, am I a bot? What if the device makes next word suggestions and I use them? What if the device suggests the entire post, but I manually approve it? What if it suggests five separate posts and I approve them all at once?
I'm not sure why you're being so facetious. In fact, I wonder why HN always has comments like these.
Lawmakers are purposefully vague because judges can decipher what the spirit of the law is and fine corporations or condone specific use cases when they are brought up in court. You can't go into court to challenge a law with hypothetical cases for a good reason. Do you want lawmakers to arbitrarily impose constraints like only 10% of non article words can be suggested per message composition or only 2 posts per minute are allowed?
It is a fact in life that technology changes and improves things beyond what we could have foreseen in just a few years. The degree of flexibility built into these laws is a huge plus. Not a flaw.
If you're interested in actually shipping, a more helpful approach is to choose the simplest behavior to implement, notify the requirements person that you've done that, and offer to create a release-blocking bug for the issue if that is unacceptable.
Chances are, if the requirements person doesn't have an opinion on what the copy for the dialog should be if the customer is 65+ and it's a Tuesday in a month with 31 days, then it's because that choice doesn't really matter all that much.
You're not answering the question, you're just asserting someone will figure the answer out later and this is totally fine, even though it may not be fine for the person who thought they were doing things right and then suddenly discover they weren't.
Vagueness in law isn't a good thing. It leads to people randomly losing everything, even if they made a good faith attempt to follow the "spirit" of the law (whatever that actually is), for no better reason than lazy or incompetent law making. After all, lawmakers can easily update or change laws to reflect changing circumstances - they just prefer not to because regulating entirely new areas of life makes them feel better than the relatively boring work of updating existing laws.
> Lawmakers are purposefully vague because judges can decipher what the spirit of the law is and fine corporations or condone specific use cases when they are brought up in court.
That leads to the judicial branch actually making the law, and to there being no consistent set of rules, since outcomes depend on who the judge is.
Precedent is foundational to western law. The first judge to make a decision based on a law sets a precedent that informs how that law is interpreted in the future. This is a feature, not a bug because real world situations are messy, complicated, and dynamic. Trying to enumerate every legal interpretation and eventuality based on today's conditions and technology results in a law that won't be meaningful 5 years from now.
> Precedent is foundational to western law. The first judge to make a decision based on a law sets a precedent that informs how that law is interpreted in the future. This is a feature, not a bug because real world situations are messy, complicated, and dynamic.
The problem is then nobody actually knows what the law is until after the judge decides it, at which point they're essentially creating new rules ex post facto and applying them to past conduct. It's manifestly unreasonable to apply a rule that wasn't known until five minutes ago to actions that took place last year.
> Trying to enumerate every legal interpretation and eventuality based on today's conditions and technology results in a law that won't be meaningful 5 years from now.
Which means you may have to pass a new law in five years -- that's not a bug. For that matter, if you expect things to change significantly then you may want to make the current rules expire in five years automatically, or hold off legislating anything at all until you see how things shake out on their own.
You really have that much faith in the US justice system that you don’t believe that partisan judges make judgements all of the time based on their belief system?
Do you want to build a business based on the whims of a judge when you thought that you were following the law?
"This doing of something about disputes, this doing of it reasonably, is the business of the law. And the people who have the doing of it in charge, whether they be judges or sheriffs or clerks or jailers or lawyers, are officials of the law. What these officials do about disputes is, to my mind, the law itself.
...
And rules, in all of this, are important to you so far as they help you see or predict what judges will do or so far as they help you to get judges to do something. That is their importance. That is all their importance, except as pretty playthings."
“Lawmakers are purposefully vague.” Yeah, that’s a bad thing, not a good thing. It really means almost everything breaks the law, and then they selectively and inconsistently enforce it against actors they don’t like.
I also think it’s an issue, as this law is set up to start a cat and mouse game, where precedents are slowly established while bad-faith actors find other workarounds and run with them until new rulings are set, then rinse and repeat.
When it comes to spam or ads, iterating workarounds is faster than bringing cases to court, so the traditional approach is problematic.
It is when the substitution is both context-aware and not what you intended to write.
> 2,3,4) A judge could reasonably find that you were attempting to circumvent the law and declare all of these as “bot”.
Wait, so if someone sends you a question and the suggestions can detect from the context your answer, you're a bot because you chose the suggestion instead of typing out words with the same meaning? Then aren't most people texting going to have to declare themselves bots?
> The judicial system will not specify in writing complete coverage for every loophole. Judges can, regardless, find you guilty.
"Judges will decide something" is no help to you when you're trying to predict what they will decide ahead of time. Finding out after the fact does a fat lot of good after you've already engaged in the behavior in question and an unfavorable ruling puts you in jail.
The law clearly targets automated content creation that is not declared as such, not assistive writing technologies, and this will be considered by the judicial system when evaluating your stated intentions and actual actions. If you are unable to predict with confidence the outcome of your intentions and actions as they may be interpreted by the judicial system, please seek legal counsel for further guidance.
> The law clearly targets automated content creation that is not declared as such, not assistive writing technologies
How are those two different things? In each case it's a machine generating and suggesting things that you may want to write. Presumably in the second case the suggestions would have to be more sophisticated in order to be coherent most of the time, but that still doesn't really give you any useful criteria to distinguish them. We're already at the point that phones have context-aware word suggestions. There isn't really a principled line to draw there at the point where the suggestions get good enough to constitute the entire message. It already happens sometimes.
They are different by your intended use of the tool and whether the work is judged to be authored by you or by the tool, not by some specific aspect of technology or implementation.
Do you intend to prepare your thoughts as written word, and you use technology to write those thoughts rapidly? Then that’s probably fine.
Do you intend to prepare written works written by algorithm, software, or technology, to a degree that the work can no longer be reasonably considered the creative output of a tool-assisted human and is now instead the creative output of a human-assisted tool? Then that’s probably not fine.
If you want another way to look at this problem, imagine that our society grants algorithms copyright over the works they produce with our assistance, while granting us copyright of the works we produce with the assistance of algorithms, and that the law demands all algorithms be credited (CC-AT) when their copyrighted works are republished by humans. Copyright law has significant experience studying the problems of entangled and commingled ownership of works, but it’s too soon for US society to grant copyright to algorithms over their works, and so this law is all we get today.
You're still not providing any meaningful distinction between the two. How do you actually distinguish between a tool-assisted human and a human-assisted tool? What's the test and where is that written in the legislation?
> If you want another way to look at this problem, imagine that our society grants algorithms copyright over the works they produce with our assistance, while granting us copyright of the works we produce with the assistance of algorithms, and that the law demands all algorithms be credited (CC-AT) when their copyrighted works are republished by humans.
That's just restating the question, not answering it. And the hairy mess used for copyright is not a very promising thing to aspire to.
> How do you actually distinguish between a tool-assisted human and a human-assisted tool? What's the test and where is that written in the legislation?
That will probably be distinguished by a judge looking at all the facts that apply to a specific case and making a decision. Details such as these are the reason why there's a justice system with actual humans in it, and not just some software bot calling the shots by following if-then-else statements written in law documents.
They are of different colour[0]. Sounds like the law is aiming at that distinction.
Whether or not a piece of computer-generated content was "automated content" vs. "assistive writing" might entirely depend on the answer to the question "why was this piece of writing created?".
Believing so is a common misconception amongst engineers, but depending on it as such is likely to lead to disappointment, frustration, anger, needless bickering, extended conflict, and vexatiously long, hard to read, and mostly unenforceable contracts.
Looking back at half a century of horrendous bug-ridden code, and you still say this?
I mean, people have tried! Ethereum created a system of contracts implemented in a programming language. Know what it led to? People losing huge amounts of money after someone found a bug in a contract and exploited it. And after that, the money was gone. The hacker had followed the contract as written, and the money was theirs now.
Yup. And it led to the split of Ethereum into two chains, one followed by people who believed that code should be law, and the other by people who believed code should be law only when it works in their favour.
Which can be considered a concession of failure for the idea of "law-as-code", because apparently when the going gets tough, that concept needs to fall back to good old "law-by-humans" in order to continue being relevant and accepted by people. As a system that should not ever be in need of any fallback, that spells fundamental defeat.
We create these systems to be of use to humans. When they aren't of use to humans, that's a bug, and is viewed as something to fix, so we convene humans to provide a fix, whether for a specific case or to the system overall.
Ultimately, the only way that situation doesn't play out is if the system is designed perfectly not just for current use but for all future uses, or humans are removed entirely from the equation. Since the former is impossible, and the latter means the system is either irrelevant or we're all dead and gone, we might as well accept human intervention as inevitable.
It really shouldn't. It will never be able to accommodate every possible case. There is a reason why stories about drones going around and enforcing the law like a programming language are considered a dystopian setting.
This is exactly the same as legalizing every loophole and abuse of wording.
Above all, if the law were code, who would decide the input? Unless every conversation and record is already in the Law-Bot, huge power is handed to whoever controls the "formatting" of the evidence.
You could pick apart basically the entire body of US law like this if you wanted to. As other people have mentioned, the legal system just doesn't work like this - context and intent are taken into account, and judges and lawyers are not mindless robots reading a script.
That's the excuse used by everyone in support of ambiguous legislation. "All we need is a law that says bad people go to jail and judges are smart people who can figure out what that means based on context and intent."
Leaving predictable ambiguities to be resolved by the subjective whims of the judiciary is not the rule of law, and the fact that it regularly happens doesn't change that or make it right.
Your complaint seems to boil down to "if the law is so imprecise and open to interpretation, how am I supposed to game the system?". Well, this may be a surprise to some people here but the law is not intended as a system to be gamed.
Loopholes are just day 0 exploits of the legal system.
It's just the opposite. If the law isn't nailed down then people with fancy lawyers can argue it means whatever they want, jurisdiction shop to get it in front of a favorable judge, etc.
Making the law clear and correct is the only way to prevent the ambiguities from being construed in favor of whoever has the most money to spend litigating it.
Reminds me of those election SMS messages. /Technically/ a human presses send on every single message so it doesn't meet the definition of SMS spam, but for all practical purposes it is.
I think this is generally how election texting campaigns work, based on the human interaction that was required when I once tried it, due to laws against robocampaigns: a real human has to press a button for each text sent, and then receive any replies.
Online, similarly, one post would probably be allowed for each individual human approval, unless you add the bot disclaimer.
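For what it's worth, that "human presses send on every message" model is mechanically trivial. Here's a minimal sketch of it; send_sms is a hypothetical stand-in for whatever gateway a real campaign tool would use, and the numbers and script are made up:

    # Human-in-the-loop texting: every outgoing message requires an
    # explicit keypress from a real person, as described above.

    def send_sms(number: str, text: str) -> None:
        # Hypothetical stand-in for a real SMS gateway call.
        print(f"-> sent to {number}: {text}")

    queue = [
        ("+1-555-0100", "Hi, this is Sam, a volunteer. Are you planning to vote Tuesday?"),
        ("+1-555-0101", "Hi, this is Sam, a volunteer. Are you planning to vote Tuesday?"),
    ]

    for number, text in queue:
        input(f"Press Enter to send the next message to {number}... ")  # the human step
        send_sms(number, text)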
Am I missing something, or wouldn't you need one real human per bot for rubberstamping? I'm sure this is targeting bots pretending to be humans that do not exist. If you want to be personally responsible, with your real face and name, for everything your singular bot writes, that would still solve nearly all current bot problems.
I think you completely missed my point. I'm not talking about practical ability, but liability.
Who would those bots be? wccrawford1, wccrawford2, wccrawford3? Those would not be pretending to be anything other than bots or clone accounts.
If, on the other hand, your bots were "Brock Samson", "Joey America", "Chip Dipsby", etc., those would clearly be non-humans pretending to be humans, regardless of you tripping the first or last domino piece.
So if every one of my bots was named Brock Samson, even though that's not my name, that would be okay?
If I had live people answering, but their names were randomized, would that not be okay?
I honestly don't see any real difference between those naming schemes. Yes, some people will be less likely to think they're bots if they're all named differently. But the vast majority of people will not notice if they get contacted by several different employees/bots all named Brock Samson. Most people won't even talk to more than one employee in a short timeframe, so they'd never have a chance to notice a difference.
Bots are going to be the way we interact with the web (and really all systems) going forward; this notion of 'real people' using just 'browsers' is quite a misunderstanding of what a 'user-agent' really means in this day and age.
If I launch a new tab in the background and tell it to go establish some set of factors for me, or locate price points and details for me, or buy something for me (and, right now, as me)... or just let me browse and interactively direct it, but have it block ads as I go.
I know the law, and lawmakers, are looking at this from a fraudulent-content perspective, but they are going to be hard pressed to do anything in the long run to quell this.
> Violators could face fines under state statutes related to unfair competition.
I doubt that anyone running bots, and who is technically competent, will be identifiable or findable. I mean, I could do it, and I'm just a random anonymous coward.
I think part of the goal of this law is to make this be true:
> I doubt that anyone running bots, and who is technically competent, will be identifiable or findable.
Telemarketing, for example, was done a great deal by perfectly legal, traceable businesses. Once it was made illegal, it was forced underground, and volume dropped immensely.
> I could do it, and I'm just a random anonymous coward.
Could you? Hiding the flow of significant amounts of money is actually quite hard. Robot salesmen masquerading as humans would be a plague, and I think this law should keep that from becoming a legitimate business technique.
Meanwhile, back in the real world, companies that were quite happy to pay for robocalls and social media bots to promote their product are not interested in paying anonymous people anonymously in Bitcoin for campaigns that might get them into trouble, no matter how proficient the bot developers might be at laundering their crypto earnings. That'll be the law working as intended.
Bitcoin might work for small-scale crime, but as demonstrated by everything from Ross Ulbricht to the recent arrest of people for the Bitfinex hack [1], it doesn't work reliably for large-scale crime. And it especially won't work for businesses trying to be legitimate, which are the main target of this law.
People get busted because they're sloppy. DPR, for example. From what I've read, he made at least five major OPSEC errors.
- Back when SR1 was starting, he got a visit from the FBI about fake ID that he had ordered, shipped to his actual address in SF. And he basically admitted that he bought them from SR1.
- He posted to at least two sites about SR1, using accounts linked to his real name.
- Logs in a SR1 server pointed to an IPv4 address that he used in SF.
- Apache in a SR1 server was misconfigured, such that errors were accessible via clearnet, instead of via Tor onion.
- He worked in public with a FDE laptop, which contained everything about SR1. Including IDs for all staff. And he didn't take steps to enable emergency shutdown.
And about Bitcoin. One of my favorite mixing services, Bitcoin Fog, has handled huge amounts of Bitcoin, from various thefts. And nothing has ever been traced, to my knowledge.
Finally, where do you get "businesses trying to be legitimate"? Maybe their customers are legitimate, but why would you say that about the bot providers? They could just have a credible cover operation.
Sure. If your theory is that you can do crime perfectly, feel free to test it. That's a fantasy a lot of people have, but empirical evidence suggests that most of those people were wrong. And regardless, the ability to do anything perfectly gets harder as scale and scope increase, so I'm still comfortable saying that Bitcoin is best suited for small-scale crime.
As to the other bit: Bot providers need to be hired by somebody. If those people are legitimate businesses selling products or services, they will be traceable in the usual ways. It's those legitimate businesses that are of primary concern to lawmakers in this legislation and in most business regulation.
The Do Not Call list made a huge difference in telemarketing calls for many years. Robocalls have of course gotten worse lately, but those are all fly-by-night businesses and outright scams. The overall pattern is good evidence that non-criminal businesses will in fact respect laws like this.
I wasn't thinking it through. With the law, investigators can devote resources to the issue. And still, I'm dubious about the prospects for nontrivial prosecutions.
Depends. At some point, you're creating fake humans that are so deep under cover that nobody can tell them apart from real humans. When you achieve that, are they still bots?
I've also often thought that the cheap bots are easily identifiable (tweeting every 3 minutes, 24/7/365) and that a better bot shouldn't be that hard to build. But then again, I once pegged somebody as a bot who actually wasn't; he was just very invested in the topic and had plenty of time on his hands.
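The naive version of that cadence check really is only a few lines. A rough sketch, with made-up timestamps and an arbitrary jitter threshold (real detection would obviously need many more signals than posting rhythm):

    # Rough cadence heuristic: an account that posts around the clock at
    # near-constant intervals looks more like a scheduler than a person.
    from statistics import pstdev

    def looks_scheduled(post_times: list, max_jitter_s: float = 30.0) -> bool:
        """post_times: Unix timestamps of an account's posts, oldest first."""
        if len(post_times) < 10:
            return False  # not enough data to say anything
        gaps = [b - a for a, b in zip(post_times, post_times[1:])]
        return pstdev(gaps) < max_jitter_s  # suspiciously regular spacing

    # Example: one post every 180 seconds, give or take a couple of seconds.
    bot_like = [i * 180 + (i % 3) for i in range(50)]
    print(looks_scheduled(bot_like))  # True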
> I doubt that anyone running bots, and who is technically competent, will be identifiable or findable
Google's Duplex is an example of this, and I agree with the law that its use should be disclosed.
The law doesn't have to lead to prosecution of every little criminal. If it polices some of the large players who can't easily hide what they're doing, it'll be a helpful law.
Well, I never said that I would, just that I could. Just as a way of saying that anyone competent could.
Also, "breaking the law" is an ambiguous thing. I mean, that's one of my favorite Judas Priest cuts, and he was talking about breaking laws against homosexuality. Not that most head-bangers realized it, at the time.
Also, just about every US media company breaks Saudi laws against sinful use of sexual images. And nobody seems to worry much about it.
I feel the same way. It seems like this is likely to catch playful bots and not truly nefarious bots, because in the latter case the owner / author will sufficiently cover their tracks (and will probably not be in the state of California?)
I'm excited to see legislation in this direction but I wish they'd focus on forcing Twitter, Facebook, etc. (which /are/ in California and can be governed) to display / disclose when they are aware a user is likely a bot, and employ some half-decent detection methods.
Yes, that's the obvious strategy. They're best placed to test for bots, and arguably they're responsible. And yes, I know, safe-harbor is a thing. But it doesn't protect from allowing illegal ads, so why should it protect from allowing illegal bots?
Of course, this all raises the question of whether there are only unethical use cases for bots pretending to be human, or whether a law like this could hit benign uses for bots as well.
For instance, ARGs could have bot accounts for fictional characters on social media sites. These accounts could give pre-recorded messages that then hint that the user should visit some third-party site for more clues or information. Is that legally dubious? I can see it being so under this law, but I don't think it's comparable to a business running, say, an automated chat support system and pretending its bots are human.
The same goes for roleplaying bots on online community sites. These aren't a huge thing right now, but they could be in the future, with accounts that act like NPCs do in video games or interact with the player's account in side quests or whatnot. These don't seem like they'd be morally 'wrong' things to have on a site, but they'd probably get hit by this law regardless.
Point is, these types of bots don't necessarily only have dodgy use cases.
In these gaming use cases, the bots can be appropriately disclosed without getting in the way of the game. A couple of decades ago there was an ARG about government conspiracies which called your house to give you clues from in-game characters, and it started with a fourth-wall-breaking preamble so that if someone not playing answered the phone they wouldn't get worried. The game was still fun. IIRC you could go into the settings and disable the preamble, which would be a fine place in the UX to disclose the bots and capture the user's agreement.
Unfortunately for the game (and the world), 9/11 happened a few months after launch and due to the theme of the game it was shut down. Now it's just an interesting bit of gaming history!
I can say that people like humans way better than bots. We have an app that looks like a bot but is a real person, and we say so at the beginning, but people don't believe it. Once they figure it out, they go crazy (it's always local people, by the way). So I think at some point bots should at least explain what they are instead of pretending to be human.
Welcome to the Fifth Annual Californian Turing Test Chatbot Hackathon!
This time we've had to introduce some changes to abide by new state legislation.
Messages entered into the chat console must be followed immediately by the string " [I am a bot]", whether you are a bot or a human, but especially if you are a human.
New addition to every EULA: "XYZ Corp is a member of the Rand-Turing coalition, an industry cooperative dedicated to the philosophical belief that all human beings are robots. Based on this, every interaction you have with us will be an interaction with a robot. By signing this license agreement you agree that all members of our corporation from the lowest janitor up to the CEO, are all robots."
What about customer support or sales agents who are operating solely off a script? I was contacted by a sales agent of a "lead generation" company who was being impersonated by her offshore workers. Eventually, during our email exchange, the real person jumped in (but did not announce that the prior correspondence had been with someone in an offshore sales center). They use these agents to pretend to be their clients, and based on the not-great experience I had when I was initially contacted (the email was not well written and said things about my company that weren't quite right), I would not use them.
These folks are essentially like bots, insofar as they are "programmed" to respond and significantly constrained in their latitude. They're like human bots, no?
“Bots are not people” - ok, but if they’re written by people for a specific intent meant as speech, is that no longer protected speech? If I program a bot to chant slogans on Twitter, isn’t that something I’d have the right to do?
I’d argue yes, but where they may have an argument is where I respond “no I am human” if people ask me about being a bot - intentionally misleading.
They may have more luck here in the commercial space, where they can better regulate and enforce these rules, as with advertising and other sales practices. Not sure where this goes in politics or other domains in terms of enforceability.
Does anyone have any insight into whether this would affect market trading algorithms and bots?
The article says the law is "requiring that they reveal their “artificial identity” when they are used to sell a product", but I'm not sure how broad of a definition they want for the word "selling".
> Human beings have human rights to express themselves however they wish.
No. Commercial speech usually has disclosure requirements, even in the US (see the Zauderer case), which humans have to follow. This is just another case of compelled commercial speech.
Meanwhile, millions of Facebook bot accounts will keep right on going, because none of the people running them are in California and Facebook has no incentive to actively police them.
Unless companies start asking all human employees to start claiming that they're bots so as to subvert the new rule... there's not a law against that yet.
I don't think this spells the end for Dr. Eliza. From what I understand, it only has to be made known that it is a bot, and the fact that Dr. Eliza is very clearly a bot from the get-go makes it completely legit. I do, however, wonder about Ashley Madison, since it came out that a lot of the women on the site were just bots. Yes, the bots are only there to talk to the men, but the company put them there to entice the men to upgrade their membership and pay, so it seems to me they would fall under this law.
No, much the way Stephen Hawking's voice was not a "bot", nor would someone playing a song on a digital keyboard be considered a "music bot". In both cases, there is no meaningful automation -- just a digital instrument creating sounds out of a human's intent.
It's the same content. Different people with different voices. You could have it sound like you have your local folks giving you info when instead it's from some centralized location.
I could control a message without it being a single guy saying it all.
IANAL, but AFAIK no machine living in or interacting with someone in California can pass the Turing Test without breaking the law.
A simple question like 'Who should I vote for?' would cause the machine to either answer with the compliant 'Please note, I am not a human being...' or with some illegal comment about the democratic process.
Maybe that law requires an additional paragraph, stating that humans participating in a Turing Test should also identify themselves as bots ;-)
I really hate this stuff. The article starts out with a paragraph of complete nonsense:
"When you ask experts how bots influence politics—that is, what specifically these bits of computer code that purport to be human can accomplish during an election—they will give you a list: bots can smear the opposition through personal attacks; they can exaggerate voters’ fears and anger by repeating short simple slogans; they can overstate popularity; they can derail conversations and draw attention to symbolic and ultimately meaningless ideas; they can spread false narratives."
Since when can bots "smear the opposition through personal attacks"? Bots that post the same stuff written by humans over and over have existed for years and are easily filtered out by spam filters - bulk spam doesn't change people's politics anyway so in practice such bots are always advertising commercial products. Bots that constantly invent new ways to smear the opposition don't yet exist, not even in the lab.
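To make that concrete: the crudest version of such a filter is just a normalized-text fingerprint with a repeat threshold. A sketch (the threshold is arbitrary, and real platforms use far more signals than this):

    # Crude duplicate-content filter: normalize the text and refuse to accept
    # the same message more than a handful of times.
    import hashlib
    from collections import Counter

    seen = Counter()
    MAX_REPEATS = 3  # arbitrary threshold for illustration

    def is_bulk_spam(message: str) -> bool:
        normalized = " ".join(message.lower().split())  # collapse case and whitespace
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        seen[digest] += 1
        return seen[digest] > MAX_REPEATS

    for _ in range(10):
        print(is_bulk_spam("Candidate X is a CROOK! Retweet if you agree!"))
    # False for the first three copies, True for every one after that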
This whole story is asserting that there are programs routinely running around the internet indistinguishable from humans, making points so excellent they successfully persuade people to switch their political affiliation, which is simply false.
In the article the word "experts" is a hyperlink. I was very curious what kind of bot expert might believe these fantasies. To my total lack of surprise the link goes to a single "expert" who in fact knows nothing about AI, bots or technology in general - they're a political flak who worked for the Obama campaign and studied a PhD in "communication".
This sort of manipulative deception is exactly why so many people no longer trust the media. The New Yorker runs an article that starts by asserting a fantasy as expert-supported fact, and then cites a member of the Obama campaign who went into social science academia (i.e. a field that systematically 'discovers' things that are false), and who has no tech background or indeed any evidence of their thesis whatsoever.
My experience has been that actual experts in bots are never approached for this sort of story.
> This whole story is asserting that there are programs routinely running around the internet indistinguishable from humans, making points so excellent they successfully persuade people to switch their political affiliation, which is simply false.
The theory isn't that bots are artificial general intelligences trying to convince individuals with clever intellectual debate. The theory is bots try to move the Overton Window [1] - to change what the average person thinks the average person thinks - by making certain opinions/arguments appear more prominent by repetition.
A bot doesn't need to be an AGI - or even capable of responding to replies to its own posts. All it needs to do is keep 100 accounts in good standing with reposts and low-effort comments, then every hundred posts or so a human operator jumps on to make a driveby comment like "LOL give it up Mickey Mouse is never going out of copyright" or "LOL we get it you vape" or "LOL it's the government, what did you expect?" or "LOL like America hasn't done the same thing but much worse" in an appropriate thread.
Firstly, the theory in question is so vague it's hard to say what they are really claiming.
But secondly and more importantly, even if what you say is true, the theory is still total nonsense!
Where is the evidence for any of this? Where are the networks of bots that were caught spamming low-intelligence identically worded political comments, yet somehow can't be caught by normal spam filters? Where is the testimony of millions of people who decide how to vote by counting duplicate tweets?
This entire theory is literally a conspiracy theory. Like all conspiracy theories, when basic questions are asked it suddenly shapeshifts and starts to claim something different but still wrong.
I don't believe any such bots exist: can anyone show me the evidence that they do? I mean real, first-hand evidence, not assertions of dubious self-proclaimed experts with an agenda.
I can for sure tell you that real humans are routinely labelled as "bots" by people who believe in this conspiracy theory, and can cite evidence:
> That may have been the end of it, but then Ian took an invitation to appear on Sky News. The news anchors began by asking the man, who appeared on video remotely, whether he was truly a "Russian bot." "That is 100 percent a total lie and complete fabrication by the UK government," Ian said, with a British accent.
Here's a related case. In fairness, this time it's about "Russian trolls" not "Russian bots", although I've noticed people tend to use the terms interchangeably:
3. Any time any detail or basic question about this theory is raised, this is exactly what happens - someone pops up saying nobody is claiming the bots are genuinely artificially intelligent, or the claims are changed in other subtle ways. But yes, that's exactly what this very article is claiming:
"The first bots, short for chatbots, couldn’t hide their artificiality. When they were invented, back in the nineteen-sixties, they weren’t capable of manipulating their users. Most bot creators worked in university labs and didn’t conjure these programs to exploit the public. Today’s bots have been designed to achieve specific goals by appearing human and blending into the cacophony of online voices"
The justification for this law is literally that people think AGI has been achieved and is "manipulating" voters by spreading "false narratives". But it's not true, is it?
The US public consultation on net neutrality was famously skewed by bots (posting identical messages, sometimes on behalf of dead people). On Reddit, some topics attract bot-like behavior as well (groups of users from a non-US IP block voting and posting in concert). Making it illegal won't stop this entirely, but it will stifle it.
To elaborate a bit on that note: my guess is that California's plan to enforce this law simply amounts to suing Twitter/Facebook/TechCo if the Attorney General concludes that Twitter/Facebook/TechCo's automated bot filters failed to properly label something (e.g. from Russia) a bot. And society is supposed to rely upon the Attorney General of California to be impartial, commercially and politically neutral.