Does it count if there's no user interface? Amazon has my email address; I've been a customer since the late nineties. They keep inventing new email lists and signing me up for them. Each time I get a new "newsletter" it says something like, "You got this message because you're subscribed to the 'Tablet News' newsletter." I click the unsubscribe link to remove myself from it. Along with the unsubscribe link there's a link to my subscriptions. When I go there, it only shows the ones I want (the specific authors I'm following). I want to unsubscribe not only from this latest list you just signed me up for, but also from all future lists you may want to sign me up for. I really don't want to get any unsolicited marketing email from you. Really. I don't want it. Please let me out.
Also, stop letting marketplace sellers email me begging for feedback after every marketplace item I accidentally order. I try my best not to order marketplace seller items anymore, but when I accidentally do (or buy a gift for someone that is only offered this way) I always end up getting emails from these guys. Are you sharing my email address with them? Does unsubscribing or responding to them share my email address with them? I have no idea. There is never anything useful in them, and it's impossible to unsubscribe from all past and future marketplace emails, which is really annoying. Come on, Amazon, I really want to love you and keep shopping there, but it's getting to the point that I'd rather go to Wal-Mart! (OK, not really.)
In general, I'm disappointed that Amazon went from being a store to a "marketplace". Sometimes I want to buy from somebody who'll actually curate their list of products and has a reputation they don't want to sully by selling garbage or price-gouging on items they don't normally stock.
I buy a lot of used, old, obscure things (mostly books). Very often, Amazon is the only place I can reliably find them; my other options would probably be asking my Japanese/German/etc. friends to dig through second-hand shops and mail things to me.
I love the Amazon marketplace. (I also hate the 3rd party seller feedback emails but I have strict email filters so who cares)
The marketplace works out very poorly if you don't have Amazon in your country. The info about shipping abroad is deeply buried, and Amazon IIRC mandates that purchases can't be combined: if I buy two $0.01 books, FROM THE SAME SELLER, they will be shipped separately at $4-6 shipping & handling each.
It's not that I think the Amazon Marketplace is a bad idea; it's the way Amazon conflates the Store and the Marketplace that's the problem.
eBay and AliExpress do what Amazon Marketplace does better than Amazon Marketplace, but Amazon wins because they've leveraged their success as a store for it.
If you are interested in books from the German-speaking region you should also take a look at zvab.com (now owned by Amazon) which is the largest marketplace for professional used book sellers.
For what it's worth, marketplace sellers don't have access to your email address, Amazon relays email messages through their servers.
I am a seller on Amazon. I didn't think I had access to email addresses, but you got me curious. I just went into an order and clicked "contact buyer." It gives a contact form that shows the receiver's address as something like dq22t5nz9n27qma@marketplace.amazon.com with a note: "IMPORTANT NOTICE: When you submit this form, Amazon will replace your email address with one provided by Amazon in order to protect your identity, and forward the message on your behalf. Amazon will retain copies of all e-mails sent and received using this service, including the message you submit below, and may review these messages as necessary to resolve disputes. By using this service, you consent to this action."
Personally, I don't contact my buyers at all, ever, unless it's a reply to a question they asked me.
On the email front, I've been getting bizarre emails from Trulia about 1-2 times a month for the last six months. I don't open them, but the subject is "1 new rental available in $(my town)." I own a house, and I don't remember ever giving Trulia my email address, even when I was apartment hunting many years ago. This only started six months ago. I wonder how I got on that list?
Going forward, you can append "+whatever" to the username portion of your email address (e.g. you+spam@gmail.com), and Gmail and most other providers I've used will ignore that part. Use it to trace the sharing of that address. I did that when signing up for a particular mailing list and found that my address was shared with about a dozen other marketers.
People have learned that you can simply strip that part out of the address... it's in the standard. If you want to know where your data leaked out, you should use a different email every time (e.g. use your own domains).
> People have learned that you can simply strip that part out of the address... it's in the standard.
I'll take "Common RFC 821/2821/5321 myths" for $300, please.
RFC 5322 and RFC 5321 are very emphatic that the local part has no semantics whatsoever except those given to it by the receiving MTA. The standards assign no meaning to "+" at all.
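For what it's worth, stripping the tag is trivial for anyone scrubbing a mailing list. A minimal sketch (the function name is made up here, and the Gmail-style "+" convention is the only behavior assumed):

```python
def strip_plus_tag(address: str) -> str:
    """Collapse a plus-tagged address back to its base form.

    This is only a heuristic: as the RFCs say, the local part has no
    standard semantics, so "+" tagging is merely a convention that
    Gmail and some other providers happen to honor.
    """
    local, sep, domain = address.partition("@")
    if not sep:
        return address  # not something we can parse; leave untouched
    return local.split("+", 1)[0] + "@" + domain

print(strip_plus_tag("you+spam@gmail.com"))        # you@gmail.com
print(strip_plus_tag("h9jle20gavs32@domain.tld"))  # unchanged
```

Note that random per-site aliases pass through this normalization unchanged, which is exactly the point being made upthread.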
Using Postfix's virtual aliases and a nice web interface to manage them, I create a random virtual alias (h9jle20gavs32@domain.tld) associated with a site name for every place I have to enter an email address; it makes spam tracking pretty simple.
I'm still surprised that so few people know about www.spamgourmet.com; it's been around for 15+ years and offers the exact feature you're talking about. You don't even have to create the alias; it will be created on first use.
Yeah, I’ve always figured as much. I use Yahoo’s disposable emails that don’t feature a “+” in them. There’s no way the sender can tell they’re sending to an alias.
This comes up every time, and it's just too much of a pain in the ass for basically no reward. Do you think spammers are too stupid to strip out the + part themselves? It's not worth the effort to even open the emails, so having the + doesn't personally help me. I'd have to worry about having a different email address for every service I use? No thanks.
> Do you think spammers are too stupid to strip out the + part themselves?
No, but they are only targeting gullible people anyway so they don't bother:
> Finally, this approach suggests an answer to the question "Why Do Nigerian Scammers Say They Are From Nigeria?" Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor.
Not just Amazon, I am constantly clicking through those unsubscribe links but just get auto-signed up for new lists. Fuck you Amazon and anyone else that does this. I'll buy your junk when and if I need to. Leave me alone and stop wasting my life.
Never click a spammer's "unsubscribe" links -- that just tells them your address is live. If you use a big email provider like Gmail, flag them as spam. Otherwise, make a rule in your mail client to automatically file anything from their domain to the trash.
I've had good success with unsubscribing from legitimate mailing lists using the links at the bottom of an email. It's not the company's fault if I unthinkingly subscribed to their newsletters - they shouldn't have all their emails forever tagged as spam for my mistake.
Real spam however, rarely makes it into my inbox - it gets filtered out and deleted without ever being opened.
> It's not the company's fault if I unthinkingly subscribed to their newsletters - they shouldn't have all their emails forever tagged as spam for my mistake.
It is. Default-checked "subscribe" boxes during sign-up, hidden settings, new lists you are auto-subscribed to; it's a never-ending battle and the incentives are wrong. If there were no unsubscribe link but only the spam button, publishers would be much clearer about these things.
Ironically, the unsubscribe link has probably led to more spam, rather than less.
Half the time, clicking the unsubscribe button leads you to another form that is almost impossible to comprehend, which results in (maybe) unsubscribing you from one list but subscribing you to fourteen other newsletter, update, and special-offer lists, for a net increase in spam.
If the email says "click here to unsubscribe", then it should do exactly what it says. Taking you to a page that allows you to subsequently unsubscribe is not sufficient.
And yes, you should only ever follow links for companies that you are confident are not spamming you out of the blue, because of the danger that you are just confirming your email address is active.
My other gripe is that I'm not sure how anyone is going to tell that I have clicked on a word in an email, given that it's displayed on an xterm. But if they say that by clicking on it I am unsubscribed, it's their problem to make sure it happens.
I don't think spammers care about the difference between an email they have and an email they have that clicked a link. They're going to spam both anyway.
I'm pretty sure they do care. They're renting out a bot-net at some $/message, and only make money when someone actually clicks through to buy their "p3n1s pills," so knowing that a real clicking human is associated with an address matters quite a bit.
But really, the problem got so bad I had to stop using Gmail altogether.
Moved over to mail.yandex.com and now I do not get interrupted anymore in my life.
Thank you, email and IM and "notifications", but no thank you: if I ever receive a notification of any kind, that account will either get the nuclear delete option or be disabled forever.
Time to move to new email providers and new addresses, and to stop pretending an email address is an ID, because it's not; it's a mailbox. When it gets full, create a new one, and only the people you care about, when you care, will get its attention.
Bulk email can be split into two categories: Opt-in and Opt-out. Opt-in is email that an individual requested or agreed to receive. Many legitimate mailers use opt-in methods for marketing. Individuals are responsible for reading and understanding a company's privacy policies and acceptable use policies (if applicable) before submitting an email address. If a privacy or acceptable use policy clearly states that signing up for the service results in receiving marketing or commercial email, then the individual has "opted-in" to receive email and that email is not spam.
A company emailing you when you have given them your permission to email you, but about certain topics you decide you don't like is not spam.
Permission is granted company by company, not topic by topic. I get that "spam" has become throwaway shorthand for "email I'd rather not receive", but that is not the definition used by law and by antispam groups.
I've also experienced this several times. I just went through the wave of shopping emails that come from Black Friday in the US, and found that most of the time I only unsubscribed from the company's "Black Friday" list. It's a shameful way to get around CAN-SPAM.
Companies know there's a risk of unsubscribes with every email they send. If they have several lists, they ought to show all lists you're subscribed to, with an option to unsubscribe from them all. They might actually keep some legitimate subscribers that way.
> Also stop letting marketplace sellers email me begging for feedback after every marketplace item I accidentally order.
I've begun adding 1-star reviews when I get requests begging for feedback. It seems like the only thing I can do to discourage the behavior.
One pattern that I consider "dark", but don't see in this list, is using loaded options on a dialog box. One that I often see in apps is like:
Rate our app!
<OK> <Not Yet>
Those really get under my skin because the developer is clearly trying to play a psychological trick on me, but it's so brazen and obvious that it just pisses me off. And bigger companies do it too (e.g. Google).
A popup which shows this dark pattern. Try it again in a private/incognito tab since it'll only do it the first time. Or don't bother: it's not exactly super exciting.
Honestly, one that only appears when you go to close the tab is a lot less obtrusive. And besides, I don't think being in-your-face really makes something fall into "dark patterns."
Constant nagware until you update is asinine. "Update Now or Later?"... f* you, I don't like what I see in iOS 10, but I'm stuck dancing around daily fucking warnings.
I swear... I'm not bitter :) I also won't buy another iPhone.
Workaround: the iOS 10 update requires 1GB of space. The nag only appears if the update has been auto-downloaded. Use "Manage Storage" to delete the 1GB update file for iOS 10. Then load enough music/video on your iPhone so there is less than 1GB of free space. Presto, no more iOS 10 download (won't fit) and no more nagging.
First of all there's a big difference in how Android gets upgraded. Google doesn't need to upgrade the base operating system in order to push changes to its apps and they can even push security fixes and upgrades to the base operating system by means of Play Services and supporting libraries by which they push backports.
It also depends on the Android device. I have a Nexus 6, released 2014, that's on the latest Android 7 (Nougat) and my old Nexus 4, released 2012, is on Android 5 (Lollipop).
Maybe I would have preferred if my Nexus 4 also got upgraded to the latest, but then again it might not have the needed juice, I like very much how it runs right now and all the apps I use still work and receive updates. And I also have an older iPad that got upgraded by Apple and is now unusable.
Even so, with Apple there's no way out - once upgraded, it stays upgraded, and once support is dropped, it stays dropped. With my Nexus I have a choice - because of Android's nature I can always use CyanogenMod. It's not exactly a solution for the non-tech-savvy, but it works.
My Note 4 came with kitkat installed two years ago. Shortly thereafter I was able to upgrade to lollipop. For the past several months now I've been ignoring the available upgrade to marshmallow because honestly neither my cell provider nor Google has any incentive to let me keep a perfectly good phone for another year or two, so I'm worried a major performance hit is the only real feature I can expect from the new version. I know I'm being slightly irrational in this concern, but can anyone confirm that the jump from Android 5 to 6 doesn't cripple phones with "old" specs (since it's targeted to the latest phones)? I'd love to update if it turns out this is not the case.
5 to 6 was fine on my G4. I'm not sure how it compares to the Note 4 but the phone is at least a generation old. There is another pending OS update I've been ignoring for a few months, not for fear of being crippled, but just because I can.
Not sure if this was intended but this comment is actually kind of funny considering Android's lack of updates is a frequent point of criticism. Update notifications are probably the one thing you don't have to worry about with Android.
On a more serious note, in my opinion updates are an exception, and the user should be urged to do them.
I don't necessarily want to AVOID an update; I just don't update to new OSes.
XP sucked until SP2. 8 sucked until 8.1u1. 10 gets better each roll.
My issue with iOS 10 is the changes to the underlying flow (like unlocking the phone) and the lack of tweakability (basic issue with iOS). Naggware on top of that grates my nerves.
If only it were about the security update. But in the case of iOS it is not. It's the opportunity to start the nagging all over again: please start using iCloud. Let's use Apple Pay now. Let's use Apple Music.
Or how about popping a dialog to inform me that cellular data is disabled for $app every time I open it while not connected to wifi, even though $app most certainly does not require access to data in order to work.
I'm of the "avoid popups at all cost" school of UI design. As soon as you let developers have modal UI that isn't meant to accomplish a specific, user-initiated task, they will immediately start using that power to be obnoxious.
Even worse; let's break backwards compatibility with your paid software so that you have to purchase an upgrade license... If you've had a Mac long enough and you run paid software you know what I'm talking about. :(
Considering the wide availability of beta OS updates, I'd say that's on the developers of said software. Sure, as someone who works in audio and has to deal with a range of software instruments/effects and audio interface drivers, I know exactly what you're talking about, and I have little sympathy for developers of expensive software/hardware who don't test and fix their products before OS releases.
Considering the wide availability of beta OS updates, I'd say that's on the developers of said software.
Why? If the developers produced a product that works and sold it to a user at a fair price, why on earth should those developers then have some sort of indefinite responsibility for producing updates because other people chose to break compatibility?
There is a reason we have standards and there is a reason it's important for system software in particular to define and support standard interfaces. In fact, that is arguably the primary function of an operating system: to provide a stable platform on which other software can run.
The fact that some of the main OS providers no longer seem to recognise this and instead consider instability and backward-incompatibility to be strategic assets makes me genuinely afraid for the near future of our industry. If I want to do something as simple as buying a laptop for one of my small businesses tomorrow, so someone can get on with useful work and will be able to continue doing so for a long time, there are currently no good options available.
I certainly disagree on this point. Take the case of a small software company that ends up not being all that successful. I purchase a license for said software; it is useful for me, but the company goes out of business since they are unprofitable.
Now Apple breaks backwards compatibility with this software when I upgrade.
Even in the case of the software maker still being in business, should they have to provide me free upgrades for life? If they do that then they have diminishing profits with every upgrade. On the flip side, should I have to pay for a new license when I don't need the new features and am perfectly happy staying on the existing version if it only would continue to work?
To give two quick examples: I used to run Parallels on my Mac so I could run a couple of Windows apps that I absolutely had to have, and it would have been very inconvenient to use a Boot Camp setup and reboot every time I needed access. Then when the Mac OS version changed, Parallels quit working; my only option was to buy a new license for Parallels. To make matters worse, you never really know what software will break when there is an upgrade, but if you don't upgrade then you end up in the boat where some other piece of software you are running does get updated, and you can't use that software without getting the latest Mac OS version.
When I compare this to Windows: today, on Windows 10, I'm still running software that I purchased for Windows 95. For all of the many things I dislike about Windows, the one thing I applaud them for is their level of backwards compatibility.
But Apple doesn't just push security updates. They have a well established track record of pushing entire new OSes the same way, a very poor record of crippling performance or outright bricking older ("old" is hardly a fair description) devices in the process, and a stubborn insistence that applying these updates is a one-way process that may never be reverted regardless of any harm they do. Any business that adopts such customer-hostile policies deserves every bit of resistance they bring upon themselves, and I have not the slightest sympathy for them.
I can't, since the "tonight" option never works, and "right now" I'm interacting with my phone for a specific purpose (like navigation) that won't wait half an hour or three quarters of a charge for an OS upgrade.
Right, the whole "enter your PIN to update tonight" screen. I just gave into the nags and updated my old iPod Touch (which I only use as a remote for some home automation things), and now it runs considerably slower.
Yeah, I gave in on my iPhone and now it runs really hot. But this time around they're being so annoying about it that having an OS that clearly wasn't designed with my hardware in mind is probably the preferable option.
Long story short, I'm pretty sure this phone is my last iPhone.
The only one where I can give an honest answer is "No, I love my laptop too much to upgrade." I'm actually having exactly this problem. I would like to upgrade to a new notebook with more RAM (4 GB soldered in is becoming a bit claustrophobic these days), but I have several rare autographs on my notebook cover and don't want to abandon it.
I don't get how this practice gets to live on anything other than uninstall screens. If I'm presented with an option like that, my first response is to close the window (whether app or webpage), then uninstall everything from that vendor.
Heck, I even uninstalled every jwz package from my Debian systems after the xscreensaver fiasco.
There's another brilliant app rating pattern (perhaps a dark one). A dialog pops up and asks if I loved their app. A "yes" takes me to the marketplace for rating, but a "no" only allows me to send them feedback. It both positively skews the ratings and collects feedback from disgruntled users.
Not necessarily dark though... as an app developer I'd actually really want to hear the feedback from the people who dislike my app. As in any business, oftentimes the most disgruntled customer is the one that can provide the most useful feedback.
An even smarter way (in my opinion) of getting 5 star reviews is by showing the dialog only after x hours of use. Users most likely to rate badly will have the app uninstalled before the dialog shows.
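The timing trick described above amounts to a couple of lines. A hedged sketch (the threshold value and counter are hypothetical, not taken from any real app):

```python
def should_show_rating_prompt(hours_used: float, already_prompted: bool) -> bool:
    """Only prompt users who have stuck around.

    Anyone likely to rate the app badly has probably uninstalled it
    long before reaching the usage threshold, so the population that
    ever sees the dialog skews toward happy users.
    """
    MIN_HOURS = 10  # hypothetical engagement threshold
    return hours_used >= MIN_HOURS and not already_prompted

print(should_show_rating_prompt(12.5, False))  # True
print(should_show_rating_prompt(0.5, False))   # False
```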
Which benefits everyone if the problem is a simple thing you can resolve by helping them directly. Support is also part of the experience, and good customer support should factor into the review as well.
I can see how you'd see this as providing a positive bias, but I see it more like getting a chance to see if you can't help the customer out before they give up on your app. It also reminds the customer that there are people on the other end - so even if the issue can't be resolved, and you still get a one-star rating, the level of vitriol seems likely to be reduced - something all too easy to forget when angry-reviewing.
Of course if they just take the feedback and dump it, that's a different story - but again, I would think anyone with that experience would still leave the negative review.
TL;DR - too many downloaders use negative reviews as a combination support request and cudgel. I think this is a reasonable defense against that.
Users with a negative experience are more inclined to leave a rating than users with a positive experience, which makes the ratings not reflect the overall user experience in the first place. (http://cdn2.hubspot.net/hubfs/232559/The_Mobile_Marketers_Gu...)
This technique is a genuine way to encourage sharing positive experiences about the app. At the same time, it offers the app provider a chance to improve a bad user experience.
Whether it's a dark pattern or not really depends, I think, on the motives: are you genuinely trying to make the app better, or are you only interested in people's perception of it?
And you almost need to do this, because app store reviews are so deadly otherwise. You have no way to respond to reviewers (who've had problems) as you might on TripAdvisor, and virtually no one goes out of their way to find an option to send feedback before going nuclear in a review.
Other big players like Microsoft are doing it too. And that dialog can be worse:
Do you like our app?
<Yes, rate it> <Later>
It's also freaking annoying because it interrupts your workflow, the thing for which you ended up installing that stupid app in the first place. It's basically disrespectful of their users' time and needs.
See also: the Windows 10 "upgrade" fiasco. It's as if they took their ethics lessons from the bully in _Calvin and Hobbes_: "Yes means no and no means yes. Do you want me to hit you?"
I feel you... I do it in my app and I hate doing it, but it's a necessary evil. Way too many people see the comments/ratings as either a support board or a way of punishing bad apps, but they very rarely give a good or even just decent review without being asked.
> I feel you... I do it in my app and I hate doing it, but it's a necessary evil.
This is what makes the IT part of my heart die a little inside even more: when even the "good folks" in this industry feel like they have to do obnoxious things just to get ahead or keep up.
Folks, this is why the bigger arguments about competition or arbitration and so on always come back to the same point ("there's no meaningful choice"): Eventually, everyone winds up doing the same thing because it's the only way to simply not lose ground to the others who are doing the obnoxious-overall-but-good-for-just-me-in-the-short-term tricks. If I uninstalled every app that demanded this...well, I'd have very few apps. And that's even worse because it's just obnoxious enough to help the individual app developer but not so obnoxious as to spur people onto meaningful action, yet the annoyance is still present...always grating on nerves...disrupting just a little bit of productivity or happiness...needlessly.
You may want to reconsider. I've had much better success with providing "Rate Now", "Rate Later", and "Leave Me Alone!" options. Even great apps are prone to bad reviews if the user is being prompted for the 5th time.
Like many users, I too have fallen victim to a temporary fit of rage, leaving a 1-star review with a one-liner of nonconstructive feedback for an otherwise useful app.
As if to say "Fine, you want it? Here's your god damn review!"
There's also the brilliant idea that they show a giant "Why?" text box if you rate 1 or 2 stars, so that you type away your frustrations and after sufficiently venting, you do not actually go to the App Store to write that review.
On Android, apps can't self rate like that, all they can do is take you to the store page and let you rate their app. The app can't know what rating you gave.
Does iOS let apps self rate like that? This seems inconceivable to me.
Bottom line: when an app keeps pestering you to rate it, say "Yes fine I'll rate you", they take you to the app store and there, you do nothing (or one-star them). The app should register that you rated them and stop pestering you with that dialog.
I think maybe you're not parsing it: they're asking you to rate the app inside the app, with a dialog box created and owned by the app, and only pushing you to the app store if you're a 4 or a 5.
I don't think either platform lets that happen. Still, if you know your user is going to rate anything less than 5 stars, why would you still take them to the store to rate? And the amount of people who lie on those dialogs is quite small.
It's absolutely possible on Android. Just show the user a dialog which contains 5 star buttons. If they press the fourth or fifth star, take them to the app page on Google Play, otherwise don't do anything.
It's not allowed in the Google Play TOS, and it's one of the darkest patterns there is, but it's technically possible just as it is on iOS.
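For illustration, the gating logic being discussed is a one-line branch. A platform-neutral sketch (the names and the 4-star cutoff are hypothetical, and as noted above, shipping this violates the Play Store TOS):

```python
def route_rating(stars: int) -> str:
    """Review gating: happy users go to the public store listing,
    unhappy users are diverted to a private feedback form, which
    skews the visible rating distribution upward."""
    if stars >= 4:                # 4-5 stars: the rating-affecting path
        return "open_store_listing"
    return "open_feedback_form"   # frustration vented privately

print(route_rating(5))  # open_store_listing
print(route_rating(2))  # open_feedback_form
```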
Tinder has another horrible UI trick, not sure if it qualifies as dark pattern or just terrible coding. On Android, they require you to enable the global option "Always allow Wifi scanning" in order to get your location. If this is disabled, they pop up a dialog saying "Enable? Yes | Cancel". Hitting "Cancel" immediately pops the dialog back up again, so you cannot use the app at all until you click "Yes".
edit: Some of us do not want to enable this power-draining, privacy-sucking global option just to use Tinder. An Xposed framework module was created to bypass the check, but Tinder has actually begun checking for it, and the app doesn't work properly if it is enabled.
Google Maps on Android makes heavy use of dark patterns. If you are on the go and open the app, it asks you to enable the location service to continue. The choices are "Enable location service" or "Cancel". It is phrased as though the app will not run if you tap "Cancel", but it works perfectly if you do.
Worse, if you agree, it enables the location service for everything, all the time, while giving you the impression it was just for the Maps app.
I just discovered that 2 days ago and I must say I was really really angry at Google for this clear dark pattern use.
I've turned off Google Play Services' permission to access location services, camera and microphone. Google Maps now basically doesn't work (it runs at ~10fps and the search box is non-interactive). Gmail scolds me every 10 seconds while composing emails that "this app won't work properly unless you give Google Play Services access to your camera and microphone" (but works fine otherwise).
Their piping of apps' access to things like location through Google Play Services to force you to give them access makes a mockery of the permissions system. I really don't like the way they've been doing things the past few years.
Edit: Gmail complains about lack of access to camera and microphone, not location services. Fixed.
When Google released the new permission system in Android, they blew their one chance to actually make permissions meaningful. The fact that "portscan my network" is one of the Other permissions is testament to how unconcerned with user security and privacy they seem to be as an organization (despite, no doubt, some individual developers who care). I'm pretty close to deciding that my next phone will be dumb and featureless.
Wait sorry - it was camera and microphone that it nags me about. I'll update the other post.
But still, I can't see why Gmail should need access to those devices, and far less why it should harass me so aggressively about it when it works fine without them.
Just conjecture, but I feel like there was a sea-change at Google regarding attitudes towards users' privacy, about the time that adblock became widespread and competition started heating up between them and Facebook for ad revenue. They're behaving far less ethically than they did even five years ago.
Thank you. This has been going on for a while (as I recall, a Maps update last year first committed this crime - wish I had an apk backup of the prior version).
I now use HERE WeGo or OsmAnd (on non-GAPPS devices, from f-droid). While the experience is not nearly as cool, I love the fact that I am not participating in uncontrolled monitoring and unexpected battery drainage.
+1 for OsmAnd. The app keeps getting better and better, kudos to the developers! It might not be as "cool" as G Maps (though I'm not so sure about that), but at least I can take my maps with me everywhere I go. Not to mention the fact that it doesn't send my position anywhere.
I get prompted with that each time I activate the GPS to use OsmAnd (for example).
The worst part is that if you accept it once, the setting is saved and there is no obvious way in the UI to change it back (which I think was possible in the past).
To reset it, go in the Applications manager in the settings, choose Google Play Services and reset all its data.
There is no way it is a bug, someone had to think and put this deceptive behaviour as a feature (and someone had to design its deceptive UI, someone had to do unit testing for this deceptive behaviour, someone had to QA it…).
There's another one in Google Maps where you can't easily choose to avoid tolls. You also can't tell it to remember that you never want to use toll roads, so it's very easy to suddenly find yourself on one.
Oh, God, don't get me started on that one. Northeastern Illinois is toll road central, and I-355 is the most expensive toll road in Illinois (save for the Chicago Skyway). Then there's that short stretch of I-80 that runs concurrently with (toll) I-294, with no reasonable alternate routes. That one isn't ridiculously expensive, but it's still annoying.
The tollways here are especially heinous when you don't use them often enough to justify the hassle of getting an I-Pass, since the cash toll is twice the I-Pass toll.
Yeah, and I don't expect that anyone tricked into (or accidentally) clicking "rate now" is going to leave anything but a zero-star review complaining about how the app tries to trick them and has annoying nag screens. I've written a few of those myself, then chuckled at how short-sighted the developer was. If more people did that, they'd probably lay off the "rate my app" pop-ups. If I REALLY love an app, I'll go find the developer on Twitter and thank them, and write some nice reviews.
There's also the "pre permissions pop up" trick where a dialog that looks almost exactly like the iOS permissions box shows up first, and only THEN does the real one show up if the user "agrees" to accept it.
Remember, as an app developer, if a user denies the iOS permissions dialog in your app, you can't EVER show the dialog again -- the user has to manually leave the app and re-enable the permission in the iOS Settings.
Successive calls to the permission-request function automatically return "denied", so it is in the developer's best interest to avoid showing the real iOS permission pop-up unless the user has somehow indicated that they are going to accept it (in this case, via the "pre-permission" clone dialog).
When I used this "trick" (yes, I'm guilty), only a handful out of thousands of users accepted the soft prompt and then went on to deny the real iOS hard prompt.
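The logic of this soft-prompt pattern boils down to something like the following sketch (hypothetical names; the `PermissionGate` class just models the one-shot OS dialog described above, not any real iOS API):

```python
class PermissionGate:
    """Sketch of the iOS "pre-permission" soft-prompt pattern.

    The real OS prompt can effectively be answered only once: after a
    denial, later requests are auto-denied. So the app shows its own
    look-alike dialog first and only "spends" the real prompt on users
    who already said yes.
    """

    def __init__(self):
        self.os_prompt_spent = False  # has the real one-shot prompt been shown?
        self.os_granted = False

    def request_os_permission(self, user_accepts: bool) -> bool:
        # Models the one-shot OS dialog: after the first answer,
        # subsequent calls silently return the stored result.
        if not self.os_prompt_spent:
            self.os_prompt_spent = True
            self.os_granted = user_accepts
        return self.os_granted

    def ask(self, soft_answer: bool, hard_answer: bool) -> bool:
        # The soft prompt costs nothing: if the user declines it,
        # the real prompt is never shown and can be retried later.
        if not soft_answer:
            return False
        return self.request_os_permission(hard_answer)


gate = PermissionGate()
print(gate.ask(soft_answer=False, hard_answer=True))  # False: real prompt untouched
print(gate.os_prompt_spent)                           # False: can ask again later
print(gate.ask(soft_answer=True, hard_answer=True))   # True: prompt spent on a "yes"
```

Declining the soft prompt leaves the real prompt unspent, which is the whole point: the app keeps its one shot at "allowed" in reserve.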
I've often seen this used to explain what they're planning to do with the permission, e.g. "Would you like to enable camera access so you can customize avatars with photos?"
Done properly I can appreciate it, especially on Android where permissions are often nonsensically bundled together. Although I'm never sure when it's crossed the line into scam. Is the permission for getting the user's gamer id really "make phone calls"?!
Is it a dark pattern, though? You're respecting the user's wishes to not use that permission (ignoring for the moment that you couldn't use it even if you wanted to).
I really don't see a problem with it. Apple's default UI is not great, and it can be highly confusing if the user doesn't know why the app is suddenly asking for access to their location or photos or whatever. By explaining and asking first, you smooth that over. There's no deception or trickery, it's just a matter of not surprising users.
(And to be clear, I've never built one of these extra permission prompts, so I'm saying this purely from the perspective of a user.)
It's not a grey area, it's a dark pattern, and you're basically intentionally breaking Apple's app policies, and screwing your users over just to get more permissions than you would get if you followed the rules.
That's simply not true. Moreover, there are some people with pretty strong arguments who do not even THINK this should be considered a dark pattern.
However, it doesn't seem like you are interested in furthering this discussion. Would you have rather I not posted at all?
There are many YC companies listed on this Dark Patterns site, and their founders definitely frequent this forum. So far everyone else has been completely mum.
Given that you are on YC's community web site, may I suggest using some tact?
The correct thing to do here is to hit "OK" and just kill the App Store App without leaving a review (since they can't tell if you actually review it). Or maybe leave a 1 star and a note that you were coerced into rating it!
Related: there's an unfortunate cause of excessive app rating prompts — what's shown in search results/top charts/other areas (on iOS) is the average of the latest version, not the overall aggregate.
If you're at a company that does scheduled releases (e.g. once every three weeks), you'll need to continually ask people to review the app to keep that rating high.
Otherwise you only get ratings from new users and users that are discontent with the particular version. It's rare that people who have already rated the app 5-stars will continually go out of their way to rate the app 5-stars again without prompting.
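The incentive is easy to see with some made-up numbers (purely hypothetical, just to illustrate the per-version vs. lifetime gap):

```python
# Hypothetical illustration: why per-version rating averages push
# developers into constant "rate this app" prompts. The store surfaces
# the current version's average, not the lifetime aggregate.
lifetime_ratings = [5] * 900 + [4] * 80 + [1] * 20  # a happy long-term user base
version_ratings = [1, 1, 2, 5]  # only new or annoyed users rate the new release

lifetime_avg = sum(lifetime_ratings) / len(lifetime_ratings)
version_avg = sum(version_ratings) / len(version_ratings)

print(round(lifetime_avg, 2))  # 4.84
print(round(version_avg, 2))   # 2.25
```

With a release every few weeks, the visible score resets to that unflattering per-version average each time, so the app nags its happy users to re-rate.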
It's a case of bad UI rather than a dark pattern: if you click 'disagree', it always remembers the decision; the checkbox only affects the 'agree' button.
Even worse: the ones that ask "Do you like the app? Are you happy?" Either option leaves the app: "No" opens your email client with the To: field filled in with their support address, and "Yes" opens the App Store.
My only nitpick: the author wants the industry to agree on a "code of ethics."
Unfortunately, such exhortations strike me as naive. They are unlikely to work, because the truly bad actors will continue to use dark patterns regardless, putting pressure on all other actors to follow suit. The key challenge is not in getting the good actors to do the right thing, but in preventing the bad actors from doing the wrong thing.
Meanwhile, even sophisticated consumers like HN members pay a cognitive or financial cost to deal with dark patterns every day, which are prevalent throughout the web. Everyone I know is sick and tired of this crap.
The only viable solution I can think of is regulation in the form of a consumer-protection agency, working with the industry, that can fine bad actors up the wazoo.
I agree and that goal sounds similar to the goal of Kill Analytics [1]. "The industry" doesn't even exist as an entity. Gov regulation seems like the only viable way to fix it. But on the other hand, Google AdSense has done a lot to improve advertising online. Popups and bad ads used to be waaay worse than they are now (not that they are okay now). So maybe we do just need a few big players to step up and change things.
Do it in a way that earns them more money. Sounds difficult but that is what adsense did. If legit ux practices earn more, companies will use them.
Potential scenario: someone makes a browser plugin that blocks dark patterns as if they were ads, so companies who use them don't see any traction with them.
Or 50 years out when everyone is computer literate, users are aware of dark patterns and punish companies who use them by not buying their products.
Conversion rate optimization tools and data. Businesses turn to dark patterns to increase the conversion rate, and there isn't a ubiquitous tool for them to use. In theory google could provide a lot better funnel analytics and offer suggestions/options to improve that aren't dark patterns.
For advertising, there is an "industry" that can agree on such things, and it's called the Interactive Advertising Bureau (IAB). In fact, they're now accepting public comments until December 22 on the new ad types that are supposed to be "leaner":
That's a good point for advertising, though the IAB doesn't touch much on-site UX outside of display ads. In my limited experience the IAB perpetuates/supports what most consumers would see as deceitful: not CRO, but excessive data collection and sharing.
Privacy Badger is more on the defensive side: It allows you to avoid tracking, so you just disappear from the POV of the analytics platform.
Kill Analytics looks more offensive. It actively sends garbage data to the analytics platform, thus destroying its value proposition to the site operator and discouraging its continued use.
>Besides, do you really want a bureaucrat telling you how to design your web site, with penalties enforceable by law?
"Bureaucrats" already tell you how to design your website. There are already laws against some forms of deceitful advertising, unfair trade practices, information sharing, and the like. For example, let's say I was designing an airline ticket booking site...
>For both domestic and international markets, carriers must provide disclosure of the full price to be paid, including government taxes/fees as well as carrier surcharges, in their advertising, on their websites and on the passenger’s e-ticket confirmation. In addition, carriers must disclose all fees for optional services through a prominent link on their homepage, and must include information on e-ticket confirmations about the free baggage allowance and applicable fees for the first and second checked bag and carry-on.
How about those hotel "resort fees"? Currently the FTC says they are OK as long as they are disclosed before booking. So a hotel can advertise $20/night, but when you go to book, say "lol btw there's also a $30 resort fee." This of course makes comparison shopping impossible and exists for no reason other than to deceive you. Rumor has it the FTC is going to backtrack on the policy and disallow separate resort fees.
Those aren't web site design issues, they're truth-in-advertising and business practice issues, which apply to all forms of advertising and business.
I'm talking about technical and design issues, like requiring every web site in the EU to pop up a stupid banner while you're trying to read something that blocks the content just to say, "Hey! We use cookies! Got it?" Now imagine taking that to the next level, with pages of regulations saying where and how other form elements must be laid out on the screen. Imagine another banner popping up on every EU web site saying, "Hey! Here's the link to our privacy policy! Got it?" Then clicking that away stores a cookie, which pops up the cookie banner... All because some bureaucrat who doesn't even know what an HTTP cookie is wrote a regulation requiring everyone on the whole continent to acquiesce to the bureaucrat's ignorance so he can claim to be pro-consumer and privacy-conscious and get reelected.
> I'm talking about technical and design issues, like requiring every web site in the EU to pop up a stupid banner while you're trying to read something that blocks the content just to say, "Hey! We use cookies! Got it?"
When I worried about the impact of the EU cookie directive, I read it. Surprisingly, it only requires cookie notifications for web sites that use cookies for purposes that are not strictly necessary for the site to function. This means that the operators of web pages that show cookie notifications are probably spying on their users for advertising (or other) purposes. The EU cookie directive only makes this obvious.
I think EU politicians know what cookies are and how they are used. You can see that in the list of cookies exempt from consent:
• session IDs
• authentication cookies
• user-centric security cookies
• session-limited multimedia player cookies
• social network cookies (for logged in members of the social network)
Yeah those popups suck. I think there are better ways to implement things like that. Maybe a button in a web browser that reports sites to a 3rd party that assesses sites and can hand info over to the government for levying fines.
What about certifications? Some certifying org can (for a fee) routinely audit websites for whether they use dark patterns. Then, people just know to avoid sites that don't have a good certification, and can report shady stuff.
If they push the envelope too hard, you report, they follow up and can potentially pull the cert. Maybe have browser integration too. (But good luck disambiguating between this and SSL certs for the average person...)
If I'm a developer (say, a junior engineer with my first real entry-level software engineering job out of college), my direct manager (who generally supervises and stringently handholds) basically tells me exactly which features need to be implemented (and often, even how).
I don't have much of a say in which patterns (dark or light) get implemented, and I probably won't have the gall to "stand up" and "rock the boat" as a 22-year-old fresh out of school.
It's even worse if I'm married with kids... How do I explain to my wife and children why I lost my job for refusing to implement the product guys' ill-conceived version of "Roach Motel" in the frontend?
This is why I sympathize with the lowly VW engineer "fall guy" whose head met the chopping block for the pervasive, executive-driven diesel cheat scandal.
A code of ethics is actually a perfectly acceptable solution. Several other institutions have codes of ethics that must be followed, and there have been legal adoptions because of them; the Hippocratic Oath, for example. Doctors have many times been asked to do harm to patients. Of course it does happen, BUT if a doctor refuses and gets punished in some way for that, a major lawsuit can ensue. Many engineering professions have codes of conduct as well.
I think this would be much harder to pass, and would be more akin to financial advisers being required to be fiduciaries, but hey, that is now required in the US. What is required for that to happen is mainstream support, though, and that will be difficult to get. I think this, like the privacy stuff, is a little too abstract for the average person.
Blockchain based application platforms can allow re-architecting of applications such that the data that backs the application is not "owned" by the company, but instead is either public or user owned.
Currently, if you want to use LinkedIn you have to use their website. Sometimes there are 3rd-party options that consume the API, but in the current model of the web, as soon as one of those 3rd parties is seen as problematic, API access is typically revoked or restricted to give power back to the company.
In the public data model the data and API are publicly exposed and cannot be arbitrarily restricted. In the LinkedIn case this would allow a 3rd party to build a new UX on top of the LinkedIn database that excludes the copious dark patterns. Under this model, companies who abuse their users risk getting displaced by an alternate application backed by the same data that favors the user.
- Disclaimer: I work pretty much exclusively developing software in the Ethereum ecosystem which is one such blockchain based platform.
Sure, blockchain-based applications might have that ability... but what incentive do companies like LinkedIn have for doing this? What incentive does any company that uses these dark patterns constantly have to use a blockchain-based application? Especially because most of the time, the users data is the profit machine, so companies certainly want to own the data that backs the application.
> but what incentive do companies like LinkedIn have for doing this?
They don't have any incentive and I suspect they will hold onto that data until it's "pried from their cold dead hands".
I think that companies like LinkedIn and other massive data silos are going to atrophy and die as users migrate to new platforms that treat them better and give them more control over their data and experience. I'd like to point out that while there is little incentive for current companies to adopt this architecture, it doesn't mean that new companies won't be successful implementing their business under this architecture. Admittedly, almost all of this is utterly unproven given the newness of blockchain based application platforms.
One way to look at this is that the current model of internet companies is highly anti-competitive. The data they "own" is really the data of all of their users who can freely give it to any other source they choose. The fact that they have control over the database is what gives them the competitive advantage. These new application platforms which have open public databases can change the game such that the previous closed-data model can no longer compete.
I think at this point we've amply demonstrated that the overwhelming majority of people--and I think that's probably a critical mass of people, from a social sense--don't really care about their data and only minimally care about experience.
What do you view is the meaningful reason for users to switch to these other platforms with some kind of better underlying data model? In addition, what's the meaningful reason for a company to adopt such a better underlying data model instead of keeping a data silo and just making better features on top of such a silo?
> What do you view is the meaningful reason for users to switch to these other platforms with some kind of better underlying data model?
I completely agree that people don't currently care about their data in the sense that people are complacent about their privacy and aren't likely to change very much in that regard.
I think people care about UX, but to what level? It might be minimal.
There are a few compelling reasons why I think these new open platforms are likely to succeed and I'll try to capture them succinctly.
1. Data Economy: People choose options that save them money or make them more money. While people don't care about owning their own data, they will care about a new platform that lets them earn money for passive things like keeping their smart phone location services turned on, or allowing access to their browsing habits.
2. Account Portability: Currently if you transition from selling on Ebay to Amazon, or from driving for Uber to Lyft you have to start back over from zero. If you own your data then you just bring it with you over to the new platform and all of your reputation and whatnot can come with you.
3. Network Effect: These types of open platforms are capable of robust cross-platform integration. Right now we see the power of this in things like the suite of products that Google provides. We could have this kind of deep interconnectivity without needing the applications to come from the same source.
I also want to acknowledge that this isn't going to be a smooth ride and there are big challenges to overcome, but the potential exists and it won't happen if we don't try.
> what's the meaningful reason for a company to adopt such a better underlying data model instead of keeping a data silo and just making better features on top of such a silo?
I believe that the article linked below titled "The Golden Age of Open Protocols" is the most compelling argument I've seen.
What about an organization that helps consumers to cancel or opt out of deals instantly. From the talk if a dark pattern gets a consumer to accept a deal, it is very difficult to leave, so going to help for this could work especially if the help can cancel hassle free very quickly.
> The only viable solution I can think of is regulation in the form of a consumer-protection agency, working with the industry, that can fine bad actors up the wazoo.
A code of ethics would be 100x more effective than this, and still be ineffective.
There's no search but I'm curious if LinkedIn is included. I never took screenshots, unfortunately, but I feel like they've had close to 5 in my own experience alone.
LinkedIn is one of the most common examples when talking about dark patterns, their most famous being the number of ways they tried to get you to invite your gmail contacts.
Just reinstalled Skype and can see that they're trying to get access to my contacts also. They tell you to set up access so you can use the app.
First up is microphone, and you click allow. Second is camera, and you click allow. Third is contacts, and you click—wait a minute, why do you need this? Disallow.
Don't know what they would do if they got my contacts (hopefully not spam them like LinkedIn), and don't intend to find out.
Like many people, Skype is a fallback communication channel for me. I don't care if my brother or best friend is on Skype because I call them on the phone to talk, text message if I want to chat, or Facetime if I want to video chat.
I'd say Skype is a way to communicate with people, not a way to contact people. And for me, it's a method of last resort for people I can't call/text/Facetime/Hangout. I understand some people might say yes, but it's still somewhat deceptive to put this in the same category as "enable mic" and "enable camera", which Skype cannot operate without.
Skype should be a little more private than your cell phone number is. If I add someone's phone number to my contacts list, that DOES NOT mean I want them automatically added as my Skype contact.
Of course. But for them to ask doesn't seem like a dark pattern, since the main use case for the app is contacting people you know and it's reasonable to think most users don't want to micromanage different contact lists as you describe. The dark pattern stuff is more like when your app for rating ice cream flavors is asking for your entire address book.
I don't recall Skype ever asking me for permission to do this. I updated the app and suddenly all my phone contacts were Skype contacts; I had to go through and change the settings to never do that again, and manually delete each contact it had created.
To me this would be akin to Facebook automatically adding the local Chinese restaurant as my friend simply because I had their number saved in my phone.
Airbnb also wants access to your entire Google contacts or Facebook friends list if you want to use the mobile app. Let's just say that I refuse to use their mobile app and only use the desktop/browser version for this reason.
You want permission to access my entire contacts/social network? No thanks.
Here's the one of the most egregious examples of LinkedIn's dark UX. The transition from "I'm accepting incoming invites" to "I'm inviting people to connect or join LinkedIn" is intentionally subtle: https://goo.gl/photos/ncnTiMWeSBJfg3rZ6
Here's another one I found recently: The email spam they send you is categorised into a bunch of categories. You can subscribe or unsubscribe to each individually. The 'unsubscribe' link at the bottom of the emails sent will unsubscribe you from the first category only, no matter which category the email is from! And the web site doesn't tell you this, it just pops up an almost content-free 'unsubscribed' message. So you keep getting spam until you manually load the site and dig through the settings pages and uncheck the other categories.
If it were truly the case that you couldn't see it without signing in, that would not be a dark pattern - there's nothing misleading about that, it's just a policy you don't like. Using the buzzword of the moment to criticize something you don't like, without considering whether it really applies, is how phrases lose all their meaning.
(of course, that's not true here and you don't have to sign in)
The distinction is subtle, because there is a common dark pattern where websites try to trick you into creating an account to access public content, even though you don't actually need to.
This is a dark pattern because it's trying to make me believe that I should create an account when obviously I just want to get the file that my friend sent me. The link to download the file directly is at the bottom of the message.
Not sure what you're seeing. I opened the link in an incognito window (therefore, no google signin), and everything works fine without signing in: http://i.imgur.com/ZKhACZM.png
Also tried a different browser with a clean slate, no issues.
But you learn more this way; LinkedIn is just frustrating!
A company that tries to be clear on everything is Google, but I still find they morph so rapidly that their documentation is often 2 or 3 generations behind.
At the same time, if someone has indicated that they want to give my app a bad review, am I obligated to take them to the review point so they can do it?
I don't think that any of these dark patterns represent the breaking of obligations by any party.
Dark patterns don't represent anything truly sinister, and in most cases they are perfectly legal. They are just bad UX because they're dishonest about their intent.
Buying tickets with Ryanair is stressful due to these kinds of practices. They're less aggressive than in the past, when I wouldn't even continue because I had zero trust in the company, but they're still sly.
A sneaky one I saw recently is something like:
[ ] Subscribe to newsletter about our services by unchecking this box.
(It doesn't matter whether the box is initially checked or not, the user will be tricked into the desired behavior.)
I don't remember the exact phrasing, and it was much shrewder than my rendering, but it relied on flipping the boolean value of the checkbox towards the end of the field label. Any user who only skims the start of the sentence will leave it in its current state.
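The trick reduces to a negated boolean, something like this sketch (the label text is hypothetical, reconstructed from the example above):

```python
# Sketch of the inverted-checkbox trick: the tail of the label negates
# the meaning of the control, so leaving the default state alone does
# the opposite of what a skimming user expects.
def honest_signup(checked: bool) -> bool:
    # "[x] Subscribe to our newsletter" -- checked means subscribed
    return checked

def dark_signup(checked: bool) -> bool:
    # "[ ] Subscribe to newsletter about our services by UNchecking this box"
    # -- the meaning is flipped at the end of the label
    return not checked

# A cautious user leaves the box unchecked in both forms:
print(honest_signup(False))  # False: not subscribed, as expected
print(dark_signup(False))    # True: subscribed anyway
```

Either default state works for the business: whichever way the box starts, the flipped wording converts the "safe-looking" choice into a subscription.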
Would gofundme's entire brand and business model be considered a dark pattern? They go out of their way to make it feel like a nonprofit, promote campaigns for issues that already have actual nonprofit status/direct donation pages, and do their best to hide their fees (which, last I checked, were actually higher for legitimate nonprofits than for regular campaigns).
These kinds of "hall of shame" websites are interesting to me. However, most regular users do not know or care about this stuff.
Well, in a semi-ideal world, there would be a comprehensive "hall of shame" database containing the information about the tricks, problems, dark patterns, etc. for all websites. Then, some helper apps or browser extensions could warn us about these issues while a regular user is browsing.
One of the problems with this idea is that it gives a huge authority to the owner of that database and there would be lots of questions about its neutrality.
In an ideal world we could detect those patterns programmatically but as you hinted, there is the question of "when is it considered a dark pattern and when is it only clever marketing?".
The answer is actually clear - it's always "dark pattern" from the POV of the user, and it's always "clever marketing" from the POV of the business applying it. Each side draws the line so far in the direction of the other that there isn't any easy compromise.
One concrete thing I think can be done right now is to make a Chrome extension that audits the current website on the number of dark patterns and visibly surfaces a score for the website so people can flee from them like the plague/websites-with-the-broken-ssl-padlocks. Vote with our wallet.
Is there a category for grouping notifications such that spammy notifications are lumped in with other important ones you might want to receive?
Google Photos is a big culprit unfortunately with their photo backup. They keep pinging a notification to get me to remove local versions that are backed up in the cloud. I don't want to do that. The only way to remove the notification seems to be disabling all app notifications.
Worse, when you go into settings, they have a variety of settings that all take you into a deeper level of settings when you click them.
Except "Free up device storage."
Clicking that does not take you to a deeper level as expected (despite looking like a nav tree item), but instead actually does the one thing I didn't want to do, with no confirmation dialogue.
Do you even actually have "important" notifications you might want to receive?
Is any automated system more important than your focus?
What I do is disallow or delete the app or "service" as soon as I receive a "notification" from it; only my wife and family are allowed to light up the LED on my phone.
An oft-overlooked aspect of dark patterns is the impact on accessibility.
Ever received a spam email, hunted for the unsubscribe link, and found it in light grey, against a white background? Imagine how much worse that is for someone with low vision. Ditto for pop-up ads with a tiny grey X in the corner.
Many of the dark patterns described in the video rely on hiding/obfuscating opt-outs and these have an even bigger impact on people with visual/processing disabilities.
How about this one in the new uber app: If you disable location services (which recently switched to either "always" or "never", no longer offering "only when open") you can't use your history or saved favorite places to set a location, you have to manually type it out. Couldn't believe they would be so shady just to get location services activated.
After the first time this appeared on HN, I quit LinkedIn and deleted my profile. They still sent me "xxx wants to connect with you".
I'm really getting tired of turning down Amazon Prime on Amazon. I use Amazon less because of this. There are about three extra pages of Amazon Prime ads to click through for every purchase.
Somewhat related—Amazon Prime Video now shows ads at the beginning of videos. This wasn't part of the deal when we signed up, and it's pretty deceptive to just start doing this to customers.
I didn't see it in the agreement (actually went back and looked for something that would cover ads), and it's not clear what limits, if any, they think there are. That is, could they just decide to show as many ads as Hulu and say "yeah, we said you could have access to this catalog. we didn't say it would be ad-free".
Video services need to be disrupted in a way that sticks (not like Amazon adding ads after selling everyone on an ad-free service). The only service I use with any regularity now is Netflix since it has no ads, but even they sometimes have problems with finding the show you actually want to watch.
I was on vacation recently and the room had DirecTV. I tried searching for a program and all the search results were for channels not subscribed. Several of the channels in the guide list were presented as if subscribed, but then when a show would start, would prompt to charge $6 to continue watching and the show would stop after a few minutes. Finally, I found a channel that was subscribed and not PPV, and when an ad came on 30 seconds later, I tried to turn off the DirecTV box.
Here's the DirecTV dark pattern: there was a "Please Wait..." message on the screen while the ad played instead of just turning off the output! How can anybody actually be making money from TV ads when they are so obnoxious?
I'm pretty sick of corporations double-dipping in every industry. Video services charge you for watching, and then sell you to advertisers. Supermarkets charge you for products, and then sell you to manufacturers. ISPs charge you for bandwidth, and then try force video services to pay as well. Where's the exit to this hall of mirrors?
Speaking of Netflix, how about those autoplay video ads for Netflix original shows?! Can't stand that shit. I'm on the edge of cancelling every time I see video previews autoplay!
I opted for the ad-free Hulu subscription, and that, for the most part, is great.
They do have a few programs on there that are not eligible, but then they say up front "due to streaming rights, we have to show you ads, but it's just one before and one after".
That, at least is better than the ad-laden Hulu before they offered that option, where any prolonged bout of streaming would show you the same ad over and over and over.
I sent them two dark patterns in the past which they never posted; in an email I received a long time after inquiring about it, one of the developers said they're under the pump and will get to it sometime. They still haven't.
Good luck figuring that out though, since the what's new page hasn't been working in pretty much forever and the only way to see if something has been changed is to check each category individually.
And yeah, it doesn't update much. I remember sending in my own examples before, and those never got added either. Kind of wish there was a site about this with a more regular update schedule or something.
At least they're pretty honest with it. Every time I walk by their counter at the airport I laugh at the sign because it's like, no frills but you'll pay for everything!
In a way I kind of like that model - just not on an airplane.
I kind of disagree: I want it to be that way in theory, but in reality it is rather predatory. A no-frills flight for a low budget where you pay for everything (and arguably more than for an equivalent seat on another airline), and then they pressure-cook the passengers with a pitch for a high-interest credit card. Their pitch went on forever, way longer than the usual "sign up and get extra miles".
I watched a bunch of folks sign up on the flight and it made me feel really bad, the same way that check-cashing places scam their mostly not-affluent customers. These folks are even more vulnerable to these kinds of dark patterns.
My recent experience with LA Fitness, or City Sports as they call it here in the SF Bay Area, has been similar to what the video describes. However, it was not as painful as I'd imagined it would be. I went to my account page and clicked on the Cancellation Form link (not necessary but recommended). After several screens I got a cancellation form for my account, which I printed and mailed using USPS Certified Mail. After two business days I got an email stating that my account would be closed and when my access privileges would end.
Spirit Airlines is an interesting case, though, because like someone else said, they're very up-front about this. Is it still a dark pattern if the company admits it and essentially warns the user?
The unmissable warnings don't really happen until you get to the terminal though. Admittedly, I have not used spirit for some time. But the signs that say "we don't really want you to have to pay $100 for the carryon you forgot to pay for on the website, but you didn't pay for it, so tough shit" don't seem terribly genuine.
I will never fly spirit again, though. They just straight up cancelled a flight of mine an hour and a half before the flight for literally no reason, I got no notification via email/text or phone until I went to the airport to check my baggage (another thing I hate doing, but I was traveling for a pool tournament and you can't carry pool cues on a plane...). I had to book a last minute flight with another airline and my total airfare ended up being almost double what I paid for spirit.
I've flown them once. Both the flight there and the return were over an hour late. They had no automated bag-tagging terminals, so you still had to wait in line even if you checked in via phone.
The prices were just crazy too. They depend on people overlooking fees and getting screwed over to be profitable. I got away with paying nothing over what I was quoted on the website. I will still never fly them again.
> they are not mistakes, they are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind
I dispute this. I work with someone who is a marketing person and very much drawn to dark pattern rubbish. Most recent incident is a good example - a sales promotion where something is added to the cart if the customer buys a certain product. I pointed out that this was a 'dark pattern' and made sure my boss knew that such an idea is illegal in the E.U.
For me the illegality is not something that scares me, I doubt I will go to jail for writing the code, however, using a 'dark pattern' is a problem for me.
I like to think that I am a customer focused person, my marketing clown certainly is not. In fact he cares not one iota about any of the customers, his world view is selfish.
So, I point out the illegal aspect, next thing is that he wants the items given away. I don't see how that makes our products look good and I have no idea how to make money out of making a product and then shipping it to them for free. So again I am not sold on the priority of the project.
Returning to the 'selfish' aspect, my marketing clown does not code or appreciate the effort involved in making the auto-add work. I can do the code for that and think I could get the MVP of it done in a day, with some testing after that. Then there is the thinking through of the unintended consequences - I imagine that we would get plenty of customer service emails if there was a problem with the offer. The UX is also not thought out. I am sure that I could spend all day getting the message to the customer sorted on the website and emails, but if I didn't do that then the whole thing would certainly be 'dark pattern'.
There is nothing clever about my selfish marketing clown and his naive ways. However, he gets a performance bonus based on 'customer acquisition' metrics that the rest of us don't get. He has an interest to not care about anything other than his Google Analytics nonsense, customers, rest of the team, the company making money matters not.
Although anecdotal, this is how 'dark patterns' happen - marketing clowns, their selfish ways, their inability to understand the problem space (because they don't do code or customers) and workplace bullying make these things persist.
Your experience seems to support, not refute, the idea that these are conscious decisions (albeit at a marketing, rather than coding, level, which I think is probably what the original meant) rather than inadvertent bugs.
I don't get the impression GP's disagreeing with the "not mistakes" bit, only the "carefully crafted with a solid understanding of human psychology" side of things - they're deliberate, yes, but more often than not they're clumsily-applied snake-oil, not the evil-genius stuff they're pitched as.
Thanks for that. Perhaps what I see in the anecdotal 'marketing clown' is a lack of human empathy, which is kind of the foundation stone for understanding 'human psychology'. Some people lack the wiring to care about others and see things from their point of view, whether it is the customer or the workmate. They also do not pick up on basic psychology, in my example there is little appreciation of how upsells work - there are facets and nuances to this that anyone who works in retail gets to grasp.
So it is a case of these people not knowing what they are doing, far from knowing the customer psychology and deliberately deceiving them, there is just no thought beyond doing some silly marketing campaign for the month end results.
What is also wrong with dark pattern is that the customers have to be churned - they will not be coming back after the customer service disaster that goes with the sale.
I don't care about quarterly results, I care about building a business that does not need marketing beyond word of mouth and white-hat SEO. So, in ten years time, customers will return for customer service provided to them, not to facilitate whatever silly offer is needed for my marketing clown's month end. I want customers to want to come back for a great product and great service, this is not compatible with 'dark patterns'.
We often give intellect where there is none. Notably in TV dramas where the 'killer' is supposed to be clever, in reality most people that commit crime really are not thinking at all, they have not thought it all through. 'Dark patterns' is a bit like that.
They're refined by evolution-- ones that make money get kept, ones that don't get removed. That is about as much genius as is really needed.
Better, in fact-- if no one really knows whats going on then no one can have any pangs of ethics, no one can turn whistleblower after getting fired, etc.
That was interesting. My reaction was "is this actually a dark pattern, or just designed by a complete idiot?" Because the user trying to subscribe and the user trying to UNsubscribe both have a problem understanding what is going on there. Somebody should get an award for that.
This presentation is great. I think a good next step on the path toward ending unethical UX could be in creating an international ethics review board for it.
I know it sounds silly, but this is how a lot of decisions are agreed upon by many large organizations, and help encourage involvement and following the rules. See W3, ICANN, ESRB, IETF, etc.
The "BUXE", or Board for User eXperience Ethics (just my name idea) could be founded by a group of consenting UX designers, companies, and organizations. Together they would vote on and establish UX design principles that would be up for review every year or so.
The BUXE would accept fees for reviewing a website's adherence to the ethics guidelines and would give ratings based on how well sites follow them. The reviewed site could then publish its BUXE rating.
Individual developers could be given honor status if they are particularly vocal or involved in ensuring the development of ethical UX that could be accolades for them to brag about (something important to developers). It's a good resume booster, anyway.
While I'm familiar with the site already, I'd love to have an RSS feed of newly added submissions. Unfortunately, when I click on Recently Added I get a list with pattern definition links that all 404, and no link to the actual submission.
That and a api where you could `GET darkpatterns.org/patterns/www.linkedin.com` and get a list. Would be great for building a nice browser plugin with.
That would be amazing. An educational plugin that could pop up notifications at relevant times to inform and warn users of known dark patterns would be very powerful.
There is no "works for everyone" method to stop this. You can vote with your wallet, and support vendors that act ethically, or act in whatever way you are OK with.
When this type of UI disappears from the internet, then you will know that the majority of consumers agree with your viewpoint. Until then, people keep buying those insurance upgrades, and not caring(if they cared, we wouldn't be in this situation).
If it all seems glib, that's because it is glib. People are taking advantage of other people, just below the threshold where those victims care enough to do something about it. This is the world we live in. I'm not sure how to end this on a positive note.
> When this type of UI disappears from the internet, then you will know that the majority of consumers agree with your viewpoint. Until then, people keep buying those insurance upgrades, and not caring(if they cared, we wouldn't be in this situation).
In a situation where you are legally compelled to buy insurance and essentially all providers do this that's completely wrong.
Does darkpatterns.org have a query API? There ought to be one available so that various tools can be built — for instance, a browser plugin that warns users when they visit websites with known dark patterns.
I've been pondering the recent trend in pop-up ads where, if you try to dismiss one by clicking the tiny, hard-to-find checkbox and you miss, it moves. This forces you to actually pay attention to the ad, to a degree, rather than habitually dismiss it.
These are usually found in ads, or notices like the New York Times puts up notifying you there are only so many free articles left.
I think of this as a gray pattern usually, as it is designed to keep the source of revenue going to fund the site you are currently reading. It's a surprisingly effective innovation.
Some of us always hit back on those pages. No way am I going to read your ad. Try too hard like that, and I will leave the page rather than ever look at it to find the close button.
Pricing pages are another type of UI that could be labeled black hat. Personally, I think the typical pricing "tricks" like anchoring, bundling, and freemium are fine and part of running a business.
This definitely reminds me of the process of deactivating your Facebook account. You really have to read what's on the screen in order to choose the right options.
Where do we draw the line between "dark patterns" and "smart design?" I feel like the Hacker News community could be advocating for a persuasive pricing page design one day and decrying its dark design pattern the next. The obviously evil patterns are easy to avoid, but it's difficult to distinguish between appropriately persuasive and inappropriately manipulative in the grey-middle.
Is the use of a tech shift or form factor change as an opportunity to redefine the product so as to reduce customer freedom or introduce surveillance a dark pattern?
I've thought of this with the mobile revolution. You could never have introduced total device lock down and ubiquitous telemetry so easily in the PC era. There would have been an outcry. But change the form factor...
Quora does a ton of this in their email newsletters to try and reactivate you! Want to unsubscribe? Okay! We'll just make up a new "round up" newsletter and re-subscribe you. So desperate to stay alive I guess...
Isn't part of the problem that "conversion rates" and "engagement" now trump just about every other metric? Anyone know good ways to quantify annoyance rate and off-puttedness?
These are product updates and often times contain security and stability fixes, nobody is trying to trick you here. Apple is proactively trying to keep their users up to date, I see no problems with this.
This is just PR speak to cover for the false dichotomy that you must accept new features (and regressively slower OS upgrades) to receive security updates - or be vulnerable. In reality, Apple could easily back-port security fixes. They are abusing security to force undesired changes to functionality.
Edit: Take a look at how any Stable/LTS Linux distro handles this. If a bunch of hackers can do it on such a diverse software stack, surely the company with the largest cash reserves in the world can figure out how to do it on a software/hardware stack they control completely.
I'm not talking about minor releases here... Avoiding an upgrade to a major release (iOS 10) triggers an unavoidable antipattern: every 24 hours iOS asks if you want to upgrade now or be reminded later... but that's not the worst part... At times it will show what looks like the lock screen, but if you enter your passcode, the OS will auto-upgrade the next time you are plugged into power and on WiFi. If you aren't paying attention, you think you are simply unlocking your phone when you are actually authorizing the upgrade. I have no problem with patch releases, but I find that major releases eventually bog down older hardware.
I think labor unions would be the best option: strike against these employers, DoS their websites, and contact the press. No one will feel bad for the scammers.
My pet peeve: If you add a credit card to the Uber app there is no way to remove it without replacing it with another valid payment method. If you google solutions to this you get third party recommendations to plead with their customer service to have it removed. Seriously?
This customer-hostile approach really needs to be killed.
I suspect there are many interfaces that would reject this number, as it's not a visa, master card or amex (which start with a 4, 5 and 3 respectively).
5105105105105100 and 4111111111111111 might be better alternatives (other test numbers that also pass the luhn check).
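For reference, here's a minimal sketch of the Luhn checksum that all of these test card numbers satisfy (Python chosen just for illustration; it isn't from the discussion above):

```python
def luhn_valid(number: str) -> bool:
    """Return True if a string of digits passes the Luhn checksum."""
    digits = [int(d) for d in number]
    # Starting from the second-to-last digit, double every second digit,
    # subtracting 9 whenever the doubled value exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        doubled = digits[i] * 2
        digits[i] = doubled - 9 if doubled > 9 else doubled
    return sum(digits) % 10 == 0

# The test numbers mentioned above both pass:
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("5105105105105100"))  # True
# A single changed digit fails the check:
print(luhn_valid("4111111111111112"))  # False
```

Note that most payment forms also check the leading digits (the issuer prefix), so a number can pass the Luhn check and still be rejected if the prefix doesn't match a known card network.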
And when using email addresses to test your software always use [something]@test.com instead of [whatevs]@example.com because the people at test.com love getting your random test data.
Thanks for the tip, that may be useful in the future!
I ended up using my bank's "virtual credit card" service to create a virtual CC with a balance of 1 SEK to get rid of my Uber account. Anyway, I think this is shameful of them.
I would be ashamed of this practice if I worked for them. There is no excuse. You can't blindly blame it on A/B evaluations and what ended up making the company the most money. It's simply unethical.
> I would be ashamed of this practice if I worked for them.
There's no shame because there are no consequences. "Oh, you were the guy at Company X that wrote that annoying Dark Pattern Y, huh? Can you walk me through the ethics of that?" - Said no interviewer ever.
I was just discussing this with a former co-worker. [Here in Europe.]
Also: I'm pretty certain that if you had a history of using dark patterns.. that would be seen as basically fraudulent behavior here. Investors would stay far away.
We both shared our distaste for a typical american way of accomplishing personal financial success - fake it til you make it, etc etc.
Maybe this is a part of what sets SV apart from Europe - and why SV keeps winning :). Fraud works.
This might seem obvious, but why do you want to have an Uber account with no credit card attached? Usually SaaS services require you to always have a valid credit card attached. I don't see the issue.
Really, I would be equally happy to have them remove my Uber account altogether, except that's equally cumbersome and non-obvious. There is no immediate way to do it from within the app or the website.
I'm in the EU and I was not able to remove before doing that trick with a virtual CC. What country are you in? (I'm in Sweden.) It's quite possible that officials/bank people in your country made noises about this and they white-listed this particular country.
From Spain but when I did remove it I was physically in the U.K if it makes any difference.
Although, come to think of it, I also have PayPal tied to the account (but you can cancel that from PayPal's interface). Maybe Uber thinks I still have a payment method and doesn't know I'm not allowing the charge from my PayPal anymore.
The problem is that they do work, within the definition the growth hackers are using. They do not measure the people it drives away who never return.
Think of it as similar to the mall kiosk people -- they don't care if you are offended; you probably would never have bought their product anyway.
Also similar to spammers who now send emails that are so stupid that you think no one would ever click them -- except the small number of people who are so naive that they do -- which is what they are trying to select for.
Which points to age old problem of any public network: spam.
I've had the opportunity to speak to quite a few people who use the modal popups for newsletters and the like. They do get used. A lot more than you might imagine - and often enough times to warrant pissing off some small portion of your users.
This may vary with the audience, and my anecdata is N ≈ 5 devs.
I worked for a large e-commerce store. We implemented a pop-up email newsletter opt-in and saw 4x growth in our mailing list. Pop-ups work; they suck from a UX perspective but work nevertheless.
An awful lot of the content on this website shows naivety or a lack of understanding, and in a few cases displays information that is simply untrue.
http://darkpatterns.org/british-airways-distract-from-cheape... - I'm pretty skeptical that this had any malicious intent. It is showing the cheapest option in the column. Often the customer will already know which class they want to fly in, so it helps them to be able to skim down a column looking for the cheapest option.
http://darkpatterns.org/directline-com-july-2010/ - Hilarious. If this website had any idea how much negative impact price comparison sites have had on car insurance in the UK, they'd be praising Direct Line for not lowering themselves to the tactics of all the other insurers (offering an incredibly unprofitable first year rate, then massively increasing pricing when you renew)
I'm sure there are more - these are the most clear ones after skimming through about half of the categories.
The one exception is the British Airways site. That page is very confusing. At best it's awful UI. At worst it was created to trick the user into purchasing a more expensive ticket.
I totally agree that the UI is absolute dogshit, but I don't think I'm willing to definitively call this out as a dark pattern when it definitely could be shitty design.
That's my problem with this website - it would have more impact if it was more honest/genuine and only called out websites which are definitely 100% dark patterns. Or perhaps they could show questionable websites further down under a slightly different heading - to show that they do recognize it's not black and white.