
Good multivariate testing and (statistically significant) data doesn't do that. It shows lots of ways to improve your UX, and whether your guesses at improving UX actually work. Example from TFA:

> more people signed up using Google and Github, overall sign-ups didn't increase, and nor did activation

Less friction on login for the user, zero gain in conversions, and they shipped it anyway. That's not a dark pattern.
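
(For the curious: the check behind a claim like "overall sign-ups didn't increase" is typically a two-proportion test. A minimal sketch with statsmodels, using made-up counts rather than TFA's numbers:)

    # Two-proportion z-test: did the variant change the sign-up rate?
    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts, not from TFA: sign-ups out of visitors,
    # control (email-only) vs. variant (Google/GitHub buttons).
    signups = [412, 428]
    visitors = [10000, 10000]

    stat, p_value = proportions_ztest(signups, visitors)
    print(f"z = {stat:.2f}, p = {p_value:.3f}")
    # A large p-value here means no evidence that overall sign-ups
    # changed, even if the mix of auth providers shifted.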

If you're intentionally trying to make dark patterns, it will help with that too, I guess; the same way a hammer can build a house or tear one down, depending on use.




I often see this argument, and although I can happily accept the examples given in defence as making sense, I never see an argument that this multivariate approach solves the problem in general rather than merely ameliorating some of the worst cases (though I suppose I'm open to the idea that it could at least get it from "worse than the disease" to "actually useful in moderation").

Fundamentally, if you pick some number of metrics, you're always leaving some number of possible metrics "dark", right? Is there some objective method of deciding which metrics should be chosen, and which shouldn't?
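
(And part of what makes "just track everything" unworkable, as far as I can tell: the more metrics you test at once, the more you have to correct for multiple comparisons, so significance gets harder to reach. A toy sketch with statsmodels, p-values invented:)

    # Holm correction across several tracked metrics at once.
    from statsmodels.stats.multitest import multipletests

    p_values = [0.04, 0.20, 0.003, 0.65]  # hypothetical, one per metric
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05,
                                             method="holm")
    for raw, adj, r in zip(p_values, p_adjusted, reject):
        print(f"raw={raw:.3f} adjusted={adj:.3f} significant={r}")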


"user trust" is a good one, abeit hard to measure

Rolled out some tests to streamline cancelling subscriptions in response to user feedback, with Marketing's begrudging approval.

Short term, predictably, we saw an increase in cancellations, then a decrease and an eventual levelling-out. Long term, we continued to see an increase in subscriptions after rollout, and focused on more important questions like "how do we provide a good product that a user doesn't want to cancel?"
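
(Roughly how we looked at it, sketched in pandas against an invented event log; the file and column names here are made up:)

    import pandas as pd

    # Hypothetical event log: one row per 'subscribe' / 'cancel' event.
    events = pd.read_csv("subscription_events.csv",
                         parse_dates=["timestamp"])

    weekly = (events
              .set_index("timestamp")
              .groupby([pd.Grouper(freq="W"), "event"])
              .size()
              .unstack(fill_value=0))

    # Cancellations per new subscription, week by week.
    weekly["cancel_rate"] = (weekly["cancel"]
                             / weekly["subscribe"].clip(lower=1))

    # A 4-week rolling mean smooths the post-rollout spike, so the
    # levelling-out shows up against the longer trend.
    print(weekly["cancel_rate"].rolling(4).mean().tail(12))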


So, it's just a process of trial and error, in terms of what metrics to choose and how to weight them?



