I often see this argument, and although I can happily accept that the examples given in its defence make sense, I never see an argument that this multivariate approach solves the problem in general rather than merely ameliorating some of the worst cases (I suppose I'm open to the idea that it could at least get it from "worse than the disease" to "actually useful in moderation").

Fundamentally, if you pick some number of metrics, you're always leaving some number of possible metrics "dark", right? Is there some objective method of deciding which metrics should be chosen, and which shouldn't?




"user trust" is a good one, abeit hard to measure

Rolled out some tests to streamline cancelling subscriptions in response to user feedback, with Marketing's begrudging approval.

Short term, predictably, we saw an increase in cancellations, then a decrease and an eventual levelling out. Long term, we continued to see an increase in subscriptions after the rollout, and could focus on more important questions like "how do we provide a good product that a user doesn't want to cancel?"
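A minimal sketch of that kind of before/after window comparison, assuming a simple event log; the event schema, window lengths, and helper names below are illustrative assumptions rather than anything described in the comment above:

    # Hypothetical sketch: compare cancellation rates in a short window right
    # after a rollout against a longer window and a pre-rollout baseline.
    # Event schema, window sizes, and names are illustrative assumptions.
    from datetime import date, timedelta

    def rate_in_window(events, start, end, kind):
        """Fraction of events of the given kind ('cancel' or 'subscribe')
        among all events whose date falls in [start, end)."""
        in_window = [e for e in events if start <= e["date"] < end]
        if not in_window:
            return 0.0
        return sum(1 for e in in_window if e["kind"] == kind) / len(in_window)

    def compare_rollout(events, rollout_date, short_days=14, long_days=180):
        """Report cancel/subscribe rates before the rollout, shortly after it,
        and over a longer horizon after it."""
        baseline_start = rollout_date - timedelta(days=long_days)
        short_end = rollout_date + timedelta(days=short_days)
        long_end = rollout_date + timedelta(days=long_days)
        return {
            "baseline_cancel_rate": rate_in_window(events, baseline_start, rollout_date, "cancel"),
            "short_term_cancel_rate": rate_in_window(events, rollout_date, short_end, "cancel"),
            "long_term_cancel_rate": rate_in_window(events, rollout_date, long_end, "cancel"),
            "long_term_subscribe_rate": rate_in_window(events, rollout_date, long_end, "subscribe"),
        }

    # Example usage with a made-up event log:
    events = [
        {"date": date(2023, 1, 10), "kind": "subscribe"},
        {"date": date(2023, 3, 2), "kind": "cancel"},
        {"date": date(2023, 6, 15), "kind": "subscribe"},
    ]
    print(compare_rollout(events, rollout_date=date(2023, 3, 1)))

The point of the sketch is only that the same change can look bad on the short-term window and fine (or good) on the longer one, which is the pattern the comment describes.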


So, it's just a process of trial and error, in terms of what metrics to choose and how to weight them?



