I worked for a startup that employed this strategy. It didn't work - not even close. That's not to say it can't. But I became skeptical. Among the problems I observed were:
1. What's a successful test? We collected lots of data, but in the end it was still a gut decision which product(s) to go with, with all the usual emotional factors in play.
2. Just because people say they want a product, it does not follow that if you build what you thought they meant, they will buy it.
3. Tests can get a lot more expensive than you expect, and you may need to run a lot of them. This is related to #1, since you have no objective criterion to know when you're done testing.
4. This is a mechanistic model for making products, which assumes that you can turn a product "on" or "off" depending on data. But many successful products don't come into the world that way. They are driven by vision and total commitment on the part of the makers. (Think of that guy at Apple whose project was canceled and who snuck into the building for a year to keep working on it.) This is psychologically incompatible with the idea of testing a bunch of ideas, none of which you care about that much, and picking the best one based on some test results. That approach tends to be favored by managerial types who don't much care what it is they're managing - not the classical entrepreneur who's consumed by a passion. Who do you think is more likely to have the kind of persistence needed to carry a product forward?
I came out of this with two conclusions. One was that there already is a mechanism for testing which ideas will have market success: the market itself. So the strategy here is really to beat the open market, which gives some idea of how hard it is to execute.
The second was that this way of working is not for me. I want to work on something that I am passionately committed to because it is a creative expression of my being - as well as something that people will pay for. Sometimes the one has to precede the other.
> Who do you think is more likely to have the kind of persistence needed to carry a product forward?
Someone who is confident that he is building something people want.
You have to make up your mind on whether you're building something people want, or something you want. Either is fine, but you have to walk into this with open eyes.
Everyone is confident they're building something people want. The question is, how do you propose to tell if it's true? Ultimately, the only way to know for sure is to bring it to market.
Measuring sign-ups on landing pages from AdWords did not (in the cases I observed) live up to its promise as a shortcut in this process, for the reasons cited.
Edit: I certainly believe in adapting what one's doing in response to market evidence - e.g. listening to one's customers - rather than stubbornly assuming that one knows what people want. That, however, is not the point of the OP.
I think you're expecting too much from this method. This is a means of checking the pulse of the market. When you approach an injured person to give them first aid, checking their pulse is one of the first things you do. If they don't have a pulse you take appropriate action. But if they do have a pulse, you don't continue checking the pulse again and again, using ever-more-sophisticated pulse-testing technologies. The quality of a paramedic's care is not proportional to how precisely and accurately she can measure your pulse.
This test might tell you whether your marketing is going to be very difficult. If you can't get anyone to click on a teaser for your product you've obviously got a problem, and you might want to refine your pitch a bit before doing anything else. Or at least make a plan that involves a relatively long, slow, profitless period during which you shop your minimalist prototype to people and try to build word of mouth.
You're right that the real size of your market is hard to estimate with this technique. Tim Ferriss goes so far as to recommend that you take potential customers' emails [1] before putting up the screen that clearly informs them, with great sorrow, that you're "out of stock" and offers to email them when you've got product to ship. I think he's right that this is the only way to be sure that the rubberneckers on your website are actual potential customers, but I'm not sure I'd go so far as to do this. (Even though it looks, to my surprise, like the FTC would condone it if you used the right fine print...) And I'm pretty sure it wouldn't work for software. People don't pay for software unless someone has reviewed it. They're not fools.
As for the fact that this model is "mechanistic"... That's why it's a useful spiritual exercise. The tendency to fall in love with your first idea is strong, and the tendency to build the thing that most inspires you -- as opposed to the thing that will cause customers to give you money -- is strong. But it's often easier to make a profitable thing lovable than the other way around.
[1] I originally stated, in this post, that Ferriss recommended taking down credit card numbers as well. After looking closely at this part of his book, this turns out not to be true. The fine print matters a lot in cases like this, so I'm sorry for the mistake.
Well, let's be fair. First, I mischaracterized Ferriss' wording and have corrected my original post: He suggested showing the final price, capturing emails and phone numbers, and asking the customer to click through to a CC-collection screen... but not actually collecting CC numbers. I'm sorry for the error.
Second, Ferriss makes it clear that he understands that many people wouldn't find such a practice to be ethical, even though "it is legal if the billing data isn't captured". (That, by my reading of the FTC rules, seems to be true.)
Finally... I think that any tutorial on direct marketing technique would be shortchanging you if it didn't suggest the possibility of doing dry testing. I get the impression that it's not exactly uncommon. Indeed, the entire personal computer industry was founded by a company that launched their first product -- the Altair 8800 -- by taking money from customers and using that money to fund their manufacturing. That's much farther than Ferriss would have one go, although I think it might still barely be legal if I read the FTC rules correctly. (Which I'm sure I don't. Ask a real lawyer.)
I think getting hands on experience with the marketing of your unborn product is a very healthy thing to do. It will probably influence what you eventually build. And if it doesn't, at least the work is stuff you will need to do down the road regardless.
This strikes me as unethical, and probably illegal in the United States. We have a set of consumer protections called Truth in Advertising that probably apply here.
Your FTC link is really handy. You should read it more carefully. On that page is a paragraph which directly addresses this situation:
Q: Is it okay for a company to "dry test" a product?
A:"Dry testing" describes the practice of placing an ad for a product to see if there is sufficient consumer interest before actually going to the expense of manufacturing the item. Although the Mail Order Rule doesn't specifically deal with this situation, the FTC has issued an advisory opinion that such ads must clearly disclose to consumers the fact that the merchandise is only planned and may not ever be shipped.
Then it references another FTC publication about the "Mail or Telephone Order Rule" (also known as the "30-day Rule"), available here:
The Rule requires that when you advertise merchandise, you must have a reasonable basis for stating or implying that you can ship within a certain time. If you make no shipment statement, you must have a reasonable basis for believing that you can ship within 30 days. That is why direct marketers sometimes call this the "30-day Rule."
And it provides this relevant paragraph:
Dry-testing
Q: We want to sell by mail or telephone a product that is not yet available. Does the Rule apply?
A: It depends. In an advisory opinion, the FTC told a publishing company that it could "dry-test" its merchandise as long as the following conditions were met:
> In promoting the merchandise, the merchant can make no suggestion that the merchandise will be shipped or that customers expressing an interest in it will receive it.
> In all promotional materials, the merchant must disclose all material aspects of the promotion, including the fact that the merchandise is only planned and may not be shipped.
> If any part of the promotion is later dropped, the merchant must notify subscribers of the fact within a reasonable time after soliciting their subscriptions.
> If, within a reasonable time after soliciting their subscriptions, the merchant has made no decision to ship the merchandise, it must notify subscribers of this fact and give them the opportunity to cancel and, where payment has been made, make a prompt refund.
> The merchant can make no substitutions of any merchandise for that ordered.
If these conditions are not met, the Rule applies.
So the answer is: This is not against the FTC's rules, so long as you are reasonably up-front about what's going on. It's particularly easy to comply with these rules if you avoid taking orders or money, but it seems as if you could even work around that, assuming that you're honest and timely and issue refunds to everyone on demand.
[NOTICE: mechanical_fish is not a lawyer and this is not sound legal advice.]
He writes: "Our goal is to find out whether customers are interested in your product by offering to give (or even sell) it to them, and then failing to deliver on that promise."
Again, this feels unethical. If this practice was widespread it would further undermine trust online. It probably violates AdWords terms of service too.
I'm assuming the goal is to see how many people would click on a "Register" or "Buy Now" link, rather than to see how many people would fill in their full credit card and shipping information? The former seems like it would still give some valuable information without being too bad, while the latter does seem questionable.
Because the ads are paid for, there is a natural limit to the amount of this sort of "recon" advertising. If it's one out of 20 ads that I click on, I wouldn't be too upset, and I don't see recon ads making up more than 5% of the overall advertising budget in the world economy.
I'm conflicted about this. On the one hand, it seems like a good idea to test your product ideas in this cheap way. On the other hand, you could easily miss great products that just need a little time to catch on—some products don't lend themselves to an immediate decision based on some web page ad copy. For example, in Founders at Work Joel Spolsky talks about launching FogBugz:
We had no idea [how FogBugz would do]. At the time, you could have told me that this thing was going to sell zero copies, and I would have believed you. You could have also told me it was going to sell $50,000 a month's worth of copies—an equally unrealistic number—and I would have believed that too.
Now I have enough experience to know that almost everything you launch is going to sell $2,000 to $3,000 in the first month, and that's the way the first month of any software product always is, if you do things perfectly. But at the time, I just had no idea what to expect.
The tactic described in the article has limitations, but the underlying purpose of properly assessing the market before building the product is exceptionally important.