Open source software only comes in one edition: awesome. (codinghorror.com)
75 points by fogus on July 2, 2009 | 61 comments



As usual, Jeff mixes different concepts, does some hand-waving and comes to a stunningly weird conclusion. What Microsoft did with the artificial memory limitation is ridiculous, unless someone comes up with a reasonable technical explanation. However, I find it equally ridiculous to jump from there to the conclusion that open source is better because it does no market segmentation.

Let's keep one thing clear: if you're not selling your product, then you won't do any market segmentation for it. You might, however, do market segmentation when it comes to offering support for that product. Does that make you as evil as Microsoft? Is 37signals evil because they offer different plans whose prices probably don't scale linearly with the costs behind them (in other words, they don't get the same profit across plans)?

Of course, Jeff didn't actually use the word "evil", but the implication of moral inferiority is almost palpable. Open source has many advantages (and disadvantages, too), but not segmenting the market is not one of those; rather, it's a consequence of not selling the software itself.


>What Microsoft did with the artificial memory limitation is ridiculous, unless someone comes up with a reasonable technical explanation.

There is no technical explanation. It's purely for business reasons. This limitation seems artificial, but it's no different from any other software product that comes in multiple editions:

A software company (e.g. Adobe, IBM, Oracle, Apple, Microsoft) develops a software product; when the product is ready to ship, they call it "[Product Name] Professional Edition". Then the company cripples the product a bit (sometimes by deleting a few lines of code that enable important functionality) and calls that version "Standard Edition". Then they cripple it even more and call that version "Starter Edition".

All they are doing is manipulating bits to give people an incentive to pay top dollar, just as Microsoft did. Giving everyone the full functionality wouldn't cost them any more, since it's all just extra bits, which cost the company $0 to copy (actually, they need to spend millions of dollars building, testing and maintaining these crippled editions, so they would actually save on costs if they didn't cripple the software). I understand that what Microsoft is doing with the 32 GB limitation feels different and ridiculous, but it really isn't.


This limitation seems artificial, but it's no different from any other software product that comes in multiple editions

Right. For example, if a totally hypothetical individual developed a collaborative knowledge-base system and offered it as a service, they might decide to arbitrarily segment the market based on page views.

Now, one might make the argument that there is actually a cost associated with page views whereas there is no cost to having your software recognize all the memory on a third party's machine, but just between us businessmen we can agree that that is malarkey. For one, the cost of a marginal page view is too small to be measured, but the price to the customer of that page view is about ten thousand dollars in the first year if it was their millionth-and-first page view in January. For another, the cost of the 9 million page views separating tier #2 of the hypothetical service from tier #1 is far, far less than the pricing differential.


> a reasonable technical explanation

Alright, I can come up with a reasonable explanation:

For databases and servers, a lot of bugs and corner cases are only hit by servers that get loads and loads of traffic. Fixing those bugs costs a significant amount of money. By charging more to the high-traffic servers, the cost of fixing those bugs can be justified.

It is also fair, in a sense, because only the high throughput servers really benefit from the extensive testing and stability, so why shouldn't they be the ones to pay for the fixes?


I agree with charging more for the Datacenter edition than for the Standard edition, for those same reasons. But that's not a reasonable technical explanation for artificially limiting the amount of memory your OS can use. I was specifically referring to that issue.


Exactly. And to be fair, Novell and Red Hat have done similar things with SLE and RHEL and, come to think of it, openSUSE and Fedora. Since the code's open, it isn't quite the same, but both offer different levels of service and maintenance, available at varying prices.


Do they need a reasonable technical explanation? It's their ball, and they can take it home if you don't want to play by their rules. Don't like it? Then, run MySQL/Postgres on Linux.

Perhaps avoiding customer anger is reason enough to avoid arbitrary limitations.


Linux comes in more editions than Windows.

All the same price ($0), but you have to think, which was Jeff's point. And you have to think more than for Windows. It's the blessing+curse of diversity.


Do you really believe that the 37signals approach is not effectively the same thing?

Consider the basecamp $99 plan vs the basecamp $49 plan:

100 projects vs. 35 projects, 20 GB storage vs. 10 GB storage

Do you really believe those extra 10 GB and 65 projects cost 37signals $50 a month to deliver to you?

Similarly, do you really believe that the cost of supporting 48 GB of memory versus 32 GB cost Microsoft $1000 per customer to build?

Wolf in sheep's clothing, exact same concept with Web 2.0 patina. Rich customers pay more.

Anyway, my argument isn't really about the money, but about the mental friction. I'm sick of dealing with marketing weasel feature matrices.


Consider the basecamp $99 plan vs the basecamp $49 plan:

100 projects vs. 35 projects, 20 GB storage vs. 10 GB storage

Do you really believe those extra 10 GB and 65 projects cost 37signals $50 a month to deliver to you?

No, I don't. What I said was "they offer different plans whose prices probably don't scale linearly with the costs behind them (in other words, they don't get the same profit across plans)". I acknowledged that the price is not a linear function of the costs and that they get more profit from more expensive plans. My question was "why is that evil?"

Similarly, do you really believe that the cost of supporting 48 GB of memory versus 32 GB cost Microsoft $1000 per customer to build?

I'm sorry if I sound hostile, Jeff, but I read your post before commenting on it -- did you read my comment before replying? I said that the artificial memory limitation is completely ridiculous (unless there's a valid technical reason for it). As I explicitly state in my other comments, I believe that's unethical.

Here's the summary of what I believe, a Cliffs Notes edition of my comments: Having rich customers pay more is not unethical, crippling the features of your product to extort money is.


I'd agree: companies which sell software products by-the-license and companies which sell software products by-the-month are both in, essentially, the same business. They've both got price/feature matrices, even the lowly shareware company, which at the simplest level has a "Trial" and "Paid Version" two-column matrix. This is nothing to get worked up over -- it is a fundamental fact of our industry. We charge money for value, and value exceeds marginal cost by a stupendous amount.

I'm sick of dealing with marketing weasel feature matrices

This line would be funnier if there were an ironic marker or perhaps some indication that it was self-deprecating humor. As it is, it sort of sounds like you mean what you're saying. Which would be... interesting in light of your own business model, announced like two days ago.

http://www.stackexchange.com/


It gets worse in the enterprise market, where they basically charge exactly what you can afford (sometimes a bit more), ratcheting it up as you get more and more locked in. Read about software pricing economics and how you just have to suck it up till you hit the "Fuck You Oracle" point here:

http://blogs.sun.com/bmc/date/20040828#the_economics_of_soft...

Amusingly, it looks like he'll be an Oracle employee since they are now buying Sun.

(Note, this is "just business" as far as I'm concerned, i.e. about 0.7 on the weasel scale. What saddens me is organisations too short-sighted or misinformed to see the trap they are lumbering into because they are too distracted by salesmen with slick PowerPoints, particularly if they are governments wasting my tax dollars.)


> but not segmenting the market is not one of those; rather, it's a consequence of not selling the software itself.

Consequence or not, how can it not be an advantage?


Because market segmentation in itself, as a concept, is not a disadvantage, which means that lack of it is not an advantage in itself.

Like I said, market segmentation in software does not necessarily have to be a bad thing. Consider a hypothetical text editor that comes in two editions. The "basic" edition has features like syntax highlighting, regex search and replace, etc. The "advanced" edition has a feature that allows you to open and save files not just locally, but also via SFTP. When it was first written, the editor didn't have the SFTP feature; the author developed it because some users asked for it. They weren't the majority, but the author thought the idea was nice. Instead of incorporating that feature into the one and only edition, the author decided to offer two editions: a cheaper "basic" and a more expensive "advanced" edition. It was the author's idea of placing value on the effort involved in writing the SFTP feature. The majority of users can go on happily using the "basic" edition without paying extra for it, while those who really want SFTP can pay for it.

One way of looking at this market segmentation is to say that the author was greedy and imposed an artificial limitation on one of the editions in his product. Another way of looking at it is to say that the author offers two products that satisfy different needs at different costs. If we were talking about two products from two different authors, we would be comparing their features and prices quite naturally, without complaining about market segmentation.

However, when you're offering exactly the same product, with no differences, for two different prices, then you're just being greedy and taking advantage of your customers (which is commonplace nowadays and which most people, myself included, just accept). Close to that is the situation in which you slap a truly artificial restriction on a cheaper edition of your product, not through the lack of a feature, but through a deliberate crippling of that feature (e.g. the five-dollar edition can open only up to 3 files). The exception, of course, is a crippled free edition which allows potential customers to try your product before deciding whether to buy it. In that case, you're just being annoying ;)

That, in summary, is why I believe that market segmentation is an advantage, within certain ethical bounds.


To me, market segmentation is different when it comes to software.

In your text editor example, the author can add an SFTP feature, but once it's coded it's coded. There's no extra cost to ship out all software with that feature after it's coded, except perhaps some extra support costs. (I imagine those would be pretty minimal.) At that point, I only see two reasons to segment the market from the users' perspective:

1) To not confuse your less advanced users with more features. This is a pretty weak reason, IMO.

2) To make sure that the people who actually wanted the feature are the ones paying for it. After the cost of development is paid for, this reason goes away.

Beyond these reasons (unless I'm missing something, which is entirely possible), releasing different grades of software just hurts your users. It makes you more money, though, so maybe that's enough justification.


I'm not sure how having different editions of the same software hurts users, but maybe I'm just applying a different grade of the same meaning to the word "hurt" ;)

I suppose that the author could let users drive new features by making a list of proposed features and having users donate money towards the implementation of those features. I'm pretty sure I've seen that somewhere. But the segmentation seems to be the easier way.

Going back to the editor example, suppose you have two editors from two different authors; one is cheaper, the other has an SFTP feature. The author of the one with the SFTP feature put a higher price on the overall development effort than the author of the cheaper one. And he's charging you more than the other author for every copy, even though he coded it only once. Why is this so drastically different from having two editions of the same product from the same author?

To take your logic a step further, why is the author charging money for every copy? Once he coded the whole product, it's done and there are no per-unit shipping or manufacturing costs. The alternative would be to somehow get the money, upfront, that would cover the development costs and net him some fixed profit for it. Who knows, maybe if we could switch to that business model piracy would no longer be an issue. But until then, as long as charging per unit of software is accepted practice, I don't see a difference between two editions of the same product and two products with different features. (And I'm not talking about unscrupulous feature crippling here, that's different)


It increases the price to the users of the "advanced" version, because by keeping 2 versions the seller has to collect AT LEAST enough extra to pay for the increased overhead costs of dealing with 2 versions.


>There's no extra cost to ship out all software with that feature after it's coded

By this logic, chip makers should only sell their most powerful processors, because the actual manufacturing cost is not really the reason for the price difference between a top-of-the-line i7 processor and a different one with lower specs.

You can look at it from a different angle: let's say you have a powerful program and you are charging $1000 for it (e.g. Photoshop). Your market research has shown that a lot of "casual users" are looking for a product like the one you are selling, but are not willing to pay the $1000. So you develop a version without the more advanced functions and sell it for $200 (e.g. Photoshop Elements).

It's not really about "not confusing the user", but about offering another option. How does this hurt the consumer?


I think that actual manufacturing costs are often the reason for price differences between a top of the line and one with lower specs. The processor dies are huge and costly, and throwing out non-perfect dies is undesirable. Due to the intricacies of silicon manufacturing, the same design can have different maximum speeds on different dies, and due to defects, the same design can have different operational sections on different dies. This leads to speed grading and the disabling of cores/caches, and hence a whole line of chips from one design.


I sincerely think the aim of his posts is to please his audience and get attention.

The posts are more and more noisy, pointless and irrelevant. It looks as if he's forcing himself to write an entry regularly, even when he's got nothing to say.

You can always sum up a post as "X is very bad except when it's not" or "Y is very good except when it's not".


For a serious treatment of this subject, I'm going to keep trotting out "Information Rules" by Varian and Shapiro. (Yeah, I also have a summary "somewhere", but go buy it, it's worth it!).


It's not "ridiculous"; the "technical explanation" is because it makes more money. It's just business.


He didn't say that open source is better; he just said that you always get the best, 100%, full edition of it.

However, this might be not true for dual-licensed software.


He implied it when he said 'open source was awesome' and suggested the Microsoft offering was not.


A company needing over 32GB of server RAM is getting more value from it than if they only needed 1-4GB RAM. It's a measure of customer value created.

I suspect Jeff is upset mainly because he didn't know. 32GB is an arbitrary-seeming boundary that wasn't signaled clearly enough (to him). He breathed life into a plan to live his ideal of RAM-cheap, coder-dear. He went to the trouble to buy RAM. And to pay for it. And to install it... All the while expecting that it would just work.

He feels ripped off, tripped up, cheated, deceived and abused. He didn't get what he paid for (specifically: what he thought he was paying for). This is a problem of expectation management. And Jeff is quite right to blame marketing. And he's right to punish them. As his article has. Go Jeff!


Clarification: I think my comment seems a bit sarcastic, but it wasn't meant to be. I think marketing should be as up-front and clear as possible about what customers are getting, instead of hiding it. Publicity (like Jeff's response) is one of the few pressures on them to do so.


Clarification 2: I, like Jeff, also have a pricing matrix for my product. Sometimes people don't read it properly and are confused - although I'm nice about it, I tend to feel unsympathetic. This is because it's hard for me to really see from someone else's point of view, since we only actually ever see from our own...

It reminds me of the founder/maintainer of a very successful programmer's website who said, "People will not read what you write [for a web-app GUI], no matter what you write, or how you write it". His frustration was maxed out. It occurred to me that one solution is to only provide buttons (i.e. actions) for what you actually want users to do. This constrains their choices to only valid ones. This amounts to a kind of "wizard" - but they've been very successful, so this isn't a bad thing. It also obeys the "don't make me think!" imperative, provided the GUI is designed well... It also helps if the pricing options are designed well. In fact, I believe it is worth sacrificing profit/benefit to the customer (or other efficiencies) if by doing so, you can make the choices clearer and simpler for the customer (e.g. clarity can come from obvious patterns in the pricing, even if they totally don't fit the true demographics).

To apply this to successfully communicating the limits of each pricing option, where "successfully" means that the customer hears what you say, you could have a button for each limit. By consciously selecting the limit, the user would know what it is. Unfortunately, this can lead to frustrating backtracking when one choice constrains other choices in unexpected ways. One can muck around with different orderings of the choices, but I think that ultimately, the only real solution is to make the options match what the user would expect.

That is, change your pricing model to accord with what the user expects. Not vice versa. This is a kind of "pricing positioning", where the goal isn't to sell more, but to communicate better, by associating your product with a category, and then following the standard pricing approaches for that category - whether they suit you or not. The purpose is to communicate, and doing it well makes everyone happier.

For me, it's intrinsically valuable: successful communication is a joy in itself; miscommunication is suffering.


> "Already, I'm confused. Which one of these versions allows me to use all 48 GB of my server's memory? ... Just try to make sense of it all. I dare you. No, I double dog dare you! "

Dare accepted. It's actually right there, 1 click away from the page Jeff linked to: http://www.microsoft.com/windowsserver2008/en/us/compare-spe...

The answer to his question is Enterprise, Datacenter, and HPC.


For someone who started computing on his father's Osborne 1, hearing people casually mention that they upgraded their server to 48GB induces something like vertigo.

A 768,000-fold increase in main memory. Christ, we've come a long way.


And to think, hard drive seek latency has only improved by 2x or 3x.


The Osborne 1 didn't even have an HDD! You think HDs are slow? Try 5 1/4" floppies for latency.

Anyway, SSDs soon. I will not be sorry to see the back of spinning magnetic media of any stripe.


x3,000,000 for me (ZX81 with 16 K RAM pack - but if I'd stuck with the standard 1K, it would be... x48,000,000).


One version of Moore's Law predicts a x1000 increase in 15 years (x2 every 1.5 years; 2^10 = 1024).

48GB isn't standard today. The ordinary amount in an average desktop PC is about 2GB. Therefore, in 15 years (2024), we can expect an ordinary, standard desktop PC to have 2TB of RAM.
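
Back of the envelope, as a quick shell sketch (assuming the 2GB baseline and one doubling every 1.5 years, as above):

    # 15 years / 1.5 years per doubling = 10 doublings
    # bash arithmetic: 2 GB today, doubled 10 times
    echo $(( 2 * 2**10 ))   # prints 2048 (GB), i.e. ~2 TB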


Heck, thanks to github, OSS spawns a new "edition" as fast as someone can click the fork button.

Not that I'm complaining! It's nice to be able to find a project abandoned by the original dev, but picked up by a 1/2 dozen other people.


This is like complaining that your student version has the artificial limitation that you can't use it for commercial use. If you need to use more than 32GB of RAM, something only heavy-duty commercial servers would need, then you buy the heavy-duty commercial version of Windows Server.

Until now you paid a price that was, in essence, subsidised by the higher-end commercial users - now you are one of them (Congratulations!) and get to "subsidise" standard versions for the rest of us.

It's the same theory as scaled taxation (which is a different argument, since government has different responsibilities than business) - think of yourself as being in the highest earning bracket now.


Maybe the awesome foundation will give Jeff an awesome grant to upgrade his windows server to awesome edition. Isn't it an awesome day on HN?


That's hardly the case. In Jeff's words, I dare you, double dog dare you to count the number of Linux distributions. Each version satisfies a different customer target, same as each version of Basecamp satisfies a different type of company.


The point is that features in a Linux distribution are not cut out because of "consumer surplus". They are customized to make certain tasks easier for some people ... but the various distributions don't have artificial limitations.

Say you are tired of KDE in Kubuntu; you just "apt-get install gnome-desktop". Tired of Apache? Then "apt-get install lighttpd". Defaults are not limitations.

And seriously, most people pick one distribution and stick with it. I'm using Debian for everything ... work servers, laptop, home station. OK, to be honest the home station has Ubuntu on it, but that's still 95% Debian.

And, if there's a feature missing (like hardware limitations), you are most likely to fix it with a kernel upgrade, or with a patch someone on the mailing list gives you for free.


I strongly agree... I think the contention of the OP of this thread is not in the same spirit as Mr. Atwood's. I think this Coding Horror post was lamenting the existence / necessity of the "pricing tiers" common in commercial software.

Linux distributions, and even versions of Ubuntu itself, are not organized in a "tier" layout, where Tier N has X more features than Tier N - 1, or Tier N's features have X more capacity.

Linux distros and Ubuntu versions, on the other hand, are organized horizontally: They don't have better features (per se), but different features.


Not only that, if you select the wrong edition, switching to the correct edition costs less with Linux than with Windows.


Let's just look at the list of Ubuntu editions, since that's directly comparable to Windows or Mac OS X.

Ubuntu
Kubuntu
Edubuntu
Xubuntu
Lubuntu
Fluxbuntu
Mythbuntu
Ubuntu JeOS
Ubuntu MID
Ubuntu Netbook Remix
Ubuntu Studio


1) Not all of those are official editions.

2) None have features deliberately turned off.

3) Most, if not all, differ only in which optional components are installed by default. If you change your mind you can simply add whatever you're missing - you can install Ubuntu, then add the bits that make it Kubuntu, then add Xubuntu, etc. (see the sketch below).
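
For instance, a rough sketch (assuming the usual flavour metapackage names, kubuntu-desktop and xubuntu-desktop; check your release's repositories):

    # starting from a stock Ubuntu install, pull in another flavour's default components
    sudo apt-get install kubuntu-desktop   # the bits that make it Kubuntu
    sudo apt-get install xubuntu-desktop   # the bits that make it Xubuntu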


But AFAIK they all cost the same price?


Well, I think the point of the article was not that segmenting made the product more expensive, but rather, you now have to sit down and spend some time carefully deciding which version you need.


This post does a very effective job of highlighting rule #1 of market segmentation: every customer should be almost immediately sure which segment they fit in.

For example: is this computer for business or personal use? Do you need the "super-duper full-length CGI movie" version, the "regular everyday use" version, or the "I only browse the internet" version? You just can't push it much further than that.

Vista failed miserably on this front, which surely accounted for more than a trivial decline in upgrade sales.


> If I choose open source, I don't have to think about licensing, feature matrices, or recurring billing. I know, I know, we don't use software that costs money here, but I'd almost be willing to pay for the privilege of not having to think about that stuff ever again.

The grass is always greener on the other side, isn't it? Don't get me wrong, FOSS has upsides, but freedom from segmentation isn't one of them. If time really is the cost rather than the cost of the software, FOSS isn't any better. The only difference is that nobody is artificially segmenting the market. The market artificially segments itself.

Think about it: do you use Debian? Fedora? CentOS? Or do you go with someone you can buy a support contract from, like Red Hat or Novell or SuSE? Or would you rather go for a BSD flavor of *nix? OpenSolaris seems to also be gaining popularity; do you choose it?

And once you get that sorted out, what webserver do you choose? Apache? Lighttpd? Nginx? Comanche? Are you doing Python? Then you also should consider CherryPy.

So, to summarize... sometimes I'd be willing to pay money to know that no matter what edition I choose, I'm still buying Windows.


Segmentation is a good thing also from the consumer/buyer viewpoint, but "limitations" like this max O/S memory limit are absolutely spinelessly crippled blatant rip-offs, and have no correlation with the actual work required to make those features.

The same goes for the stupid restriction of the maximum of ten simultaneous network connections (can't remember whether it was inbound or both?) imposed in some basic Windows (Home? Professional? vs. Server?) editions.

An operating system shouldn't limit the capabilities of available hardware. They could reasonably decide to not support some heavy-duty server hardware at all except in the server edition. Or they could give the server edition a better I/O scheduler (that can handle large loads) or something to make it a must if you want to use Windows on a server.

But if I have a network adapter in my box I expect to be able to max it out in all imaginable ways regardless of the O/S "edition". Or use as much memory as I can physically fit in my box.


So why is Jeff Atwood using Windows instead of Linux to run his database server? Not complaining, but presumably it delivers something that the open-source alternative doesn't.


He uses SQL Server, of course. The root cause is that Joel used to work for Microsoft. Awesome.


Not just that, but Jeff is a .Net programmer (IIRC, S.O. is written in C#).


And he spent a good chunk of his career as an evangelist for a Microsoft shop. (Not Microsoft itself but a "Microsoft partner")


That's nothing. When you want extra processor resources on an IBM iSeries, you only need to enter a code (after paying IBM big €€), and voilà, your server will use some extra cores that were always there. I have heard Sun does the same, and probably others too.

All in all, it isn't that bad. At least there's an easy upgrade path. You probably can't do that with Windows Server.


Applying the googlenomics concept I just read in Wired, aren't auctions the next step in pricing's natural selection process?


Can auctions function for goods that are not (practically) scarce?


I was thinking of an auction over a specific time period to set price levels. I think it would also be worthwhile to re-run/revisit it later, to see if the same price levels still apply (see the earlier articles about iPhone app price changes).


can we please stop using the word "awesome" already?

:)


I upvoted you because we really should stop, but I still can't bring myself to remove "awesome" from my vocabulary. Modern slang has made it the apotheosis of coolness; its appeal is irresistible to me.

In my own defense, whenever I say "This terrifying thunderstorm is awesome," I mean it in the classic sense as well as the colloquial. :->


I say, pick your battles. "fail" as a noun rankles far more. Killing "(epic) fail" would be truly awesome. I'd even be willing to bring back "tubular" in trade.


What if we change the slang to "flail", which is at least both a noun and a verb?


What an awesome idea ;)


atwood really likes that word



