Did the Microsoft Stack Kill MySpace? (highscalability.com)
177 points by dolinsky on March 25, 2011 | 195 comments



".Net programmers are largely Enterprise programmers whom are not constitutionally constructed to create large scalable websites at a startup pace."

This is such BS, I can't even read it without physically cringing. I work for a ~400-person business that stands up .NET websites at breakneck speed, and we do it well. People who blame their problems on technical infrastructure decisions almost ALWAYS do so because it's easier than addressing the true underlying problems.

"The biggest problem was they didn't allow the developers to have staging or testing servers -- they deployed on the production servers on the first go-around... And all this with no change management or control. No versioning either."

Oh wow. Wow. I hereby revoke my previous statement. These are some God-awful infrastructure decisions. Version control and a staging server are the most basic necessities for a scalable dev project. I even set them up when I'm working on a personal project, alone.


As they say, "A bad workman blames his tools." It's really hard to say that they failed because of Microsoft tools. The fact is that they made bad decisions in how they would implement and use their tools. There's nothing to prove they wouldn't have done the same dumb things with open source tools.


Precisely. How many terrible, awful, untouchable projects are out there that run on top of a LAMP stack? wordpress.com comes to mind with their thousands of servers dedicated to basically caching. I strongly believe that a terrible hacker will manage to foul up even the most beautiful language/framework, while a good hacker will be able to write neat and beautiful code in even the most obtuse and awkward languages.

As for their actual infrastructure: having a version control system and a build system (i.e., never deploying directly from the VCS) is a must for any size team, and sometimes even for single developers. Thankfully, nowadays most people do use VCSs, but build and deploy systems are largely glorified shell scripts at best.


Exactly, a tool (technology-based or otherwise) is just a multiplier for the person wielding it. It's the people wielding the tool that make the difference.

Management is not excluded from this either. They have to:

1. know, in depth, the business they are in and the tools they use (or could use) to achieve their goals

2. listen to what their team tells them and provide an environment where they can do the job right

It sounds like there was a lot of micro-management and putting out fires in this case.


.Net programmers are largely Enterprise programmers whom are not constitutionally constructed to create large scalable websites at a startup pace.

For the record, I think it's irrelevant.. but absolutely I think it's true. It doesn't mean .NET isn't capable, or that lots of .NET programmers aren't capable...

Just that .NET is way more popular in the enterprise than on the web, and that the work that gets done at most of those enterprise shops wouldn't fly on large sites.

It may be popular to pretend that the .NET camp (and to a slightly lesser extent, the Java Enterprise one) doesn't bend this way statistically, but they do.


> Just that .NET is way more popular in the enterprise than on the web

It is a fallacy to think that the web is the only large network that requires scale.

Goldman Sachs alone stores more data than the entire web

The Visa network has had 4 seconds of downtime in decades

Most airline systems took decades to engineer


True, but different networks have different constraints and require different solutions. Data warehousing, which is likely what most of GS's data is doing, is very different from trying to scale a near-real-time data access system.

Payment processing only deals with one type of data: money. This gives you as many shortcuts as the number of constraints it imposes.

Airline control and departure systems are probably as close to a typical modern web app as you'd get from your list.

My point is that while there certainly are engineers that have worked with high scalability issues without ever touching the web, they have also likely been solving slightly different problems.

P.S.: Other systems that require high availability but are not the web: telephony and cell communications, broadcasting and doomsday devices.


According to this link (after a few seconds of googling), Visa's network was down for 8 minutes in the five years ending 2001. Do you have citations for any of your claims?

Edit: Oops, here's the link: http://www.forbes.com/global/2002/0916/038.html


I heard the Visa figure at the Computer History Museum years ago; it covered Visa's first decades, so that may have changed.

Goldman Sachs told me, when I was writing a proposal for them years ago, that they have over a petabyte of data stored. The web is 80-200TB depending on who you ask. A single department there, responsible for program trading, would alone have an entire copy of the web (and parts of the deep web, all of Twitter, etc.) since they construct those whack trading apps that suck everything up and analyze it for signals. If there are any quants on here they could tell you more about this.

The airline system I was referring to is SABRE. Early IBM was built on those rollouts, and we are talking about the '50s. Very interesting story; lots of references from this page: http://en.wikipedia.org/wiki/Sabre_(computer_system)#History


I think you're grossly underestimating the size of the web. Flickr alone was storing 15TB of new photos a month back in 2006: http://blog.forret.com/2006/10/a-picture-a-day-flickrs-stora...


Sorry, I just re-read my post; it should have been past tense. I have been up for too long.

It would be interesting to get more recent figures and compare again, because I do know that the investment banks hoard a lot of data.

In terms of just documents, they would easily store more than the web


It was 8 years ago - the numbers for both have undoubtedly increased.

(it was ~10x the web at the time)


One of my professors told me YouTube is hosting approximately 100TB of new videos every month, so I think 200TB is off by a very long way. If 200TB were the case, I could afford to buy disks to store the entire web with my yearly salary...


The web is only 200TB? That seems wrong by orders of magnitude to me. Maybe you mean Google's publicly indexed web?


well yes, indexed web - otherwise it would be all the data everywhere bar a few nuclear facilities.


> Goldman Sachs alone stores more data than the entire web

Why would you believe that Goldman Sachs stores more data than can be reached by HTTP?! YouTube alone amounts to petabytes of data. http://beerpla.net/2008/08/14/how-to-find-out-the-number-of-...


I completely disagree with his calculation, and GS was storing over 1PB 8 years ago, when the web was 80-150TB.

the point is that it is frikkin big - and most investment firms now scan the entire web for signals


Really great points. Do they add features to those systems rapidly? Because I think the original premise hinged on the combination of those two desires.


They don't have to add features quickly because they spend years designing and building them, based on a spec that somebody else has spent years designing, all so that when it launches you can go online and book an airline ticket or make a transaction or whatever other essential daily activity, and never have to think about what is taking place, let alone see an error screen or a 404.


Gotcha. Sounds like an environment not likely to breed folks who are constitutionally constructed to create large scalable websites at a startup pace.


No, what I am saying is that it is unusual for them to get themselves into a situation where they have to suddenly code their way out of a hole at 'startup pace' (whatever that means).

Startups don't have a monopoly on working hard or working fast, and it is arrogant to generalize about both .NET and enterprise developers in that way, since we are all in one way or another standing on the shoulders of earlier enterprise work. (Where do you think what we call 'NoSQL', and think is new and groovy, was first used?)


Totally fair. First, apologies if it sounded like I was suggesting startups have a monopoly on working hard and fast. Of course I never meant anything like that at all.

Second, arrogant really isn't fair because I certainly never gave an assessment of my own abilities or value.

Third, totally.. .NET stands on the shoulders of the same stuff the infrastructure that runs most of the internet stands on. It's just that .NET doesn't run most (or even lots and lots of) the internet.. so to suggest that the .NET development community is less likely on average to be ready to build Facebook doesn't seem like blasphemy.

That's very different from suggesting the .NET camp isn't full of awesome, hard-working developers... but really, I apologize if it comes off that way. Definitely don't mean to suggest it in the least.


There's a flaw in your logic which makes it seem like blasphemy. It's not enough to say that there aren't many .NET examples; you have to show that the proportion of .NET people who do good .NET work is lower than the proportion of PHP/etc. people who do good PHP/etc. work. Compared with PHP in particular, I'd guess it's easily true that a higher proportion of .NET developers would be better equipped.


Not good work. We're not talking about good work. We're talking about fast work that runs on the web, works at scale, and allows the organization to pivot easily. You certainly won't catch me suggesting PHP is some sort of awesome language.. it's not. But it's a language that lives on the web, and if I were building something on the web and had my pick of a random .NET developer and a random PHP one, I'd take the PHP one... because chances are the .NET developer has never deployed something on the internet, and chances are the PHP dev has.


ahh sorry, I misread your tone - might be because I have been up for so long (working at startup pace ;))


No apology needed! Tone's tough, and I'm particularly sloppy in conveying it carefully :)


Do you use .Net or are you familiar with the .Net community? Or are you basing your opinions on your lack of familiarity with it?


I worked professionally on .NET very early in its existence, for a couple of years. Before that I was a COM developer for a couple of years... since then I've done 8+ years in Java shops, and now I'm a full-time Python dev. Most of my career has been in the enterprise space. The last couple of years have been in web.

Most of my familiarity with the .NET community comes from the fact that I live and work in San Diego, in the healthcare IT space... if that combination doesn't make up the largest concentration of .NET developers in a particular area and arena, it must be close :)

So most of my experience with the .NET community comes from knowing developers and working with .NET-turned-Java folks... It's true, I haven't done .NET in a very long time. That said, I made the same assertion (to a slightly lesser extent) about the Java Enterprise camp.. and I've got loads of experience with that :)


So PHP developers are "constitutionally constructed to create large scalable websites at a startup pace"? (Facebook had to do a TON of stuff to make it scale.)

Ruby developers? (Ref: Twitter)

Java developers?

I think HTML programmers are the only ones who are constitutionally constructed to create large scalable websites at a startup pace.


Haha. Excellent point about HTML programmers.

Also, it's probably fair to say that as a whole programmers are 'largely' not used to thinking or working in a way that's productive at scale. I didn't argue otherwise.

I just suggested that the percentage of developers in the .NET community who are aligned with the values, knowledge, and experience essential to scaling a large site.. is smaller than on many other platforms... Yes, I'd argue smaller than all of the ones you mentioned.


> Oh wow. Wow. I hereby revoke my previous statement. These are some God-awful infrastructure decisions. Version control and a staging server are the most basic necessities for a scalable dev project. I even set them up when I'm working on a personal project, alone.

Exactly. This seems to follow the Kevin Rose formula: make horrible decisions (or be entirely absent from decision making) without any understanding of the technology, then blame your developers and technology choices ("Digg v4 failed due to Cassandra").

If it weren't for Facebook's success, I can bet you'd see people blaming PHP for Digg's failure: prior to Facebook, the P in LAMP also stood for Perl (and occasionally Python), and LAMP wasn't "universally" considered proven (unlike J2EE + Oracle or .NET + SQL Server). Nor has Facebook been even remotely close to a "vanilla" LAMP site (since at least 2005), with many mission-critical subsystems also being built in Java and C++.


> prior to Facebook, P in LAMP also stood for Perl

The P in LAMP has been associated with PHP for much longer than the existence of Facebook.


I've always seen it stand for Python, Perl or PHP -- with many shops specifically stating "P for PHP" or "P for Perl". Yahoo didn't settle on PHP vs. mod_perl (as a replacement for a C-based template system filo built) until the early 2000s and (when I was there) still had many Perl-based services in production.


When I interned at SBC/Yahoo DSL around the mid-2000s, they used the LAMPerl stack quite extensively.


I think I remember around 2005 being surprised at hearing PHP stand in for the P, before that it had always been Perl. Though it probably had to do with the circles you ran in.


Maybe. We did LAMP before I heard the term, with a site in '98, but then it was definitely Perl. I've understood it as mainly Perl, and later PHP or Python as possibilities.


Stackoverflow uses the .NET stack and scales just fine, using less hardware as well.


Plenty of Fish served 2m pageviews / hour with one server.

http://plentyoffish.wordpress.com/2007/02/09/aspnet-and-iis-...


Don't put too much faith in what Markus says. He lies about everything. Constantly.

I think he thinks it's a strategic advantage to understate the work and resources he's invested into pof. (Makes for a good marketing story and newbies think building a pof clone is easy so they waste lots of money trying.)

Right about the time that blog post was written peer1 had a marketing video on their site where there was a guy walking about a data center being interviewed by someone off-camera. As they walked around the guy waved at a few racks of servers and said these are plenty of fish's servers, walked further, these are [some other big web site].. And from that video it was clear there was no way he was only using 1 server. There were lots of servers in use.


POF uses a CDN, which runs on multiple servers; nobody has ever claimed otherwise. The claim (that I haven't seen disproven) is that it's only one application server and one database server.


Stackoverflow has a tiny fraction of the traffic MySpace had when it was relevant. I bet it has a fraction of its traffic now.


StackOverflow's architecture these days is not just Microsoft stack. They are using Redis and whole load of other OSS tech to get things working. Here it is interesting to note that this is what they started with: http://blog.stackoverflow.com/2008/09/what-was-stack-overflo... and this is what they are currently using http://meta.stackoverflow.com/questions/10369/which-tools-an...

I find it sad that these days MS is lagging behind. It is difficult to get cache frameworks and other infrastructure working with the MS stack if you've got a high-performance website.


not just Microsoft stack

The Microsoft stack is not always just a Microsoft stack any more, e.g. it has jQuery out of the box. Of course you mix and match when you get to the high end. The idea of the MS-only shop is not as true as it used to be; many people are more pragmatic.

I find it sad that these days MS is lagging behind

Why, because MS didn't supply every single piece of server infrastructure software that So use, just the main ones? That's an odd definition of "lagging".

MS have a distributed cache called "Windows Server AppFabric" (formerly "velocity"). I don't know that much about it.


I remember Velocity and articles on it initially. Thanks for the help. I am looking up AppFabric now.

See, I prefer sticking with one flavor of tools because it's easier for developers to adjust. TBH, MS does supply almost everything from the ground up. I only had to look elsewhere for advanced distributed caching frameworks. In fact, before switching to Amazon EC2 our old datacenter was running MS VMM, and our stack still didn't have anything other than MS software.


MS does supply a product in each category, but most MS dev shops that I have seen will more often than not be using some of: SVN instead of TFS, NUnit instead of MSTest, Castle or Ninject instead of Unity, NHibernate instead of EF, etc. And targeting Firefox/Chrome with jQuery. As far as I know there are no clear leaders in the distributed cache niche, and there is a fair amount of interest in NoSQL stores like Mongo, Couch and RavenDB.

Where the open source choice is more functional, cheaper or just more familiar, it often gets used instead. This is good.


It is difficult to get cache frameworks and other infrastructure working with the MS stack if you've got a high-performance website.

How so?


All these new frameworks are designed to work with OSS tech. So let's say some better NoSQL platform comes out; it supports OSS technologies first. .NET comes later, and sometimes it is difficult to decide which client library to pick to connect to the NoSQL service. The OSS client libraries mature early and the .NET libraries take a little time.

Connecting OSS tech and MS stack is getting easier but still leaves you with a lot of uncertainty.

This is an example: http://wiki.basho.com/Client-Libraries.html These guys have an Erlang client but there is no support for .NET.



TBH, if you are running .NET and SQL Server you don't really need a cache like Redis or memcache, since SQL Server has had an in-memory query cache built in for 12+ years now.


Thinking that would help is a mistake. Take it from someone who has pushed SQL Server 2008 R2 to its limits.

Cache frameworks are absolutely necessary. The whole idea is to avoid a SQL Server hit and return a cached data object in memory. I think Stackoverflow is the best case study here.
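
For anyone who wants to picture what "avoid a SQL Server hit" means in practice, here's a minimal cache-aside sketch using .NET 4's built-in System.Runtime.Caching (an in-process cache; a distributed cache like memcached or AppFabric follows the same read-through pattern). GetTopProfiles and LoadTopProfilesFromSql are made-up names, not anyone's real code:

  using System;
  using System.Collections.Generic;
  using System.Runtime.Caching;

  public class ProfileCache
  {
      private static readonly MemoryCache Cache = MemoryCache.Default;

      // Cache-aside: return the cached copy if we have one, otherwise
      // hit SQL Server once and keep the result in memory for a minute.
      public IList<string> GetTopProfiles()
      {
          var cached = Cache.Get("top-profiles") as IList<string>;
          if (cached != null)
              return cached;

          IList<string> fresh = LoadTopProfilesFromSql();   // the expensive call
          Cache.Set("top-profiles", fresh, new CacheItemPolicy
          {
              AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(1)
          });
          return fresh;
      }

      private IList<string> LoadTopProfilesFromSql()
      {
          // Placeholder for the real ADO.NET / ORM query.
          return new List<string> { "tom" };
      }
  }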


Same here. Considering what can be done with careful tuning of hardware and database structure, I can tell you SQL Server's limits are very high, but nowhere near whatever MySpace needed.

SQL Server is the third best Microsoft product. Right after their natural keyboard and their mice lineup.


I too work with sites hitting high visitor count but I have to agree there with the article. Most .Net people I've interviewed think page load speed doesn't matter. They write code to satisfy the requirement and are quite good at it, but it ends there.

Remember when people used ASP.NET Web Forms and it was hard to get rid of viewstate in the rendered page, and only the people who knew the internals of the platform well could fix it? It drove SEOs mad and they started to recommend staying away from Web Forms.
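
(For what it's worth, the fix itself was close to a one-liner once you knew where to look, which is sort of the point. A sketch with made-up page and control names; the hard part was knowing which controls could live without viewstate, since anything relying on postback state breaks:)

  using System;
  using System.Web.UI;

  // Web Forms code-behind: switching off the __VIEWSTATE blob that
  // otherwise gets serialized into every rendered page.
  public partial class ProfilePage : Page
  {
      protected void Page_Init(object sender, EventArgs e)
      {
          EnableViewState = false;                   // the whole page
          // someHeavyGrid.EnableViewState = false;  // or just one control
      }
  }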

I agree with you on the staging and testing servers. Not having them is planning for disaster.


I too work with sites hitting high visitor count but I have to agree there with the article. Most .Net people I've interviewed think page load speed doesn't matter.

I find this shocking and somewhat unbelievable. My wife, who isn't a tech person at all, thinks that page load speed matters (she just called, so I asked her) -- and cites Gmail as an example of a page that takes too long to load (I think that "loading..." indicator actually brings their load time to the forefront, although it really isn't that long).

I just have trouble seeing a .NET developer saying "for high volume sites page load speed doesn't matter", when most people who aren't in the tech industry would concede that it does.


It does but I think you misunderstood the point.

Most .NET developers are working on intranet sites. They don't have a problem with large-footprint pages.

When they move to internet and public-facing websites, they are newbies. It takes them a while to adjust to the way internet sites are written. SEO optimization, CDN usage, Ajax calls, etc. are far more important on an internet site than on an intranet site.

Imagine using an UpdatePanel to code an internet website. It would create enough junk JavaScript to delay a page load, but it works fine on an intranet. To fix this, jQuery or some other JavaScript framework must be brought in.
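
To make the UpdatePanel point concrete: the usual cure is to expose a small endpoint and call it from plain JavaScript/jQuery instead of round-tripping the whole form. A rough sketch of the server side (MessagesController and the unread-count example are invented, not anyone's production code):

  using System.Web.Mvc;

  public class MessagesController : Controller
  {
      // Returns a tiny JSON payload instead of the page-sized partial
      // postback an UpdatePanel would generate.
      [HttpGet]
      public JsonResult UnreadCount()
      {
          int count = 42;   // placeholder for the real lookup
          return Json(new { unread = count }, JsonRequestBehavior.AllowGet);
      }
  }

The client side is then a one-line $.getJSON call instead of a full postback.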


Well, that's why if you're looking for a business model, here's something that works -- take intranet applications people commonly use and release alternatives that do not suck so much.

Companies like 37signals are doing it successfully ;)


The way you said it originally made it sound more like they simply don't think it matters. I think what you really mean is what you said here. They are newbies when dealing with high volume sites where page load time is critical.

IMO, that's a very different statement. One is a difference in experience with a domain, the other is ideological.


I don't think they mean it like that. If something is obviously slow it's a problem regardless. However, especially when you have a certain number of users, even a fraction of a second faster load time can produce a visible change in your analytics.

I've read before that Facebook has shown that users tend to spend a fixed amount of time on their site. Once users hit that time limit, they're done. If your site exhibits similar usage patterns, the faster your pages load (even if they're already fast), the more users can get done on your site, which, depending upon your revenue model, may result in more revenue.


That information about fixed-time usage is interesting. But even with that aside, doesn't everyone know that fraction-of-a-second decreases in load time are important for high-volume sites?

For example, MySpace was at one point getting 24 billion page views per month. If you could shave 1/100th of a second off each page view, you'd save roughly 400 weeks of user time per month (assuming the model where they look at a fixed number of pages).

In the Facebook model, if you assume a page comes up in half a second, this delay results in a 2% decrease in page views -- which is a pretty huge deal when your business model indirectly revolves around page views.
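
For anyone who wants to check the arithmetic, here it is using only the figures above (24 billion monthly views, 1/100th of a second saved, a half-second page):

  using System;

  class PageLoadMath
  {
      static void Main()
      {
          double viewsPerMonth = 24e9;    // MySpace at its peak
          double secondsSaved = 0.01;     // 1/100th of a second per view

          double totalSeconds = viewsPerMonth * secondsSaved;   // 2.4e8 s
          double weeks = totalSeconds / (7 * 24 * 3600);        // ~397 weeks

          double pageTime = 0.5;                                // assumed load time
          Console.WriteLine("{0:F0} weeks/month saved, {1:P0} of each view",
                            weeks, secondsSaved / pageTime);
      }
  }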

I guess my point is that even for people who come from a background where page load didn't matter, it would take 30 seconds to point out that it does, and I don't think you'd get any pushback.


*you'd save roughly 400 weeks of user time per month*

That sounds big and important, but it doesn't really mean much, does it?

The difference between 1/100th of a second and 2/100ths of a second is too small to translate into enough time to get any increase in productivity.

There must be some other reason that load time is that crucial.


It's not so much that certain programmers don't think that page load speed matters; it's that the culture around the platform doesn't encourage speed.

You're conditioned to not care anymore, because ASP.NET Web Forms makes it so damn hard to achieve responsive web apps, and building responsive web apps almost universally means breaking away from the standard ASP.NET Web Forms style of development. You have to abandon pretty much everything that makes the platform convenient in order to get good performance.


Web Forms was designed back somewhere around 2002 to compete with the likes of JSP. It still works well for its domain, which is intranet portals and the like.

MVC, however, is designed from the ground up to deliver speed and web-standards compatibility. Building for the web these days with Web Forms is just wrong. But yes, actually bending Web Forms to deliver for the web mostly requires wizardry.


If your knowledge of ASP stops at webforms, then you have nothing to say about anything recent in ASP. It's all gone MVC and jQuery. StackOverflow is a good example.


You must be getting the shit end of the candidate pool, because I know as many brilliant C# developers as I do C, PHP, Python, etc. developers. I also know plenty of terrible Ruby, Python, etc. developers. You cannot generalize about developers based on the tool they use from interviewing a handful of people.

Oh, and that viewstate/Web Forms thing from the SEO is the SEO talking bullshit and trying to justify his job.


You are right about candidates.

I had the same reaction to the SEO at first. But the SEO guy is partially right: it involves page load times. A larger viewstate means a longer load time, and that gets penalized by Google.


Sure.. maybe it's harder to build a top-notch .NET team than a LAMP one... but I think the point is that you have to know what a top-notch team and set of processes look like in order to build them... on any platform. If anything the tech-choices are symptomatic of a bigger issue.


>Most .Net people I've interviewed think page load speed doesn't matter.

Have you informed them that you work with a high traffic site? Most intranet sites are not high traffic, so if they're spending time pre-optimizing for speed instead of features/development time, they're actually wasting the company's money.

And what has viewstate got to do with SEO? You lost me.

For SEO-friendly URLs, all you need is a URL rewriter. http://www.google.com/search?client=opera&rls=en&q=a...

>only the people who knew the internals of the platform well could fix it

That's true of any platform out there.


Something like this: http://stackoverflow.com/questions/1185984/is-viewstate-bad-...

Yes, I tell them that we have a high-traffic site, and we lose them because they don't know how to build one. Besides, you won't find many Microsoft-stack sites dealing with scaling issues; I personally looked up Stackoverflow scaling case studies later to design my solutions.

Back in 2005/6 URL rewriting was still pretty difficult. I started out back then. I converted to MVC completely when the 1.0 version came out, specifically to address the SEO concerns.

But I guess our discussion is swinging to technical side :)


In ASP.NET 4.0 and MVC, URL routing is built-in, and it's easy as pie.
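
For reference, the built-in routing looks roughly like this (the route name, URL pattern, and controller are made up; this is the stock MVC registration pattern, not anybody's production code):

  using System.Web.Mvc;
  using System.Web.Routing;

  public static class RouteConfig
  {
      public static void RegisterRoutes(RouteCollection routes)
      {
          // "/profiles/tom" maps to ProfilesController.Show(name: "tom"),
          // no ISAPI rewrite filter or web.config voodoo required.
          routes.MapRoute(
              name: "Profile",
              url: "profiles/{name}",
              defaults: new { controller = "Profiles", action = "Show" });
      }
  }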


SEOs prefer to have the "relevant content" as high on the page as possible. Since viewstate is just a large blob, many SEOs assume it decreases relevancy. I've never seen results that confirm this assumption.

Edit: See sajidnizami's post for the relevant StackOverflow question. Though, this still doesn't make sense to me logically. Why would a search engine disregard a (reasonably) longer page?


A longer page would increase page load time, and a longer page with the viewstate at the beginning would delay loading of the content. Google penalizes pages that load more slowly.

http://googlewebmastercentral.blogspot.com/2010/04/using-sit...


As an ex-MySpaceID/MDP employee: we had staging servers. There were like 2 or 3 different levels of test; of course, that might have been unique to our team.


Did you have version control? That's the most damning accusation to me.


I don't even get where that accusation is coming from. Everything was versioned prior to building out, rather than allowing devs to drop DLLs wherever they felt like it. Sure, the local dev environment wasn't the best...


> "The biggest problem was they didn't allow the developers to have staging or testing servers -- they deployed on the production servers on the first go-around... And all this with no change management or control. No versioning either."

And that, ladies and gentlemen, is why you don't want your software project managed by non-programmers. Any competent programmer would know not to do that.


Completely agree.

Maybe they should have used Azure. At least that forces them into a staged deployment (not that it was around when they started MySpace, of course).


There is so much garbage in those comments. The lack of staging servers was probably from 6 years ago and we use TFS, git and SVN.


> largely

Critical word there: largely. He wasn't saying all; he was making a generalization comparing large groups. Individual cases may vary.


"whom"

Yeah, that made me cringe too.


I worked at MySpace on the MDP (MySpace Developer Platform) team. My team, MySpaceID, was the one that implemented OAuth 1.0, 1.0a, and 2.0 and all of the external REST libraries. We worked closely with the activity streams team and the OpenSocial team. We also launched the MySpace JSL, or MySpace Connect. We were the first to do a popup login flow for OpenID, and we did several other cool things as MySpace tried to catch Facebook. We might have done it had Google not pulled the money.

Once the free parking was pulled from MySpace, 50% of every team was laid off and all of the momentum was pulled from the company.

Working with .Net was not an issue, and in some cases it was a benefit.

There were however huge cultural problems with FOX. Here are a few.

Developers were given one display, not two, and the screens were 1280x1024. I bought my own displays and had my manager order a custom computer for me with dual video card support.

Fox was a closed-source company, so when we were working on open tech like OAuth and OpenSocial gadget servers, we had to do it as closed-source software. We had no peer review. It made it really hard to ship and test something when you don't have LinkedIn, Google, Bebo, and Twitter reviewing your code. On top of that, when those companies found a bug, we had to re-implement that code in .NET. And on top of that, MySpace and .NET were well designed for strong typing, and those types work a bit differently than in Java.

It didn't take a lot of time to port, so we kept doing it, but you have to think about this: you are in a race with a company like Facebook, which had zero profit motive at the time, billions in funding, and a ground-up stack. Meanwhile MySpace was just clearing out ColdFusion, and we had really old blogging platforms that could not just get ripped out.

We also had management that didn't want to piss off users, so we would have 2 or 3 versions of the site out there, and we couldn't force users to upgrade to the new versions.

What MySpace had was Cultural Debt, and a fear of destroying what they had. Facebook won because they were Socratic to the core.


By the way, if there's any MySpacers reading that got laid off or looking to jump ship...

I'm a developer at Leads360 in El Segundo, CA. We're hiring right now. I've already interviewed several MySpacers and have extended offers to a few. We hope to get more :)

Email me your resume, if interested: bpaetzke@leads360.com


Thanks for the insider account, nice to hear in a sea of speculation and rumor.

One thing though, what do you mean when you say Facebook was 'Socratic to the core'? I'm only aware of that in context of the Socratic Method of teaching, but am not clear what it means here.


Glad I'm not the only one confused by that. I googled "define:socratic" and got "Know thyself"...

So I assume he meant that facebook knew exactly what they wanted to be, as opposed to myspace, who was trying to catch up with facebook.


Socratic - as in seriously concerned with a logical journey of discovering the essence of what is important - dismissing irrelevance and having little patience for straggling or argument.


Once the free parking was pulled from MySpace, 50% of every team was laid off and all of the momentum was pulled from the company.

Can you explain what you mean by "free parking" here?


Sorry, for being overly creative.

"free parking" is a spot on monopoly where the rules specifically state that once a user lands on the 'free parking' space no money is received. however, in almost every monopoly game i have played, players take fines/ taxes and put them in the middle of the board, and when a player lands on 'free parking' they get the funds.

It's an example of not following the rules as a point of culture.

The deal soured with Google because the terms were around pageviews and clicks. MySpace and Fox decided to target those terms to maximize revenue. The result was that MySpace added unneeded page flow for just about every user action. It destroyed UX and pissed off Google. Our team kept joking with management that we were going to create a MySpace-Lite as a side project from our REST APIs and to get rid of all of the crap.

  WE SHOULD HAVE DONE IT.  WE SHOULD HAVE CREATED MYSPACE-LITE
The deal with Google was worth $300 million a year, on total revenue of $750 million a year. MySpace Music was losing the company mad money; it was and is a flawed model. Our team wanted to create an API where games and sites could access the music, and to create open playlists. We wanted to make the music open, and then work to license it. We were told that it was not possible.


Sidenote: Growing up I also played Monopoly that way, with various other house rules. After playing some Euro games I went back, read the Monopoly rules, and played a game by the rules. No free parking, no extra houses or hotels, forced auctions, etc. The game took 45 minutes and was way more fun. It made Monopoly actually enjoyable again, though not as good as most of the really good Euro games out today.


Even more OT: The electronic versions of Monopoly actually vastly improve the game. No more counting money, and strongly enforced rules make for a faster, more efficient and entertaining game.

There was a write-up by someone about how everybody hates Monopoly because it drags on, and few people understand that the reason it drags on is house rules like Free Parking, immunities and the like.

I hated playing the board game as a child, but the console Monopoly games can have some strategy applied to them that have immediate payoffs.


Putting the money in the middle and getting it when you land on free parking always made the game more fun: as the pot got bigger, you just kept trying to throw the right number to land on the huge pot in the middle and hopefully get yourself out of bankruptcy!

I never had a MySpace page. I found the disjointed looks across people's profiles, and the seeming lack of direction as to what it was to be used for, rather annoying; it made me consider MySpace a joke. Facebook won me over because it had a consistent layout and no goddamn music playing when I went to a user's profile page. I can't even begin to count the number of friends I have who agree with me, but I can count on one hand the friends who have a MySpace page.


I believe that he's referring to the deal where Google paid MySpace for ad impressions. When that fell through MySpace lost a lot of revenue. http://www.dailyfinance.com/story/media/myspace-in-trouble-o...


I guess it comes from Monopoly?


"Fox was a closed source company"

Wow. I didn't know such a thing existed. Was no open source software used? From databases to javascript libraries?


You are kidding me, right? There are loads of Microsoft shops who see open source as evil and have severe problems using anything that is not Microsoft. Scary, I know.


This is wrong; there are more than a few things MySpace open-sourced.


Hey David :) (I'm remaining anon, just because). In fact devs always got two monitors, and it was pretty easy to get more if you wanted to, but yeah, they were just 1280x1024s. I really don't think the dev hardware was much of an issue, though; they did upgrade it fairly regularly. The testing and staging environments definitely existed. There were some deeper issues with some portions of the production infrastructure not being testable in dev, so you had to test them in stage, but the basic ability to test in dev did exist and was good enough to be pretty useful.


This was a stark contrast from having worked at hi5 just prior, where I had a killer MacBook Pro and two 1080p 22" displays. Sometimes, while visiting Dropbox's offices in those days, I remember them having two to four 27" displays. Just a difference between SV and LA.

At hi5 I remember some nights working till 5am, going home, and then being at work by 10am for a meeting.

At MySpace I tried to get a few people together on weekends to hack on cool demos, but management didn't want people to burn out. They did, however, buy a few people BlackBerrys and expect them to remote in and fix anything that might be a fire. A few of our team members had them, and they would have to log in at 2am and fix shit.

I am still very proud of the work I did there and respect every one of my team mates as fantastic developers. I hope to work again one day with many of them.

If there was one place MySpace didn't fail, it was hiring great talent and bringing them to LA to work. The partying in LA might have been a bit of a distraction, but it's also what made LA a ton of fun. It was so easy to date down there vs. SF, where it's impossible to meet a gal.


The article claims you didn't use version control. That's hard to believe. Is it true?


No, Facebook did. And they did it with crusty old PHP which pretty much proves the platform isn't going to make or break your business.

Finding good talent that's experienced with huge-scale sites is not going to be easy regardless of language. It's not like MySpace could have been RoR and suddenly everything would have been simple; at their prime they were doing a ridiculous amount of traffic that only a handful of sites had ever experienced. There were probably zero people experienced with PHP at Facebook's level; they all had to learn as they went, and what they learned was that they'd picked the wrong language, so they created HipHop, a hack to overcome PHP, and probably hundreds of other hacks that help them scale better.


I agree with the first part, but have to quibble with the statement "there were probably 0 people experienced with PHP at Facebook's level."

I suppose that's true in absolute terms (nobody is at Facebook's level). However, there is definitely an army of really high-traffic sites out there written in PHP, many of which predate Facebook. Problems of scale aren't exclusive to Facebook by any stretch.

It seems to me that Facebook's choice of PHP, in that context, was a big advantage. They've undoubtedly been able to draw on the experience that others have had at scale on very similar stacks. That might not have been as true for MySpace.

That said, MySpace had a whole host of internal issues. I briefly worked for a sister site at Fox Interactive Media and had at least some insight into what was going on over there. I'm sure someone will write a book about it one day:)


Wasn't Yahoo written largely in PHP?


Facebook didn't do it with "crusty old PHP"; rather, they had to rebuild their stack completely to keep up. See HipHop, their homegrown PHP-to-C++ compiler https://github.com/facebook/hiphop-php/wiki/ and Cassandra, their own custom database system http://cassandra.apache.org/

If they stuck with crusty old PHP, I have no doubt they would never be able to manage the load.


No kidding. They're using PHP as a template language to call Thrift services. That's hardly "using PHP" in the sense that most people would think of it.


Back when MySpace actually mattered I don't think they had HipHop, Cassandra etc.


HipHop went live on FB last year. It didn't even go live everywhere at once.

For some reason people think Mark created it on the third day.


It would be interesting to see how much load HipHop alleviated. I assumed it helped, but how much?


Apparently it allowed them to avoid buying about 70% of the new servers they would otherwise have needed, at a time when they were growing like crazy.


Could anything "off-the-shelf" have managed that load?


It depends on how many servers you are willing to run. When you have 500 million users, and a decent number of them access your site multiple times a day, CPU cycles per request start to count.


Are there any websites that face similar load problems to Facebook's that are addressing it with an unmodified stack?


I'm pretty sure Facebook's bottleneck is the database, not the framework.


It's not the platform itself, but the developers that know the platform inside-out. For some platforms, the best money can get you is still not good enough for an ambitious project spanning the world and having to undergo massive changes almost in real time. Why? Because certain platforms don't attract top-notch developers.

It's not that easy to get top talent if you stick to MSFT platforms.

From linked article:

> Silicon Valley has lots of talent like him. Think about the technology he knows. Hint, it isn’t Microsoft.

http://scobleizer.com/2011/03/24/myspaces-death-spiral-due-t...


"It's not the platform itself, but the developers that know the platform inside-out."

That's the number one benefit of an open source stack: it's possible to know the stack inside out. If it's closed source, there's an eventual point at which you're dealing with a black box.


There is nothing black-box about the MS stack. The .NET BCL might be closed source, but you can still step into and read the source. It's available under the MS-RSL license, which allows:

"use of the software within your company as a reference, in read only form, for the sole purposes of debugging your products, maintaining your products, or enhancing the interoperability of your products with the software"

See: http://referencesource.microsoft.com/


> And they did it with crusty old PHP

PHP is actually a good templating language and a passable rapid-prototyping language. The fact is that it's more important to stay agile when you're growing exponentially than to pick some optimal technology.

PHP's simplicity makes it one of the best choices to be able to incrementally build a robust and scalable back-end underneath it as you go. ColdFusion and .NET I imagine to be some of the worst (though I have no experience with either, so maybe I don't know what the fuck I'm talking about).


Sorry but I cannot resist. Don't comment on things you know nothing about.

Ever heard of JBoss? Did you know they have an open-source CFML project called Railo? Or that Chris Schalk, a developer advocate from Google, called what another open-source CFML distro, Open BlueDragon, was doing on GAE "awesome"? He said it was the easiest way to get running on GAE.

The best developers can do amazing things in a number of different languages.


It's funny, I can't think of a high-traffic site that "died" due to technical failures (particularly scaling failures). Twitter had massive problems during its growth phase; Reddit has had similar problems. Does anyone have an example of a site that got "killed" by technical issues? I'm really curious.

Bad products tend to die or get replaced by superior offerings. That's the nature of business.

Not being able to innovate rapidly because of technical lock-in is the only way these types of issues can "kill" a site. But it's very hard to quantify these types of issues. Between this article and Kevin Rose's statements about hiring B- and C-level programming talent, it seems like a lot of engineers are getting tossed under buses for poor management decisions.


Friendster:

http://highscalability.com/blog/2007/11/13/friendster-lost-l...

VB: Can you tell me a bit about what you learned in your time at Friendster?

JS: For me, it basically came down to failed execution on the technology side — we had millions of Friendster members begging us to get the site working faster so they could log in and spend hours social networking with their friends. I remember coming in to the office for months reading thousands of customer service emails telling us that if we didn’t get our site working better soon, they’d be ‘forced to join’ a new social networking site that had just launched called MySpace…the rest is history. To be fair to Friendster’s technology team at the time, they were on the forefront of many new scaling and database issues that web sites simply hadn’t had to deal with prior to Friendster. As is often the case, the early pioneer made critical mistakes that enabled later entrants to the market, MySpace, Facebook & Bebo to learn and excel. As a postscript to the story, it’s interesting to note that Kent Lindstrom (CEO of Friendster) and the rest of the team have done an outstanding job righting that ship.


From http://www.nytimes.com/2006/10/15/business/yourmoney/15frien...

But the board also lost sight of the task at hand, according to Kent Lindstrom, an early investor in Friendster and one of its first employees. As Friendster became more popular, its overwhelmed Web site became slower. Things would become so bad that a Friendster Web page took as long as 40 seconds to download. Yet, from where Mr. Lindstrom sat, technical difficulties proved too pedestrian for a board of this pedigree. The performance problems would come up, but the board devoted most of its time to talking about potential competitors and new features, such as the possibility of adding Internet phone services, or so-called voice over Internet protocol, or VoIP, to the site.

“The stars would never sit back and say, ‘We really have to make this thing work,’” recalled Mr. Lindstrom, who is now president of Friendster. “They were talking about the next thing. Voice over Internet. Making Friendster work in different languages. Potential big advertising deals. Yet we didn’t solve the first basic problem: our site didn’t work.”


It's unclear why that should be a board-level discussion in the first place. I doubt board meetings at Amazon or Google involve people benchmarking performance.


Because it was killing the company. The reason board meetings at Amazon and Google don't involve people benchmarking performance is that the sites are competently run. But anything that is putting the company's future at risk is a legitimate question for the board to talk about.


To be fair to the rest of the industry, a single glance at the front page shows the size of the TRANSITIVE CLOSURE of your social network.

Why would you do that?

This is bread & butter algorithm analysis. Don't put O(n^2) analyses on your most-loaded page.
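
To make that concrete: "everyone within 3 hops" is a breadth-first walk over the friend graph, and with a few hundred friends per user the frontier at depth 3 can reach hundreds of thousands of nodes. That's why you cache the number (or don't show it at all) rather than compute it per page view. A sketch of the walk, not Friendster's actual code:

  using System.Collections.Generic;

  public static class NetworkSize
  {
      // Count of distinct people within maxDepth hops of start,
      // given an adjacency list of friendships.
      public static int CountWithinDegrees(
          Dictionary<int, List<int>> friends, int start, int maxDepth)
      {
          var seen = new HashSet<int> { start };
          var frontier = new Queue<KeyValuePair<int, int>>();   // (user, depth)
          frontier.Enqueue(new KeyValuePair<int, int>(start, 0));

          while (frontier.Count > 0)
          {
              KeyValuePair<int, int> current = frontier.Dequeue();
              if (current.Value == maxDepth)
                  continue;

              List<int> adjacent;
              if (!friends.TryGetValue(current.Key, out adjacent))
                  continue;

              foreach (int friend in adjacent)
              {
                  if (seen.Add(friend))   // Add returns false if already visited
                      frontier.Enqueue(new KeyValuePair<int, int>(friend, current.Value + 1));
              }
          }
          return seen.Count - 1;   // exclude the user themselves
      }
  }

And, as described downthread, every new friendship touches the cached counts of everyone within three hops of both endpoints, which is what keeps even the cached version expensive.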


I have heard from insiders that the founder viewed this feature--the count of people 3 degrees away--as the central embodiment of the magic of Friendster, an absolute must-have. I.e. people added friends in great part in order to feel more connected to 100ks of people.

Others in the company begged to at least back it down to a count of people 2 degrees away, but the founder insisted that the magic was in the 3-degree version. So instead the company pursued technical strategies to speed up the 3-degree count; I don't know what those were precisely, but it seems they were not pursued as zealously as they could have been (due to VoIP and other distractions).

My understanding, by the way, is that the network size was computed on page load until surprisingly late, due to the perceived need for real-time numbers. Even after it was cached, it was still computationally expensive, as your numbers were recomputed (roughly) every time your 3-degree network added a link.

In retrospect, A/B testing could have been used to test the executive vision. So although my first reaction upon hearing this story (years ago) was: "that's big-O insanity," now I think that it's just as much a story about willingness to subject vision to empirical data and performing clinical cost/benefit analysis (when appropriate).


I actually respect that. While Facebook brags about their "social graph" (i.e., a list of friends they don't do anything with), Friendster tried to take on the hard CS problems. Sure, they failed, but they were never fake about it. And they were one of the first.


You are assuming that it is computed on page load.


Because Sixdegrees did that. In fact, it was the entire marketing point of Sixdegrees. I don't think Friendster was trying to imitate Sixdegrees generally — after all, it was a failed bubble business — but unavoidably you tend to model what you're building after things that you've seen before.


Friendster. Maybe there are lots of others that just had technically competent competitors, so we never knew about them in the first place.


I've often heard people cite Friendster's outages and slow loading time as one of the reasons for its decline. But yes, it's usually a secondary issue and not a primary cause.


Digg got slow and started to lose people to Reddit. This caused them to rebuild from scratch with a new strategy, which ultimately failed. Yahoo was also slow.

Funnily enough, now that I think about it, performance may be much more critical than periods of downtime.


LiveJournal is a good example. They were forced to go invitation-only for a long time because they couldn't handle the traffic. By the time they had the technical issues under control attention had shifted elsewhere.


LiveJournal maybe.


MySpace wasn't always .NET. It was ColdFusion before .NET v2 came out. Not that ColdFusion would have made any difference.

That said, I'd argue that no, Microsoft did not kill MySpace. Generalizations like this are wrong. There are many more .NET enterprise developers out there than there are Ruby or Node or Python developers, and with quantity comes a varying degree of ability. MySpace killed themselves by lowering their standards to the easy-to-find .NET developer instead of setting the bar higher. Once you lower the standard by which you hire developers, it's a cycle: the new guys will lower the standard a little more to hire the next set, and so on.

The lesson to be learned here is if you can't find a good developer, don't blame the technology stack you've selected, blame your recruiter. Find the person you want, they are out there.


Scoble's thesis seems way off. The problem with MySpace wasn't the technology, or even the site; it was the users of the site. It became a place that people didn't want to be associated with, while Facebook became the place they did. If MySpace could instantly have flipped a switch and turned into Facebook (code-wise), hardly anything would have changed.

MySpace's problem IMO wasn't technical at all. They built a service that focused on users most likely to move, and repelled those most likely to stick with a platform.


Another case of technical people assuming all business failures are technology related. Just like Digg failed because of NoSQL even though Reddit is frequently down due to technical issues.


I don't completely agree with you; however, the fact that various gaudy, non-cross-browser page themes could be added by third parties was part of what drove me away. Towards the end of my time on MySpace I would make sure to disable styles and Flash in the browser to make it bearable.


Platform fetishism[1] and attempts to throw developers under the bus[2] aside, the comment from Nick Kwiatkowski states a much better reason: developers weren't empowered to do their jobs.

The comments state that there was no staging or test environment, no ability to roll back releases, that refactoring was a dirty word (engineers wanted to refactor, but couldn't), and that the principal on the technical debt was never paid-- only the interest, in the form of hacks to make the site scale; again, the product organization prioritized new features.

The rest (location, technology choice) isn't sufficient to "kill" a company: there are successful companies in LA, and there are successful and agile companies using the Microsoft stack (where appropriate-- see Loopt and StackOverflow/FogCreek as examples of companies using both FOSS and .NET). On the other hand, those choices aren't optimal either: they aren't what engineers would choose themselves most of the time.

This indicates that the technology and location choice aren't the cause, they're symptoms of company management that doesn't understand building non-trivial Internet applications (what the optimal technology stack for one is; where, when and how to hire developers to build one) and yet maintains authority to make all technical decisions. Contrast this with Facebook, Google et al-- where every process (from corporate IT to production operations to HR and recruiting) is designed with needs of the engineering organization in mind: "Want two 24" monitors? No problem. Want Linux on your laptop? No problem. Want ssh access to production? No problem. Want to fly in a candidate to interview? No problem."

[1] I personally wouldn't touch Windows with a sixty foot pole, but speaking objectively C# >3.0 and (especially) F# are great languages.

[2] "They weren't talented": having interacted with some ex-MySpace engineers, this just isn't -- at least universally -- true. Indeed, here's a secret to hiring in an early (Facebook in 2005) startup: seek out great developers who work for crappy companies, people who have joined "safe bet, resume brand" companies like (post ~2003) Google aren't likely to join you until you've already become a "safe bet, resume brand" company (Facebook in 2008-Present).


> Contrast this with Facebook, Google et al-- where every process (from corporate IT to production operations to HR and recruiting) is designed with needs of the engineering organization in mind:

Completely agree.

> Want ssh access to production? No problem.

This makes me a little uneasy, I'm not sure everyone should have ssh access to the production server.


> This makes me a little uneasy, I'm not sure everyone should have ssh access to the production server.

Every _developer_ should. No question about it. Sudo should be given on an as-needed basis, but ultimately, as a developer I can screw up a lot more by simply writing bad code.

Simple philosophy: you build the software, you should be involved in running it (including carrying a pager). Amazon's CTO agrees: http://twitter.com/#!/Werner/status/50957550908223490


> Every _developer_ should. No question about it. Sudo should be given on an as-needed basis, but ultimately, as a developer I can screw up a lot more by simply writing bad code.

Fair enough.

This, more nuanced point, I agree with.


If you can't trust engineers to work in production, then you can't trust their code. Extensive logging, auditing, and granular access control are also critical to making this work in a large engineering org.


I don't think it's a matter of trust but rather mitigating your exposure to hackers.


How does shutting off ssh for developers mitigate exposure to hackers? Require everyone to use an ssh key with a (strong) passphrase, require strong passwords, and use two-factor authentication. You should already do so for the operations staff (who need ssh access); the same should apply to developers.

Ironically, if you treat production as an alien land developers aren't allowed into (and have no transparency about), you're going to create an environment where developers completely ignore operational concerns like security (e.g., having no authentication mechanism on their own services, because production is presumed to be a "magically secured" environment that no one can connect to in any way).


You don't want to mitigate exposure to hackers. You want exposure to hackers because they're the ones who can write your code.

For mitigating exposure to crackers, though, it makes sense to minimize the number of possible entry points someone could compromise in order to put malicious code on your production servers. The source control system (did they really not have a source control system!?) is a less vulnerable avenue than ssh, because presumably third-parties review what flows through source control before putting it on the server.


Yes, I wanted to say something about hackers vs. crackers/script kiddies, but decided against it.

Ssh access doesn't have to come with privileges: the main purpose of ssh access is being able to run top, iostat, ps, strace/dtrace, grep log files, and verify that my service is configured correctly.

You are correct that code can be reviewed, but that isn't always the case, nor is the reviewer omnipotent. In any case, with both code and ssh there is a strong audit trail: an employer needs to make it clear which are fireable offenses and which aren't.

For what it's worth, "give developers read-only ssh access to machines that don't contain sensitive customer data" works great for Google, Amazon (where it also comes with a pager, something I'm in favour of), LinkedIn (recently implemented-- this made my work much easier), parts of Yahoo and I'd be surprised if that isn't the case at Facebook. In other words, companies that are strongly oriented around UNIX/Linux (it's available as an option on developer desktops), which can afford to hire (and are able attract) strong developers and strong operations engineers and which are in the business of writing Internet applications.

My personal philosophy actually goes quite a bit beyond that: hire great, generalist engineers who are considerate and nice, give them root. Let them push some code without review, if they're confident their code won't cause damage. Review any tricky code, bug fixes, or mission critical components (e.g., the HA storage system, revenue loop components, UI changes). Roll back instantly if trouble occurs (something you couldn't do at MySpace, apparently!).


Sorry, there were some shorthands in my post. Let me expand.

If someone cracks your developer's development workstation, they can piggyback on that developer's access in order to insert malicious code into a commit, or in order to ssh into a production server and run a canned exploit of a local-root vulnerability. The first of the two leaves a strong audit trail, and may require a third party to sign off on it before going to production. The second probably doesn't, and won't.

If you can run strace on a process, you can inject malicious code into it.

While this is a theoretical consideration, I don't know of any security breaches due to this policy at the companies you list. On the other hand, there were security breaches at MySpace due to gross incompetence on the part of the developers — most of all, Samy is my hero!

I wasn't suggesting that developers themselves would be putting malicious code into production.


How is getting onto a developer's workstation more difficult than getting onto an operations engineer's workstation? If you don't allow developers to do some operational duties, this also means having to have more operations staff (who will typically have higher privileges than developers anyway).

You are also forgetting that there is usually a step between a developer workstation and production, and at that gateway you'll typically have additional security measures (so that simply getting to the gateway doesn't get you to production).

I don't, however, disagree with your overall idea: yes, technically, developers having ssh access to production might (to a very small degree) reduce security, all else being equal. However, there are countless benefits to giving developers ssh access that result in greater security.

Nor do you have to use the same policy for all machines: SOX, for example, mandates that developers who write the code that handles financial transactions shouldn't have access to machines that run this code (to prevent fraud). There are other types of machines I'd include in this case (databases holding sensitive user data, machines holding sensitive configuration, etc.). However, for a vanilla machine running an application server, or a database server holding strictly non-sensitive/non-revenue data, that's not the case.

There are also far worse mistakes one can make (e.g., don't use version control, don't put proper review procedures in place, hire/don't fire incompetent developers) which will impact security.


Concur.


If a headline ends with a ?, the answer is generally "No."



I worked at MySpace, specifically the middle tier where these technical issues supposedly existed (scalability), although I also worked on a number of user and non-user facing projects during my time there. You may consider me biased because of that, but I'd say I also have a pretty good view into the issue. The reason for MySpace's downfall is crystal clear to anyone who worked at the company and cared to look around and make sense of what was happening - it was a catastrophic lack of leadership and vision in management and product, paralyzing political infighting, and a general lack of competence at the top levels of the company's management. The people making the decisions would change their minds and requirements constantly because of this. There were numerous entire features that were simply not launched and abandoned AFTER they were completed because management couldn't agree on how they wanted to "position them" (and they were great features). The top management level was in a constant state of political infighting, and that most likely came from Fox and the way they ran shit. There was no one to judge and reward competence at that level; it was simply about who could cover their ass better or come out looking better. MySpace was huge, and everyone just wanted a piece of the pie.

One of the issues that stemmed from this was a lack of respect for technology, in the sense that no one at the higher levels saw the company as a technology company. They saw it as an entertainment or media company. That created cultural problems all the way down, which eventually contributed to bad products and shoddy implementation.

Now, the core technical part of the organization was actually extremely competent. MySpace was pushing more traffic than Google at one point in its heyday, and the site scaled just fine then. That wasn't an accident, I have worked with some of the smartest people in the industry there. But because tech wasn't the point for executives, those people were tightly controlled by non-technical management, and so products suffered.

MySpace could (and still can) scale anything; to say that they had scaling problems by the time they got to their peak is complete gibberish. Over the years they developed a very mature technology stack. No one knows about it because it's entirely proprietary. The problem was management and product that were basically... incompetent, and that lacked anyone at the proper levels who cared to see and fix it.

EDIT: Some typos and missed words. I'm sure I still missed some.


The original article mentions you didn't have version control or staging servers. You didn't mention that claim. Is it true?


That's definitely false. We had Microsoft's TFS as the source control system when I started working there about 4-5 years ago (I no longer work at MySpace). We also had two levels of stage servers: several for the dev code branch, and a couple for the prod branch. Eventually each team had their own set of stage servers. Stage servers were crucial since some parts of the infrastructure were not testable in dev, so to say we didn't have any is to not be at all familiar with MySpace's development setup.

BTW, I'm not normally this animated with my comments, but the article was so full of baseless conjecture I was truly appalled. I actually had a good deal of respect for Scoble prior to reading that. MySpace had a ton of problems, but it definitely had a number of great people working on technology and doing a pretty good job at it - otherwise we would have been Friendster.


Am I the only one who thinks a large reason why MySpace lost to Facebook was design?

MySpace just gave users so much flexibility to modify the look and feel of their pages that it got way too busy and very difficult to look at.

In some respects I think it was MySpace's business proposition to let users easily create their own personal spaces on the web, whereas Facebook's goal was more to connect you to your friends. In that sense MySpace followed through, although that follow-through seemed to lead to their demise!


I logged into MySpace the other day for the first time in quite a while. And wow, it's butt-ugly, hard to navigate around, and within ten minutes of logging in I had eight MySpace-based spam e-mails sitting in my account.

In short, it's wildly less-pleasant to use than Facebook.


I don't think myspace's greater flexibility was its downfall. The problem was that over time, the end-user experience stagnated while the company focused on maximizing revenue from ads and such. Based on comments above, it seems that while facebook was busy introducing incremental social networking improvements, myspace was busy trying to make users click more so that more ads would load.

Myspace had some of the most annoying ads on the web. Heaven forbid you tried to use the site without adblock.

I think that was the bigger problem. Had they continued to focus on improving the end-user experience rather than extracting every last bit of value, they might still be a viable competitor to facebook. The freedom to customize would be one of a very few features facebook could not easily copy.


Holy mother of god, no change management, staging or testing servers? On a site that big?

Appalling, if true. (Not that good technology and process would have made the product suck much less.)


Having read the article, as well as the Scoble article linked, I would have to disagree that it was the Microsoft stack. There was just not enough investment in their programmers. I work at a small startup where money is rather tight while we raise funding and attempt to get contracts in, yet we developers get what we need. Every developer has at least two screens (be it a laptop and a large LCD, or a desktop with two of the same monitors). We can ask for new staging servers, we can ask for more memory, we can set up our own infrastructure, and we can make technical decisions.

Once you start taking away the ability of devs to think for themselves or feel comfortable doing work, it becomes harder to be motivated to come into work and fix the issues, and if management isn't listening to the complaints about the need to refactor, then what is the point? Adding hack onto hack gets boring pretty damn fast.

MySpace also lost in that they really didn't have a direction for where they were going (at least that is what it looks like from the outside). Blogs, music, status updates - what was it supposed to be? And it didn't help that their web properties didn't have a consistent look and feel, because they allowed everyone and their mother to skin their profile page how they saw fit, leaving it a disjointed mess that just made me hate the site more.


(former MySpace and former .NET team @MSFT) Let's just say the Microsoft stack probably didn't kill the beast… ASP.NET certainly didn't help though.

It's too bad that all the tech built around .NET will be lost to the annals of MySpace; MSFT should acquire the company just to open source the whole thing for the benefit of .NET.

Regardless, it's fair to say starting a company on "the Microsoft Stack" today would reflect questionable judgement. Are there any recent ex-MSFT founders on it?


Try telling Joel Spolsky and Jeff Atwood that.


The technology never kills the business; it's ALWAYS the people. However, I think this points out the extreme importance of getting good people who make good technology choices.

MSFT products are not inherently evil; they have some advantages for some types of projects. But a proprietary closed source stack always puts you at a disadvantage.

Worst-case scenario with open source, you go patch whatever's holding you back yourself. With bugs in MSFT products, you are at the mercy of MSFT to prioritize your issue. If you are a big enough fish, then they will pay attention. Otherwise, good luck.

I don't understand why anyone would willingly tie themselves to the Microsoft web dev stack as a startup. Even if you don't have to pay upfront, you will pay dearly in the future when you go to scale. At one startup I worked for we were hamstrung by not being able to afford the upgrade to Enterprise SQL Server, for example. So our data replication was tedious, time consuming and prone to failure.


In my opinion, some of it also had to do with the inconsistent, ugly, hacky MySpace user experience.

White / Yellow / Green / Red fonts on black backgrounds with animated gifs + glitter and broken plugins will be the response to the question "What comes to your mind when you think of Myspace UI experience?"

In comparison, the facebook experience was a lot more fresh, clean and unified.


Completely agreed.

It was downright embarrassing to have a profile page on MySpace. Unless you wanted to spend an entire weekend customizing your page, it was going to look like a banner ad factory had exploded on your profile. I'm a web professional -- I can't have that as my public image.

Not only did MySpace look like an amateur web site from 1998, it was completely confusing to operate. What checkbox do I click on which page to turn off the flashing purple?

MySpace just had an inferior product, plain and simple.


Twitter had scalability problems and they were on RoR, but it got solved. Scaling to those levels is always going to uncover problems in your architecture. What mattered was the way MySpace chose to execute, not the technology they did it with.


It got solved by moving off of RoR...


By moving the part that wasn't scaling off rails. From what I understand they had a giant monolithic rails app which just couldn't scale after a point. They moved to a services-based approach, with a rails frontend talking to scala services.


Not exactly. It got solved by a lot of changes to the underlying storage structure.


The important thing is that it got solved. From everything I've heard about MySpace, technology was not the real issue, but rather management, development practices, etc.


Scalability is a relatively new hiccup, given that only in the past few years have users swarmed the internet. Sites never expected that and developers weren't prepared. They learned mostly by trial and error and reading case studies, and then figured out what to do. You'll find inexperienced PHP devs who don't know scaling just like you'll find inexperienced .NET devs.

I think the article has the right notions. The stack doesn't matter; a team of highly motivated devs who can milk the technology involved is more important.


As someone who was fairly intimately involved in the entire evolution of the MySpace stack, I'm dumbfounded at the number of inaccuracies in this article (actually, it's hard to call them inaccuracies so much as an exercise in "I'm going to write an article based on some stuff I heard from disgruntled people."). I developed in non-Microsoft technologies before and after MySpace, and I can tell you that, like all technologies, the Microsoft web stack has strengths and weaknesses. Performance was a strength; verbosity of the code was a weakness. Modularity was a strength. Etc. Have any of you encountered a technology where, as much as you like it, you can't rattle off a bunch of problems and things that could be done better?

The web tier has very little to do with scalability (don't get me wrong, it has a lot to do with cost, just not scalability, except in subtler ways like database connection pooling)--it's all about the data. When MySpace hit its exponential growth curve, there were few solutions, OSS or not, for scaling a Web 2.0-style company (heavy reads, heavy writes, a large amount of hot data exceeding the memory of commodity caching hardware, which was 32-bit at the time, with extraordinarily expensive memory). No Hadoop, no Redis; memcached was just getting released and had extant issues. It's funny because today people ask me, "Why didn't you use Technology X?" and I answer, "Well, it hadn't been conceived of then :)".

At the time, the only places that had grown to that scale were places like Yahoo, Google, eBay, Amazon, etc., and because they were on proprietary stacks, we read as many white papers as we could and went to as many get-togethers as we could to glean information. In the end, we wrote a distributed data tier, messaging system, etc. that handled a huge amount of load across multiple data centers. We partitioned the databases and wrote an ETL tier to ship data from point A to point B and target the indices to the required workload. All of this was done under a massive load of hundreds of thousands of hits per second, most of which required access to many-to-many data structures. Many startups we worked with, Silicon Valley or not, could not imagine scaling their stuff to that load--many vendors of data systems required many patches to their stuff before we could use it (if at all).

Times have changed--imagining scaling to MySpace's initial load is much easier now (almost pat). Key-partitioned database tier, distributed asynchronous queues, big 64-bit servers for chat sessions, etc. But then you factor in that the system never goes offline--you need constant 24-hour access. When the whole system goes down, you lose a huge amount of money, as your database cache is gone, your middle-tier cache is gone, etc. That's where the operations story comes in, wherein I could devote another bunch of paragraphs to the systems for monitoring, debugging, and imaging servers.
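
To make "key-partitioned database tier" concrete, the core trick is just a deterministic map from a user key to a shard. A rough C# sketch (hypothetical names and connection strings, nothing like MySpace's actual code):

    using System.Collections.Generic;

    // Hypothetical shard map: every read/write for a given user id lands on
    // the same partition, so no single database holds the whole dataset.
    public static class ShardMap
    {
        static readonly IReadOnlyList<string> Shards = new[]
        {
            "Server=db-shard-00;Database=Profiles00;Integrated Security=true",
            "Server=db-shard-01;Database=Profiles01;Integrated Security=true",
            "Server=db-shard-02;Database=Profiles02;Integrated Security=true",
            "Server=db-shard-03;Database=Profiles03;Integrated Security=true",
        };

        public static string ConnectionStringFor(long userId)
        {
            // Assumes non-negative ids. Simple modulo keeps the mapping stable
            // while the shard count is fixed; real systems use a lookup/range
            // map so shards can be split and moved without rehashing every user.
            return Shards[(int)(userId % Shards.Count)];
        }
    }

The hard parts--many-to-many friend data that spans shards, cross-shard ETL, warming caches back up after an outage--are exactly what the paragraphs above describe, and none of them show up in a toy like this.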

Of course there's the data story and the web code story. MySpace was an extraordinarily difficult platform to evolve on the web side. Part of that was a fragmentation of the user experience across the site, and a huge part of that was user-provided HTML. It was very difficult to do things without breaking peoples' experiences in subtle or not-so-subtle ways. A lot of profile themes had images laid on top of images, with CSS that read, "table table table table...". Try changing the experience when you have to deal with millions of HTML variations. In that respect, we dug our own grave when it came to flexibility :).

Don't get me wrong, there were more flaws in the system than I can count. There was always something to do. But as someone who enjoys spending time on both the Microsoft and OSS stacks, I can tell you it wasn't MS tech that was the problem, nor was it a lack of engineering talent. I am amazed and humbled by the quality of the people I worked alongside to build out those systems.


Thank you for sharing - this is the most interesting comment (to me, anyway) so far.


To keep this comment in context for those who have reached it on its own...

Article being discussed: http://highscalability.com/blog/2011/3/25/did-the-microsoft-...

HackerNews Link this is a comment from: http://news.ycombinator.com/item?id=2369343

I have no idea why HackerNews has no context links built into their comment pages. O_o


Duh on my part. There is a link named "parent" following the permalink at the top. Sorry...


If the MS stack killed MySpace, then PHP made Facebook?


I'm not sure these technology-based analyses are correct. I had four teenagers at the time and they all switched from MySpace to Facebook because the MySpace pages got cluttered with glaring ads. The Facebook layout was cleaner and had no ads (at the time). There was no problem with site speed.


I may be incinerated for saying this, but maybe stupid decisions are a symptom of the incompetence that doomed MySpace to failure.

Let the big karma fire begin.

Edit: Somewhere else someone mentioned they used ColdFusion. I consider that another stupid decision. But at least they were migrating away from it.


It's nice, in a sense, to allow people to out themselves as mistaking technology for culture. You can write slow code, and make bad choices, on any platform.

My colleagues at Stack Overflow work faster and produce more -- at obsessively fast web scale -- than any team I have observed. I also see talented people struggle to produce a viable site using (say) Ruby on Rails.

Technology correlation? None. The correlation is in discipline, understanding the tools, foresight, priorities, management...

Think of it this way...how often have you seen a headline on HN bring a site to its knees? Fair guess that many of them are on "scalable" technologies.


For me the UX killed it. Allowing any user to design their MySpace page was a bad decision. It was so annoying to find the information that matters most in a social network on many of my friends' and other users' pages, as many just added crap on top of crap. Also, there was the terrible experience of opening a MySpace page, immediately hearing a song or piece of music, and madly scrolling down the page to find where to stop the awful sound. Most of the time I would just close the window in disgust.


The theme of the other comments on this thread seems to be ".NET? newbs!" or "Facebook worked even though they used PHP!". Keep in mind that at the time MySpace and Facebook were created, .NET was by far the best option out there for a scalable framework; they converted their ColdFusion infrastructure over to it. It may also be hard for the Rails kids to believe, but PHP was the Rails of that time.


I think it's simple. If your business requires scaling at this level, you need to have really good engineers and they need to have a lot of say in how things are done. I've worked with a number of "product" people from MySpace, and they were definitely missing at least one of the two, maybe both.


No, Myspace failed because it was a shithole filled with awful people that nobody took seriously, and Facebook turned out to be relatively clean and useful.

Don't blame technology for your failings. Facebook won because it had a first-name and last-name field.


Give me a break!!! Teenagers with animated gifs, a horrible taste in colors, and true angst, along with Rupert Murdoch's old-school leadership, killed MySpace. Before you blame the stack, look at the content and the lack of a proper newsfeed. Ugh.


Facebook hasn't always been a well-performing site. I remember that up until recently, if you clicked on the "Info" tab on a person's page you'd get a loading gif for 10-15 seconds.

Hearing a blogger who has no idea what he's talking about make such generalizations as 1) there are no good C# developers and 2) there are no good developers outside the Bay Area shouldn't bother me, but it does.


I really doubt the MS stack had anything to do with it. I think it's more a case of a combination of a broader shift in online social behavior (from scrapbooking to tracking your social circle) and resting on their laurels (e.g. refusing to evolve before Facebook became dominant).

In their defence, what Facebook stumbled upon was really simple and yet very non-obvious (at least initially).


Myspace was killed by backwards compatibility.

One key aspect of Myspace is how customizable it is. As any programmer can tell you, this limits the ways features can be rolled out.

For example, you want to have a new layout? Too bad. It will break the users' customizations.

You want to add a new button? Too bad. There is not a coherent place where you can add it.

You want Ajax? How will that break users' layouts?


Stack Overflow doesn't seem to have many problems with it. Anyone who has done any C# programming knows .NET is *embarrassingly fast* these days. It'll save you a lot of "scaling" money.

What killed MySpace was poor management. It is one of those companies that still don't get that good engineers are as precious as good lawyers.


No, the fact that myspace looks ghetto killed it.


I agree. That is why eBay is such a failure ;-)


- step 1: create rules that make it nearly impossible to develop

- step 2: blame your developers' competence to hide your own incompetence

- step 3: fail


Interesting, considering Twitter is down right now.


Did "Closed Source" development kill MySpace?


tl;dr: No


I don't think it's as much about the technology platform as it is about following good development practices, and having leadership that understands the value of following those practices. Good leadership can make poor technology competitive, while poor leadership can screw up a good technology platform.


If it contributed, it was a much smaller factor than its ugly design and skanky/teen vibe.


MySpace didn't die because of the Microsoft stack; they died because their users left for Facebook. I'd take the .NET stack over PHP any day of the week. I certainly don't know of any company that was so screwed by the performance of C# that they needed to create a C++ compiler for it (the HipHop compiler for PHP). PHP programmers aren't exactly known for their brilliance.

Fixing their deploy problems on the .NET stack would definitely not have been a problem; I've put together automated deploys for Windows, and with MSI they are a breeze. Yes, it's going to take a week or two to get the hang of WiX, but after that the installer does all your dependency checks and you have a very repeatable process. If you stamp your MSIs with the build number, it's even very easy to roll back.
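
For anyone who hasn't seen it, the build-number stamping is only a few lines of WiX. A stripped-down sketch (illustrative only; names and GUIDs are placeholders, and $(var.BuildNumber) is assumed to be passed in by the build server, e.g. candle -dBuildNumber=1234):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Product.wxs: minimal sketch, not a drop-in installer -->
    <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
      <Product Id="*" Name="MyWebApp" Manufacturer="Example Inc."
               Version="1.0.$(var.BuildNumber)"
               UpgradeCode="11111111-2222-3333-4444-555555555555">
        <Package InstallerVersion="200" Compressed="yes" InstallScope="perMachine" />
        <!-- AllowDowngrades lets an older MSI replace a newer one, i.e. rollback -->
        <MajorUpgrade AllowDowngrades="yes" />
        <MediaTemplate EmbedCab="yes" />

        <Directory Id="TARGETDIR" Name="SourceDir">
          <Directory Id="ProgramFilesFolder">
            <Directory Id="INSTALLFOLDER" Name="MyWebApp">
              <Component Id="SiteBinaries" Guid="22222222-3333-4444-5555-666666666666">
                <File Source="bin\MyWebApp.dll" KeyPath="yes" />
              </Component>
            </Directory>
          </Directory>
        </Directory>

        <Feature Id="Main" Level="1">
          <ComponentRef Id="SiteBinaries" />
        </Feature>
      </Product>
    </Wix>

Since every build carries a distinct Version and downgrades are allowed, rolling back is just re-running the previous build's MSI.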

This is just about the most monumentally stupid thing you can say: if you really don't like C#, there are a dozen other languages available (like Ruby AND Python). If you're hiring people that can ONLY write code in one language, then that should be a sign that you're not hiring the right people to begin with. They hired crap talent that happened to know C#.

All this "which stack scales best" crap is cargo-cult programming; you should recognize it as such. Most startups die because they have no customers, not because their servers are on fire from load.


Now HN is just becoming /.

The MS stack does not kill anyone. Dumb management kills.

The top level should be able to see the error and move, be it dumb layoffs or the .NET codebase. It's not like MySpace was rocket science.


It's always been my understanding that spam is what killed MySpace. I'm sure Facebook's long-closed membership system helped make it a somewhat more manageable issue to deal with.


Yeah, uh, Stack Overflow is written in ASP.NET.



