It is indeed vitally important to remember that external APIs are unreliable and temporary. The question to ask in any given situation, though, is: How much more reliable and permanent is the code you own and manage yourself? And at what cost?
However high the odds are that the founders of a given startup will leave their company within five years, the odds that the people who built your in-house widget will leave your company within five years may well be higher. However high the odds that a given API will evolve away from your use case in five years, the odds that your use case will be obsolete in even less time may well be higher.
The underlying truth is that all APIs everywhere, in-house and out, have bugs and missing features and finite half-lives, unless you control them and continuously expend resources maintaining them. As your project grows older and more stable you may eventually discover that its expected half-life is longer than that of your provider APIs, and that's when it's time to start moving stuff in-house. This is when you realize, if you haven't already, that open-source software is your friend.
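To put a rough number on the metaphor: treating survival like radioactive decay (a loose illustration, not a forecasting model), the arithmetic of a half-life looks like this:

    # Loose radioactive-decay metaphor: with half-life T (in years),
    # the chance an API is still alive after t years is 0.5 ** (t / T).
    def survival_probability(t_years, half_life_years):
        return 0.5 ** (t_years / half_life_years)

    # An API with a 3-year half-life has ~25% odds of surviving 6 years:
    print(survival_probability(6, 3))  # 0.25

So a project that only needs to live two or three years can shrug at a provider with a 3-year half-life; a ten-year project can't.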
ANSI C programs written in 1989 still compile and run unmodified on any modern 32-bit Linux or Windows box. C programs using the BSD sockets API from 1982 still work unmodified on Linux. Fortran 77 code has an even longer pedigree and still builds with GCC. And while Common Lisp has seen a lot more library evolution, I believe code written for most of its early incarnations still works in modern implementations.
I suppose you're right that any given code is unlikely to outlast its dependencies. But the corollary to that is that successful code is, by definition, almost always going to outlast its dependencies.
Dependencies are hugely expensive. And it makes me happy to see someone try to explain that to the modern world of import-happy coders. Reuse isn't free. Solving simple problems with external dependencies can often be more work than it's worth.
"How much more reliable and permanent is the code you own and manage yourself?"
Even with the worst bugs, I can use and reverse-engineer the oldest API as long as it runs on my own hardware. I can still turn on and access my Amiga 500 or my Apple II "APIs". There are countless companies running legacy software; sometimes they just need to reboot the machines or "play" a known trick to keep things working.
I think people who advertise their "API" and "platform" but actually provide a glorified REST frontend to their website are diminishing the value of true platforms and APIs, where real thought was put into making a business case, architecting them, making sure they'll be there long-term, and planning changes with regard to backwards compatibility. You can actually see one of the comments here on HN that claims it is "vitally important to remember that external APIs are unreliable and temporary". No, they are not, if we are talking about platforms. While it's true that new API subsets are added with every new release (see "Fire and Motion" by Spolsky), with a platform you can expect existing APIs to still be there for some time, and not just until the end of next month because the "platform" vendor "pivots" while burning through VC money.
Well, given the choice between a world where the only APIs in existence are "true platforms", backed by enormous corporations with tons of resources who pledge to make them absolutely backwards-compatible for ten years... and a world where, in addition to the true platforms, every half-assed hack also has some sort of API so that other people can build upon it with half-assed hacks of their own, my first instinct is still to prefer the second world.
Yes, this means that the word API no longer connotes "true platformness". That's a branding problem for the "true platforms", but I'm sure Microsoft and Oracle can handle it. They have resources.
Now, what does your instinct say about the viability of a business that is built on top of a "half-assed hack"? Or will we just stick with the business model where there's a VC or an angel with extra cash to burn, while we make sure our instincts are happy?
I'm not sure I'm reading your sentence correctly, but the scary truth is that many, many brand-new businesses are initially built on top of half-assed hacks. By design! That's what all this "minimum viable product" talk is about.
The canonical example might be Twitter. There's absolutely no question that their initial Rails-based infrastructure was not up to the task of running Twitter. But it was up to the task of running Twitter for a few thousand users, long enough to prove that they really needed a better, custom, designed-and-maintained-in-house-by-an-expensive-team-of-hardworking-experts architecture, which they then built, using money raised on the strength of the initial hack.
And it's absolutely true that businesses built on half-assed hacks generally aren't viable in the long-term. But most businesses aren't viable, period. The trick is to upgrade the viability of the codebase as the business continues to prove itself. Nobody said that this was easy to get right, and you can fall behind ("OMG our codebase is horrible and the fail whale has its own fan club"), or you can get too far ahead ("OMG we spent a fortune overengineering our product"), and sometimes you don't even know whether you are ahead or behind, but that is the game.
Meanwhile, let's not get completely stuck on businesses. There's room in life for half-assed mashups that cannot possibly "work" long-term, but are a lot of fun. Even before Twitter was a rickety service with a dozen public users, it was a hilarious idea with no public users at all. The toy stage is important too.
Absolutely. I'm postulating a very conservative lower bound here.
(And, obviously, a very rough approximation of the actual history. If we could see Twitter's Git repository presumably we'd find that the service actually went through dozens, perhaps hundreds, of performance tweaks as it grew.)
Not all APIs are created equal. Some may not be around in 2 years, some in 10, but in the time you are forced to work with some of them, you might lose your sanity.
I'm working with pretty much every type of API through my startup Zapier.com, and some of these APIs I wouldn't wish upon anyone except maybe the deeply masochistic coder. They suck. I suppose that's good for us. However, we aggregate data from many dozens of APIs for the end user, so losing an unpopular service's API here or there doesn't hurt us in the least.
While I've got an API pedestal and a crowd of developers: please, no SOAP or XML. And do NOT go off the OAuth spec. Bad documentation is the gravest of sins, whether that means a lack of examples or, worse, out-of-date info and examples.
Good APIs: GitHub, Wufoo, Stripe. All of them are more or less self-documenting, use JSON, and are RESTful. The docs are awesome. A joy. In and out.
Bad (or just annoying) APIs: Salesforce, 37signals, Google Contacts and co, DocuSign... While they work, you'll struggle every inch of the way to make the simplest things happen. XML (or some derivative) and SOAP. No thanks.
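For what it's worth, here's what the good end of that spectrum looks like from the consumer side. A minimal sketch against GitHub's public API (octocat/Hello-World is a real public repo; error handling omitted): one GET, JSON straight back, nothing to unwrap.

    import json
    import urllib.request

    # One plain HTTP GET against a RESTful resource; the response is JSON.
    url = "https://api.github.com/repos/octocat/Hello-World"
    with urllib.request.urlopen(url) as response:
        repo = json.load(response)

    print(repo["full_name"], "-", repo["description"])

Compare that with a typical SOAP flow: fetch the WSDL, generate stubs, build an envelope, and only then make the call.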
You don't think there's a benefit to XML formats that have an XML Schema, so you can do automatic data-binding (at least in Java, with JAXB)?
There are also many visual XML mappers (mapforce, biztalk mapper, stylus studio, IBM and Oracle integration products)... but truly, these are enterprise-expensive tools which I think are only used in the enterprise (with its more complex data and, some say, unnecessary corporate inflexibility).
While it's true that disruptive tools tend to start at the bottom (e.g. with startups) and bubble up, the complex data needs of the enterprise have to be addressed in some way, either by sticking with conventional tools, or the new tools growing in ability. The common consensus seems to be that if JSON's ecosystem developed a schema language, a "JSLT" etc etc, it would be just as ugly as the XML ecosystem (i.e. ugliness is partly due to irreducible complexity) ... and we already have one.
I wonder if the technology development path might be similar to Twitter, iterating quickly with agile, featherweight Ruby, then switching to Java only when scaling demanded a heavyweight solution: start with light JSON when the needs are straightforward; move to XML when complexity demands a heavy solution. For many wildly successful startups, that might not happen, and JSON might remain perfectly fine; just as many startups stay with Ruby.
I think you make a good point: there are definitely complex situations that call for Java-esque and enterprisey tools. However, a simple solution often trumps a complex one when you just want to "get things done"™.
I suppose it boils back down to the most appropriate tool. Are you selling to enterprises? Use SOAP. Everyone else? Use REST/JSON.
Yep, as simple as possible, no simpler. (NoSQL is another example)
I think a central source, like a database, needs to cater for every possible use in general (which the relational model does); but if you're doing something very specific, it's crazy wasteful to do it with full generality.
I am curious if someone can work out how to make a simple API that is also fully general (I think it has to be equivalent to the relational model - but it's early days yet).
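As a sketch of the tension (hypothetical endpoints and data, just to make it concrete): a specific API answers exactly one question, while a "fully general" one ends up being relational select/project in miniature.

    # Specific: trivial to consume, but answers exactly one question.
    #   GET /users/123/orders
    #
    # General: one operation that can answer anything expressible as
    # select + project over the data -- the relational model in miniature.
    def query(rows, where=None, columns=None):
        # Return rows matching `where`, projected onto `columns`.
        result = [r for r in rows if where is None or where(r)]
        if columns is not None:
            result = [{c: r[c] for c in columns} for r in result]
        return result

    orders = [
        {"id": 1, "user": 123, "total": 40},
        {"id": 2, "user": 456, "total": 15},
    ]
    print(query(orders, where=lambda r: r["user"] == 123, columns=["id", "total"]))
    # [{'id': 1, 'total': 40}]

The general version is barely more code, but every consumer now has to think in predicates and projections rather than one obvious URL.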
API churn isn't limited to small companies. The FedEx shipping API is on version 9 and my last conversation with their developer support indicated they are currently ending support for version 5 and supporting versions 6 through 9. I certainly understand that business requirements, security standards, and development practices change and that it doesn't make sense for them to support what they were doing in 1995. However, I'd expect a little bit of what Google is doing [1] in trying to build sustainable APIs. On the other hand, maybe it is a good thing for security and IT practice to have a forced requirement that the software and systems from 10 years ago change.
It's ancillary to the argument, but I do think that the development world needs more chaos monkeys (http://techblog.netflix.com/search/label/chaos%20monkey).
Not just because it's a clever name, but the implementation really drives availability and disaster preparation.
API? Ha! In my day, we didn't have APIs. I started by scraping stock quotes off of web pages, with a daily-updating script of search-macros. They couldn't change their page fast enough to avoid NetProphet!
Before you cast stones, consider that this all started as a side project; we merged with Stockpoint after they noticed our scraping and arranged a meeting.
Anyway, with APIs it would seem useful to have an "API watchdog" automated product that alerts when an API changes behavior, similar to our watchdog for webpage changes when our scraping would fail, or server-down watchdogs, etc.
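Something like this, say (a minimal sketch; the endpoint and the expected fingerprint are placeholders): poll the API from cron, fingerprint the shape of the response rather than its values, and alert when the fingerprint drifts.

    import json
    import urllib.request

    def response_fingerprint(url):
        # Fingerprint the *shape* of a JSON response (its sorted key set),
        # so routine value changes don't trigger false alarms.
        with urllib.request.urlopen(url) as response:
            payload = json.load(response)
        return tuple(sorted(payload)) if isinstance(payload, dict) else type(payload).__name__

    # Hypothetical endpoint; EXPECTED was recorded on a known-good day.
    URL = "https://api.example.com/v1/status"
    EXPECTED = ("status", "uptime", "version")

    if response_fingerprint(URL) != EXPECTED:
        print("ALERT: response shape changed for", URL)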
I still do this kind of thing, and literally thirty minutes ago logged back on to an old server to fix the scraping script to deal with some bad data on the origin website.
It's quite pleasing when it works, and a hell of a lot easier to deal with as you just read the web page and scrape away.
I think that as web pages become more clearly marked up, we'll start seeing less and less of the API mess we've got ourselves into. If websites did what we would hope they would, there wouldn't be a great need for a RESTful aspect to them.
>> With so many startups, there are often several choices in any given category.
And having several choices is a great thing.
When I integrate with an external API, I look at 2-3 other APIs as well. I then write an interface based on the common features and constructs available across the similar APIs, and then I program to that interface.
If the external provider dies, at least I have an easy path to switch to another provider.
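In code, the approach looks roughly like this (a sketch with hypothetical geocoding providers; the interface exposes only the features the candidate APIs share):

    from abc import ABC, abstractmethod

    class GeocoderInterface(ABC):
        # Only the features common to all candidate providers go here.
        @abstractmethod
        def geocode(self, address: str) -> tuple[float, float]: ...

    # One thin adapter per provider hides that vendor's quirks.
    class ProviderA(GeocoderInterface):
        def geocode(self, address):
            ...  # call provider A's API here
            return (37.42, -122.08)

    class ProviderB(GeocoderInterface):
        def geocode(self, address):
            ...  # call provider B's API here
            return (37.42, -122.08)

    # Application code sees only the interface, so when a provider
    # dies, switching is a one-line change.
    geocoder: GeocoderInterface = ProviderA()
    lat, lng = geocoder.geocode("1600 Amphitheatre Pkwy")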
One of the things that we've been emphasizing at Google of late is to ensure that each API has a sustainable model prior to launching it.
By that I mean, evaluating each new surface to verify that it provides value to all parties — the end users, the developers, and to Google (in roughly that order, but all are essential) — and that this value scales as the popularity of an API or platform increases.
Over past years, we've launched a handful of APIs that perhaps met one or two of those criteria, but not all, and learned through practice that we're unlikely to achieve a sustainable ecosystem without first identifying a "virtuous cycle" such that success for each party is complementary to the others.
Nothing pains me more than seeing an API surface go away, so we've been working at getting consistently better at making sure each new launch has strong potential for longevity. Launching a new surface just because it's interesting (or to find out if it's even possible) is obviously tempting, and can be fun and even rewarding in the short-term, but each new API is also a promise being made to developers, and that promise is a commitment not to be taken lightly.
Now sometimes the underlying products go away for good reasons and we need to turn down the corresponding API surface (e.g., the Buzz API was shut off when Buzz was sunsetted in favor of Google+); sometimes we grossly underestimated the potential for abuse (e.g., the old Translate API, the first version of which didn't have quotas or require any real form of registration); sometimes we were able to declare victory and move on (e.g., Gears -> HTML5); and sometimes API functionality was never officially available to begin with, so when the underlying product changes it's impossible to keep the unofficial surfaces intact (e.g., Reader). But in every case, the decision to turn one of these down is a difficult one. Whenever possible we've gone to great lengths to keep the surfaces stable and available as long as practical, to give developers time to find new solutions (6 months to a full 3 years is the norm).
Going forward, I think the additional up-front discipline is a win for all of us. It's a bit more work on our side to play out all of the possible scenarios (what if no one uses it? what if everyone uses it? what if our fiercest competitors start using it?), but in doing so I have even more confidence that when we launch new platforms we're going to be able to stand by them for the years ahead.
Personally I'd still bet (heavily) on third-party services, both from big established companies like Google, Amazon, Microsoft, etc., and from smaller and upcoming startups. Though I'd certainly ask myself the questions: is there a virtuous cycle in the API that I depend on? Can the service provider reasonably sustain it for the long haul? Does the API provider benefit when my usage increases? Do I benefit if my own user numbers grow? Etc. Contingency plans are still a must — betting your entire company on getting something free from someone else forever is probably a bad bet — but I'm bullish on the future of web services in general, and plan to personally stay in this space for a long time to come.
Some day I should give a talk on lessons learned the hard way...
"Perl is another example of filling a tiny, short-term need, and then being a real problem in the longer term. Basically, a lot of the problems that computing has had in the last 25 years comes from systems where the designers were trying to fix some short-term thing and didn’t think about whether the idea would scale if it were adopted. There should be a half-life on software so old software just melts away over 10 or 15 years." -ACM Queue A Conversation with Alan Kay Vol. 2, No. 9 - Dec/Jan 2004-2005
As I pointed out on Gabriel's blog, this quote is six (6) years old at this point. Even if it was arguably relevant at the time (Alan Kay bearing witness to an internal transition period in the Perl community), I'm not sure how it's relevant now.
Without specifics on which ideas Perl has that you feel don't scale, and how ... this is merely trolling by appeal to authority.
When Alan Kay made his remark, Perl had just hit its highest mark in popularity. And it just this year hit its lowest mark. So I think you've been holding your graph upside-down.
I'm sorry, but I completely fail to understand how either of these remarks has any bearing on scalability.
How does the Perl 6 project's ability (or lack thereof) to deliver a project meeting your undefined criteria for a "usable version" have any bearing on the scalability of the ideas involved in either it or Perl 5?
Second, in what way does a self-described popularity contest have bearing on the scalability of a language?
I would also love to hear how you would explain http://www.tiobe.com/index.php/paperinfo/tpci/Lisp.html which Alan Kay also talked about during that interview ... far more than the single quote he made about Perl. Note that according to the scale on both the Perl graph and the Lisp graph, Perl's lowest point is roughly equal to Lisp's highest point. The obvious explanation to me is that TIOBE score has almost nothing to do with Alan Kay's comments or opinions as expressed in that interview. So I don't understand what connection you're trying to imply exists here.
Perhaps you had your book on logical inference upside down?
TIOBE is the phrenology of programming language discussions, and making changes in an established programming language is exceedingly difficult even when you intend to replace the existing version. See also Python 3000.
The Church-Turing thesis is not a proof; it is a definition of computability. (The thesis is that it is the reasonable definition. For all I know, someone could come up with a new kind of computation that is stronger than Turing machines, and then we would have to redefine our notions.)
Gabriel's and everyone's comments are great... definitely an area that needs constant discussion.
I can't think of any industry where your vendor / supplier guarantees it will be there forever and never change.
Even with contracts and commitments, the world moves forward.
We are just moving forward at faster and faster rates. I think there is a lot of opportunity here for API owners to provide copycat services for redundancy.
Also lots of opportunity to really keep interfaces dead simple and easy to integrate, adapt and change.
If there's demand for an API, one would expect competitors to arise; or at least, for someone to fill the niche left by an abandoned/bankrupted/acquired API... (unless it's a non-feasible business, e.g. giving too much away)
Then the problem is the difficulty of integration: how hard is it to switch to another provider? In a mature industry, standards eventually arise that make switching easier, but not in the early days.
Just a side note from a non-native speaker: shouldn't it be "API half-lifes" (as in "half-life" in terms of radioactive decay) instead of "half-lives", or am I missing the idea of the headline?
Yeah, my point wasn't so much that there are not exceptions, but that there are frequent patterns in the English language.
Rather than throwing up your hands in defeat and saying, "The English language is totally inconsistent; the only way to learn it is to memorize each arbitrary word form," the key is to familiarize yourself with these patterns so that at a minimum you can make a better guess next time.
Hopefully, WA learned today not just that the plural of "life" is "lives" but that, in general, when a noun ends in -f or -fe, the plural usually changes the ending to -ves. So if tomorrow he sees a sign that says "5 loaves of bread for $1" it won't catch him off guard like the title of this submission did.
The last example is a proper noun(-phrase) and wouldn't necessarily follow the "standard" pluralization rules anyway. If you think about it, you're not making "Leaf" plural; you're making the noun-phrase "Maple Leaf" plural, and who knows what rules apply then.