Basically you put your DO API key in your Vagrantfile, and then running "vagrant up --provider=digital_ocean" spins up a new Ubuntu VM and runs your Chef/Puppet provisioner code on it. It's awesome!
It uses rsync to mirror your Vagrant project directory to /vagrant, which is actually a better workflow for some use cases (e.g. hosting) than VirtualBox shared folders.
NewsBlur is mentioned in the article, but I cannot overstate how positive my experience with DigitalOcean has been. I'm stressing their biggest machines and spinning up dozens of differently configured boxes and they've handled it swimmingly well. Between the price and the performance differences, I'm glad I switched.
I also have a shadow site running in parallel on EC2, so I get to compare dollar for dollar. EC2 is strictly an apocalypse host at this point.
Network performance is fine; network reliability is the thing you need to watch out for. I'm still working on an automated script to detect longer-than-expected timeouts through HAProxy and then issue a VM reboot through DO's API. It happens often enough that when I experience intermittent downtime, I turn to HAProxy's stats before looking at my Munin graphs. But these things will resolve and ease in time. Bleeding edge is well named.
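A minimal version of that watchdog might look like the sketch below. The droplet ID, credentials, and health-check URL are all placeholders, and the v1-style reboot endpoint is an assumption based on DO's API docs of the time, so treat it as a starting point rather than a drop-in script:

```shell
#!/bin/sh
# Hypothetical watchdog sketch. DROPLET_ID, CLIENT_ID, API_KEY, and the
# health-check URL are placeholders; the reboot endpoint shape is an
# assumption from DO's v1 API docs.

# Build the reboot URL for a given droplet (assumed v1 API shape).
reboot_url() {
  echo "https://api.digitalocean.com/droplets/$1/reboot/?client_id=$2&api_key=$3"
}

DROPLET_ID=12345
CLIENT_ID=your_client_id
API_KEY=your_api_key
CHECK_URL="http://your-droplet.example.com/health"

# Treat anything slower than 5 seconds as "down" and ask DO for a reboot.
if ! curl -sf --max-time 5 "$CHECK_URL" > /dev/null; then
  curl -s "$(reboot_url "$DROPLET_ID" "$CLIENT_ID" "$API_KEY")" > /dev/null || true
fi
```

Run from cron every minute or two; in practice you'd probably want to require several consecutive failures before rebooting, so one dropped packet doesn't bounce the box.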
We are using a Cisco-based network and working closely with them to build a resilient network architecture. We've already encountered a number of bugs in Cisco's platform and are working closely with their TAC to escalate and resolve them permanently, not only for DigitalOcean but for all providers worldwide.
Is anyone else bothered by the vague, overbroad restrictions in DigitalOcean's terms of service? Particularly:
"2.5 You agree that you will NOT use DigitalOcean's services to: [..] Transmit, distribute, post, store, link, or otherwise traffic in information, software, or materials that is offensive, abusive, inappropriate, malicious, or detrimental"
Almost anything can be offensive to someone. I compared the Linode and AWS terms of service and neither has anything like this. While none of my own content is by any means extreme, the presence of this provision seems like a red flag and has kept me from switching to DO.
> I compared the Linode and AWS terms of service and neither has anything like this.
"No Illegal, Harmful, or Offensive Use or Content" is literally the first point in the AWS ToS and is all about what is "offensive" http://aws.amazon.com/aup/
> You may not use, or encourage, promote, facilitate or instruct others to use, the Services or AWS Site for any illegal, harmful or offensive use, or to transmit, store, display, distribute or otherwise make available content that is illegal, harmful, or offensive.
That's pretty standard boilerplate language for a ToS. It essentially protects DO from someone suing them because a client is doing something offensive (like, say, running a mail server to email the Goatse pic to as many people as possible).
Given how cheap the price is, I don't think they have a sustainable business model. As far as I can see, there is nothing technically different from their model that allows their service to benefit from any type of economies (of scale/scope) in order to stay competitive. So then my overall question is - how can they offer better service and better hardware (i.e. more expensive) at a cheaper rate? I'm genuinely curious.
I'll stick with Linode until the company is more established.
Hi, I'm the CEO of DigitalOcean and we are not operating on a fantasy revenue model. All of our unit economics are positive and we are certainly here for the long haul with a sustainable business model.
To answer your question directly, and without knowing the exact cost structure of our competitors, I would have to say that we are generating less margin per unit, but overall we are sustainable and growing healthily.
Modern machines is my guess. Many other cloud providers and hosting companies are running machines from a few years ago, and I don't believe their pricing has kept pace with the hardware, so DO is effectively arbitraging the price difference between 2010 machines and 2013 machines. That, and their API makes it really easy to spin up more machines than you need, leading to some dead weight that doesn't harm your neighbors.
DigitalOcean is my fourth host in four years. I just spent two years with a company inexplicably named Reliable Hosting Services before I switched. Switching hosting providers is surprisingly easy, since I just replicate out my DBs and then eventually switch the primaries over to the new host.
My gut tells me that DigitalOcean has at least a couple years until I have to even think about comparing to other hosts again.
They offer the same amount of RAM for less than Linode, but they offer fewer cores, which makes me suspect they're putting more VPSes on a single host. I haven't run any benchmarks yet, but I think Linode gives you more CPU and DigitalOcean gives you more memory. It's a little more complicated than "better service and better hardware at a cheaper rate."
To add a little bit of anecdata, I recently moved from Linode to DigitalOcean.
On Linode I was getting jammed on I/O; on DigitalOcean I am jammed on CPU.
I'd be happy to give up half my RAM to get twice the cores, actually. I run a modest Wordpress network (http://ozblogistan.com.au). Once MySQL is humming, most of the CPU time is spent on various copies of PHP running the increasingly bloated Wordpress codebase.
(Yes, I've used opcode caching. I've never had good experiences with any of the major ones).
>(Yes, I've used opcode caching. I've never had good experiences with any of the major ones).
Just wondering, what were your experiences? Installing APC is basically apt-get install php-apc (if you're on Debian/Ubuntu), and that's about it unless you want to increase the memory limit. This will greatly reduce CPU usage.
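For reference, the whole dance on Debian/Ubuntu is roughly the following (the package name and ini path are the Debian defaults of that era, and the 128M cache size is just a suggestion, not a recommendation from the thread):

```shell
# Install APC, the PHP opcode cache.
sudo apt-get install -y php-apc

# Optionally raise the cache size; the default is often too small for a
# large codebase like Wordpress. (Older APC versions want a bare number
# of megabytes instead of "128M".)
echo "apc.shm_size=128M" | sudo tee -a /etc/php5/conf.d/apc.ini

# Restart PHP so the cache takes effect (apache2 or php5-fpm, as appropriate).
sudo service apache2 restart
```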
I think the trick (as with most VPS providers) is that most users don't use their full memory, CPU, or even disk allocation all the time. Full usage, if it ever happens, usually comes in short bursts.
Most VPS providers have a rule that you're not allowed to use 100% of your resources 24/7/365 or they can cancel your account (I don't know if this is true of DO). The trick is that if you have a lot of customers, you can spread the cost across the users who aren't using their full resources (which happens to be most users). At large scale you can make a decent profit with this kind of setup, even if your revenue per account is low.
I didn't mention bandwidth because at the datacenter level, bandwidth is the least expensive component, but it's also true that most users will not use up 1 TB per month.
Most of this is speculation on my part, and a bit is from experience.
I don't think "overselling" is the right word here; it implies degraded performance from packing in too many VMs. My understanding is that all virtualization platforms allow resource sharing, even KVM; with KVM, though, you can share memory but not disk space (at least not easily), which is why every KVM offering out there has a very small disk allocation. This works great for DO: because they're offering SSDs, the VPSes are small in size regardless of the virtualization type, so it fits perfectly.
I think the whole idea of a VPS is that you share resources, so there's nothing wrong with that; it's the overselling that should be of concern, and I have no reason to believe that DO is overselling. Their performance is pretty good.
The point I was trying to make, referring to the OP, is that a VPS provider like DO can be profitable even at such a low price because of the way VPSes work. Even with very small revenue per account, if you have enough customers you can make good money; that's why the math works. Most other KVM offerings are so expensive because they don't have that tipping-point scale to be profitable at the same price. Hosts like Linode get away with charging more because of reputation: they already have a good thing going, and unless they start losing customers drastically, they don't have to change their pricing model.
1 core/512 MB/20 GB SSD => $5. They state that they are running hexcore hardware, which translates to 6 * $5 = $30 per month per server. So that's probably not making much money for them.
But it looks better if they manage to sell the higher plans: 4 cores/8 GB/80 GB SSD = $80, and 2 cores/4 GB/60 GB SSD = $40. These fit easily onto a single server, and that's then $120 per month per server. Compare this to, for example, some of the cheaper dedicated hardware providers like Hetzner [1], who sell similar dedicated hardware for something like $60-70 per month.
Based on this it could be a sustainable business model, at least if they have enough volume (which seems to be the case, considering the Netcraft article).
(I assumed they are providing dedicated cores and using single-socket servers. But it might make more sense to use relatively cheap multi-CPU servers, since with these plans the bottleneck seems to be the number of cores rather than the memory.)
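The per-server arithmetic above is easy to check:

```shell
# Revenue per hexcore host under the plans discussed above.
all_small=$((6 * 5))    # six 1-core $5 droplets filling the box
mixed=$((80 + 40))      # one 4-core $80 plan plus one 2-core $40 plan
echo "all-small: \$${all_small}/mo  mixed: \$${mixed}/mo"
```

So a host full of $5 droplets grosses $30/month, while the same cores sold as larger plans gross $120/month, which is why the bigger plans carry the model.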
It isn't a dedicated core. Linode offers a $20/mo 8-core instance. That doesn't mean there aren't others sharing those cores. If Linode were offering 8-core Sandy Bridge E5-2670 servers with all 8 of those cores dedicated to you for $20, that would be quite a steal.
The RAM is guaranteed and the processor is shared. Amazon has been using a fixed-compute model on everything except their Micro instances. Other providers share cores.
Right now, DigitalOcean is limiting the number of cores exposed to smaller instances. They may change that in the future (there have been hints on their feedback site that they're looking into changing this so that smaller instances could have better burst capabilities).
I've been using DigitalOcean to host a GitLab instance for a while now (20GB Droplet) and it's been working great.
I ran into two small issues so far.
First of all, for some reason the traffic from my home connection is routed through New York (I'm in Europe using a Droplet in Amsterdam), resulting in a ~150ms RTT (I usually get 30-40ms to Amsterdam). Since this only happens with my home connection and it's fine in the office, my ISP might be the one to blame for this. Anyway, I've opened a ticket, so maybe they can fix it.
The other weird thing is how they handle kernels/kernel versions. You can't simply update your kernel through apt (or anything like that) - you have to select the kernel version from a gigantic drop-down (containing multiple kernel versions for every distribution you can run) in their UI. Plus, adding new kernels seems to be a manual process for them, since it takes a while until updated kernels become available.
I'm sure they're working on a better solution for that though. All in all, they offer great service for an even better price, so I'm happy with my choice.
Actually, reading the comments further might give you a more somber impression of what "working on it" means in this case. That said, I'll stay optimistic, since my work doesn't depend on it and I'm happy to have another super cheap Arch instance.
I moved everything I had at Linode over to DigitalOcean as well and performance has been excellent. I loved Linode's stability but their response to the recent security incident really pissed me off. On DigitalOcean, I have seen briefly the network issues mentioned by others, but things have been more stable the last month or so.
That said, one production server I work with on RackSpace was completely unavailable yesterday for nearly 5 hours, so paying more money doesn't necessarily give you better uptime. Just that one outage put DigitalOcean ahead in availability for my systems over the last 6 months.
Is there any reasonably simple framework for replicating across cloud providers?
Just received the following email from DigitalOcean:
Thank You!
Today is a very special day for us. As you may or may not have heard, Netcraft released a recent report that details the history of our rapid growth in comparison to other cloud hosting providers. We are extremely humbled by this recognition and cannot thank our customers enough!
As we have grown over the past two years, we are continually indebted to our customers for your amazing support and feedback. Without you, DigitalOcean would not even dream of being where it is today.
According to Netcraft, over the past six months we've grown 5,084.64% in web-facing computers (instances) and are responsible for 10% of the total growth worldwide. DigitalOcean's more than 50-fold growth makes it the 72nd largest hosting provider in the world by web-facing computers. Last December, we were number 549; even last month we were still 102nd.
As your projects and endeavours scale, we will be there with amazing and reliable service, new features, and an unrestrained drive to create the simplest, strongest cloud that you can be proud to call home.
I switched my personal projects from Linode to DigitalOcean for the pricing, but the performance is out of this world. The pricing allows me to run 4-5 of my projects for the same cost where I was at times maxing out the low-grade Linode box that I had.
I later switched all my office's projects there from AWS and cut costs by over 85%. I also saw a significant performance bump allowing me to reduce the number of machines that I had active.
I cannot overstate how pleased I have been, their APIs compete with EC2, their performance is better than what I have seen on any competitor.
I am one very pleased customer. My _only_ complaint is that they aren't delivering on features at the rate they previously were. Though, after reading this article, I can understand a little better as to why.
The growth has been fantastic and we are super excited. Unfortunately managing it requires as much work as engineering new features. The good news is we've been hiring and growing the team and we're hopeful we'll get back to our regular development schedule in the next 2 months after we've gotten everyone up to speed and worked out some of the speed bumps. =]
I have been using DigitalOcean as a sandbox for playing with Docker, and it has been great. I can spin up a server in less than a minute, install the Docker dependencies, and be up and running in no time. I do what I need to do, and then I spin it down if I don't need it anymore.
I wrote down my notes on how to get Docker running on Digital Ocean, if anyone wants to play with it. There is even a promo code for a $10 credit for new signups.
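For anyone who wants to try the same thing, the short version on a fresh Ubuntu droplet is roughly this (the get.docker.io convenience installer was Docker's official quick-install route at the time; run as root or with sudo):

```shell
# Install Docker via its convenience script (fetches the right packages
# for your distro; inspect the script first if piping to sh worries you).
curl -sSL https://get.docker.io/ | sh

# Sanity check, then start a throwaway interactive container.
docker version
docker run -i -t ubuntu /bin/bash

# When you're finished, destroy the droplet from the DO control panel or API
# so you stop paying for it.
```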
I had a couple of issues when I first tried using them, but I was able to get help pretty quickly on a Saturday morning no less. I haven't used them for anything production quality yet, I have most of that stuff on EC2 and Rackspace right now, but I hope to start moving some smaller projects to DO as they come up.
Someone else mentioned this already as well, but the SSD is so nice compared to EC2's EBS, so if you are doing anything with lots of I/O then it is a must have.
The performance/price is good, but the server is not reliable at all; Pingdom logs many 1-minute downtimes during the week. I think they have network problems, maybe because of the growth? Their support is very good though. I once got a $50 credit because the machine/cluster hosting my VPS was broken...
I've also had the pingdom monitoring on a droplet for the last 2 months and the only downtime I've been alerted to was my fault. Maybe it's on your end?
I'm using a 1gb droplet, ubuntu 12.10 with a LEMP stack, PHP5 and MySQL 5 for a low traffic site (Xenforo-based forum) as well. Maybe the difference is nginx vs Apache?
edit: I also added an additional 1gb as swap space, which helped prevent my mysql instance from falling over with low memory at times.
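For anyone wanting to do the same, it's only a few commands (the 1 GB size and the /swapfile path are just the choices I made; adjust to taste):

```shell
# Create a 1 GB swapfile on the SSD and enable it.
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persist across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Since the backing store is an SSD, hitting swap occasionally is far less painful than it would be on spinning disks.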
Mine is in New York. Pingdom is configured to 1 minute resolution and I had 54 minutes of downtime in the last 30 days. Maybe I need to take a snapshot and spin up another instance. Thanks for your reference point.
No problem. I've been quite happy with them so far. I was using BuyVM for my side projects, but was getting annoyed with dealing with constant downtime, and other issues. I was starting to consider Linode, but ended up going with DigitalOcean due to the price. I really hope they can keep up the reliability that I've been experiencing.
About 4 months ago I moved a mail server from a 2GB Linode instance (london) to a 4GB DO instance (amsterdam) and overall I've been very happy with them. No major outages, the cost saving is huge and I/O performance is great.
The only negative so far is network reliability. Every once in a while my monitoring stuff fails to establish a connection to the DO box. It's transient though and usually connectivity comes back within 2 to 5 minutes. The box is running a low traffic mail server so it hasn't been a big enough issue for me to worry about yet. Other than that, I'm a happy bunny.
I switched to these guys after my reserved instance ran out at AWS. I was paying $50 a month for EC2 small, which was super slow. Since switching to DigitalOcean, not only am I paying $10 a month, but my site feels much much faster now. DO wins because IO is very fast on the SSDs. By comparison, Amazon's EBS is glacial.
I tried to make the switch, but found the network to be too unstable.
I have several servers on Linode that communicate over the internal network (PostgreSQL, Redis, Memcache, Beanstalk, etc.) and a bunch of app servers. If their line of communication is broken, nothing works.
Now I only use my server at DigitalOcean for routing traffic when I am travelling.
Just an anecdote, but I recently launched a little side project and DigitalOcean crushed EC2 and Rackspace Cloud when I did some benchmarking. The low cost is appreciated but was NOT a factor in my decision.
EC2 and Rackspace have a greater range of service than all the other guys (storage on demand, snapshots, private networks etc...), but at a very significant price difference.
I've played with DigitalOcean some. But I've kept my main VPS on Linode, and also recently recommended Linode for a group that I'm peripherally involved with.
Using SSDs for multi-tenant virtualized servers seems like a good idea to me. My intuition on this may be wrong, but it seems to me that the lack of variable seek time ought to lessen the variability in I/O performance that one hears so much about.
However, I have the following reservations about DigitalOcean:
1. No IPv6 yet AFAIK. I figured that would be a priority given the scarcity of IPv4 addresses, particularly in Amsterdam.
2. Not all resource allocations are proportional to the price or amount of RAM. On Linode, a 4 GB VPS is 4 times as much as a 1 GB VPS in every way. On DigitalOcean, a 2 GB VPS only has 2x as much CPU, 2x as much storage, and 3x as much bandwidth as a 512 MB VPS. Don't get me wrong; the prices are great. It just doesn't seem like a logical way to allocate resources from a pool.
> My intuition on this may be wrong, but it seems to me that the lack of variable seek time ought to lessen the variability in I/O performance that one hears so much about.
Take this with a grain of salt, as I'm limited in my experience, but I believe that's not entirely the case. Modern SAN controllers are optimized for very consistent latency on spinning platter disks, even under heavy load. The big benefit that I see in SSD-backed SANs is increased I/O bandwidth.
I've been very happy with my testing on Digital Ocean; the servers are fast, they remain up as much as any other provider, and the interface/service is great, and is improving every month.
The only sore spot is that two times I've had intermittent dropouts—once caused by someone DDoSing another DO site, and the other time for unexplained reasons. (And I know that the dropouts weren't just me, since I was monitoring things with https://servercheck.in/, which runs partly on Digital Ocean VPSes).
I switched all my stuff from Linode and other previous hosting providers to DigitalOcean about 6 months ago and haven't been happier. Great group of guys there too.
I lose connectivity all the time. Tried to contact their support, they could not do anything. One solution for me is just to use cron job and reboot the VM once the problem is detected.
Support is quick to reply, but often there is nothing they can do. I've tried contacting them regarding both technical problems and billing issues, and on both occasions they couldn't do anything to help.
I switched from prgmr.com to DO recently simply because of the additional RAM (double) for the same price. I really like prgmr.com, and have been a customer of theirs for years, but the new machine is simply much faster (and the network latency is also much better).
Of course, this would probably be true switching to a new VPS provider after a few years of being on the same box.
One thing that's unique to DO in this regard, though, is that their onboarding process is super smooth. I was up and running very quickly.
Recently tried out DigitalOcean for a couple of small projects and already noticed some network reliability issues. My droplet was inaccessible for a couple of hours (at least) yesterday. Fortunately it was a non-production project, and they were able to restore it within about half an hour. The jury is still out for me, although I suspect that reliability will improve with time. I'm keeping production at Linode.
I noticed the same thing when I initially switched. I actually had them lose the box I was on, and the backups associated with it. They apologized and indicated it was an issue at the datacenter that they didn't have much control over. They gave me a really nice bit of credit on my account and I've since had no problems. I'm a fan of the pricing as well as their support.
I'm amazed you stuck with a host that lost your data. That would be a complete deal breaker for me; I've never lost so much as a byte with Linode over six years.
To be a bit more specific, I was moving some sites over from linode as a test to see if they would perform the same on DO. So effectively they only lost some test stuff. Still data loss though, like you said.
There were three images that didn't regenerate host keys on their own when they were missing on the initial boot. We've updated them today, so it should all be cleared up now.
We have started with using DO for our test machines / intrusion detection boxes and were so happy with price/performance/ease of use that we ended up migrating most of our infrastructure over. We now have 45 boxes hosted on DO and couldn't be happier. The network reliability has been phenomenal so far compared to some of the other providers we've dealt with - Rackspace, SoftLayer, VPS.NET, etc.
I've been on DO for several months now and love them. And as far as reliability, I've had binarycanary pointed at my sites for about a month now with one outage that lasted about a minute and then resolved itself. To me, for the price/benefits, that's amazing.
One other thing people rarely mention with hosting, that's SUPER important to me, are the FAQs/Docs, and DO really comes through in that area imo.
Thanks so much for the shoutout about the docs. We are very focused on building up our developer community (with our IRC channel #digitalocean, forum, and tutorials) and growing as an educational resource. If you ever have any additional article requests, please feel free to send them to etel@digitalocean.com.
My one gripe with DigitalOcean is that their customer support leaves a great deal to be desired. For days I've been unable to replicate images across regions (any droplet I try to create gets in a permanent stuck state and can't be killed). I filed a service ticket days ago and was told I'd receive a response shortly but have still heard nothing. I've even sent replies pleading for some recognition that my issue is being worked on, but have received no response and no indication that they are working on my ticket. This does not bode well considering that creating a droplet from an existing image is a major feature of their product, and is not properly working.
While I have liked DO when it works and their prices are great, it comes at a cost. In my case that cost is their product not properly functioning and their team seemingly having no intention of fixing my issue.
As someone who works in (completely unrelated) support, the numerous emails may be a contributing factor to your problem. Most support is done using last contact as the priority. When you send a new email, you're resetting that counter and bumping yourself to the bottom of the queue.
This was a problem we had with a /particular/ support system (Kayako) and we've since resolved it (with ZenDesk) but if they're using something silly like Kayako, that might be the reason.
Also, if they have a phone number, use it.
Not justifying what they do, just trying to help you work them out.
Hmmm, that might explain why Pebble (the smart watch) support had this super passive aggressive sounding "if you keep emailing us about the same issue, we'll put your case to the bottom of the list" instructions.
Not the message you really want to be sending your already disgruntled customers - "Do I mail them and check to see if they even got my email which hasn't had any response in two weeks, or will that just put me _another_ two weeks further down their already abysmally slow response queue?".
I am curious if there are any IOPS comparisons between DigitalOcean and real hardware (spinning disks)? Currently we run our Postgres instances on EC2, but are looking to move to get better IOPS. The question is: SSD-based VMs or real hardware? Cost is also a factor which is why DigitalOcean is a contender.
Thanks, but those are not what I am looking for. Linode and AWS run on VMs just like DO. The question is not SSD VM vs. disk VM; it is SSD VM vs. disk non-VM (real hardware).
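One way to get apples-to-apples numbers is to run the same random-I/O benchmark on the SSD VM and on the bare-metal box, for example with fio. The parameters below are just a reasonable starting point for a Postgres-like workload, not a rigorous methodology:

```shell
# 4k random read/write mix with direct I/O, bypassing the page cache so
# you measure the storage rather than RAM. Run identically on each host.
fio --name=randrw --rw=randrw --bs=4k --size=256m \
    --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
    --group_reporting
```

Compare the reported IOPS and latency percentiles across hosts; for a database, the latency distribution usually matters more than the peak throughput.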
I did a thorough analysis recently and found that catalysthost.com gave me the best value for money. I am using a "Trenta" OpenVZ Ubuntu 12.04 machine for $15.99 per month and am quite happy with the performance, including network performance. This is the spec of the VM:
Is there a SKU for more disk-heavy workloads like a box running Postgres?
I'd love to be able to get more disk space (especially since you guys have SSDs, this will make PG very happy) without the extra CPU and memory. Is that currently possible with Digital Ocean?
Heroku is good because they take sysadmin/DBA headaches away from you, but I wouldn't call them cheap.
My take on this is that if you want or need to run your own database with a large amount of storage, there are three solutions:
- run it in house, which means you now need a redundant link to the internet, server class machines (redundant power supply), etc, etc, etc... Unlikely unless you already have your own data centre
- use a VM to which you can add storage (EC2, Google, etc.). That is expensive (~$0.10/GiB) but typically very reliable (they use redundant physical storage) and flexible (you can move your storage around, rebuild VMs, etc.).
- use a colo site, or rent a cheap physical server. You get a lot of space for a very reasonable price, but have all the headaches of physical hardware (storage is only as redundant as you make it; no hardware mirroring/RAID'ing; if the storage fails you've lost your OS as well and now have to restore from backup; changing OS means renting a second machine, installing, transferring data, etc.).
Fair enough, thanks for explaining that. Seems like the EC2-like route makes the most sense if you're most budget and manpower constrained, even though the performance will be comparatively pretty abysmal.
I did some testing of .NET on Windows vs mono on Linux and unfortunately mono is just too slow. For the money saved on licenses more would have to be spent on hardware. Licensing is a pain though. Working out what you need and how to pay is non-trivial.
I like the idea and concept of DO, but they don't support non-Linux which is pretty lame considering they are using KVM which can run a bunch of different stuff. I'm also curious how they do volume management, but these are just technical details :)
I'll also point out that a regular FreeBSD/NetBSD network install can be done in under 55 seconds, so that's not really a major feat when you're just copying images around.
But, they've productized it all very well so I'm happy to see someone taking a chunk out of EC2.
I'm currently using EC2 with RDS primarily because RDS provides major benefits to a non-DBA like me.
Given that I would prefer to keep using RDS, and considering network latency etc., any thoughts/advice on how DO-RDS would perform compared to EC2-RDS, with everything in the N.CA/SF zone and comparable EC2/DO instances running my app server?
Alternatively, is there an easy way to benchmark RDS I/O from both EC2 and DO, both running Linux, which could help in making this decision?
I first found digital ocean on lowendbox when they were opening and even though I only host side projects on there, I've never had a problem with them.
I was never aware of how their performance compared to Linode and Rackspace until that one comparison on HN several months ago. That being said, given their expansion from 10,000 instances when I first signed up to over 200,000 today, I have been thoroughly impressed with their servers and overall operation.
Happy DO customer; really nothing bad to say about them (yet). Been with them for a couple of months now. Reminds me of Slicehost, but with a much, much better price.
I'm about to ditch DigitalOcean because I keep experiencing long lags in the ssh terminals. Sometimes when doing things that stress the terminal, like tail-ing a busy log file or running a full-screen app like top or vim, it will hang for minutes.
I'm running GitLab (which is a rails app) with postgres (small database, so it's not using a lot of memory) on a $5 droplet. It's using about 50-75% of my memory. Here's what free says:
Depending on your apps' memory usage, you might be able to put two apps on a $5 droplet (I'd recommend adding a swapfile so you don't run out of memory at some point - since they're using SSDs, swapping is not that bad.)
GitLab is using puma (configured to 1 worker/8 threads in my case), and I'm running nginx in front of it. Sidekiq seems to be eating the biggest chunk of my memory (~20%), so I might be able to save some memory by tuning that.
Same as the other reply. I'm using Unicorn behind Nginx. Sidekiq is eating the biggest chunk of my memory too, and I haven't tried tuning it yet. Haven't "needed" to.
Probably, as the $5 instance is only 1 core. You might be able to squeeze more on if you aren't using much memory, or don't mind adding swap space (ssd swap is fast enough for me...)
Inspired by the article, I signed up and spun up a 512 MB droplet. It eventually came up (although it took a lot longer than the advertised 55 seconds). It felt very sluggish, or maybe they run a lot of startup scripts on boot? I ran "top" and saw a whole bunch of unfamiliar-looking commands using 96% CPU. So I destroyed the droplet and wanted to try an 8 GB/4-core droplet. The interface says it was created in 173 seconds, but now, 10 minutes later, I still can't ssh or ping the given IP. So while the pricing is definitely good, and I like the idea of SSD-only storage, my first half-hour impression is that the service seems a bit rough around the edges and not something I would want to bet my company's hosting on.
Just to make sure you didn't miss a simple step (like I have before...) after you spin up an instance, they email you the ssh login details. Did you check your email?
I think Kahneman has a name for this common fallacy in his book: it's the one wherein small samples always appear to change radically because they are... uhm... small.
Incorrect. If you were right, then every provider that was small would be skyrocketing up the usage charts. However, that's numerically impossible. Many smaller providers simply die off or never gain any traction.
This time next year, they'll likely be in the top 15 to top 25 on that same list.
Precisely. When a smaller provider goes from 1000 to 50 users, it's a "ZOMG 95% drop!" even though only 950 users left, which would be line noise for a bigger provider. You're adding another example to my point.
Naturally, I didn't say that all small numbers always change drastically. But the changes that do happen appear more drastic than they would with larger numbers.
It's an artifact of small figures and small sample sizes generally. Anything which grows geometrically will grow progressively more slowly over time when expressed as a percentage.
They are certainly generous with promos. Just today I got an unsolicited $5 credit and I haven't even burned through the original credit I got for signing up via Twitter.
Do you not have a domain of any sort? It would seem trivial to point TESTAPP.degachevdomain.com to the IP address of the server. Else, just go to the IP.
I have a bunch of domains, but I'm using DO for temporary apps, and I would like the DNS records to be automatically created/destroyed along with the droplets.
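DO's v1 API did expose domain-record endpoints, so in principle this can be scripted alongside droplet creation. A sketch (the exact endpoint path is an assumption from the v1 docs of the era; IDs, keys, and names are placeholders):

```shell
# Hypothetical sketch: the "records/new" endpoint shape is an assumption;
# CLIENT_ID and API_KEY are placeholders.
CLIENT_ID=your_client_id
API_KEY=your_api_key

# Build the URL that would add an A record to an existing domain.
# $1 = domain, $2 = subdomain to create, $3 = droplet IP
record_url() {
  echo "https://api.digitalocean.com/domains/$1/records/new/?client_id=$CLIENT_ID&api_key=$API_KEY&record_type=A&name=$2&data=$3"
}

# After "droplets/new" returns the new droplet's IP, you'd call something like:
echo "curl -s '$(record_url example.com testapp 203.0.113.10)'"
```

The teardown script would do the inverse: look up the record ID and hit the corresponding destroy endpoint before destroying the droplet.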
I think this is a great idea. In many ways they're positioned to become a more flexible/performant version of Heroku. On the complexity scale from Heroku to AWS, they sit right in the middle, and it will be interesting to see which direction they tend toward as they add features. Personally, I want a version of Heroku with the price/performance of DigitalOcean; in other words, by default you could use it like Heroku, but it would allow you to SSH into machines and customize your software/configuration when you ran into problems. Maybe DO could offer a load balancer that would make it really easy to deploy horizontally scaled application servers. A nice DSL to describe the node configuration would go a long way... I suppose this is kind of what Vagrant is solving, but they're approaching it from the development perspective, not deployment.
I suspect such data wouldn't be that useful even if it included model names because by the time you have good data on a model it's old enough that it's no longer price competitive.
Speaking of SSDs, I noticed that Netflix is already deploying the brand new Crucial m500, although their monkeys probably care more about cost than reliability.
Sure, information on particular models is pretty useless, but dredging the data might turn up model series that tend to outperform other series/other brands.
I believe "meteoric" implies a quick fall. A light that burns brightly, but disappears quickly. Not sure the data warrants the use of that adjective in this context.