Instant Cloud – SSD Bare Metal Servers (instantcloud.io)
171 points by edouardb on April 10, 2015 | 79 comments



I made a habit out of measuring the disk performance of cloud servers, so here it is:

  ubuntu@instantcloud:~$ dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
  1024+0 records in
  1024+0 records out
  1073741824 bytes (1.1 GB) copied, 11.0101 s, 97.5 MB/s

  ubuntu@instantcloud:~$ sudo hdparm -tT /dev/nbd0

  /dev/nbd0:
   Timing cached reads:   2180 MB in  2.00 seconds = 1090.44 MB/sec
   Timing buffered disk reads: 268 MB in  3.02 seconds =  88.85 MB/sec


How does that compare to other cloud servers?


Tested on a random instance, without any I/O-intensive application running.

VULTR 2 GB INSTANCE

    dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync

    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 8.11451 s, 132 MB/s
    
    
    hdparm -tT /dev/vda
    
    /dev/vda:
    Timing cached reads:   25106 MB in  2.00 seconds = 12568.51 MB/sec
    Timing buffered disk reads: 496 MB in  3.02 seconds = 164.16 MB/sec

DIGITALOCEAN 2GB INSTANCE

    hdparm -tT /dev/vda
    
    Timing cached reads:   8926 MB in  2.00 seconds = 4468.52 MB/sec
    Timing buffered disk reads: 542 MB in  3.00 seconds = 180.43 MB/sec


    dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync 
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 32.0005 s, 33.6 MB/s


I'm not sure how good the test is, as it depends on the I/O load of the host where the VM instance is running.

This is a 1 GB Debian Wheezy instance with a regular HDD at Memset [1]:

    dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 17.7142 s, 60.6 MB/s
I checked the host and its load is average. DO's 33.6 MB/s sounds poor.

[1] disclaimer: Memset hosting is my employer, I built some of this; but this is an honest test (personal VM running several services).

EDIT: formatting

EDIT 2: same test on a VM with SSD disk:

    dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 3.48953 s, 308 MB/s
Again, the host has average load.


Did the same test on a VM I have in DigitalOcean... ( Ubuntu, 512 MB droplet )

    ~# dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 2.35872 s, 455 MB/s


Add four spaces at the start of a line to mark it as code (both to avoid lines being merged and fixed-width font).

(yes: it's just markdown here).


You can read my comment history for exact numbers, but for SSD storage it's not very good. Better than Amazon EC2 (t2 instance), though.


You should have a look at the IOPS instead of the bandwidth:

  root@instantcloud:~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=16 --size=4G --readwrite=randwrite
  test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
  fio-2.1.11
  Starting 1 process
  Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/18714KB/0KB /s] [0/4678/0 iops] [eta 00m:00s]

  root@instantcloud:~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=16 --size=4G --readwrite=randread
  test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
  fio-2.1.11
  Starting 1 process
  Jobs: 1 (f=1): [r(1)] [100.0% done] [22307KB/0KB/0KB /s] [5576/0/0 iops] [eta 00m:00s]


Didn't know about fio, thanks!


FYI - dd is not an effective way of measuring disk performance.

I'd recommend using fio to do this with libaio and direct disk reads/writes, and ioping for basic latency tests:

* Random 4k read test for flash storage:

  fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=test --bs=4k --iodepth=128 --size=5G --numjobs=12 --norandommap \
  --readwrite=randread
* And writes:

  fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=test --bs=4k --iodepth=128 --size=5G --numjobs=12 --norandommap \
  --readwrite=randwrite
---

Here's an example from a test storage unit I'm logged into at work right now (NOTE: THIS IS NOT ON INSTANTCLOUD!):

  root@s1-san5:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=/dev/md200 --bs=4k --iodepth=128 --size=5G --numjobs=12 --norandommap \
  --readwrite=randread
  test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
  ...

  Run status group 0 (all jobs):
  READ: io=61440MB, aggrb=2339.7MB/s, minb=199652KB/s, maxb=200362KB/s, mint=26167msec, maxt=26260msec
  
  Disk stats (read/write):
  md100: ios=15636294/0, merge=0/0, ticks=0/0, in_queue=1581403800, util=100.00%, aggrios=7864320/0, aggrmerge=0/0, aggrticks=128880/0, aggrin_queue=131372, aggrutil=100.00%
  nvme0n1: ios=7576253/0, merge=0/0, ticks=123812/0, in_queue=125748, util=100.00%
  nvme1n1: ios=8152387/0, merge=0/0, ticks=133948/0, in_queue=136996, util=100.
While the test is running you'll see the storage performance (note the MB/s in the third [] and the IOPS in the fourth []):

  Jobs: 12 (f=12): [r(12)] [12.3% done] [2537MB/0KB/0KB /s] [650K/0/0 iops] [eta 00m:50s]
And when it's completed:

* Throughput: aggrb=2339.7MB/s

* IOs (Read in this case): ios=15636294/0

And then with ioping to test latency:

* Here's an example of really bad storage latency on my crappy old rotational RAID array at home:

  root@nas:/mnt/raid# ioping /dev/md0
  4 KiB from /dev/md0 (block device 7.28 TiB): request=1 time=27.2 ms
  4 KiB from /dev/md0 (block device 7.28 TiB): request=2 time=15.7 ms
* Here's an example of pretty good storage latency on my new storage at work:

  root@s1-san5:~  # ioping /dev/md200
  4 KiB from /dev/md200 (block device 1.09 TiB): request=1 time=136 us
  4 KiB from /dev/md200 (block device 1.09 TiB): request=2 time=124 us
  4 KiB from /dev/md200 (block device 1.09 TiB): request=3 time=112 us
* And one more on a VM with storage provisioned over iSCSI to a very slow rotational storage array that's quite busy:

  root@nagios:~ # ioping /dev/xvda
  4096 bytes from /dev/xvda (device 15.0 Gb): request=1 time=11.6 ms
  4096 bytes from /dev/xvda (device 15.0 Gb): request=2 time=0.2 ms
  4096 bytes from /dev/xvda (device 15.0 Gb): request=3 time=7.1 ms
  4096 bytes from /dev/xvda (device 15.0 Gb): request=4 time=1.2 ms
---

OK, so let's try this on instantcloud.io / Scaleway:

* IOP/s - Random 4k reads:

  Jobs: 12 (f=12): [r(12)] [0.5% done] [31050KB/0KB/0KB /s] [7762/0/0 iops] [eta 37m:32s]
* IOP/s - Random 4k writes:

  Jobs: 12 (f=12): [w(12)] [0.2% done] [0KB/7848KB/0KB /s] [0/1962/0 iops] [eta 02h:29m:10s]
* Latency:

   root@instantcloud:~# ioping /dev/nbd0
   4.0 KiB from /dev/nbd0 (device 46.6 GiB): request=1 time=1.4 ms
   4.0 KiB from /dev/nbd0 (device 46.6 GiB): request=2 time=1.4 ms
   4.0 KiB from /dev/nbd0 (device 46.6 GiB): request=3 time=1.4 ms
   4.0 KiB from /dev/nbd0 (device 46.6 GiB): request=4 time=2.8 ms

Conclusion:

Good performance for small ARM devices, but not even close to a single entry-level consumer-grade SATA SSD.


That's rather poor, considering my laptop with FDE is about 50% faster:

    ▶ dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 7.32253 s, 147 MB/s


Back at the office, we were recently talking about the possibility of really cheap (in terms of power requirements) cloud servers which are the equivalent of Raspberry Pis with soldered-on flash in the 32-64 GB range. I'd bet you can pack a shitload of these in a 1U box and still have power and cooling to spare. The only expensive part might be budgeting for all those Ethernet ports on the switch and the uplink capacity (for bandwidth-intensive servers).

One of the engineers tried running our server stack on a Raspberry Pi for a laugh. I was gobsmacked to hear that the whole thing just worked (it's a custom networking protocol stack running in userspace), if just a bit slower than usual. I can imagine making use of loads of ultra-cheap servers distributed all around the world... IF the networking issue can be solved.

Perhaps the time is right for a more compact and cheaper wired networking solution... maybe limit the bandwidth to 1Gbps but make it ultra compact with much cheaper electronics. Sigh... a man can dream.


Small systems are good for realtime applications, where the resources for an application have to be available all the time.

In terms of space/power/reliability/scalability, large systems win. Sure, a single Raspberry Pi doesn't draw much power, but it doesn't provide much computing power either. Throw in a few hundred of those systems and you'll feel the heat, yet the computing power is only comparable to a single rack server. Want RAID6 on the Raspberry Pi? Sure, it can be done, but you'll need four times the number of storage devices. Compare that to a rack server with 16 SSDs configured as RAID6, where the data is shared by hundreds of virtual machines. If you compare "energy per bit" or "energy per operation", the high-powered server CPUs beat most anything out there.

So I'd say:

* If you can justify a real server, do use one instead of dozens of "simple machines".

* If you can use the cloud do it instead of providing your own hardware.

* Exceptions may apply where security or reliability is concerned. (You wouldn't run your heart monitor in the cloud when a small dedicated system does the job.)


What's the current state of the art for realtime + internet? I would have thought that once you get to the first other network device, whatever realtime guarantees your device offered would be toast. And stuff like packet loss or a TCP retransmission would be disastrous. They seem like entirely incompatible domains.


Makes sense, like semis vs trains.


More like a semi vs. a dozen Prii... the semi still wins.


>(in terms of power requirement)

You can buy a single off-the-shelf ProLiant MicroServer, which is the size of a shoebox, put VMware on there (or your virtualization product of choice), and have god knows how many VMs running with a very modest power load that will blow away any sort of jerry-rigged Pis shoved into a 1U solution. And it will be reliable, have RAID, and have ECC memory. Hardware is kind of a solved problem now.

Shame the Pi isn't more powerful. I was looking at deploying FreePBX in my home and the performance of the web interface on the Pi is terrible. I'm not sure what people actually use them for. I'm probably going to just get a BeagleBone that's 2-3x as powerful for a measly $15 more.


I've been cabling servers, kind of for a hobby, for a couple of years now. Even a two-server-per-U cabling situation (with redundant ports on each server) is a nightmare. Once you're talking hundreds of wires in a rack, it's no longer fun.

I can't imagine going to top-of-rack directly from (say) 512 or 1024 Pis in a rack. So you need intermediate switches, probably a small one every couple of U. From there to top of rack at 10G (could get away with 1G if you know your network bandwidth over your couple U of Pis won't get saturated). Top of rack switch will need to be optical, probably redundant, at 40G aggregate or better. Did I mention that those first-layer switches probably take up a U themselves?

Per Pi, we might be spending as much on each switch port as we are on the Pi itself, maybe more (a 48-port switch that we use is about $2k delivered, or about $40/port, and that's just the first switching layer). You can probably buy cheaper switches than the ones we use; I don't know if there's a drop in reliability. Haven't figured the cost of the optical links, either, but they can get spendy as well.

I think that box of Pis needs its own switching fabric, so that 1G link never leaves the chassis the Pi is in. The switch doesn't need to be fancy, but it looks pretty custom and you'll have to amortize its cost over a big build.

I really hate wires :-)


Hmm... yeah, that's what I thought. And yes, the best case might be to have hundreds of these "systems on a chip" on a custom board with its own internal bus (PCI-whatever), with a virtual eth0 visible on each internal system. This whole rigmarole could be connected to the rest of the network via a couple of 10G links. Then some flavor of SDN would divide the bandwidth fairly among the boxes.

But doing all that custom electronics does take the fun out of the idea of "just a bunch of cheap Raspberry Pis doing their thing". So maybe not.


It is possible that future servers might be connected to the network backplane, power, drives, and pretty much everything else by multiple USB-3.1 type connections. The bandwidth is there.

Just imagine racking in machines like that which take N USB connections, where that's 1, 2, 4, 8, 16 or whatever is necessary.


Hmm. USB fan-out is pretty cheap (the cabling is not fussy). Schedule maybe 70% of the raw bandwidth and you should have enough to drive 6-7 servers at 1GBit from each root hub. If you know your servers are less chatty you can get away with less guaranteed bandwidth. Can also play games with isoch.
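As a rough back-of-the-envelope check, assuming USB 3.1 Gen 2's 10 Gbps signalling rate (Gen 1 would halve this):

  10 Gbps x 0.70 schedulable ≈ 7 Gbps → six or seven servers at 1 Gbit each per root hub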

You can even add redundancy by connecting each Pi server to more than one root hub.

Writing the USB-based switch for this would be fun (probably someone has done this, though).


With the Pi 2, it's not as bad as you'd think; see: https://github.com/geerlingguy/raspberry-pi-dramble/wiki/Dra... (Drupal 8 on a 6-server Pi 2 LEMP + Redis stack is about 80% as fast as running on Digital Ocean, in some benchmarks).

For test purposes, a local Pi cloud (or something similarly-priced) is a decent deal.


The biggest problem would be the reliability. I have a few friends who use RPis and BBBs as part of their home infrastructure - they have to replace them randomly as they stop working.

One guy's working theory is that they overheat, so last I heard he was attaching heatsinks to the major chips, but I haven't heard if that helped the reliability much.


There are companies out there that do raspberry pi colocation.

I guess it's just a novelty. Might as well go virtual.


A truly great way to demo a product while also making use of unallocated resources.

Unfortunately, the latency to the servers from Australia is so poor that it's practically unusable.

Here is a trace from my 100Mbit fibre:

  samm ~ % mtr -n --report 212.47.250.196
  Start: Fri Apr 10 19:36:45 2015
  HOST: samm-mbp                    Loss%   Snt   Last   Avg  Best  Wrst StDev
    1.|-- 192.168.0.1                0.0%    10    1.7   2.0   1.6   4.7   0.7
    2.|-- 150.101.212.44             0.0%    10    2.7   3.1   2.5   4.2   0.0
    3.|-- 150.101.208.65             0.0%    10    5.3   6.9   2.7  38.1  11.0
    4.|-- 150.101.33.28              0.0%    10   28.3  20.8  14.6  28.3   5.6
    5.|-- 150.101.33.149             0.0%    10  170.6 170.4 170.1 170.8   0.0
    6.|-- 62.115.33.97               0.0%    10  170.8 170.8 170.1 171.4   0.0
    7.|-- 213.155.135.156            0.0%    10  240.9 240.7 240.2 241.2   0.0
    8.|-- 80.91.251.103              0.0%    10  322.3 335.0 321.4 413.3  30.2
    9.|-- 213.155.136.209           20.0%    10  319.7 329.8 317.9 355.6  14.4
   10.|-- 62.115.40.86               0.0%    10  319.1 319.3 318.7 320.0   0.0
   11.|-- 195.154.1.41               0.0%    10  332.5 333.0 332.3 333.9   0.0
   12.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
   13.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
   14.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
   15.|-- 212.47.250.196             0.0%    10  395.0 334.8 319.4 395.0  23.5
Edit: Spelling (Typo)


I recognise those Internode IPs at a glance. I used to work in the ADL6 data centre.

I'm told that traceroute isn't a reliable way to determine endpoint latency. I can ping 212.47.250.196 and get a round-trip time of ~365 ms. That the intermediate hops each take 170-333 ms to respond in the traceroute is meaningless. Or so I thought? Maybe I'm just not sure what you're getting at.

(Edit: I mean iiNet. I mean TPG.)


That's mtr, not traceroute, so it actively measures latency to each hop. You are correct in saying that the connection is from Internode (owned by iiNet).

Here's an example from PIPE networks (TPG):

  root@dev-samm:~  # mtr -n --report 212.47.250.196
  HOST: dev-samm                    Loss%   Snt   Last   Avg  Best  Wrst StDev
    1.|-- <removed>                  0.0%    10    0.9   1.1   0.9   2.0   0.3
    2.|-- <removed>                  0.0%    10    0.5   0.5   0.4   0.6   0.1
    3.|-- <removed>                  0.0%    10    1.1   2.4   1.0   9.1   2.8
    4.|-- <removed>                  0.0%    10    1.1   1.1   1.0   1.3   0.1
    5.|-- 203.219.106.21             0.0%    10    2.5   2.8   1.1   4.7   1.1
    6.|-- 202.7.171.25               0.0%    10   11.2  13.1  11.2  14.6   1.3
    7.|-- 203.29.129.195             0.0%    10   11.1  12.9  11.1  14.7   1.2
    8.|-- 64.86.21.53                0.0%    10  210.2 212.2 210.0 229.9   6.2
    9.|-- 64.86.21.2                 0.0%    10  373.1 373.2 372.9 373.9   0.3
   10.|-- 66.198.127.1               0.0%    10  382.3 382.4 382.2 383.1   0.3
   11.|-- 66.198.127.6               0.0%    10  382.1 382.0 381.8 382.1   0.1
   12.|-- 66.198.70.21               0.0%    10  378.7 379.3 378.5 381.6   1.2
   13.|-- 80.231.130.33              0.0%    10  380.6 381.0 380.6 383.0   0.8
   14.|-- 80.231.130.86             20.0%    10  373.9 374.1 373.9 374.7   0.2
   15.|-- 80.231.154.17             10.0%    10  371.3 371.5 371.3 371.9   0.2
   16.|-- 80.231.153.58             10.0%    10  379.4 379.1 378.9 379.4   0.2
   17.|-- 5.23.24.6                 10.0%    10  354.1 354.4 353.9 355.6   0.5
   18.|-- 195.154.1.39              10.0%    10  355.2 355.2 354.9 355.5   0.2
   19.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
   20.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
   21.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
   22.|-- 212.47.250.196            10.0%    10  354.5 354.3 354.0 354.9   0.3  
A throughput test struggles to obtain 1 Mbit/s:

  samm ~ % iperf -c 212.47.248.211
  ------------------------------------------------------------
  Client connecting to 212.47.248.211, TCP port 5001
  TCP window size:  129 KByte (default)
  ------------------------------------------------------------
  [  4] local 192.168.0.22 port 56958 connected with 212.47.248.211 port 5001
  [ ID] Interval       Transfer     Bandwidth
  [  4]  0.0-11.1 sec  1.38 MBytes  1.04 Mbits/sec


Ah, yes, sorry, I was interpreting the numbers incorrectly. Should probably sleep more. Thanks.


Without requiring any information from the user (no credit card required), it is really interesting how they prevent this box from being used as a hacking machine.


Techniques other sites use:

* CAPTCHA

* Require single-sign on via Facebook or Google+

* SMS token

* IP throttle

No, none of these are remotely watertight and no, not every developer is inclined to connect to their VPS via Facebook (or even has a FB account). Just saying, these are ways other sites try to add a bit of friction to what could otherwise be a runaway script spinning up thousands of servers. And without requiring a credit card.
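For instance, the IP throttle could be as simple as a per-source rate limit in front of the provisioning endpoint; a minimal iptables sketch (purely illustrative - the port and limit are made up, and this is not necessarily how instantcloud does it):

  # drop new connections from a source IP once it exceeds 10 per hour
  # (hypothetical endpoint on port 443; hashlimit keeps a per-source-IP counter)
  iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW \
    -m hashlimit --hashlimit-name provision --hashlimit-mode srcip \
    --hashlimit-above 10/hour -j DROP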


Most definitely. Just for fun I logged in, installed nmap, and proceeded to scan my home connection to see if they blocked anything (scanned ports 1-65535). If anything, this is convenient for getting a public IP quickly, and they don't restrict any ports.

But wait...

You can also quickly use SSH as a dynamic SOCKS proxy for an ad-hoc VPN. I bounced SSH over to port 443, because why not. Fired up SSH on my local machine with a local dynamic proxy (ssh -D 4444 -p443 ubuntu@212.47.231.xxx), set my proxy in Firefox to localhost:4444, and voila: a free VPN through them for 30 minutes. Not uber fast (the server is in France) but usable.
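For anyone who wants to reproduce it, a rough sketch of the steps (the sshd_config edit is just one way to do the port-443 "bounce"):

  # on the server: make sshd also listen on 443 (one possible way to "bounce")
  echo "Port 443" | sudo tee -a /etc/ssh/sshd_config && sudo service ssh restart

  # on the local machine: open a dynamic SOCKS proxy on localhost:4444
  ssh -D 4444 -p 443 ubuntu@212.47.231.xxx

  # then set Firefox's SOCKS proxy to localhost:4444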


That was my first thought. But then again, all my private servers have been hacked or under intense attacks by bots, so maybe it is my own brain which is now compromised.

Anyway, I think this is a super cool and creative service, kudos to the author.

Edit: Possible bug, I get nothing on Safari after 1 minute of waiting: http://i.imgur.com/pKoNPrV.jpg

Edit 2: Ah, on trying again, I get an error that all servers are busy. Maybe we killed it again, HN :)


You are right. I've been hacked a couple of times too. What I usually see the hacker doing is turning my machine into a DDoS bot that acts on demand.

Anyway, with a service like this it would be far easier for anyone to make their hacking attempts untraceable.


Tried it for 30 minutes. I installed apache2 and made a quick mirror of my personal homepage with wget.

After the 30-minute session the window closed and told me the session had expired, although the actual server was still up at that IP. About 5 minutes later Apache shut down, so I guess the server was destroyed.

This is great for experimenting with things. You could, for example, quickly test "sudo rm -rf" at the root and see what it does. Nice one!


You can already do `docker run -ti ubuntu bash` and not be limited to a 30-minute test :)
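Or, to make it as throwaway as the 30-minute box (assuming Docker is already installed locally):

  # the container and its filesystem are removed as soon as you exit the shell
  docker run --rm -ti ubuntu bash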


Sure, but you need a larger platform than just a browser to do that.

I mean, total beginners, for example, may find this truly helpful.


Sure, but you need a larger platform than just a browser to do that.

Do you? JSLinux (by Fabrice Bellard) disagrees :) http://bellard.org/jslinux/


FWIW: We recently talked about their servers, hardware and such: https://news.ycombinator.com/item?id=9309459


Seems like a really smart way to use spare capacity to market your services. Especially so when your services aren't quite the norm (i.e. ARM rather than x86)


Apparently they also have object storage, which seems to be way cheaper than AWS Glacier.

€0.02 per GB/month, with unlimited requests and transfer: https://www.scaleway.com/pricing
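To put that rate in perspective, just multiplying their listed price out:

  1 TB ≈ 1,000 GB x €0.02/GB/month = €20/month, with no separate request or transfer charges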


My password had an "0" in it - I had to guess whether it was a number or a letter.


My password is illegible. It has a 0, I think, and the letter after it could be an I, L, or maybe a 1, but nothing I've tried so far works.

Definitely not ideal if I've wasted 5 minutes of my server's runtime just guessing the password.

Edit: Got it finally, 7 minutes in - a line right under the mystery letter made it look like an 'L' when it was actually an 'I'.


Definitely. I am brute-forcing it, since mine has two of those vertical strokes and I don't know whether they are l, I, or 1.


Note to developers: exclude similar-looking characters from your alphabet when generating passwords, or spell them out phonetically.
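For example, a quick shell sketch of that kind of restricted alphabet (it drops 0/O and 1/l/I up front):

  # generate a 16-character password with no visually ambiguous characters
  tr -dc 'a-km-zA-HJ-NP-Z2-9' < /dev/urandom | head -c 16; echo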


The same happened to me. Bad font choice, I guess. I tried several times until I realized that was the problem.


You should get a better font.


The password is in a captcha. A better font wouldn't have helped.


Fair enough. They should get a better font :)


The tag line "Get your 30 minutes free server" is extremely confusing.

Do you mean to say "Get a free server for 30 minutes"?


I'm confused too, since it could also mean "Get a free server in 30 minutes"?


Or it could be "Get your server. 30 minutes are free. If you leave it up, the rest of your usage is not."


If OVH ever puts SSDs in their Kimsufi servers, they will blow all the micro-servers out of the water.

http://www.kimsufi.com/ca/en/index.xml

Meanwhile, for $42 you can get a dedicated server with SSD:

http://www.soyoustart.com/us/essential-servers/

The only problem with all of these is the lack of ECC memory.


Kimsufi offers servers with SSDs in France: KS-2 SSD: Atom™ N2800, 4 GB of RAM, 40 GB SSD, 100 Mbps, for €9.99/month.

http://www.kimsufi.com/fr/


And I guess a $60 setup fee.


I just get what looks like a Mac window div (judging by the class `terminal`, I guess it's supposed to be a terminal to the server) saying 'the server refused the connection'.


Try to refresh?


Nope, happens every time. Ubuntu, Chrome: `net::ERR_CONNECTION_REFUSED`.


"Oops!

All C1 servers are busy, please retry in few minutes :)"


And the back button is broken as a result.


Isn't the point of "cloud" computing high availability and high uptime? If there's a hardware failure on your "cloud machine", it just fails and there's no recovery. This sounds like regular hosting rather than cloud hosting; please correct me if I'm misunderstanding how this is set up.


Individual nodes can be unreliable (and therefore very cheap); availability is maintained by distributing load to online nodes. That's my understanding of it, anyway.


The technique is known as 'high availability' (HA) or 'fail over' clustering. https://en.wikipedia.org/wiki/High-availability_cluster 'Cloud' computing is simply another term meaning "using someone else's infrastructure as a service" (IaaS), which is essentially a restatement of the centralized computing paradigm. http://en.wikipedia.org/wiki/Centralized_computing

The gaping problems with such paradigms (chiefly survivability/evolvability over time) were well highlighted for large-scale, general systems by the internet itself. RFC 3439 (2002) puts it thus: "Upgrade cost of network complexity: The Internet has smart edges ... and a simple core. Adding a new Internet service is just a matter of distributing an application ... Compare this to voice, where one has to upgrade the entire core."

My take: cloud computing is about to get smart edges; cloud providers are about to be commodified; and we are about to effect an appropriately flexible layer of additional abstraction to the entire field of computing that will further push us towards a position in which we treat computation as any other service and networked communication itself as a means of economic exchange.


Mostly off-topic, but I have to ask: what's the state of Go compilers for ARM nowadays? Are they generating optimized code on par with the x86 ones?


My session lasted for 35 minutes - The web app closed after 30 but my VNC connection continued, presumably because someone else took my slot.


No offense, but "30 minutes... with a click" is pretty standard in the age of AWS. It's not a good headline anymore :)


Here is the message I get: "All C1 servers are busy, please retry in few minutes :)"


Got the same message.


I'm very impressed that it lives up to the promise of getting it "with a click."


I just get a timeout in the terminal-like window. SSH-ing to it also doesn't work.


Are there any restrictions on what you can do? Are inbound/outbound ports blocked?


Your page breaks the back button, at least on a busy error. Not cool, guys.


How so? You can still click the Get My Server button when the error message shows up at the top.


Yes, but if you press 'back' you get a split-second view of the previous page before being redirected to the error page again.

In general, don't use unconditional in-page redirects, whether they're JavaScript or meta refresh. If you want to do a redirect like that, you can have your server send a 301 and the browser will collapse history appropriately, but if you must do it in JS then use history.replaceState.


This is great. I just wish DigitalOcean would deploy faster. It says 60 seconds, but more often than not it takes several minutes to load a snapshot, and destroying can take ages (since you are still billed for idle and powered-off servers).


I put a fork bomb...


I think that's one of the arguments in favour of this approach. Your neighbours on AWS might notice. On individual real servers, they wouldn't.


It's not such an issue these days though - all the non-toy AWS instances have dedicated cores, sometimes dedicated processors.


You are going to jail.




