Back at the office, we were recently talking about the possibility of really cheap (in terms of power requirement) cloud servers: the equivalent of Raspberry Pis with soldered-on flash storage in the 32-64 GB range. I'd bet you can pack a shitload of these in a 1U box and still have power and cooling to spare. The only expensive part might be budgeting for all those ethernet ports on the switch and the uplink capacity (for bandwidth-intensive servers).
One of the engineers tried running our server stack on a Raspberry Pi for a laugh... I was gobsmacked to hear that the whole thing just worked (it's a custom networking protocol stack running in userspace), if a bit slower than usual. I can imagine making use of loads of ultra-cheap servers distributed all around the world... IF the networking issue can be solved.
Perhaps the time is right for a more compact and cheaper wired networking solution... maybe limit the bandwidth to 1Gbps but make it ultra compact with much cheaper electronics. Sigh... a man can dream.
Small systems are good for realtime applications, since there the resources for an application have to be ready all the time.
In terms of space/power/reliability/scalability, large systems win. Sure, a single Raspberry Pi doesn't draw much power, but it doesn't provide much computing power either. Throw in a few hundred of those systems and you feel the heat, yet the computing power is still only comparable to a single rack server. You want to use RAID6 on the Raspberry Pi? Sure, it can be done, but you need four times the number of storage devices. Compare that to the rack server with 16 SSDs configured as RAID6, where the data is shared by hundreds of virtual machines. If you compare the "energy per bit" or "energy per operation", the high-powered server CPUs win against most anything out there (rough numbers sketched after the list below).
So I'd say:
* If you can justify a real server, do use one instead of dozens of "simple machines".
* If you can use the cloud do it instead of providing your own hardware.
* Exceptions may apply where security or reliability is concerned. (You wouldn't run your heart monitor in the cloud when a small dedicated system does the job.)
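To make the power comparison concrete, here's a back-of-envelope sketch. The per-node wattage and throughput figures are assumed round numbers, not measurements, so plug in your own:

    # Back-of-envelope: how many Pi-class nodes match one rack server, and at what power.
    # All figures below are assumptions for illustration only.
    PI_WATTS = 5            # assumed draw of one Pi-class node under load
    PI_OPS = 5e9            # assumed ops/sec of one Pi-class node
    SERVER_WATTS = 400      # assumed draw of a dual-socket rack server
    SERVER_OPS = 1e12       # assumed ops/sec of that server

    pi_count = SERVER_OPS / PI_OPS              # nodes needed to match the server
    cluster_watts = pi_count * PI_WATTS

    print(f"Pi-class nodes needed: {pi_count:.0f}")
    print(f"Cluster power: {cluster_watts:.0f} W vs. server: {SERVER_WATTS} W")
    print(f"Energy per operation, cluster vs. server: "
          f"{cluster_watts / SERVER_WATTS:.1f}x")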
What's the current state of the art for realtime + internet? I would have thought that once you get to the first other network device, whatever realtime guarantees your device offered would be toast, and stuff like packet loss or a TCP retransmission would be disastrous. They seem like entirely incompatible domains.
You can buy a single off-the-shelf ProLiant MicroServer, which is the size of a shoebox, put VMware on there (or your virtualization product of choice) and have god knows how many VMs running with a very modest power load that will blow away any sort of jerry-rigged Pis shoved into a 1U solution. And be reliable, and have RAID, and have ECC memory. Hardware is kinda a solved problem now.
Shame the Pi isn't more powerful. I was looking at deploying FreePBX in my home and the performance of the web interface on the Pi is terrible. I'm not sure what people actually use them for. I'm probably going to just get a BeagleBone that's 2-3x as powerful for a measly $15 more.
I've been cabling servers, kind of for a hobby, for a couple of years now. Even a two-server-per-U cabling situation (with redundant ports on each server) is a nightmare. Once you're talking hundreds of wires in a rack, it's no longer fun.
I can't imagine going to top-of-rack directly from (say) 512 or 1024 Pis in a rack. So you need intermediate switches, probably a small one every couple of U. From there to top of rack at 10G (could get away with 1G if you know your network bandwidth over your couple U of Pis won't get saturated). Top of rack switch will need to be optical, probably redundant, at 40G aggregate or better. Did I mention that those first-layer switches probably take up a U themselves?
Per Pi, we might be spending as much on each switch port as we are on the Pi itself, maybe more (a 48-port switch that we use is about $2k delivered, or about $40/port, and that's just the first switching layer). You can probably buy cheaper switches than the ones we use; I don't know if there is a drop in reliability. I haven't figured the cost of the optical links, either, but they can get spendy as well.
I think that box of Pis needs its own switching fabric, so that 1G link never leaves the chassis the Pi is in. The switch doesn't need to be fancy, but it looks pretty custom and you'll have to amortize its cost over a big build.
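A rough per-node sketch of that cost, using the $40/port figure from above; the Pi price and the per-node share of the top-of-rack/optical layer are assumptions, so adjust for your own gear:

    # Rough networking cost per Pi, using assumed prices for illustration.
    PI_PRICE = 35.0               # assumed price of a Pi-class board
    EDGE_PORT_PRICE = 2000 / 48   # ~$40/port for the first-layer switch
    UPLINK_SHARE = 10.0           # assumed per-node share of top-of-rack + optics

    per_node_network = EDGE_PORT_PRICE + UPLINK_SHARE
    print(f"Network cost per node: ${per_node_network:.2f}")
    print(f"Network cost relative to node price: {per_node_network / PI_PRICE:.0%}")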
Hmm... yeah, that's what I thought. And yes, the best case might be to have hundreds of these systems-on-a-chip on a custom board with its own internal bus (PCI-whatever), with a virtual eth0 visible on each internal system. This whole rigmarole could be connected to the rest of the network via a couple of 10G links. Then some flavor of SDN would work to divide the bandwidth in a fair manner among the boxes.
But doing all that custom electronics does take the fun out of the idea of "just a bunch of cheap Raspberry Pis doing their thing". So maybe not.
It is possible that future servers might be connected to the network backplane, power, drives, and pretty much everything else by multiple USB-3.1 type connections. The bandwidth is there.
Just imagine racking in machines like that which take N USB connections, where that's 1, 2, 4, 8, 16 or whatever is necessary.
Hmm. USB fan-out is pretty cheap (the cabling is not fussy). Schedule maybe 70% of the raw bandwidth and you should have enough to drive 6-7 servers at 1 Gbit from each root hub. If you know your servers are less chatty you can get away with less guaranteed bandwidth. You can also play games with isochronous transfers.
You can even add redundancy by connecting each Pi server to more than one root hub.
Writing the USB-based switch for this would be fun (probably someone has done this, though).
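A quick sketch of that bandwidth budgeting, assuming a USB 3.1 Gen 2 link (10 Gbps raw) and the 70% scheduling figure; both numbers are assumptions you'd want to validate against real hub behaviour:

    # How many 1 Gbit/s nodes fit behind one root hub, given a scheduling margin.
    RAW_GBPS = 10.0        # assumed USB 3.1 Gen 2 raw rate
    SCHEDULABLE = 0.70     # fraction of raw bandwidth we're willing to commit
    PER_NODE_GBPS = 1.0    # guaranteed bandwidth per Pi-class server

    nodes_per_hub = int(RAW_GBPS * SCHEDULABLE / PER_NODE_GBPS)
    print(f"Nodes per root hub at a full 1 Gbit/s guarantee: {nodes_per_hub}")
    # Less chatty servers can be oversubscribed, e.g. guarantee 0.5 Gbit/s each:
    print(f"Nodes at a 0.5 Gbit/s guarantee: {int(RAW_GBPS * SCHEDULABLE / 0.5)}")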
The biggest problem would be the reliability. I have a few friends who use RPis and BBBs as part of their home infrastructure - they have to replace them randomly as they stop working.
One guy's working theory is that they overheat, so last I heard he was attaching heatsinks to the major chips, but I haven't heard if that helped the reliability much.
I recognise those Internode IPs at a glance. I used to work in the ADL6 data centre.
I'm told that traceroute isn't a reliable way to determine endpoint latency. I can ping 212.47.250.196 and get a round-trip time of ~365ms. The fact that the intermediate hops each take 170-333ms to respond in the traceroute is meaningless. Or so I thought? Maybe I'm not sure what you're getting at?
That's mtr, not traceroute, so it actively measures latency to each hop. You are correct in saying that the connection is from Internode (owned by iiNet).
Interesting. Without requiring any information from the user (no credit card required), I do wonder how they prevent this box from being used as a hacking machine.
No, none of these are remotely watertight and no, not every developer is inclined to connect to their VPS via Facebook (or even has a FB account). Just saying, these are ways other sites try to add a bit of friction to what could otherwise be a runaway script spinning up thousands of servers. And without requiring a credit card.
Most definitely. Just for fun I logged in, installed nmap, and proceeded to scan my home connection to see if they blocked anything (scanned ports 1-65535). If anything this is convenient for quickly getting a public IP, and they don't restrict any ports.
But wait...
You can also use SSH as a quick dynamic SOCKS proxy for an ad-hoc VPN. I bounced SSH over to port 443, because why not. Fired up SSH on my local machine with a local dynamic proxy (ssh -D 4444 -p443 ubuntu@212.47.231.xxx), set my proxy in Firefox to localhost:4444 and voila: free VPN through them for 30 minutes. Not uber fast (the server is in France) but usable.
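If you'd rather test the tunnel from a script than from Firefox, something like this works against the same -D 4444 port (assumes requests installed with SOCKS support, i.e. pip install "requests[socks]"):

    # Route a request through the local SSH dynamic SOCKS proxy (ssh -D 4444 ...).
    import requests

    proxies = {
        "http": "socks5h://localhost:4444",   # socks5h: DNS resolved at the far end
        "https": "socks5h://localhost:4444",
    }
    # Should print the remote server's public IP, not your own.
    print(requests.get("https://ifconfig.me", proxies=proxies, timeout=10).text)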
That was my first thought. But then again, all my private servers have been hacked or come under intense attack from bots, so maybe it is my own brain which is now compromised.
Anyway, I think this is a super cool and creative service, kudos to the author.
Tried it for 30 mins. I installed apache2 and made a quick mirror of my personal homepage with wget.
After the 30-minute session the window closed and it told me the session had expired, although the actual server was still up at the IP. About five minutes later Apache shut down, so I guess the server was destroyed.
This is great for experimenting with things. For example, you could quickly test "sudo rm -rf" at the root and see what it does. Nice one!
Seems like a really smart way to use spare capacity to market your services. Especially so when your services aren't quite the norm (i.e. ARM rather than x86)
I just get what looks like a mac window div (judging by the class `terminal` I guess it's supposed to be a terminal to the server) saying 'the server refused the connection'.
Isn't the point of "cloud" computing high availability and high uptime? If there's a hardware failure on your "cloud machine", it just fails and there's no recovery. This sounds like regular hosting rather than cloud hosting; please correct me if I'm misunderstanding how this is set up.
Individual nodes can be unreliable (and therefore very cheap); availability is maintained by distributing load to the nodes that are online. That's my understanding of it, anyway.
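A minimal sketch of that idea, assuming a hypothetical list of node addresses and a /health endpoint (all names here are made up for illustration):

    # Send work only to nodes that currently pass a health check.
    import random
    import requests

    NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical node addresses

    def healthy_nodes():
        up = []
        for node in NODES:
            try:
                if requests.get(f"http://{node}/health", timeout=1).ok:
                    up.append(node)
            except requests.RequestException:
                pass  # node is down or unreachable; leave it out
        return up

    def dispatch(payload):
        nodes = healthy_nodes()
        if not nodes:
            raise RuntimeError("no healthy nodes available")
        node = random.choice(nodes)  # naive: any healthy node will do
        return requests.post(f"http://{node}/work", json=payload, timeout=5)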
The gaping problems with such paradigms (chiefly survivability/evolvability over time) were well highlighted for large-scale, general systems by the internet itself. RFC 3439 (2002) puts it thus: Upgrade cost of network complexity: The Internet has smart edges ... and a simple core. Adding a new Internet service is just a matter of distributing an application ... Compare this to voice, where one has to upgrade the entire core.
My take: cloud computing is about to get smart edges; cloud providers are about to be commodified; and we are about to effect an appropriately flexible layer of additional abstraction to the entire field of computing that will further push us towards a position in which we treat computation as any other service and networked communication itself as a means of economic exchange.
Yes, but if you press 'back' you get a split-second view of the previous page before being redirected to the error page again.
In general, don't use unconditional in-page redirects, whether JavaScript or meta refresh. If you want to do a redirect like that, have your server send a 301 and the browser will collapse the history appropriately; if you must do it in JS, then use history.replaceState.
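For the server-side option, a minimal sketch (assuming a Flask app; the routes here are made up for illustration):

    # Serve a 301 instead of doing an in-page JS/meta-refresh redirect.
    from flask import Flask, redirect

    app = Flask(__name__)

    @app.route("/old-page")
    def old_page():
        # A 301 tells the browser the move is permanent, so the back button
        # isn't left pointing at a page that instantly bounces you away.
        return redirect("/new-page", code=301)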
This is great. I just wish DigitalOcean would deploy faster. It says 60 seconds, but more often than not it takes several minutes to load a snapshot, and destroying can take ages (which matters since you are still billed for idle and powered-off servers).