This "virtual server resizing" and sub-second VM booting is nothing new to Solaris users, who've been quietly doing it for years with Zones. Of course, you have to use Solaris to take advantage of flexibility in dynamically changing any resource allocation, including disc, CPU, networking and memory.
It sounds like GoScale is bringing memory allocation resizing to Linux guests, which is handy. If it's done with KVM, it would be interesting to see if it can be added to the Joyent Solaris KVM port.
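For reference, plain KVM/libvirt can already resize a running Linux guest's memory through the virtio balloon, so presumably the interesting part is how GoScale manages it. A minimal sketch against a stock libvirt setup (the domain name is made up), using the libvirt Python bindings:

    # Illustrative only: resizing a running KVM guest's memory via the
    # virtio balloon, using the stock libvirt Python bindings. Nothing
    # here is specific to GoScale.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("my-guest")        # made-up domain name

    # New allocation in KiB; must stay at or below the domain's maximum memory.
    new_size_kib = 2 * 1024 * 1024             # 2 GiB

    # VIR_DOMAIN_AFFECT_LIVE applies the change to the running guest,
    # which the guest's balloon driver then honours.
    dom.setMemoryFlags(new_size_kib, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()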
We have built on top of the groundwork of Linux Containers, and as well as building an API for it, we have made some deep system modifications that allow containers to do things you would assume only paravirtualisation could do. We are neither OS-level virtualisation nor paravirtualisation; it's a blurry point in between that gets the best of both worlds.
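For context, here is roughly what stock Linux Containers already expose for live resizing through cgroups; nothing below is specific to our modifications, and the group name is just an example:

    # Plain cgroup (v1) resizing of a running container's limits.
    # The group name "web1" and the paths assume a standard v1 mount;
    # none of this reflects our own system modifications.
    CGROUP_ROOT = "/sys/fs/cgroup"

    def set_memory_limit(group: str, limit_bytes: int) -> None:
        # Raising the limit applies immediately; lowering it may force reclaim.
        with open(f"{CGROUP_ROOT}/memory/{group}/memory.limit_in_bytes", "w") as f:
            f.write(str(limit_bytes))

    def set_cpu_cap(group: str, cores: float, period_us: int = 100_000) -> None:
        # The CFS quota/period pair caps how much CPU time the group may use.
        with open(f"{CGROUP_ROOT}/cpu/{group}/cpu.cfs_period_us", "w") as f:
            f.write(str(period_us))
        with open(f"{CGROUP_ROOT}/cpu/{group}/cpu.cfs_quota_us", "w") as f:
            f.write(str(int(cores * period_us)))

    # e.g. grow a container from its base size to 8 GiB and 4 cores
    set_memory_limit("web1", 8 * 1024**3)
    set_cpu_cap("web1", 4.0)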
Edit: Apologies for the spam, we misconfigured our emailing software.
If your aim is to be a PaaS vendor, you'd probably be able to put something together much more quickly and cheaply by just licensing Joyent's SmartDataCenter product and building anything else you need on top of that. And then you'd be running it on a kernel which has been tried and tested over decades on many sites, rather than on your own unique kernel modifications.
I work for Joyent and I think you are threatening us. You should not innovate but use our stuff instead, because we think it's the best, and we don't think you can take any of our market share, but please don't try!
I gave you my email address before coming here and reading the comments. Now I regret it.
I feel like the representatives of GoScale here are not doing themselves much good. They offer vague answers and hand-wavy descriptions. We all get that you have trade secrets. But if your business is jeopardized by just describing how your infrastructure works, that should worry you.
This is your "Show HN." You chose to do this. You should be generating excitement and buzz. I suggest taking a look at how Dropbox or even Tarsnap have talked about infrastructure in comparison to how you're doing it here. Talk about what you've built. It will spark respect and excitement around your brand. Lots of people have a real affection for Dropbox. That doesn't happen by accident.
I'm sure my advice here isn't unequivocally good. But it's a counterpoint I think you should consider.
Fair enough. I can see your point. We'll try and do that.
Rather than talk much about what we've built at this stage, we decided to make it a priority to get the service available sooner rather than later, so people can test it, break it and benchmark it.
Our purpose right now is not so much to generate buzz as to engage with people who want to try out the technology at this early stage.
> to engage with people who want to try out the technology at this early stage
I suspect that group of people (those who want to try it at this early stage) intersects enormously with the group of people put off by hand-wavy dismissals of concerns about the infrastructure they use.
What you've built is cool, but you're selling to sceptical and curious early adopters. Keep that in mind.
In practice they will have to drastically undersell their hosts in order to guarantee the burst-capacity.
If you need to reserve 8G for my 256M instance because I could burst to 8G at any time - then why not just sell me the 8G instance directly?
Perhaps they have some really interesting use for this "volatile" spare-capacity (something like the EC2 spot-market?), but that seems like an awfully complex endeavor.
That's not really answering the question, is it? Increasing density just means you get to buy fewer servers. It doesn't deal with the problem of everyone on a maxed-out server asking for an increase at the same time.
That's the difference between guaranteed capacity and most services. It's very common to use the properties of normal usage patterns and pool the excess capacity. The phone system has done that for ages. Ever try to make a call during an emergency?
Just because their service doesn't cover certain extreme situations does not mean it's worthless. It would be worth knowing how much excess capacity they are pooling together, but vendors generally won't release those types of details.
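To put some rough, made-up numbers on the pooling argument: if bursts are independent, the chance of many customers bursting at once falls off very quickly.

    # Toy statistical-multiplexing calculation; every figure is invented.
    # 200 instances each burst independently 5% of the time; how likely is
    # it that more bursts overlap than the pooled headroom can absorb?
    from math import comb

    n, p = 200, 0.05        # instances, chance each one is bursting right now
    spare = 30              # simultaneous bursts the pooled headroom covers

    def tail_prob(n: int, p: float, k: int) -> float:
        """P(more than k of n independent events happen at once)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

    print(f"P(more than {spare} simultaneous bursts) = {tail_prob(n, p, spare):.2e}")

The real question is how correlated customers' bursts are; a spike that hits many tenants at once (the emergency-phone-call case) is exactly where pooled capacity breaks down.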
I suppose that's the point I was working towards: they must be overcommitted, so it's a funny thing to be cagey about. The exact numbers don't matter as much.
What happens when I have, say, 50x 512M instances and decide to resize them all at once to 8G? Or do you generally limit the burst-capacity to twice the base-capacity?
Explaining how we do it would be giving away the kitchen sink.
There is no soft limit on how much you can scale. We attempt to distribute all your apps as widely as possible to ensure there are enough resources for you to grow into. If there is no more capacity on the host server for your instance(s), they are transparently migrated to one that does have capacity. This happens within 2 seconds.
It's a neat technology and I'm very interested in using it. It's almost a perfect fit for my business, which deals with serving traffic during email spikes.
I think the questions are just around the economics to make sure you guys stick around and can continue to offer it. My initial thought was that being able to scale up and down so quickly would require you to have a lot of idle resources sitting around at any given time, both on individual hosts as well as across hosts.
Thanks. Fair point about whether we'll stick around: you'd expect me to say we intend to, but the proof is in the pudding. We have a lot to do and a long-term vision. As for the economics, a key point is that the resizing allows us and our users to optimize the allocation of resources much more closely to what is actually needed.
Actually... you do not need that fast a network. You can converge state incrementally rather than transfer it in one go. Xen and VMware do this. Surely the cost of a fast SAN and interconnect would quickly outpace the (imho minor) savings from smaller memory allocations.
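For anyone who hasn't seen it, pre-copy migration copies memory in rounds while the guest keeps running, re-sends only the pages dirtied in the meantime, and pauses only for the final small delta. A toy simulation (all numbers made up, not real hypervisor code):

    # Toy simulation of pre-copy live migration. Real implementations
    # (Xen, VMware, KVM) are far more sophisticated; this just shows why
    # downtime scales with the final delta rather than with total RAM.
    import random

    NUM_PAGES = 100_000      # guest memory, in pages
    STOP_THRESHOLD = 500     # pause-and-copy once the remaining delta is this small

    def migrate():
        dirty = set(range(NUM_PAGES))        # round 0: every page needs sending
        sent = rounds = 0
        while len(dirty) > STOP_THRESHOLD and rounds < 30:
            sent += len(dirty)               # transfer these while the guest keeps running
            rounds += 1
            # Shorter rounds mean fewer pages get re-dirtied while they run;
            # crudely model that as a quarter of what was just transferred.
            dirty = set(random.sample(range(NUM_PAGES), max(1, len(dirty) // 4)))

        # Final round: pause the guest, ship the last delta plus CPU state,
        # then resume on the destination host.
        print(f"rounds={rounds}, pages sent live={sent}, pages copied while paused={len(dirty)}")

    migrate()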
At least with AWS, the model is relatively familiar, which makes the failure modes predictable. This seems to be breaking new ground, and is likely to fail in New and Interesting ways.
Even the most magical cloud in the world is backed by real computers with real memory constraints. No matter what happens, at some point that constraint will be hit, and then I doubt it will take only 500ms to resize the instance. Either that or, as someone pointed out, you end up with an aggressively undersold platform, and that will take a toll on pricing.
But regardless, I am curious to see how this develops.
It's a new company but the same team. StackBlaze PaaS is still up and running and serving existing customers, but no further development is being done on it.
We fully intend to offer StackBlaze users a transition to GoScale if they want it, and they've been informed of that. It's a fair question though: the change of company was a consequence of our funding sources.
What is the target application, and who is the target customer, for this service? It seems like a solution looking for a problem. How often does an application need to burst, but only within the resource limits of a single VM? The problem of burst capacity seems solved already by simply scaling the number of instances in a well designed architecture.
You're assuming a "well designed architecture" - and you're assuming that "well designed" maps to "perfectly horizontally scalable". Neither assumption need hold.
Oh, also: while I have no idea how this is actually implemented, under ideal conditions it might be able to balloon your container in response to a page fault. That would mean effectively instant scaling, rather than the couple of minutes it takes to spin up new VM instances.
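You can get a crude userspace approximation of that today with cgroup v1 memory-threshold notifications: get woken up just before a container hits its limit and raise the cap. A sketch (the group name and sizes are made up, and whether GoScale does anything like this, let alone in-kernel, is pure speculation):

    # Speculative sketch: react to a container approaching its memory limit
    # by raising the limit, using cgroup v1 threshold notifications
    # (Documentation/cgroup-v1/memory.txt, "Memory thresholds").
    # The group "web1" and all sizes are made up. Needs Python 3.10+ for os.eventfd.
    import os

    CG = "/sys/fs/cgroup/memory/web1"
    SOFT_CEILING = 480 * 1024**2       # wake us when usage crosses 480 MiB
    NEW_LIMIT = 1024 * 1024**2         # then grow the hard limit to 1 GiB

    efd = os.eventfd(0)
    usage_fd = os.open(f"{CG}/memory.usage_in_bytes", os.O_RDONLY)

    # Register the threshold: "<eventfd> <fd of memory.usage_in_bytes> <bytes>"
    with open(f"{CG}/cgroup.event_control", "w") as f:
        f.write(f"{efd} {usage_fd} {SOFT_CEILING}")

    os.eventfd_read(efd)               # blocks until the threshold is crossed
    with open(f"{CG}/memory.limit_in_bytes", "w") as f:
        f.write(str(NEW_LIMIT))        # react: raise the cap before the OOM killer does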
> but only within the resource limits of a single VM?
I suppose the normal usage pattern would be to rent as many of their smallest VMs as needed for your base-load, and then scale them up simultaneously when more capacity is needed.
It does sound intriguing, but I'm honestly rather skeptical about the feasibility of this (at scale) with today's virtualization tech.
I came across GoScale two weeks ago. As somebody passionate about server technology I was both intrigued and very skeptical. Frankly I thought it was another MVP that would never appear.
So I contacted them asking for more information, expecting little. To my surprise they couldn't have been more approachable and friendly and gave me one of their first 10 test accounts. I was staggered to find it worked exactly as they said it would.
I'm already planning to put my next app on it so I just hope it's ready in time to host it.
This is cool. We currently host our app with Engine Yard, and there have been a number of times when we've wanted to put in extra capacity for short bursts but it's been impractical to do. It's not the cost of upgrading that's the problem for me but the hassle of having to configure a new instance and migrate to it. So I love the approach of dragging a few sliders and it magically happening. Go GoScale :)
Would love to know how this affects my bottom line! If my servers are basically running idle, it would be great if they used as few resources (and as little money) as possible. Then at any moment they could scale up when needed!
One way to trial this might be on dev servers that run automated tests. I wonder if a dynamically scaling test server could run our continuous integration tests more rapidly without significantly increasing server costs.
Your graphic depicting Server Load vs Server Instance is kind of choppy. Kind of left me feeling like I would experience a lag in your service. Good luck anyways.
Cheers rodly! Yeah, I'm definitely not happy with it and want to fix it or get it re-done if we decide to keep it. I'll most likely have it stop animating after a while and also only load when it's in view. It's something one of us hacked together really quickly with HighCharts.js.
Jesus, I was like: why do I care about a blank page? Whether on purpose or not, the "pages" are sized to exactly match my viewport (a 900px-high screen). Give me a hint that there's a little something down there to scroll to!
Good point. I'll probably add some background textures or images next time, and perhaps next/back buttons along with a current-page indicator. Any other suggestions for how to signal that there's more content to scroll to would be welcome.
I did purposely have the "pages" resize to the user's viewport using JavaScript, as a visual design choice.
I'm really sorry about that, and thanks for bringing it up, Secoif. I've just pushed a quick fix. The website was supposed to be fully responsive to iPhone/iPad resolutions using various JavaScript and flexible-width methods.
Unfortunately I made a change to video providers (to VidYard for analytics) which broke the video resize script (FitVids-JS). Whoops, my mistake.
I must apologise, we are both currently recovering from a bad cold. While it's 99% gone, it's still lingering on a word or two. No illness or ailment can stop a developer!
Free Solarish OSes for playing with Solaris Zones: http://smartos.org/ (from Joyent) and http://omnios.omniti.com/