ClusterHQ is shutting down (clusterhq.com)
248 points by henridf on Dec 22, 2016 | 224 comments



We've been running Kubernetes (500+ containers) in production for over a year now. I believe (and hope) that 2017 will be the year that persistent data storage will be solved. We are ready to move our data out of OpenStack and have our data services (Elasticsearch, Cassandra, MySQL, MongoDB) join the rest of our apps on Kube-orchestrated infrastructure.

But, we're not there yet. The options just aren't good enough. Look at the list of PV types for Kube [1]. You have technologies like Fibre Channel that are simply too expensive when compared with local storage on a Linux server. There's iSCSI, which is mostly the same story. Ceph is great for object storage but not performant enough for busy databases. GCE and AWS volumes are not applicable to our private cloud [2]. Cinder, to me, has the stench of OpenStack. Maybe it's better now? NFS? No way. Not performant.

I'm looking forward to seeing what shakes out in the next few months. It's just really hard to beat local storage right now.

[1] http://kubernetes.io/docs/user-guide/persistent-volumes/#typ...

[2] Beyond a certain size, it becomes more cost-effective to host your own Kubernetes cluster on managed or colocated hardware.


Look at the list of PV types for Kube

What I see is a lot of complex network filesystems, vendor-specific solutions and gateway protocols to expensive SAN solutions, which are already chalk and cheese in terms of features and performance.

Arguably one of the best features of unix-style systems is support for arbitrary mount points, filesystem drivers and (network or local) blockstores. Storage is, essentially, a well-solved problem at the OS level. The fact that this option is marked "single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster" raises eyebrows.

By choosing to expose individual remote storage models as Kube-level PV drivers instead of just leaving this to the OS, what we essentially see here, I would argue, is the legacy of a cluster orchestration system that came out of Google... a system optimized for large, homogeneous, dynamic workloads providing organization-internal IaaS, and not for reduced-feature-set systems with simpler architectural properties (e.g. no multi-client network-aware filesystem locking).

I would argue that, in fact, what many people actually want is simpler, and the current pressure to use 'one size fits all' cluster orchestration systems with a high minimum bar of functionality and nodecount (read: minimum hardware investment) is misplaced. At the very least, there's some legitimacy to this line of thinking.


Yes. k8s is cool but it is vastly overcomplicated for the needs of the non-Googles. We've been porting my company's production infrastructure to it over the last year and while it's been fun, I don't think it's been the correct thing for us.

Since suggesting your company is not in the same class as the companies that see literally billions of unique users every day, and thus may not need such overcomplicated solutions, is sure to make your boss irate, it's a good idea to familiarize yourself with whatever new hotness has Facebook's or Google's name attached to it.

Your clueless colleagues will race each other to announce the latest Google/FB engineering blog post in Slack so they can look the smartest, and then convince your boss that since your Google-dom will be upon you tomorrow, you must adopt HotNewStuff today. This impulse is behind the proliferation of Hadoop and "Big Data", containers and orchestration, and MongoDB and NoSQL. All of these are useful tools that are valuable when genuinely needed, but they're widely abused because people who don't really know what they're doing think this will give them an out.

You'll be stuck maintaining something interesting but really not mature or production-ready like k8s for years, just about long enough for it to become smooth and stable, at which time something else will come along to repeat the cycle. :)


Out of interest, what are you migrating from?


Deployment across EC2 nodes, managed with devops scripts from a few different tools and monitored with conventional monitoring solutions like Nagios/Munin. We migrated from colocated racks to that a few years back.

Personally, while there is undoubtedly a convenience factor with being pure EC2 and a cool factor with k8s, I think 80% of our stuff would be better off in the racks (which included a couple of hypervisors, so we still had some cloud-style flexibility and could do things like auto-scaling).


Whilst I often agree with you that we're a hype-driven machine that more often than not just creates more work for ourselves, I actually think Kubernetes is an improvement over tying an app directly to EC2.

Obviously I imagine you know other tools better and it depends how you do it, but kube gives you a lot more by default. Arguably more importantly, I can lift and shift Kubernetes and put it in any cloud or on-premise. I'm not really sure what benefit running VMs would give you, other than possibly live migrations.


May I ask - what's the biggest issue you've been facing? Anything we can do to make it easier/more useful? We've found that there are a ton of things that people just end up reinventing unless it comes in the box (e.g. autoscaling, rolling deployments, rollbacks, replication, aggregated logging/monitoring, etc).

Disclosure: I work at Google on Kubernetes


To be honest, I haven't gotten super-into-the-weeds on Kubernetes. Another guy is the main k8s guy, but I have used the cluster he's configured and deployed a few containers on it. I've also had to troubleshoot a few nodes. A lot of these complaints may be things that are already solved, but we just don't know how/where/why yet. I think we're also using a relatively "old" version of k8s (in young technology, "old" is anything more than a few months old), so some of these issues may have already been addressed.

First issue for me: the recommended way to run k8s for local testing, etc., is minikube. I've been running a hybrid Windows-Linux desktop env since June (full-time Linux for 10+ years before that), where Windows is the host OS and my Linux install runs as a VBox guest with raw disk passthrough. It's configured essentially so that Windows acts like a Linux DE that can run Photoshop and play games, while I do all my real work through an SSH session to the local VM, which is my Linux install (and which I can boot into natively if desired, but dual-booting always impairs workflow, which is why I switched to this setup in the first place; previously I would reboot into Windows maybe once a year even though there were games and things I wanted to try, and photo editing in VMs hosted on my Linux box was painfully slow).

This means that minikube, itself dependent on VMs to spin up fake cluster members, won't work because VM hardware extensions aren't emulated through VirtualBox's fake CPU. So that's the first hurdle that has stopped me from tinkering more seriously with k8s clusters. I know there is "k8s the hard way" and stuff like that too, but it'd be really nice if we had a semi-easy way to get a test/local k8s up and running without requiring VM extensions, as I imagine (but don't actually know) most cloud rentals don't support nested VMs either.

Besides this big hurdle to starting out, many of the issues are high-level complexity things that create a barrier to entry more than things that actively get in the way of daily use once you understand them.

For example, we have 3 YAML files per service that need to be edited correctly before something can be deployed: [service]-configmap.yaml, [service]-deployment.yaml, and [service]-service.yaml. We have dozens of services deployed on this cluster, so we have hundreds of these things floating around. They're well-organized, but this alone is a headache. The specific keys have to be looked up, and they have to be in the right type of configuration; if something that is supposed to be in the configmap is in the deployment file, k8s will be unhappy, the right env variable won't get set (more dangerous than it sounds sometimes), the wanted shared resource won't get mounted correctly (and my experience is that it's not always obvious when this is the case, and the mount behavior is not always consistent), or whatever. Names must be valid DNS labels, or something like that, because Kubernetes uses them in DNS entries under the covers. This means no underscores. There's nothing wrong with any of that per se, but it's a lot to wield/remember.
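
To make that concrete, here's a stripped-down sketch of the configmap plus the env wiring in the deployment (all names and the image are placeholders; the Deployment API group is the one from roughly the k8s 1.4/1.5 era):

    # my-service-configmap.yaml (object names must be valid DNS labels, hence no underscores)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-service-config
    data:
      db-host: db.internal

    # my-service-deployment.yaml (just the env wiring shown)
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: my-service
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
          - name: my-service
            image: example/my-service:latest
            env:
            - name: DB_HOST               # if this wiring lands in the wrong
              valueFrom:                  # file, the variable silently isn't set
                configMapKeyRef:
                  name: my-service-config
                  key: db-host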

I also remember mostly thinking the errors related to k8s configurations and commands were unhelpful. For example, it took me a long time (a frustrating 60-90 minutes probably) to realize that `kubectl create --from-file` wasn't reading in my maps as config structures, but rather as literal strings. This seems like something that should've been made obvious through a warning on import, something like "--from-file imports your file as a literal value; if you want the contents parsed and used as a config, use `apply -f`". And note that `apply -f` means "apply the config read and parsed from the file", not "apply with force", while `create --from-file` means "store the file's contents as a literal string under a key instead of parsing it as a config object". You also have to be careful with `kubectl apply`, because it will silently merge existing configs with new values, which is sometimes helpful and sometimes drives you nuts if you forget about this behavior. I dunno if `kubectl delete configmap` and recreate is always feasible, or if that would give dependency conflicts, or what.
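
For anyone hitting the same confusion, a minimal sketch of the difference (the file and configmap names are made up):

    # Creating "from file" stores the file's contents as a literal string value
    # under a key named after the file; it does not parse it as a resource:
    kubectl create configmap my-config --from-file=app-config.yaml

    # Applying a file parses it as a full Kubernetes resource definition:
    kubectl apply -f my-configmap.yaml

    # apply merges into whatever already exists (so stale keys can linger);
    # deleting and recreating is the blunt alternative:
    kubectl delete configmap my-config
    kubectl apply -f my-configmap.yaml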

To deploy: kubectl apply -f changed-yaml.yaml, which sometimes does and sometimes doesn't clean up the running pod (a service configuration thing? or is it a matter of which config type I'm applying: cm, deployment, or service?); `kubectl delete pod old_pod_id` if it isn't automatically reaped (restarting after a delete is automatic under our config, which I'd guess is configurable too); then you have to `kubectl get pods | grep service_name` to get the new pod id, and `kubectl logs pod_id` to get the logs and make sure everything started up normally, though this just shows the logs written to the container's stdout, not necessarily the relevant/necessary logs. Container-level issues won't show in `kubectl logs`; they require `kubectl describe pod pod_id`.
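
In other words, the day-to-day loop looks roughly like this (pod and service names are placeholders):

    kubectl apply -f my-service-deployment.yaml       # push the changed config
    kubectl get pods | grep my-service                # find the new pod id
    kubectl logs my-service-3719409711-x2x0q          # container stdout only
    kubectl describe pod my-service-3719409711-x2x0q  # events, restarts, mounts
    kubectl delete pod my-service-3719409711-x2x0q    # force a reschedule if needed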

Then you have to `kubectl exec -it pod_id /bin/whatever` to get into the right container if you need to poke around in the shell (and I know, you're not supposed to need to do this often). Side note here: tons of people are trying to containerize apps that run on Ubuntu or Debian today onto Alpine, another mostly-unnecessary distraction, and this seems to result in just grabbing a random container image from Docker Hub that claims to provide a good Ruby runtime on Alpine or something, without looking into the Dockerfile to confirm, which IMO is a much larger security risk than just running a full Ubuntu container.

Lots of extended options like `kubectl get pods -o [something]` are non-intuitive. I guess they're JSONPath expressions or other output formats? Again, that probably makes sense, but it's pretty unwieldy. I often have to do `kubectl describe pod pod_id` or `kubectl get pod pod_id -o wide` to get useful container state detail.
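
A couple of examples of what those output options look like (nothing project-specific here):

    kubectl get pods -o wide                                  # extra columns: node, IP
    kubectl get pods -o jsonpath='{.items[*].metadata.name}'  # just the pod names
    kubectl get pod my-pod -o yaml                            # the full object, as stored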

When a running pod was going bananas, we had to `kubectl describe nodes`, again a long and unwieldy output format, and we have to try to decipher from the 4 numbers given there what kind of performance profile a pod is encountering. This leads us into setting resource quotas to make sure that pods on the same node don't starve each other out, which is something I know the main k8s guy has had to tinker with a lot to get reasonably workable.
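
For the record, what actually keeps pods on a node from starving each other is per-container requests/limits (ResourceQuota proper is a namespace-level budget). A minimal sketch with made-up values:

    # quota-example-pod.yaml (placeholder names and values)
    apiVersion: v1
    kind: Pod
    metadata:
      name: quota-example
    spec:
      containers:
      - name: app
        image: example/app:latest
        resources:
          requests:              # what the scheduler reserves on a node
            cpu: 500m
            memory: 512Mi
          limits:                # hard ceiling before throttling / OOM kill
            cpu: "1"
            memory: 1Gi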

Yes, we have frontend visualizers like Datadog that help smooth some of this over by giving a near-real-time graph with performance info, but there's still a lot of requisite kubectl-fu before we can get anything done. I also know that there are a ton of k8s and container ecosystem startups that claim to offer a sane GUI into all of this, but I haven't tried many yet, probably because I'm not really convinced any of this is necessary as opposed to just cool, which it undoubtedly is, but that's not how engineers are supposed to run production environments.

I mean all of this doesn't even scratch the surface, and I know they're not huge complaints, but they just speak to the complexity of this, and a reasonable person has to have some incentive to do it besides "It makes us more like Google". Haven't talked about configuring logging (which requires cooperation from the container to dump to the right place), inability to set a reliable and specific hostname for a container in a pod that will persist through deployments, YAML/JSON/etcd naming and syntax peculiarities in the deployment configs, getting load balancing right, crash recovery, pod deployments breaking bill-by-agent services like NewRelic and Datadog and making account execs mad, misguided people desperately trying to stuff things like databases into this system that automatically throws away all changes to a container whenever it gets poked, because everything MUST be using k8s, since you already promised the boss you were Google Jr. and he will accept nothing less, and a whole bunch of other stuff.

All of this ON TOP OF the immaturity and complexity of Docker, which itself is no small beast, on top of EC2.

That's QUITE the scaffolding to get your moderate-traffic system running when, to be honest, straightforward provisioning with more conventional tooling like Ansible would be more than sufficient -- it would be downright sane!

SOOOOOOOOO ok. Again, I'm not saying there's anything wrong with how any of this is done per se, and I'm sure some organizations really do need to deal with all of this and build custom interfaces and glue code and visualizers to make it grokable and workable, and of course Google is among them as this is the third-generation orchestration system in use there. None of this should be taken as disrespectful to any of the engineers who've built this amazing contraption, because it truly is impressive. It's just not necessary for the types of deployments we're seeing everyone doing, which has nothing to do with the k8s team itself.

I'm sure that given the popularity of k8s, people will develop the porcelain on top of the plumbing and make it pretty reasonable here in the not-so-distant future (3-5 years). However, like I said in my original post in this thread, I don't think this is benefiting many of the medium-sized companies that are using it. I think, to be completely frank, most deployments are engineers over-engineering for fun and resume points. And there's nothing wrong with that if their companies want to support it, I guess. But there's no way it's necessary for non-billion-user companies unless you REALLY want to try hard to make it that way.

I could write something extremely similar to this about "Big Data". Instead of concluding with suggesting Ansible, we could conclude with suggesting just using a real SQL server instead of Hadooping it up with all of those moving parts and quirky Apache-Something-New-From-Last-Week gadgets and then installing Hive or something so you can pretend it's still a SQL database.

Is there a way to make over-engineering unsexy? That's the real problem technologists who value their sanity should be focusing on.


If you're deploying kubernetes to AWS, you should probably be using kops (but then I would say that, because I started the project. But OTOH I started it because nothing else fit the bill!)

Also, if you aren't already a member, come join us in #sig-aws on the kubernetes slack - we're a group of Kubernetes on AWS users, mostly happy - and working together to figure out the pieces where things could be better!


Bookmarking this comment and yours further down thread. Very insightful. Regret I only have one upvote.

Disclaimer: DevOps going through the same process (everyone wanting to move to Kubernetes/k8s because it's the new hotness/"orchestrated containers").


> Storage is, essentially, a well-solved problem at the OS level. The fact that this option is marked "single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster" raises eyebrows.

Just to clarify this a bit: Persistent volumes as an API _resource_ in Kubernetes are independent of which node a container requesting them is scheduled on, which is why it makes little sense to have a host-independent host volume.

If you have your storage sorted out on the hosts you can use a "simple" volume to mount it correctly [1]. Scheduling can also be restricted to the correct nodes with that storage by using node selectors / labels.

1: http://kubernetes.io/docs/user-guide/volumes/#hostpath
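
For concreteness, a hostpath pod pinned to a labeled node looks roughly like this (the label, paths, and image are placeholders):

    # db-pod.yaml; pair with: kubectl label nodes node-1 disk=local-ssd
    apiVersion: v1
    kind: Pod
    metadata:
      name: db-on-local-disk
    spec:
      nodeSelector:
        disk: local-ssd          # only schedule onto the labeled node
      containers:
      - name: db
        image: example/db:latest
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
      volumes:
      - name: data
        hostPath:
          path: /mnt/ssd/db      # pre-provisioned directory on that node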


Yep, can't repeat this enough. If you've solved node storage management, you've solved k8s storage management too. Use hostpath and call it a day.

Disclosure: I work at Google on Kubernetes


Absolutely agree. We just haven't solved node storage management. We need a better way to deal with data storage than simply using StatefulSets and tying database cluster members to a given Kube node.

That's what I'm hoping for in 2017.


You can either use local disk (thereby tying to a node) or network disk (fully supported, but apparently not good enough in any number of dimensions) or local+replication which kube does NOT currently solve cleanly.

The model I want to see is fast network-centric access to replicated local data. Some vendors are pushing into this space now. 2017 will be exciting.


It "WILL NOT WORK" because we need to use additional information for scheduling against a PV that uses local data. That work isn't done because we are still trying to find the right balance of API for local vs durable.

It HAPPENS to work in a single node cluster because the scheduler doesn't have a whole lot of wiggle room to do the wrong thing :)


Along with Docker's efforts, there are others working on container-based storage. This landscape[1] lists Ceph, Gluster, but also Portworx, Minio, Diamanti, Dell EMC's Rex-Ray, and SolidFire. I think also folks like StorageOS and Supergiant and frankly the whole storage & platform industry players are running in this direction. [1] https://github.com/cncf/landscape


Same here. Running ~100 app servers in K8S and the rest (databases & legacy apps) as regular GCE instances with PD drives. But long term going K8S-only is instrumental for us to prevent vendor lock-in.

I really hope that storage for K8S happens this year in a form that is simpler than Gluster/Ceph/etc and preferably integrated. Right now we're using NFS and it's ok for the simple applications, but I wouldn't dare deploy my databases in K8S right now and feel good about it.

What gives me hope is the speed at which K8S is moving forward and the superb experience we've had thus far.


Correct me if I'm wrong, but if you have storage solved for the rest, why not just use that same storage mechanism? If you're using local storage for your legacy applications and are essentially bound to the one node, why couldn't you tag one node and use hostpath in Kubernetes?

Since these applications are fundamentally not designed for something like Kubernetes, what everyone seems to be asking for is for Kubernetes to provide network storage that has the good things from NFS but faster. I could be way off base here, but asking for a simple and performant solution to this problem from projects like Kubernetes just doesn't seem reasonable. Network storage, particularly the type that provides traditional filesystem semantics, is a hard space all on its own.

To put it another way, it sounds like people are asking kubernetes to take single node applications dependent on traditional filesystems and provide a magical performant network storage layer to make those failover seamlessly between kubernetes nodes with no (or tunable?) data loss. For those asking for this how would _you_ go about creating such a system? Just thinking through that a little should make it clear that what you're asking for isn't just a solution to a very difficult problem, it's a solution that is, I think, worth quite a lot of money to the person/group that solves it elegantly.

I would genuinely love to be proven wrong here so by all means destroy my argument with extreme prejudice.


There's ZREP [1]. It uses the snapshots feature of ZFS, which can send them over the network, to continually (but asynchronously) mirror the local filesystem to a remote machine.

It seems to provide exactly the features required (fast local access with failover), at the expense of a window of lost data (since the replication isn't synchronous and confirmed).

[1] http://www.bolthole.com/solaris/zrep/
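
Under the hood, zrep is automating incremental ZFS snapshot send/receive, which looks roughly like this when done by hand (pool/dataset and host names are made up):

    # initial full copy to the standby machine
    zfs snapshot tank/data@rep1
    zfs send tank/data@rep1 | ssh standby zfs receive backup/data

    # later: ship only the delta between the last snapshot and a new one
    zfs snapshot tank/data@rep2
    zfs send -i tank/data@rep1 tank/data@rep2 | ssh standby zfs receive -F backup/data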


What would it look like to be integrated? Could you just use your existing node solutions and hostpath and be set?

Disclosure: I work at Google on Kubernetes


I think NFS without its issues (performance & security) would be ideal as an integrated solution.


If you can solve that elegantly and safely I suggest you build a startup around it.


Any idea how GCE handles PD drives underneath your K8s? Is it network-attached storage? Are those NVMe drives, I wonder?


I thought this was a valid question, as I am curious how you are able to run databases using K8s on GCE.

Why would somebody downvote asking a question?


This doesn't really make much sense to me.

If your systems support software level replication (Elasticsearch, Cassandra, MySQL, MongoDB all do) then why do you need persistent storage? You just need container scheduling anti-affinity and enough replicas.
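
A rough sketch of that approach, using current API fields and placeholder names (at the time of this thread, pod anti-affinity was still an alpha feature):

    # cassandra-deployment.yaml (placeholder name/image)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cassandra
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: cassandra
      template:
        metadata:
          labels:
            app: cassandra
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: cassandra
                topologyKey: kubernetes.io/hostname   # at most one replica per node
          containers:
          - name: cassandra
            image: cassandra:3.9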

You only need persistent storage for systems which don't support that replication. Ceph can certainly be deployed as performant for DB workloads.

You say "Cinder has the stench of OpenStack" but Cinder is just a Python based webapp which povides an API to arbitrary storage backends (Ceph RBD, iSCSI, NetApp ONTAP, whatever). How can it be "better now"? It doesn't provide storage on its own. If your ops team was using the default "proof of concept" LVM backend then I could see how you might get a bad impression but that just means your ops team doesn't know much about OpenStack.

Am I missing something obvious?


Yes, you're missing a few things. Maybe not obvious, though.

First off, the replication thing. It is true that ES, C*, and Mongo replicate within their cluster mostly automatically. However, this is not without cost. It takes non-trivial amounts of network capacity, disk I/O, and CPU cycles to migrate shards from a failed (or downed) node to a newly stood-up node. Often, many GBs must be moved and for something like ES, where shard replicas reside on many different nodes, that means much of your cluster feels the impact of this. The cluster can heal, but healing isn't easy.

Why would a cluster node go down? It's not always hardware failure. CoreOS regularly self-updates and reboots itself without intervention. In a Kubernetes cluster, this is a non-event because pods are simply rescheduled elsewhere and the degradation is momentary. If we were talking about 300 GB of persistent data, though, that's a serious amount of data that will get reshuffled every time there is a node reboot, especially when you consider that an Elasticsearch cluster may span dozens of physical nodes and experience dozens of node reboots in the course of a normal day. Maybe we could hack something that would disable shard reallocation in ES (there's a setting for this) when scheduled reboots happen but that's pretty hacky. Besides, ES is just one of a number of different datastores in use at my workplace.
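
For reference, the "hack" is the standard Elasticsearch rolling-restart dance, roughly (the host is a placeholder):

    # before the reboot: stop the cluster from reallocating shards
    curl -XPUT 'http://es-host:9200/_cluster/settings' -d '
    {"transient": {"cluster.routing.allocation.enable": "none"}}'

    # ...reboot the node, wait for it to rejoin...

    # afterwards: re-enable allocation so the cluster heals in place
    curl -XPUT 'http://es-host:9200/_cluster/settings' -d '
    {"transient": {"cluster.routing.allocation.enable": "all"}}'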

As for Cinder, it's reliant on OpenStack APIs which (at least as of Juno) are reliant on things like RabbitMQ. We've seen a number of OpenStack failures due to RabbitMQ partitioning and split-brained scenarios. We're also back to the disk-on-network problem again: SCSI backplane ---ethernet---> client will never be as fast as local disk.


> First off, the replication thing. It is true that ES, C*, and Mongo replicate within their cluster mostly automatically. However, this is not without cost. It takes non-trivial amounts of network capacity, disk I/O, and CPU cycles to migrate shards from a failed (or downed) node to a newly stood-up node. Often, many GBs must be moved and for something like ES, where shard replicas reside on many different nodes, that means much of your cluster feels the impact of this. The cluster can heal, but healing isn't easy.

I'm talking about replication, not sharding though. If the data is actually lost then you have to bear the penalty of re-replicating it to match your replica count regardless, there's no magic wand here to do with "persistent storage". If the data isn't actually lost (e.g. due to CoreOS automagic reboots) then you absolutely should be putting the cluster into maintenance mode until the reboots are complete.

> As for Cinder, it's reliant on OpenStack APIs which (at least as of Juno) are reliant on things like RabbitMQ. We've seen a number of OpenStack failures due to RabbitMQ partitioning and split-brained scenarios.

Still pretty confused when you mention OpenStack. Cinder doesn't rely on OpenStack APIs per se, it provides an OpenStack API (for block storage). RabbitMQ clustering has longstanding issues with partitions which are mentioned explicitly in the documentation; nothing to do with OpenStack, everything to do with the Erlang Mnesia DB. Any decent OpenStack team has learned by now to use singleton RabbitMQs with a master/slave load balancer (i.e. haproxy) in front.
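
That is, roughly this kind of fragment in the haproxy config, where the second broker only takes traffic if the first drops (hosts and ports are made up):

    # /etc/haproxy/haproxy.cfg fragment
    listen rabbitmq
        bind *:5672
        mode tcp
        option tcpka
        server rmq1 10.0.0.11:5672 check inter 5s rise 2 fall 3
        server rmq2 10.0.0.12:5672 check inter 5s rise 2 fall 3 backup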

> We're also back to the disk-on-network problem again: SCSI backplane ---ethernet---> client will never be as fast as local disk.

Right. But wasn't the comment about persistent storage? You're never going to have persistent storage in your k8s cluster that magically avoids that problem, so not really sure what the point is here.


I'd suggest looking at ScaleIO from EMC which is free for unsupported use I believe. It is blazing fast, runs on bog standard Linux, and supports K8S and Cinder. It's the most impressive block storage product for high performance I've seen.

http://cloudscaling.com/blog/cloud-computing/killing-the-sto...


Any chance to see a comparison on recent Ceph version(s)? (Maybe with BlueStore as Ceph's backend?)


>NFS? No way. Not performant.

Really?

https://www.spec.org/sfs2008/results/res2011q4/sfs2008-20111...

And those are nothing compared to modern systems.


Let me clarify: not performant in any way that we want to implement. The NetApp referenced in that analysis would cost as much as we've spent on both of our OpenStack and Kube clusters. NetApp is great if you're a hospital or a bank but not an internet company.

We need something built on common PC chassis, either as distributed local storage or some type of high speed interconnect.


Storage is definitely one of those businesses where "fast, good, cheap: pick any two" has always applied, and still does.


and as an industry, we are tending towards fast, cheap, and you can write code to work around not having "good"


In my experience, if it isn't good relatively quickly, it certainly doesn't remain cheap for long.


It's not so much about the code as having a usage that can accommodate being run on "not good".

Banks and hospitals don't.


I've had lots of issues getting NFS to work with some particular DBs as well, such as RethinkDB and InfluxDB...


Last I checked, databases came with numerous warnings in the docs: "don't run me on a network drive".

Is there any reason why you are trying this at all?


Depends entirely on the database as to if they work and why. Oracle, for instance, has a native NFS client built into the database. Modern versions of MS SQL support running on SMB3. Postgres is fine with NFS as well.

The biggest issue people have with NFS is when they roll their own. To get performance they allow asynchronous writes, but this means the write cache can potentially be lost during a power failure. Enterprise NAS systems like the NetApp referenced above have battery backed cache so writes are never acknowledged until they're in a secure medium.
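
Concretely, the difference is one word in the export. Something like this on a hand-rolled Linux server (path and network are placeholders):

    # /etc/exports: 'sync' refuses to acknowledge a write until it hits stable
    # storage; 'async' acknowledges immediately and can lose the write cache on
    # power failure
    echo '/srv/db-volumes 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra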


I think that's a good rule of thumb but you can certainly do it if you know what you're doing (which includes having the right hardware).


I don't know how to solve the problem of stateful containers and data storage, so I tried to use NFS for this and failed.


Not to beat the NetApp horse to death, but they've solved that problem (they just do a poor job of advertising it):

https://github.com/NetApp/netappdvp


I've had many problems with NFS, but they were not performance related. Mostly FS hangs with some program waiting for IO that never comes.


This is the biggest issue I've had with NFS. If the mount goes stale, programs can get stuck in an uninterruptible state that takes a reboot to clear out.


Just mount the file system with "soft,intr" options and you won't have that problem. Otherwise NFS is (perhaps unrealistically) optimistic that the server will come back.
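
For example (server and paths are placeholders; note that "intr" has been a no-op on Linux since kernel 2.6.25, and "soft" can mean silent data loss for write-heavy workloads, so it's a trade-off):

    # return an I/O error after the retries are exhausted instead of hanging forever
    mount -t nfs -o soft,intr,timeo=100,retrans=3 nfs-server:/export/data /mnt/data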


I'm in a similar boat. Does Kubernetes have a persistent local storage API story on the horizon?

The one thing I have found on Mesos that I really liked are persisted volume resources:

http://mesos.apache.org/documentation/latest/persistent-volu...

I was hoping K8s had something similar but it didn't the last time that I looked.

Edit: I just looked at your 1st link. I see the PV docs, none of those fit my use case(local) unfortunately.


Curious if you've tried gluster at all. Using Kubernetes also and about to cross the threshold of stateful data - performance is important but not critical since we're using it in a fairly low volume, low throughput way, but it will grow over time and we want to future-proof it a bit. gluster seemed like the best-case fit for us but have done no empirical testing yet.


Hi, I'm the executive director of CNCF (which hosts Kubernetes) and the co-author of our landscape document, which has a section on storage [0].

I'll just state the obvious that you're very much correct about stateful storage still being the most immature aspect of the Kubernetes and cloud native story, but there are definitely tons of folks working on it. And in the meantime, people are succeeding in production using last-generation solutions on bare metal or provider offerings in the cloud.

I'll make a quick pitch that if you are a cloud native end user interested in engaging with the community, please email me about joining CNCF's end user board (my email is in my profile). We just dropped the price from $50 K to $4.5 K, and it now includes 5 tickets to CloudNativeCon/KubeCon ($1.7 K and 2 tickets for startups).

[0] https://github.com/cncf/landscape


Yes I think that Kubernetes is too stubborn with regards to how it wants to decouple Pods/Containers from specific Nodes.

Many new databases (especially NoSQL ones) already support clustering and rely on being tightly coupled to specific nodes/machines in order to work. I think K8s currently makes this a bit too difficult - They need to improve support/documentation for using host storage for these kinds of clustered DBs.

NFS doesn't make sense for storing structured data because it doesn't know what the best way to partition/search your data is going to be (the directory tree structure isn't always what we need) - I think that this can only be solved at the DB layer unfortunately.


I don't want to break your hopes but stateful containers will only ever run on GCE and AWS.

The entire existence of stateful containers depends on having network storage at hands.

The only good network storages are GCE and AWS volumes, which are proprietary trade-secret technologies only available there.

If you want to play it old-school, you can run virtual machines with VMware on bare metal servers with SAN disks (iSCSI/Fibre Channel). The VM can be hot-migrated from one host to another. It works and it's been battle-tested for almost a decade. (IMO: Docker is not only new but a toy in comparison to that.)


If virtual machines with live migration is what you're after, vmware is not your only option. You can get largely the same effect without involving any iscsi/FC/SAN tech with another open source project, which happens to originate from the same place as Kubernetes: http://www.ganeti.org/

Ganeti has been battle-tested for about a decade too, supports an assortment of storage backends, including some clustered ones like Ceph, can do live migration between hypervisor nodes, and it's a nicely maintained Python app with some Haskell parts in it.

ganeti came out of my team at google in the mid 2000s, fwiw. I did not work on it, but I've certainly used it. It's pretty nice.


Thank you. I've used Ganeti extensively and it helped wrangle our Xen/DRBD rat's nest very quickly. It's an awesome project.


> I don't want to break your hopes but stateful containers will only ever run on GCE and AWS.

Actually Kubernetes is starting to work on support for persistent local volumes; we know the lack of this feature is a significant barrier for running some stateful applications on Kubernetes, particularly on bare metal. The concrete proposal for how we are thinking to do it is at https://github.com/kubernetes/kubernetes/pull/30044

The high-level feature requests are at https://github.com/kubernetes/features/issues/121 and https://github.com/kubernetes/kubernetes/issues/7562

(Disclosure: I work on Kubernetes at Google.)


Having containers with local volumes is counter productive. They're just pets that can't be moved around and killed/recreated whenever you want. (Though I understand that it can be useful at times for some testing).

IMO: It's a marketing and usage problems. You should re-focus people on running exclusively stateless containers. Sell the strengths of containers, what it's good at and what it's meant to do. Containers = stateless.

Stateful containers are a hyped aberration. People barely get stateless containers working but they want to do stateful.


I don't understand why you say "Having containers with local volumes is counter productive." I would agree it's probably not a good architecture if you're running a huge single-node Oracle database, but it's an excellent way to run data stores like Cassandra, MongoDB, ElasticSearch, Redis, etcd, Zookeeper, and so on. Many people are already doing this, and as one large-scale real-world example, all of Google's storage systems run in containers. The first containerized applications (both at Google and in the "real world") were indeed stateless, but there's nothing about containers that makes them fundamentally ill-suited for stateful applications.


You don't understand because you are blinded and spoiled by Google.

Go see the outside world => They have none of your internal tech and services. Stateful containers do not exist there. "Containers" means "docker" which is experimental at best.


Yeah, I'm really looking forward to a true local storage option. I'd recommend watching https://github.com/kubernetes/kubernetes/issues/7562 and https://github.com/kubernetes/kubernetes/pull/30044 if you want to keep up with how things develop.


Check out the startup I work for, ClearSky Data [1]. We provide cloud-hosted block storage with SAN-level performance to your private datacenter and enterprise-grade durability and availability at a competitive price. I'd be glad to answer any questions you have (I'm an engineer) or point you to someone who can.

[1] http://www.clearskydata.com/


Given the topic you are posting on, my question would be what happens if ClearSky goes out of business? Potentially any file system hosted on your storage would just disappear, right? (And the DR capability too, if I am reading your website correctly.)

I don't mean to be negative but I'm having some trouble seeing how the ClearSky feature set justifies assuming what looks like an existential risk to business-critical data on the service. Interested in your thoughts on this.

edit: typo


Exactly. Besides the obvious technical implications of off-site, "cloud-based" storage, what happens if ClearSky can't pay its hosting bill? Presumably, your many TBs of data (which could take weeks to transfer) would vanish into thin air.


This is a valid concern and I am the wrong person to answer it (not being on the business side of things) but I can get you in touch with someone who can.


We don't use public cloud. We're hosted on dedicated hardware for a number of reasons. While we're not opposed to commercial solutions, we strongly prefer open source solutions for obvious reasons (like, this ClusterHQ situation).


You don't have to. We provide iSCSI or FC access points in your private datacenter.


Is the hardware storing the data in-datacentre too? I think that was the key thing.


We (ClearSky) store your data off-site (except a small cache), but provide performance as if the data were on-site.


I don't believe you. There's no way you're coming even close to local disk performance with an off-prem solution.

Feel free to prove me wrong but I don't think that this is a reasonable solution for backing storage for a database.


Well it seems clear to me that they can only provide that level of performance for blocks already in their onsite cache. Presumably they're using a novel compression/de-duplication scheme and maybe prioritizing blocks using historical and/or predictive analysis to cache the right data at the right time but you can only transfer data as fast as you can transfer it. I'm guessing a full export of all of your data (that isn't cached) is going to go as fast as the line/compression allows.


Bandwidth is much less a problem for OLTP workloads than latency is. (If all you care about is bandwidth, S3 is your friend.)

With ClearSky, even for workloads that don't fit in the edge cache (which we believe are few), you'll still see single-digit millisecond random read (and write) latencies. This is made possible by our points of presence (PoPs) located in each metro area we serve. These PoPs house the lion's share of data in a private cloud, and are connected to each customer site with private lines with sub-millisecond latency.

In other words, the speed of light is very fast when it goes in a straight line with nothing in its way ;) While we do have some secret sauce in the data pipeline, it is because we own the network to the customer that we can provide the performance we do.

(Fancy PDF with a few more details here: http://cdn2.hubspot.net/hubfs/445689/2015_assets/ClearSky-Da...)


Check out this report if you don't believe me: http://www.clearskydata.com/clearsky-takes-primary-storage-t...

Several members of our team were core developers of EqualLogic (pre-Dell buyout). We have significant investment from Akamai. I promise you this is not snake oil.


Check out this report? A link to a marketing data collection form? Come on. Link me to the PDF. Let's see the technical details of exactly how your solution is built out.


Technical details are here [1], though if entering a name and e-mail in a form is off-putting to you, I'm not sure anything could convince you to take the step of switching enterprise storage vendors.

Aside, I can't express how validating it is how much you (and others, given the downvotes) disbelieve me. It makes me quite proud to have helped develop a service considered so impossible that it is written off as black magic. It does make it hard to market the damn thing though ;)

[1] http://www.clearskydata.com/clearsky-global-storage-network-...


Wow.

First, this is a technical audience. Plenty (most?) of us couldn't give a shit about what marketing says. I believe about half of what comes out of the mouths of sales/marketing folks -- for good reason. We don't care what Gartner or some firm you paid to write up a report says.

Personally, I have a real dislike of sales/marketing folks and I will avoid them at all costs... so, no, I don't want to give you my name or e-mail address. I don't want your people calling me, interrupting real work. I don't want to view your webinar. I want to look at the technical details -- the facts -- and decide for myself and then, quite possibly, completely forget I ever heard about your company and go about my day.

Last, don't fool yourself. The downvotes aren't "validation" -- at all -- but, hey, go on living in your fantasy world. If it really was as awesome as you seem to think it is, it wouldn't be hard to market. To the contrary, the damn thing would sell itself and you wouldn't even need a marketing department.


Your loss buddy.


So you are essentially doing AWS Storage Gateway then?

http://docs.aws.amazon.com/storagegateway/latest/userguide/G...


No. ClearSky provides several features that Storage Gateway does not:

* ClearSky has a dedicated private network to every customer ensuring low latency.

* ClearSky fully manages disaster recovery; you lose no data if your datacenter is destroyed.

* ClearSky provides transparent instantaneous data mobility: you can move volumes to any other data center in a matter of seconds.

* ClearSky provides and manages the edge cache.

All of the above must be provided by the customer with Amazon's offering.


So it's stored on a public cloud?


Yes.


I get that local storage is good for perf, and everyone has some, but you HAVE to understand how disastrously bad it is for availability (unless you do replication yourself, a la Cassandra).

That said, we hear you, we're contemplating options for local-disk volumes.


Ceph RBD is plenty fast for having a database on... was able to get better performance than fibre channel or NFS to a NetApp using Ceph, ran some nice large Oracle instances on VM's on top of OpenStack backed by Ceph RBD.


My testing showed otherwise but I'd love to see what you've done. What sort of equipment did you use, what kind of network, and how many IOPS did you see?


SuperMicro has their IOPS-optimized Ceph storage SKUs; that is what was used. Looks like they have updated since we purchased:

https://www.supermicro.com/solutions/storage_ceph.cfm

We went for upgraded network capacity though, 20 Gbit/sec cluster backend, 20 Gbit/sec cluster frontend...

12 * 8 TB drives, with 800 GB NVMe for the Ceph journal. Fast, large Ceph journal was key.

Total installation was about 3 PB raw, that is 1 PB useable with replication size 3. 33 Ceph OSD nodes, 3 Ceph monitor nodes and Juniper low latency switching using the QFX5100.

Full IPv6 network on both frontend/backend. 11 nodes per rack, each rack being its own /64 routed domain. 3 racks.

I'm no longer doing contract work for the company, but last I heard they were expanding it out to 6 racks with an additional 3 PB raw capacity added on because of growing datasets.

It's an OpenStack cluster that is connected to this Ceph cluster, 40 Gbit/sec storage backend network, with 40 Gbit/sec front-end that VM's have all their traffic on. So storage and standard traffic don't mix.

The performance and IOPS even virtualized were enough that the entire company is moving their bare metal databases to VM's. I am unable to disclose IOPS or Oracle database performance due to contractual obligations unfortunately.


> The options just aren't good enough.

I'd be curious to understand your storage requirements as a production Kubernetes user. What would you like to see for performance, cost and RPO/RTO?


local NAND flash?


does VsphereVolume mean you can use data virtualization slash software defined storage like vSAN?

https://en.wikipedia.org/wiki/Data_virtualization

https://en.wikipedia.org/wiki/Software-defined_storage


curious to know what you are doing right now? I'm planning to deploy a redis cluster on a private cloud and am wondering what to use.

do you use hostpath right now?


We don't put any persistent data into Kube. Everything goes into OpenStack instances (Ubuntu), orchestrated by Chef. We hate it. OpenStack SDN has been flaky, Chef is a pain and doesn't support the latest Ubuntu releases well, none of the devs or technical ops engineers like it.

It's my #1 goal for 2017: figure out persistent volumes for Kube.


Change Chef for Ansible and I have the same architecture. And you know what? I have decided to put on hold K8S until persistence is properly managed. Meanwhile, we have decided to give a chance to serverless architectures with AWS lambda.


I agree that persistence in k8s is tricky, especially at scale, but at least in our case that doesn't drive us away from the platform. Kube is awesome for services, and if we have to keep a few things on gcloud instances bolted to reliable storage for the moment, that's at least less heterogeneous than what we had before kube came along. In other words, I don't think you have to kube all the things to see a lot of benefit.


Are you running OpenStack yourself or a vendor-backed version? My understanding is deploying OpenStack in production is a nightmare. Would you mind sharing your experience (cluster size, upgrades, support, etc.)?


Disclaimer: I work at Pivotal, but I'd suggest taking a look at http://bosh.io which can handle full-stack automation; there are already quality releases for MySQL, Elastic, Mongo (from anynines), and Cassandra, with commercially supported releases in some cases, and fully orchestrated VM and volume management on OpenStack.

These are for cloud foundry nominally but (yay actually collaborating foundations) the open service broker API ( https://www.openservicebrokerapi.org/ ) allows you to hook these into Kube and (the best part) standardize how your CI/CD pipelines manage the lifecycle of these services independently of whether they're backed by Openstack today or Kubernetes tomorrow.

Persistent volumes for these sorts of services on Kube will require the new PetSet primitive to mature.


If you have access to an object store, you can get persistent file system storage for your containers using ObjectiveFS[0]. You can run it either on the hosts or inside the containers depending on your goal.

[0]: https://objectivefs.com


How many people were employed at ClusterHQ? Honestly I never even heard of the company but I had heard of some of the open source projects. Maybe I'm just out of the loop.

Also any information as to lessons learned, etc? Basically why it failed? Looking at the marketing material I didn't see anything really remarkable about it (nothing that stood out as a "oh this is why I would give them money") so I'm curious.

> I’ve been part of big successes as well as failures. While the former are more pleasurable, the latter must be relished as a valuable part of life, especially in Silicon Valley.

Relished? I never really understood the Silicon Valley "failing is awesome!" mentality. Failure is failure. It's not awesome. Why would you relish it? Take the lessons learned, for sure, but you likely just lost several people's money and you lost your employees their jobs; what is there to take enjoyment from? Seems a little sadistic and a tad lacking in empathy for others involved.

But maybe that's just me.


> Failure is failure. It's not awesome. Why would you relish it?

You're beating on a straw man. Nobody (very few people?) thinks the failure itself is good in and of itself. The mentality is to observe that even if a business endeavour ends in financial failure, that outcome is just one effect of a long process that had many effects, and that many of the other effects of the process were highly beneficial.

Sure, some people are out of a job, and some investment money has been lost - these are bad things, and nobody is claiming otherwise. On the other hand, many people have been gainfully employed for some time, and have gained a great deal of professional experience. That value has not been lost. Open source software has been created, and continues to be useful. That value has not been lost. Investors have lost money on this bet, but if they distributed their risk over many bets, they are probably going to come out ahead of where they would be if they made no bets whatsoever. Many people have learned some lessons about why a business may fail to be financially successful, and can use that knowledge to build financially successful businesses in the future. In sum: much value has been created which would not exist if this business had not been attempted, even if it ended in financial failure.


I think the critical idea is separating a business failure from a moral failure on the part of those involved.

Businesses fail, and that's often sad. They even fail when you do everything right. It's just not shameful to have your business fail, nor in itself a mark on the character of those involved.


> You're beating on a straw man.

How so? If you're stating that you're going to greatly enjoy your failure, how is that not making failure seem good? Can you greatly enjoy something that you don't consider awesome (greatly being the keyword)?

Seems like we could split hairs over this in a variety of different ways. On the other hand the rest of your post discusses the value created out of a failed company which wasn't related to the context of my post.

I think we're going to be arguing in straw man circles :)

> if they distributed their risk over many bets, they are probably going to come out ahead of where they would be if they made no bets whatsoever

Not necessarily true, actually. The best investors come out ahead. The majority of them near break even or lose money on a fund. At least this is what I've always found unless you have a source I could take a look at?


> On the other hand the rest of your post discusses the value created out of a failed company which wasn't related to the context of my post.

"not relevant"? The entire point of my response was to point out that the fundamental flaw in your argument is that you are ignoring the wider effects of the business process, while the "failure is success" mentality is all about taking into consideration those same wider effects.

> If you're stating that you're going to greatly enjoy your failure, how is that not making failure seem good? Can you greatly enjoy something that you don't consider awesome.

You don't enjoy the failure, but you can still enjoy the process that happens to lead to the failure, and the net outcome can still be positive. Once again: the mentality you're attempting to criticise is all about accounting for the entire process and all of its outcomes, and not narrow-mindedly focusing on just the financial outcome for the business.

> I think we're going to be arguing in straw man circles

If you continue to insist on equivocating between financial failure of the business (an individual outcome) and the holistic outcome of the entire process, yes. Well, I won't bother to continue responding, but that's beside the point.


> You don't enjoy the failure, but you can still enjoy the process that happens to lead to the failure, and the net outcome can still be positive.

But like I said you're adding your own words / perspective to the post. None of this was said nor implied hence:

>> I think we're going to be arguing in straw man circles


> If you're stating that you're going to greatly enjoy your failure

'Relish' in this context doesn't mean 'greatly enjoy', as in entertainment. It means simply 'to significantly value', as in valuing the learning opportunity, but not necessarily the event itself.


> It means simply 'to significantly value'

Er, not trying to be pedantic but I can't find this definition anywhere. Is that an informal use? I only see references to the condiment or similar to 'greatly enjoy'.

Might be one of those cases where using more specific wording would have helped the blog entry.


The article is talking about relishing the richness of life's experience, not some sort of Machiavellian cackling while you watch people lose their jobs. The failures are simply to be valued as learning experiences that are part of life. That particular phrase is preceded in the same sentence with another phrase saying that the failures basically aren't pleasureable.

Read the paragraph as a whole, and don't just fish out one phrase and take it out of context.


I read the whole thing. Taking it at face value I don't see what you're talking about in it at all. Maybe he meant it in the way you're saying? Maybe he didn't? It's usually best to speak in literals on the internet and not hope to someone to fish out implications.


Ed Catmull, Pixar cofounder and inventor of the Z-buffer, has a great take on mistakes and failure in his book Creativity, Inc. Here's a pretty decent summary:

https://www.brainpickings.org/2014/05/02/creativity-inc-ed-c...

Essentially, we're going to fail. It happens. Might as well get it out of the way.

Secondly, failure averse cultures don't actually prevent failures, and they have a tendency to squash innovation.


> Secondly, failure averse cultures don't actually prevent failures, and they have a tendency to squash innovation.

Celebrating failure and being failure averse have nothing to do with each other other than celebrating failure means you're probably not failure averse (but I would argue you lack empathy for the people you just lost money for and for the people you just lost jobs for).


"Essentially, we're going to fail. It happens. Might as well get it out of the way. Secondly, failure averse cultures don't actually prevent failures, and they have a tendency to squash innovation."

That's a very interesting philosophy ... I wonder how that idea, in general, relates to clusters ?

I've seen organizations pour lots of time and resources and brain power into chasing nines on their uptime and they all have outages. And those outages are particularly painful due to the complexity that had to be added - complexity that sometimes grows non-linear in relation to the "nines gained".

I've always leaned towards building dumb things that failed simply. They're going to fail anyway ...


I read the whole book, let me put Catmull's quotes in context. They're not really generalized startup/life advice; Creativity Inc. is targeted toward people who are already in management and want to improve their department's output.

While the root principle that failure is a mile marker, not a road block, on the path to innovation can be extracted, Catmull's book itself is not really about that topic. Rather, it's about how Pixar came to be (recounting major events in company history up to publication, including the spinoff from Lucasfilm and the death of Steve Jobs), the principles that power Pixar's culture, and how Pixar seeks to instill those principles in its employees.

In context, I feel that Catmull is specifically addressing two points when he discusses failure. First, dealing with failure within the ranks of your workforce and second, imbuing upon the workforce a confidence that allows them to be mutually critical without getting hostile.

Catmull is an engineer first and foremost; that's what needs to be understood going in. He is taking an engineering approach and applying it to management. IMO, Pixar is an excellent example of what happens when one does that.

The first point about not firing people over mistakes or bugs is something that engineers know well, but non-technical people may not understand. A bug, a hole, a mistake, a vulnerability, a typo (like the one that almost deleted Toy Story 2 and could've destroyed Pixar if not for an accidental backup on the home computer of a telecommuting employee) is not a valid firing offense. You need to hire smart people and give them room. That includes acknowledging their humanity, and that the best of us are still going to make simple mistakes sometimes, just because we're human. Even elite runners trip and fall sometimes; practice doesn't make perfect but it makes very good.

Dismissing good talent based on the types of simple mistakes that are essentially random events is first, very wasteful of both monetary and talent resources, and second, antithetical to a culture that teaches that mistakes, experimentation, learning, and exploration are not only OK, but necessary parts of doing things that are worthwhile.

The second point is more about the employees and developing a culture where they can be comfortable that their image and reputation is not impugned when a colleague expresses his or her honest beliefs about their work product, and vice versa. This is another thing that engineers really value and understand well, but which causes a lot of non-technical people to worry endlessly.

Catmull recounts how many people need time to adjust to a culture of free, open, and fair criticism (which makes sense, as many corporate ranks are terrorized by glancing, unstable egomaniacs scattered throughout the ladder) and how Pixar assists them in developing the mutual understanding, trust, and respect that allows them to provide true and honest feedback to one another without becoming bitter. That is, "failure" in one task is a lesson to be learned from, not a fault to be feared, and such "failures" are welcome at Pixar because they understand it is an intrinsic element of exploration and development. Pixar's task is to help its employees internalize these values, so that such open criticism can refine their output into the exceptional pieces of art that they're known for producing.

Pixar, via Catmull, is a wonderful example of what happens when a smart, fair, and directly involved leader is allowed to control something he knows well. That's something that happens all too rarely. We'd have many more Pixars if we could get more Catmull-esque people in positions where they could override the drones who go around flashing their MBAs.


Having a culture where failure is okay is different from having a culture where failure is valued. Like the parent commenter said, if failure means you look back at what happened, learn some lessons, do better next time, then that's fine, you've gotten value out of failure. If you say "Hey, I got a failure out of the way," you're playing slots and falling victim to the gambler's fallacy.

> Make New Mistakes. Make glorious, amazing mistakes. Make mistakes nobody’s ever made before.

> Mistakes aren’t a necessary evil. They aren’t evil at all. They are an inevitable consequence of doing something new (and, as such, should be seen as valuable; without them, we’d have no originality).

So, was this mistake glorious, amazing, or new?


David Anderson (of Kanban fame) once observed that if your estimates are accurate, then they should be wrong about half the time (and half of those should be overestimates).

The pressure to make them into promises instead of estimates is, I think, a form of failure aversion, one that almost everyone deals with and one that causes a lot of unnecessary drama.

I don't remember whose quote this is, but there's the old line about how, if you aren't failing from time to time, it means you aren't trying hard enough. Your reach can't exceed your grasp if you don't reach at all.

However, it's easy to fail from sheer stupidity as well. Failure is a trapping of success, not an indicator of it.


Well, a few thoughts.

1) It means you tried. Most people don't even get that far because they're too busy being terrified of failure.

2) It means a hypothesis has been vetted and deemed not correct/successful, so something was learned.

3) I'm sure some of it is just trying to put a brave face on a shitty situation. If you try something, fail, and then jump off a bridge because you're so down on yourself over it, I'd say that's the wrong approach. No one likes failure, but having a more positive mindset about it is healthier than the alternative.


> I'm sure some of it is just trying to put a brave face on a shitty situation. If you try something, fail, and then jump off a bridge because you're so down on yourself over it, I'd say that's the wrong approach. No one likes failure, but having a more positive mindset about it is healthier than the alternative.

There is a difference between having a positive mindset and looking at something that negatively affected a lot of people as a positive. I don't think those two have to be the same at all.

I would love to hear some feedback from the employees there, however :)


It kind of reeks of entitlement to me. You would never see someone who is barely scraping by at a blue-collar job make a post celebrating getting laid off.


How does celebrating the shut down of their company show that they believe themselves to be inherently deserving of privileges or special treatment?


Full-fledged failure is genuinely unpleasant, and Silicon Valley plays deceptive word games to hide this. Most of what's cheerfully dubbed "failure" should be called "short-lived experiments that didn't work out."

Those doodles-gone-wrong can be shrugged off. We can count rapid-cycle launch goofs in that category, because there's an easy chance to come to market a couple months later with something better. We can even include small-scale corporate wipe-outs, too, as long as it's mostly VC money that got squandered and the investors expect some duds in their portfolio. ("Deal me another hand, mister!")

If it's five years of your life that collapses into dust, that's not fun at all. If it's a colossal product failure that damns your reputation (or your customers' lives) for keeps, that's pretty grim, too. I don't mean to get morose about it, but the key to longevity in any field involves making sure that your supposed failures actually are pretty transient and superficial.

That is not always possible.


> relished as a valuable part of life

Not saying "failure is awesome! this is my favorite!" but saying "I've learned something from this failure and walked away a better person from it." Also, you need low points to make high points better. If high points are your baseline, you never feel higher than average, and low points become abysmally low. Knowing that "hey, sometimes shit happens" makes happy moments happier.

And I guarantee you that SpaceX wants to learn failure points on early Falcon 9 landings rather than when their interplanetary ships land on Mars with people on them.

Science heavily leans towards proving what we think is right, while little goes towards proving something is wrong. Which means you need every point of failure to help prove the bounds of a model.

If failure is used as a learning aid, then failure is great. If you don't learn from your failures and just keep repeating the exact same mistake, then yes, it is nothing to celebrate.



This range is accurate.


> Also any information as to lessons learned, etc? Basically why it failed? Looking at the marketing material I didn't see anything really remarkable about it (nothing that stood out as a "oh this is why I would give them money") so I'm curious.

My personal take on this, from what I've read in the post (which is kinda harsh).

[First, containerization (read: Docker) is in its infancy. It's only 3 years old. Let's assume it's immature and not well understood.]

It seems from the post that ClusterHQ was about doing stateful containers, and the company was born in 2014. Stateless containers are still a challenging and new topic as of now. Stateful containers are purely theoretical right now.

They had no product, no market, and no business model. The tech is not ready and the usages aren't defined. It couldn't go well.

They don't seem to have had a product or sold anything? Were they an open source company by any chance? (I couldn't tell; I'd never heard of them, and the site is shutting down, so it's too late to check.)


Why not when it's someone else's money that is gone, and even better, your continued success is basically guaranteed since you have a network now that you can call on to join as an "advisor" or that can get you another pile of money to burn on a startup? Who wouldn't relish a position in their career with very few real consequences?


There were many factors, but the largest one was lack of sufficient venture capital funding to follow through with plans.


This is Michael from ClusterHQ. Just wanted to say thanks to everyone in the community who helped make the last 2 and a half years a great experience. Sad that it's ending now, but excited for what's to come.


Michael, I appreciate this. I'm sure ClusterHQ was at times brutally hard work and you and team have worked hard on making things happen. I'm sorry it didn't work out and hopefully the future will be brighter with new lights.


Thanks for posting, Michael, and I hope good things for you in the future.

Do you foresee Flocker living on in opensource form?


Yes, Flocker will remain open-source and my hope is that the community continues to improve it. Fli too, btw, for creating and managing ZFS snapshots https://github.com/ClusterHQ/fli


Maybe donating the code to a FOSS foundation, like Apache, might be a way of ensuring the community continues. Or, at least, giving it a fighting chance to do so...

DM me if curious


I raised that question during the final company meeting yesterday. The ownership of the code is in limbo until the investors decide what to do with it at least a few months from now.


1. What about the Flocker open source project? 2. Do you have plans to continue and maintain only the Flocker project?


- December 22, 2016: ClusterF*ed
- December 15, 2016: Reflecting on a Year of Change and What’s to Come in 2017 ("All in all, 2016 was full of tests and triumphs and I can promise that 2017 will also be a big year for the company.")

I'll be the first to admit I don't know anything about this company, but that's an interesting change of heart.


As a former ClusterHQ employee who had been with the company longer than our CEO, I should be able to shed some light on this. The plan was to raise round A funding two years ago (which we did) and then raise round B funding now, before becoming profitable 2 to 3 years from now. Venture capitalists have become far more risk averse. We all thought that round B funding would happen, but it fell apart at the last minute, and instead of announcing round B funding, Mark announced a shutdown.


For reference, here is the post from 12/15 where the statement "I can promise that 2017 will also be a big year for the company" was made by the CTO. Having the CTO make that statement, knowing that the company was shutting down, seems odd - which suggests it was not an orderly shutdown.

https://clusterhq.com/2016/12/15/container-predictions/


And that the CTO wasn't kept in the loop.


Or the Wotton hypothesis: "An ambassador is an honest gentleman sent to lie abroad for the good of his country."


It seems a little odd to do this 3 days before Christmas. Holiday depression is already a real problem, and making people unemployed a few days before the holiday sounds like a bad thing.

And it's a terrible time to be job hunting.

Why not hold on a few more weeks, let employees enjoy the holidays and announce in mid-January when employees can actually talk to hiring managers and get some good job prospects instead of being met with out of office messages?


It's likely that people within the organization already knew things weren't going well.

So, the bright side of this is that:

A) employees can spend the entire holiday week or two with their families, and

B) employees can start out their new year unencumbered by the stress of working for a failing company

Personally, I'd rather have it this way than come back from whatever fun holiday adventure I'm on to find out a week later that I'm losing my job... that's a terrible way to start the year's momentum.


Sure, there could be mitigating circumstances, but I've worked for a startup that folded just after Christmas while the office was closed for the holiday; we were all fired by phone.

It sucked.

Spending the holidays with family can be even more stress inducing when you've just learned that you lost your job. No ability to go out with local friends or former coworkers to commiserate about being fired and talk about job prospects. I was too distracted with job searching to really relax and enjoy time with family.

Rumor was that the investors pulled the plug before the end of the year for tax reasons, but I'm skeptical since it took months to sell off assets (physical and virtual) and wind down the business.

I made out pretty well though, got a lucrative contract with the company that bought the core software to keep it running for them until they could merge it into their systems.


I am not sure if even the CEO knew that sufficient venture capital to succeed was unavailable until all avenues had been exhausted. I had a hint that things were not going well when my request for equipment was denied until round B funding was raised.


Given that the venture will fail anyway, it's always better to figure this out sooner rather than later. Like, take the money that would have been spent on salary for those extra weeks and pay it out as severance instead.

There are arguments to be made for stretching things out if it means funds get spent as salary rather than paid back to investors, but this is an allocational waste (employees do work for no real benefit) for a distributive gain (employees get money rather than investors).


How do you magically hold on a few more weeks? Is there some magical startup Christmas pay holiday fund I don't know about? Or are you proposing to just string them along and simply not pay them in the new year?


Because a day in business costs money.

If the company is out of money then you shut it down regardless of anything at all.

Running for a single day longer only makes the problem worse and presumably spends money that does not exist - someone needs to pay the money the employees earn.

Keeping people feeling nice isn't how to run a business that is at the endpoint of its lifecycle.


I understand that the investors have agreed to pay all former employees severance pay. I am one of those former employees. That helped somewhat.


Well, you don't know the exact circumstances. Maybe there's no money in the bank and they couldn't raise more investment and they did the best they could to stretch it out to the New Year?

Or maybe they're giving everyone 3 months severance in which case, it might not be so bad to get a Christmas gift of pay without work.


It was 4 days before Christmas. We were all let go yesterday. The announcement was written sometime between then and now.


They shut down because they couldn't secure series B funding. It's likely that they couldn't both make next payroll and also deal with severance and shutdown costs. Hard to hold on for a few more weeks if you can't actually pay anyone.


If the company is shutting down no matter what, keeping the employees on is essentially giving away money for no (business) reason. To do so could be construed as a violation of fiduciary duty to the shareholders.

Alternately, who says they have the money to make payroll either way? Keeping them employed when you know there is no money to pay them would be even worse. It would leave them just as unpaid, but also even more bitter about it.


Or maybe the technology wasn't correct for what most people are trying to do with Docker these days. Flocker never felt like it quite fit in the ecosystem along with Mesos, Kubernetes, etc.

Great effort, guys - the tech is cool, but technology will continue to evolve, and if you bought completely into something that doesn't fit nicely with the movement, you will get left behind.

Edit: not sure why the downvotes; I was not being sarcastic. The comments about "pioneers get arrows" in the post made it seem like they had a perfect product and the world just wasn't ready for it.


Since when is Flocker competing with those platforms? It's designed to work with them. http://kubernetes.io/docs/user-guide/volumes/#flocker
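For what it's worth, the integration really was just another volume type in the pod spec. Here's a minimal sketch from memory of the docs linked above - the dataset name is a made-up placeholder, and (as I understand it) the dataset had to already exist in the Flocker control service; Kubernetes only referenced it by name:

    # Hedged sketch of a pod using the Flocker volume plugin (circa late 2016).
    # "my-flocker-vol" is a placeholder dataset name.
    apiVersion: v1
    kind: Pod
    metadata:
      name: flocker-web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: www-root
              mountPath: /usr/share/nginx/html
      volumes:
        - name: www-root
          flocker:
            datasetName: my-flocker-vol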


I never said it was competing. I said it didn't seem to fit nicely. I run several very large clusters, and we evaluated Flocker and it didn't fit nicely into the ecosystem. It felt very "bolted on".


I see, sorry for misunderstanding. What did you move to for persistent volumes?


We are using DC/OS (Mesos) and found it to be much more feature rich for what our needs were.


I appreciate the honest tone of this announcement without any "incredible journey" nonsense.


This is why it's a really bad idea to rely on PaaS/SaaS for your next project. When the company tanks (or cancels the product, changes the API, raises its prices, etc.) you're screwed. Hope no one out there was heavily committed to FlockerHub.

What we really need is better business models for supporting Open-Source.


Counter argument: if you don't rely on PaaS/SaaS, it'll take you 3-4x longer to launch. So, rely on them, but be ready to switch. A few bad things like this shouldn't take away from how services enable rapid development and iteration of ideas.


>Be ready to switch.

This is why I include a weekly backup of our data from any of the PaaS/SaaS providers we use. Make sure you know exactly what is in the data you are backing up as well. You might find that data you really need isn't included in the backup/export files. There are a few sites where I have to take screenshots of configurations or rules to make sure I have all the data I need to switch over to a new process with minimal disruption.

I trust other people, but I don't want to endanger my organization by inextricably tying its survival to that of another organization.
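In case it's useful, here's a rough sketch of what that weekly export looks like when scheduled in-cluster as a Kubernetes CronJob. Everything provider-specific is hypothetical (the export URL, the secret, and the PVC are made up); the point is just "pull the export on a schedule and keep the files somewhere you control":

    # Hypothetical weekly export job; assumes the SaaS exposes an authenticated
    # export endpoint. Adjust URL/auth to whatever your provider actually offers.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: saas-weekly-export
    spec:
      schedule: "0 3 * * 0"          # Sundays at 03:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: export
                  image: busybox:1.36
                  command:
                    - /bin/sh
                    - -c
                    - >
                      wget -q --header "Authorization: Bearer $EXPORT_TOKEN"
                      -O "/backup/export-$(date +%F).json"
                      "https://api.example.com/v1/export"
                  env:
                    - name: EXPORT_TOKEN
                      valueFrom:
                        secretKeyRef:
                          name: saas-export-token   # hypothetical secret
                          key: token
                  volumeMounts:
                    - name: backup
                      mountPath: /backup
              volumes:
                - name: backup
                  persistentVolumeClaim:
                    claimName: backup-archive       # hypothetical PVC

The other half of the advice - actually checking what's inside the export - still has to be done by a human.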


Try getting log files/historical data from a company that shuts its doors without notice.


Why would it take you 3-4x longer to launch? Running your own servers in the cloud is not that hard. You can actually save time by not being restricted by the PaaS. I.e. if you have special needs, and you will, you can go in and hack the software.


If you don't have special needs, then you as a small team or company are taking on a burden that is large enough in scope for an entire company to focus on. It's why most startups just use GMail instead of rolling their own. Sure, it's easy to set up an email server, but now it's one more thing to think about.


GMail is a bad example. If GMail were to close you could just move to another email server.

Besides, using a PaaS is not free and I don't mean in terms of money.


>>Running your own servers in the cloud is not that hard

For small side projects, sure. But if you run your own servers, that means you're also responsible for their configuration, backups, maintenance, upgrades, etc.

That's why the "Dev Ops" people exist in many software companies. And they are not exactly cheap.


It's a bit unfair to make absolute statements like that.

As with everything else, there's a risk calculation involved. If the PaaS/SaaS company delivers a substantial improvement over the existing solutions, the risk may very well be worth it. Conversely, if you're replacing a tried-and-true solution with something shiny just because it happens to have a lot of buzz around it, well that can be a risk miscalculation. (Of course, the shiny new thing may be worthwhile because it attracts a certain class of talent, but shiny for the sake of shiny is not going to yield much good.)

That's why larger clients sometimes invest in a startup, to help ensure some stability by having better access to the internals. (Yet, I've seen that fail too: some startups outright lie to their investors, and that ends just the same way as you'd expect.)


The nice thing about a SaaS company is once they get big enough, they're very stable because of their recurring revenue. (Salesforce isn't in danger of going out of business) They also can't ignore you like many big enterprise vendors, because you can turn them off on shorter notice. They have to rewin your business every year. It's just a little trickier with SaaS startups who haven't cleared the hurdle of their fixed costs yet.


Choosing something like Cloud Foundry should help allay those fears. It's backed by a foundation, with some big companies donating engineering effort and providing hosted and on-premise offerings.

Disclaimer: I work for Pivotal, which donates the majority of the engineering effort to Cloud Foundry.


SaaS are not new. I used to work for one here that has been going since 1998.

Though, as someone who would like to found a SaaS company one day, I wonder what I can do to prevent potential customers from seeing me as unreliable. Stories like this do not help.


That's a shame. I've always had great interactions with the ClusterHQ team. Michael, Mohit and Carissa have always been incredibly friendly when I've run into them at Dockercon. Unfortunately my engineering team was never able to fully integrate flocker into our production environment as we relied heavily on custom storage driver actions. Wish you folks all the best in your next projects.


Maybe "stateful containers" aren't a good idea. The whole point of containers is supposed to be that they can be duplicated and loaded into many machine instances. "Stateful containers" with changing databases inside can't be treated that way.


I should make a startup called Trampoline. Other startups pay me insurance premiums, and when one crashes I hop in with a team and salaries for the ejected employees to keep the doors open for however long they paid for. As part of the customer SLA they cite Trampoline and the duration of post-mortem life being paid for.


I like this idea, but I wonder how it would affect the business decisions of founders once they know they have a safety net.


> Mark Davis (CEO) explains this opportunity as, “Imagine if you were the 10th engineer at VMware. That’s the kind of experience you’re going to have with us at ClusterHQ.”

That was from a clusterHQ recruiter's email that I received just a week ago. I thought it was weird to sell a position in that way.

I don't find it unreasonable that a recruiter was hiring people while the company was closing down (what do they know?). I'm reminded that it's always important to ask for specific financial information when joining a startup. What's your revenue? Expenses? How long is your runway?

My condolences to anyone who had hope, time, and effort invested in clusterHQ stock options.


I remember trying to set up a Flocker cluster and got brainf*ed. Cert-based auth in local development clusters was probably overkill.


Same thing here. The barrier to entry was too frustrating for a casual evaluation.


Wow, this is surprising. I wonder what the reason for the shutdown is? Flocker looked like a really cool product but was pretty involved setup-wise when I was evaluating it.

What are best options now for bare-metal? Ceph? NFS?


Maybe the Docker Infinit acquisition [1] caused it? Given Kubernetes' plug-and-play storage classes (and Gluster's maturity), plus Docker planning to add Infinit natively, there might not have been much space for them.

[1] http://venturebeat.com/2016/12/06/docker-acquires-file-synci...
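To make the "plug and play" point concrete: a storage class means the cluster admin wires up the backend once, and application teams just claim storage against it by name. A hedged sketch with GlusterFS as the backend (the heketi resturl and the sizes are made up, and the exact API versions and field names shifted around across Kubernetes releases in this period):

    # Hedged sketch only: admin-defined StorageClass backed by GlusterFS,
    # plus an application-side claim against it.
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gluster-fast
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi.example.com:8080"   # made-up heketi endpoint
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: db-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: gluster-fast
      resources:
        requests:
          storage: 20Gi

Swap the provisioner and parameters and the application side doesn't change, which is exactly the squeeze a bolt-on product like Flocker was in.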


Docker bought Infinit and will presumably integrate it into Docker Engine.


Mostly people like you who thought it was a good product but never paid.


Looks like those flagging a post are too young to understand nothing gets closer to bare metal than THE DIRECT BARE METAL LANGUAGE ITSELF, and are probably so enchanted with the idea of Open Source that they don't know from real experience that you can only rely upon yourself. The shutting down of this company only proves as much.


A little bit more info on why the outfit failed would have been nice.


They probably just ran out of money or a key relationship dried up causing their runway to become impossibly short.


It's not like good container storage solutions don't exist for databases and other stateful applications. The problem is in expecting them to be free and open source. Building orchestration or simple file or object storage is easy, but building high-performance, resilient, scale-out storage that can run on cheap commodity boxes is a difficult task. Once you get over the "free" requirement, there are some good options like ScaleIO and Robin Systems. https://robinsystems.com/containerization-platform-enterpris...


It would be interesting to hear what the alternatives are now to what they were trying to do with Flocker (https://clusterhq.com/flocker/introduction/).

The post seems to make the point that other alternatives came up and removed their competitive advantage. Is anyone here using either Flocker or the alternatives?


I'm sad to hear this. I loved reading Richard Yao's blog posts about ZFS on Linux.

https://clusterhq.com/2014/09/11/state-zfs-on-linux/


They are ceasing all operations. Someone might want to update their careers page.


Or their home page.


How is this company shut down?

Almost on the same day, my US-based company officially announced it is closing down, but we haven't been paid since June 2016. I really want to know if this happens in the US.

See also https://news.ycombinator.com/item?id=13242516

Thanks


Oh the immediate shutdowns! After going through the immediate Nebula shutdown, I'm glad we weren't depending on ClusterHQ.

Same question I had for Nebula: you had no idea that a month ago you'd have to shutdown, right?

I've started to follow these ex-CEOs so we avoid their next companies. This kind of shutdown is just terrible.


Many -- probably most -- ultimately very successful companies had near-death experiences. They aren't usually written about. Apple's near-death in 1997 is well documented. Tesla's is described in Ashlee Vance's biography of Musk. http://foundersatwork.com/ has firsthand stories about some others.

They would have become full-death experiences if the CEO had said, "Hey everybody, we're near death, just FYI". So in the alternative world where company deaths are always announced well in advance, far more companies would die. Probably not a better world.

I don't know the story here, but in most cases there was some deal on the table that would have saved the company but fell through in the couple of days before the announcement.

Regardless, the right thing is to have enough payroll in reserve for an orderly shutdown and transition plan for customers. It's not clear whether that's happening here -- I hope so.


The founder of FedEx once saved the company by taking its last $5,000 and turning it into $32,000 by gambling in Las Vegas.


Careful. There's so much BS in that guy's myth story. [0]

[0] http://www.snopes.com/business/origins/fedex.asp


Yes, but the essence is true: at some point they had to scrape together enough cash to buy the day's jet fuel or it would have been all over.


Maybe you have been seeing things through SV rose-colored glasses for too long.

Companies everywhere are shutting down daily with proper warnings to customers, employees and users. It's not uncommon for companies to give several months (or even a year) of warning that they are winding down operations, won't be accepting new users, will honor contracts, etc. And they are doing the right thing.

Every time there's a post here about how people should keep at least 3 months of salary in case things go south, the general feeling is that whoever is living paycheck to paycheck must be stupid. However, when it's a CEO who couldn't do proper planning and screwed up the lives of employees and users alike... oh, sure, let's give the guy a break.

EDIT: Let me anticipate the argument here that if anyone's "lives" were screwed then it's their problem that they didn't understand the risks properly.


Please don't be uncivil when commenting here. The middle parts are fine (though weakened by a straw man you call "the general feeling"), but the personal swipe at the beginning is not.

tlb's argument is an interesting one, and so would yours be had you focused on its substance and dropped the snark and the elbowing—which was even worse in your original comment upthread. Indignation gets upvotes, but it also reduces signal/noise ratio and primes others for fight mode. Those things are bad for HN threads.


CEOs are trying to keep the company going.

It would make no sense for every CEO whose company is a month away from shutdown to announce, "listen, just to let you know, we are a month away from shutdown, but we are trying to find the money to keep going".

For many companies this is the default state of operation and undoubtedly many companies have gone from that state to great success.

What would employees and customers do in the face of such an announcement?

If you have ever been CEO of an ordinary company trying to build a business, then you'd understand.


Indeed. There is also the fact that internal and external messaging is likely much different. I'd not be too surprised if there had been some attrition within their ranks (especially devs) leading up to this point.


I know how you feel as a customer. I felt similarly surprised when I read this post because I'd done an interview with The New Stack at KubeCon with the VP at ClusterHQ. They were talking about their future plans as if there was indeed a long future. I remember wondering at the time how a company like ClusterHQ would survive the peak chaos of the container landscape at the moment, but I definitely did not expect to hear of such a sudden demise.

Regardless, I think giving customers a large amount of time to migrate over, etc. should be a priority. I personally really like the way Runscope handled their change in fortunes. Screw VC trajectory and build a strong personal business.


> Same question I had for Nebula: you had no idea that a month ago you'd have to shutdown, right?

Often they've got ideas that they might shut down because cash is running out but they generally have leads/prospects they're trying to pursue (or they fool themselves into thinking that anyway), until the cash actually runs out there's always a chance for a hail mary.

And revealing publicly that the company is in pretty dire straits would make any chance of a breakthrough, investment, or buyout nil, so while I dislike this kind of sudden shutdown immensely, the incentives really are against softly landing customers (and employees).


For the record, I totally agree with you here, and with your further comments below. The HN attitude that running a startup is some moral virtue is obnoxious. This company simply screwed over its users. They aren't owed anything.


This is a ridiculous comment. Have you ever tried running a company? I'm certain the founders of these companies put their blood, sweat, and tears into their work, but unfortunately things didn't go as planned. They are probably deeply saddened and crushed about having to fire employees and make very hard decisions. Life doesn't revolve around you and your technology inconveniences.


Please don't post uncivil comments to Hacker News. There's a good point in the middle of your post, but it's ruined by the name-calling of the first bit and the personal nastiness of the final bit, which break the site guidelines.

https://news.ycombinator.com/newsguidelines.html


Sorry dang not my intention.

> name-calling of the first bit

When did I name call though?


'This is a ridiculous' is name-calling in the sense described at https://news.ycombinator.com/newsguidelines.html:

When disagreeing, please reply to the argument instead of calling names. E.g. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."


> please accept our enduring, deeply felt gratitude.

Sure, but would 'apologies' have been out of order too?


So what's next for the software projects by ClusterHQ? Making them part of the Apache Incubator?


If we're going to celebrate failure can we at least fail with respect, humility, and maybe even a tiny bit of class?

The word "sorry" does not appear in this post. Instead of apologizing to investors, users, and employees for letting all of them down the CEO writes a contentless self-aggrandizing post.

The CEO also doesn't bother to thank anyone despite being literally and metaphorically indebted to investors, users, and employees for getting as far as they did. [Update: there was "gratitude" - my mistake; sorry]

Besides the self-aggrandizing "we did it first" tone of the whole post, here are a few more parts I'd love to see future farewell posts skip:

> it’s often the pioneers who end up with arrows in their backs

Unless your point is that you were a company who tried to take what wasn't yours and was punished for it... this phrase is awkward-at-best.

> I called these “Friends of ClusterHQ” by the sobriquet “FoCkers”

The use of "sobriquet" doesn't make your adolescent play on words classy.

> The big successes are literally impossible without the many failures. Take a moment to think about that.

What a ridiculous thing to tell your audience that includes employees looking for jobs, investors out of money, and users without a service they may have depended on. Out of those 3 groups only investors care about such things. The other 2 groups are collateral damage to your hubris.


> The CEO also doesn't bother to thank anyone despite being literally and metaphorically indebted to investors, users, and employees for getting as far as they did.

"To all the employees, customers, users, investors, advisors, partners, competitors, consultants, analysts, and vendors who helped us – there are thousands of you, and you know who you are – please accept our enduring, deeply felt gratitude."


Missed that; thanks for correcting me.

So at least there's one line of "thanks"


Unless an investor additionally gave a loan, there is no debt owed in any way to investors - that's the point, they made a decision to invest at risk and presumably did due diligence based on accurate information.


comment says "metaphoric" debt. Surely you can agree that the principals of a company owe (in the sense of being socially obligated) a debt of thanks and gratitude to investors for taking a risk on them in the first place?


> Surely you can agree that the principals of a company owe (in the sense of being socially obligated) a debt of thanks and gratitude to investors for taking a risk on them in the first place?

Investors don't take on risk for the sake of the company they invest in. They take on risk because they think they will make money for themselves, and they would screw over the CEO and all of the employees, and other investors, in a heartbeat if they could get a good exit.

The investors got an ownership stake in an early-stage company with a high risk of failure and a corresponding high potential reward. The company got money to fund operations. Both parties got what they signed up for, and there is no imbalance in gratitude owed.


No.

The relationship is wrong if you feel indebted to your investors and like they deserve thanks. They are partners in the business.

Presumably the CEO did all they could to succeed and behaved in an ethical and honorable manner. If they did so, then investors have been well served.

I will agree however that anyone who ever chooses to do business with you in any context deserves thanks. On that basis, thanks is deserved.


> The relationship is wrong if you feel indebted to your investors and like they deserve thanks. They are partners in the business.

Plus, they knew the risks. There is no reason to feel sorry for early-stage investors who lose money unless some kind of fraud was perpetrated on them.


Do you feel sorry for an athlete who hurts themselves?

Do you feel sorry for wife of a stuntman who dies in an accident?

Do you feel sorry for those with a gambling addiction?

I can understand why you wouldn't at all feel sorry for an investor to the same extent as my examples above; presumably the investor hasn't ruined their life. However, my point is knowing the risks doesn't preclude someone from receiving empathy/sympathy.


> I can understand why you wouldn't at all feel sorry for an investor to the same extent as my examples above; presumably the investor hasn't ruined their life. However, my point is knowing the risks doesn't preclude someone from receiving empathy/sympathy.

I was not writing about a general case of risktaking: I was writing specifically about investors.

If I had been writing about athletes, stuntmen, gamblers, or any other kind of people, I would have written something that took into account the different contexts in which such people take risks.

I really don't see what you are trying to accomplish by changing the focus from a specific case to a general case.


Errr, wait what?

Firstly, to say there's no debt, monetary or otherwise, to someone who believed in your company enough to hand over their own money (or money in their control) to back you is just pure arrogance.

Secondly, there absolutely is debt. When a company goes into administration the assets are sold to pay out those who hold equity in the company. They're literally owed debt.

EDIT: To clarify for those responding saying that equity does not constitute debt...

Creditors and investors are different, absolutely. It is also correct that investors are typically paid last during administration (although investors may form agreements to come before other investors). Nonetheless, during administration, the administrator determines how much money the investors are "owed", which is literally the definition of debt.


No matter how many times you say that this situation is "literally the definition of debt", you're still not going to stretch the legal definition of debt so that it applies here. Equity is not debt. Both can end with another entity eventually being owed money, but in a legal sense that is not literally the definition of debt, because here the payout flows from equity, which again isn't debt.

From a practical perspective, the difference is that with equity you accept a lower floor (you might get nothing) in exchange for a higher ceiling (your investment might 100x if the company goes public). That's the deal these investors signed up for, and unfortunately for them they got option number 1. They're not owed anything because that's the deal they agreed to.


I have zero idea what the legal definition of debt is in the country/jurisdiction you live in. I also didn't specify what country I live in.

Additionally, my first point was one of metaphorical (social) debt.

Given the context and the fact I'm typing in English it ought to have been clear I was referring to the definition of debt in English. If that wasn't clear, I apologise for the confusion.


> I have zero idea what the legal definition of debt is in the country/jurisdiction you live in. I also didn't specify what country I live in.

Only definition that matters is the one in the jurisdiction this startup was in.


Last I checked this was a tech website not a legal one...


So?


not to quibble, but no, they're not literally owed debt. The assets of the company are used to pay off any actual debt (payroll taxes, outstanding invoices to vendors, etc). Only then is any remaining money distributed to the equity holders. They take absolutely last.


They're not owed debt. They own a piece of the company. When the assets are liquidated they will get their share unless they are on the shit-end of a liquidation preference clause.


> When a company goes into administration the assets are sold to pay out those who hold equity in the company. They're literally owed debt.

This is incorrect. Please see the differences between equity and debt. [1]

[1] http://smallbusiness.findlaw.com/business-finances/debt-vs-e...


> I called these “Friends of ClusterHQ” by the sobriquet “FoCkers”

To me this just sounds like a shot at irony by combining crudity with eloquence; I fully agree with your sentiment, though.

Mark Davis is pretty tone-deaf, it reads like "hahaha good times, didn't we have a good run, FoCkers?" while the message is "here's a bunch of problems and you probably need to work overtime".


I think he's trying to keep his employees optimistic and happy. Like the investors, surely they knew that getting involved in an early-stage company was risky, but they were willing to do it because they were excited by the prospects, committed to the vision, or similar. While it is clear that the post was not written by a management professional, as a completely uninvolved individual, I personally didn't find the authentic, human tone jarring. I understand more directly involved people may feel differently.

I don't know what ClusterHQ did, but presumably they may have some users for whom the abrupt shutdown is inconvenient at best, especially if their company depended on whatever this is in production and now they have to try to fix it 3 days before Xmas with 40% of the company's staff on leave and the remaining 60% mentally checked out. I think that's the only group that this post disrespects. It would've been nice if he could have at least offered some alternatives, announced the shutdown ahead of time, or given some sort of migration path.


Is there a service that's shutting down? As I understand it, Flocker is an open source product you're meant to self-host.


Dude, chill.


You can't easily make a company out of selling free stuff?


Could this be in any way akin to the shutdown of Lavabit? I know it's not the same type of company, but if there was pressure to put back doors in or any sort of compromise, I would support the action.

If not, then it's a really bad way to shut up shop. OK, the source is out there, but given the holiday season, people might have appreciated a little warning.


Hi, founder of ClusterHQ here. I was just reminiscing over the demise of my company, and saw this comment had not been replied to.

I can categorically state that there was no pressure to install back-doors or any Lavabit-style problems.

As for your other comment, as a business you don't get to choose when you run out of money. I believe there was a plan to secure more money, and when that plan failed the employees were told immediately. The timing is irrelevant.


Thank you for the comment.

I appreciate as a business you don't have complete control of your destiny. There is a fine line between keeping employees informed and scaring them witless and I appreciate you were doing the right thing for your company. I or anyone else would likely have done the same in your situation.



