Having run a similar self-configured setup on a Raspberry Pi, I see one major issue: the Pi can corrupt its file system after prolonged use. This has happened to me consistently every 60-90 days of use. There are hundreds of reports of it in the forums (search Google for "Raspberry Pi corrupt filesystem").
My question is - how do you plan to account for this?
Several ways. First, arkOS already has a tool (https://github.com/jacook/logrunner) that buffers logs in RAM before they are written to disk, reducing what is probably the top cause of SD card wear on web-serving Pis. Its RAM footprint is small, and it makes a huge difference over time. Second, with a successful crowdfunding campaign, the installer will be able to target both the SD card AND a USB-connected drive, so the boot partition is written to SD while data lives on an external device that is hardier than a cheap SD card. :) Finally, backup services (plural, because there will be a couple of different options to choose from) will be part of the core framework, making regular backups easy and data loss less of a nightmare.
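To illustrate the log-buffering idea, here is a minimal Python sketch (not LogRunner's actual implementation): accumulate log lines in memory and flush them to disk in one batch per interval, so the card sees one large write instead of many small ones. The path and interval are purely illustrative.

    import threading

    class RamLogBuffer:
        # Collect log lines in RAM and flush them to disk in one batch,
        # trading a few seconds of durability for far fewer SD card writes.
        def __init__(self, path, flush_secs=60):
            self.path = path
            self.flush_secs = flush_secs
            self.lines = []
            self.lock = threading.Lock()
            self._schedule()

        def write(self, line):
            with self.lock:
                self.lines.append(line)

        def _schedule(self):
            timer = threading.Timer(self.flush_secs, self._flush)
            timer.daemon = True   # don't block interpreter exit
            timer.start()

        def _flush(self):
            with self.lock:
                pending, self.lines = self.lines, []
            if pending:
                with open(self.path, "a") as f:   # one write per interval
                    f.write("\n".join(pending) + "\n")
            self._schedule()

    log = RamLogBuffer("/tmp/app.log", flush_secs=60)  # illustrative path
    log.write("GET / 200")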
With a powered USB drive it would be a nice setup!
But I feel that Drydock and arkOS CONNECT do not fit your model. Setting up central backup and a VPN for your Pi is essentially just another service. The point of this was to get rid of the middleman, yet you conveniently introduce yourself as one?
I totally understand that impression. But the goal with arkOS is to help people securely self-host with as much stability as possible. Since self-hosting is complex and occasionally troublesome, some people may need help getting properly connected, and the only reason we are considering hosting those services is to serve that end. If someone needs the services in order to self-host, I think it is better that they use them than not be able to self-host at all. Any services will be 100% optional and up to the end user.
Also, I like to dd the OS to the card as a single image (one large sequential write). I have no desire to untar hundreds or thousands of files (hundreds or thousands of small writes) to install an OS on a card.
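For the sake of illustration, the whole "one write" install is just a sequential byte copy. A hedged Python equivalent of `dd if=arkos.img of=/dev/sdX bs=4M` (both paths are placeholders, and writing to a raw device needs root):

    import shutil

    # Copy the image to the card in 4 MiB chunks: one sequential pass,
    # no per-file metadata writes. Double-check the device path first!
    with open("arkos.img", "rb") as src, open("/dev/sdX", "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)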
I think "secure" and "cloud" are pretty misleading. In terms of security (if it's about my personal mail/calendar), my bigger concern would be that my data is lost, e.g. not backed up properly or susceptible to hardware failure. As for "cloud": I can't see much on the page that has remotely the same properties as a "cloud" solution. Maybe that's why they put it in quotation marks? Still, very misleading IMHO.
Steps are being taken to address the security and stability concerns; see my other comments on this item. Also, you may need to look beyond just the front page - we can't put everything we want to do and plan to do on the front page at once. :)
I've made ngrok completely open source and permissively licensed; I'd suggest using it rather than reinventing it yourself. Feel free to contact me about it.
I've got a Pi next to me right now that has open reverse SSH tunnels routing ports 25 and 465 from a $5/month DigitalOcean VPS to itself. (I'm working on getting Vagrant and Ansible set up so it can provision and configure inexpensive VPSes and update DNS MX records to suit on the fly…) The Pi lives behind my home NAT gateway; I can only reach its local-network-only port 443/SSL webmail if I VPN into my home network.
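For anyone wanting to replicate this, here is a minimal sketch of that kind of reverse tunnel, run from the Pi (the hostname and account are placeholders, and this is not necessarily the parent poster's exact setup):

    import subprocess

    VPS = "tunnel@vps.example.com"   # hypothetical VPS account
    FORWARDS = ["-R", "25:localhost:25", "-R", "465:localhost:465"]

    # -N: forward only, run no remote command. Binding ports below 1024
    # on the VPS requires root there, and listening on its public
    # interface requires GatewayPorts enabled in the VPS's sshd_config.
    subprocess.run(["ssh", "-N", *FORWARDS, VPS], check=True)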
I see this as a persistent issue that seems to surface every few months. I think Google's fiber network permits running servers for 'non-commercial' use, which is a whole other can of worms entirely. Does anyone know of any legal movement or petitioning going on currently to try to get ISPs to allow home servers?
Virtually all consumer-level ISPs in the US explicitly disallow running any sort of persistent listen server as part of their TOS (Terms of Service). The "right" they have to do it is that you agreed to the TOS at signup time.
There will be a tool that helps with dynamic DNS and port proxying for certain services (though this may be against your ISP's ToS). Alternatively, you will be able to run arkOS on a VPS if hosting at home is not a match for you.
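The dynamic DNS part presumably boils down to something like the following sketch (the update URL, hostname, and credentials are placeholders, not the actual arkOS tool; many providers expose a dyndns2-style authenticated HTTP endpoint like this):

    import requests

    HOSTNAME = "myhome.example.net"                    # hypothetical hostname
    UPDATE_URL = "https://dyn.example.com/nic/update"  # provider-specific

    # Discover the current public IP, then report it to the provider.
    ip = requests.get("https://api.ipify.org", timeout=10).text.strip()
    resp = requests.get(
        UPDATE_URL,
        params={"hostname": HOSTNAME, "myip": ip},
        auth=("username", "password"),
        timeout=10,
    )
    print(resp.text)   # a dyndns2-style server answers e.g. "good 203.0.113.7"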
People participate in it like in SETI@home, Tor, or the Mersenne prime search.
"Site" code gets deployed in a Git/BitTorrent mashup with a web-of-trust system of signed certificates.
Code can be federated and distributed and not rely on a single point, e.g. GitHub.
People access it by running their own modified DNS server that gets updated with the serving "exit nodes". (Part of the hypothetical client; see the sketch below.)
It is essentially Tor, but with no crazy onion URLs, direct connections, and divestment at the back end.
The user downloads the client, runs it, and that's it.
Pick some TLD that sounds safe for the taking, like .!com, and we're good to go.
There's even a trendy new currency that makes e-commerce viable in this system. ;) It's all there.
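To make the naming piece concrete, here is a hedged sketch of how a client could trust an "exit node" record. The record format, name, and IPs are all invented, and the publisher key is generated inline purely for illustration (in the real system the public key would come from the web of trust). It uses the `cryptography` package's Ed25519 API:

    import json
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Illustration only: generate a publisher key inline. In practice
    # the public key would be obtained through the web of trust.
    publisher = ed25519.Ed25519PrivateKey.generate()

    record = json.dumps({
        "name": "mysite.!com",                            # community-TLD name
        "exit_nodes": ["203.0.113.7", "198.51.100.12"],   # documentation IPs
    }).encode()
    signature = publisher.sign(record)

    # The client's modified DNS resolver verifies the record before
    # trusting the addresses; verify() raises InvalidSignature if the
    # record was tampered with.
    publisher.public_key().verify(signature, record)
    exit_nodes = json.loads(record)["exit_nodes"]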
I've implemented bits and pieces over the years, but never gotten the full model working.
Everything from domain registration to fair load balancing has hard but feasible solutions.
And what do you get for it? The same freedom that BitTorrent and Bitcoin give you, but for everything. You get it all.