
I believe that within a few years most of the JavaScript programmers out there who are afraid of VMs are going to realize that it really isn't that hard to deploy Node (or anything else for that matter) to Linode, Rackspace, Digital Ocean or AWS.

Until then, they will continue to pay crazy markups to Heroku or live with limitations like those of the new Parse service, which doesn't let you install any Node packages (there are 31,000 and counting in the npm registry: https://npmjs.org/).

Linode even has a StackScript that sets up Node for you. And AWS has something similar with a Node AMI (in the Marketplace, at least). Or you could just copy and paste about 10 lines of code from the Node.js wiki. Or find a script online to do it.

I would honestly like to know what it is that, say, Parse has done with their VM configuration that is so much more sophisticated than what people could get by following a few wiki pages on the web, or just grabbing a good bash script. Because I doubt there are a lot of extra security or performance features or anything like that.

Configuring and running an Ubuntu server is not that hard.

But maybe it is. Maybe I am just really ignorant and there is some kind of complex configuration or tuning or firewall or daily task that Parse or some other Node.js hosting company is running on their servers that is completely beyond me or that I would never be able to find out about from the web. But I think that for 95% of people nothing complicated is really required.

I would really appreciate any hints about Linux server management to clue me in, if there are a lot of commands that I should be running on my servers that I am not aware of.

What I know is that my Ubuntu Node servers get up and running and stay up without me entering a lot of commands or doing a lot of monitoring or anything else.

If you go with AWS, they have a built-in interface for the firewall in the browser. You don't even have to use a command line. Although honestly, on Ubuntu, how hard is sudo ufw enable; sudo ufw allow [port]?



It's not hard to install Node.js, but then you need forever or whatever to keep it running, you need that to start on boot, you need Pingdom or whatever to monitor it, you need haproxy or nginx for load balancing, you need to decide what to log and how to store those files once they grow beyond a few megabytes, maybe you need SSL, you need a way to get the next version running, and you need to be able to fix it when stuff goes wrong.
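And every one of those is its own little rabbit hole. Just the nginx piece, for example, looks something like this (a rough sketch; the app name, domain, and port are all made up):

    sudo tee /etc/nginx/sites-available/myapp > /dev/null <<'EOF'
    server {
        listen 80;
        server_name example.com;

        location / {
            # proxy to the hypothetical Node app listening on port 3000
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
    EOF
    sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
    sudo service nginx reload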

All of those things are just distractions from whatever your project really is. Are you building software or are you dicking around with servers?


99% of Node applications do not need any load balancing. The other 1% are Twitter or something. And even a lot of that 1% would do it at the application layer using multiple VMs.

And I bet that most of the Node apps on Heroku that use multiple dynos only need multiple dynos because a single dyno isn't giving them realistic resources, i.e. less than one small AWS or Linode instance or whatever.

Forever will not keep your Node application running. If your Node application crashes, forever will let it crash, and then run it again, and if it crashes again, it will run it again, and it will crash again.

There used to be basic gotchas in HTTP/Express that made it really easy for a thrown exception to take down a Node web server. I am not sure whether those are still there, but anyway I do what every Node expert advises against and include an uncaughtException handler (process.on('uncaughtException', ...)) in my Node web servers. That keeps them running. If I take it out, the effect is that they go down briefly and forever restarts them. So honestly, in most cases I see no benefit to using forever except to make it a little bit harder to figure out how to launch your application.

And as for automatically restarting when a server reboots: honestly, my servers very rarely reboot. If they are going to reboot, then the VM (VPS) provider gives me warning and lets me control it. So for 99% of applications out there, an Upstart job or whatever to restart your Node application really isn't that important.
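And if you do decide you want it, it's about ten lines of Upstart on Ubuntu. Something like this (a sketch; the job name, user, and paths are placeholders, not anything a hosting company actually ships):

    sudo tee /etc/init/myapp.conf > /dev/null <<'EOF'
    description "my node app"
    start on runlevel [2345]
    stop on runlevel [016]
    respawn
    setuid ubuntu
    chdir /home/ubuntu/myapp
    exec /usr/bin/node server.js
    EOF
    sudo start myapp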

Contrary to popular belief, you do not need to install nginx in front of every Node application.

I will give you the SSL one: that could easily take a good UI developer 2 or 3 days to find the right instructions on Google.


You are correct: with enough experience it will suck a smaller amount of time and attention away from your actual project.

Unrelated to the core argument, I think much of what you just posted is bad advice - forever will restart your app in cases an uncaughtException handler won't catch, nginx or haproxy lets you add redundancy across independent processes if not machines, and restarting your site manually after a reboot is ludicrous.


2 years ago I taught myself how to code and design. Figuring out how to set up my environment alone almost killed me. There were just so many cryptic commands that had to be run that weren't exactly English. Then I found the Rails 1-click installer, and it literally did just that. Next I saw the beauty of git push heroku master. Now my page was on the web. It was amazing. To seasoned programmers it might not seem that difficult, but to those who are entering this new world, a few steps and a wiki can start to get confusing, especially if something goes wrong or the person skips a step. Overall, development environments still have bad UX, and I can't wait until someone makes it as easy as the 1-click experience I had.


OK, but what I'm saying is that the UIs for the VPS providers are really simple, and some of them actually have a way to select a VM with Node already installed or automatically run a script that installs it. It's actually easier than Heroku.

Also if you just had a good bash script from somewhere, that would be easier than Heroku too.
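Something like this is basically the whole thing on a fresh Ubuntu box (a sketch, assuming the chris-lea PPA; you could just as easily untar a binary from nodejs.org instead):

    sudo apt-get update
    sudo apt-get install -y python-software-properties
    sudo add-apt-repository -y ppa:chris-lea/node.js
    sudo apt-get update
    sudo apt-get install -y nodejs    # includes npm
    sudo npm install -g forever       # optional, if you want forever
    node --version && npm --version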

Just basically saying that the right script or server image is all you need, rather than paying 2x or 10x as much money for something you can't control.


Sure, it's not hard to spin up a Node process on a VM, but the abstractions Heroku provides are pretty nice: "git push" to deploy, a Procfile to define process types, a simple command-line tool for tailing logs and scaling processes, environment variables for configuration, etc.
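For anyone who hasn't used it, that workflow is roughly this (illustrative; the Procfile contents and app specifics are placeholders):

    echo "web: node server.js" > Procfile
    git push heroku master                  # deploy
    heroku logs --tail                      # tail logs
    heroku ps:scale web=2                   # scale processes
    heroku config:set NODE_ENV=production   # set config vars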

Plus you need to know a bunch of stuff about sysadmining to properly secure the server, stay on top of security patches, etc.


Setting up and running an Ubuntu instance somewhere isn't hard, but sysadmining is. I think people who pay for Heroku (as I have in the past) are throwing money at a problem. Even blindly suggesting "sudo ufw allow [port]" kind of proves this - what ports do they allow? I allowed 80 because web, but now I can't SSH back into my box. Also, updates? Security patches? Heaven forbid my Node app gets popular and now I have to scale.

In other words, it's like the game of Othello - a minute to learn, a lifetime to master. Some people just want to develop.


Allow ports 80, 443 and 22.
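Roughly this, and allow SSH before you enable so you don't lock yourself out:

    sudo ufw allow 22/tcp     # SSH
    sudo ufw allow 80/tcp     # HTTP
    sudo ufw allow 443/tcp    # HTTPS
    sudo ufw enable
    sudo ufw status verbose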

Do you really think that Parse is doing all of the security updates that come through Ubuntu or CentOS? Is Heroku doing all of them? I doubt it.

And if someone needs to do a security update on Ubuntu, it's a one-liner.
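Something like this, or turn on unattended-upgrades and stop thinking about it:

    sudo apt-get update && sudo apt-get upgrade -y
    # or, set and forget:
    sudo apt-get install -y unattended-upgrades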

99% of the Node apps out there will not need more than one VM or any complicated scaling. And the ones that do can scale horizontally at the application level using more VMs, or vertically by just upgrading their VM to have more RAM/vCPUs.

From Heroku's site: "New systems are deployed with the latest updates, security fixes, and Heroku configurations and existing systems are decommissioned as customers are migrated to the new instances. This process allows Heroku to keep the environment up-to-date. Since customer applications run in isolated environments, they are unaffected by these core system updates."

Which means they do not apply security updates in place. They decommission servers when they think it's necessary. How many times has Heroku actually done this? I am sure it is not with every security update.


Do you really think that Parse is doing all of the security updates that come through Ubuntu or CentOS?

Yes. Yes I do.

This whole line of argument is one I typically hear from folks who've never had to deal with a Sev 1 incident in the middle of the night. If your argument holds, sysadmins and DevOps teams are essentially pointless.

Moreover, even a developer confident handling both maintenance and incident response should value his or her time at more than zero. Given that, there is presumably a price point at which it makes sense to simply pay a platform provider or hire dedicated DevOps. There may be varying opinions as to what that price point is, but it does exist if the developer has the cash.


All those services you mentioned aren't event-driven asynchronous, which is an important part of Node's ability to reach webscale.


LOL are you serious?

What on earth are you talking about? Maybe you are kidding.

But if you're not kidding.. are you saying that those VPS providers like AWS, Digital Ocean and Linode are not 'event-driven asynchronous'? Wat?? VMs run whatever code you want, asynchronous or synchronous or whatever.

I think you must be messing with me. Especially since you put 'webscale' on the end of that.

HAHA.


If you're in the AWS ecosystem, you can consider OpsWorks -- which is much closer to a full PaaS.



