
They shot a live torpedo at FDR. Not the usual.


Once upon a time every military asset operated with live rounds. There were no "training" rounds in wartime. What happened in the OP was that someone missed a step in an otherwise very normal drill. Want to really scare yourself? Google around for "broken arrows", live nuclear weapons that went missing. Almost all of them were live rounds being carried during training; something went wrong and that live nuke ended up in a field or at the bottom of a lake.


Given it happened in November 1943, it may have been one of the first truly live Mk 14 torpedoes ;)


Really smart from Mozilla; they're leveraging trust in their brand with a product for which trust is the most important feature. Making a VPN is a non-trivial technology project, but how to do it well is pretty well understood.


George Pullman built a town outside of Chicago for the workers who built his railcars. It didn't end well:

"the town and its design were... a paternalistic system that took away men's rights as citizens, including the right to control their own domestic environment"

http://www.encyclopedia.chicagohistory.org/pages/1030.html

Happy that Google is making this investment; it's worth a shot even if it is far from the ideal way to build housing. Hopefully the NIMBYism that defines this area won't kill it.


It ended perfectly well for the person who built it. The object wasn't to make the workers happy, the object was to make Pullman money.


The queues at stores have always been the worst part of the experience. You put stuff in a bag, which is effectively a queue, then you wait in a line (a human queue), then you de-queue your cart onto a belt, which is another queue, it gets scanned item by item, and then it's placed right back into a bag much like the one it started in. Good job Amazon for finally working to eliminate the queues.


Many of the staples of the Western diet simply haven't been around for longer than 15,000 years (agriculture), many in fact less than 100 years (heavily processed foods). It's not about ancestors all eating the same thing; it's about the fact we've introduced tons of new things, some we can handle and some we can't (and it probably varies per person).


This is not true at all. Apps on Heroku are built on open source software (Rails, Postgres, etc). You can always decide to host it all yourself without too many issues.


How is this not true? It absolutely IS vendor lock-in, just in a slightly less sinister form.

If you add 5 services & then want to switch hosts, it's not just another git push. You have to re-configure each of the services. If you'd done this yourself all along & made machine images it would have been less convenient at the time, but probably cheaper & a good learning experience. There are many other hosting platforms that offer Linux boxes, so you could move your whole app/software layer to another of these companies without much trouble.

I'm not saying Heroku is evil or anything. Yes, they provide a good platform. But I think any web company with a significant customer base would benefit more from the cost savings & freedom of a purer platform than the conveniences of Heroku.

Plus, there are so many configuration issues with their services. I have auto-scaling set up with Adept and I still see these long request-queue buildups now & then. I get the feeling I would not have the same issues with an AWS stack, where I have CPU usage monitors that are very transparent & all the networking is trivial.

I still use Heroku as an app server & don't hate it enough to up & move (though a large factor in this is that I'm not the one footing the bill, the client is...) but anything I can easily get onto AWS is a no-brainer. Database is one of those things -- a couple clicks to scale up/down every year is all that's really required.

CONCLUSION (cuz I rambled too much): I think Heroku offers scaling/convenience but AWS is just so rock solid & cheap that you can probably just buy larger instances than you need (to compensate for scaling) and have much better performance at the same price. Then you just need to learn how to install your tools & take a machine image as backup. Plus there's a lot of value in learning how to work with machine images that goes way beyond hosting a web app.


> Plus there's a lot of value in learning how to work with machine images that goes way beyond hosting a web app.

Every minute I spend doing that is a minute I spend not doing things I enjoy. YMMV though.


Yea I feel you on that but on Heroku I feel like "Every service I add is a per dyno credit card charge that is about to be multiplied by the number of hours in a month". :(

I prefer the Amazon model because you can stick these all on one server & as long as CPU isn't pegged at 100% you're good. I agree tho I'd rather not spend the time figuring it out. For now I only use it for RDS & for services not available on Heroku (some Adobe streaming server stuff).


This is almost the complete opposite of vendor lock-in.

Vendor lock-in is when you write your software on Oracle or MSSQL and moving away requires you to rewrite your whole thing. It's not losing the convenience of "git push" for deploys and having to spend time moving off their hosted versions of open source software and configuring and hosting it yourself instead.

Accusing Heroku of practising vendor lock-in is honestly absurd.


IMHO vendor lock-in is anything that makes part of your work process specific to a given vendor. The harder it is to move from one vendor to another, the deeper you are "locked in".

And as a point of interest, I think these days it's probably much easier to convert your database than it is to switch hosting platforms (well... in some cases).


We started off on Heroku because we wanted something dead simple. The amount of time heroku saved us was an incredible value when we were starting out. I don't think there is any vendor lock in. We just did not switch to AWS earlier because our needs were met really well by Heroku and even Amazon Elastic Beanstalk (for ruby) did not come near the ease of a Heroku deployment. Once Opsworks came around, we invested in deployment scripts and switched because Opsworks gives us same ease of use and greater control of our stack.

What do you think stops you from switching? We had no issues at all - definitely none from Heroku.

We still use PostgreSQL from Heroku because it is still a solid service and comes with niceties like dataclips. I should confess that I have not explored the Amazon PostgreSQL offering, but I am happy with Heroku for databases at the moment.


Heroku is great for clients though. When I was a freelancer, I built these apps for other companies that assumed that I would do the admin & hosting. Heroku is a great way to just "tack on a fee" for doing the hosting. If they want a cheaper option, they can always do it themselves. Most clients don't care about the savings on hosting.


Check out WellnessFX... affordable, nice design, pretty easy (although they use a big needle), and they track results over time.


San Francisco, CA; PlanGrid (YC W12); Looking for engineers in SF

We’re a small team of construction engineers, software engineers, and ex-rocket scientists, building intuitive, beautiful tablet apps for construction. We love disrupting an industry that makes up 11% of Global GDP because no one ever cared to do so (for comparison, defense is only 2.5% of global GDP). Our users are project engineers, architects, superintendents, and electricians, and they love our app (because it helps them build real things more efficiently). We're looking for front-end engineers with a passion for making beautiful intuitive products. Our front-end tech is iOS, Android, and backbone.js. You will be our twelfth team member and sixth member of engineering.

We've been around for a year and we're growing fast. Unlike a lot of early stage startups, we measure our growth in revenue, not users, and it's been exponential since the day we launched.

Competitive salary, equity, company engineering retreats to Mexico (this year's location: http://i.imgur.com/kEiI2ej.jpg), and an office next to a beer garden (Hayes Valley, SF).

Send your info to: jobs@plangrid.com


"deliver by" would be even better


If you have 2 unicorn workers and you happen to get 3 slow requests routed to the same dyno, you are still screwed, right? Seems to me like they will still queue on that dyno.


That's exactly what happened to us - switching to unicorn bought us a little time and a bit of performance, but we hit the exact same problems again after a couple more weeks of growth.


Yeah, the only real question is whether or not it's true that they no longer do intelligent routing. If that is the case, then regardless of anything else the problem exists once you pass a certain scale/request cost. It won't matter if that one dyno can handle hundreds of requests at once, it will still queue stupidly.


This is true - unicorn masks the symptoms for a period of time but does not solve the underlying problem in the way a global request queue would.

Also, if the unicorn processes are doing something CPU-intensive (vs waiting on a 3rd party service or I/O, etc.) then they won't serve 3 requests simultaneously as fast as three separate single-process dynos would.


One of the hidden costs of Unicorn is spin-up time. Unicorn takes a long time to start, then fork. We would get a ton of request timeouts during this period. Switching back to Thin, we never got timeouts during deploys - even under very heavy load.
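If anyone else hits this: preload_app plus the usual fork hooks is supposed to shrink that window, since the master loads the app once and the workers fork from it already warmed up. A minimal sketch of that kind of unicorn.rb -- the worker count and timeout here are illustrative, not something we actually ran:

    # config/unicorn.rb -- sketch only
    worker_processes 3      # one process per concurrent request on the dyno
    timeout 30              # kill workers stuck longer than this

    # Load the app once in the master so workers fork already warmed up,
    # shortening the window during deploys where requests can time out.
    preload_app true

    before_fork do |server, worker|
      # With preload_app, shared DB connections must be closed before forking...
      defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
    end

    after_fork do |server, worker|
      # ...and re-opened in each worker.
      defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
    end

It doesn't help if the app itself is slow to boot, which sounds like part of the problem here.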


Maybe this is a stupid question, but with unicorn it forks the request and can process multiple requests at the same time. Previously it seems that only one request could be handled by the dyno so requests had to queue on the dynamic routing layer but with multiple request support with unicorn or whatever, wouldn't it be more efficient to dump all the requests to dynos? Followup question, also how would intelligent routing work if it just previously checked to see if which dyno had no requests? That seems like an easy thing to do, now you would have to check CPU/IO whatever and route based on load. Not specifically targeted at you but to everyone reading the thread.


> Previously it seems that only one request could be handled by the dyno so requests had to queue on the dynamic routing layer but with multiple request support with unicorn or whatever, wouldn't it be more efficient to dump all the requests to dynos?

It would be if all requests were equal. If all your requests always take 100ms, spreading them equally would work fine.

But consider if one of them takes longer. Doesn't have to be much, but the effect will be much more severe if you e.g. have a request that grinds the disk for a few seconds.

Even if each dyno can handle more than one request, those requests share resources, so if one of them slows down due to a long-running request, response times for the other requests are likely to increase. As response times increase, that dyno's queue is likely to grow further, and it becomes more likely to pile up additional long-running requests.

> Followup question, also how would intelligent routing work if it just previously checked to see if which dyno had no requests? That seems like an easy thing to do, now you would have to check CPU/IO whatever and route based on load. Not specifically targeted at you but to everyone reading the thread.

There is no perfect answer. Just routing by least connections is one option (a toy sketch below). It will hurt some queries that end up piled up on servers processing a heavy request in high-load situations, but pretty soon any heavily loaded server will have enough open connections most of the time that new requests go to more lightly loaded servers.

Adding "buckets" of servers for different types of requests is one option to improve it further, if you can easily tell by URL which requests will be slow.


That gets pretty unlikely, especially if you have many dynos and a low frequency of slow requests. The main reason unicorn can drastically reduce queue times here is that it does not use random routing internally.


How does it decide to queue at the dyno level anyway? Does it check for connection refusal at the TCP level?


The connection is accepted, and a single-threaded web server will do the queuing.


Oh, so the server process hosting Rails is itself queueing? Is that what they refer to as "dyno queueing"? I thought perhaps there was another server between the router and your app's server process.

