
A road that is plowed and salted enough to safely drive a car is clear enough for me to drive a bike too.

As for cold, the kids can stay under a bubble canopy if it's really bad. I have this bike, though we've never bothered with a canopy; we just dress them warm (in New England): https://www.google.com/search?tbm=isch&q=workcycles%20kr8%20...


It's precisely because I have kids that I prioritize a walkable place. Because kids can't drive!

So in car-dependent places, they have no autonomy until they're nearly adults. I think it's much more healthy for them to slowly and steadily expand their autonomy, rather than a sudden discontinuous break when they learn to drive.

The same argument applies in reverse to old people too. In places with good transit and walkability they can stay independent and active longer, with no sudden loss of freedom when they can no longer safely drive vehicles at high speed.

Walking and transit are both overwhelmingly safer than cars (most things are).

When kids are small you just push them in a stroller. Once they're too big for a stroller, they're big enough to walk everywhere that you can walk. It's really not that complicated. Suburban kids who never walk anywhere may whine about needing to walk two miles, but my kids have been doing that since before they could walk unassisted, it's perfectly normal to them.

I do also have a Dutch-style cargo bike which we use a lot around our neighborhood. It's wonderful.


> It's precisely because I have kids that I prioritize a walkable place. Because kids can't drive!

This point is tragically under-appreciated. Kids who live in car-dependent suburbs are in a very real sense alienated from the larger society. A twelve-year-old should be able to visit the library, stop by the park, grab a sandwich at a lunch counter, mail a letter, and wander back home all by themselves in an afternoon.

It's no wonder so many kids feel isolated and alone. They are!


Don't children have bicycles anymore?


That only works if things are within a reasonable distance to bike to and if the roads are set up in a way that someone can safely bike at all.

If you're talking an older grid-style suburb with corner stores and the like, it's probably something kids can do.

On the other hand, in many of the exurbs/modern suburbs, there are large distances between things and very strict segregation of residential/commercial areas. Car-centric layouts don't help either, with routes between

-----------

Even older suburban areas can have their own problems.

For a personal anecdote, I grew up in a part of NJ that has been settled since the 1700s and is rather hilly (Watchung Mountains). Most of the main roads date from that time and resemble English country lanes in terms of width/geometry more than they resemble typical American roads. Speed limits are 35-45 mph, and traffic does at least 5 over.

It is absurdly dangerous to walk or bike on any of those roads, and there aren't really any practical solutions to that. They studied adding sidewalks and it was going to cost huge sums of money and require destroying hundreds upon hundreds of mature trees (the roads are thickly lined with forest, and there isn't even an inch of shoulder).

Cutting the speed limit is impractical because they're main roads that people drive 5-20 miles on; that's a significant time hit.


I had a bike when I was a kid. Had to ride over an hour just to visit my nearest friend. It was a pretty dangerous route on roads designed for cars too, so my parents wouldn't let me do it alone until I was 14.


I don't know about your neighborhood, but in ours, there are no sidewalks and a bunch of teens drive as fast as they possibly can.


I grew up in a suburb, and am raising my kids in a suburb, and all of that is completely doable. You don’t need a car to get around a suburb...


"Suburb" is a pretty generic term. My hometown has older, inner-ring suburbs that are built more or less on a street grid, with local shops, connections to transit, and walkable corridors.

It also has far-flung, residential-only, cul-de-sac communities where literally nothing is within safe walking distance. [0]

We call both "suburbs," but they're very different places.

[0] What would somebody who lives here walk to, for example: https://goo.gl/maps/nX5ttfsYJBq


Thank you. Moreover, it feels like suburbs (I’m calling the town of Overland Park, Kansas a suburb) can’t or don’t clean sidewalks and crosswalks as quickly (if at all), or worse, use the sidewalk as a dumping ground for snow from the road. Even on a normal day, a trip to the post office and back home can easily take half an hour, probably closer to an hour if you’re walking. That being said, it feels cramped to live in a city. There are other reasons not to get a pet (allegedly my fear of commitment), but I hesitate to get a proper desktop computer because it will occupy space. It feels illogical and wasteful (and unhealthy?) of brain cycles to worry about space that much. (What if I have to move...)


The entire KC metro is offensively anti-pedestrian and anti-cyclist. It is one of the reasons I decided to leave.

Fun fact- there are more highway miles per capita in KC than any other city in the US: http://www.publicpurpose.com/hwy-tti99ratio.htm


I don't strongly disagree with your point here, but like all city statistics that aren't normalized for area and density (e.g. by using the MSA/CSA or some equivalent), this one is potentially quite misleading. Kansas City draws its borders around an awful lot of rural land that wouldn't be (indeed, isn't) counted inside the borders of most cities. This is all well within the city limits, for example:

https://goo.gl/maps/Dm76roteRbx

All of which is just to say: political borders are drawn differently in every city, so you can't meaningfully compare cities using political borders.

Does Kansas City (as a region) actually have more highways than most equivalent cities? Maybe. I don't know. That table doesn't tell us that.


I think as you gain more experience and perspective you will look back and realize that you simply weren't upper middle class yet. It's not about income (which is just a point-in-time metric that can change in an instant), it's about assets.

Programmers have a much better shot at establishing this kind of security than most people, because you should be making 2x to 3x median income for your area, which means you can live on 1x median income and invest the rest.

A lot of the people I've worked with as a programmer over the past 14 years have done effectively that, so by now they have a lot of assets. That is what upper middle class security looks like.


What nobody told this unfortunate person is that working for a tech company is not the same as being a tech worker.

I'm not defending the two-tier system inside many tech companies, just pointing out that it's real and it's maintained by market forces bigger than any one company.

If a big Seattle software company lays off a team of programmers, recruiters are swarming around them by the end of the same day.


Then there are tech workers working in non-technical organisations: a programmer at a financial services or retail company. IT is seen as just a cost centre, and there always seems to be mistrust between the IT department and the business. Most tech jobs in my country are not with tech companies but within other industries.


If your tech company sees IT as a cost center, then you're not working for a "Tech" company as described in the article. The mythical, utopian "Tech" company the writer wanted to be at is built on top of gossip about the perks of working for Facebook and Google, where free lunch (M-F) is assumed, so the question is what are the breakfast specials and dinner options? How many free Uber credits do I get per month? When is massage-day?

IT for this kind of "Tech" company is not a cost center. There's a closet or a vending machine chock full of Apple magic keyboards and magic trackpads at ~$100 a pop, plus all the USB-C adapters that you could want (because everyone new gets a touchbar mac).

The attitude is that it's not worth anybody's time to sit there dispensing $100 keyboards or mice, and that promotes the feeling that the company just... trusts everybody there. It's undoubtedly more expensive to stock Apple mice and keyboards that way than it would be to have much cheaper wired alternatives and a minimum-wage worker to gatekeep, at least in terms of IT's balance sheet. However, if an employee has to take an hour off work to replace their mouse (starting with filing a ticket with IT), the company has lost more in productivity than the cost of basically giving away $100 keyboards.


That whole distinction between "IT" and "the business" makes me nuts. No one refers to marketing, sales, finance, production, etc., as things separate from "the business", yet IT is just as fundamental to many businesses as any other function. Hearing IT people make the distinction is even worse.


:-) I will be sure to be more specific. Actually, I think it is the other way round. All the other business units, marketing, finance, communication, are fairly well understood by everyone. When it comes to IT, they don't understand why the DBA seems to earn a big salary but cannot help fix the MS Word issue. After all, it is IT. Therein lies the problem.


Yeah, either you're a programmer or you're a second-class citizen in many tech companies.


You gotta understand, it's a reflection of supply and demand. Companies will treat you exactly as well as you are valued by the marketplace, no better, no worse (in the long-term average). There's an extremely vast oversupply of non-engineering talent (MBAs, etc.), so they're treated poorly. At least with engineering there is some semblance of balance. There's still some oversupply of engineers, but it's not nearly as vast as in other areas.


Then I seriously don't understand why so many tech workers want to lower the bar for entrance into the profession and induce higher supply of tech workers. What they don't realize is that that won't make lives better for the new tech workers - it will just make it worse for existing tech workers (just like what mechanical engineers or lawyers went through).

This is probably the biggest con pulled by tech companies - play on egalitarian tendencies of tech workers to induce more supply and thus gain additional leverage over employees.


Correct. And it's been this way for a long, long time.

Back in the early 1990s, I recall browsing the "open positions" list, posted in the cafeteria at work and being shocked at the variance in starting salaries: Windows (3.0) Programming positions (C & Visual Basic) requiring no more than a high school education were offered at $70,000. Positions for PhD Chemists began at $35,000 and demanded a long list of accomplishments not limited to publications in major journals. "listed as primary inventor on one or more patents, a plus!"

So in summary, one trade could be learnt by a motivated self-directed student in the span of a summer, for no more cost than a short stack of books. The other required at least 6 years of advanced education and likely $50,000+ of debt -- a serious investment not only in time and money but also opportunity cost, as those 6 years are NOT being spent earning. Yet the former paid twice the latter. (And may still, for all I know.)

Obviously, this was a lead-up to the great wave of offshoring efforts. Executives must have noticed this variance as well.


What's funny is it's completely the opposite in other industries. If you're a tech worker for a non-tech company, you're the second-class citizen.


Absolutely. Not only that, but the experience as a tech worker is quite terrible at almost all companies and especially startups. The difference is, of course, that jobs are plenty and demand is high. If that's how they treat their developers, it should come as no surprise that everyone else is treated even worse. I can't imagine why anyone would want to work in this industry without some obsession with technology, nor do I imagine many people without such an obsession last long term. It's brutal, although probably less so outside the tech hubs like SF and Seattle, at small companies, and generally wherever people still have some human decency. Human decency in business, if it was ever there to begin with, seems to be declining heavily.


> the experience as a tech worker is quite terrible at almost all companies

But then you acknowledge the economics ("jobs are plenty and demand is high") and also say everyone else is treated worse - so "terrible" relative to what then?

Having been on the other side (hourly clerk and sales) I would say that tech workers are treated really really well.


Having plenty of jobs and being in high demand have nothing to do with being treated well on the job.


It's 100% open source, there's a very nice Docker-based install. You can host it yourself on AWS or Digital Ocean for very little money.

Make a free Mailgun account and put your credentials into the Discourse installer and delivery is free up to 10,000 messages.


Yours is a pretty weird description of the current real estate market here in Somerville.

It's a pretty even mix of owners and renters (about 40%/60% last I checked). Because of all the factors in the article, huge amounts of money are flowing in and they're being spent on upgrading the housing stock. Far from "shitty and falling apart", you see gleaming post-restoration projects everywhere, at astronomical prices.

Maybe I'm biased toward the west end of the city because I live there and the gentrification is furthest along here. But "it's extremely expensive" and "no one likes it" are really not compatible statements. The market remains extremely hot, and it's being fed by people with money who want to buy and live here, not absentee landlords. There are still some of those, but it's a shrinking group because the overwhelming financial incentive is to gut-renovate and sell as condos.

As for increasing the density, I'm all for it, but it's worth pointing out Somerville is already the densest city in New England. Denser than Boston proper, despite all Boston's own high rises. You don't find a denser city until you get to NYC. People underestimate how effectively you can pack a city even at three-to-five stories tall, if you actually stick to that height _everywhere_. We were lucky to be built in the streetcar era, and then ignored through the incredibly-dumb architectural trends of the second half of the 20th century. If the rest of Greater Boston was merely as dense as Somerville there would be no housing shortage.

My favorite strategy for getting higher density here is killing all the parking lots, along with simply getting our transit system back to the scope and quality it had in 1930, with streetcar lines all over the place.


>"it's extremely expensive" and "no one likes it" are really not compatible statements

I don't know much about the situation, but these are easily compatible statements. People might be trying to live where their job is or near the city they like or grew up in. Quality of the building you are living in is usually not the #1 concern.


concur, most of the 2-3 family rentals are shitty and very expensive.


Thanks. I have been dreading the day my pre-touchbar MacBook Pro gets too old, and I will check out these tips when it does.

One bit of feedback:

> RAM usage: Useless. In modern OS it's always nearly 100%. That's natural.

A good RAM usage indicator is still really useful, and along with CPU and network usage it's one of the first things I install on any machine.

The issue is that your indicator needs to show you not just "how much is used vs free", which is indeed useless. It needs to show you the breakdown of wired, active, and inactive memory. This makes it clear when some app is starting to blow up and consume a huge amount.

I use https://member.ipmu.jp/yuji.tachikawa/MenuMetersElCapitan/
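
For what it's worth, here's a rough sketch, not taken from the linked tool, of how you can pull that free/active/inactive/wired breakdown programmatically, assuming macOS's `vm_stat` command (names and thresholds are just illustrative):

    // Rough illustration only: shells out to macOS `vm_stat` and prints the
    // breakdown instead of a single "used vs free" number.
    const { execSync } = require('child_process');

    const out = execSync('vm_stat', { encoding: 'utf8' });

    // First line reports the page size, e.g. "(page size of 4096 bytes)".
    const pageSize = Number((out.match(/page size of (\d+) bytes/) || [])[1]) || 4096;

    // Subsequent lines look like "Pages wired down:          123456."
    const stats = {};
    for (const [, name, pages] of out.matchAll(/Pages ([a-z ]+):\s+(\d+)/g)) {
      stats[name.trim()] = (Number(pages) * pageSize) / 1024 ** 3; // GiB
    }

    for (const key of ['free', 'active', 'inactive', 'wired down']) {
      console.log(key.padEnd(12), (stats[key] || 0).toFixed(2), 'GiB');
    }

Menu-bar tools like MenuMeters presumably read the same counters directly via the Mach statistics APIs rather than shelling out, but the breakdown they display is the same.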


I use gkrellm, and set the three-memory-krells option. The "usage" is memory actually used for application memory. The other krells mark how much cache and buffers are allocated. htop does something similar: green is application usage, blue is buffer usage, and yellow/orange is filesystem cache (at least as far as I know; I might be wrong).

edit: on Linux. I would assume Mac and even Windows have enough granularity to distinguish.


> This makes it clear when some app is starting to blow up and consume a huge amount.

And how often does that actually happen anymore, really? Yes, Chrome and friends are memory hogs, but a constant RAM indicator? If a MacBook (or any other device for that matter) starts swapping, you'll know right away.


Multiple times a week. And yes, Chrome and the Chrome-devilspawn (Slack) are frequently to blame, but are not the only bad actors.

I use iStat Menus for that stuff. Working without various monitors seems to me a lot like doing without speedometers and gas gauges.


> And how often does that actually happen anymore, really

Pretty often, especially if you open an app that decides to allocate a lot of memory very quickly (Xcode, VirtualBox) or when you switch to an app that causes a lot of memory to be swapped in.


But if I just started up one of those apps, I'm wanting (and expecting) it to allocate a lot of memory because that's what those apps do. So I don't really see what actionable information an always-visible meter does for me. But I don’t begrudge those who like them!


> And how often does that actually happen anymore, really?

Every few weeks to a few times a week depending what I'm working on/with.

> If a MacBook (or any other device for that matter) starts swapping, you'll know right away.

1. not necessarily, SSDs make swapping way less problematic than it used to be

2. ideally you want to nip the problematic process before it locks up the entire machine


Sure, but I think many programmers underestimate the investment value of that time.

Diving into codebases is a skill that gets stronger with use, such that you can eventually do it radically faster. That makes a much larger set of problems economically practical to fix.


The problem with this kind of deep dive optimization is the cost of maintaining it in a long-lived project as the underlying javascript engines keep changing. What was optimal for one version of V8 can actually be detrimental in another version, or in another browser.

It's precisely the unpredictability of JIT-driven optimizations that makes WASM so appealing. You can do all your optimizing once at build time and get consistent performance every time it runs.

It's not that plain Javascript can't be as fast -- it's that plain Javascript has high variance, and maintaining engine-internals-aware optimization in a big team with a long-lived app is impractical.


It seems to me there is no reason we shouldn't be able to create an "optimizing babel" that performs optimizations based on the target JS engine and version as a build step. I don't think we need to go to a completely different source language and compile to WASM in order to get permission to create such an optimization tool.

Such a tool would give you the benefits you're praising about the WASM compilation workflow: Separately maintained, engine-specific optimizations that can be applied at build-time and don't mess up the maintainability of your source code.
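
To make the idea concrete, here's a minimal sketch of what one such build-time transform could look like as a Babel plugin. It pads direct calls to locally declared functions up to their declared arity, one of the engine-specific rewrites the linked article performs by hand; the plugin name and heuristics are hypothetical, not an existing package:

    // Hypothetical Babel plugin sketch, not an existing package.
    module.exports = function padArgumentsPlugin({ types: t }) {
      return {
        name: 'hypothetical-pad-call-arguments',
        visitor: {
          CallExpression(path) {
            const callee = path.node.callee;
            if (!t.isIdentifier(callee)) return;

            // Only rewrite calls whose callee is a plain function declared
            // in the same file, so the arity is statically known.
            const binding = path.scope.getBinding(callee.name);
            if (!binding || !binding.path.isFunctionDeclaration()) return;

            const params = binding.path.node.params;
            if (!params.every((p) => t.isIdentifier(p))) return;          // no defaults/rest
            if (path.node.arguments.some((a) => t.isSpreadElement(a))) return;

            // Pad the call so its argument count matches the declared arity.
            while (path.node.arguments.length < params.length) {
              path.node.arguments.push(t.identifier('undefined'));
            }
          },
        },
      };
    };

Whether rewrites like this are actually worth maintaining outside the engines is, of course, the open question the rest of this thread debates.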


But what if I want to target all the engines, including future ones? The compiler could compile separate versions for each engine, I guess, and you could choose which one to load at runtime based on the UserAgent string ... but even then everyone would have to recompile their websites every time a new browser version comes out.

The advantage of WebAssembly is supposed to be (I think) that it's simpler and will give more consistency between browsers, so browser behaviour will be less surprising, and you can thus get away with compiling a single version.

And if you take this approach of compiling JavaScript to a simple and consistent subset of JavaScript that can be optimized similarly in all engines, you'd end up more or less targeting asm.js, the predecessor to WebAssembly. :)
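
For concreteness, the UserAgent-dispatch idea would look roughly like this (bundle names and detection rules are invented for illustration, and this is exactly the per-engine bookkeeping that goes stale with every browser release):

    // Illustrative only: pick a per-engine build at runtime from the UA string.
    function pickBundle(ua) {
      if (/Chrome\/\d+/.test(ua)) return 'app.v8.js';            // Chrome, Edge, ...
      if (/Firefox\/\d+/.test(ua)) return 'app.spidermonkey.js';
      if (/Safari\//.test(ua) && !/Chrome\//.test(ua)) return 'app.jsc.js';
      return 'app.generic.js';                                   // safe fallback
    }

    const script = document.createElement('script');
    script.src = '/assets/' + pickBundle(navigator.userAgent);
    document.head.appendChild(script);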


JS engines already do insane levels of optimization, and they do it while watching the code execute so they understand the code better than any preprocessing tool can hope to.

What could a tool like you're describing do that the engines don't do themselves?


I assume that JITs don't do very expensive optimizations because they have to do a trade off between execution speed and compilation time. JITs are also fairly blind on the first execution of a piece of code. Static optimizations are not made obsolete by the existence of JITs.


Expensive optimizations like what? Can you give a before/after example of something such a tool might do?

(Note: I'm glossing over the case where one is using bleeding-edge syntax that a JS engine doesn't yet know how to optimize. In that case preprocessing out the new syntax is of course very useful, but I don't think this is the kind of optimization the GP comment was talking about.)


JIT engines usually don't do static analysis. I'm not sure if that is because the cost for that is that much higher, but a hint towards why could be that the engine simply does not know which parts of the (potentially huge amounts of) code that was loaded is actually going to be needed during execution, so analysing all of it is likely to bring more harm than gain.

As an example for something that static analysis could have caught, take the example from the article about the "Argument Adaptation"[0]. Here the author uses profiling to learn that by matching the exact argument count for calling a function, instead of relying on the JS engine to "fix that", the performance can be improved by 14% for this particular piece of code. Static analysis could have easily caught and fixed that, essentially performing a small code refactoring automatically just like the author here did manually.

[0] http://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to-...


I replied in main to your other comment, but regarding the "Argument Adaptation" issue it's maybe worth noting that I'm 90% sure V8 would have optimized this automatically if not for the subsequent issue (with monomorphism). I'm dubious that the former issue could be statically fixed as easily as you suggest, but either way I think it should be considered a symptom of the latter issue.


Well, for one, the compiler could rearrange key assignment of similarly shaped objects, so that they actually are similarly shaped objects to the JIT .... but that seems like really dangerous territory
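
A hypothetical example of the kind of rewrite being described (the names are made up):

    // Both objects hold the same data, but the keys are created in a
    // different order, so the engine typically gives them different hidden
    // classes and call sites that see both stay polymorphic.
    const a = { x: 1, y: 2 };
    const b = { y: 4, x: 3 };

    // The hypothetical compiler pass would normalize the key order so the
    // objects share one shape and `length` stays monomorphic.
    // Note: key order is observable in JS (Object.keys, for...in), which is
    // why this isn't a semantics-preserving transform.
    const bNormalized = { x: 3, y: 4 };

    function length(p) {
      return Math.hypot(p.x, p.y);
    }
    console.log(length(a), length(bNormalized));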


Something like that would probably speed up more code than it breaks, but it would be a breaking change, so probably not what could be strictly considered optimization.


The author has proven exactly this point: By going through a number of engine-specific / implementation-specific code transformations they have achieved a significant performance boost for hot code, which, for whatever reason, the JS engines themselves failed to attain with the optimization repertoire they already have.

Also, remember that JS engines are not all-powerful and all-knowing in their optimization techniques, it's still just a limited number of individually imperfect humans working on them, just like the rest of us. So naturally there are going to be opportunities for other humans to utilize and help complete the picture and increase the overall value and effectiveness.

Maybe in this case there is room specifically for a JS-syntax-level tool that has more freedom in terms of execution time and related concerns: it can execute at build time, potentially pull in or bundle much more information about optimizations (imagine using a potentially expensive ML model to search for probable performance wins, or actually running micro- or macro-benchmarks during the build step), and be maintained outside of the JS engine implementations, with the additional benefits of a potentially broader contributor base and a faster, more focused release cycle. Or this may not be a good idea after all; I cannot tell. All I know is that if we actually do see that there are and will continue to be optimization gaps in the JS engines themselves, then there is a way to fill them, and likely without having to switch to a basically entirely different (frontend) technology stack.


> By going through a number of engine-specific / implementation-specific code transformations they have achieved a significant performance boost for hot code, which, for whatever reason, the JS engines themselves failed to attain with the optimization repertoire they already have.

I think you're mischaracterizing what happened a little. Most of the author's improvements weren't engine- (or even JS-) specific, they were algorithmic improvements. But for the first two that were engine-specific, it's not like he applied a rote transformation that always speeds up scripts when you apply it. Rather, the author (himself a former V8 engineer) saw from the profile results that certain kinds of engine optimizations weren't being done, and rewrote the code in such a way that he knew those specific optimizations would take place as intended. Sure, a deep ML preprocessor might do the same - but only after trying 80K other things that had no effect, and on code that wasn't even hot, no?

More to the point though, it strikes me that you say JS engines aren't all-powerful, but in the same breath you seem to assume that just because V8 didn't optimize the code in question that it can't. It seems very likely to me that any case you can find where a preprocessor improves performance is a case where there's a fixable engine optimization bug. Sure, in principle one could build a preprocessor for such cases, but it seems more useful to just report the engine bugs.


You're basically describing asm.js -- a subset of javascript that is known to be easy for engines to turn directly into native code and execute, that you can use as a compilation target.

The difference between asm.js and WASM is mostly just that WASM is more compact and easier to parse, while asm.js is a more gradually compatible upgrade story.
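
For anyone who hasn't seen it, a hand-written toy asm.js module looks roughly like this (real asm.js is normally emitted by a compiler such as Emscripten, and this toy may not pass the strict validator, but it runs as ordinary JavaScript either way):

    // Toy asm.js-style module: type annotations via |0 coercions, the
    // standard (stdlib, foreign, heap) signature, and a plain object export.
    function AsmAdd(stdlib, foreign, heap) {
      'use asm';
      function add(a, b) {
        a = a | 0;            // declare a as int
        b = b | 0;            // declare b as int
        return (a + b) | 0;   // result is int
      }
      return { add: add };
    }

    const mod = AsmAdd(globalThis, {}, new ArrayBuffer(0x10000));
    console.log(mod.add(2, 3)); // 5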


WASM will turn into an optimizing JIT. Or it will run slow or deliver massive binaries if it's the equivalent of statically linked code. I don't really see a way around that.

Once it tries to start inlining on the client side, that will open the floodgates to other optimizations.


It could easily allow a different binary to be downloaded per client (e.g. SSE3-capable, SSE4-capable, AVX-capable, AVX-512-capable).

The compiler would then spit out eighteen different versions and the right one would be downloaded by the user.


Does this suggest that all you've got to do is compile the JS to WASM to have it be just as appealing?


Whether or not that's what's being suggested, it's untrue. The benefits in this case come from using a much lower-level and more performant language. JavaScript compiled to WASM will always be at a disadvantage to a natively compiled JavaScript VM, given similar optimization time, since the VM can bypass some security safeguards of WASM that it can ensure aren't needed.


If you are running your own webserver, even a simple static site requires maintenance. You need to keep the server and OS patched. So adding a letsencrypt cron job is not any worse than configuring something like Debian's unattended-upgrades.

But I don't think most site owners should be doing even that much. They should just pay for static hosting, which is cheap and ensures somebody else will keep the server, os, and cert all safe.


I added https to my static site last year and it has been a huge waste of time.

ubuntu + nginx worked fine for years without much maintenance, but I've spent so much time reconfiguring things when something breaks (and it is really clear when a renewal fails... thanks, HSTS).

Things that used to be simple, like setting up a subdomain (need to get a new cert and reconfigure the cron job now) or pointing at a websocket (can't point directly at node since that's not secure; it needs to pass through nginx now), consistently take hours to do now.

I mostly do data analysis and front end work; mucking around in nginx config files is something I would have been happy never experiencing. It sucks that it's harder to host your own website now.

https://pastebin.com/N2sbvULA


I have nginx fronting around 15 different (very low traffic) websites (most static, a few python), all of which have Let's Encrypt certs. The required additions to the nginx conf were minimal and easy. Adding a new subdomain is trivial. Fetching the initial certificate from Let's Encrypt is a short, easy command line. And "sudo certbot renew; sudo /etc/init.d/nginx reload" in a cron job keeps the certs up to date (the "renew" command is smart enough to go through the list of certs you have and renew them all).

It's really hard to imagine it getting much easier.


Try `certbot renew --post-hook "/etc/init.d/nginx reload"`, which will only reload nginx if at least one certificate changed :).


You don't actually need to keep web servers serving static content patched. Simply close all other ports, run a minimal web server, and it's a tiny attack surface. Some of these have made it 20+ years with zero maintenance.


I use cloudflare for my static website so that https is seen by the browser even though I am serving http.

Advantages are that it is free and zero maintenance; however, nation-states or network providers can intercept between the Cloudflare POP and my server. I'm ok with that for my situation.


As a website visitor this has always bothered me. HTTPS used to mean that I had an encrypted path between my browser and the server actually serving the webpage. With Cloudflare allowing this weird hybrid mode, I can never actually know if the connection is secured all the way end to end.


Cloudflare didn’t invent this or make it normal. It’s always been common to terminate HTTPS in front of your “actual” server and then re-encrypt to the “actual” server, or (very commonly) ... not.

Cloudflare may have made it more common for the most basic kind of site (with their easy setup and free tier) but at the same time most of those sites probably didn’t use https anyway.

The reasons this has been done are performance (specialized hardware, separation of concerns), load balancers/firewalls needing to decrypt in order to route or enforce policy (that doesn’t need to imply termination, but it often goes hand-in-hand), and protecting keys from your app server (think of it as like an HSM: if your app server gets compromised, you probably don’t want the TLS private key to be leaked. Again, you could re-encrypt with a different key, but often this hasn’t been done).

The threats from last-mile network fuckery (e.g. a consumer ISP) are quite different from those on the backend. Google has to worry about nation states messing with their networks, so they’ve had to reengineer end-to-end encryption within their network. As an end user you just sort of need to accept that this isn’t within your ability to control or know.


Reverse proxies were usually not routed through the Internet, only a local network.


Sure, but the GP made a much stronger claim: that they previously knew it was the “actual” server terminating HTTPS.

Even still, the difference between this and e2e encryption isn’t something an end user is really equipped to evaluate IMO. The threat model is still practically different from having no encryption at all.

Cloudflare also supports re-encryption over the net, which is useful if your hosting provider supports HTTPS but not via a custom domain (e.g. Google Cloud's GCS, their S3 equivalent).


No, I didn't make that stronger claim; if that's what it sounded like, I apologize for the poor wording. I was definitely assuming "terminate TLS at load balancer and proxy in the clear over internal, private network" as a common, long-established practice that I have no problem with.

With services like Cloudflare, you can terminate TLS at CF and then proxy over the public internet to the server that actually serves the page, which I think defeats a lot of the purpose of TLS, and I can never know ahead of time when I request a page over HTTPS whether this is in fact what's happening.

