Hacker News: sourc3's comments

This is so exciting to see, especially for older folk like me.

Almost 20 years ago, one of my professors told us before graduation that hot tech is mostly about the idea pendulum swinging back and forth. I immediately chalked it up to 65-and-above white wise men snobbery.

However, this is exactly that. We started with static pages, then came Ajax and ASP.NET and the open-source variants, then we went full SPA, and now we are moving back to the server side because things are too complicated.

Obviously tech is different, better, more efficient, but the overall idea seems to be the same.


I’m glad this technique is making a comeback. The last 10 years of JavaScript on the client have been an utter shit show that left me wondering wtf people were thinking.


You have the benefit of hindsight at this time. You can draw a parallel to the history of flight and all the crazy contraptions that people attempted. Great technology can emerge from the combination of numerous shit shows. The whole is greater than the sum of the parts.


> You have the benefit of hindsight at this time.

People have been pointing out it's a shit show with no end in sight for the entire duration of the phase: pointing out the performance impact and cost to end users, how diabolical it is for those with higher latency or poorer network connectivity (i.e. most of the world), and so on.

Same thing as always happens with these pendulum swings: newer engineers come in convinced everyone before them was an idiot, build their new thing and hype it up such that other newer engineers are sold on it, while the "old guard" effectively says "please listen to me, there are good reasons why we don't do it this way" and gets ignored. Worse, they get told they're wrong, only to be proven right all along.

I'm not denying there are obstructionist greybeard types that just refuse to acknowledge merits in new approaches, but any and all critique is written off as being cut from the same cloth.

It's perfectly possible to iterate on new ideas and approaches while not throwing away what we've spent decades learning ('Those who do not learn history are doomed to repeat it'), but tech just seems especially determined not to grow up.


I guess I've become a greybeard. I've done the whole journey from CGI everything to a bit of JS to SPA. As much as I'd really like to be nostalgic about the good old days, there are reasons everything got pushed into the client. One of those reasons is maintaining state.

"HTML over the wire" isn't really a return to the good ol' days. It's still the client maintaining state and using tons of JS to move data back and forth without page reloads. It just changes the nature of half the data and moves the burden of templating back to the server.
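A concrete illustration: Hotwire's Turbo Streams wire format, where the server responds with a ready-made HTML fragment plus an instruction for where to splice it into the page, instead of JSON for client code to template (a representative sketch; the target and ids are invented):

```html
<!-- Server response body: rendered HTML plus a splice instruction -->
<turbo-stream action="append" target="messages">
  <template>
    <div id="message_42">This markup was rendered on the server.</div>
  </template>
</turbo-stream>
```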

It is amusing that they make a claim that reads a lot like "eliminate 80% of your Javascript and replace it with Stimulus". What is Stimulus? A Javascript framework.


They mean JavaScript that you write.


I'm not a front end engineer, but it always seemed crazy to me. I remember testing out the Google Web Toolkit when it came out more than a decade ago, and the craziest thing about it to me wasn't the Java --> JavaScript compilation, it was that the server just dumped an empty page and filled everything in with JavaScript on the client.

Then, remember the awful awful #! URLs? Atrocious, and seemed like obviously a terrible idea from the start, yet they spread, and have mostly died, thankfully. But even with the lessons from these bad tech designs, new frameworks come out that repeat mistakes, yet get incredible hype.


Hashbang URLs are gone because of the PushState API, not because people have given up on the idea.


Hashbangs preclude the server from even seeing the state the client wants on the first request, necessitating several hops. PushState, though it allows URL transitions without a full reload, is an entirely different and better idea.


Around the time that GWT came out, offshoring was a big thing, and most of the contractors only knew Java. Also, Java was the trusted language and JavaScript was not.


The only big GWT project I've ever been on was a governmental project that I won't go into (because it's Danish and I would have to describe a bunch of stuff that everyone in Denmark knows and nobody outside would probably care about), but the company providing it was porting their Java version to JavaScript and had a significantly large codebase to leverage.


AWS used to rely on JWT for their consoles (haven't for a few years now, most folks migrated away some 5 years ago)

It's why they used to be horrendously bloated with large javascript bundles that took so long to process on the client side.

Roughly speaking the idea was "We don't have any Javascript developers, but we do have Java developers. JWT allows us to bridge that divide". Neat in theory, and an understandable decision, but diabolical in practice!


I believe you mean “GWT” and not “JWT”.


Yup you're right, that was quite a brainfart :)


Very true. I think that mostly, web dev mainstream has taken a rational path. It’s with the benefit of hindsight as you say, or the yoke of unusual requirements, that people now say ”we did it all wrong”.


except we could fly before ;D


> The last 10 years of JavaScript on the client have been an utter shit show

A fair number of people would disagree. I'd say it's advanced a lot, considering the limited role of JavaScript on the client in the past.


Same. And I'm still amazed that people loved JS so much they put it on the SERVER too! And now node/npm is everywhere.


I think it's less about loving JS so much, and more about not having any options on the client. If there were any better client options they might have won!


Server side javascript was one of its original intended uses.

Here's the Netscape Enterprise Server manual from 1998[1]. Sorry I couldn't find an earlier version.

1. https://docs.oracle.com/cd/E19957-01/816-6410-10/816-6410-10...


I think a huge amount of that success is because JS is cheap to hire, i.e. Node is fast enough so why bother using a good language (cynically).

Anecdotally, most people I see who really, really love JavaScript are kids who haven't really done much else.


Yikes, what a condescending comment. I’d argue why this comment is beyond misguided but I’d be wasting my time.


Node has a better runtime than Ruby or Python, so why not? And you also have TS.


The node runtime is actually pretty quick (It's roughly as fast as an optimizing compiler from 10-15 years ago, give or take, which is fairly impressive), but even TS is still makeup on a wart - you can't escape JS's more bizarre semantics that easily.


Couldn’t have come soon enough. I’m exhausted.


I’m glad I’m not the only one who thought this.


Yeah people shouldn't make applications in programming languages. If your application can't be made with html/css then it's bloatware. All this java, .net and C are totally unnecessary.


I don’t think the problem is with programming languages, but with JavaScript specifically, since it was never designed to be stretched this far. TypeScript is an improvement, but if you could write C or whatever on the client side and run it as easily as JS I think more people would go that route.


C/C++ and Rust actually run just fine client-side nowadays. However, it's more cumbersome to build UI in those languages, so they're mostly used to port existing libs or for performance.


Literally nobody is saying this


They are, look at the context of what they're saying. JS is simply a programming language in a VM like plenty of others; there's nothing inherently bad about it. But in every thread here there's uneducated hate for it, completely misunderstanding that html/css static pages don't solve the problems a programming language does.

I'd like to see these people make applications in pure XML. No programming.


No, the argument has never been "replace all JS with static HTML/CSS". The argument is "JavaScript frontends are becoming unnecessarily bloated, slow, and complicated, and we can do better". Solutions like the one Basecamp is proposing with Hotwire include pushing as much rendering logic as possible to the server, where you're using a language like Ruby for logic. Nobody thinks you can just remove all logic from a web application unless it's literally just static content.

And even with Hotwire, you're not getting rid of JavaScript entirely. You can write it with Stimulus. The idea is just that frontend web development has become a mess, and it's possible to simplify things.

> there's nothing inherently bad about it

Disagree.


“Bloated.” This is actually the opposite. Go into a language like Python and install numpy (30MB), and then come back to complain to me about a 2MB JS bundle.

This argument is so bogus if you look at alternative language dependency sizes.


While there are issues on the Python side, I think it is quite unfair to use numpy as an example.

My reasoning is that the numpy project is meant for scientific and prototyping purposes, but many times people use it as a shortcut and pull the whole thing into their project.

That being said, the quality of these packages does vary depending on who developed them. But I think this is a problem that exists in all languages where publishing packages is relatively straightforward.


numpy doesn’t get sent to the client if you use it on the server.

> JavaScript frontends are becoming unnecessarily bloated


People often conflate use with abuse.

The bad experiences stick out to people, whereas all the well behaved JS-heavy apps out there likely don't even register as such to most people.

Even with SPAs, it's very possible (and really not that hard) to make them behave well. Logically, even a large SPA should use less overall data than a comparable server-rendered app over time. A JS bundle is a bigger initial hit but will be cached, leaving only the data and any chunks you don't have cached yet to come over the wire as you navigate around. A server-rendered app needs to transmit the entire HTML document on every single page load.
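The tradeoff above can be put into back-of-envelope numbers. All figures here are invented for illustration (a 2 MB bundle, 10 KB of JSON per navigation, 100 KB of HTML per server-rendered page); the point is only that the SPA's up-front cost amortizes over enough navigations:

```python
# Back-of-envelope with invented numbers: when does a cached SPA bundle
# pay for itself versus shipping a full HTML document per page?
bundle_kb = 2048     # one-time SPA bundle download (cached afterwards)
json_kb = 10         # per-navigation JSON payload in the SPA
html_kb = 100        # per-navigation full HTML document when server-rendered

# Find the first navigation count where the SPA's total transfer is lower.
n = 1
while bundle_kb + n * json_kb >= n * html_kb:
    n += 1
print(n)  # 23
```

So under these made-up numbers the SPA comes out ahead only after a couple dozen navigations, which is why the answer depends heavily on how long users stay in the app.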

Of course, when you see things like the newer React-based reddit, which chugs on any hardware I have to throw at it, I can sort of see where people's complaints come from.


I'd take it back even further:

We started out on mainframes.

Then things moved to the desktop, with some centralized functionality on servers (shared drives, batch jobs).

The processing moved to centralized web servers via the web, SAAS, and the cloud.

Then more moved into the client through React & similar.

And now things are moving back to the server.

Tick. Tock.

These changes are not just arbitrary whims of fashion, though. They're driven by generational improvements in technology and tooling.


I got a chuckle out of the Apple M1 chip touting shared video memory as a big step forward. (Which it is, but it's still amusing to me how it might have sounded like a groundbreaking innovation to a layperson.)


In the PC industry, that's called UMA and has been around for a few decades, synonymous with ultra-low-cost (and low-performance) integrated graphics. To hype it so much as a good thing, Apple marketing really is genius.


Apple takes Cue From Original Xbox with Latest Chipset.


Or; Apple takes cue from own Macintosh IIsi from three decades ago?


Yes. I have thought about this a lot. There are cycles...

Like thin client (VT100), to thick (client/server desktop app), to thin (browser), etc.

Similarly, console apps (respond to a single request in a loop), to event-driven GUI apps, to HTTP apps that just respond to a simple request, back to event-driven JS apps.

It depends on how you define the boundaries, but history rhymes.


Virtual machines, containers, very similar to partitions and spaces on mainframes as well.


Virtual machines are not a recent invention. They were already being used on IBM mainframes starting in the early 1970s:

https://en.wikipedia.org/wiki/VM_(operating_system)

Notably, the VM operating system could run an instance of itself in one of its own virtual machines.


Is it really a pendulum, or is it more that this was always an idea with merit that's now finally seeing wider adoption because it's become more widely available? (In part, I understand, due to some IBM patents that expired 10 or so years ago.)


And serverless is an anemic CICS executing non-transactions.


Serverless is kind of like Apache running PHP scripts in virtual hosts.


The greatest trick the devil ever pulled is convincing people that shared hosting is preferable to dedicated, and then charging them way more money for it.


The cloud is just someone else's computer in a data center in New Jersey about to be hit by a hurricane.


> They're driven by generational improvements in technology and tooling.

I'd say they're driven by corporate greed. Cloud computing is basically renting time, and so the more you use them, the more $$$ they make.


Every time the pendulum returns, it returns profoundly changed. And it returns because the changes make the coming back possible.


So when and how does the p2p / distributed pendulum swing back? When do we stop using AWS mainframes for everything?

I sense that you're right about swings requiring change to older techniques. But I think there's also a component of being fed up with the direction things are currently facing.


Unfortunately p2p computing is badly hindered by the copyright industry. The research is still active and we have a lot of ideas for distributed computing and p2p beyond file exchange. A lot of it is used today to distribute mainframe infrastructure instead of creating a truly distributed network.


I completely agree with this. For reference, I'm a relatively new developer - 3.5+ years of experience in my first developer position.

At the beginning of college everyone was SUPER into NoSQL. All my friends were using it, SQL was slow, etc.

Nearing the end of college and the beginning of my job I began seeing articles saying why NoSQL wasn't the best, why SQL is good for some things over NoSQL, etc.

Technology is cyclical. 10 years from now I expect to read about something "new" only to realize that it was something old.


The NoSQL trend was so terrible. Anyone starting out right in that time frame where mongo and other NoSQL DBs were getting popular was really done a disservice.

I sit in design meetings all the time where people with <5 years experience go out of their way to avoid using a relational database for relational data because "SQL is slow". They will fight tooth and nail, shoe-horning features into the application which are trivial to do with a single SQL command.

I helped out on one project led by a few younger devs who chose Firestore over Cloud SQL for "performance reasons" (for an in-house tool). They had to do a pretty major rewrite after only a few weeks once they got around to deletions, because one of their design requirements was to be able to delete thousands of records; a trivial operation in SQL, but with Firestore, deleting records requires:

> To delete an entire collection or subcollection in Cloud Firestore, retrieve all the documents within the collection or subcollection and delete them. If you have larger collections, you may want to delete the documents in smaller batches to avoid out-of-memory errors. Repeat the process until you've deleted the entire collection or subcollection.

> Deleting a collection requires coordinating an unbounded number of individual delete requests.

Turns out, once they started needing to regularly delete thousands to millions of records, the process could run all night. Luckily, moving over to Cloud SQL didn't take very long...
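For contrast, the "trivial operation in SQL" really is a single statement. A minimal sketch using Python's built-in sqlite3 (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, created_day INTEGER)")
conn.executemany(
    "INSERT INTO records (created_day) VALUES (?)",
    [(day,) for day in range(100) for _ in range(100)],  # 10,000 rows
)

# Deleting thousands of records is one statement, not a client-side
# retrieve-and-batch loop over every document.
cur = conn.execute("DELETE FROM records WHERE created_day < 90")
conn.commit()
print(cur.rowcount)  # 9000
```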


> I sit in design meetings all the time where people with <5 years experience go out of their way to avoid using a relational database for relational data because "SQL is slow"

I mean, this is just dumb. I have less than 5 years experience and I understand that SQL isn't "slow", there are just different tradeoffs between SQL and NoSQL databases and you have to pick the right tool for the job.


Or just... use indexes.
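To make the one-liner concrete, here is a sketch with Python's built-in sqlite3 showing the query plan flip from a full table scan to an index search once an index exists (the schema is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Without an index, the planner scans the whole table.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@example.com",)
).fetchall()

conn.execute("CREATE INDEX idx_users_email ON users (email)")

# With the index, the same query becomes an index search.
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@example.com",)
).fetchall()

print(before)
print(after)
```

The exact plan text varies by SQLite version, but the before/after difference (SCAN versus SEARCH ... USING INDEX) is the whole story of why "SQL is slow" is usually a missing-index problem.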


NoSQL is very much like the databases that were around in the 1960s ("navigational" databases, nested sets of key-value pairs). E. F. Codd proposed a database of tables (which he, a mathematician, called "relations") to solve a number of problems that these primitive databases were having, one of which was speed.


The funniest part is seeing companies jump onto the distributed NoSQL bandwagon with their fundamentally relational and transactional data structures and then reinvent the transactional relational database.


SQL has always been faster for querying. Whether it's faster for development is another thing, depending on the project and experience.


You should be very glad you saw the utter pile of crap in the technology fashion show at such a young age.


The Rails people never went for SPAs, though. Releasing another server-rendering AJAX thing for Rails (the previous was Turbolinks) no more represents "the pendulum swinging back" than a new version of COBOL that runs on mainframes represents the pendulum swinging back to mainframes. If this approach gains market share against React etc., then that will be meaningful, but don't hold your breath; there are legitimate reasons for the move to SPAs and also an enormous amount of institutional inertia behind it.


I don't think that's entirely accurate. Lots of Rails users went the SPA route the second stuff like Backbone came out. Wycats was big in the Rails community at this time and he spearheaded Emberjs. The Shopify guys were (and are still) big in the Rails community and they created their own Batman.js. It's just that the Rails core devs made a decision to not go that route. They were even working on their own front end framework at one point and after some time they decided to kill it in favor of just using pjax/turbolinks. You can get your 80% case accomplished with these technologies with substantially less effort. There are definitely reasons to go SPA, but the dev community at large has jumped on the hype train here without really identifying that using these technologies are a good idea for their use case. I mean, there's a lot of people doing CRUD with React. That's crazy.


Lots of Rails back-end applications power SPAs on the front end. Sometimes for good reasons, often enough just because it was more "modern" - but much less efficient in terms of programming.


The interesting thing is that if you don't think of the browser as just another runtime, nothing more than The VM That Lived (where applets and flash died), but actually think of your applications as Web Applications, then you get the ideas behind this faster.

JSON is just a media type that a resource can be rendered as. HTML is another media type for the same resource. Which is better? Neither, necessarily, it depends on the client application. But if you are primarily using JSON to drive updates to custom client code to push to HTML, well, that should give you something to think about.
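A toy sketch of that framing, with the same resource rendered as two media types (function and field names are invented; Python is used purely for brevity):

```python
import html
import json

def render_json(todo: dict) -> str:
    # SPA style: ship data, let client-side JS template it into the DOM.
    return json.dumps(todo)

def render_html(todo: dict) -> str:
    # HTML-over-the-wire style: the server does the templating.
    return f'<li id="todo_{todo["id"]}">{html.escape(todo["title"])}</li>'

todo = {"id": 7, "title": "Write docs & tests"}
print(render_json(todo))
print(render_html(todo))  # <li id="todo_7">Write docs &amp; tests</li>
```

Neither rendering is privileged; they're two representations of the same resource, and the question is only which one your client is best equipped to consume.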


> I immediately chalked it up to 65-and-above white wise men snobbery.

Perhaps this can be the opportunity for you to look through your past and consider and reevaluate other ideas you discarded because of your own bigotry.


You went to a school with professors and dismissed their final advice to you as "65+ above white wise men snobbery"?

Which part of that is supposed to be a reasonable thing to say?


> Which part of that is supposed to be a reasonable thing to say?

None of it - that's the point. They are self-deprecatingly pointing out how naïve and judgemental their younger-self was.


Lol so did I. Ageism is a thing, and it's everywhere. At least when you're young, you don't have the excuse of already having been in the other age class. That being said, several of my older professors were entirely full of snobby shit. The older I get, the more I see how they were not trying to impart knowledge, but to gain some kind of status as "hard-ass" old men with the younger generation.


Nobody is doing what they claim. It's all ego and posturing. I'm getting tired of humanity.


I read it as a self-deprecating dig at his / her younger self.


Exactly, it is kind of like Clarke's first law:

https://en.wikipedia.org/wiki/Clarke%27s_three_laws

Youth are always writing off the oldies - I did it, and now that I am old, I see it happening to me - and that is ok - we need that passion to shake things up, even if they end up eerily similar to the way things were done before...


I would point out that those who are older also tend to write off the younger. I think it's just perspective mismatch: if I can emulate another person's perspective in my head, I can anticipate their decisions (and reasoning), so I can decide if they are being reasonable.

However, if I can't understand their perspective, I have a very hard time understanding and judging their reasonableness (because I'm basing my judgement solely off of my own experiences and memories that are similar to their circumstances).

This lack of understanding translates to seeing a lack of credibility in them. "Maybe if they were more like me, they'd make more sense, be more reasonable". This type of thinking is common in most types of prejudice.

It's why young people write off older people: "They're too old to remember what it's like being my age, or to understand how things are now".

Why the opposite occurs: "They're still too young to understand how life works yet".

Why people of very different cultures tend to be prejudiced: "Their kind are ignorant of how the world works", and the opposite: "They've never been through what I've been through, they don't understand me or mine".

All of these statements evaluate down to: "If they were more like me, they would be reasonable". Which is of course true: if "they" were more like "you", their systems of reasoning and value would be more similar to yours, and vice versa.


In computer science, it's particularly tempting for the yutes to write off the oldies, because technologies change so rapidly — I sometimes frighten the kids by mentioning that I got my first degree before the WWW was invented, and I'm far from retirement age.


This doesn't apply to all tech. Web just doesn't have a good solution because it's so complex. You'll always be making compromises. Some compromises are trendier than others at any given time. IMO, simple tech you don't have to think about doesn't have these "pendulum" effects in its usage. You forget it's there.


> then we went full SPA

Rails never did, and even actual SPA frameworks (e.g., React) have had SSR support/versions for quite a while. Basecamp introducing yet another iteration of front-end-JS dependent mostly-SSR for Rails isn't a pendulum swinging anywhere.


I think I'd be more impressed by this idea if their server wasn't currently down.


It's back up. If you check out the traffic to it, we hugged it to death.


It's not just that things are too complicated... the JS being sent to browsers is large and a lot of work. That requires more bandwidth, processing, and power usage on client devices. This eats phone, tablet, and laptop batteries.


But... that's one of the pros of not having to do the rendering cycle on the server. Also caching of framework libraries off CDNs and such.

I don't see much merit in moving back to server-side rendering aside from obfuscation & helping SEO ratings (web crawlers have a hard time with SPAs).


> But... that's one of the pros of not having to do the rendering cycle on the server. Also caching of framework libraries off CDNs and such.

This doesn't save battery life on a device. If someone downloads a few megs of JS, their browser has to parse and execute that JS locally, and that processing uses power. If that same person had half as much JS to parse and execute, it would use less power.

A CDN does not save from this happening.

When power use happens on a server it's more on the server but less on devices with batteries. Batteries aren't used up as quickly (both between recharges and in their overall life).

A server side setup can cache and even use a CDN to only need to render parts that change.

My points are that it's not all cut and dry along with considering batteries.

Oh, and older systems (like 5 year old ones)... surfing the web on an older system can be a pain now because of JS proliferation.


> Oh, and older systems (like 5 year old ones)... surfing the web on an older system can be a pain now because of JS proliferation.

This matters because the poor, the elderly (on a fixed income), and those who aren't in first-world countries don't have easy access to money to keep getting newer computers.

Then there is the environmental impact of tossing all those old computers.

So, there is both a people and environment impact.


I think there's some kind of weird mentality among web devs that client-side computations are free, but server-side ones cost resources because you do more of them the more users you have.


That’s not so weird. It’s like IKEA shipping you disassembled furniture: they don’t have to pay for assembly (nor for shipping as much air). The client bears the cost of assembly, so if you don’t pay the client’s costs, it’s free.


They are free, just not to the client.


You're right it's not all cut and dry.

The two things that use the most battery in a phone are the radio and the screen.

If you can do most of the work client side, the phone can turn off the radio and save battery. The amount of battery savings of course depends greatly on what the application is actually doing.


An interesting development is that the argument "common libraries will be cached in the browser" is no longer true. Chrome and other browsers are starting to scope their caches by domain, to mitigate tracking techniques that used 304 request timing to identify if the client had visited arbitrary URLs.

Yes, I'm aware that "it will be cached" lost most of its glory when bundling became mainstream, but I still hear it as an argument when pulling things from common CDNs.


In a few tests I ran, I found rendering to be fast and lightweight. If you already have prepared the associative array of values, then the final stage of combining it with a template and producing HTML doesn't strain the server, and so it doesn't help your server much to move that part to the client.
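That final combining stage can be sketched in a few lines (made-up template and values; Python's string.Template standing in for ERB/Handlebars):

```python
from string import Template

# Merging an associative array of prepared values into a template.
# This step is cheap; the expensive work (queries, joins, aggregates)
# happened before we got here.
page = Template("<h1>$title</h1><ul>$items</ul>")
rows = ["apples", "pears"]  # pretend these came from the database
values = {
    "title": "Inventory",
    "items": "".join(f"<li>{row}</li>" for row in rows),
}
print(page.substitute(values))  # <h1>Inventory</h1><ul><li>apples</li><li>pears</li></ul>
```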

The server's hardest work is usually in the database: scanning through thousands of rows to find the few that you need, joining them with rows from other tables, perhaps some calculations to aggregate some values (sum, average, count, etc.). The database is often the bottleneck. That isn't to say I advocate NoSQL or some exotic architecture. For many apps, the solution is spending more time on your database (indexes, trying different ways to join things, making sure you're filtering things thoroughly with where-clauses, mundane stuff like that). A lot of seasoned programmers are still noobs with SQL.

Anyway, if rendering is lightweight, then why does it bog down web browsers when you move it there? I don't think it does. If all you did was ship the JSON and render it with something like Handlebars, I think the browser would be fine, and it would be hard to tell the difference between it and server-side rendering.

I think what causes apps to get slow is when you not only render on the client but implement a single-page application. (It's possible to have client-side rendering in a multipage application, where each new page requires a server roundtrip. I just don't hear about it very much.) Even client-side routing need not bog down the browser. I've tested it with native JavaScript, using the History API, and it is still snappy.

I guess what it is, is that the developers keep wanting to bring in more bells and whistles (which is understandable) especially when they find some spiffy library that makes it easier (which is also understandable). But after you have included a few libraries, things start to get heavy. Things also start to interact in complex ways, causing flakiness. If done well, client-side code can be snappy. But a highly interactive application gets complicated quickly, faster than I think most programmers anticipate. Through careful thought and lots of revision, the chaos can be tamed. But often programmers don't spend the time needed, either because they find it tedious or because their bosses don't allot the time --- instead always prodding them on to the next feature.


And state is not reflective of reality in the database, which is a terrible idea for most apps.


Indeed, this is exactly how web chats worked in the late 1990s, except for the use of WebSockets (they used an infinite page load instead). They even seem to be reviving frames, another staple of 1990s design!


>65-and-above white wise men snobbery

Nice! Casual ageism and racism mixed into one post.


Conventional wisdom is that discrimination against privileged groups such as white men is less offensive because they’ve endured so much less of it.

On one hand, it’s true. It’s part of white privilege which is tangible.

On the other hand, however rarely people in a privileged class are realistically impacted by discrimination, it’s still > 0.0%. Since it usually costs nothing more to include everyone, it seems useful.

But I think the biggest reason it’s important to care about discrimination wherever it shows up and not let people off the hook is that it’s unifying.

There’s a story out of Buddhism that suggests it’s important to think equally kindly about rich people, kind of similar in that they’re a privileged class.

I know it’s a hard sell. I don’t do it justice here. However a powerful argument can be made that not disparaging privileged classes, actually helps us all in the long run/big picture.

If I get down voted I understand, that’s ok. If it makes a difference I don’t mean to minimize the 10,000 year history of pain suffered by any humans due to discrimination.


> However a powerful argument can be made that not disparaging privileged classes, actually helps us all in the long run/big picture.

The powerful argument is that you should treat everyone well, period, and not do some kind of calculation to decide how cruel you're allowed to be to them.


So at first you're sympathising with discriminating against those that you see as suffering less and then your big idea is treating people equally and that's a 'hard sell'. That is like being a basic good fucking human, it's not a novel idea.


Sorry if it was unclear that’s not what I meant to imply.

First, acknowledging the thinking behind a common opinion is not the same as sympathizing with it. It’s only stating a concept I disagree with.

Secondly, it’d be nice to take credit for this big fucking idea, but unfortunately it’d be thousands of years too late. I explicitly mentioned the source.

Finally, I don’t see how it’s not a novel idea. If you started asking people to think kindly about rich Wall Street bankers or cable company executives, would everyone be instantly on board?

I know those are extreme examples but that was the point of the story. What’s indeed not novel is to say, think well of all people.

The hard part is when you try to actually apply it equally, including to less popular but highly privileged classes of people.

I don’t claim that I can do it all the time, I’m sure I don’t in fact. However for any ideal shouldn't it be ok to try and work towards it over time?


Racist people like you make peaceful protests and working for change against racism so much harder. You're just out for revenge and your rhetoric shows it.

Edit: I've had just about enough of people using "white privilege" to justify violence and blatant discrimination because "they haven't been exposed to enough". It's just another way to justify racism. Plenty of white people live in poverty. It's not ok in either direction.


Racism is singling out white people as the source of all evil, and then backing it up with statistics which don’t tell the whole story.


Did you read their entire comment? It sounds like you agree with them.


The pre and post edit makes it seem like they could be on either side of the argument.


Seriously. That's a WTF from me, dawg...


Rough guess: OP is making fun of himself with this now.


Yep.


[flagged]


.


[flagged]


Probably because they are relating an anecdote from their past, and self-deprecatingly pointing out how naïve and overly-judgemental they were _back then_.


I think the same.


I don't think this addresses all of what SPAs are used for. It seems to assume full-stack control.


It's specifically built for Rails, so yeah, it definitely assumes full-stack control.

And there are definitely applications I would prefer to write as an SPA over the Hotwire approach. But given that the vast majority of websites are just a series of simple forms, I prefer this approach over the costs you incur from building an entire complex SPA.


While it works with Rails... some of the parts are just JavaScript and will work with any underlying platform.


Fascists are out for ethnic cleansing while the world turns a blind eye. Sound familiar?


Expected? Yes. Normal? Yes. The only thing out there? No.

For every $30/hr posting on the net, there is another one for $100. I also personally know folks who are making $150+/hr with Django. It really depends on who you work for.

Although you will see a lot more `build Uber for $5K` type one-off jobs, there are some reputable companies who need more capacity on new projects.

If I were you I would cast a wider net and sample the market more.


Who pays >$150? I always hear about this, but neither as a freelancer nor as an employee who hires freelancers have I ever seen somebody make that much freelancing. Sometimes consulting companies charge that much, but the freelancers probably get paid only a fraction of it.


Industries where money comes in by the bucket have no problem spending it with a shovel for specialized knowledge they don't have.

If you're a contractor, your best bet is to focus on an industry to understand its needs better. Rather than being a Python developer, you can be a contractor who builds usable solutions that happen to use Python.

Industry knowledge, referrals, good prior work and marketing are important. These $150/hr jobs won't fall into your lap unless you put in the effort to position your lap that way :)


Real estate, finance, oil & gas, software development clients. I charge $160/h. 20+ years of experience.


I have a buddy who charges $200/h for JavaScript work in Oil & Gas. He works on-site, wears a suit, and makes bank in a cheap city. Never has issues getting work.


Normal start-ups (i.e. not Valley Bubble) and small shops are almost never able to pay that much, but medium and large clients will easily pay that rate for established senior engineers WITH REPUTATIONS.

As much as we like to think technology is a meritocracy, at the end of the day, it always comes down to who you know. The charitable interpretation of that fact is that there aren't a lot of ways to gauge reputation, so it's mostly about who can vouch for you.

Source: I co-founded a software and design consultancy co-op (https://www.fountstudio.com/about).


When I get my own clients directly, I'm closer to that. When I'm subcontracted through someone else, it's typically < $100 ($70-$90). I consciously make that trade-off sometimes because I'm effectively outsourcing some of the risk: negotiations, collections, etc. It's also sometimes gotten me into projects on teams where I'd normally be an independent, and the dynamics and range of engagements I've had have been a bit wider because of it. The $120/hr+ engagements I've had tended to be shorter (< 6 months); the subcontracted ones tend to be longer. And while I know that if I'm getting $80, the people selling me are getting $110 or more, I understand they're providing some service that I didn't particularly want to do myself (or, for example, negotiated a 9-month engagement).


These numbers are more in line with what I have been and am still seeing.


Where are you?

In NYC and SF, you'd never fill your contract if you were trying to pay < $150.


I was freelancer in Maryland and then worked at a company in LA.


When I was freelancing (mostly Swiss companies), in the end $180/hour was my starting point for negotiations.

For bigger multi-month projects I would come down quite a bit, but for small ones I wouldn't budge.

It's not something you can pull off when you're new to freelancing and, more importantly, don't have connections/referrals.

It's a comparable price to what small agencies charge (or charged; I haven't been working/living in Switzerland in years).


If you have a name you can charge exorbitant rates for anything, and large companies will pay it. IAmTimCorey is somewhat of a YouTube celebrity with millions of views on pretty basic (but extremely well-made, all-around excellent) videos. His advertised consulting rate is $300/hr. To your second point, Oracle bills its consultants at $400/hr or more, and they're probably making $200-250k.


Most likely desperate, very short-term, one-off contracts.

It's almost meaningless to talk about an hourly rate without also providing the length of the contract.


Someone paying not just for a dev but a dev with significant domain knowledge in a niche area.


How much does somebody in this kind of field in the US pay in taxes? As for me (Germany), I basically have to give away half; after taxes, such a posting would pay me about the same as a very entry-level position.


In the US you would pay roughly 35% to 40% in taxes. However (in general), expenses incurred in the course of your business would not be taxable.

For example, if you earned $150,000 as a contractor, but you incurred $30,000 in expenses then you would have $120,000 in potentially taxable income.

The tax code is very, very complex, though, and there isn't a simple answer to this question. There are a vast array of factors that would shift how much you pay in tax up or down, and this complexity is heavily exploited by lots of folks.
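To make the arithmetic above concrete, here's a toy sketch. The flat 37% effective rate is an assumption for illustration only; real brackets, state taxes, and self-employment tax differ:

```javascript
// Toy model: expenses reduce taxable income dollar for dollar,
// then a single ASSUMED flat effective rate applies.
function taxableIncome(gross, expenses) {
  return gross - expenses;
}

function afterTax(gross, expenses, effectiveRate = 0.37) {
  const net = taxableIncome(gross, expenses);
  return net - net * effectiveRate;
}

// The example from the comment: $150k earned, $30k of expenses.
console.log(taxableIncome(150000, 30000)); // 120000
console.log(afterTax(150000, 30000));      // 75600
```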


I would include health insurance premiums in the US as a tax when comparing to other developed countries, since they don’t have to pay those. It's easily $5k per family member per year, plus $28k that needs to be available in case of emergencies due to out-of-pocket maximums, and $10k to $15k per family member per year if you’re 45+.


I think if you do that it becomes really hard to compare a lot of things.

So health insurance is a cost, albeit not the same for everyone: at some companies you pay surprisingly little, and that is compensation ON TOP OF your pay...

Other places, not.

It's really hard to get good 1:1 comparisons.

Taxes etc. in the US are applied very unevenly depending on the situation, compared to what seem to be more standardized and predictable numbers and services in Europe.


This topic is concerning freelancer income, so it doesn’t involve an employer paying for a portion of an employee’s health insurance premiums.

It’s also very easy to compare even if you are an employee, as health insurance premiums are shown in box 12, code DD, of an employee’s W-2. And health insurance premiums don’t vary that much between healthcare.gov and the ACA-compliant insurance that employers subsidize.

What is clear is that healthcare is an expense in life, so if you are paying for it via taxes in one region, you have to figure out how much it costs in the other region, where it’s not included in taxes, to make the comparison more accurate.

You can use this table from the state of NJ to reasonably guess how much healthcare will cost you in a moderately high CoL area in the US:

https://www.state.nj.us/dobi/division_insurance/ihcseh/ihcra...


Valid point, I lost track of the freelancer situation.

As for the differing costs of insurance outside of the freelancer world, I've found the costs in the US can vary wildly. Those tax boxes aren't the only way / shouldn't be the only comparison for US healthcare costs.


I don’t understand why they can’t be used to reasonably estimate healthcare costs. ACA compliant plans are pretty standardized, as well as the metal levels indicating expected healthcare costs.

Some plans will cover some providers and some won’t, and rural areas will have issues with even having providers at all, but for most major urban/suburban regions it should be comparable, within +/- 10%, maybe 20%.

Either way, you know that healthcare costs are in the thousands, $10k+ per year per family, which is a lower bound you can add when calculating effective US tax rates.


Keep in mind that contractors can put up to $57k per year into a retirement account and can deduct 20% of income from the QBI deduction which will drastically reduce your taxable income.
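As a rough, heavily simplified sketch of how those two deductions might stack. Real QBI rules involve SE-tax adjustments, income phase-outs, and ordering details, so treat all of this as illustrative only:

```javascript
// VERY simplified: subtract the solo-401(k) contribution first,
// then take 20% QBI off what's left. Real rules are messier.
function roughTaxable(grossIncome, retirementContribution) {
  const afterRetirement = grossIncome - retirementContribution;
  const qbiDeduction = 0.20 * afterRetirement;
  return afterRetirement - qbiDeduction;
}

// e.g. $150k gross with the full $57k contribution:
console.log(roughTaxable(150000, 57000)); // 74400
```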


Earned around $130k in consulting last year. After deductions/expenses, US federal taxes were around $8k. FICA was something else (I can't remember offhand), and state tax was something like $3k, so federal/state taxes were ~$11k. Health insurance was around $10k.


As a Dutch freelancer, my marginal tax rate is a bit over 50%, but I think I've got way too many tax breaks and deductibles compared to regular employees. I'm not going to complain if they decide to cut some of those back a bit (though I'm sure other freelancers will).


Meetup was/is fundamentally a marketing platform for event organizers.

The value it brings is the user base, so it always had to find ways to bring people to the platform to keep the value for organizers. However, more casual users resulted in more no-shows, legal issues, spammers, etc.

As a result, a considerable amount of effort went into operations.

It was a great idea, but as a business, scaling it required proportional investment in operations, so it never made any real money.


That brought in most of the revenue, but a 1:50 ratio of organizers to users at $15/mo doesn’t add up to much.


I am in a similar position and would like to know how you got started with rental units. Are you using a property manager like it was mentioned in a previous comment? Do you recommend AirBnB vs. long-term rentals?


I used equity from my primary residence to finance my first investment, since I couldn’t find other means of financing. I don’t think my area would be good for AirBnB, although I haven’t tried; I should probably read more about it. I am doing everything myself for now, since I get better ROI, but will probably move to property management once I have more properties.


I self-manage since my properties are in the same small city that I live in. It's not that much work. I love AirBnB, but you have to be careful with it: a lot of towns and cities have outright banned short-term rentals or at least made it very difficult to operate them legally.


A coworker who lives in PA sold his unit after laws were changed that would no longer allow his AirBnB to be profitable: he would have had to add a sprinkler system and other things costing $30k+.


I was part of the founding team of a startup that did exactly this about 2 years ago. I wish you the best of luck, but sales cycles are excruciatingly long, and the larger players working with razor-thin margins may find the 4.5% a bit too expensive. Happy to chat, and best of luck with the startup!


Thanks for the well wishes! We believe that the timing component is really important, and there is evidence that the winds of change are blowing right now. Would love to learn from your experience.


Count me in too! I've been growing more and more frustrated with the tools in the marketplace.


Interested as well. I'm a smaller investor in the Chicago market with a few ideas I've been wanting to flesh out or bounce off of others in the space.


Hmm, wondering how to progress this. Would you guys be comfortable joining a WhatsApp group to discuss it?


Count me in. Let’s start with WhatsApp since a lot of folks have it. I can pm you my #. Just need to get in front of a real computer.


Any platform works for me. Let me know if there’s anything I can do to help.


Let me know how to get my contact information over to you.


I would like to join the chat, if there is still room!


I'm interested also.


This is very neat! Last year I had to do essentially this on GCP and relied on a very similar implementation. Everyone was surprised to see JS being used for data processing, but it worked wonderfully.

One thing I want to ask about is retries: how do you handle them currently? I ran into multiple cases where functions would fail for transient reasons.


Functions need to be idempotent, so you have to assume they will be retried. Faast.js will proactively do retries in some cases where it thinks a function is slow, to reduce tail latency.

If a function fails to execute for transient reasons and exceeds the retry maximum (a config setting you can change), then it will reject the return value promise. You can catch that and handle with another attempt, or report an error, or just ignore it and report less accurate or complete results.
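The retry-then-reject contract described above can be sketched generically. To be clear, this is not the faast.js API; the names here are illustrative of the semantics only:

```javascript
// Generic sketch of retry-then-reject semantics for idempotent
// functions: the wrapped function may run more than once, so it
// must be safe to repeat.
async function withRetries(fn, maxRetries = 3) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // transient failure: try again
    }
  }
  // Retry budget exhausted: reject, letting the caller decide
  // whether to try again, report an error, or ignore the result.
  throw lastError;
}
```

A caller can then `catch` the rejected promise and handle it exactly as described: another attempt, an error report, or accepting less complete results.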


Living in the suburbs and having a similar 1 hour 20 min train ride to work, I ended up setting up a small workout space with a treadmill, weights and a rowing machine. It does the job but no substitute for a fully equipped gym. I applaud your disciplined schedule though. I still find it hard to work out consistently.

