Rebuilding our tech stack for the new facebook.com (fb.com)
655 points by alexvoica on May 8, 2020 | 470 comments



A big point of contention in the comments here lies around the concept of what a website should be.

A school of thought in web development believes the web to be the next frontier in application development. For them it makes sense that websites like this feel and act like apps, both as an end-user (animations, transitions without full-page reloading, rich dynamic content, etc) and as a developer (client-side state management, frontend/backend separation, with an API in between, modular application structure, etc).

Apps don't load in 10ms, but they can support some offline functionality, given their independence from the server. Overriding browser behaviour and managing your own loading behaviour makes sense, because the default browser behaviour is not the experience you're striving for; it's not the app experience. These people are usually those who have worked on large web projects, too: the developer experience that web developers have built for themselves in "userland" (JavaScript) is pretty good, and has evolved a lot of features that make developing the exact behaviour you want easier, and iterating correctly on a codebase quicker.

A separate school of thought wishes websites had stayed true to their origins as enriched documents, and thinks trying to evolve them in the direction of applications is counter-productive to their true purpose as interactive surfaces for getting and sending information. If I am just going to scroll and see a few pictures, why do I need to download anything other than the structure of the page and the pictures themselves? If all the building blocks are already there, in the browser, why do people need to re-invent them and ship them as yet more JavaScript?

What should a website be, though? The fact that there isn't consensus about this is an indication that there really isn't a clear answer.

Per the document-like school of thought, facebook.com just keeps straying further and further away from the ideal, but as far as the app-like school of thought goes, the new facebook.com is a pretty remarkable achievement.


Well I certainly want Google Maps to act like an app (which it does): panning, zooming and so on. And I'm happy for Hacker News to act like a traditional website. At that point it's a false dichotomy: different solutions for different problems, yes?

As an analogy: some pop-up books are amazing works of art. But reading would be frustrating if every book was a pop-up book.


On one hand I appreciate how elaborately the parent comment made their point. On another, I appreciate how succinctly you made your point. A conundrum within a conundrum, indeed.


Sometimes the medium is the message; sometimes a cigar is just a cigar.


HN in a nutshell


Things that act like apps also can't be adequately prevented from compromising the general data privacy of the population. What this means is that it doesn't matter very much whether you or I have mere preferences for something to have rich app-like functionality or not. Our wanting it does not play a role unless it can be made verifiably secure and not abusive of user privacy rights. So far, general web applications cannot be made verifiably non-abusive of users' data privacy, so we must only focus on websites as purely structured documents with no app-like capability.

I sure wish companies wouldn’t abuse data privacy so we could instead care about user preferences for app-like functionality, but we don’t yet live in a world like that.


How is this any different from native apps that offer the same functionality? The article is about Facebook, not Emacs. A native Facebook app is a much bigger privacy concern. The native app still stores your data in the cloud, so you don't control it. It can still be using 3rd-party libraries that do more than their stated function. It can still be communicating with servers from all over the world.

On top of all that, it has raw packet access to your WiFi for scanning your entire network on all ports (web apps can't do that). It can scan for SSIDs to find your location (a web app can't do that). It can scan Bluetooth to find your location and/or proximity to others (a web app can't do that). On most OSes it can read most of your files, like, say, your private ssh keys (web apps can't do that). If you give it access to your photos to select one photo, it can read all of them (web apps can't do that).

On a web app, you can easily add all kinds of mods via extensions to add or remove features, because web apps are based on a known structure (HTML). That's rarely true for native apps. For the app in question, Facebook, see FBPurity as an example.


I am not sure that's what he or she was trying to say. I agree with everything you said about native apps. The question for me is whether it's necessary for every page to run JavaScript, even if it doesn't have any app-like features, even if it's just a document. I would love to be able to turn off JavaScript and browse the document-web without everything being broken. In the specific cases where I need dynamic or app-like behavior, I can use JavaScript, or maybe even a separate app browser. https://www.wired.com/2015/11/i-turned-off-javascript-for-a-...


I don't think you articulate this quite right: it's possible to think both that websites and web applications are worthy uses of the web, and that many web applications would have been best engineered using 'classical' techniques from early websites. There's a strong argument to be made in Facebook's case, since the core value proposition of Facebook hasn't changed much since its inception, and it began its life as a server-side rendered 'website.'

In any case, this argument is operating at the wrong level of abstraction. The issue here isn't the distinction between these two things conceptually, but whether there would be less incidental complexity overall if what are typically called web applications took a different approach to implementing their features, while still maintaining the same user experience.

It's hard to look at all of the crap you need to do to get a functioning web app working to not think there must be a better approach.


> it's possible to think both that websites and web applications are worthy uses of the web, and that many web applications would have been best engineered using 'classical' techniques from early websites.

I agree with this.

> There's a strong argument to be made in Facebook's case, since the core value proposition of Facebook hasn't changed much since its inception, and it began its life as a server-side rendered 'website.'

Yes, but I assume that's from the perspective of the value the site brings to you, not in general and not to everyone. If someone solely gets value from facebook.com as a site to send and receive information to/from friends and the world, then yeah, it hasn't changed much.

Facebook today offers a richer experience, and that might be part of its value for other people. On facebook.com you can IM a friend while watching a video in a PIP window and engaging in a real-time conversation on the comment thread of an event post. You can then switch back and forth between an online marketplace and a streaming service without losing the state of your chat window. The ability to do these things is part of the value proposition facebook.com now offers, and delivering that value can be harder with a solely SSR'd website.

> there would be less incidental complexity overall if what are typically called web applications took a different approach to implementing their features, while still maintaining the same user experience.

If you've figured out a way that's better, do share! I'm sure there are instances in the wild, but I don't think an experienced engineer would ship their own client-side networking in JavaScript if there were a better way to achieve what they want without shipping any more JS.

> It's hard to look at all of the crap you need to do to get a functioning web app working to not think there must be a better approach.

To be clear, you can get a functional "hello world" web app with a single line of code (specifically, thanks to the fact that HTML is very permissive with improperly formatted documents). Everything afterwards depends on the decisions you make for the experience you want. Is getting rid of that 200ms flicker between full page loads worth the 500ms it might take to load and initialize your client-side routing library? Is making your site available offline worth the effort of setting up all the extra PWA business? Some will think so, some will not.
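
For illustration, a minimal sketch: a file containing just the single line below is a complete, functional page, because the browser silently supplies the missing html, head, and body elements.

    <p>hello world</p>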



> You can then switch back and forth between an online marketplace and a streaming service without losing the state of your chat window.

On the other hand, if you built these things with lighter-weight techniques, these separate parts of the application could be opened in separate browser tabs or windows (without your computer grinding to a halt from loading multiple instances of a gigantic SPA.)


I sometimes wish iframes were more useful.


Yeah, I mentioned this in another thread, but the Rails and Phoenix communities both reject the approach of thick clients driven by data-access APIs, in theory and, increasingly, in practice.


Not really at all. There are some posts about things like Stimulus, and DHH is obviously against SPA apps. However, go look at any Rails job posting board... like 60-70% of them are looking for Rails + React (which is a great combo, btw).

Even Rails 6 ships with webpacker by default, and they have included an API mode for a while...

Phoenix LiveView, on the other hand, is totally awesome and supports your point.


> ... as far as the app-like school of thought goes, the new facebook.com is a pretty remarkable achievement.

I agree, they’ve done a fantastic job. Not only that, but as far as corporate engineering blogs go, this article is one of the best I’ve ever read.

Usually I either know the subject too well to learn anything, or I don’t know the subject well enough to understand what they’re saying in the amount of time it takes to read an article.

In this case, they found the perfect depth, they had great judgment on when and how to use visuals, and what they’re conveying is so clearly valuable.

If you usually skip the article and just go straight to comments, consider actually reading this one!


Thank you. This is one of the first things I tell people in my web-dev workshop: they're probably not going to be building websites; they'll most likely be building web applications.


There is something between 10ms and 10s, especially after the initial load. The point of contention is not the website vs. webapp debate; it's more that a webapp doesn't have to be this bloated.

The whole point of SPAs initially was better user experience, notably through faster response times and loading.


If you are not building for the first-time experience, I do not understand why you would have a problem with the initial load of a web app taking seconds. It's like an install, just way faster. No one uses Facebook only once, so no one should care. Android apps are so shit that the "slow" webapps are still miles ahead. I do not understand why the Android Google Drive app takes 5 seconds to load.


> I do not understand why the Android Google Drive app takes 5 seconds to load.

The Android UI framework is simply amazingly slow. Instantiating widgets takes forever, and you have to do it on the UI thread.


Under the document-like school of thought, fewer "engineers" would need to be employed, and fewer computing resources would be required on both the client and the server side. Money, not to mention time, could be saved.

The application-like school of thought probably allows for more user manipulation, metrics, and tracking.


Not sure about the resources part. One key point of having app-like interactions is that you forgo the complete page request/render cycle a document-oriented implementation requires.


Perhaps we should split the web into two worlds and create dedicated browsers for just enriched documents, etc. Add some sort of constraints on what the browser can do and how much control is removed from the user.


They tried that in the 90s; it's called Java. Didn't really work out for that purpose.

In practice, things like Twitter and Facebook, interactive programs, should really be just that: programs you run. If the interface is nigh static and the purpose is content interaction rather than primarily consumption, you should be opening the Facebook program, which gives you this interface and uses its client/server communication to feed messages to and from the interface, not provide the whole thing over the wire, spread across document addresses.

And they are that on mobile. Who uses Facebook's mobile website? Everyone uses the app. The contention only exists on "desktop" OSes because Windows and OSX don't provide a UX workflow to push an app at users (at least they didn't when it mattered) the way a mobile site can, and because the app environments on both were way worse than the Android or iOS SDKs for making a dumb GUI for something like Facebook.


> Who uses Facebook's mobile website? Everyone uses the app.

I do, just like I do with every app that might feel too comfy reaching out to my contacts. Not to mention how resource-hungry it is.

If you think MFC, VCL, or .NET are worse experiences than the Android SDK, you really never coded for Android.


We have Lynx already.

But seriously, where do you draw the line between enriched documents and apps?

CSS? JS used for styling? d3 for visualizations? WebGL?

Websites have gotten so bloated that the only sane future I see is serving people Unix + X in WASM.


The problem here is that the average end user would not know which one to use and when to use it.


And then all those non-techy people will ask why this website doesn't open... and be told they need to download another browser to use web apps. I hope you can see how badly this can go.


I'm pretty sure it's because the document style isn't viable for Facebook as a business; their money comes from all the interactive JS crap they insist on shoving into their product.

Tech has this global problem of technology products never being what they say they are on the tin. It's like those front-end/back-end iceberg memes: what the user wants and what the business wants are just barely in alignment. This needs to end.


I'm currently reading "The Dream Machine", and it makes me instantly go back in time and imagine reading the same comment on discussions around "batch mode" vs "time-sharing" and the development of CTSS [1].

[1]: https://amturing.acm.org/award_winners/corbato_1009471.cfm


No, the problem isn't that the new FB acts like an app; the problem is that it's clunky as hell.

It's like a bad copy of "the new Twitter", and even Twitter isn't really good.

The only decent end-user software from FB is Messenger Lite. It's quick and does what I expect it to do.

Even the voice chat is good, and I didn't expect a lite version of a messenger to have it.


Is the new FB even live yet?


You can toggle between the old and new - click the little triangle at the right end of the header bar and then "Switch to new Facebook"


It's selectively opt-in I think.


I got it, and I did not opt in; it just appeared.


You can make a single-page app that isn't bloated.


I mean, almost by definition a single-page app is a mini browser inside a browser. So you can make it as fast and clever as you want; you've still added another layer to render boxes, text, and, probably most important, ads.


Maybe they should rename it to Faceapp? It certainly has nothing to do with pages or a book anymore.


And Apple doesn't sell fruit... what's your point?


Web is reinventing flash, poorly. But as an open standard. Because of Apple.


I'm actually really surprised by the number of comments in this thread about how the new redesign is slower. I've had it since yesterday and it genuinely feels much faster and more responsive than the old Facebook UI, though, to be fair, that's not a huge accomplishment given that the old UI would take forever to finish painting or respond to input. I'd consider it a success, especially when compared to the disaster that was and continues to be Reddit's redesign.


Across the mobile Messenger app, messenger.com, and the Messenger desktop app, the website was the only place left where you could have multiple chats open at once. Now that moves to one at a time as well. That is a huge usability regression. (Unless it's been fixed since the last time I tried the prerelease. Edit: Just switched back; it does look like they kept chat windows at the bottom instead of just chat heads on desktop, although I can still only have two chats open at once.) Friend lists also got left with the legacy interface, which can't bode well for them in the long run. They are still Facebook's most squandered opportunity.

The never-ending quest to reduce information density is a usability disaster, despite the mistaken belief that cleanliness = usability. Zooming out on the new FB interface, to restore some information density, leaves a comical amount of whitespace. Wells Fargo has turned its desktop interface into a giant stretched mobile app. Nothing is a hyperlink that supports right-click or open-in-new-tab anymore.

https://medium.com/signal-v-noise/why-i-love-ugly-messy-inte...

The cluster-f of the old old Facebook interface was beautiful.


Considering Facebook engineering has gone into detail about how the new site is much faster and transmits much less JS and CSS, I would be a little surprised if the opposite is true. I tend to not implicitly trust HN comments about things being extremely slow, because for whatever reason there are so many of these complaints I’ve never experienced myself. I still haven’t even had performance problems with Electron apps, and those seem to be widely panned on HN as having abysmal performance. They work fine for me.


> They work fine for me.

They generally work fine. They just annihilate battery life and processing capacity while doing so.


I’ve never had any noticeable issues with that, and I’m almost always running and heavily using VS Code and Slack on my Macs.


It's possible both are true: it loads faster on newer computers with wider bandwidth, and slower on older devices.


Really? I assume you don't have slack installed on a Mac then eh? What a horrendous piece of shit.


I’ve had Slack running on multiple Macs for years. I have plenty of complaints about Slack, but none involve performance. What’s the actual complaint? Do people just look at their memory usage and see Slack taking a lot, and if so, is that actually a problem?


Slack is incredibly slow by any metric.

Startup takes tens of seconds.

Typing messages with formatting is visibly sluggish.

And in common cases it just crashes and restarts.

Memory usage is also an issue if you compare it to the usefulness of Slack.

Just communicating with my team shouldn't take 1/5th of my laptop's resources.


The problem is installing a distracting application like Slack as an app on your computer in the first place. I leave these kinds of solutions in the browser so I can control when I want to see messages, and I don't accept desktop notifications either. That's the nice thing about async messaging solutions: there's no expectation that you respond within a second.

If they need to talk straight away, they can call :)


For me, it is decidedly slower and clunkier


Yep, works fast for me too. A random profile opens in 2-3 seconds max for me. But for some people websites are slow for some reason. I've heard complaints about Gmail taking 30 seconds to load, while it takes 2-3 seconds from a cold start for me.


I don't understand how you can think a 2-3 second load time is "fast" for such an enormous platform, given the actual content the user sees. I sure get that Facebook is way more than that, but thinking a page loading in 2-3 seconds is fast is something you could've gotten away with in 2000, but in 2020..? I genuinely don't understand why you find it quick.


Almost every fast website works at that speed for me. I'd like microsecond speed, but when there are websites which truly take 10+ seconds to fully load, 2 seconds is fine. For me, 5 seconds is when I start getting a little bit worried about loading speed; less than 5 seconds is acceptable. Hacker News works only a little bit faster.

That said, I live in Kazakhstan and my typical ping to the EU is 100 ms, so that might make my experience a little bit worse (it's 100 Mbps in theory). Maybe those who connect to those resources with 1 ms latency are getting a much better typical experience; I don't know about that.


My guess is that Facebook is popular enough that they are not optimizing for first impression.


That's true, but they (apparently) used to care deeply about how long it took you to post something.

They've probably just accepted that consuming is more popular than producing, though.


Let’s go on this tangent, why is Reddit’s frontend so sluggish and lacking quality of life improvements that should be in place by now?

Surely they are hiring world-class devs, so what’s holding them back?


It's by design. Reddit doesn't want you using the website. They want you to download the app. As numerous annoying popups and notifications will tell you when visiting the website (especially on mobile).


This. Their UX is hostile to browsers. I suppose they make more money because ads are harder to block in the app than in the browser, and much more data can be mined as well.

I recommend everyone the old.reddit.com experience while it's still available. There are browser plug-ins that force the subdomain everywhere on Reddit.


Yeah they sure are driving me to the app. The Apollo app that is.


> Surely they are hiring world-class devs, so what’s holding them back?

Facebook Engineering has a notorious "not invented by me" culture. It's not unique there, but a lot of our "world-class" engineers are just acting economically rationally and hole-digging on some new bespoke framework or tool to cement their position in the company. You end up with a massive amount of churn and over-engineered vanity projects, and it's manifesting downstream in basically every product we've turned out for the last five years. That's why the applications are bloated and terrible.

The joke inside the company is that it used to be "move fast and break things" but now it's "breakfast, vest and move things around". It's really an engineering culture of decadence and waste these days.


There are some brain-dead behaviours on Reddit's site. They do a JS render-then-fetch, which is the worst way to load data. They also seem to stick the data fetch inside a requestAnimationFrame, which means it only runs for foregrounded tabs. This is basic stuff; I don't see how this could be an accident.
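
For illustration, a minimal sketch of the pattern being described (the endpoint and element are hypothetical). Browsers pause requestAnimationFrame callbacks in background tabs, so a fetch queued this way doesn't even start until the tab is focused:

    // render-then-fetch: the JS shell renders first, then asks for data,
    // adding a full extra round trip before any content can appear.
    function loadComments(postId) {
      // rAF callbacks don't fire for backgrounded tabs, so a link opened
      // in a new tab stays empty until you switch to it.
      requestAnimationFrame(() => {
        fetch('/api/comments?post=' + postId)
          .then((res) => res.json())
          .then((comments) => {
            document.querySelector('#comments').textContent =
              comments.map((c) => c.text).join('\n');
          });
      });
    }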


> surely they are hiring world-class devs

What makes you think that?

Reddit seems like a place where the kind of experienced and talented people needed to turn it around could make a lot more money (via stock grants in addition to salary) and frankly have a lot more impact, at any of FAANG.

I've not seen anything to indicate that Reddit is hiring, or trying to hire, "world-class devs".


Well first, don't just blame the engineers. The company has to actually prioritize making a functional product.

Ultimately, reddit made no attempt to lean into the thing that might attract world-class people to come there: a passion for the product. Or, at most, any attempts made were surface level. Some of the best engineers I've worked with are at reddit. They just happen to be outnumbered, and some have golden handcuffs on.


Angular


I guess it's optimized for some computers/networks but not for others. Right now I'm in an area without optical fiber, so I have to use a 4G modem (12 MBps). I have an old 2013 MacBook Pro, and even though my setup is far from fast, I have no problems with most web pages; a few load kind of slow, but Facebook is in a different category: it's totally unusable, and some stuff never even gets to load. If I want to check Facebook I have to use Safari (the new theme is not supported in Safari yet).


New Reddit and all these JS-based websites are painfully slow for me on Safari on a modern MacBook despite being on an enterprise connection with 1Gbps and a few milliseconds ping to most of Europe. Bandwidth is not the only issue, processing all those megabytes of JS and (poorly) reimplementing browser behaviours in it is the main problem.


Try it with a normal computer for normal people, not with your monstrous 'developer machine'.


Just tried it on my 2012 $1k laptop. 2s for full page load; about 1s to load up a chat, about 1s to open a friend's profile.


Do you mean a 2007-era CPU, constantly throttling because of kilograms of dust, on Windows 7, full of malware, swapping to an HDD?


I'm using a 1st gen Surface Book 1. Dual core, 6th gen i7. Not an awful computer but not a monster either. Seeing the other responses in this thread, I think it may have to do more with network bandwidth.


Are you using desktop or phone? I've found the desktop quite slow lately. I thought it was probably due to high usage.


Desktop, for Facebook. I've tried to avoid installing their phone apps for privacy reasons.


I even used network throttling to do Fast 3G and it's still taking like ~3 seconds at _most_ for everything.


I may be the only one here that thinks Reddit’s new design is fine now. At first, it was buggy and sluggish. Now it is fine.


[flagged]


The jQuery days make me like React, but it's not because I like SPAs; it's because it gives things an intuitive and reusable structure that I can organize and maintain much more effectively, and that can be pretty much the same server- and client-side. React is even a comparative pleasure to use for building static sites.

I've always assumed SPAs were partially driven by it historically being difficult to combine server-side and client-side rendering with a single shared code base; so much of the early stuff ended up with people essentially having to maintain two entire sites, front end and back end, rather than just one.
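
That difficulty is mostly solved now. A minimal sketch of the shared-codebase approach with 2020-era React (an Express-style handler and an App component are assumed):

    // server bundle: render the same component tree to plain HTML
    const React = require('react');
    const { renderToString } = require('react-dom/server');
    const App = require('./App');

    function handler(req, res) {
      const html = renderToString(React.createElement(App));
      res.send('<div id="root">' + html + '</div>' +
               '<script src="/client.js"></script>');
    }

    // client bundle: the same App "hydrates" the server-sent HTML,
    // attaching event handlers instead of re-rendering from scratch
    const ReactDOM = require('react-dom');
    ReactDOM.hydrate(React.createElement(App), document.getElementById('root'));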


people have diverse experiences -> "I really have no clue what they do for a living"

That escalated quickly.

I'm not even sure they weren't trolls like you seem to think, but having that line of thought by default (because you are also unlikely to have any evidence for your theories) is just sad.


Quite sincerely, it's a total failure. I got the chance to try the new interface, and it's so slow that it's barely usable. It's even slower than the old website, which was already painfully slow.

Loading a random profile takes 8 seconds. Opening a Messenger discussion takes 6 seconds. It reminds me of the new Reddit website. Facebook was more enjoyable to use 12 years ago.

It's really sad that in 2020, 10k+ engineers can't make a photo, video, post and message sharing website that isn't a pain to use. We have collectively failed as a profession. If one needs 2MB of CSS for such a website, there is clearly a problem.


I agree with you; however, the key disconnect is that Facebook is not a photo, video, post and message sharing website. It's a marketing platform intended to extract the most value out of you, the viewer, and transfer that value to Facebook and its advertisers.

If you think of it this way, you can see how you may need 2MB of CSS: to battle the bots trying to scrape your information and replicate your network, to sidestep the evil developers of adblocker software that threaten to destroy the sweet value transfer, the JS required to track every single movement you make both online and off, the A/B testing framework that allows you to determine how to most efficiently extract that extra 0.001% of valuable eyeball time, and so forth...

Connecting the world? Well, I guess that could be a nice side-effect...


Imagine this:

A "free" ad-driven social networking site that brings in gigantic revenue, but that has to pay thousands of high-priced engineers to implement all of the cruft you just described.

versus ...

A subscription-based, non-ad-driven social networking site (perhaps operating as a member-owned cooperative?) that brings in much more modest revenue but that also can operate with many fewer engineers because it can be largely cruft-free.

I know there have been a gazillion attempts at the latter and none has succeeded in any way comparable to the "free" sites. It's too bad, because if any of them were to ever achieve Facebook scale, the subscription price would probably be quite modest.


I think the biggest assumption you're making here is that most people care about ads, and they don't. Fundamentally, when I'm scrolling through Facebook or Instagram or Reddit or whatever, I just don't care that I see ads while I'm scrolling. I'm not going to pay $1/2/3/4/5 a month to avoid something I don't care about, and that's really the only value add of a subscription-based service. I'd also say most folks don't find Facebook's data policies as egregious as the tech community does.


I'd really like to understand how one does not care about ads. To me they're like potholes in a road. It would require terrific willpower to ignore them.


I'm not interested in the ads, so I don't notice them if they're not shoved in front of what I am interested in.

I can see that if you consider ads to be inherently offensive, you would notice them wherever they appear, and be annoyed.

But that's not where most people are.


People do. My dad doesn't install an adblocker because he simply doesn't care. I wouldn't either if it wasn't for the serious performance impact.


Although it's getting harder and harder since many platforms try to blend their ads in with the actual site content, I can avoid them reliably well by checking metadata such as the poster, some indication that it's an ad, or if the content is clearly different from what I was expecting.

I wouldn't say it's as bad as potholes on the road, since ads are more predictable.


I don't think it's just about the ads themselves. It's about the fact that when a site is free, its users become the product, and there are all sorts of user-hostile design decisions made, which leads to all sorts of unnecessary bloat.

Think of it as the difference between a for-profit bank and a credit union. The bank exists to maximize returns for its shareholders. The credit union exists solely for benefit of its members. So the credit union isn't going to try to employ sneaky fine-print fees, because that's not what the members want.

In a similar vein, you might not care about ads and may have trained yourself to ignore them, but you might care about performance, which is sluggish because of all the cruft, or you might care about privacy, or you might care about being marketed to in more subtle ways than display ads.


Being unwilling to pay for the service suggests to me that the entire experience has no meaningful value.


Economists have done studies where they tried to determine how much someone would need to be paid to stop using Facebook for a year. Maybe people are not willing to pay for the service, but they would rather have the service than $X.


Food for thought: the value of each user is radically different when accounting for geographic (i.e. income) markets.

For subscription to work, you either:

1- undercharge users from wealthier countries

2- price poorer users out of your platform

3- give up on the idea of worldwide adoption (Facebook scale, as you say) entirely

4- attempt to charge different amounts by country of origin, and watch your users cheat the system mercilessly

5- go freemium, and suffer the same fate that news organizations do- find that far too few are willing to pay to go ad-free, stick ads back into the free version, and end up leaking data anyway

I suppose 4 might be the most feasible option, but once it is obvious that some people pay more for the exact same value, they are likely to assume that the product has less value than it actually does, feeling that they are being ripped off.

In short, there is probably a good reason that paid services will never reach Facebook scale.


6. Option 4, but just relax about the outcome.

At ardour.org, we offer 3 tiers of subscriptions ($1, $4 and $10 per month) named to target different economic conditions. We also offer a single fixed payment with a suggested but editable cost based on OECD data about the cost of dining out.

We're not trying to maximise revenue, which is perhaps an important difference between us and, oh, Facebook :)


What you argue here could probably be said about half the companies admitted to any YC batch, and still there are successes. Maybe the glass can be half full too?


I think the catch is focusing on "Facebook scale" in terms of users on a social network. I am not aware of any YC companies at close to their scale (or potential to get there) without being ad driven, freemium or running on borrowed money.

Small social networks are fine for what they are and, I think, have much more flexible options for getting the bills paid.


> A subscription-based, non-ad-driven social networking site (perhaps operating as a member-owned cooperative?) that brings in much more modest revenue but that also can operate with many fewer engineers because it can be largely cruft-free.

I, too, loved App.net. Alas, seems that people won't even pay a couple bucks a month to see what their friends are eating for lunch.


People instinctively know the value of something when asked to part with their money. :D

Yet they still spend their time on it. Huh.


People know their friends are not on App.net, but are on Facebook.

Or in other words:

https://en.wikipedia.org/wiki/Metcalfe%27s_law


People just don’t want to pay for social networking platforms. We’ve been conditioned to think that these things should be free.


Perhaps it's just a figure of speech, but I feel you should be challenged on the use of 'been conditioned to'. I see conditioning misused as an explanation all the time in tech discussions. This isn't an example of conditioning, but of anchoring, which if you want to view it through the lens of learning theory is an example of modelling rather than conditioning. But really it's better understood through the lens of behavioural economics. A value has been established and normalised, and other values are judged against it.

The reason I bring it up is that in changing user norms, it's much less helpful to think about them in stimulus-response terms than in terms of modelling other users' behavior, which itself is guided by limitations on our capacity to process information, and by heuristics which, though adaptive, are poorly adjusted to the modern media landscape. So people didn't flock to facebook initially because they were conditioned to prefer it; they used it because, in addition to offering a peek into the lives of others, it was cool. Conditioning is an important part of why people become addicted to the reward loops of social networking sites, but it's an insufficient explanation for their appeal, and often misapplied.


I think social media is as social as sitting in a bar where everyone tries to get you to listen to their stories and look at their photos. To respond to someone in the bar you can send a text, which they might respond to somewhere in the future. Therefore, I am not planning to pay for it, since it is not adding (much) value to my life.


Well, they are free, so that probably contributes to that perception.


> I know there have been a gazillion attempts at the latter and none has succeeded in any way comparable to the "free" sites. It's too bad, because if any of them were to ever achieve Facebook scale, the subscription price would probably be quite modest.

MeWe is freemium (paid extra stickers and storage) and is actually nice, at least the parts I've seen, a lot like Google+ - a friendly neighborhood full of photographers, chili enthusiasts etc.

Of course most people are going to go with twitter, but if you'd like something more like Google+ or what Facebook could have been you might want to try MeWe.


Facebook's problem is that a subscription fee would drastically limit adoption and prevent them from monopolizing the social graph. They see more potential to make money without charging users directly.


> the subscription price would probably be quite modest

Yes. It would be set to the minimum required to dissuade banned users (e.g. spammers) from continuing to create new accounts. This amount would still likely be well above cost.


These views are aligned. A site that's fast and a pleasure to use advances both agendas. It's still a failure.


Only if there's competition. The network effect ensures that there is no competition. Leaving us where we currently are.


Not even, users can just 'not play' (or play less) if the game's no fun.


There are many users for which it is not fun, but an addiction. I have spoken to lots of people who say about once a month something like "Yeah, facebook is really bad for me, I just waste time and get upset", but they can't stop checking it every hour, and responding to posts that touch them emotionally, in either a good or bad way.

That's by design, of course - it benefits Facebook greatly that its herd is addicted, and unlike people addicted to alcohol, nicotine or other substances - there's not even another supplier they can turn to: It's either feed your addiction or suffer withdrawal symptoms.

And I think the success rate of quitters (as a percentage of those who actually want to quit) is also comparable, at single digit percent.


That's probably something that can be measured: if the profile/wall fills out over several seconds, when do the ads appear? First, before everything else? In that case it would be cynical, but I agree that they might monetize the delays by ensuring ads appear before anything else.


The ads are only on the side on profile pages.

The actual ads on FB desktop are the newsfeed ads, which you see as you scroll.


Unfortunately, you're right.


I guess they probably could build a nice and fast website if it were up to them. But there are probably a lot more requirements than just that.

Things like https://twitter.com/wolfiechristl/status/1071473931784212480... are probably not decided on and implemented by the engineering team but are coming down as a requirement from the top. This is probably the case for a lot of other decisions that slow down the page ("We need this A/B test framework", "this needs to be hidden to increase retention",...)


They already did: https://mbasic.facebook.com.


For me on FF this is only a tiny bit faster at loading the main page, but way less readable, and pictures are so small you have to open them to see what's in them. So not a net positive result, imo.


The 90's version of almost every website, like mbasic, is better. More content, less noise. Mbasic could still use a flaming <hr> tag, though.


> The 90's version of almost every website, like mbasic, is better.

I mostly disagree and assume this is slightly hyperbolic, but I take your point. The web used to be documents, even for things that needed to be apps (like email). Now, the web is apps, even for things that should be documents.

Beyond that, the web was not the capitalist/SEO/marketer battleground that it is today, where many sites have so many costs behind them (devs, videos, etc.) that they need a boatload of ads just to try to stay in the black.

In the 90s, you could have a very popular message board with millions of pageviews a month for $30/mo.


> In the 90s, you could have a very popular message board with millions of pageviews a month for $30/mo.

You can still do that. A bare-metal dedicated server with a 4-core CPU, 32 GB of RAM and SSDs can be rented for that price with unmetered bandwidth, and it'll be more than enough to sustain that level of traffic.


>Things like https://twitter.com/wolfiechristl/status/1071473931784212480.... are probably not decided on and implemented by the engineering team but are coming down as a requirement from the top.

I've always suspected that the div-itis plaguing FB's website is a result of React's dependence on the 𝚘̶𝚟̶𝚎̶𝚛̶𝚞̶𝚜̶𝚎̶ misuse of higher-order components.


It's not.

HoCs don't add nesting; if they do, you're doing it wrong.

React <15 (2 yrs old) did nudge you in the direction of div-itis, because every component that rendered something had to render a single root element. The more recent versions did away with that constraint. HoCs don't even need to return markup; they're functions which return functions. The general expectation should be that an HoC behaves like a factory for the components it wraps.
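
A minimal sketch of that factory pattern (withUser, UserContext, and Profile are hypothetical):

    import React, { useContext } from 'react';

    const UserContext = React.createContext(null);

    // An HoC is just a function from component to component. The wrapper
    // renders <Component /> directly: no extra element lands in the DOM.
    function withUser(Component) {
      return function WithUser(props) {
        const user = useContext(UserContext);
        return <Component {...props} user={user} />;
      };
    }

    function Profile({ user }) {
      return <h2>{user ? user.name : 'anonymous'}</h2>;
    }

    const ProfileWithUser = withUser(Profile); // renders exactly what Profile renders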


So every React app can be effortlessly refactored to use whatever shiny new latest and greatest architecture abstraction React comes up with?


No, and FB themselves discourage this in their documentation.

Regardless of upgrade paths, React <15 would still let you use HoCs w/out adding excess element nesting.

I don’t contest that the older React tended towards div-itis if you weren’t careful with how you used it. But this thread was about Higher Order Components (or wrapper functions), which don’t have any inherent effect on nesting.


I'm not sure why FB's site has unnecessary DIVs... But to defend React: HOC and render-prop techniques don't have to output DOM. In other words, not every React component maps to a DOM element.


The extra divs are an obfuscation technique, intended to frustrate scrapers and browser extensions.


React components aren't a 1:1 mapping to the DOM, so you could in theory have 50 HoCs wrapping a single component and it still only output one div or whatever.

Also, HoCs have somewhat fallen out of favour over time, with hooks and the child as a function/render prop style becoming more popular. I think the only HoC I consistently use these days is `connect` from `react-redux`.


You can use hooks with redux now, so what’s the purpose of using connect instead of useDispatch or useSelector?


Two big reasons: values passed via connect can be validated with propTypes, and by using connect you can mount your component without redux. That may make sense for building shared components that are used with redux in one part of your app but without redux in another. There are a lot of smaller reasons to prefer connect, but those are the big ones for me. FWIW I use both approaches depending on needs.
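
A minimal sketch of the difference, using a hypothetical counter:

    import React from 'react';
    import { connect, useSelector, useDispatch } from 'react-redux';

    // Hooks style: the component reads the store itself, so it can only
    // be mounted under a redux <Provider>.
    function CounterWithHooks() {
      const count = useSelector((state) => state.count);
      const dispatch = useDispatch();
      return <button onClick={() => dispatch({ type: 'increment' })}>{count}</button>;
    }

    // connect style: the inner component is plain props-in, markup-out,
    // so it can be reused without redux and checked with propTypes.
    function Counter({ count, onIncrement }) {
      return <button onClick={onIncrement}>{count}</button>;
    }

    const ConnectedCounter = connect(
      (state) => ({ count: state.count }),
      (dispatch) => ({ onIncrement: () => dispatch({ type: 'increment' }) })
    )(Counter);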


Since the <Fragment> component was introduced (React 16?), excessive wrapper divs are fortunately no longer necessary!

Edit: not sure they ever were, actually; I think you could just return props.children in most cases?
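
A minimal sketch of both options:

    import React from 'react';

    // With a fragment (<>...</> is shorthand for <React.Fragment>),
    // siblings are returned without any wrapper element reaching the DOM.
    function Cells() {
      return (
        <>
          <td>Hello</td>
          <td>World</td>
        </>
      );
    }

    // And, as the edit suggests, a component that only forwards its
    // children adds no wrapper either:
    function PassThrough(props) {
      return props.children;
    }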


All these things can be true and yet people will still frequently misuse them.


Heh, just yesterday I was trying to click on a Zoom meeting link for a Facebook event, and it wasn't working for whatever reason, so I tried to pull the URL out of the page with the inspector, and I saw the same thing: hopelessly deep nesting of the element, with the URL hard to find and its text version obfuscated... and that's not even an ad!


> Things like ... are probably not decided on and implemented by the engineering team but are coming down as a requirement from the top

Yeah, welcome to the real world. We _all_ have to handle requirements like that, except maybe when we build our portfolio site


The problem there is that '10k+' "engineers" are trying to make the same 'photo, video, post and message-sharing website'.

It's a structural problem and little more: the website (and app) is their main money-maker, so they're going to give it a disproportionate amount of resources.

Imagine you hire ten thousand people to lay one railroad track. [note; see end of post] If any single one of them doesn't contribute directly in some way, you'll fire them. This seems kind of strange, doesn't it? Sure, it probably requires more than a single person to lay a track. But ten thousand people to lay one? How is that supposed to work, mechanically? This would be enough to warrant shareholder revolt.

Now, the railroad track gets broken a few hundred times, maybe they hammer it enough to make it twice as long, whatever. It now no longer resembles a railroad track. Certainly no train could go across it. Send a few hundred people to go ask the managers of this project for a replacement track. Okay, we're now at...maybe a tenth of people having contributed? Repeat this process until everyone's contributed. Maybe the manager gives different groups different materials for the track to fuck with them, whatever. But somehow, every single person manages to not get fired.

What's the outcome look like? You have a single railroad track, probably not even well-fit for the job (sparks fly whenever trains run on it; maybe it causes them to tilt, so on), but it's laid! And ten thousand people are employed!

It's the same thing with a website. You can't put a terabyte onto a user's device every single time they load your website; you just can't. So you have a window of performance you have to hit. Between ten thousand people trying to have things thrown onto user devices? Good luck making anything resembling 'decent'.

It's the same problem that Dave Zarzycki noted in his talk about launchd[1], but worse. Instead of 512 megabytes shared between some random abstract parties you can basically ignore, it's <10MB shared between ten thousand coders, translators, graphic designers, users, managers, etc. Does something seem strange about this?

[note]: This is the appropriate comparison here; at the scale of 'Over ten thousand people working on one program', it's grunt work, not art, science, or even programming. There's a word for implementation-grunts that's fallen out of favor in the past few decades: coders. This was seen as distinct until recently.

[1] https://youtu.be/SjrtySM9Dns?t=255


I don't know how many people FB actually allocates for their main app, but this reminds me of a chapter in The Mythical Man-Month. It is said over 1000 people and 5000 man-years went into OS/360. I don't see it anywhere today.

Instead, the book proposes The Surgical Team: about 10 people taking specialized roles, with the system being the product of the mind of a few key people. I wonder how well this has aged.


Fred Brooks was accurate about most things, yeah. It's hard not to envy him; he got to work with (and write a book with) Ken Iverson. Can you imagine? People back then really had all the luck!

...at least as far as computers go, anyway.


For me the performance seems better. It also seems strange that, if one of their two main publicly stated goals was to increase performance (the other being ease of maintenance), it would slow down. Maybe you have extensions interfering?

Also, the set and scale of features in the Facebook app make it literally one of the most complex webapps out there. It's far more than just multimedia posts + messaging; it's a marketplace, dating, games, apps, groups, pages, and more. Nobody's "failing". And the 2MB of CSS was the "before", uncompressed. The "before" compressed was 400 KB, and this update appears to reduce it to under 80 KB compressed. That's 96% less than the 2MB you're complaining about, more than an entire order of magnitude.

So Facebook seems to be improving here, no? I fail to see what is a "total failure" or "clearly a problem".


I'm not sure what your setup is, but 8 seconds to load a profile is not my experience. It takes less than a second here.

This is an anecdotal datapoint that is insanely useless in the real world, but the fact that it is the top comment is typical of this site.


Really? I really like it. I almost solely use Facebook to organise blood bowl matches, so it’s a lot of group chats and events, and it’s so much better than the old design.

I haven’t noticed it being slower either, it’s certainly not fast, but it’s not really something I notice either.


Sorry, I won't take the blame for that.

I still haven't found a use case for React/Angular or SASS/whatever.

If I'm guilty of something, it's not recognizing the validity of those tools, as I'm sure they have some.

But 2MB of CSS is simply inconceivable to me.


I agree, React/Angular front ends always seem slow and clunky to me.


Big +1.

I would be really interested to find one, only ONE, website where React/Angular really brings a better experience and a better final product than standard pure JS with a simple Ajax system.


What makes you think React/Angular are products to increase user experience? Of course you won't find that, they are tools for developers to streamline development and make maintenance easier. Have you worked on a platform that uses nothing but pure JS and fetch calls? I have, and there's no way to make sense of a project that has to deal with so many things.


>they are tools for developers to streamline development and make maintenance easier.

I'm not convinced they are successful at that either.



Why is it downvoted? This link just shows you can create a basic Twitter-like application in 10 minutes with the right tool. Imagine how many hours/days it would take with any JavaScript framework!


Probably because a Twitter clone isn't even close to a complex application. You can build one in React/Vue just as fast as you would without them.


Yeah, it's a common trope. Everyone can create a Twitter/Facebook/Instagram clone in a week or so of intense work. But that's not actually the hard part. What's hard is getting millions of users in the first place and scaling to that magnitude. In that order.


Anytime you need to stream data to or from a client, like audio/video or live data, single-page apps give a much better experience.

From an architecture standpoint, it also allows a much simpler separation of concerns.

That said, it does get used more often than it should, and it can take more time to build than a multi-page app.


Fucking gmail, for example.

The new SPA version is far, far slower than the old HTML version, and uses a truly insane amount of memory. I have 2 gmail tabs open, and according to Firefox's about:performance page, one is using 140mb of RAM, the other is using 95mb, and both are at the very top of the list in terms of CPU usage. Above even YouTube in both CPU and memory, which is itself fairly bloated.

It is absolutely disgraceful.


Google Cloud Console felt slow to me too. And the user experience could sometimes be better. For example: I set filters on a table of items and clicked Next many times to paginate to the 10th page. Then I accidentally clicked on an item, and its detail page showed. Clicking the browser's Back button took me back, but the filter was cleared and I was on page #1 again. State was not persisted, so I had to start filtering and paginating again. I am not 100% sure, but I think it was the Quota listing. The old, document-style pages used to hold state!


Humans have been using physical and mental systems to manage complexity for eons, I simply can't understand this argument. React doesn't automatically fix spaghetti code if you don't know how to organize a platform-sized code base efficiently in the first place.


This is like saying "cabinets don't help organize a kitchen if you don't put anything in them" -- I mean, duh, you have to use it right. That's not an argument to have no cabinets.

There's a reason nearly ALL major web applications rely on a _framework_....rails, django, laravel, you name it. These exist because it's really hard to organize vanilla code without a framework. React and FE JS are no different.

If you're arguing against using a FE framework to organize FE code, you're basically saying "We don't need frameworks in general! All code should be inherently organized!" That's not realistic. It's just not feasible when you have a large project.


> This is like saying "cabinets don't help organize a kitchen if you don't put anything in them" -- I mean, duh, you have to use it right. That's not an argument to have no cabinets.

Good counterpoint. The parent comment sounds too much like the old "TRUE programmers don't use data structures" meme.


Yeesh, you've badly misinterpreted my comment. Re-reading it, I'm partly to blame here.

>That's not an argument to have no cabinets.

This isn't the point I was trying to make. I didn't mean to piggy back on this part of the parent comment: "Have you worked on a platform that uses nothing but pure JS and fetch calls?"

I meant to respond to this part of the parent: "they are tools for developers to streamline development and make maintenance easier." I often find that the most ardent React fans will see "not React" and jump immediately to "spaghetti of vanilla js and fetch calls," with no further questions asked.

I'm trying to argue against React dogmatism, I'm not arguing in favor of "no framework" dogmatism.


Well, if we take a step back from React and talk about a sort of data-first approach to UIs, I think they are definitely much easier than the old jQuery era.

Mutable state, notably knowing all possible variations of said mutable state and how it relates to everything, is very difficult imo.

Is React the best implementation of this? Definitely not. It will evolve as time goes on. But I don't think I could ever manage state in the jQuery mutate-everything model again.
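
A minimal sketch of the contrast, using a hypothetical on/off toggle:

    // jQuery style: the state lives in the DOM, and every handler must
    // remember to update every element that reflects it.
    $('#toggle').on('click', function () {
      var on = $('#status').text() === 'on';
      $('#status').text(on ? 'off' : 'on');
      $('#toggle').text(on ? 'Turn on' : 'Turn off');
    });

    // Data-first style: the state lives in one variable and the markup
    // is re-derived from it, so the two can never drift apart.
    function Toggle() {
      const [on, setOn] = React.useState(false);
      return (
        <div>
          <span>{on ? 'on' : 'off'}</span>
          <button onClick={() => setOn(!on)}>{on ? 'Turn off' : 'Turn on'}</button>
        </div>
      );
    }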


To be clear I'm not advocating for React on every possible use case. In fact I've grown weary of it over the years, but not for the reasons the comment I was responding to pointed out. My point is that React makes the process of organizing frontend code easier, not that it's the only way to do so.


I've used react for internal administrative tools for configuring UI, and the ability to do a real-time preview of what the UI will look like based on how you've configured it is pretty useful. Also used it a number of times to build form inputs that aren't natively supported ("list of objects all with the same properties where you can select and make a change to a bunch at once" sort of thing).

Basically I've found that if you're doing something useful which could be done with jquery but would probably have had subtle bugs due to a combinatorial explosion of possible states, you can usually use react in that context to make a cleaner, faster, and less buggy version of that same UI that is faster to develop and easier to reason about (and thus better for the end user, since software that works consistently is more valuable than software that mostly works as long as you don't breathe wrong near it).

If you're looking for examples where a single-page app is better than server-side-rendered HTML with some javascript sprinkled in to make certain components more interactive, though, I can't help you. The successful use cases I've seen for react are of the "use it as the way you sprinkle additional functionality into your server-side-rendered HTML" type.


Better experience for whom? For the developer, it's a way better experience. For the consumer it often isn't, because of inefficient bloat, but if effort is put into packaging it sensibly, it can be better thanks to build pipelines/optimizations. In my experience, there are a lot of apps that would never have gotten written in the first place without the boost from React/Angular/etc. It simply takes way longer (which also means more expensive) to use "pure JavaScript and Ajax."


I've been doing FE development on and off since IE5/6, starting out with pure JS and Ajax, and in my opinion the modern FE developer experience using the major frameworks is a generational improvement over the messes we used to write. There are still frustrations, but it's a lot better than it used to be.

Browser APIs and CSS have improved drastically since then, so pure JS and Ajax isn't as bad. I still avoid frameworks for small things just to keep pages lightweight. But for heavyweight projects, if you don't use an established framework, you just end up with a shitty home-grown framework anyway, because the alternative, teams of developers working on the same site with no framework at all, is even worse.
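
For a sense of how far the built-ins have come, a minimal sketch (hypothetical endpoint):

    // The IE-era way: wire up an XMLHttpRequest by hand.
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('out').textContent = xhr.responseText;
      }
    };
    xhr.open('GET', '/api/data');
    xhr.send();

    // The modern equivalent, no framework needed.
    fetch('/api/data')
      .then(function (res) { return res.text(); })
      .then(function (text) {
        document.getElementById('out').textContent = text;
      });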


Agree completely. I'm amazed at how good pure JS is these days. I only use React when I have a non-trivial UI that I need to write. It's just not worth it for simple stuff. That said I don't do a ton of FE work these days, so my opinion is less relevant.


For a certain kind of developer who drinks a certain kind of Kool-Aid, sure. Personally I am a lot less productive when I’m forced to work on a React codebase. It’s kind of like using an ORM, it can feel like it makes things easier, but really you end up fighting with the abstraction more than if you had just learned the underlying technology itself.


The thing is, every place I've worked that was against ORMs or frontend frameworks wound up evolving their own half-documented, half working framework. I find it's easier to spend as much time learning the abstraction as the underlying technology, and then that knowledge can be ported from system to system.


My experience too. The whole "we don't use a framework" sounds great, but it always turns into some sort of home-grown framework that is poorly documented, has bugs, and isn't open source, so you can't take it with you when you leave.


Related: "I don't watch or even own a TV" [watches Netflix/YouTube on a laptop six hours a day]


You also miss out on the community aspect of things. Working with a framework like React means that there is a worldwide community of developers that you can tap into when you run into a roadblock.

It is incredibly frustrating to run into an issue with a home grown framework and have to ask around only to discover that the person who wrote the part you're having trouble with left the company 2 years ago and no one else understands it.


I can't help but feel like the people that feel this way are people that don't build, maintain, and modify customer-focused web products for a living.


I updated openEtG to use React instead of PixiJS. https://etg.dek.im https://github.com/serprex/openEtG

Fixed some UI bugs, made it straightforward to implement features like better animations & replays, and improved performance by removing stuff like a constant timer for the UI & a global mousemove handler tracking mouse coordinates.


Come on man, real programmers use assembly and roll their own browsers.


Assembly is a needless abstraction. Real programmers create binaries with xxd.


Real programmers use a magnetized needle and a steady hand.


It sounds to me like you may have never worked in a codebase with thousands of JS files. On a major application, organizing code in vanilla JS is all but impossible. I agree it's better for a very small case, and I agree react slows things down, but that's a worthy trade-off for having organized, legible code.


These are parallel things. Frontend frameworks let you build more predictable and solid apps than vanilla JS.

Indirectly it leads to better UX, since the developers can spend more time on UI tweaking than they would doing it all in vanilla JS.


Well, one primary benefit seems to be to make loading indicators/spinners into first-class components.


What simple Ajax system?


That 2MB of CSS is legacy crap, most of which they didn't use. It's a hallmark of Conway's law.


React, Vue, and Angular all use a virtual DOM, which means they'll be deprecated 5 years from now. A virtual DOM is dumb.

SASS, SCSS, LESS, etc are kind of great though. It sucks you have to compile them to css, but you can do this:

    .App {
      .Topbar {
        .Logo { color: green }
      }
      .Content {
        h2 { color: orange; }
      }
    }
Saves a lot of time and effort.
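For reference, nesting like that compiles down to roughly this plain CSS (an output sketch; exact formatting depends on the preprocessor and version):

    .App .Topbar .Logo { color: green; }
    .App .Content h2 { color: orange; }

Same selectors in the end, but you only write the ancestor chain once.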


Angular uses an incremental-dom not a virtual dom: https://github.com/google/incremental-dom


https://github.com/google/incremental-dom#usage

Technically correct. The point is, it's not native.


I'm not a Facebook (the company) apologist by any means, and only use it because there's a few groups on it relevant to a company I own.

That being said, I find the new FB to be insanely fast. I don't even block ads on it.

I do agree Facebook was way better 12 years ago (I saw real updates and photos about friends, rather than companies and ads). But speed right now hasn't been the problem.


I can't decide whether I love or hate the UX of this new stack. It certainly feels more like an app now than a website. I like the new basic layout, "app shell" or tier 1 rendering. It feels like the First Contentful Paint is improved and some random layout shifts have been eliminated. It might take a couple of seconds more to load something, but it appears where you expect it to appear.

On the other hand, navigation and clicking around is still sooo slow. My 60-year-old aunt called me and asked if she needs a new PC because Facebook makes her laptop fans spin like crazy. I couldn't explain to her about all this react-redux-graphql thing and frankly, she doesn't care. All she cares about is that Facebook is slow, and all she does is post photos and talk with friends like she did 10 years ago.


> If one needs 2MB of CSS for such a website

The 2MB was for the old site. The new site loads 20% of that, or 400KB.


Agreed. Using this on the i9 8-core 16" MBP and the thing just isn't fluid. Has anyone at Facebook even bothered testing this on computers people actually use? Like some 2015 Macbook Pro? Or a 2014 Macbook Air or whatever.

I wouldn't even want to know how it runs on those.


I've got a 2015 MBP. FB is pretty sluggish, and I wonder why. When you send someone a list of notifications, it's probably worth getting the data ready in case they click it. And the resource usage is pretty big as well.

The mobile app seems to be just fine though, perhaps they want to push people to use that.


You’re missing their main goal: having an easier-to-change codebase.

Apparently this is way more economically rewarding than performance for Facebook.

With that in mind, who cares if the site is slow (btw, this is the only complaint in your rant). If the software requires only a few devs to change and a few eyes to maintain, they can literally scale as much as they want. And actually, now they're probably in a way better position than they would be if they had developed a super performant but unmaintainable site.

The quote "premature optimization is the root of all evil" is still very much valid imho.


> It's really sad that in 2020, 10k+ engineers can't make a photo, video, post and message sharing website that is not a pain to use.

Too many cooks spoil the stew.

I might even go so far to say that 10 engineers would have a larger chance of success than 10k+ engineers.


This was my experience at Facebook. Attempting things with a small team (or heaven forbid by yourself) was heavily frowned upon because it didn't justify manager and director salaries. As a result you ended up with poorly performing, over-engineered code bases that preferred complex, expensive systems that would take multiple teams to build, but for whatever reason complexity that would improve performance was frowned upon. I'm sure this is common at many big tech companies. I didn't work on the mainline FB app but it seemed like part of the culture.


When were you there, if you don't mind me asking?


The 2MB of CSS is needed to justify that headcount.


Where are you getting 2MB from?


"On our old site, we were loading more than 400 KB of compressed CSS (2 MB uncompressed) when loading the homepage, but only 10 percent of that was actually used for the initial render. We didn’t start out with that much CSS; it just grew over time and rarely decreased. This happened in part because every new feature meant adding new CSS."


Read the first four words.


It really is appalling. I'm on a top of the line laptop with Gigabit internet and I can't do anything on Facebook without waiting several seconds for loading. Usually I only open it to check notifications. I just refreshed and counted and it took 9 seconds for the page to load and to show my notifications.


At a company I used to work for, we worked so hard to make sure our web app would load extremely fast… just to end up losing the battle with the data and analytics team over analytics scripts. They used a tag manager (Tealium), which by itself can be used for good, but it ultimately gave the other team the ability to overload our site with third-party scripts.


Twitter's new design is pretty fast. Sure, it does a lot less than Facebook but it's using similar modern SPA tech.

Facebook is still pretty slow even on a Ryzen 3900X with 32GB of 3600MHz RAM. It's a lot better than it used to be though.


Yeah, I think that Twitter did a pretty good job. After the initial load, it even works offline, so the actual API calls are the only thing that it's fetching over the network.


After the twitter update, I can't seem to get the initial load of an individual tweet to work. Ever, on any device, on any network. I encounter this issue on my laptop (on both Windows and Ubuntu), on my desktop (also both Windows and Ubuntu) and on my Android phone. It doesn't matter if I'm logged in or not, I always get "something went wrong" when I load the page and have to refresh at least once.


It is not a failure of the profession. There are engineering teams out there that excel at software performance. Granted they may not have billions of users. It is a matter of mindset and core values and those are hard to change.


Facebook was more enjoyable to use 12 years ago

We'll see what the data show. I have been reading comments about Facebook's supposed decline for as long as I've been aware of Facebook and yet their published numbers continually show greater engagement. https://jakeseliger.com/2018/11/14/is-there-an-actual-facebo...


The unquestioning supplication at the altar of 'engagement' (a sterile marketing term if there ever was one) is what led to where we are now in the first place. This is an affliction that pervades the entire consumer internet sector, but the folks at Facebook seem to have refined it to its fullest potential.

The other day I got a facebook notification on my phone, which said something along the lines of "You have 4 new messages". Of course, thinking it was from my friends I opened the app to look at them. 3 of my 4 "messages" were notifications for friend requests from people I had never met. The last one was a photo someone had posted of a cake she'd baked (not to me specifically, just in her feed). To someone sitting at her desk at facebook, looking at an engagement metrics chart, the notification would seem to have served its purpose - another data point, another person enticed to open the app in response, engagement maximized. But of course, this was deception. I found this experience distasteful enough to disable notifications entirely - probably another data point for their metrics team - and annoyed enough to complain about it in an HN comment.


The new "Person X has posted a photo" notifications are the worst. Their abuse of the notification icon is getting ridiculous. It used to be focused on when someone interacted with something you had done, now it's just used to drive "engagement".


I've seen people in the past make the mistake of correlating "Enjoyment" with "Higher Engagement", but you want to be really careful there.

For example - Flame wars increase engagement, even if people feel drained and frustrated afterward.

I understand why it's a useful metric - It's particularly valuable if your business model depends on time-on-site to sell ads.

But I wouldn't recommend them as a proxy for enjoyment by any means.


There are a lot of fake profiles though, and I think a lot more than they're prepared to admit. Even brazen binary options trading scam profiles don't get removed - it appears that they're happy as long as the numbers are going up.


They're trying to police 2.5 billion accounts with 45 thousand employees (including HR and developers). I'm not surprised they suck. Don't get me wrong, it's not OK for them to suck, but I'm not surprised given the 55,555:1 accounts:staff ratio.


>and yet their published numbers continually show greater engagement.

Do you think this might have anything to do with the fact that, as an advertising company, it's crucial that they are able to tell companies that engagement is increasing?


I would be very surprised if engagement wasn't down among Americans under 50 – it may well be counteracted by growth in other markets and in other apps (especially Instagram), but there's _much_ less activity on Facebook.com from my peers than there was five years ago.


Engagement isn't just driven by "enjoyability", so that's not a particularly convincing counter-argument.


Looking at the internet today, I think we need to lower our expectations and be realistic. At least in the US.

We are still driving 60mph on freeways and what trains we have do not travel at 300kph.

Perhaps many of us flipped out when we only had 9600 baud modems, but you could get up, brew some tea, walk the dog, or read a book while waiting for a page to load. We all had so much more patience back then.

Why do we need instant gratification with FB and other social media? Maybe, or maybe not /s.


Because the underlying computing technology has gotten so much faster, unlike the case with cars and roads.


And computing technology is not beholden to oil/car interests and NIMBYs.


Let's not even get started on their mobile apps: horribly large, poorly engineered for performance, and full of privacy loopholes (perhaps by design).


Comments like these remind me of an old post on Slashdot, "What makes a good website?" Ask "geeks" what they prefer, and it's usually minimalism, no images, consistent text styling. In the end, the ideal format becomes a text file without markup. I think we need to accept that the opinions of techies are increasingly irrelevant in tech. It's like being a fine artist getting paid to design flyers, or a chef making burgers.


Everyone hates slow webpages. Not just geeks. We can all argue over whether minimalism or eye-candy is preferred. But if your site feels like running in mud, it's frustrating regardless of the design.

And all these SPA, client-side rendered, sites seems guilty of this. You navigate to a page, and it loads up "instantly", except you see nothing but gray placeholder images. Then content starts loading in, but haphazardly. You see a link you want to click, and you go to click it, when BAM! it jumps down 37 pixels because some stupid widget just loaded above it on the page.

I really hate the modern web. Not the look of it, or the styling. The mechanics and slowness.


I hope we someday enter a period of reform where our field begins to apply more rigor to our process. We have the tools to make a fast, sleek web. The process is very, very broken from an engineering perspective. The money seems to never stop, which is why businesses can burn through developers without too much regard to whether they are doing the right things. Maybe that can't be fixed, because the people in charge are going to keep the status quo going if the money is coming in and they're made to look good. Optimization is a risk that companies of even modest size find to be too great.

Or perhaps people involved in a revamp/redesign/unification should be well compensated to the point where they are unlikely to leave in the middle of a multi-year project. I have a feeling based on my experience that a lot of these bad rebuilds are a result of too many engineers and designers coming and going.


But, perhaps ironically, Slashdot was killed because its design updates made it more "designer-y" but much less usable. I remember one update in particular that added a ton of whitespace, gave it a "cleaner" look I guess, but it meant there were like 1/4 of the posts on the screen so it just took longer to peruse the comments. They also f'd up how they showed voting so it became much harder to just scan for popular comments and valuable discussion. I remember going to Slashdot pretty much daily and after that update just said screw this, will have to find something else.

Design updates can be useful, but just like for engineers, "beware lots of highly paid people looking for something to do".


Yea, I left Slashdot after The Great Redesign, and other websites too for similar reasons. It seems to be this inevitable milestone in any website's (or, generally, software's) life:

- V1: Focused, works, fast, lacks some features, but good enough to grow

- V1.1: More features, still performs well, exponential community growth

- V1.2: Adds chat, messages, social, loses focus, performance starts to suffer, linear or slowing growth

- V1.3: Start loading up with ads, things are getting worse, usage plateaus or teeters

[EMERGENCY! HIRE THE DESIGNERS!]

- V2.0: Huge, unnecessary re-design [1] without community input. Most features gone. More ads. Community craters. This is the Fark.com "You'll get over it" phase.

- V2.1: Saturated with ads, founders have moved on, site is on autopilot, a shell of what it used to be.

1: https://www.youtube.com/watch?v=YnVeysllPDI


Wrong. The average user does not give a shit if the page is rendered server-side or if it's a SPA.

Geeks prefer speed, like everyone. There are plenty of papers showing that a reduction in latency improves the conversion rate. And it does not have to be ugly to be fast.


Performance need not be coupled to the idea of minimalist design. A performant website can still look like Facebook and not take 5000 ms to load.


First, that's just wrong.

"Geeks", as a class, will tend to focus on technical issues before aesthetic ones. Looking at that fact and immediately equating it to an absurd extreme is a fun game, if you don't care about describing reality. I know some accomplished engineers who are also good designers, and vice-versa.

Second, if my professional opinions are not being taken seriously, that usually means one of two things: I'm too far out over my skis, or am in the wrong place with the wrong people. Especially so if you feel like a chef making burgers.

Of course, if "in total control" and "irrelevant" are the only two states of being one sees, I suppose I see how you get there.


Geeks prefer function over form

Consumers (apparently) prefer form over function (or at least, they are more easily fooled into thinking the more form, the more function)


I prefer form to follow function. Make it as pretty as you can, but not at the expense of the function.


It's Facebook's own standard: "We knew we wanted Facebook.com to start up fast, respond fast, and provide a highly interactive experience." The person you're responding to is saying they haven't met their own metrics for success based on their experience. You can't really attribute this "preference" solely to them.


Can you please compare the speed of https://yang2020.app/events ?

On the mobile web or desktop, either one. (We're running it off of one server, it might get the HN effect, we'll see.)

We have been building our own, open source social networking platform and we have tried to make a lot of things more efficient while doing so. The site I linked to didn't minify any files or optimize images. However, it loads things on demand as needed, and even lazy-loads entire components.

Is it faster than Facebook? We have our own component system, not React.

Here is a site that did minify and combine all files: https://intercoin.org

And here is the platform we used: https://gitub.com/Qbix/Platform (warning: not all of it is documented, but enough, at https://qbix.com/platform/guide).
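For what it's worth, the lazy-loading idea looks roughly like this in plain JS. A minimal sketch with hypothetical file and element names, not the actual platform code:

    // Fetch and run the chat component's code only when the user first
    // opens chat, instead of shipping it with the initial page load.
    async function openChat() {
      const { mountChat } = await import('./components/chat.js'); // hypothetical module
      mountChat(document.getElementById('chat-root'));
    }

    document.getElementById('chat-button').addEventListener('click', openChat);

Dynamic import() returns a promise for the module, so the network request happens on the first click rather than up front.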


I've never been on Facebook so I can't compare but your site responds pretty quickly for me. Also you left out the h in GitHub in your link so it goes to a domain for sale site.


Remember when working at a FAANG company was supposed to be some mark of pride?


All this and the chronological sort mode is still totally broken. Third post in my feed is 3 days old followed by one 2 hours. Total joke.


They really don't want you sorting chronologically. That negates The Algorithm™.


The new design is so laughably bad. I was trying to send a message to someone on the website; it took 10+ seconds to open the chat window, and then it couldn't keep up with my typing (I'm not an especially fast typist either). It was like having a 5 second ping over SSH. This is on top of the (pinned) tab regularly needing to be closed as it slowly takes up system resources.

This is all on my 8core/32gb workstation. I can't even imagine how much utterly useless crap they are running in JS to make that kind of experience.

On the bright side it does mean I am weaning myself off as keeping a pinned tab open is a non starter so I can't just have a quick refresh. And I'll be fucked if I'm installing their apps on my phone.

So I guess thanks need to go to the FB engineers for making their new website so utterly garbage that the tiny dopamine hits driven by the FB algorithms are worth less than the pain caused when using the site.


I wonder if they fixed the bug where if you visit the site with Safari on an iPad, when you try to type a comment, every space becomes two spaces. Also, I wonder if paste (command-v) is also randomly blocked at times.

I use mbasic.facebook.com as much as possible. Occasionally I'll use m.facebook.com. I've had the mobile apps uninstalled for ages.


I still don't understand how the biggest websites get away with being so unbelievably bloated. My guess is that most people have medium to old phones and PCs that are bogged down with nonsense running in the background and facebook, instagram, twitter etc. run extra slow, but I guess people just put up with it.


people just put up with it


> Quite sincerely, it's a total failure. I got the chance to try the new interface, and it's so slow that it's barely usable.

Were you using a machine with a gigabit connection, 32GB RAM, and a 10th-gen Intel CPU like the devs?


I don't use Facebook, but how do you know it's not BE calls that are creating the slow experience? It sounds like they rewrote the FE, not the entire system.


Does the compensation of those "engineers" reflect that they have "failed"? Perhaps there is another way to evaluate the work, not from the perspective of the user waiting in front of a screen. Do not forget that the money to pay the salaries of those who do this work does not come from users.


I don't know if I have the new stuff or not, but I agree that some parts are currently frustratingly slow to use on desktop. I figured it was because so many people are spending much more time on it (including, tbh, me). But I was trying to message with an old friend and simply gave up a few days ago.


Agree whole heartedly, Facebook is a disgrace of a website and has been for years. Crazy how slow it is to load.


I think you could add Twitter and New Reddit to the list as well.


In France we have a Craigslist-like website. They recently moved to ReactJS: https://www.leboncoin.fr/

The website's features didn't change in the process. It's basically pagination + a search based on radius (so DB-related) + name (so DB-related) + categories (so DB-related).

The complete website could be built in pure HTML + CSS and a bit of JS + Ajax to refresh parts of it.

But no, it's built with ReactJS, and it takes seconds to search for a simple item on it.

To compare, just try the same search on the Dutch equivalent, MarktPlaats, https://www.marktplaats.nl/. The experience is way snappier, way lighter, the features are the same, and it's just HTML + CSS + a bit of JS.

We made a mistake with React/Vue/Angular. And we should really go back and stop using those frameworks.


> And we should really go back and stop using those frameworks

I think you are cherry picking.

There are plenty of examples in the modern web of terrible React/etc implementations but that doesn't mean the approach in itself is bad.


> There are plenty of examples in the modern web of terrible React/etc implementations but that doesn't mean the approach in itself is bad.

If the creators of React can't get it right, then what hope is there?


React is bloated and slow, but there are dozens of frameworks other than React.

https://krausest.github.io/js-framework-benchmark/current.ht...


Please name some.


Some good React alternatives?


I've been playing around with lit-element lately and am finding it a much simpler experience overall. Likely a lot faster as well: https://youtu.be/uCHZJy2n8Qs


Lit is excellent.

It's also possible to use htm with Preact for developing without a bundler if your target users support ES6.

https://github.com/developit/htm
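A minimal sketch of that no-bundler setup, based on the standalone Preact build documented in the htm README (assumes the browser supports ES modules):

    <script type="module">
      // htm parses JSX-like templates at runtime via tagged template
      // literals, so no transpile step is needed.
      import { html, render } from 'https://unpkg.com/htm/preact/standalone.module.js';

      function App({ name }) {
        return html`<h1>Hello, ${name}!</h1>`;
      }

      render(html`<${App} name="world" />`, document.body);
    </script>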


What's the reason for redoing a website that provides the same functionality? Why use an SPA for basic website features?


They did the same with the local Czech eBay-like website https://www.aukro.cz/. Previously it was a fast website. The transition to the webapp was painful; their filtering component was not working properly on mobile. It still takes several seconds until the initial white page switches to the rendered DOM. They lost many customers due to this painful transition. I think they fixed some of the problems (I know the filtering component works OK now), but I basically stopped using it since that transition too.


Thank you for this post. I yell at my screen each time I have to use leboncoin.

It was super ugly, but it did the job. Now it's super ugly, but it steals my focus at every opportunity, refreshes parts of the UI I'm about to click, or has select inputs that love to play hide and seek with my cursor. It's a UX nightmare.

I must say their new payment system is nice. But boy do we have to suffer when looking for something to buy now.


Could not agree more. It is barely usable now. The exact opposite of the original spirit. A complete failure.


Agreed. SPAs only make sense for a few websites like Trello; for most other websites, plain dynamic HTML with a dash of Ajax here and there is much better.


You know it's possible to use any of the modern JS frameworks without an SPA, right?


It has 96 .js scripts on a single page.


This used to be a bigger deal before HTTP/2 increased the number of concurrent requests to be virtually unlimited.

Unless I'm missing something, it's "optimal" for a site to have many split files. If their JavaScript were 1 file, a change to a single character would mean the need to re-download every bit of JavaScript. Instead, with 96 files, it would mean 95 of them are still cached client-side and only 1 needs downloading.
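A sketch of how that's typically wired up (this assumes a webpack-style bundler, which is my assumption, not something stated in the thread or the article):

    // Content-hashed chunk names: editing one module only changes the
    // filename (and thus busts the cache) of the chunk containing it;
    // every other chunk keeps its filename and stays cached client-side.
    module.exports = {
      output: {
        filename: '[name].[contenthash].js',
      },
      optimization: {
        splitChunks: { chunks: 'all' },
      },
    };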


It looks to me like these scripts are asynchronously-loaded components that load only once they're needed. In this case it looks suspiciously like they're nested and each script download causes another script to be downloaded once the component renders, which would make HTTP/2 a moot point. I can even watch the requests pile up in the dev tools when they're cached, so my guess is if they dumped everything in one file (or even 10 files, just not 95) they'd get noticeably improved performance.


Twitter and the new Reddit aren't in the same category. Twitter is still somewhat usable.

Reddit, on the other hand, is an absolute clusterfuck from layout and usability perspective. The choices made were not made for monetary gain, they are simply really bad design choices that for one reason or another have not been fixed.


Aah, the Reddit redesign. I still can't use it. I check up on it once in a while, but my 7-year-old MacBook Pro still doesn't like it. It remains slow, regardless of the number of times Reddit claims to have improved the speed.

On the phone it's even worse; a large number of subreddits now require that you use the app... unless you just go to old.reddit.com.

The point of the Reddit redesign still eludes me. Sure, the old design isn't mobile friendly, so I can understand that they would want to fix that. Then again, they mainly use the redesign to push the app. And the new design isn't that mobile friendly anyway. Certainly not if you value battery life. Comments are now also hidden by default, which is just weird. But you have infinite scroll, which seems to be the main selling point. I'm not sure I needed that though.


Totally agree.

It's incredible that ugly old.reddit.com with RES still provides a better user experience.


I like the redesign actually. The default view is terrible, though. Once you start using classic or compact view it's usable.

Like 80% of my reddit usage is on mobile by now, though. So it doesn't matter that much to me.


Mobile twitter website gives me an “oops, something went wrong” or “you’re doing that too much” error probably about 50% of the time I open a link to it.


I sort of tolerated New Reddit at first, until I experienced first hand their (to the user) cynical reasons for doing it. Namely: throwing mobile app prompts in your face, blocking some reddits unless you're logged in, inserting ads masquerading as proper posts...

Even then, I could cope with some of it, except that they just totally broke the experience with shitty infinite scrolling. You can't click a damn thing and hope to go back to where you left on a post. Sometimes even old.reddit.com will redirect you to the new version now.

These redesigns would suck less if they were more about being functional and not about scraping every last morsel of engagement from unwitting visitors, through whichever devious methods they can imagine.


Yes! I use old.reddit.com and am not a twitter user so I don't have much exposure there.


Yes. In particular, for large discussions, when I get a notification about a comment I made and I click to jump to that comment, it takes a while to load -- and in a way oddly proportional to the size of the discussion and the frequency of posts in that group.

I'll always see the very top post of the group, and its whole page load, and then slowly my discussion will come up and then it will scroll down to that comment; and if I do anything, that breaks the whole process.

It's like no one ever considered the concept of just loading a piece of the discussion, like reddit does.

What's more, there are all kinds of UX nightmares, like how, if I open the messenger in one Facebook tab, it opens in every tab, blotting out content I want to read.

Or how a FB livestream event will just randomly stop playing, giving me no indication that I'm lagging behind the current video -- I've done trivia nights that way and I only find out I'm behind after my team members suggest answers to questions I haven't heard yet.


Not sure why I'm being downvoted, in what way is Facebook's performance not atrocious?


There's too much CSS and JS because of web components and too many teams.

* team for web component A => CSS, JS

* Team for web component B => CSS, JS

And so on with 1000+ components,

It ends up being a big pile of mud where everything may be duplicated under a different name, having failed to be optimised away and removed.


I wish they would just give us APIs and let us build our own experience.


I'm curious what your response to the Cambridge Analytica stuff was? Open APIs to build your own experience are like 100 times worse than that, at least the APIs that CA used were limited (such that they didn't provide enough info to actually recreate FB) and required CA to sign a developer agreement with FB to restrict how they could be used.


But then how would they force us to look at ads? How would they keep all their valuable data locked in their walled garden? Sadly I don’t think we’re ever going back to the glory days of open APIs.


There's always m.facebook.com if you want a retro experience.


I haven’t tested but I doubt the problem is with the CSS, is it?


Where did you get 2MB from?


Facebook isn’t trying to maximize your enjoyment, they’re trying to maximize their profits. By that measure they are doing vastly better than 12 years ago.


> as little as possible

I just opened FB with the cache disabled and it downloaded 5.85MB (19.76MB uncompressed).

Most of it happens after the page has rendered, which is great, but that's a lot of stuff. There are 13.74MB of uncompressed JavaScript.


For reference, vscode ships as 13.3MB of uncompressed JS. (Not including builtin extensions, but that wouldn't add much)


Stealing data about the users as accurately as possible requires more code than one would think.


>Stealing data about the users

It's not stealing when the users agreed to it.


Using a website does not imply informed consent.


No, but signing up for an account and agreeing to the Terms of Service does. The not-logged-in homepage is 1.3MB. Still not tiny, but definitely not the same as the web app.


No it doesn’t. Informed consent is not the same as shoving a ToS/EULA in front of someone who may not have the legal or technological education to understand it or its consequences.

From the Wikipedia definition[1] of informed consent (in medicine):

> An informed consent can be said to have been given based upon a clear appreciation and understanding of the facts, implications, and consequences of an action. Adequate informed consent is rooted in respecting a person's dignity. To give informed consent, the individual concerned must have adequate reasoning faculties and be in possession of all relevant facts.

What tech companies do is obtain the minimum legally required consent - and sometimes not even that. This may be legal, but it’s far from ethical.

[1] https://en.m.wikipedia.org/wiki/Informed_consent


Fortunately, Facebook's ToS[0] is not full of legalese or jargon, and I'm sure anybody with enough technological education to sign up for Facebook can understand it too.

[0] https://www.facebook.com/terms.php


A quick read of it makes it obvious that the document likely cost Facebook millions to produce, with the aid of countless hours from legal professionals who in sum likely have thousands of years of legal experience.

Claiming the average person is able to understand the scope of that document (and the likely thousands of internal documents related to it, and the hundreds of thousands of pages of related legal code and case law) is a stretch.

Document also does not disclose Facebook is subject to secret court orders, gag orders, etc.

— and that doesn’t even begin to cover the knowledge required to understand the related technology and the impact it might have on the users.


Huh. Sex offenders aren’t allowed to use FB.


That's pretty much the size of a mobile app. But the thing with web apps is that they're downloaded every time, hence the outcry.


They aren't though. You have a cache.


Facebook releases multiple times a day (or they did), so the browser cache is irrelevant.


As mentioned in the blogpost, they use code splitting and bundle hashing to actually maximize usage of browser cache.


If you have code splitting, many chunks will remain the same.


There's a bunch of ways FB mitigates this.


Thanks siblings, for letting me know about FB’s use of code splitting and how it helps caching.


I would bet that js bundling and splitting might prevent that from being true?


Perhaps but thankfully not all webapps are like Facebook.


Not so, refresh the page to see how large the second download is (much smaller)


With a standard 4G/FTTH connection that's less than one second of download time.


Everyone has 4G and fiber, and a Core i7 and 16GB of RAM :) Seriously…


That's not what I said, and I somewhat agree with the original point, but this trend of wanting to cater to the lowest common denominator rubs me the wrong way.

Some guys think the web should be usable with a 50€ smartphone on GPRS, and I say no, because there's a middle ground.


How about just caring about the real world instead of the numbers provided by the internet provider? Those download numbers are for the best-case scenario. And unlike broadband, 4G is not reliable at all.


> because there's a middle ground

So where's your middle ground?


>Some guys think the web should be usable with a 50€ smartphone

You went to the other extreme though. Most people in high-income countries don't even have FTTH.


Have you read anything about Facebook’s emerging markets? They want to cater to those people themselves!


The web should be usable on a 28.8k modem from the mid 90s. Why wouldn't it be?


There is absolutely no excuse. The app is not doing protein folding; it's merely showing text!

Surprise: many, many people use Facebook on crap laptops. FB is popular with everyone.


No, it's not merely showing text. Facebook does a thousand things more than show text. You could argue those features don't belong in Facebook, but you can't argue that features aren't supposed to take up any space, because that's unrealistic.


If I have to download 5MB of JS for every page I visit on my phone, I'd be out of data in less than 15 days.


This is 100% incidental complexity. It's painful to consider that this level of sophisticated engineering is needed to render a website quickly in 2020. What went wrong?

I'm personally excited about things like turbolinks and phoenix liveview, which may provide a path out of this mess.


Facebook rendered just fine a decade ago. What changed between now and then, in terms of actual improvement to end user experience, to make it so slow this kind of crap is needed?

My guess is:

- Desire to offload more processing to end user machines to save compute

- More and more ads and user analytics in order to pick which ads to show

- More engineers that irrationally hate the simplicity of PHP

Duct tape on top of abstractions on top of duct tape in order to make a document platform behave like an application platform. Isn't it time to just replace web browsers with something that doesn't suck?


Duct tape on top of abstractions on top of duct tape in order to make a document platform behave like an application platform. Isn't it time to just replace web browsers with something that doesn't suck?

It's not the browsers that suck, it's what companies like Facebook do with browsers that sucks. And then all the other non-thinking middle managers in other companies who want to copy these terrible things because they're incapable of leadership.


> It's not the browsers that suck,

They're great at rendering documents, yep. Applications delivery platform? Not so much, not until you tack a shitty language on top of it and then a bunch of crap on top of that language to make it remotely useful....


> Facebook rendered just fine a decade ago. What changed between now and then

It does way more things. In particular, there are a lot more interactive experiences. A decade ago, it just loaded a web page and nothing changed until you refreshed. Now live videos and other content types have streams of comments and reactions pushed to the client in real time.


I really don't want a video to autoplay on the side of the screen when I'm reading some news article, for example.

It was so much better before, when there were just fewer ways for the website to do what it wanted and more ways of doing what you wanted with your browser and your computing power.


Sure, it's totally reasonable for somebody to dislike these features. I just object to the claims that it doesn't do anything substantially different than what it did 10 years ago.


> Now live videos and other content types have streams of comments and reactions pushed to the client in real time.

Many of these are anti-features I’d love to be able to turn off.


None of that stuff has improved anyone's life worth a damn.


What? For example, the "X is typing..." message in Messenger is great for pacing the conversation. It makes online conversation closer to offline conversation. Not all "dynamic HTML" stuff is good, but some of it is indeed a big plus.


A lot more features?

Facebook of 12 years ago didn't have groups, marketplace, dating, live videos, stories, ...


99% of that could have been implemented fine using the same technology. Live videos etc. are a scourge on the culture and I wouldn't miss them.


I guess you think television is a 'scourge on culture' as well then? Get a grip.


> Duct tape on top of abstractions on top of duct tape in order to make a document platform behave like an application platform. Isn't it time to just replace web browsers with something that doesn't suck?

Be careful what you wish for. You just described native mobile apps.


I do prefer native apps on mobile in many cases. At least the UI is idiomatic.


I don't think it's any of those. It's about interactivity in the webpage. Imagine you have the template for the HTML of comments implemented in PHP. Now if the user posts a comment and you want to show the comment on their screen before it gets to the server and back, then you need the template itself on the client. The template being on the server in PHP isn't any help.

You could imagine the server filling the template in with dummy values, and then giving that to the client to fill in, but that only works for the simplest of templates where a few values need to be substituted in. You have to create another template language both the server and client understand if you have more complicated logic like many conditional or repeated parts.

When you have one part of the UI the client needs to render on its own, it's easy to make a one-off solution for it, but in Facebook's case, I imagine they have a ton of UI that they want to be server-rendered but also renderable on the client. React is a template system that can run on both the server and client, and can live-update already rendered elements on the client as the user interacts with the page.


I agree with this assessment of the 'why' - and it's why I mentioned things like turbolinks and liveview. What you mention here seems like a convoluted solution to the problem, but I realize it's how it's done, and I've written thousands of lines of code doing it.

In fact, the original AJAX stuff in the early 00's typically did a primitive version of this, since it was the most obvious solution during the era of server-side rendering: just have the server render the new comment and slap it in as-is. Instead of extending the reach of that concept, we shifted towards generic data access APIs and pushing all the complexity to the client, which has resulted in the gigantic mess we see.

I remember writing a basic client-side templating system in literally 2002 that naively regenerated the whole page in Javascript in response to AJAX API updates - you wouldn't notice unless you looked at the CPU being pegged and realized your computer was useless for multitasking if that site was open. It was clearly a bad idea. Little did I realize at the time that the next ~20 years of web software development would take that approach and just try to optimize it.


Asking the server for new HTML means you can't show changes until you've done a roundtrip with the server. That means no optimistic rendering or immediate feedback for local changes, such as a comment being made or an option being set in a modal that opens up related settings, etc. I think it's extremely useful to have a system that allows the client to respond to changes locally on its own as the default case rather than treating local interactions like a strange one-off exception.

React only re-renders the components that have their own props or state change. There seems to be a popular misunderstanding that React makes you re-render the whole page in response to changes, but that's not how it works.
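A minimal sketch of that optimistic-rendering pattern in React (the api.postComment helper is hypothetical, and this is not Facebook's actual code):

    import { useState } from 'react';

    function Comments({ initialComments }) {
      const [comments, setComments] = useState(initialComments);

      async function post(text) {
        // Show the comment immediately with a temporary id...
        const optimistic = { id: 'tmp-' + Date.now(), text, pending: true };
        setComments(cs => cs.concat(optimistic));
        // ...then swap in the server's version once the round trip finishes.
        const saved = await api.postComment(text); // hypothetical API helper
        setComments(cs => cs.map(c => (c.id === optimistic.id ? saved : c)));
      }

      return (
        <div>
          <button onClick={() => post('hello')}>Post</button>
          <ul>
            {comments.map(c => (
              <li key={c.id} style={{ opacity: c.pending ? 0.5 : 1 }}>{c.text}</li>
            ))}
          </ul>
        </div>
      );
    }

Only this component re-renders when its state changes; the rest of the page is untouched.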


I’m aware of how react works. Server round trip time alone is not a reason to push to the client. Light is fast. Computers are fast. See: Phoenix LiveView


> Instead of extending the reach of that concept instead we shifted towards generic data access APIs and pushing all the complexity to the client, which has resulted in the gigantic mess we see.

So, like, fat clients done badly? I had a Usenet client in 1993 that provided just as good of a forum experience as we have now, in 16 bits and 4 megs of RAM.


I don't have to imagine any of that, I used it. In fact, it's basically how HN works now, and we seem to be commenting fine.


I worked on FB site performance a decade ago and I assure you it was abysmally slow.


I used it a decade ago and it was fine for an end-user, so why should I care? Raw advancements in CPU power would have been enough to keep the old backend going just fine.


'Desire to offload more processing to end user machines to save compute' strikes me as a real possibility


I think this is very unlikely. Making the page load one second faster pays Facebook more in terms of user retention and engagement than it costs them in server usage.


Do Facebook users really quit Facebook because the page is too slow? Maybe a minute, but one second?


The frontier of web design is apparently digital Rube Goldberg machines, existing probably more for job security than for consideration of the end-user.


The all-but-de-facto use of JavaScript transpilers especially invokes the image of Rube Goldberg machines.


When you don’t use transpilers, you have to write ES5 code. No-one wants that.


Is ES5 really that bad that nobody can stand it?


Even in 2020? Isn't ES6 standard now across all supported browsers?


No. And "ES6" is itself a vague term. Browser support for features and syntax varies considerably and changes constantly.


That might well be, but just recently I got a crash report and it turned out the user had some "unsupported" version of Safari... and I'd rather transpile/polyfill than be that site that asks you to upgrade your browser to use it.

(Funnily enough, it actually parsed ES6 stuff fine, but was missing the `Object.fromEntries` method.)


On the contrary, it does seem like jumping through these kinds of hoops has become strictly necessary if you are building web apps you would like to be responsive.


Are you sure? Media queries handle layout changes, and font size has been proportional to the user agent since the beginning of the web. A little SCSS preprocessing makes writing complicated CSS a lot easier.


I didn't mean responsive in the CSS sense, I meant responsive as in the "not slow to load" sense, which is the point of this article.


You have a ten digit user base that you need to engage, so you have a four (five?) digit engineering team that somehow needs to coordinate their patches in a way that doesn’t cause the complexity of the site to explode.

It’s huge and pretty impressive.


I never said it wasn't impressive. I was saying that it's sad that this level of incidental complexity is necessary to run what is, for all intents and purposes, a website.


I think you're unfairly trivializing it by labeling it a "website".


I think you're missing the point by assuming I'm making a statement about Facebook. I'm talking about the fact that this incidental complexity is necessary to do what Facebook does. I don't think my description of it as a website is relevant, it conveys the point that ultimately it's a tragedy that to render obviously simple content to users you are stuck building all this crap to make it work.

Too many people in this thread seem to be projecting that I'm somehow saying this complexity speaks poorly of Facebook's product or engineering. On the contrary, my point was that it's tragic that brilliant minds at Facebook are forced to build all the stuff described in the post just to achieve simple goals like "the page loads quickly", instead of focusing on other problems.


...but it's not a website. It doesn't list 10 cars, 1 family photo and the address on the footer.


take a screenshot of facebook - it's a website.


I can run Linux in my browser. Is Linux a website? https://bellard.org/jslinux/

You have to draw the line somewhere, or it becomes meaningless.


Sure. For me the line is clearly on the side of "scrolling through boxes of text and basic commenting" = website, since that was the case in 1995. My point was that if you took a screenshot of Facebook today and when it launched, other than some cosmetic improvements, you couldn't tell me which version warranted extreme dynamic application logic and which was just a dumb server-generated HTML page. The point being that it shouldn't be necessary to do all the stuff in this post to get this result, but it is, because our field has failed to build better generalized solutions to these problems.


If you think websites from 1995 were driving the same levels of engagement that FB is in 2020 you would be really, very, extraordinarily mistaken.


Just gonna chime in to say: yeah, this is a defensible position IMO. Where FB falls is debatable (they have a ton of features / different UI elements / etc), but I can see where you're coming from.

tbh I think the vast majority of this engineering absurdity is to prevent teams from stepping on each other unknowingly, not for any direct end-user benefit. a lot of work goes into "make it so I don't have to work with X to get my work done" in all businesses, and... I dunno. it's not always a bad thing, but it does feel like quite a lot of waste.


>This is 100% incidental complexity.

The article talked a lot about their new dark mode feature, how they wouldn't have been able to implement it in their old tech stack, and how they were able to reduce their CSS size while adding a dark mode.

But is dark mode all that important? Even as a developer I don't care at all about FB having a dark mode, did they really need to rewrite their entire site to implement features no one cares about? Also, for a photo and video sharing site, is CSS size really important? I just loaded the page and it loaded 13.2mb worth of data while making 249 requests. Thanks for cutting down your 400kb CSS file though I guess.


"I don't care about this feature" !== "nobody cares about this feature." I feel like this is a fallacy a lot of programmers (myself included) tend to fall into, but it's a dangerous trap.


Just because you don't care about a feature doesn't mean no one does.


Dark mode is very important. Do you use Flux by the way?


Supporting dark mode == 2 bad UIs instead of 1 good one.

f.lux works well with a normal UI, replicating natural light.


>What went wrong?

From yesterday:

https://news.ycombinator.com/item?id=23101483 Hello, World – Zerodha, India's largest stock broker

Facebook most likely has a different attitude towards software development.


What I find weird about the article is that they spend so much time talking about the performance of the new design, yet fail to include any hard numbers comparing the old vs new implementations. They do throw out a few numbers like how the new page only downloads 20% of the previous 400kb CSS for the homepage, etc, but I'm surprised to see no actual browser benchmarks for what they claim. How else would they be measuring this internally?


Great write up. Glad to see they're making some changes as the current interface still feels like 2005 with the main feed at 500px wide. And as always, dark mode is welcome.

I really hope this helps performance on the site. In the past year or so I've been noticing that when the page sits in an unfocused tab for a while, clicking back usually takes 20+ seconds to actually load and I'm stuck at a white screen. It actually locks up the tab pretty well too, so navigating to other addresses and such takes a pretty long time.


About every 3 minutes I went a little mad with the new interface. With the old one, photos open, and you can click almost anywhere to close them.

With the new (desktop) interface, you have to find the X to close the photo. No, no, not the X you find on the top right of every other interface ever. This time it's on the top left!


> top right of every other interface ever.

That doesn’t sound right.

Mac OS and at least some Linux desktop environments have buttons at top left.

Still, I agree with your general point: why change away from a tradition for little to negative benefit?


You mean they didn't just use Fancybox like everyone else? /s

But yeah, that's annoying. We've had lightboxes for like 15 years now and people still can't get it right.


I've been using the new Facebook and the interface is even slower than before. It also feels (may not actually be) less information-dense, forcing me to interact with the page more to see the same information, compounding the problem.

This is on a i7 laptop.


> forcing me to interact with the page more to see the same information

Product Lead to Zuck: user engagement up 5% after this update!

Zuck: great, here's your bonus check


This is the only reasonable explanation. The information density is terrible, as GP said.


Interesting to hear. I haven't gotten the new interface yet, so time will tell.

I think we need to shift back to 4:3 displays. Widescreen is great for consuming content, but we lose so much vertical space.


> And as always, dark mode is welcome.

I don't get this infatuation with dark mode, beyond it looking cool. People claim it helps with eye strain, but a brighter background constricts your pupils, improving focus.


In my experience light mode will work fine on things like e-paper displays, but when you have an emissive or transmissive display the bright light can be straining.

I also feel like dark mode highlights text more, making it easier to identify the text. If I use light mode in text editors I get completely lost very quickly.


Just wanted to point out you might’ve pasted the wrong snippet, which is about eBay’s policies.


Oops! Thanks for the catch.


I think people might have different sensitivities to this issue. Before I changed everything to dark mode, I had trouble concentrating on a bright screen especially in a dimly lit room. Dark mode feels like a total game changer for my productivity.


It never occurred to me that this could be the reason for dark mode blurriness, but it makes sense intuitively. (Think you mean "constricts" rather than "dilates" though.)


Yup, you're right; they constrict. I found this Stackexchange post that goes into far better detail than I did: https://graphicdesign.stackexchange.com/a/15152

The only thing I have to add is you can think of constricted pupils like a pin-hole camera or a higher f-number on a camera aperture. You need brighter light, but the depth of field is wider.


OLED


I had been using Facebook for like 10+ years. Facebook used to be the reference for speed and usability.

I left it ~4 years ago.

The other day I logged in again out of curiosity. It's so sad. Strange interface, slow, unresponsive. It's sad.


Facebook used to be the reference for speed and usability

On web, for a while. Remember when Facebook's iPhone app came out, it was a disaster. It was so slow, I could open the app, then take the elevator down to the basement of my building, drop off some outgoing mail, and return to my apartment before it finished updating the news feed. It was legendary in its time for its slowness.

At first, nobody complained because there weren't a lot of "apps" available. But then all the other apps came out, and everyone complained for years that all the other app-building companies could build responsive apps, but Facebook couldn't.

Then one day Facebook updated its app and it was a little better. Then another update came and it was good enough, and everyone stopped complaining and forgot.

The other day I logged in again out of curiosity. It's so sad. Strange interface, slow, unresponsive. It's sad.

I only use Facebook once a week, to update the page for a web site I manage. It is terribly slow on web. On both of the computers I use it on, loading the first page takes upwards of 15 seconds. Clicking on the text field to enter a new post takes eight to ten seconds for the editor to load.

I don't know how people who are addicted to Facebook manage to use it so much without going mad.


Remember when they had to monkey patch the android runtime because their app had too many methods? https://www.facebook.com/notes/facebook-engineering/under-th...

It's a fun hack, and I enjoy spectacle, but it might be a sign that you are complicating your app more than you need.


Remember when they reached the limit for __TEXT and had to put code in sections that weren't really meant for it: https://blog.timac.org/2016/1018-analysis-of-the-facebook-ap...?

Remember when they were taking too long to start up and had to pull stuff into a separate framework just so they could meet launch deadlines: https://blog.timac.org/2017/0410-analysis-of-the-facebook-ap...?

Remember when the app had 18,000 Objective-C classes: https://quellish.tumblr.com/post/126712999812/how-on-earth-t...?

Facebook’s teams somehow cannot manage their bloat and they keep hiring people to hack the platform they're running on rather than fixing the actual underlying problem.


The root cause of the problem is that they have too many people working on it, and that's obviously not a problem you can hire your way out of.


Perhaps they should hire engineers who have the explicit job of ripping out things that should not belong.


Or, if they insist on writing more code rather than refactoring it away, perhaps they should build more products.


Steven Levy talks about this in his new book (which is really good).

Zuckerberg's mobile-first push, which came right after the IPO, was a major effort to save the company and ultimately a success story.

Their big mistake was non-native applications.


On their phones, perhaps? At least we already know that testing it in a desktop browser is not testing it for the majority of users.


> I don't know how people who are addicted to Facebook manage to use it so much without going mad

That's the thing with the addiction. Once you get used to it, it doesn't matter how bad it is.


Your comment made me remember being amazed by BigPipe and how damn fast that made the app (still called a page back in those days).
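
For anyone who never saw it: BigPipe flushed a page skeleton immediately and then streamed each "pagelet" as the backend finished it. A minimal sketch of the idea in Node (hypothetical code, not Facebook's actual implementation):

    // Flush the skeleton right away, then stream pagelets as they become ready.
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.write('<html><body><div id="feed">...</div><div id="chat">...</div>');

      // Each pagelet arrives as a small script that fills in its placeholder.
      const pagelet = (id, html) =>
        res.write('<script>document.getElementById(' + JSON.stringify(id) +
                  ').innerHTML = ' + JSON.stringify(html) + ';</script>');

      // Pretend these are slow backend calls finishing at different times.
      setTimeout(() => pagelet('chat', '<b>chat ready</b>'), 100);
      setTimeout(() => {
        pagelet('feed', '<b>feed ready</b>');
        res.end('</body></html>');
      }, 300);
    }).listen(8080);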


Did you try the new UI? It's a lot better, but you may have to opt in.


>We addressed this by generating atomic CSS at build time. Atomic CSS has a logarithmic growth curve because it’s proportional to the number of unique style declarations rather than to the number of styles and features we write.

I've always wondered... has anyone checked to see if this strategy of CSS (which AFAICT is slowly growing in popularity, since it's a super simple minification trick) ends up costing more in bandwidth?

It seems like it would, because 1) you frequently need multiple classes per html element, and 2) those html classes need to be sent on every page load, where CSS caches well if used "normally". I can see it being smaller on any individual page, especially with class-name minifying, but across many? And infinite scrolling loading many more elements?


We have been using this approach for more than a year with very good results. Regarding data transfer, it is not much of a problem: those classes usually get repeated a lot across elements, and lots of repeated strings is the best-case scenario for gzip compression, so there is basically no size impact.

https://github.com/utilitycss/atomic This is the framework we developed to create atomic CSS component libraries, if you want to have a look (documentation needs some love, but it's quite stable).


Your HTML is still undeniably larger than something with dramatically fewer class attributes though, if you minify the same way. Hence my "single page" vs "many" difference.

gzip helps for sure, but I doubt `<class="card">...` ends up larger than `<class="a b c d e"><class="sub-a sub-b q etc"><class="repeat per element">...`.


There is of course a bit of difference in some use cases; my point is that in a real-world scenario you often do not have

<class="card"> vs <class="a b c d e">

but more something like <class="card"><class="stuff-inside-card"><class="other-stuff">

vs <class="a b c d e"><class="b c f"><class="c d e g">

so in the long run you tend to have more repeated strings across even completely unrelated elements, and that usually balances out the possible increase in non-gzipped bytes. But to be honest, we have not done a detailed comparison with edge cases, and it would be interesting to see when it actually may be a bad idea and when it is totally fine.
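
If anyone wants to sanity-check the gzip point, here is a quick test with Node's zlib (the markup is made up; real pages will obviously vary):

    const { gzipSync } = require('zlib');

    // 50 copies of a "semantic" card vs 50 copies of an atomic-class card.
    const semantic = '<div class="card"><div class="card-header"></div><div class="card-body"></div></div>'.repeat(50);
    const atomic = '<div class="a b c d e"><div class="b c f"></div><div class="c d e g"></div></div>'.repeat(50);

    console.log('semantic gzipped:', gzipSync(semantic).length, 'bytes');
    console.log('atomic gzipped:  ', gzipSync(atomic).length, 'bytes');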


When you're less "component-y" with your styles, yeah, that happens pretty frequently. Careful styling is mostly rare (like all careful things)... but atomic styling seems to make the savings of careful styling effectively impossible, though I absolutely believe it helps the less-careful (especially the least-careful) cases.


> We knew we wanted Facebook.com to start up fast, respond fast, and provide a highly interactive experience.

And then they forgot about it while developing it. This is a common mistake in product management: when projects are not iterative, they take so long that you forget why you are doing them.


Time will tell, but I do think this is a good strategic move by FB.

FB is now seen as an app for old people and not fun at all, and probably one of the reasons is how it looks. With the new design, things are more shiny, and the product now looks cool. How a product is perceived has a big influence on how people use it (for example, why people use Snap when they can send the same videos on Insta).

I wouldn't be surprised if this is the beginning of a lot of changes at FB.


It's not seen as an app for old people. A lot of college students use fb religiously, and it's still the place to go for nearly any college discussion.


What other changes would you forecast?


Can we now sanely read comment threads with several hundred or thousands of comments without clicking 200 times on "show previous comments", only to then lose all "progress" if we click somewhere else? Also looking at you, Instagram.


One minute of contemplation for those who scroll for 10 minutes on mobile to finally find something interesting to read/click, only to hit the back button by mistake and find the whole timeline completely changed. YouTube's recommendation feed is also terribly guilty of that.


I saw no mention of Reason or ReasonReact. I thought they rewrote Messenger in Reason, and so I figured facebook.com would be next. Did Reason fall out of favor or something?


Hey - maintainer of ReasonReact here. We don't believe in mandating the tools that people use and there are plenty of good reasons to use TS, Flow, PureScript, Rust, etc. Folks on the Messenger Web team in Facebook like Reason as their language of choice. Reason is used heavily on Messaging code in the old facebook.com and in this rebuilt version.

The rebuilding of facebook.com was done quickly and involved almost every web team at Facebook. Almost all of these folks are familiar with Flow and only some are familiar with Reason. Asking every engineer at FB to learn Reason at the same time, on top of the already massive number of new things, would have added unnecessary risk to this already incredibly risky project.


Thanks for the info :)


Was Reason ever more than an experiment?


ReasonReact just had a release (0.8) a few days ago, so it's apparently not dead.

https://github.com/reasonml/reason-react/blob/master/HISTORY...


The repos have some activity, but it looks bad that the most recent post on the official blog was in August 2018.


lol no, "Did Reason fall out of favor" does not logically follow from "I figured facebook.com would be next". You were simply too optimistic about Reason.


Did they fix the "back" button? (I.e. if you click a link in the middle of a feed, then "back", do you get back at the same position in the feed?)


My guess is that they have A/B tested that if you remove the item the user clicked from the feed, users become "more engaged", i.e. they scroll in frustration to find the place where they were.


Yes.

(Source: Was involved in early planning for the rewrite and fixing the Back button was one of the primary goals.)
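
For the curious, the fix doesn't require anything exotic. A sketch of one common pattern (not Facebook's actual code): stash the scroll offset before leaving, restore it when the user comes back via the back button.

    const KEY = 'feed-scroll';

    // Remember where we were whenever the user clicks a link in the feed.
    document.addEventListener('click', (e) => {
      if (e.target.closest('a')) sessionStorage.setItem(KEY, String(window.scrollY));
    });

    // On back/forward navigation, jump back to the stored offset.
    window.addEventListener('pageshow', () => {
      const nav = performance.getEntriesByType('navigation')[0];
      const y = sessionStorage.getItem(KEY);
      if (y !== null && nav && nav.type === 'back_forward') {
        window.scrollTo(0, Number(y));
      }
    });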


I think the Facebook feed is completely random on purpose, to make it feel more up to date. If the app crashes on your phone, good luck finding a post in your feed again.


I miss information density.

HN is great for this, and you can change enough settings on Reddit to get it where it used to be, but FB is really bad at it and Twitter is just OK (though with the TweetDeck app you can get a lot on macOS).

In that FB screenshot you can see half of one post?


I rarely use Facebook, but when I do, I use https://m.facebook.com instead of the main site. Similarly, i.reddit.com and the HTML version of Gmail. They aren't great looking, but they're pretty usable. I hope these versions will be maintained for a long time.


>Similarly i.reddit.com

Try https://old.reddit.com

The redesign is so bad that people made browser extensions for Firefox[0] and Chrome[1] to redirect to the old version.

[0] https://addons.mozilla.org/en-US/firefox/addon/old-reddit-re...

[1] https://chrome.google.com/webstore/detail/old-reddit-redirec...


Old Reddit is so damn ugly though, apart from the subreddits that have nice custom themes (although those do make for a '00s-Myspace-like inconsistent interface).

The new Reddit definitely seems a lot faster nowadays than it did when it first launched.


I don't think old Reddit is ugly, though it takes some getting used to. Combine it with RES for some minor enhancements (navigation etc.) and it's great.

It's similar to HN. Not fantastic design, but incredible information density and usability.


Incredible? HN has horrific usability.

Good luck knowing when people respond to your comments. Searching takes you to a completely different website. You can't delete comments (which should be a basic privacy ask from this crowd). Click targets are incredibly small.

It took years of begging for them to even implement collapsing comments. And for some reason they put it on the right side (not lined up with the tree level), and made it a super small click target.


HN could definitely do with a redesign. Just seems like stubbornness from the owners not to change it.

I'm not talking turning it into an SPA or adding tons of JavaScript, just a bit of CSS/HTML TLC with some nicer fonts and make the whole thing a bit more scalable and bigger with some UX tweaks.

Something akin to http://gabrielecirulli.github.io/hn-special/


Not to go off on a rant about Reddit here, but I still use the old Reddit (old.reddit.com). The new Reddit layout is horrid. Using additional whitespace to improve readability has done the opposite. Counter-productive iconography sizing and font styling; it goes on and on. Some of this is to make it more mobile-friendly. I get it, but as a singular design, it's worse for desktop users.


> The new Reddit definitely seems a lot faster nowadays than it did when it first launched.

For me it's not the speed, it's the number of times the page straight up just doesn't refresh, or comes back with no data. Very frustrating.


Gmail is a disgrace. I've been using it for so long and I miss the version from 2005. It's so slow today I just hate using it. It's really sad because it was a great product and that's why it took over email.


I don't really care about their end result because I don't use facebook. But in spite of all the negative comments here, I'm actually interested in how their styling solution works.


If I had to guess, I would expect it to be similar to react-native-web. They say the API is inspired by RN (StyleSheet.create), and the atomic classes look similar to what react-native-web produces. Also, the author of RNW works at Facebook.

It's called xstyle, and you can see some examples in a talk from last year that presented the new tech for the new facebook. I will update if I find the link (can't right now).

Edit: starts at about 28:00 here, but the rest about React + Relay data fetching is interesting as well if you care about that stuff: https://developers.facebook.com/videos/2019/building-the-new...
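
For a feel of what that RN-style API does, here's a toy version of the atomic idea (hypothetical code, not the real stylex/xstyle): every unique property:value pair gets exactly one class, so duplicate declarations collapse across components.

    const atoms = new Map(); // 'margin-top:8px' -> 'c0'

    function atomicClass(prop, value) {
      const key = prop + ':' + value;
      if (!atoms.has(key)) atoms.set(key, 'c' + atoms.size);
      return atoms.get(key);
    }

    function create(styles) {
      const out = {};
      for (const [name, decls] of Object.entries(styles)) {
        out[name] = Object.entries(decls)
          .map(([p, v]) => atomicClass(p, v))
          .join(' ');
      }
      return out;
    }

    const styles = create({
      root: { 'margin-top': '8px', color: 'blue' },
      card: { 'margin-top': '8px' }, // reuses the margin-top atom
    });
    console.log(styles); // { root: 'c0 c1', card: 'c0' }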


There is nothing about the backend here, and that's where the heavy lifting is done. Is that all still in PHP and GraphQL?


The most irritating "feature" of Facebook is that they insert themselves like eight entries deep into the history when you open their page, so the back button doesn't work.
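
Mechanically it's trivial to do; any page can stack same-URL entries so that "back" appears to go nowhere (a sketch of the pattern, obviously not Facebook's code):

    // Each pushState adds a history entry without navigating anywhere.
    for (let i = 0; i < 8; i++) {
      history.pushState({ trap: i }, '', location.href);
    }
    // The user now has to press Back eight times to actually leave the page.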


This kind of thing is normally the domain of sites you find after searching “office 365 warez”. Pretty bad.


It looks a lot like the new Twitter. I guess the new trend is just to show multiple tabs from the mobile app at once as the desktop site.


I don’t get the hate for the new Twitter, to be honest. It loads fast, is simple, it’s to the point.


It's slow, requires JS, and fails around.. 50% of loads, I think?


OK, I guess YMMV. It doesn’t fail for me, so I assumed it's like that for everyone.

Everything requires JavaScript nowadays; that's just the default.


"Oh there's radioactive material in every food now, why would you complain about that? That's just a default."


It obviously does not fail on half of loads.


It is Yahoo from the 90s:

https://cdn.searchenginejournal.com/wp-content/uploads/2006/...

Nothing new under the sun


The Yahoo I knew and loved from the 90s manifested as a web directory:

https://www.ttcs.tt/wp-content/uploads/2014/09/screenshot-of...


Wow. That made me way more nostalgic than I was expecting.



For me, the real test of a website's performance is how it behaves with JS disabled. Using NoScript has completely changed the way I use the internet. Combine that with DoH and filtering and you can remove a large part of the jank that needlessly fills browsers.

It's a delicate balancing act, but it puts you in control. Sometimes it's just not worth the hassle, and I suspect that's why you see so many comments like "just read the f* article": sites are so slow, and burn so many rendering and CPU cycles, that it's just an abuse of the platform.

Websites are capable of serving text without requiring JS.


With the increasing adoption of reCAPTCHA v3, which won't let you browse any page without JavaScript, it will become harder and harder to use the internet with NoScript enabled.


I've tried several times to get used to NoScript, but there's just too much friction in day-to-day usage. I already get regular breakage due to aggressive uBlock configuration.


It uses scrolling as cookie consent, apparently.

Which is wrong, as discussed here: https://news.ycombinator.com/item?id=23090393


It's not like Facebook is concerned about GDPR compliance, and given the lack of enforcement I can't really fault them for this.


Impressive attack on would-be competitors. Now new start-ups will look at this doc and say "aha, let's build our interfaces with React + Relay and the new CSS/JS wizardry", when they don't have the manpower and time to do so, and at the end of the day they can't launch. They forget this rewrite came 15 years in, when FB is already making money. And likewise, as other commenters have said in the thread, we have failed as an industry due to usability regression.


I wonder what the Mobile App versus Mobile Browser versus Desktop Browser usage percentages look like.

Generally speaking, how important is it to actually update the desktop version other than for the sake of updating tech? I only use desktop as part of my marketing work, if I use it at all.

(Side Note: I can't believe it took this long for Dark Mode to be a normal option for all software)
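
On the dark mode note: the web plumbing for it is a single standard media query these days. A sketch using the standard matchMedia API (the 'dark' class name is made up):

    // Follow the OS-level preference and react to changes live.
    const mq = window.matchMedia('(prefers-color-scheme: dark)');

    function apply(dark) {
      document.body.classList.toggle('dark', dark);
    }

    apply(mq.matches);
    mq.addEventListener('change', (e) => apply(e.matches));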


I opted in when Facebook prompted me to test out the new UI in Chrome; I immediately regretted it and was frustrated trying to find the button to revert to the old one.

But then I found out my "old" Safari browser wasn't supported by their fancy new UI. Now I only check Facebook once per day in Safari and never sign in from Chrome.


Facebook folded to the "quit changing the UI" crowd years ago when they opened registration up beyond colleges.


The use of Monokai is so off-brand for FB.



Nowadays it is quite rare to write classes in pure HTML; using a framework like React, Vue, or similar, we are used to passing classes around in a JS environment. Obscure, minified class names can be imported from a package under a perfectly meaningful name, and the same set of classes can be exported under many different names for even better usage specificity at no cost in size.

We overcame most of those issues using a mix of PostCSS composition and CSS modules with a custom hashing solution based on the actual CSS rule content. This allowed us to have virtually infinite semantically named components with a CSS bundle size that tends to stabilize around 20-25 KB gzipped for a very big e-commerce use case, and I doubt any other use case would go much higher than that.

https://github.com/utilitycss/atomic If you want to have a look (documentation needs some love, but the samples generated by the init do give a good idea of the concept)
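
To illustrate the content-based hashing bit: the class name is derived from the rule body itself, so identical declarations always collapse into one class, no matter how many components use them. A minimal sketch (a toy version, not the actual utilitycss internals):

    const crypto = require('crypto');

    // The class name is a hash of the declarations, not of the component name.
    function classFor(ruleBody) {
      return 'c_' + crypto.createHash('md5').update(ruleBody).digest('hex').slice(0, 8);
    }

    // Two differently named components with the same rules share one class:
    console.log(classFor('margin-top:8px;color:blue'));
    console.log(classFor('margin-top:8px;color:blue')); // same hash, same class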



My browser constantly pegs the CPU. Firefox's performance manager indicates it might be Facebook, but CPU metrics never seem accurate. I use the new layout and swear the issue occurs after a tab has been left open for a long period of time (8-16 hours).

Anyone else experience this?


There doesn't seem to be any mention of actual KPI/metric changes from the relaunch?


Everything is in the past tense without dates. When was this massive re-write? 2010? 2018?


They've been talking about this since 2019 (https://www.youtube.com/watch?v=WxPtYJRjLL0); I reckon it happened over the course of 2018-19.


Looking at the graphical design changes, they haven't rolled this out yet.


They did roll out this new interface to some users (like me), and the new experience genuinely sucks.

It's easy to guess what happens under the hood:

"Legacy" programmers implemented working solution.

A newcomer comes into the company, doesn't really give a shit about the company because he is #XXXX, follows textbook processes, has a very nice paycheck so no pressure, and decides to rewrite because "the code sucks".

A second newcomer joins, still no pressure, because he knows he will get his paycheck. He tells his manager that he cannot work without refactoring (it's not true).

Ten programmers later, you refactor code instead of producing features. You can do that for a lifetime.

In their defense, the initial FB codebase from the time of Mark Slee or Philip Fung was no gift, but because they were under pressure to make revenue, they were trying to do what was right.

When you have 10 years of positive iterations regarding product experience and user feedback, revamping everything in a big bang boom is a terrible idea from both engineering and product perspective.

Sometimes a full revamp is positive because the initial product sucked, but when you have managed to onboard a billion users, it's dangerous to change their habits if you aren't 100% sure it's an improvement.

Here we have POs/PMs pushing for change, for whatever reason (engineering pressure, or they get a bonus if they deliver the product, etc.). Same story with all the Google messengers.


This is not a big-bang rewrite. All the technology here is stuff that's been built over the last decade: Haste is a decade old, React is 8 years old, GraphQL is around 7 years, and Relay must be 5 years old by this stage. This is simply bringing them all together.


I can't remember the last time I loaded FB on a computer. At this point it's clear that they care far more about the mobile experience than the desktop. Even if I'm sitting at my computer I'll still use my phone for FB.


If you want to know more about the atomic CSS-in-JS Facebook is using, check out my article: https://sebastienlorber.com/atomic-css-in-js


I don't think there's a single piece of software (if you can call facebook's website a piece of software) that I've used longer than facebook.com. I was one of the earliest users in 2004 and have had an account the entire time. It's really interesting, in sort of a morbid way, to see how it really has become a shittier experience every year. I'm sure it's hard to get right and I wouldn't claim to know better than them but it is quite amazing that they have so many engineers being paid so much money and the product from a usability standpoint manages to get worse. I really think that will be their ultimate downfall one day.


I wonder how much time and how many managers/PMs/engineers (a.k.a. Prod Dev) it took to refactor/rebuild it. Or can you throw out a number in terms of money?


Tech stack? This is only stuff they put in the browser.


Exactly; it's missing all the backend information, which is the most critical part.


> We knew we wanted Facebook.com to start up fast, respond fast,

You failed.

mbasic.facebook.com, on the other hand, is actually fast. Oh, wait. That's no JS at all.


They reduced CSS by 80%, but the new site looks like garbage! On the desktop, it looks like a mobile app just really stretched out.


I like the approach to CSS with stylex. I hope that linaria and astroturf will now get more support.


Advancing technology doesn't make up for destroying humanity, IMO. Fuck Facebook.


I hope the new FB works with JavaScript disabled, like it did in the old days.


I understand the new stack is React + Relay, but what was the old stack?


A hodgepodge of (mostly proprietary) technologies.


I don't like it.

Rounded borders with misplaced text, I cannot jump to a specific time on my timeline anymore, and it does not feel very responsive.

Overall, the UI does not look very appealing and feels like a downgrade.


Reminds me of the new Chrome UI. Not a fan of this soft, rounded-off aesthetic that seems to be spreading about.


The new interface caused my iPad to heat up severely so I switched back to the old clunker interface. It might be ancient but it doesn't cause my iPad to heat up.


Interesting to see these kinds of details, but this is hardly 'rebuilding the stack', just the UI<->API layer.


There is such a thing as a frontend stack.


who shat in all your brains?


I hope this speeds things up for Facebook. On the rare occasions I use it these days, to check up on family photos or something, the entire site is fucking painfully slow. Even on my quad-core i7 desktop on a fibre backbone connection, I'm routinely waiting 10+ seconds each time I click on the "notification" icon (or any other action).


I think the only HoC I consistently use these days is `connect` from `react-redux`.
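
For reference, the classic pattern (the component and state shape here are made up; `connect` itself is the real react-redux API):

    import React from 'react';
    import { connect } from 'react-redux';

    // A plain component that knows nothing about the store.
    function FeedCounter({ count, refresh }) {
      return <button onClick={refresh}>Feed items: {count}</button>;
    }

    // The HoC wires store state and dispatch into props.
    const mapStateToProps = (state) => ({ count: state.feed.items.length });
    const mapDispatchToProps = (dispatch) => ({
      refresh: () => dispatch({ type: 'feed/refresh' }),
    });

    export default connect(mapStateToProps, mapDispatchToProps)(FeedCounter);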


The title says tech stack, but the content is about CSS.


A browser extension that automatically filters out Karen from your feed would yield better results, but they wouldn't want to alienate their core users


Do they still use PHP on the backend? If so, ick.


AFAIK they use 'Hack', a language designed to be very similar to PHP.

https://hacklang.org/

PHP is a lot better than it used to be, though! It's got a bad rep, but newer versions are fast and surprisingly modern-feeling.


They use Hack, if you really needed to know.



