
Suppose I wanted to make a program inscrutable, hard to modify, hard to test, heavily coupled, and hard to reason about:

- hide information so that it's not queryable

- force information to flow through multiple hops

- make it hard/impossible to set the true state of the system in its entirety

- allow state to be mutated silently

- give interfaces roles and force certain state to flow through specific paths

- allow multiple concurrent access

- pointers

- spray the app across as many machines as possible

Sounds familiar, maybe like a scathing criticism of OOP? Well check this out. What if I wanted to make a program as slow and bloated as possible?

- put all state in one place. Bonus points if you can bottleneck it and force all changes to be well-ordered and sequential

- all data has to be immutable and copied over in its entirety when propagated

- use larger data structures than necessary. Lots of wrapping and indirection

- no caching. Can't trust data unless it's fresh

- read/write from disk/net tons

- use scripty garbage collected languages

- spray the app over as many machines as possible

The latter kind of feels like a criticism of FP, though really it's more critical of a distributed monolith. What if I want to make my app as vulnerable to outage and overload as possible?

- concentrate the app in as few machines as possible

It's really interesting how all the different tradeoffs of approaches just kinda appear when you take this inverted approach. It's all too easy to get caught up on the positives of an approach.

(opinion - I still think FP comes out looking better than OOP, but it does suggest that you need a lot of techniques to safely hide optimizations to make FP more performant, which can make it harder to scrutinize all the layers)




>What if I wanted to make a program as slow and bloated as possible?

>put all state in one place.

>no caching.

Just as a counterpoint, both as a user and a developer I've found the main cause of performance issues and UI bugs come from a combination of distributed state and caching.

State being stored as a private member on the widget object itself is arguably more likely to cause a state desync bug or accidental O(N^2) behaviour than everything being global. IMGUI solves this but has only really seen a warm reception amongst the gaming industry (where, incidentally, UIs are always lightning fast despite being traditionally limited by older console/handheld hardware.)
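A minimal sketch of the desync the comment describes, with hypothetical widget names; the IMGUI-style alternative simply re-reads one store on every draw:

```javascript
// Widget-local state: two widgets each keep a private copy of the same value,
// so updating one silently desyncs the other.
class BadgeWidget {
  constructor(count) { this.count = count; }   // private copy of state
  render() { return `badge(${this.count})`; }
}
class HeaderWidget {
  constructor(count) { this.count = count; }   // another private copy
  render() { return `header(${this.count})`; }
}

const badge = new BadgeWidget(3);
const header = new HeaderWidget(3);
badge.count = 4;                               // silent mutation: header is now stale
console.log(badge.render(), header.render());  // badge(4) header(3)

// IMGUI-style alternative: one global store, widgets re-read it every frame.
const store = { count: 3 };
const drawBadge = () => `badge(${store.count})`;
const drawHeader = () => `header(${store.count})`;
store.count = 4;
console.log(drawBadge(), drawHeader());        // badge(4) header(4)
```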

Having a cache at the wrong level might reduce some function's execution time from 100us per frame to 10us, but makes it much more likely that the entire app will become unresponsive for 10s while some dumb bit of code rebuilds the whole cache 25,000 times in a row.
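The "wrong level" failure mode can be sketched with a hypothetical index-building step standing in for the expensive cache rebuild:

```javascript
// The same memoization, placed at two levels.
let buildCount = 0;
function buildIndex(items) {          // expensive: stands in for the "cache rebuild"
  buildCount++;
  return new Map(items.map(it => [it.id, it]));
}

// Wrong level: the cache lives inside the hot function, so every call rebuilds it.
function lookupNaive(items, id) {
  const index = buildIndex(items);    // rebuilt on every lookup
  return index.get(id);
}

// Better level: build once, invalidate only when `items` actually changes.
function makeLookup(items) {
  const index = buildIndex(items);    // built once per dataset
  return id => index.get(id);
}

const items = [{ id: 1, name: 'a' }, { id: 2, name: 'b' }];
for (let i = 0; i < 1000; i++) lookupNaive(items, 1);  // 1000 rebuilds
const lookup = makeLookup(items);
for (let i = 0; i < 1000; i++) lookup(2);              // 1 rebuild total
console.log(buildCount);                               // 1001
```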

In a similar vein, I've found issues in multi-threaded apps where some function is sped up from 10ms to 2.5ms by making use of all four cores, but occasionally spins on a mutex, blocking the UI for 250ms if some slow path (usually network/disk IO) is taken.

Genuinely, I think the simplest way to make sure a program will have bad performance just requires one step:

- Make it difficult for a human programmer to reason intuitively about the performance of the code they write.


> Just as a counterpoint, both as a user and a developer I've found the main cause of performance issues and UI bugs come from a combination of distributed state and caching.

Isn't that like, the main cause of most software headaches? Insert joke about cache invalidation and naming things.

It sounds like you're saying bad caching can be worse than no caching. I can absolutely see that being the case.

> - Make it difficult for a human programmer to reason intuitively about the performance of the code they write.

Totally agree.


>It sounds like you're saying bad caching can be worse than no caching. I can absolutely see that being the case.

More specifically, I'm saying that caches (implicit or explicit) can and do go wrong, and the simplest way to avoid invalidation errors (and the performance hiccups when higher-level developers start manually invalidating the cache to work around them) is to just implement them in as few places as possible.


> What if I want to make my app as vulnerable to outage and overload as possible?

> - concentrate the app in as few machines as possible

Well, sorta. Distributing an application across multiple machines only reduces downtime if the machines are independent of each other.

I've seen bad architectures where wider distribution leads to more outages, not fewer. I even consulted for a company that had their services in two data centers for redundancy, but they put different services in each one, so a failure in either data center would effectively bring down their entire stack.


It was mostly a quip that the first two anti-optimizations result in distributing the application across tons of machines. Since this is only the basis of the entirety of distributed computing, I thought it was only fair to include an instance where distributed computing patterns win out (fault tolerance and parallelization).


Absolutely, it was a very solid post overall, including the nits I picked.


For the most part I agree, but if you ask "suppose I want [a conjunction of undesirable outcomes]", then it is possible that something tending towards one of these outcomes may protect against, or compound, another. For example, hiding information so that it is not queryable may make testing and certain modifications more difficult, and also encourage heavy coupling, which in turn makes a program hard to reason about and therefore also inscrutable - and thus, one further step removed, harder to test and modify.


> The latter kind of feels like a criticism of FP

It may feel like that for somebody with little experience with FP, but half of those points are design decisions that aren't changed by the language paradigm, and most of the rest is a non-sequitur where the stated problems aren't caused by the characteristic that precedes them.

Immutability makes caching easier, not harder, and the same goes for propagating by reference. Besides, there's nothing in FP that asks for more IO operations or for distributing your software. The item about larger data structures is on point, although the indirection is very often optimized away in practice, while imperative programmers often end up hand-rolling their own indirection, which compilers have a harder time optimizing away.

Anyway, most of both lists apply perfectly to the modern "mandated microservices" architecture some places use, which I think was your point.


Because it's not meant as a criticism of FP, or a list of FP things, in fact I actually really like FP. It's just that if you anti-optimize for bloat and slowness, you coincidentally end up with a lot of features that FP has.

Centralized state, immutability, wrapping, indirection, and nested data structures with garbage collection are very FP things. They are also almost always slower than mutate-everything imperative style, and require a lot more under the hood to make them performant. You basically need a clever compiler to really reap the benefits. Contrast with C: even with a dumb compiler, it's easy to get a fast, albeit buggy, program.
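The cost model being described can be sketched in a few lines; the copy-on-update helper here is a hypothetical illustration, not any particular library:

```javascript
// Imperative style: mutate in place. O(1), but every holder of the array
// silently sees the change.
const scores = [10, 20, 30];
scores[1] = 25;                          // fast, silent, shared

// FP style: never mutate; copy the container to change one element.
const update = (arr, i, v) => arr.map((x, j) => (j === i ? v : x)); // O(n) copy
const before = [10, 20, 30];
const after = update(before, 1, 25);
console.log(before[1], after[1]);        // 20 25 - the old value is still observable
```

Persistent data structures with structural sharing (and clever compilers) are exactly the "under the hood" machinery that makes the second style affordable.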

IO, bad caching, etc are very much not in the spirit of FP. The other points are just other bad things that I've seen a lot of apps do.

> Anyway, most of both lists apply perfectly to the modern "mandated microservices" architecture some place use, that I think was your point.

That was exactly my main point.


Your FP example reminds me of the JS trend that picked up about 5 years ago. Not sure if they're still doing it, but you described the dogma near perfectly.


Because imperative programming in Javascript, even with OOP principles and patterns, generally leads to even more complexity and messiness.

FP is now almost the norm in frontend engineering. Certainly in the React world. Immutability and side-effect awareness are highly valued.


The irony being that JavaScript doesn't add anything that Smalltalk didn't already provide, yet we have all this OOP backlash and talk of how FP is somehow better.


Well, Javascript is the language that is able to run everywhere, so that's what frontend engineers have to target. There is some more flexibility now with Webassembly...

If you are interested, look up Fable, Bolero, Elm, PureScript, ClojureScript, Grain...


None of them washes away JavaScript semantics just because the source language is something else.


That's actually not true, at least if your definition of "Javascript semantics" isn't extremely broad.

This was the case with CoffeeScript. But the examples I gave either work with Webassembly (Grain, Bolero), providing their own runtime, or do some work to hide JavaScript's data model to a large degree (F#, PureScript, ClojureScript).

What stays the same is that you are dealing with the DOM, user events occurring there, and a metric sh*t ton of asynchronous APIs.


Yes, that's exactly what I was getting at. It was less a criticism of FP (which is why I say it feels like it might be - it's not) and more a criticism of the bevy of techniques common in modern full-stack development.

Even then I'm not saying it's bad to do things that way. They absolutely have their place. OOP has its place. FP has its place. Distributed computing has its place and vertical scaling has its place. The point is, dogmatically latching onto "X bad! Y good!" blinds you to potential solutions.


Do you mean Redux? If yes, it's still quite popular


> Sounds familiar, maybe like a scathing criticism of OOP?

No, not really. For example:

> - make it hard/impossible to set the true state of the system in its entirety

Strangely enough, environments like Smalltalk allow you to do exactly that, but other environments not so much.


The problem is, semantically and linguistically, there are two OOPs. There's Smalltalk OOP and Java OOP. I haven't worked with Smalltalk but from everything I've heard, it "does object-oriented right". Unfortunately, Smalltalk just isn't popular (not even in the top 50 on the Tiobe index, though it fares slightly better on Redmonk).

For better or worse, Java is massively popular, and thus the Java conceptualization of OOP, which is just frankly bad, is what most people think of when they think OOP.

OOP encapsulation works when objects can't end up with their invariants violated, and when you can't cover the combinatorial state space with tests. The problem is, Java-style setters and getters are an almost guaranteed way to get the bad properties in the first list above. That's why it's better to have a small number of coarse-grained state stores that you can interrogate easily (REST, reactors, databases, and the Kubernetes data model all exhibit this). "class Employee extends Person" doesn't: it's too fine-grained, and too easy to mutate yourself into an absolute mess.
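A small sketch of the contrast, with hypothetical names; the setter style lets callers walk through invalid intermediate states, while a single coarse-grained entry point validates the whole state at once:

```javascript
// Fine-grained setters: nothing stops the invariant (start <= end) being broken.
class BookingSetters {
  setStart(d) { this.start = d; }
  setEnd(d) { this.end = d; }
}
const b = new BookingSetters();
b.setStart(20);
b.setEnd(10);                            // invariant silently violated

// Coarse-grained: one entry point that accepts or rejects a complete state.
function makeBooking({ start, end }) {
  if (start > end) throw new Error('start must not be after end');
  return Object.freeze({ start, end }); // no setters to sneak past the check
}
const ok = makeBooking({ start: 10, end: 20 });
```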


In which camp does Simula (arguably the first object-oriented language) fall?

In which camp does CLOS (the Common Lisp Object System) fall?


> What if I wanted to make a program as slow and bloated as possible?

That sounds like a very exact description of React/Redux.


If you are at war with your technology, your codebase will look like a battlefield. React and Redux require a functional, reactive mental model. If you approach it with an imperative mindset, you'll get a mess of a codebase.

Unfortunately the popularity of React means it is used by people who know neither Javascript nor FP very well. And the popularity of Redux is even worse.

In my opinion people should stay away from Redux until they know and understand intuitively why it is necessary. Until then, use the useState and useReducer hooks, then maybe something like "Zustand". When you start using Redux, use Redux Toolkit.
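What useState/useReducer, Zustand, and Redux share is the pure-reducer pattern: a plain function of (state, action) -> state, which can be understood and unit-tested without rendering anything. A minimal sketch with a hypothetical counter:

```javascript
// A pure reducer, as passed to React's useReducer or a Redux store.
function counterReducer(state, action) {
  switch (action.type) {
    case 'increment': return { count: state.count + 1 }; // new state, no mutation
    case 'reset':     return { count: 0 };
    default:          return state;  // unknown actions leave state untouched
  }
}

let state = { count: 0 };
state = counterReducer(state, { type: 'increment' });
state = counterReducer(state, { type: 'increment' });
console.log(state.count);            // 2
```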


> If you are at war with your technology, your codebase will look like a battlefield.

That almost sounds like a criticism of people who developed React and Redux and went to war against their browser APIs.


Not really. Redux doesn't even touch browser APIs as far as I know.

React uses a virtual DOM to avoid unnecessary DOM manipulation, which is slow and error prone. DOM manipulation could be seen as a weak point of browsers, and by avoiding it, React is actually a very good ally.

Javascript is a dynamically typed language and offers quite a bit of functional programming functionality itself. React and Redux use that to their advantage, rather than insisting everything be modeled by classes.


> React uses a virtual DOM to avoid unnecessary DOM manipulation, which is slow and error prone.

While it may be the case that the DOM is far from perfect, React is hardly the only way to avoid it. Even if avoiding DOM manipulation is necessary for your use case (which I strongly suspect is not true for like 95% of people who use React as the framework du jour), there seem to be significantly better-thought-out approaches, like Svelte, if you absolutely have to go down the "let's turn our browser workflow into a C-like one" road and churn out an opaque blob of code as a result. That also avoids unnecessary DOM manipulation, but unlike unnecessarily duplicating browser data structures, it is at least somewhat elegant, just like compilers are considered elegant compared to interpreters.

> React and Redux use that to their advantage, rather than insisting everything be modeled by classes.

Sure, but Javascript is not even based on classes. It traces its heritage back to Self which doesn't even have classes.


Avoiding direct DOM manipulation is a benefit in almost any case. A virtual DOM is now at the root of most popular UI frameworks and libraries, including Vue.js and Angular, as well as less popular WASM ones like Blazor (C#) or Percy (Rust).

I do remember writing complex jQuery components. React felt like a liberation to me...


Just because jQuery was bad doesn't mean that React was the answer. That would be a false dichotomy.


Interpreters are considered elegant, too. For example, Lua is a great little language and it runs everywhere precisely because it's interpreted.


Sure, there are many elegant interpreters. I'm not sure that patching the DOM from the changes in a redundant data structure is one of them. Even Blink's idea to instead pull the browser's own DOM into JavaScript definitely looks saner to me.


It's not a redundant data structure if you need it to figure out the necessary changes.
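The core idea can be sketched in a few lines (a toy diff over flat prop objects, purely illustrative, nothing like React's actual reconciler): keep a lightweight copy of what was last rendered, diff it against the desired output, and emit only the changes.

```javascript
// Diff two prop objects, returning the minimal list of patches to apply.
function diffProps(prev, next) {
  const patches = [];
  for (const key of new Set([...Object.keys(prev), ...Object.keys(next)])) {
    if (!(key in next)) patches.push({ op: 'remove', key });
    else if (prev[key] !== next[key]) patches.push({ op: 'set', key, value: next[key] });
  }
  return patches;
}

const prevRender = { class: 'card', title: 'Hello' };
const nextRender = { class: 'card', title: 'World', hidden: false };
console.log(diffProps(prevRender, nextRender));
// two patches: set title to 'World', set hidden to false; 'class' is untouched
```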

You are free to choose other approaches for your frontend projects. Just don't expect to get hired into larger teams easily.


> It's not a redundant data structure if you need it to figure out the necessary changes.

And why exactly can't you "figure out the necessary changes" without it?

> Just don't expect to get hired into larger teams easily.

Honestly, I see that as a win-win.


Seeing how much you like trolling, that is not a surprise.


Sincerely held opinions are by definition not "trolling". I just fail to see that as relevant, just as I fail to see the number of Big Macs being sold globally as being relevant to food choice criteria.


We've built a React/Redux application[1] that people keep telling us is very snappy, and we definitely haven't optimized as much as is possible, so from my experience React/Redux is not inherently slow and bloated.

[1] https://my.supernotes.app


FWIW it takes 7-8 seconds to load on my computer as well, on the latest Firefox, with a good CPU.

This has finally convinced me to not waste my time learning React, at least for now.


Yep, first load isn't very quick, as there are a lot of things that need to be loaded which are never going to be very small. However, first load only happens once. After that the assets should be cached in your browser and loads should be much faster.

There is definitely something to be said for faster first loads, but unlike many other sites on the web, ours is of course optimized for consistent/repeated use, so in the scheme of things first load is negligible compared to making sure it runs fast while actually using it 100s of subsequent times, which (I hope) it does.

Definitely wouldn't let that discourage you from learning React. If you want a smaller bundle size, you can use Preact[1], which is nearly a drop-in replacement for React's runtime but much smaller.

[1] https://preactjs.com/


It seems pretty snappy aside from the first page load (and refreshes, ...). Not perfect (some actions take a few frames sometimes), but not anything I'd spend more dev time on.

The first page load is nightmarishly slow. I tend to avoid services that pull in that much data because they're nearly unusable in low bandwidth scenarios (e.g., lots of stores or other commercial buildings made from steel, if you wanted to check a note while travelling, if your customers are among the 5-10% of the USA without home access to top speeds of >1MBps, ...).

As something of an aside, you're hijacking keyboard shortcuts that you don't actually use (at least, they don't appear in the help menu and don't seem to have any effect).

Also, it might be worth considering the privacy of the friend finder. When I add a username you instantly tell me their name if they're on your platform, even before they accept my request. On the one hand that isn't much different from twitter showing your name and handle together, but on the other hand that seems like surprising behavior for a note taking app, even one with collaborative features.


Thanks for the feedback! After first load the assets should actually all be cached in your browser, so subsequent loads will be much faster (including full page refresh). But yes the initial bundle size is one of those things we could probably spend more time optimizing for.

We recently released desktop apps (and will hopefully release mobile apps soon) where this is of course a non-issue.

Could you tell me which keyboard shortcuts you are having problems with? Our intent is definitely not to hijack anything we don't use.

Thanks for the note on the friend finder. Unlike many other platforms, we don't actually require users to have a surname, so we felt that if privacy was a concern with regard to name the best solution is for a user to only include their first name. But I can see how that still isn't perfect from a privacy perspective. We are working on improving the way friends work on the platform and will try to improve that as part of it.

Thanks again for all the feedback, very helpful.


I appreciate the response! Sorry to only be pointing out problems by the way. Plenty of other components do seem well done. I just know that in your shoes I'd want to know where users struggle.

> Initial load

I just poked around a bit, and I think the other commenters on mobile or Firefox might mostly just be noticing main.[key].js being slow. The initial load is structured as a couple of sequential requests followed by a lot of JS (device dependent, 0.5s - 7s+) and then a lot more requests fired off mostly in parallel.

> Refresh bandwidth, caching

Your heaviest assets are set with max-age=0, and the server often responds to an if-none-match header by regurgitating those assets. Refreshing after doing stuff in the app (or just waiting) for 3min reliably generates nearly as much latency as a cold load.

If you expect most mobile users to prefer your mobile app it might not matter, but a workflow that looks like doing something on your site, navigating to another site or app for a few minutes, and then coming back can often trigger a mobile browser to refresh the page -- especially on lower end devices where the cold load is most expensive.
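The usual fix, sketched here with a hypothetical helper (not your actual server code): fingerprinted bundles like main.[key].js can be cached "forever", since any change produces a new URL, while only the HTML entry point needs revalidation.

```javascript
// Choose a Cache-Control header based on whether the path is content-hashed.
function cacheControlFor(path) {
  const fingerprinted = /\.[0-9a-f]{8,}\.(js|css)$/.test(path);
  return fingerprinted
    ? 'public, max-age=31536000, immutable'  // 1 year; browser never revalidates
    : 'no-cache';                            // always revalidate (e.g. via ETag)
}

console.log(cacheControlFor('/assets/main.3f9c2ab1.js'));
console.log(cacheControlFor('/index.html'));
```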

> Native apps

Awesome!

> Shortcut hijacking

Mostly a misunderstanding on my part. It looks like the following things are happening:

(1) Keyboard hooks apply across the whole app, even in screens where they do nothing (like using CTRL+SHIFT+I in the settings) or when the elements they would apply to don't exist yet (like navigating between cards).

(2) I falsely assumed the cheatsheet of commands was comprehensive.

(3) Nearly every navigation keystroke (space, shift+space, arrows, ...) is actually used by something in the app, mostly stuff that doesn't work till you have cards and friends and whatnot.

The net effect for me was that it looked like all my favorite shortcuts were being swallowed and not doing anything useful in return. I don't have much useful advice here, but easy discoverability of all commands might be nice.

> Privacy

That makes sense. Reflecting back on exactly why I found that off-putting in the first place, I don't necessarily think the problem is with the search feature (some people probably disagree), but with the fact that it wasn't clear the information would be public when I was first asked for it. That might lend itself to a simpler solution.


The main page is not snappy at all, on my phone when I click on a link on the top bar it takes at least 2 seconds before it changes the page


I'm all for bashing unnecessary usage of Javascript, but I work with React a lot and I think there is room to make it a lot slower and more bloated, i.e. for what it is React is reasonably lean and performant.


Sounds like a serverless app on AWS



