Hacker News | danabrams's comments

I’m old enough to remember not having an iPhone and not feeling sexy.


Two types of sales philosophies: 1. It doesn't matter what you're selling, it's about the sales technique. 2. Develop deep domain and customer expertise.

The former is the scammy type, the latter is the type we love to work with.

But the same is true in any industry. Too many of us in technology are doing the technology equivalent of 1--becoming experts in C++ or React--instead of becoming deep domain and user experts.


In software, I like the person who knows C++ or React inside and out, and I like the person who understands the domain, UX, and such. I want both on the team.

I despise the guy who sells extended service contracts at the car dealership. I sure as hell don't want that guy selling software work because I won't be able to complete the work profitably and I'll be dealing with angry customers who don't trust me.


Here's a theory...

Although illegal now, San Francisco used to have a widespread practice of "key money"--a bribe you paid the landlord to choose you to rent an apartment that, due to rent control or other factors, was priced below market demand.

Because the landlord was capturing the extra value directly, a cultural practice of high broker fees never developed there, while it did in the east, where bribes were less common. Thus someone other than the landlord captured the excess value.

It's also entirely possible that the broker's fee is being illegally passed as "key money" to the landlord in a way that's harder to detect/litigate in NYC because it's not direct from the tenant.


The author leaves out what to me is the most compelling argument against static types: it is somewhat at odds with interactive development on a running system, as seen in Smalltalk and Lisp.

Now, not a lot of developers are really doing this. But it's still a good reason for those who are.


The article's author is using Blub static types, in which you're declaring that the variable that captures something from the cache is a Pet handle.

Under a properly modern static typing system, you let that be inferred:

  let var = cache.get("key");
var is of whatever type that the cache object returns. Even clunky old C++ can do this with auto.
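To make the point concrete, here's a minimal TypeScript sketch (the `Pet`/`Cache` names are hypothetical, echoing the article's example): the declared return type of `get` flows into the variable, so no annotation is needed at the call site.

```typescript
// A hypothetical typed cache: get() is declared to return Pet | undefined,
// so the compiler infers the variable's type with no annotation at all.
interface Pet {
  name: string;
}

class Cache {
  private store = new Map<string, Pet>();
  set(key: string, pet: Pet): void {
    this.store.set(key, pet);
  }
  get(key: string): Pet | undefined {
    return this.store.get(key);
  }
}

const cache = new Cache();
cache.set("key", { name: "Rex" });

// Inferred as Pet | undefined; nothing is "declared" by the caller.
const pet = cache.get("key");
```

The same shape works in Rust, C++ (`auto`), Kotlin, and most other languages with local type inference.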

That author will most likely die in some statically typed corner he painted himself into.


I’m reminded of Alan Kay’s observation that software is a pop-culture.

Simplicity is great, how can we combine it with the 17 other frameworks we saw on HN this week and have to use?


You wouldn’t use Astro with NextJS, but you absolutely would with react.

Astro is an SSR framework more tuned to generating static sites than to SSR with hydration. It uses the islands architecture instead of full-page re-hydration. So if you're generating a static site with a few React components sprinkled in, it's a good thing to use.

Because of the islands architecture, you can also mix and match component libraries. So one component can be react, one can be vue, one can be svelte, etc.

Next and Remix are both less focused on SSG than Astro. A lot of people are making very content-driven sites using React or Next—sites that aren't really, or shouldn't be, SPAs—and this is a great tool for content-driven sites that don't benefit from SPA-level interactivity (which is probably most sites using SPA frameworks).


How are Next and Remix less focused on SSR, I thought that was one of their main selling points? As well as static site generation. For example I use Next for a blog site that is SSG, works fine.

The islands architecture is interesting but in practice I doubt I'd swap between multiple component libraries.


Astro SSG and Next SSG have vastly different outputs. Astro's image component output is a prime example compared to Next with Astro being much, much cleaner. Astro's output in general is much cleaner, more streamlined, and less JavaScript heavy.

With the islands architecture, it's not that you would switch between different component frameworks constantly, but that you can choose which component framework you want to use on top of the same templating framework. Of course, you can mix and match, but that's not really the point.


If you haven't checked out Next.js in a while, the next/image output changed recently. It's just an img tag now, basically setting srcSet automatically on easy mode, plus the automatic optimization of images. Most of the props or modifications you can use are native <img> features. It's similar to Astro (which is great!).


Next JS hydrates the entire page as a react component, so it's SSR on initial site visit and then navigating from there requires rendering (and maybe you'll use SSR to get some props).

Astro is actually an MPA that allows some client side components, so it only requires you to render on parts of the page. I prefer that for content heavy sites because I'm not sure how much interactivity I need.


This is Next.js in the "Pages Router" world (e.g. everything prior to 13.4). Past 13.4, you can also use the "App Router", which is kind of like a framework in a framework. It uses React Server Components, which can run server-only without hydration. Thematically similar to islands.


Yes, these are pretty new though, and I hear some people are having issues with the App Router. Astro came before this release.


Remix is entirely SSR, so not sure what they meant. Next.js is static first, but definitely still supports dynamic. It started out as a dynamic, SSR framework.


Sorry, there was a typo. Astro is more focused on SSG than SSR. This is what happens when trying to comment on a phone keyboard first thing in the morning.


From context they are almost certainly referring to the city of Washington (DC), which is part of the northeast corridor described, and not the state of Washington, which is on the west coast.


For years I thought it was near Seattle or something. How did I get this mixed up? Feels like a Berenstain Bears kind of thing.

Thank you for the correction.


For 350 years of US history, Africans and their descendants were enslaved. Native Americans were ripped from their land and relocated, often with genocidal levels of casualties.

After that, these two groups were substantially discriminated against in law, and other races were added to the mix to be given fewer rights than others.

Today, there are huge disparities between outcomes for different races in large part due to this historical discrimination. There's also an ingrained culture of stereotyping and discrimination that's hard to lift. It doesn't matter if you're the first generation of Americans descended from African immigrants who came in the 1980s... you still are impacted by this legacy.

The concept of affirmative action was to specifically counteract the effects of these negative, historical circumstances and provide a countervailing effect.

I can't speak to other countries, but in the US, it is definitely the case that poor people of color have a harder time getting ahead than equally poor white people. (I suspect it's similar elsewhere, but we are also a pretty racially diverse country, so the effect is larger)


> I can't speak to other countries, but in the US, it is definitely the case that poor people of color have a harder time getting ahead than equally poor white people.

Then why are the white people equally poor? And does it matter where they live? For example in a major city compared to a dying small town where industry has left? That's a really broad claim to make. Would it hold true in Appalachia, for example?


The white rural poor are the absolute lowest class in the US. Late-night show hosts openly joke about fantasizing about their deaths as one less vote for the other guy. It's probably the only class in the US whose suffering the rest of US society relishes.


> it is definitely the case that poor people of color have a harder time getting ahead than equally poor white people

Do you have any sources for that?


Do I have any sources that systemic racism is real?

I mean, there's a large body of evidence (I personally like the economics methodology of this study, which has been repeated many times: https://www.shrm.org/hr-today/news/hr-magazine/pages/0203hrn...).

But just like many will never be convinced that vaccines are safe and the earth is round, many will never be convinced that racism in the US is real, I suppose.


I once failed an undergraduate student because they argued that racism ended in 1965 and that racism did not exist after that. It's like they didn't pay attention in class at all.


The complaint that at ideal viewing angles, the resolution will only be 720p is silly.

720p is fine for watching movies, even if it's not home-theater perfect, and it's way better than the alternative of watching on a terrible IFE seatback (which probably gets the aspect ratio wrong).


The M in MVC has come to mean “data model,” but it originally referred to the “mental model” of the user. What kind of thing are we trying to manipulate and what is the user’s mental model of such a thing?

How about a bank account? A mental model of a bank account would include useful operations like deposit, withdraw, transfer, checkBalance, and these would be the methods on the object. The data schema and the persistence would be necessary, of course, but an implementation detail. Models were smart and mapped to human perceptions of a thing, rather than dumb data persistence layers.
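A quick sketch of what that looks like in practice (TypeScript here; the internal cents representation is just an illustrative implementation detail): the public methods mirror the user's mental model, and the stored state stays private.

```typescript
// Sketch: the model's interface mirrors the user's mental model of an
// account. How the balance is represented is a private implementation
// detail that can change without touching the interface.
class BankAccount {
  private balanceCents = 0; // persistence detail, hidden from callers

  deposit(amountCents: number): void {
    if (amountCents <= 0) throw new Error("deposit must be positive");
    this.balanceCents += amountCents;
  }

  withdraw(amountCents: number): void {
    if (amountCents > this.balanceCents) throw new Error("insufficient funds");
    this.balanceCents -= amountCents;
  }

  transfer(to: BankAccount, amountCents: number): void {
    this.withdraw(amountCents);
    to.deposit(amountCents);
  }

  checkBalance(): number {
    return this.balanceCents;
  }
}
```

Swapping the private field for an ORM row or a remote call wouldn't change a single caller.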

These kinds of business-logic operations have often been moved into controllers, which couldn't be further from the original intention.

MVC, Smalltalk, and OOP were all about stopping and thinking about the way humans think while interacting with computers. It was about designing nice interfaces for interaction based on human expectations, not database requirements. Internal object schemas and data persistence were implementation details of an object that could—if you did it right—be easily changed without changing the interface.

But we can’t help ourselves, and instead OOP today is a world of getters and setters with a little bit of data validation (if we’re lucky) and models are just a schema plus a generic data persistence interface (maybe an orm). And the business logic exists in the controller, the least important, least reusable component of the architecture.


> it originally referred to the “mental model” of the user.

Do you have a source for that? I agree that that’s a useful way to think about MVC (somewhere downthread someone wrote that the model should be like a headless version of the application, which is similar), but I’m curious about the original expressions of that idea.


Source: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

The source is Trygve Reenskaug, the originator of the idea.


Since the link won't open for some, here's the relevant bit (the first two lines of the abstract of the paper linked to):

"MVC was conceived in 1978 as the design solution to a particular problem. The top level goal was to support the user's mental model of the relevant information space and to enable the user to inspect and edit this information."


for some reason i couldn't open the link... maybe it got truncated?


It's a pdf. It opens for me when clicking. Are you on a browser that can't easily open PDF?


hmm when i tried again it worked, thanks!


Thanks for this insightful comment.

I think of the model as being the perfect API for your system that you would happily use if you were in a REPL or in a command line.

Its primary goal should be elegance in doing the things a user would want to do with your system, from a programmatic perspective.

The programmatic interface is a user interface that's not necessarily graphical.

Very few people treat OOP as interacting objects with message passing as in Smalltalk and I think we've lost something.


I agree, and I wonder if this is just what happened as an accident of history, or so strong a tendency of human nature that we couldn't have done it any other way. Maybe the simple data model is the mental model of computing that couldn't be easily changed.


I feel interactions between multiple objects are more complicated than looping over SQL query results, ORM logic, or object-graph traversal logic.

Arbitrary message passing between objects is like an N-way network of communication. The number of participants in an object graph increases the complexity of OOP systems. Kind of like a distributed system since every object is in a different state.


Yes, absolutely. Reenskaug has stated that MVC was designed for simple operations (and co-designed DCI as an architecture for more complicated ones). And a number of the early OOP people, including Alan Kay, have said something to the effect of "Erlang is the only true OOP language."

I did a deep dive reading the early papers and watching the lectures from the 70s, 80s and 90s on this a few months ago. The early Xerox employees developing smalltalk seem to have originally thought the idea of encapsulation would compose at all levels. That as object interaction got more complicated, you would simply group a few related objects together inside a larger object, and the rest of the application would use that encapsulating object's interface, and you could go infinitely deep that way while managing the complexity. Later, in the 80s, Kay would talk about writing objects in smalltalk then gluing them together with a glue language (usually Mesa C), because he felt smalltalk worked well for programming in the small but not the large.

Again, I think Erlang got a lot right here, using a different model for programming in the small (functional) vs programming in the large (actors/OTP).

But to hear the OOP pioneers talk about objects, they consistently describe the objects not in terms of data but in terms of behavior, similar to Erlang actors being processes. Each object is supposed to represent a “computer” and the network of objects is supposed to work like a distributed system.

I know which mental model I like better, but objectively, it’s a very different concept from what most developers think of as OOP today (although very similar to microservices).
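The "objects as little computers" idea above can be sketched in a few lines (TypeScript; the `Counter` object and message shape are made up for illustration): the only way in is a single `receive` entry point, much like a Smalltalk message send or an Erlang mailbox.

```typescript
// Sketch: each object is reachable only through receive(), like a
// small computer on a network. Internal state is invisible; behaviour
// is entirely defined by which messages it understands.
type Message = { selector: string; args: unknown[] };

class Counter {
  private count = 0; // nobody outside can see or set this directly

  receive(msg: Message): unknown {
    switch (msg.selector) {
      case "increment":
        this.count += 1;
        return undefined;
      case "value":
        return this.count;
      default:
        // the Smalltalk-style "does not understand" escape hatch
        throw new Error(`does not understand: ${msg.selector}`);
    }
  }
}

const counter = new Counter();
counter.receive({ selector: "increment", args: [] });
counter.receive({ selector: "increment", args: [] });
const value = counter.receive({ selector: "value", args: [] }) as number; // 2
```

Replace the synchronous call with a queue and you're most of the way to an actor, which is why the Erlang comparison keeps coming up.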


This is basically Task Driven Development. You start from a list of user affordances and workflows, try to make them as clear, predictable, and robust as possible, and work backwards. It's top down from the user perspective, not bottom up from the developer/library perspective.

Apple's take on this is the reverse. It enforces UI conformity across apps because there are only so many UI objects in the library. You can build your own, but it's much harder than bolting together what's there already.

This is good for a unified look and feel, and fine for many common applications. But IMO it's not really MVC.

On the web you regularly see applications which are half task driven but not very robust, and break if the user does something a little unexpected.

Example: I got a 2FA code from Namecheap yesterday on my laptop, didn't have my phone next to me, closed the laptop, found my phone in the main office, logged in on the desktop, and it let me right in without the code.

TDD is really a kind of behavioural programming. Instead of tracing code paths you're tracking user behaviours and making sure the paths through the app match behavioural expectations with some sane leeway.

The original conception of MVC fits that nicely. What we have today - not so much.


This idea of "multiple objects" operating as "one object together" is an idea I really like too.

a) A good object orientated API is enjoyable to use, if it maps well to what you want it to do. Look at the developer productivity of ActiveRecord, Django ORM, SqlAlchemy or Hibernate. The object graph model is kind of fun to work with and many developers prefer it to working directly in SQL.

b) Where object orientated APIs fall down is where you want behaviour that the underlying data graph model does not support. I am thinking of the OpenGL rendering pipeline or operating system APIs such as POSIX.

c) The Document Object Model in web browsers and the Component Object Model (Microsoft Windows, Word, Visual Basic, the Office suite, etc.) are both dreams that everything on the screen and on the computer was object orientated and could be interacted with through a simple API. Most cross platform GUI frameworks are object orientated even if the underlying graphical APIs are procedural. For example, the Win32 API is procedural.

d) There is an impedance mismatch between object orientation, procedural (C programming), and data structure driven (including data-driven or data orientated, relational tables, or matrices) styles.

e) UML entity relationship diagrams are another dream that people had to model objects and relationships in computer systems that didn't pan out completely.

I have a number of ideas in this space. I think graphical user interface development is in its infancy still and all the approaches we use have shortcomings of some sort, and I say this as a devops/software engineer, someone who only did a small amount of frontend development in previous roles. I've been loosely following the Rust desktop development progress.

I desire system behaviour to be trivially easy to transform from one architecture to another architecture. This is my dream.

Take, for example, Postgres' process-orientated model, or an imaginary system that uses a thread per network socket that you want to refactor to multiple sockets per thread. The idea of "late architecture" means we should be capable of transforming one model into another slightly different model without dramatic, destructive code changes.

a) How do you model behaviour without tying it to a mechanism, so that it can be refactored easily? In Java we have interfaces; in Rust we have traits.

b) If you have an extremely rich data model structure, is it flexible enough for future behaviour to be supportable? I feel that introducing plurality (1 to many) (many to many) is a pain point.

One of my ideas is that if you were to log the behaviour of a program with timestamps and implement a program that implements the same log, then its behaviours are identical.


> Most cross platform GUI frameworks are object orientated even if the underlying graphical APIs are procedural. For example, win 32 API is procedural.

Well, win32 is kind of object-oriented, in a way, even though it doesn't always map cleanly to OOP languages.

Windows are basically objects whose methods you call through SendMessage. There is even inheritance by replacing the window procedure and delegating to the parent procedure. There is polymorphism in that many kinds of windows support the same messages (e.g. WM_PAINT) and can decide how to handle them.
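That subclassing pattern can be sketched outside of win32 too (a hypothetical TypeScript analogy, not real win32 code): a window is just a procedure that handles messages, and "subclassing" means replacing the procedure while delegating unhandled messages to the previous one, like CallWindowProc.

```typescript
// Analogy sketch (not real win32): a window procedure handles string
// messages; subclassing wraps the previous procedure and delegates
// anything it doesn't handle itself.
type WndProc = (msg: string) => string;

const defaultProc: WndProc = (msg) => `default handled ${msg}`;

function subclass(
  prev: WndProc,
  handler: (msg: string, prev: WndProc) => string,
): WndProc {
  return (msg) => handler(msg, prev);
}

const myProc = subclass(defaultProc, (msg, prev) => {
  if (msg === "WM_PAINT") return "custom paint";
  return prev(msg); // delegate, like calling the parent procedure
});

const painted = myProc("WM_PAINT"); // "custom paint"
const closed = myProc("WM_CLOSE");  // "default handled WM_CLOSE"
```

Override-and-delegate is exactly the inheritance story, just spelled with function wrapping instead of a class hierarchy.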


> One of my ideas is that if you were to log the behaviour of a program with timestamps and implement a program that implements the same log, then its behaviours are identical.

This sounds a lot like event sourcing.
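A minimal event-sourcing sketch (TypeScript, with an invented two-event domain): state is never stored directly, only derived by replaying an append-only log, so any program that replays the same log reaches the same state, which is the parent comment's identical-behaviour idea.

```typescript
// Minimal event sourcing: the log is the source of truth; current
// state is a pure fold over the events.
type AccountEvent =
  | { type: "deposited"; amount: number }
  | { type: "withdrawn"; amount: number };

function replay(log: AccountEvent[]): number {
  return log.reduce(
    (balance, e) =>
      e.type === "deposited" ? balance + e.amount : balance - e.amount,
    0,
  );
}

const log: AccountEvent[] = [
  { type: "deposited", amount: 100 },
  { type: "withdrawn", amount: 30 },
];

const balance = replay(log); // 70
```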


>> These kind of business logic operations have often been moved into controllers, which couldn’t have been further from the original intention.

Moving them out of the object model without putting them in the controller is actually a good thing IMO. I don't want to test controller plumbing or data persistence, but I do want to focus on the business logic, so simple controllers that route to smart objects with dumb data models help. I agree the smart parts don't belong in the controller, but I don't think it's as bad as you make it sound.


There’s no reason your model can’t have an abstraction layer that contains the business logic and a concrete layer that has the persistence (indeed, if it’s complex at all it should).

