I will cheer for the success of ETH2 as well. The reason I like Lightning better is that the engineer in me wants to freeze the base layer and do the experimental layers in isolation. Internet infra is all layered for good reason.
Ethereum is also layered. ZK Rollups are layer two and not built by the Ethereum devs. It's an isolated engineering layer with lots of experimentation going on with optimistic rollups, ZK Rollups, state channels, etc.
The problem with Bitcoin is that it's not technically sophisticated enough to support proper L2s which results in poorly engineered solutions like Lightning and centralized solutions like Liquid.
With Bitcoin, more work needs to be done on L1 for the ecosystem to support layers above it, while Ethereum could freeze development forever and still support flexible, fast, and decentralized layers on top.
The OSI layering model of internet infra is largely a myth, a simplification for students. It's similar to how C is called "portable assembly" despite compilers being far more complex in practice.
Layers 2 and 3 are not real - most networks today merge their functions in the same devices. (In the OSI model, switches are illegal; you can only have hubs and routers.)
Layers 4 and 7 are ceasing to be real - HTTP/2 is both a session protocol and an application protocol, and that's before we even get into things like DoH.
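To make the DoH point concrete, here is a minimal sketch of a DNS lookup carried entirely over HTTPS, using only the Python standard library and Cloudflare's public JSON resolver endpoint (the endpoint and response shape are assumptions based on its public DoH service; any resolver speaking the same JSON API would do). Name resolution, which the layer diagram puts well below the application, is here just another HTTPS request:

    import json
    import urllib.request

    def doh_lookup(name, record_type="A"):
        # DNS query expressed as an ordinary HTTPS GET: an application
        # protocol nested inside another application protocol.
        url = f"https://cloudflare-dns.com/dns-query?name={name}&type={record_type}"
        req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
        with urllib.request.urlopen(req) as resp:
            answer = json.load(resp)
        # Return the record data from the "Answer" section, if any.
        return [record["data"] for record in answer.get("Answer", [])]

    print(doh_lookup("example.com"))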
I disagree. The model was supposed to be universal, but it fell apart as soon as people started doing anything slightly unanticipated with it. It's simultaneously overengineered and fragile, not providing enough insight to justify its complexity - rather like OSI itself. There's a reason the OSI protocol suite was a failure in the real world.
That's fair and I agree that the OSI model is not useful for a precise understanding of deployed networking technology.
I do think it is useful as an architectural device or a conceptual design goal -- i.e. a model to model your models on. :)
But I also concede that part of its teaching value is that it is a failure in practice.
It was a formalization of the ad hoc (successful!) design strategies of early networking. I see echoes of it everywhere, most obviously in the Linux kernel, and I think it's valuable for that.
The usual criticism of the OSI model is that there are grey areas and dependencies, where lower layers bleed into higher layers (e.g. L2 switching, which can work a lot like routing in modern hardware) and higher layers are tightly bound to lower layers, making the distinctions unclear.
The purist in me wants to agree, but I think the model is too useful to disregard casually.
The value comes from the abstraction, and like Newtonian physics, it is a great model -- until it isn't.