CS is built on abstractions, which make things more modular and easier to think about.
A data structure can be seen as an interface, a logical structure, a physical layout, or an encoding. When you teach data structures, you have to start somewhere. The logical structure is usually a good choice, because it carries the key ideas of the structure. If you cram in too many details and too many levels of abstraction, you are just going to confuse the audience.
A data structure can be seen as a mathematical object, with attendant definitions of its structure and operations, which is useful for formal reasoning; or as an implementation, an actual executable artifact encoding those structures and operations into a particular runtime, which is useful if you actually want to write and run some code.
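Concretely (a rough sketch of my own, names made up): mathematically, a stack is a finite sequence with laws like pop(push(s, x)) = (s, x); an implementation commits those laws to a particular runtime and representation.

    # Mathematical view: a stack is a finite sequence s of items, with
    #   pop(push(s, x)) == (s, x), and pop undefined on the empty sequence.
    # Implementation view: the same laws, encoded into a Python runtime.
    class Stack:
        def __init__(self):
            self._items = []  # representation choice: a dynamic array

        def push(self, x):
            self._items.append(x)

        def pop(self):
            if not self._items:
                raise IndexError("pop from empty stack")
            return self._items.pop()

The first view is what you reason about; the second is what you run.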
What you're calling a "logical structure" is not really a thing. It's a sort of outline or mnemonic, usually for the mathematical object but sometimes for the program. On its own, it's insufficient for either mathematical or practical use. And when the outline fails to capture the right details (as it fails here), it's also harmful to anyone trying to understand either view of the data structure.
That's a narrow point of view. It would make minor variations of the same structure count as different data structures, which is not a very useful approach.
The logical structure is the heart of a data structure. It can be understood as the set of invariants the structure maintains. Those invariants determine which operations some version of the data structure can support efficiently. Layout and encoding choices affect the performance of those operations. The interface determines which operations are available, but it's often a poor match for the actual structure.
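To make that concrete, here's a minimal sketch (mine; the names are hypothetical) of one invariant, "keys are kept in sorted order". The invariant is what licenses binary search; the flat-array layout is what makes insertion linear. A balanced BST maintains the same invariant in a pointer-based layout and pays different costs.

    import bisect

    class SortedKeySet:
        # Logical structure / invariant: self._keys is strictly increasing.
        # The invariant makes contains() O(log n);
        # the flat-array layout makes insert() O(n).
        def __init__(self):
            self._keys = []

        def insert(self, k):
            i = bisect.bisect_left(self._keys, k)
            if i == len(self._keys) or self._keys[i] != k:
                self._keys.insert(i, k)  # O(n) element shift: a layout cost

        def contains(self, k):
            i = bisect.bisect_left(self._keys, k)  # binary search relies on the invariant
            return i < len(self._keys) and self._keys[i] == k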
The lines between the logical structure, the layout, and the encoding are blurry, but the concepts are useful. For example, if two data structures have the same logical structure and layout, they can often share the same serialization format. That implies there should be a simple and very efficient conversion algorithm between the two, which can prove useful if you sometimes need to support a different set of operations efficiently.
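For example (again a sketch of my own, with hypothetical names): a sorted array and a balanced binary search tree share "keys in sorted order" as a serialization format, so converting in either direction is a single linear pass.

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def bst_to_sorted(node, out=None):
        # Serialize: an in-order traversal emits the shared format, a sorted sequence.
        if out is None:
            out = []
        if node is not None:
            bst_to_sorted(node.left, out)
            out.append(node.key)
            bst_to_sorted(node.right, out)
        return out

    def sorted_to_bst(keys, lo=0, hi=None):
        # Deserialize in O(n): the middle key becomes the root, the halves recurse.
        if hi is None:
            hi = len(keys)
        if lo >= hi:
            return None
        mid = (lo + hi) // 2
        return Node(keys[mid],
                    sorted_to_bst(keys, lo, mid),
                    sorted_to_bst(keys, mid + 1, hi))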
I don't think "computers have caches that are faster to access than main memory" is just an implementation detail, though; if you're designing data structures for a specific kind of storage, you should teach the properties of that storage first and the data structures after.
If you are designing data structures for a specific kind of storage, you have hopefully studied the topic beyond the first- or second-year algorithms and data structures class. Like most introductory classes, that class mostly teaches you concepts and ideas.
My job is largely about designing and implementing new data structures. Those four levels of abstraction are the ones I've personally found useful. None of them deals with implementation details, because they are all abstractions; implementation details are an orthogonal concern that is relevant at every level.
In the CS curricula I'm familiar with, the introductory algorithms and data structures class is usually a prerequisite for the core systems classes. In order to learn how computers really work, you must first be familiar with the key ideas of organizing and manipulating data. Designing algorithms and data structures for real computers is an advanced topic few students pursue, because most are more interested in software/systems/theory/ML.