My guess is that your original SYN did not go to the target, but was redirected somewhere close by. I'd look at the TTL value in the IP header of your first SYN-ACK, and play with such things as traceroute.
Such redirection is often done on a specific port basis, so that trying to access different ports might produce a different result, such as a RST packet coming back from port 1234 with a different TTL than port 443.
There is so much cheating going on with Internet routing that the TTL is usually the first thing I check, to make sure things are what they claim to be.
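If you want to see those TTLs yourself without reaching for tcpdump, here's a rough sketch of the idea, assuming Linux, the glibc struct names, and root (raw sockets need it). It just prints the IP TTL of incoming SYN-ACKs and RSTs by source port, so you can compare what port 443 returns against some random port:

```c
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* On Linux, a raw IPPROTO_TCP socket receives incoming TCP packets
       with the IP header still attached. */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
    if (s < 0) { perror("socket (needs root or CAP_NET_RAW)"); return 1; }

    unsigned char buf[4096];
    for (;;) {
        ssize_t n = recv(s, buf, sizeof buf, 0);
        if (n <= 0) break;
        if ((size_t)n < sizeof(struct iphdr)) continue;

        struct iphdr *ip = (struct iphdr *)buf;
        size_t hlen = ip->ihl * 4u;
        if ((size_t)n < hlen + sizeof(struct tcphdr)) continue;

        struct tcphdr *tcp = (struct tcphdr *)(buf + hlen);
        /* The TTL on the SYN-ACK (or RST) hints at who actually answered. */
        if ((tcp->syn && tcp->ack) || tcp->rst)
            printf("src port %5u: TTL %3u\n", ntohs(tcp->source), ip->ttl);
    }
    close(s);
    return 0;
}
```

This sniffs every incoming TCP packet on the host, so in practice you'd also filter by the source IP you're probing. Two different TTLs from the "same" machine on different ports is the tell.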
They are technically `undefined` according to the C standard, but every mainstream compiler implements them the same way. So much of the world's open-source code depends upon these behaviors that they're unlikely to change.
Using clang version 15.0, the first 2 produce no warning messages, even with -Wall -Wextra -pedantic. Conversely, the last 3 produce warning messages without any extra compiler flags.
The behavior of the first two examples is practically defined, even if it's undefined according to the standard.
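The original examples aren't quoted in this subthread, so purely as an illustration of the species: behavior the standard refuses to pin down, but that every mainstream compiler handles the same way (or documents):

```c
#include <stdio.h>

int main(void) {
    /* C17 6.5.7p5: right-shifting a negative value is implementation-
       defined. gcc, clang, and MSVC all do an arithmetic shift. */
    int x = -8;
    printf("%d\n", x >> 1);              /* -4 on every mainstream compiler */

    /* Reading bytes through unsigned char * is well-defined; the byte
       order you observe is implementation-defined (endianness). */
    unsigned int u = 0x01020304u;
    unsigned char *p = (unsigned char *)&u;
    printf("first byte: %02x\n", p[0]);  /* 04 on little-endian targets */
    return 0;
}
```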
Now, when programming for embedded environments, like for 8-bit microcontrollers, all bets are off. But then you are using a quirky environment-specific compiler that needs a lot more hand-holding than just this. It's not going to compile open-source libraries anyway.
I do know C. I write my code knowing that even though some things are technically undefined in the standard, they are practically defined (and overwhelmingly so) for the platforms I target.
Most people who write for typical desktop and mobile computers don't do C. They tend to do C++ or other, higher-level languages. Those who write C tend to write either quirky embedded code or code that is highly portable; in both cases, knowing about such undefined or implementation-defined behavior is important.
If you intend to rely on such assumptions, make them explicit, for example using explicit padding, stdint, etc... On typical targets like clang and gcc on Linux, it won't change the generated code, but it will make the code less likely to break on quirky compilers. Plus, it is more readable.
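For example, something along these lines (the struct and its field names are invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical wire format: spell out exact field widths and padding
   instead of assuming what the compiler does with "int", "long", and
   implicit padding. */
struct wire_header {
    uint32_t magic;    /* exactly 32 bits, everywhere */
    uint16_t version;
    uint16_t pad;      /* padding made explicit, not left to the ABI */
    uint64_t length;   /* 8-byte field lands on an 8-byte boundary */
};

/* Compile-time check (C11) that the layout matches the assumption. */
_Static_assert(sizeof(struct wire_header) == 16, "unexpected padding");

int main(void) {
    printf("header is %zu bytes\n", sizeof(struct wire_header));
    return 0;
}
```

On clang/gcc for x86-64 the explicit pad field changes nothing; on a compiler with odd packing rules, the static assert fails loudly instead of silently corrupting data.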
The first 4 are implementation defined rather than undefined.
That said, warnings do not necessarily mean that the code is invoking undefined behavior. For example, with `if (a = b)` GCC will generate a warning, unless you write `if ((a = b))`. The reason for the warning is that people often mean to test equality and write an assignment by mistake, so compilers warn unless a second set of parentheses is used to signal that you really meant to do that.
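Concretely (gcc needs -Wall for this one; clang warns by default):

```c
#include <stdio.h>

int main(void) {
    int a = 0, b = 7;

    if (a = b)    /* gcc -Wall: "suggest parentheses around assignment
                     used as truth value" */
        printf("a is now %d\n", a);

    a = 0;
    if ((a = b))  /* extra parentheses say "yes, I meant assignment":
                     no warning, and the behavior is perfectly defined */
        printf("a is now %d\n", a);

    return 0;
}
```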
Every mainstream compiler targeting a 32 or 64 bit platform.
Have we crossed the point where the majority of new microprocessors and microcontrollers sold each year are 32+ bit? Most devices I'm familiar with still have more 8- and 16-bit processors than 32- and 64-bit processors (although the 8-bit processors are rarely programmed in C).
Nobody (yet) has mentioned Microsoft PWB - Microsoft's Programmer's Workbench for their C compiler, around 1990. It's what all the Microsoft engineers themselves used when writing code for Windows, WinNT, OS/2, etc. It was essentially perfect for its time.
Ethernet doesn't use TCP/IP. Ethernet is its own network. It has nothing to do with TCP/IP.
Other things use Ethernet. Routers, when connected to each other, often use a local Ethernet network to communicate.
Think of the TCP/IP Internet as its own network, ignoring how routers physically talk to each other. Sometimes it's a direct link, a wire. Sometimes it's carrier pigeons. Sometimes it's WiFi. Sometimes it's Ethernet. Whatever it is, it's local to the hop between the routers and does not extend outside that.
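If it helps, here's a toy sketch of that nesting in C (the field names are mine, and real frames have trailers, VLAN tags, etc.): the IP packet is just opaque payload inside whatever the local network uses, Ethernet in this case:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* The MAC addresses only mean something on this one segment; the
   addresses inside the IP packet mean something end to end. */
struct eth_frame {
    uint8_t  dst_mac[6];     /* next hop on this wire, discovered via ARP */
    uint8_t  src_mac[6];
    uint16_t ethertype;      /* 0x0800: "my payload is an IPv4 packet" */
    uint8_t  payload[1500];  /* the IP packet rides here, untouched */
};

int main(void) {
    printf("Ethernet header: %zu bytes before the IP packet begins\n",
           offsetof(struct eth_frame, payload));  /* 14 */
    return 0;
}
```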
I think my confusion arises because, when I access some other PC on the local network, I connect to an IP and port. That's not on the wide Internet, but it uses IP to communicate. Therefore... where's Ethernet's role in that?
Yeah, it's not about engineers constructing systems. I mean, engineers do frequently pretend their creations fit the OSI model, but they work backwards to make it appear to conform to orthodoxy.
The issue is about education. People teach the model, or some variation of it. It teaches misconceptions, such as that Ethernet and the Internet are integrated into a single network stack rather than being independent networks. It leads to professionals in IT and cybersecurity who continue to hold these misconceptions.
> It teaches misconceptions, such as that Ethernet and the Internet are integrated into a single network stack rather than being independent networks.
It really opened up a whole world of understanding for me when I decided to go look up "RFC 1" to see what it was about.
Reading that — and the few low-numbered RFCs after it — made me realize that "the birth of the Internet" as we know it was essentially the moment of the deployment of the first network switch (the BBN IMP), isolating physical networks' electrical properties and collision domains from one another and rewriting packets between different signalling standards — thus rendering uniformity of physical/electrical media and LAN signalling standards completely irrelevant.
Until that moment, I had always thought of "the Internet" as a standard for LAN networking that grew like a social network until it overtook the world — where things like "Ethernet" and "TCP/IP" were the "Internet flavor" (DARPA flavor?) of those LAN-networking technologies; where "the Internet" was competing with other LAN technology suites, the likes of ChaosNet or AppleTalk or NetBEUI; where people gradually "switched over" from using whatever networking equipment and signalling protocols they had been using, to using Internet networking equipment and standards; and where the fact that people were finally settling on the same networking protocols across multiple Autonomous Systems allowed them to finally yoke those systems together into inter-networks, with more and more of that happening until we had one big hierarchical LAN called The Internet.
But no! The whole clever thing about "The Internet" is that it didn't do that! It just took all the random proprietary networks that people had built, and connected them together as black boxes, by coming up with a set of standards for how the networks would speak to one another at their border gateways, and leaving everything else up to implementation, with the assumption that border-gateway routers would be implemented by each LAN-technology vendor to translate between "Internet" signalling and whatever that LAN was doing!
And, in that view, the whole "layer separation" concept — of there being such a thing as an "IP packet" that bubbles up to userland separately from any delivery enveloping — wasn't fundamental to The Internet; in fact, existing protocols that were vertically integrated continued to work, being rewritten into something else when they reached the AS border gateway. The "layer separation" was an optimization to allow new "post-Internet" protocols to be passed "transparently" across AS border gateways without those gateways needing to know about them to rewrite them.
Rather, these "post-Internet" protocols, consisting of separate "LAN envelope" and "Internet payload" parts — and designed with transceiving logic on the endpoints such that the "Internet payload" could traverse the [lossy, laggy] Internet intact — could simply be re-enveloped from "LAN packets" into "Internet packets." And the responsibility for constructing/parsing the "LAN envelope" part would be taken away from userland, made the responsibility of the OS, so that "post-Internet" applications could be portable between computers that used different LAN technologies but wanted to speak the same Internet protocols.
But, of course, network stacks continued to support non-layer-separated communication for decades after the advent of The Internet; and AS border gateways continued to support rewriting these protocols "at the edge" for just as long.
It wasn't until much later that LAN networking equipment truly became commoditized. (I have a Windows 2000 manual that describes its support for token-ring networking. Windows 2000!) In a fully post-Internet era, when everyone is using Internet protocols, a "network" could no longer offer much to differentiate itself. So we started to see shifts to actually standardize on LAN technologies, with vendors all moving toward making the same stuff. At that point, border gateways began getting simpler, and companies like Cisco that had made their fortune in the AS-border-gateway space stopped being household names, instead being relegated to the NOC (as they had successfully pivoted into dumb-but-high-throughput enterprise LAN switching, and even-dumber-but-even-higher-throughput Internet backbone switching.)
It's wrong to think of them as different functions of the same network. Instead, they are different networks.
Ethernet and the Internet both provide the function of forwarding packets through a network to the target destination address. The MAC protocol and IP protocol provide the same theoretical function.
Where they differ is that Ethernet was designed to be a local network, whereas the Internet was designed to be an internetwork. All the Internet demands from local networks is that they get packets from one hop to the next. Ethernet does this for the Internet, but so do carrier pigeons.
The same is true for HTTP. Instead of thinking of it as a component of the network, think of it as something that rides independently on top of the network. In a hypothetical future where we've replaced the Internet with some other technology, the web would still function.
I discuss this point several times. I claim the model is not useful, specifically because the layering abstraction for the lower layers is a misconception rather than the truth.
For example, I describe how the OSI Model claims that layer #2 and #3 describe different functionality in the same network stack. I claim the opposite: that they describe roughly the same functionality in different networks.
Namely, both Ethernet and the Internet forward packets based upon addresses. The difference is that Ethernet does this locally while the Internet does this globally. Otherwise, the theoretical concepts of packets, forwarding, and addresses are the same.
The key concept in routing is hierarchy, which Ethernet does not have. It’s an extra dimension that changes the way the protocol and applications work at that layer.
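To make the hierarchy point concrete, here's a toy longest-prefix match in C (a hand-rolled table, nothing like a real router's FIB). An Ethernet switch would instead do an exact match on a flat MAC table; there is no "more specific" MAC address to prefer, hence nothing to aggregate on:

```c
#include <stdint.h>
#include <stdio.h>

/* One toy route: a prefix, its length in bits, and where it points. */
struct route { uint32_t prefix; int len; const char *next_hop; };

/* Longest-prefix match: the most specific matching route wins. */
static const char *ip_lookup(uint32_t dst, const struct route *t, int n) {
    const char *best = "default";
    int best_len = -1;
    for (int i = 0; i < n; i++) {
        uint32_t mask = t[i].len ? 0xFFFFFFFFu << (32 - t[i].len) : 0;
        if ((dst & mask) == t[i].prefix && t[i].len > best_len) {
            best = t[i].next_hop;
            best_len = t[i].len;
        }
    }
    return best;
}

int main(void) {
    const struct route table[] = {
        { 0x0A000000u, 8,  "if0" },  /* 10.0.0.0/8  */
        { 0x0A010000u, 16, "if1" },  /* 10.1.0.0/16: more specific */
    };
    printf("10.1.2.3 -> %s\n", ip_lookup(0x0A010203u, table, 2)); /* if1 */
    printf("10.9.9.9 -> %s\n", ip_lookup(0x0A090909u, table, 2)); /* if0 */
    return 0;
}
```

That "most specific prefix wins" rule is the extra dimension: it's what lets one /8 route stand in for millions of hosts, which a flat address space can't do.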
You can say a cube and a square are the same thing, but I don’t think that’s particularly useful pragmatically. Ditto for L2 and L3; if you abstract out the difference, they are indeed the same. But that’s not useful.
Now, you could say that time has brought evolutions / optimizations that blur layer boundaries, like VLANs at layer 2. But that doesn’t make the model less useful.
In my experience working alongside network engineers, they use “layer 2” as a synonym for Ethernet, “layer 3” as a synonym for IP (v4 and v6), “layer 4” as a synonym for TCP and UDP, and “layer 7” as a synonym for application protocols. They don’t talk about the other layers. It’s just jargon, without any deeper meaning.
I claim that by 1981 the OSI Model was already obsolete, that it didn't fit well, that it was not a good teaching tool, and that it has done more to befuddle than enlighten students. The problem is that if you learned according to the model, you are not aware of what you misunderstood.