"but that's a decision based on cheap labor, not one that's inherent in smartphone design"
This is the heart of the matter. The US has abandoned those skills because of cheap labor in Asia. An example from the story about touch screen tests: they're employing disposable workers to do pinch-and-zoom testing by hand, something easily automated with a simple machine and image comparisons. How sad. This is an actual regression in technology.
If the US wants to get manufacturing back, the only areas that matter are electronics and, to a lesser extent, machinery. See this chart.[1] That's an achievable goal.
Here's a useful smartphone that could become big:
- Solid state battery that will last at least 5 years.
- 5 year full warranty.
- No connectors. Inductive charging only.
- Screen as unbreakable as possible.
- Sealed unit. No holes in case. Filled with inert gas at factory.
I've had to deal with too many Eloi who are in desperate need of learning about the benefits of a solid, wired connection. If this story moves the needle on that while neglecting Ethernet's long history of physical media, that's fine with me.
If any of those are reading this: your WiFi makes you sound bad in conference calls and video meetings, and people think you're a low-status worker who can't afford a good laptop or internet. Hurry up and get an Ethernet thingy.
> Meanwhile a protocol like SPI is ridiculously simple
Yes, it is. It was intended to require as little silicon as possible, to minimize the cost to the transistor budget. SPI doesn't contemplate power supply, hot-plug, discovery, bit errors, or any of a host of other affordances you get with USB.
I think there is some value for software developers in understanding SPI and the idioms hardware designers use with it. Typically, SPI is used to fill the registers of peripherals: the communication is not the sort of high-level, asynchronous stuff you typically see with USB or Ethernet and all the layers of abstraction built upon them. Although there is no universal standard for SPI frames, they do follow idiomatic patterns, and this has proven sufficient for an uncountably vast number of applications.
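To make the register-fill idiom concrete, here's a minimal sketch in Go of a mode-0, MSB-first register write over bit-banged SPI. The Pin interface and printPin type are invented for illustration (on real hardware they'd be backed by GPIO registers or a vendor HAL), and the register address is hypothetical:

    package main

    import "fmt"

    // Pin stands in for a single GPIO line. On real hardware this would be
    // backed by a memory-mapped register or a vendor HAL; here it is only
    // illustrative.
    type Pin interface {
        Set(high bool)
    }

    // printPin just logs transitions so the sketch runs anywhere.
    type printPin struct{ name string }

    func (p printPin) Set(high bool) { fmt.Printf("%s=%v\n", p.name, high) }

    // writeRegister clocks out an 8-bit register address followed by one data
    // byte, MSB first, in SPI mode 0: MOSI is set while SCK is low and the
    // peripheral samples it on the rising edge.
    func writeRegister(cs, sck, mosi Pin, addr, value byte) {
        cs.Set(false) // assert chip select (active low)
        for _, b := range []byte{addr, value} {
            for bit := 7; bit >= 0; bit-- {
                mosi.Set(b&(1<<bit) != 0)
                sck.Set(true)  // peripheral samples MOSI on this edge
                sck.Set(false) // return the clock low for the next bit
            }
        }
        cs.Set(true) // deassert chip select; transfer complete
    }

    func main() {
        cs, sck, mosi := printPin{"CS"}, printPin{"SCK"}, printPin{"MOSI"}
        // Hypothetical example: write 0x80 to a control register at address 0x0B.
        writeRegister(cs, sck, mosi, 0x0B, 0x80)
    }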
I feel like you could support hot-plug, discovery, and bit errors with a protocol orders of magnitude simpler than USB, something you could bitbang on an ATTiny45. (And without Neywiny's list: 'bit ordering, half-duplex, the 4 modes, chip select to first clock setup times, bits per word, "strobing" nCS between words.' Those incompatibilities are mostly just people taking shortcuts.) And, unless you're talking about USB-C PD, which isn't involved here, the power-supply question isn't really related to the software complexity; it's just a question of making the power and ground traces on the USB-A plug (and the corresponding parts of other connectors) longer than the data traces so they make contact first.
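As a thought experiment only (every field, name, and command code below is made up), the kind of frame such a protocol might use can be tiny: a sync byte, a command, a length, a payload, and a CRC-8 for bit-error detection, with one command reserved for discovery:

    package main

    import "fmt"

    // crc8 is a plain bitwise CRC-8 (polynomial 0x07), enough to catch most
    // bit errors in a short frame.
    func crc8(data []byte) byte {
        var crc byte
        for _, b := range data {
            crc ^= b
            for i := 0; i < 8; i++ {
                if crc&0x80 != 0 {
                    crc = (crc << 1) ^ 0x07
                } else {
                    crc <<= 1
                }
            }
        }
        return crc
    }

    // Hypothetical command codes for a toy hot-pluggable protocol.
    const (
        cmdEnumerate = 0x01 // "who are you?" -- the discovery request
        cmdRead      = 0x02
    )

    // frame builds [sync, cmd, len, payload..., crc]: small enough to bit-bang
    // on a tiny MCU, yet it carries discovery and error detection.
    func frame(cmd byte, payload []byte) []byte {
        f := append([]byte{0xAA, cmd, byte(len(payload))}, payload...)
        return append(f, crc8(f))
    }

    func main() {
        fmt.Printf("% X\n", frame(cmdEnumerate, nil))
        fmt.Printf("% X\n", frame(cmdRead, []byte{0x10, 0x04})) // read 4 bytes at 0x10
    }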
You couldn't make it quite as simple as (a given flavor of) SPI, but something close to I²C should be feasible.
I never thought about making hot pluggable SMBus peripherals. It's an interesting idea. Many motherboards even have headers broken out, and of course operating systems already have drivers for many SMBus peripheral types. LoFi USB.
We have hot pluggable I2C at home. Every HDMI port (and pretty much every DP port with a passive adapter) has I2C on the DDC pins. The port also provides 5V 50mA so your MCU doesn't need external power. Example: https://www.reddit.com/r/raspberry_pi/comments/ws1ale/i2c_on...
Perhaps there is a place for a simpler alternative. My comment was pretty tangential to this discussion about the merits of SPI vs USB vs whatever. My point is that I believe software developers can get some benefit from understanding how components can be integrated using a primitive as simple-minded as SPI. Note that I used the qualifier "some" again. I don't offer any revolutionary insights, but if you survey how SPI is used in practice, you'll learn some things of value, even if you never use SPI yourself.
I think there's something like a dual of the uncanny valley when it comes to protocol complexity vs adoption. Really simple like UART, I2C, or SPI, and engineers will adopt it on their own. But once you start wanting to add some higher level features, engineers would just as soon reinvent their own proprietary schemes for their own specific needs (the "valley"), and the network effects go away. So to create a more popular protocol you end up with a design committee where everyone piles in their own bespoke feature requests and the thing ends up being an ugly complex standard that nobody really likes. But at least it gives it a shot at wider adoption to prime the pump of network effects. (Or maybe it's more akin to how Java beats the Lisp Curse by using the social effect of having to raise an army to do anything?)
The main reason it's near impossible to bit-bang USB is that all devices are required to use one of a few fixed data rates (1.5, 12, or 480 Mbit/s), unlike SPI and I²C, which allow variable/dynamic clock rates.
If you simply removed this restriction, bit-banging USB would become trivial, even with all the other protocol complexity.
Though, I think USB made the right call here. The requirement to support any clock speed the device requested would add a lot of complexity to both hosts and hubs.
Only supporting a few fixed clock rates makes certification and inter-device compatibility so much easier, which is very important for an external protocol. Supporting bit-banging just isn't that important a feature, especially when the fixed clock rates really aren't that hard to implement in dedicated silicon.
USB generally needs a crystal or at least a ceramic resonator to meet its timing precision specs, though apparently Dmitry is getting by without one here. This commonly adds extra cost to USB implementations, because dedicated silicon isn't sufficient; you also need some precisely tuned silicon dioxide or similar.
In V-USB, usbdrv/*.[ch] contains 1440 unique lines. The bitbanging stuff is mostly in the *.S and *.inc files, so correct me if I'm wrong, but I think this is roughly the non-timing-related complexity imposed by the USB protocol stack. (This division is not perfect, because there are things in e.g. usbdrvasm.S which have nothing to do with bitbang timing, but I feel like it's a reasonable approximation.) The remaining complexity in, say, examples/hid-mouse/firmware/main.c is only a few dozen lines of code.
And that's a USB device. Implementing a USB host is at least another order of magnitude more complexity.
You definitely don't need 1000+ lines of code to implement, say, the PS/2 mouse protocol. From either side.
So, while I agree that a lot of the difficulty of bitbanging USB results from its tight timing constraints, I don't agree that what's left over is "trivial".
I think, with appropriate adjustment of pin lengths (so that GND is always first to connect and last to disconnect), Apple Desktop Bus is hot-pluggable. There are even adapters for this purpose, made by hobbyists. May want to install a wee bit of software on the host, to occasionally force a rescan of the bus, but not strictly necessary.
USB served its intended purpose extremely well; being maliciously complex is a feature. The goal first and foremost was to put the PC at the center of the future digital world, to avoid a timeline where FireWire or something like it linked our devices together without a PC as an intermediary.
I would love a protocol like the one you outline, but could you use SPI as the physical layer and put the rest on top?
Isn't that basically what USB is? At least if you stick to USB 1. Obviously, since that time, it's expanded to cover a wider range of capabilities. It's a half-duplex serial line, just like I2C. Unlike I2C, it's asynchronous, like a UART.
Very true. And dealing with bit ordering, half-duplex, the 4 modes, chip select to first clock setup times, bits per word, "strobing" nCS between words, the list goes on. But when you see "USB 1.1 device" you know a large majority of what it can support and what it'll do.
While true, it does seem like the flexibility allows for good optimizations. For example, a lot of devices don't need addresses or memory maps, etc. So, sadly, while it's a pain to deal with, it does make things very fast and efficient.
Interesting. I recently tested a 6ft USB3 cable and an attached drive. The transfer of a 1TB file failed a few times (not sure of the details). This is strange, since the cable couldn't have been that bad (?) and the 16-bit CRC should have caught those errors (assuming an error will trigger a resend of data). Any ideas what the issue could have been? Does Linux provide a way to view the error rate?
I believe your confidence in the 16-bit CRC is excessive. There is a 1 in 65536 chance of a 16 bit CRC failing for certain types of corruption in 512 byte bulk USB packets, and there are about 2 billion packets in a 1TB transfer. If the BER is high, corruption of the transfer is not surprising.
A 6ft cable should be fine, assuming it is well designed, manufactured correctly, in good condition, and not in close proximity to high noise sources, such as SMPS. If any of those factors are compromised the BER will increase, and you will then be testing the rather limited capabilities of 16 bit CRC.
USB4 has a 32-bit CRC for data payloads for a reason. In the meantime, the #1 thing you can do is use short, high-quality cables.
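For a rough sense of the numbers above (the corrupted-packet fractions are made up for illustration; this is not a model of USB's actual retry behavior):

    package main

    import "fmt"

    func main() {
        const (
            transferBytes = 1e12        // 1 TB transfer
            packetBytes   = 512         // bulk packet size used above
            missRate      = 1.0 / 65536 // rough odds a 16-bit CRC misses a corrupted packet
        )
        packets := transferBytes / packetBytes // about 2 billion packets
        fmt.Printf("packets in transfer: %.2e\n", packets)

        // Made-up corrupted-packet fractions, standing in for different BERs.
        for _, corrupted := range []float64{1e-7, 1e-5, 1e-3} {
            undetected := packets * corrupted * missRate
            fmt.Printf("corrupted fraction %.0e -> ~%.3f packets slip past the CRC\n",
                corrupted, undetected)
        }
    }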
Yes, perhaps you are right; that's why I was asking about a way to see the error rate. I think it is also a failure of the USB standard, then, to allow a transfer to proceed if the BER (as indicated by CRC failures) is so high.
Glad to see that USB4 is fixing things, but one thing I worry about is whether this 32-bit CRC is a mandatory part of the standard that cannot be turned off or ignored by manufacturers, especially since it apparently is not transparent to users.
https://wiki.wireshark.org/CaptureSetup/USB says "Software USB capture captures URBs (USB Request Blocks) rather than raw USB packets." So Wireshark couldn't give you the CRC, which would be stripped away.
You could hypothetically use a really, really high-bandwidth oscilloscope (like 2 GHz, to view 480 MHz USB HS signals), but those are expensive. So you would have to resort to using an external USB sniffer... out of curiosity, I found someone made a sniffer that is basically a USB-capable microcontroller plus an FPGA and a USB PHY: https://hackaday.com/2023/06/13/cheap-usb-sniffer-has-wiresh...
They benefitted from it so hard they voted for the exact opposite with eyes wide open. Twice.
> Because now most Americans don't slave away in unsafe factories 7 days/week for dollars an hour.
Now they're collecting disability in their unsafe neighborhoods, getting morbidly obese while their substance abusing kids play vidya games in the basement into their 30s.
Yes, it's really like that. People want their factories and incomes back. I don't claim that anything happening here is going to deliver that, but that's the pitch they're voting for. To their credit, at least they're pursuing that in lieu of some UBI ideocracy made of fantasy money.
As for you: it's fine to point out all the ways they may be misguided and/or misled, but unless you have an alternative that doesn't amount to expecting everyone to somehow earn an advanced degree, and then discover it's next to worthless (even before "AI"), you're really not contributing much. So what do you have?
People who voted for that wouldn't want to work in factories with the working conditions and salaries of the Chinese factories that make everything they consume. They also don't support the unions that would make working in factories bearable. Even if factories somehow returned and paid reasonable compensation, that would make the products so expensive most Americans couldn't afford them. People would have to consume a lot less. Which may be a good thing for the planet, but I doubt that's what the voters are prepared for.
> They benefitted from it so hard they voted for the exact opposite with eyes wide open. Twice.
Have you considered the platform that the Republicans have actually been running on? Was it one of economic policy? Did you consider why they attacked DEI and minority groups (including LGBTQ)? Because they would not have won on this roughshod economic policy.
> They benefitted from it so hard they voted for the exact opposite with eyes wide open. Twice.
This conundrum, like so many others in public discourse, is downstream of the widespread but fundamentally incorrect belief in free will (which in turn is downstream of belief in supernatural powers, because free will sure as hell isn't explained by anything in nature).
Nothing is in anyone's control. There's no such thing as "eyes wide open". People's behaviors are 100% downstream of genetics and environment. Some people behave rationally some of the time, and to the extent they do so it is because the environment set them up to do that. There is absolutely no coherent reason to generalize that into the idea that most people vote (or do anything else) rationally.
I mean in their actual self-interest rather than, say, what they have been made to believe is in their self-interest.
> Deindustrialization and Nikefication in the past several decades isn't "rational" long-term behavior either.
Maybe, but I was responding to "They benefitted from it so hard they voted for the exact opposite with eyes wide open. Twice."
There's an implication here, and in a subsequent reply that people voting against their interests is "[t]he go to midwit rationalization for every electoral loss", that people exercised free will when they voted.
This is plainly incorrect, because free will quite clearly does not exist. No one has ever shown the kinds of violations in the laws of physics that would be required for free will to exist.
Since free will does not exist, there is simply no a priori reason to believe that people voted in their interests. People's voting decisions, like everything else they do, are out of their control. To the extent that they vote in a particular way that's good or bad for them, it's driven purely by luck and circumstances.
It is this a priori belief that people vote or act in their own interests that's the real "midwit rationalization".
> There's an implication here, that people exercised free will when they voted.
There's no such implication.
> This is plainly incorrect, because free will quite clearly does not exist.
> Since free will does not exist, there is simply no a priori reason to believe that people voted in their interests.
What are you even talking about.
People (and living beings in general) acting in their own self-interest - pretty much all the time - it is the most universal general principle of life if there ever was one. This doesn't require or involve free will.
How well a biorobot (no free will!) executes in pursuing its self-interests is the selection criterion.
Now, that people make mistakes pursuing their self-interests doesn't mean they aren't acting in their self-interest. Because they sure as hell are - all the frigging time! It's their whole firmware!
Deindustrialization / Nikefication all the way through the value chain, except for the very, very top step of the value add, hasn't been in their self-interest, and it isn't in the interests of their nation either.
It's only in the self-interest of short-term-thinking shareholders who min-max asset valuations at great cost to everyone else but themselves.
> People (and living beings in general) acting in their own self-interest - pretty much all the time - it is the most universal general principle of life if there ever was one.
Base evolutionary instincts to survive don't translate to humans living in complex modern societies acting in their self-interest.
>Base evolutionary instincts to survive don't translate to humans living in complex modern societies acting in their self-interest.
What are you talking about?
Base evolutionary pressures and instincts have translated into exactly that.
Complex modern societies, and emergent behaviors and strategies arise from agents acting in their own self-interest (organizing in groups or otherwise to further their goals).
The idea that not only do people not act in their self-interest, but that you - in fact - know better what's in their best interest, is truly some mid-tier thinking. Or that you have some unique ability to know what's in their best self-interest, but they... for some reason... don't.
Now it doesn't mean that acting in self-interest doesn't sometimes result in ruin, because it surely can!
That however doesn't mean that all these choices weren't made with self-interest in mind, front and center, despite people claiming otherwise.
The groups and societies that enact the winning, most sensible strategy, economic and industrial policy will win out.
Those, individually or in groups, that don't will go to the shitter and/or be selected out. It's that simple.
I think people with expertise and training do generally know what's in people's interest more than untrained people themselves, yes. I also think that the fact that this isn't blindingly obvious to most folks is at the heart of a lot of the rot in modern society.
> I think people with expertise and training do generally know what's in people's interest more than untrained people themselves
For Pete's sake, dude, people act in their own self-interests.
That includes so called "people with expertise and training", or more correctly put - credentialed people.
They worked towards getting those credentials (a fancy law or economics degree at a fancy university) not because they were interested in acting in the interests of the "untrained people". They just wanted a cushy, high-status, well-paid job.
What do you think governments are run by, generally speaking? People without "expertise" (cough, cough) and "training"?
No, they are run by people with "expertise and training" (i.e. credentialed people)!
The problem is that they mainly act in their self-interests (and the interests of their social group) first and foremost, not according to their expertise or lack thereof. And the people who vie for positions of power and status act in their self-interests and the interests of their social clique squared or cubed. Everything else is an afterthought.
>I also think that the fact that this isn't blindingly obvious to most folks is at the heart of a lot of the rot in modern society.
> Experience and training makes you better at things. What can I say.
The orange has a degree in economics, by the way (from an Ivy League uni, too). So you could say he has the credentials, the experience, and the training. You could even... dare I say... call him an expert.
Or you could just accept the obvious - any barely functioning middling brain can get credentials and become an "expert". And that they do. It is neither a competency nor an intellect filter.
Neither is there personal responsibility or real liability if they are wrong about their economic and other policies that lead to ruin (there's an endless list of examples of this in the past). Seen any heads on pikes lately? Yeah, me neither.
Nor are there incentives in place to think about what's in other "common" people's best interests. So why would they?
There's a long line of "expert economists", Paul Krugman among others, who advocated for the free-trade policies that directly led to the Nikefication and deindustrialization of the US. Now they are nowhere to be seen to take the credit, whoops!
The presumption that the credentialed ("expert") knows (or even cares, frankly) what's in other common people's best interest is completely baseless and extremely naive.
The credentialed "experts" being so incompetent and confidently wrong is what gave you the orange. Now orange is the "expert"! And you better listen!
No, I wouldn't call him an expert. I'd call him deeply incompetent and missing basic skills.
Simply having a credential is not enough. You need actual training and expertise — to be good at what you do. I'm thinking of all the scientists and bureaucrats who run things like the NIH, vaccine programs, and air quality/pollution control. Many people do not perceive those programs to be in their self-interest. But in reality they are, regardless of someone's personal opinion.
Instead of going "hmmm, they oppose green policies, which means pollution IS in their self-interest - i.e. they are probably from a coal mining town, working in a fossil fuel / petrochemical industry, or in an area with industrial outputs wherein their livelihood depends on pollution to a large extent".
Or maybe they can't afford an expensive electric vehicle and an old dirty gas guzzling clunker is the only means of transportation they have.
Or they move from the pollution-free countryside to a polluted, dirty city, not because they seek the pollution, but because the opportunities and jobs are more in their self-interest than... ODing on fenta in pristine clean air.
Naah, midwits don't do this.
They presume they are smart and everyone else is stupid and needs guidance from the expert (that would be me, the midwit, of course), and everything else is derived from that.
When the "expert" gets rejected on the basis of incompetence, or for not acting in people's self-interests, that always upsets the midwit, because the midwit always self-identifies as an expert. And rejection of the "experts" equals rejection of the midwit.
Of course, the midwit never has demonstrated competence (nobody doubts demonstrated competence!), all they have is credentials and university degrees and papers. This frustrates the midwit to no end.
Demonstrated expertise and competence is always outside their abilities and reach - they are far from somebody like John Carmack or Michael Abrash, who have many shipped products and whose code you can see. Nobody doubts their competency. All they have instead is some sort of paper that says "believe me, I'm an expert".
No matter what training, education or experience midwit has... he still is just a midwit at the end of the day.
As per the modus operandi of the midwit, you didn't demonstrate anything. You just disagreed.
I implore you to bring evidence where the demonstrated competency of John Carmack or Michael Abrash is called into question. Demonstrate how that is a phenomenon or a trend. And if you can't, I rest my case.
It is obvious to any reasonable person that "nobody" in that statement doesn't literally mean not a single entity among the 8 billion people on the planet.
You said "demonstrated competence" in general, not Carmack or Abrash.
My own competence has repeatedly been questioned, even though I've consistently delivered results on the teams I've been on. My body of work is almost all public so feel free to verify yourself. It turns out that how much you're questioned has a lot to do with race and gender — basically every single highly skilled minority I know has been in this position.
> You said "demonstrated competence" in general, not Carmack or Abrash.
I did specifically mention Carmack and Abrash to give a concrete example of what demonstrated competence is.
And to avoid having to argue some vague, watered down, abstract pedestrian notion of what "demonstrated competence" is.
>My own competence has repeatedly been questioned, even though I've consistently delivered results on the teams I've been on. My body of work is almost all public so feel free to verify yourself. It turns out that how much you're questioned has a lot to do with race and gender — basically every single highly skilled minority I know has been in this position.
It looks like my intuition has been on point here. You see yourself as a highly skilled, competent expert.
Except others apparently don't always share your self-aggrandizing perception of yourself.
Since you can't come to terms with this, hijinks ensue wherein you assume that everyone else is just stupid and irrational instead. Or you surmise that they reject your "competency and expertise" based on irrelevant immutable characteristics.
Did you ever work at a factory? I did. I would most certainly prefer to collect a pension and play video games (which I do now in retirement). Anyone would.
> Now they're collecting disability in their unsafe neighborhoods, getting morbidly obese while their substance abusing kids play vidya games in the basement into their 30s.
> People want their factories and incomes back
Sounds like what they really want is safety and hope for their futures. I'm not sure going back to the way things were - good or bad - is the way for society to move forward though.
Doomsaying prognostications, odd questions, free will talk for some reason, evidence free assertions about voters and their interests, doubts and fears...
And precisely 0.0 alternatives offered.
I can't imagine anyone being surprised that we've ended up with Trump et al. When all you offer is un-actionable thoughts and cowardly status quo, no one will listen to you. Meanwhile, the cohort of disenfranchised, disposable people grows around you until they fear the status quo more than they fear change.
The richest country in the world cannot save its own citizens from poverty, despite having the most millionaires and billionaires. Obviously the solution is to impose tariffs based on some made-up numbers. Wonderful idea!
Kinda looks like clickbait generally. Per-capita US avocado consumption is about 9 lbs/year. That's a bit more than one large Mexican avocado per month. Somehow that's "crazy!!1"
You would think there would be some joy in all this: something Americans will happily eat that isn't "ultraprocessed" and/or meat, and a Mexican product that isn't narcotics or fossil fuel. But no. omg, they're so "resource intensive!"
Is it? Docker is quite long in the tooth at this point and is a long way from perfect. Docker's design is rife with poor choices and unaddressed flaws. One I've been tracking for years: there is no straightforward way to undo several things that get inherited by derived images, such as VOLUME and EXPOSE, and Docker Inc. doesn't care no matter how long the GitHub threads get.
The LTC6227, used in this design, is "last time buy." The replacements aren't as good: either higher noise or lower GBW, and they too will vanish in a few years. That is, of course, deliberate, forcing designers to climb the curve of ever more expensive, exclusive components. Another component in this design, the FST3253, is also a relic of history: no longer produced. Electronics is a nasty rat race; with AD, for example, having bought literally all of their competitors except TI, we have a perfect ADC/DAC/op-amp oligopoly, with everything getting worse with each passing day. So the notion of a cheap, easily reproduced "crystal radio for the digital age" isn't actually workable: nothing you can dream up will be buildable more than 2-3 years after you design it, as the components you've chosen cease to exist.
I love the Principle of Least Astonishment, but I first encountered it in the Ruby book and I gave up reading it halfway through because I kept thinking, "He and I have very different definitions of astonishing..."
In a way that makes sense though, once you're at a level where you can not only write Ruby, but write books about Ruby, surely very few things would astonish you.
It's definitely a footgun, but I think it's also pretty clear in the Go docs that defer is a function-return-time thing, not a loop-iteration thing: "A defer statement defers the execution of a function until the surrounding function returns." From https://go.dev/tour/flowcontrol/12
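To make the footgun concrete, a minimal Go sketch of the loop case and the usual per-iteration fix (the paths are just placeholders):

    package main

    import (
        "fmt"
        "os"
    )

    // processAll shows the footgun: every deferred Close piles up until
    // processAll itself returns, so every file stays open for the whole loop.
    func processAll(paths []string) {
        for _, p := range paths {
            f, err := os.Open(p)
            if err != nil {
                continue
            }
            defer f.Close() // runs when processAll returns, not at iteration end
            fmt.Println(f.Name())
        }
    }

    // processEach is the usual fix: wrap the body in a function so each defer
    // fires at the end of its own iteration.
    func processEach(paths []string) {
        for _, p := range paths {
            func() {
                f, err := os.Open(p)
                if err != nil {
                    return
                }
                defer f.Close() // runs when this anonymous function returns
                fmt.Println(f.Name())
            }()
        }
    }

    func main() {
        paths := []string{"/etc/hostname"} // placeholder path
        processAll(paths)
        processEach(paths)
    }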
I think per-scope-level is probably better, but honestly, as I mention elsewhere, it still seems fairly limited compared to writing code inside blocks that clean themselves up, as in the Ruby world. The more we're messing with scope, the more it seems like it would be possible to go all the way to that? The Go-style defer appears likely to be simpler from an implementation POV; if we're gonna make it harder, let's go all the way!
I know a lot of people hate the nesting of indentation from that, but it makes so many other things harder to screw up.
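For comparison, the Ruby-style "block that cleans up after itself" translates fairly directly into a helper that takes a closure. A small Go sketch (withFile is an invented name, not anything from the standard library):

    package main

    import (
        "fmt"
        "os"
    )

    // withFile opens the file, hands it to the block, and guarantees cleanup
    // when the block finishes, mimicking Ruby's File.open { |f| ... } style.
    func withFile(path string, block func(*os.File) error) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close() // always runs, even on early return or panic in the block
        return block(f)
    }

    func main() {
        err := withFile("/etc/hostname", func(f *os.File) error { // placeholder path
            info, err := f.Stat()
            if err != nil {
                return err
            }
            fmt.Println(info.Size(), "bytes")
            return nil
        })
        if err != nil {
            fmt.Println("error:", err)
        }
    }

The indentation nesting is the price, but the cleanup can't be forgotten or mis-scoped.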
My biggest astonishment is how people continue to shoot themselves in the foot by not making scope vs function declarations explicit, for reasons like "someone will misunderstand complicated ideas, so let's make it implicit" or something. When there could just be:
defer x // scope scoped
defer fn x // function scoped
Also:
var a = 0
fn var a = 0
for fn i := …
But we have this allergy to full control and invent these increasingly stupid ways to stay alert and get unpleasantly surprised anyway.
Edit: Same for IIFEs. Everyone uses them, so turn them into proper embedded blocks!
Say that defer would execute inside for loops: what would make you more astonished, that loops and functions are the exceptions, or that defers execute at the end of any block? I would prefer the latter of these two. But then the consequence is that a defer in an if-block executes as soon as the block ends, so you cannot conditionally defer anymore. So it seems that the rules for when deferred calls execute need to be arbitrary, and "only functions" has fewer exceptions than "only functions and loops", doesn't it? And what about loops implemented through gotos? Oh boy.
- declare a function pointer variable cleanup
- initialize it with no_op
- call defer ‘call cleanup’
- if, inside a block, you realize that you want to do something at cleanup, set cleanup to another function
That's more code (see the sketch below), but how often does one actually want to do that?
One thing this doesn't support is having a call into third-party code defer cleanup to function-exit time. Does golang support that?
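A minimal Go sketch of the pattern I mean (all names invented); note the deferred call goes through a closure so it sees the final value of cleanup rather than the no-op captured at defer time:

    package main

    import "fmt"

    func work(needsCleanup bool) {
        cleanup := func() {}         // start as a no-op
        defer func() { cleanup() }() // reads cleanup at return time, not at defer time

        if needsCleanup {
            // "Conditionally defer": swap in the real cleanup inside the block.
            fmt.Println("acquired resource")
            cleanup = func() { fmt.Println("released resource") }
        }
        fmt.Println("doing work")
    }

    func main() {
        work(true)
        work(false)
    }

As far as I know, Go has no way for a callee to register a defer on its caller's frame; the usual workaround is for the third-party code to hand back a cleanup function (or an io.Closer) for the caller to defer itself, so I suspect the answer is no.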
> a defer in an if-block executes instantly, so you cannot conditionally defer anymore.
Of course you can, using ?: (or && and || if you prefer), just like any other case where you want an expression rather than a statement. Or simply using the non-block form of if. (Some stupid autoformatters or tech leads insert extraneous braces, but you should be avoiding those already).
The parent I was replying to was talking about Go. There is no ternary operator in Go. There are no non-block forms of if in Go. I'm not sure what your "of course" is referring to, but I guess it is unrelated.
Yeah, I get it. There's an idiom. Still, that glitch is guaranteed to catch everyone off guard, experienced or otherwise, when taking up golang. As I said, it's an entry on the list, and such a list exists for most (all?) mainstream languages. At least it's minor compared to nil, a flaw somehow promulgated in a brand-new language many years after anyone purporting to be a language designer would or should have known to avoid it. That's a mystery for the ages right there.
This is certainly true. Also, by replacing the allocator and changing compiler flags, you're possibly immunizing yourself from attacks that rely on some specific memory layout.
By hardwiring the allocator you may end up with binaries that load two different allocators. It is too fun to debug a program that is using jemalloc free to release memory allocated by glibc. Unless you know what you are doing, it is better to leave it as is.
UBSAN is usually a debug build only thing. You can run it in production for some added safety, but it comes at a performance cost and theoretically, if you test all execution paths on a debug build and fix all complaints, there should be no benefit to running it in production.
I think it's time for the C/C++ communities to consider a mindset shift and pivot to having almost all protectors, canaries, sanitizers, and assertions (e.g. via _GLIBCXX_ASSERTIONS) on by default and recommended for use in release builds in production. The opposite (i.e., the current state of affairs) should be discouraged and begrudgingly accepted in a select few cases.
https://www.youtube.com/watch?v=gG4BJ23BFBE is a presentation that best represents my view on the kind of mindset that's long overdue to become the new norm in our industry.
I do not think things like the time command need to be compiled with such things. It is pointless, but your suggestion here is to do it anyway. Why bother?
Assertions in release builds are a bad idea since they can be fairly expensive. It is a better idea to have a different variety of assertion like the verify statements that OpenZFS uses, which are assertions that run even in release builds. They are used in situations where it is extremely important for an assertion to be done at runtime, without the performance overhead of the less important assertions that are in performance critical paths.
Why would I want potentially undefined behaviour in 'time'? I expect it to crash anytime it's about to enter UB. Sure, you may want to minimize such statements between the start/stop of the timer, but I expect any processing of stdout/stderr of the child process to be UB-proofed as much as possible.
I think it's a philosophical difference of opinions, and it's one of the things that drives Rust, Go, C#, etc. ahead - not merely language ergonomics (I hope Zig ends up as the language that replaces C). Society at large is mostly happy to take a 1-3% perf hit to get rid of buffer overflows and other UB-inducing errors.
But I agree with you on not having "expensive" asserts in releases.
ASLR is not security through obscurity, though. It forces the attacker to get a pointer leak before doing almost anything (even arbitrary-read and arbitrary-write primitives are useless without a leak under ASLR). As someone with a bit of experience in exploit dev, it makes a world of difference and is one of the most influential hardenings, next to maybe stack cookies and W^X.
I'm genuinely curious what was so undesirable about this sibling comment that it was removed:
"ASLR obscures the memory layout. That is security by obscurity by definition. People thought this was okay if the entropy was high enough, but then the ASLR⊕Cache attack was published and now its usefulness is questionable."
Usually when a comment is removed, it's pretty obvious why, but in this case I'm really not seeing it at all. I read up (briefly) on the mentioned attack and can confirm that the claims made in the above comment are at the very least plausible sounding. I checked other comments from that user and don't see any other recent ones that were removed, so it doesn't seem to be a user-specific thing.
I realize this is completely off-topic, but I'd really like to understand why it was removed. Perhaps it was removed by mistake?
Some people use the "flag" button as a "disagree" button or even a "fuck this guy" button. Unfortunately, constructive but unpopular comments get flagged to death on HN all the time.
I had thought that flagging was basically a request for a mod to have a look at something. But based on this case I now suspect that it's possible for a comment to be removed without a mod ever looking at it if enough people flag it.
My point was more that, at least in this case, it looks like a post was hidden without any moderator intervention.
If this is indeed what happened, it seems like a bad thing that it's even possible. Since many, perhaps most people probably don't have showdead enabled, it means that the 'flag' option is effectively a mega-downvote.
I believe most people see security through obscurity as an attempt to hide an insecurity.
ASLR/KASLR intends to make attackers' lives harder by having inconsistent offsets for known data structures. It's not obscuring a security flaw; instead it raises the cost of a 'single run' attack.
The ASLR attack that I believe is being referenced is specific to abuse within the browser, running within a single process. This single attack vector does not mean that KASLR globally is not effective.
Your quote has some choice words, but it's contextually poor.
That attack does not require a web browser. The web browser being able to do it showed it was higher severity than it would have been if the proof of concept had been written in C, since web browsers run untrusted code all of the time.
The 'attack' there does require you to be able to run code and test within a single process with a single randomized address space, which is the exact vector that the web browser provides.
Most times in C, each fork() (rather than each thread) gets a different address space, so it's actually less severe than you think.
The kernel address space is the same regardless of how many fork() calls have been done. I would assume the exploitation path for a worst case scenario would be involve chaining exploits to do: AnC on userspace, JavaScript engine injection to native code, sandbox escape, AnC on kernel space, kernel native code injection. That would give complete control over a user’s machine just by having the user visit a web page.
I am not sure why anyone would attempt what you described, for the exact reason you stated. It certainly is not what I had in mind.
It's been a few days and a thousand kilometers since I read the paper; I thought it referenced userspace. How is it able to infer kernel addresses that are not mapped in that process?
I assume people downvoted it because “ASLR obscures the memory layout. That is security by obscurity by definition” is just wrong (correct description here: https://news.ycombinator.com/item?id=43408039). It does say [flagged] too, though, so maybe that’s not the whole story…?
No, that other definition is the incorrect one. Security by obscurity does not require that the attacker is ignorant of the fact you're using it. Say I have an IPv6 network with no firewall, simply relying on the difficulty of scanning the address space. I think that people would agree that I'm using security by obscurity, even if the attacker somehow found out I was doing this. The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
No, I would not agree that you would be using security by obscurity in that example. Not all security that happens to be weak or fragile and involves secret information somewhere is security by obscurity – it’s specifically the security measure that has to be secret. Of course, there’s not a hard line dividing secret information between categories like “key material” and “security measure”, but I would consider ASLR closer to the former side than the latter and it’s certainly not security by obscurity “by definition” (aside: the rampant misuse of that phrase is my pet peeve).
> The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
This is just restating the term in more words without defining the core concept in context (“obscurity”).
I'm inclined to agree and would like to point out that if you take a hardline stance that any reliance on the attacker not knowing something makes it security by obscurity then things like keys become security by obscurity. That's obviously not a useful end result so that can't be the correct definition.
It's useful to ask what the point being conveyed by the phrase is. Typically (at least as I've encountered it) it's that you are relying on secrecy of your internal processes. The implication is usually that your processes are not actually secure - that as soon as an attacker learns how you do things the house of cards will immediately collapse.
What is missing from these two representations is the ability for something to become trivially bypassable once you know the trick to it. AnC is roughly that for ASLR.
I'd argue that AnC is a side channel attack. If I can obtain key material via a side channel that doesn't (at least in the general case) suddenly change the category of the corresponding algorithm.
Also IIUC to perform AnC you need to already have arbitrary code execution. That's a pretty big caveat for an attacker.
You are not wrong, but how big of a caveat it is varies. On a client system, it is an incredibly low bar given client side scripting in web browsers (and end users’ tendency to execute random binaries they find on the internet). On a server system, it is incredibly unlikely.
I think the middle ground is to call the effectiveness of ASLR questionable. It is no longer the gold standard of mitigations that it was 10 years ago.
ASLR is not purely security through obscurity because it is based on a solid security principle: increasing the difficulty of an attack by introducing randomness. It doesn't solely rely on the secrecy of the implementation but rather the unpredictability of memory addresses.
Think of it this way - if I guess the ASLR address once, a restart of the process renders that knowledge irrelevant implicitly. If I get your IPv6 address once, you’re going to have to redo your network topology to rotate your secret IP. That’s the distinction from ASLR.
I don't like that example, because the damage caused by, and the difficulty of recovering from, a secret leaking is not what determines the classification. There exist keys that, if leaked, would be very time-consuming to recover from. That doesn't make them security by obscurity.
I think the key feature of the IPv6 address example is that you need to expose the address in order to communicate. The entire security model relies on the attacker not having observed legitimate communications. As soon as an attacker witnesses your system operating as intended the entire thing falls apart.
Another way to phrase it is that the security depends on the secrecy of the implementation, as opposed to the secrecy of one or more inputs.
You don’t necessarily need to expose the IPv6 address to untrusted parties though in which case it is indeed quite similar to ASLR in that data leakage of some kind is necessary. I think the main distinguishing factor is that ASLR by design treats the base address as a secret and guards it as such whereas that’s not a mode the IPv6 address can have because by its nature it’s assumed to be something public.
Huh. The IPv6 example is much more confusing than I initially thought. At this point I am entirely unclear as to whether it is actually an example of security through obscurity, regardless of whatever else it might be (a very bad idea to rely on, for one). Rather ironic, given that the poster whose claims I was disputing provided it as an example of something that would be universally recognized as such.
I think it’s security through obscurity because in ASLR the randomized base address is a protected secret key material whereas in the ipv6 case it’s unprotected key material (eg every hop between two communicating parties sees the secret). It’s close though which is why IPv6 mapping efforts are much more heuristics based than ipv4 which you can just brute force (along with port #) quickly these days.
I'm finding this semantic rabbit hole surprisingly amusing.
The problem with that line of reasoning is that it implies that data handling practices can determine whether or not a given scheme is security through obscurity. But that doesn't fit the prototypical example where someone uses a super secret and utterly broken home rolled "encryption" algorithm. Nor does it fit the example of someone being careless with the key material for a well established algorithm.
The key defining characteristic of that example is that the security hinges on the secrecy of the blueprints themselves.
I think a case can also be made for a slightly more literal interpretation of the term where security depends on part of the design being different from the mainstream. For example running a niche OS making your systems less statistically likely to be targeted in the first place. In that case the secrecy of the blueprints no longer matters - it's the societal scale analogue of the former example.
I think the IPv6 example hinges on the semantic question of whether a network address is considered part of the blueprint or part of the input. In the ASLR analogue, the corresponding question is whether a function pointer is part of the blueprint or part of the input.
> The problem with that line of reasoning is that it implies that data handling practices can determine whether or not a given scheme is security through obscurity
Necessary but not sufficient condition. For example, if I’m transmitting secrets across the wire in plain text that’s clearly security through obscurity even if you’re relying on an otherwise secure algorithm. Security is a holistic practice and you can’t ignore secrets management separate from the algorithm blueprint (which itself is also a necessary but not sufficient condition).
Consider that in the ASLR analogy dealing in function pointers is dealing in plaintext.
I think the semantics are being confused due to an issue of recursively larger boundaries.
Consider the system as designed versus the full system as used in a particular instance, including all participants. The latter can also be "the system as designed" if you zoom out by a level and examine the usage of the original system somewhere in the wild.
In the latter case, poor secrets management being codified in the design could in some cases be security through obscurity. For example, transmitting in plaintext somewhere the attacker can observe. At that point it's part of the blueprint and the definition I referred to holds. But that blueprint is for the larger system, not the smaller one, and has its own threat model. In the example, it's important that the attacker is expected to be capable of observing the transmission channel.
In the former case, secrets management (ie managing user input) is beyond the scope of the system design.
If you're building the small system and you intend to keep the encryption algorithm secret, we can safely say that in all possible cases you will be engaging in security through obscurity. The threat model is that the attacker has gained access to the ciphertext; obscuring the algorithm only inflicts additional cost on them the first time they attack a message secured by this particular system.
It's not obvious to me that the same can be said of the IPv6 address example. Flippantly, we can say that the physical security of the network is beyond the scope of our address randomization scheme. Less flippantly, we can observe that there are many realistic threat models where the attacker is not expected to be able to snoop any of the network hops. Then as long as addresses aren't permanent it's not a one time up front cost to learn a fixed procedure.
Function pointer addresses are not meant to be shared - they hold 0 semantic meaning or utility outside a process boundary (modulo kernel). IPv6 addresses are meant to be shared and have semantic meaning and utility at a very porous layer. Pretending like there’s no distinction between those two cases is why it seems like ASLR is security through obscurity when in fact it isn’t. Of course, if your program is trivially leaking addresses outside your program boundary, then ASLR degrades to a form of security through obscurity.
I'm not pretending that there's no distinction. I'm explicitly questioning the extent to which it exists as well as the relevance of drawing such a distinction in the stated context.
> Function pointer addresses are not meant to be shared
Actually I'm pretty sure that's their entire purpose.
> they hold 0 semantic meaning or utility outside a process boundary (modulo kernel).
Sure, but ASLR is meant to defend against an attacker acting within the process boundary so I don't see the relevance.
How the system built by the programmer functions in the face of an adversary is what's relevant (at least it seems to me). Why should the intent of the manufacturer necessarily have a bearing on how I use the tool? I cannot accept that as a determining factor of whether something qualifies as security by obscurity.
If the expectation is that an attacker is unable to snoop any of the relevant network hops then why does it matter that the address is embedded in plaintext in the packets? I don't think it's enough to say "it was meant to be public". The traffic on (for example) my wired LAN is certainly not public. If I'm not designing a system to defend against adversaries on my LAN then why should plaintext on my LAN be relevant to the analysis of the thing I produced?
Conversely, if I'm designing a system to defend against an adversary that has physical access to the memory bus on my motherboard then it matters not at all whether the manufacturer of the board intended for someone to attach probes to the traces.
I think that's why the threat model matters. I consider my SSH keys secure as long as they don't leave the local machine in plaintext form. However if the scenario changes to become "the adversary has arbitrary read access to your RAM" then that's obviously not going to work anymore.
> The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
Also stated as "security happens in layers", and often obscurity is a very good layer for keeping most of the script kiddies away and keeping the logs clean.
My personal favorite example is using a non-default SSH port. Even if you keep it under 1024, so it's still on a root-controlled port, you'll cut down the attacks by an order of magnitude or two. It's not going to keep the NSA or MSS out, but it's still effective in pushing away the common script kiddies. You could even get creative and play with port knocking - that keeps under-1024 ports logs clean.
Except I do know what security by obscurity is and you are out of date on the subject. When you have attacks that make ASLR useless, then it is security by obscurity. Your thinking would have been correct 10 years ago. It is no longer correct today. The middle ground is to say that the benefits of ASLR are questionable, like I said in the comment you downvoted.
ASLR obscures the memory layout. That is security by obscurity by definition. People thought this was okay if the entropy was high enough, but then the ASLR⊕Cache attack was published and now its usefulness is questionable.
ASLR is by definition security through obscurity. That doesn't make it useless, as there's nothing wrong with using obscurity as one layer of defenses. But that doesn't change what it fundamentally is: obscuring information so that an attacker has to work harder.
Is having a secret password security by obscurity? What about a private key?
Security by obscurity is about the bad practice of thinking that obscuring your mechanisms and implementations of security increases your security. It's about people that think that by using their nephew's own super secret unpublished encryption they will be more secure than by using hardened standard encryption libraries.
Security through obscurity is when you run your sshd server on port 1337 instead of 22 without actually securing the server settings down, because you don’t think the hackers know how to portscan that high. Everyone runs on 22, but you obscurely run it elsewhere. “Nobody will think to look.”
ASLR is nothing like that. It’s not that nobody thinks to look, it’s that they have no stable gadgets to jump to. The only way to get around that is to leak the mapping or work with the handful of gadgets that are stable. It’s analogous to shuffling a deck of cards before and after every hand to protect against card counters. Entire cities in barren deserts have been built on the real mathematical win that comes from that. It’s real.
With attacks such as AnC, your logic fails. They can figure out the locations and get plenty of stable gadgets.
Any shuffling of a deck of cards by Alice is pointless if Bob can inspect the deck after she shuffles them. It makes ASLR not very different from changing your sshd port. In both cases, this describes the security:
Okay, sure, ASLR can be defeated by hardware leaks. The first rowhammer papers were over ten years ago; it's very old news. It's totally irrelevant to this thread. The fact that there exist designs with hardware flaws which make them incapable of hosting a secure PRNG does not have any relevance to a discussion about the merits or lack thereof of PRNG-based security measures. The systems you're referring to don't have secure PRNGs.
Words have meaning, god damn it! ASLR is not security through obscurity.
Edit: I was operating under the assumption that “AnC” was some new hotness, but no, this is the same stuff that’s always been around, timing attacks on the caches. And there’s still the same solution as there was back then: you wipe the caches out so your adversaries have no opportunity to measure the latencies. It’s what they always should have done on consumer devices running untrusted code.
ASLR is technically a form of security by obscurity. The obscurity here being the memory layout. The reason nobody treated it that way was the high entropy that ASLR had on 64-bit, but the ASLR⊕Cache attack has undermined that significantly. You really do not want ASLR to be what determines whether an attacker takes control of your machine if you care about having a secure system.
The defining characteristic of security through obscurity is that the effectiveness of the security measure depends on the attacker not knowing about the measure at all. That description doesn’t apply to ASLR.
It produces a randomization either at compile time or run time, and the randomization is the security measure, which is obscured based on the idea that nobody can figure it out with ease. It is a poor security measure given the AnC attack that I mentioned. ASLR randomization is effectively this when such attacks are applicable:
You are confusing randomization, a legitimate security mechanism, with security by obscurity. ASLR is not security by obscurity.
Please spend the time on understanding the terminology rather than regurgitating buzz words.
I understand the terminology. I even took a graduate course on the subject. I stand by what I wrote. Better yet, this describes ASLR when the AnC attack applies:
If you exclude cost as a relevant measure.