That’s right. Financial accounting and taxation are not the same thing. Even if you are taxed on a cash basis, it’s prudent to manage your business with appropriate revenue deferrals.
An experienced software engineer can learn new languages and syntax quickly. I would say the same for the core libraries for a given language. Learning the development environment, productionalization, deployment, and hosting can be daunting even for experienced architects.
Seniors have a language-independent model of software development tasks such as writing code, testing, debugging, building and publishing artifacts, etc. to work from. They can map that model more or less to any software ecosystem. That model comes from hands-on experience building software, and often takes years to build.
While they are building up that conceptual model of software development, an engineer is also building up knowledge about the _details_ of their first or primary language’s ecosystem, libraries, and tooling. This also takes years, one just doesn’t notice it with the first language because it intuitively feels like “learning programming” since it happens in parallel.
The result is that a senior engineer can be productive quickly in any language, relying mostly on their conceptual model. But going from “productive” to “mastery” in a new ecosystem still takes the same time and effort it took for the first one.
I grew up in Pittsburgh. When I was flying in and out of the Pittsburgh airport (usually to Atlanta) during and after college, I would often see uniformed Iron Mountain employees waiting for standby seats, carrying their little Pelican cases…
There was a warrant in this case. That type of warrant is now considered deficient, and electronic data holders like Google will easily refute future attempts to get data with a clearly deficient warrant. Parallel construction typically means a warrant was not granted on the first pass.
The overwhelming majority of US legislators have no concept of what it means to run a business that gives customers what they want. The market has already corrected this situation. Some carriers offer the service free of charge, some don’t. Parents are completely able to see the policy associated with a specific ticket before they purchase.
> Parents are completely able to see the policy associated with a specific ticket before they purchase.
It's a process of discovery rather than being something that you know up front. You generally have to get pretty deep into the process of purchasing a ticket before you learn how much it will cost to pre-book your seats. If you don't like it, you have to start the process anew with another airline.
And this is deliberate by the airlines. They want you to invest some time before they reveal the true, all-inclusive price, so you're less likely to look elsewhere.
Be benevolent, but ration out your help to those you want to give it to. Also, use it as a network builder and give help to multimillionaires; never worry about a job or resume again.
No thanks. A multimillionaire can afford to pay me. I don’t work for “exposure.” If I’m helping someone for free it’s because I want to and they need it.
Newer cars are probably less likely due to how arbitrarily locked down they are, rather than being incapable. The manufacturer could run DOOM on their car that you bought, but you probably can't.
(1) a one time pad is and will remain highly secure
(2) blocking shortwave radio (even if you are a nation state) is more difficult than taking down web assets.
(3) there are benefits to security by obscurity when it's part of a layered approach with constant maintenance and feedback (#3 is my controversial take)
A one-time pad generated correctly and used correctly will remain highly secure, provided you have a highly secure means of sharing the key material. There's a lot rolled into those assumptions.
At the same time, "highly secure" is significantly underselling it. One-time pads (if properly implemented) are information-theoretically secure. Even if you prove P=NP, your one-time pad will not be cracked. It is safe against an adversary with both infinite time and infinite compute.
And the cost is that one-time pads are a royal pain in the ass. But if you're willing to pay that price without cutting corners, you get a completely unbreakable crypto system that will laugh in the face of the NSA and quantum computers.
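As a minimal sketch of why the guarantee holds: encryption is just XOR with the pad, so decryption is the same operation, and a uniformly random pad makes every plaintext of the same length equally likely given the ciphertext. (Python for illustration; the function names are made up, and `os.urandom` stands in for a properly generated, securely shared pad.)

```python
import os

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The pad must be truly random, at least as long as the message,
    # and never reused -- reuse destroys the security guarantee.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the identical XOR operation.
otp_decrypt = otp_encrypt

message = b"attack at dawn"
key = os.urandom(len(message))  # stand-in for real pad generation
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

The "royal pain" is all in the `key` line: generating, distributing, and never reusing pad material as long as all your traffic combined.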
In fairness, quantum doesn't really help against normal crypto (of the type that is being discussed - symmetric). AES-256 will also laugh in the face of QC.
TBH, I think "highly secure" might be overselling it. Yes, assuming you're generating random numbers well, there's actually zero chance your security will be breached because of an attack on your encryption algorithm. But there's not actually zero chance that your random number generation is flawed, and (very much more important) the cost is in making harder the pieces of your system that are probably more likely to fail in the first place. And of course you're still potentially vulnerable to traffic analysis and such even if all the rest goes right.
> But there's not actually zero chance that your random number generation is flawed, and (very much more important) the cost is in making harder the pieces of your system that are probably more likely to fail in the first place.
I don't think it's that hard to get true randomness. Just measure something random in nature like radio static.
There are server cards (or were, at least some time ago) with a tiny bit of mildly radioactive material, well enclosed of course, and a good sensor for those isotopes/particles.
I've heard of other approaches including that static too, i.e. the famous analog TV without a real signal (IIRC it's the cosmic microwave background), or a camera watching water drops fall, or similar. There are many other ideas (and probably products too); the only thing is one needs to keep it 100% reliable over a long time.
Genuine question: How are those random sources actually used?
I would think that for crypto it’s very important to not just have random numbers, but to have a uniform random distribution. Many natural sources would be either Poisson or Gaussian; if you make an assumption for the distribution you could of course make it uniform, but that assumption would be a weakness if inaccurate or changing over time.
So how is a true random source usually used to ensure uniform random outputs?
A truly random source will yield independent and identically distributed values.
You can take a collection of those values and convert them to an index in the set of all possible permutations of those values. That index will be uniformly distributed in the range of the number of permutations, regardless of the input distribution so long as it's IID.
Once you have a uniform value on a range you can extract uniform bits from it.
See also: Von Neumann's debiasing algorithm.
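For reference, Von Neumann's trick is only a few lines. This Python sketch assumes only that the input flips are independent and identically distributed, not that they're fair:

```python
import random

def von_neumann_debias(bits):
    """Read independent (possibly biased) coin flips in pairs:
    emit 0 for the pair (0, 1), emit 1 for (1, 0), and discard
    (0, 0) and (1, 1). Since P(0,1) == P(1,0) for an IID source,
    the emitted bits are unbiased regardless of the source bias."""
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

# A heavily biased source: 1 with probability 0.9.
random.seed(1234)
biased = (1 if random.random() < 0.9 else 0 for _ in range(100_000))
fair = list(von_neumann_debias(biased))
# The output rate is low for a strongly biased source,
# but the surviving bits are unbiased.
```

Note the caveat in the parent comments: the correctness argument leans entirely on independence, so correlated or drifting sources break it.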
In practice RNGs use some kind of debiaser, though often they use ones that leave a lot of entropy on the floor. OTOH, stronger debiasers are more harmed by failures to be completely IID (e.g. some inter-output correlation, or the distribution changing over time with temperature).
It’s a well known exercise in prob textbooks (edit: it’s the algo referenced in the other reply) to convert one distribution to another. If you can generate gaussians (or any other distribution) you can generate uniform variates. It’s a very simple application of rejection sampling that involves some efficiency loss, but that’s irrelevant at the time you’re getting your OTPs.
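A sketch of that textbook exercise, with the caveat flagged above baked in: it assumes the source really is a standard normal, which is exactly the distributional assumption that would be a weakness if wrong. Here rejection sampling turns N(0, 1) draws into Uniform(-1, 1) variates, paying some efficiency loss:

```python
import math
import random

def gaussian_to_uniform(draw_gaussian, n):
    """Rejection-sample Uniform(-1, 1) variates from a standard
    normal source. Target f(x) = 1/2 on [-1, 1]; proposal g is
    the N(0, 1) pdf; M = sup f/g is attained at |x| = 1."""
    g = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    M = 0.5 / g(1.0)  # ~2.07: worst-case target/proposal ratio
    out = []
    while len(out) < n:
        x = draw_gaussian()
        # Accept with probability f(x) / (M * g(x)); the accepted
        # draws are then uniform on [-1, 1].
        if abs(x) <= 1 and random.random() < 0.5 / (M * g(x)):
            out.append(x)
    return out

random.seed(7)
samples = gaussian_to_uniform(lambda: random.gauss(0, 1), 10_000)
```

From uniform variates one can then extract uniform bits, as the sibling comment describes.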
Perhaps not, but truly secure randomness is much harder. If someone else can measure the same thing you're measuring then it doesn't matter if it's random. If they can influence what you're measuring that's even worse. In the case of radio static, for example, your RNG could be compromised by another compromised device simply colocated nearby.
In the event your adversary knows so much about your procedures that they can tune into the radio used to generate randomness, presumably it would be much easier just to steal the piece of paper the pad is written on.
Which does kind of further your point: the one-time pad strengthens the parts that are already incredibly secure, while not helping with the real weaknesses of cryptosystems, i.e. the human element.
The fact that 3 is controversial is telling of the sad state of the security knowledge of techies generally. The most people seem to be able to do is to cargo cult / parrot, misunderstand, and misappropriate quips like “security by obscurity bad!” when it, all else equal, is a perfectly reasonable and often useful additive measure to take if it’s available to you.
A knee-jerk aversion to anything halfway adjacent to "security by obscurity" is flawed, but this reaction to that aversion is also flawed.
Instead of trying to suggest "security by obscurity is fine, actually, and don't worry about it", it's time for us to just stop being pithy and start being precise: your cryptosystem should be secure even if your adversaries understand everything about it. If that is true, then you can (and, in the real world, almost certainly should) add defense in depth by adding layers of obscurity, but not before.
While “security by obscurity” may be good for some spy agency as an additional layer over a system that would remain secure even if it were published, most people are right to say that “security by obscurity bad!”, based on the known history of such systems.
The reason is that, without exception, every time a system that used “security by obscurity” has been reverse engineered, whether it was used for police communications, mobile phone communications, supposedly secure CPUs, etc., it was discovered that the system had been designed by incompetent amateurs, or perhaps by competent but malevolent professionals, so that it could be easily broken by those who knew how it worked.
“Security by obscurity” is fine for secret organizations, but for any commercial devices that incorporate functions that must be secure it is stupid for a buyer to accept any kind of “security by obscurity”, because that is pretty much guaranteed to be a scam, regardless of how big and important the seller company is.
Obscurity is OK only when it is added by the owner of the devices, over a system that is well known and which has been analyzed publicly.
> The most people seem to be able to do is to cargo cult / parrot, misunderstand, and misappropriate quips like “security by obscurity bad!"
That is the point. It is a good rule of thumb for people who don't know much about security. Anything they create trying to add more security to their system is more likely to do the opposite.
If you think you know better, feel free to ignore it. Just be aware you wouldn't be the first who thought they knew what they were doing or even the first who did know, yet still messed up.
This misunderstands how "security by obscurity" came about, because there are good and bad types of obscurity. Back in the 1800s, lock makers were selling shoddy locks that were easy to pick, and they were mad that people were disclosing lock-picking methods: https://www.okta.com/identity-101/security-through-obscurity...
This history repeated later, with people making shoddy cryptography where they didn't want anyone to know how it worked, and similar things, most of which got broken in embarrassing ways. This sort of obscurity was actively harmful and let people sell defective products that people relied upon to their detriment.
Meanwhile, there are good types of obscurity, too. For example, there are the information disclosure CWEs that tell users of products not to disclose version numbers, stack traces, etc. to users, and this sort of "obscurity" is perfectly reasonable and widely accepted.
So it's not the case that all things that might be termed "obscurity" are bad.
You can even poke holes in it using their own terminology. Obscurity is equivalent to minimizing attack surface area: the less adversaries know about your system, the smaller a target it is.
I think there's overlap between surface area and obscurity, but they're not equivalent. To use the most pedestrian example, moving SSH off of port 22 makes it more obscure, but the total surface area hasn't gotten smaller.
Yeah. I think it's the result of conflating theoretical cryptography and practical IT security. Kerckhoffs's principle is true in the theoretical domain and it's certainly important that the designers of standardised crypto algorithms adhere to it but it doesn't follow that it's pointless to change your SSH port.
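Concretely, the practical (and explicitly non-cryptographic) win from moving the port is mostly less automated scanner noise in your logs. A hypothetical `sshd_config` fragment, with the port number chosen arbitrarily for illustration:

```
# /etc/ssh/sshd_config -- moving the daemon off port 22 cuts log
# noise from drive-by scanners. This is obscurity as a layer, not
# a substitute for key-based auth and patching.
Port 2222
PasswordAuthentication no
```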
Things are more secure if you share your file with a specific set of users, but that requires your counterpart to have an account with the system you’re using (eg a Google Account for Google Drive). When sharing files with an arbitrary counterparty, it’s often sufficient to generate a publicly available, unlisted/unindexed, hard to guess URL. Even better if it’s time boxed.
I’m sure there are attackers who attempt to identify and enumerate these URLs. If they’re well designed though, it should be infeasible to guess the link.
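A sketch of what "well designed" could mean in practice, using Python's `secrets` module (the URL scheme here is hypothetical):

```python
import secrets

# A 32-byte URL-safe token carries 256 bits of entropy, so
# enumerating such links by brute force is computationally
# infeasible -- provided the token comes from a CSPRNG and the
# link isn't leaked through indexing, referrers, or logs.
token = secrets.token_urlsafe(32)
share_url = f"https://example.com/share/{token}"  # hypothetical scheme
```

Time-boxing, as the parent suggests, further limits the damage if a link does leak.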
It is much harder than it would seem to keep these links secret. If one of your assets gets caught by other means, they could endanger the entire network if they use the same methodology.
The CIA thought they had a super great system, and then many of their assets got rolled up at once in a hugely embarrassing (and deadly) blunder.
> there are benefits to security by obscurity when it's part of a layered approach with constant maintenance and feedback
im not sure i understand what this means, can you provide an example and why its controversial?
do you mean a one time pad using memes via image steganography on heavy-traffic forums? I recall this is what North Korean spies used to do in the early 2000s
> im not sure i understand what this means, can you provide an example and why its controversial?
There is a longstanding tradition of vendors of mediocre 'security' systems using trade secrets/restrictive license terms/anti-hacking laws to cover up their mediocrity.
If you're shopping for a garage door opener and one vendor publicly documents their security system and well known experts have given it their thumbs up, while another vendor says their system is secret and has sued people for attempting to reverse engineer it? Knowledgeable folk would have far more trust in the former than the latter.
still don't get it. are you saying the former is susceptible to layering attacks where they get people to drop their guard? or that the latter is secretive to conceal its actual use?