That's fine. What I'm saying is that the above workflow is idealized, and over time you end up doing a lot more. The happy path looks similar; on the unhappy path, instead of keeping stashes or committing junk and then amending, you describe your change beforehand, and any change you make goes there. That's it.
Hotfix? jj new main
Go back to what you were working on? jj edit refA
Need to solve some problems in another PR of yours? jj edit refB
There's no overhead at all in thinking about these: you can never lose work by mistake, and you can never mess it up. It just works.
It's fiber interconnect between sites that is not actively visible to other people who are routing data around the world. Think of it as being pre-laid but not yet turned on/made available.
The way it's written means this just isn't the case. They _MAY_ use it for what you have mentioned above. They explicitly say "...here are a few examples of improvements..." and "How Slack may use Customer Data" (emph mine). They also... may not? And use it for completely different things that can expose who knows what via prompt hacking.
Agreed, and that is my concern as well: if people get too comfortable with it, companies will keep pushing the bounds of what is acceptable. We will need companies to be transparent about ALL the things they are using our data for.
This is a timing attack, or timing oracle. Let's assume a MAC represented as an array of 32 bytes, and a pseudocode method like:
bool macEquals(byte actualMac[32], byte expectedMac[32]) {
    for (int x = 0; x < 32; x++) {
        if (actualMac[x] != expectedMac[x])
            return false;
    }
    return true;
}
We return false as soon as we hit an invalid byte in our calculated MAC. If one iteration of the loop takes time Y, and the attacker is able to time this method accurately, they can recover the value of actualMac byte by byte by feeding known inputs: a guess whose first byte is wrong returns after about Y, a guess with the first byte right and the second wrong after about 2Y, then 3Y, 4Y, and so on.
This is why we should check the arrays in constant time - compare every byte in both arrays before returning. Since we never return early, we can't leak information through timing.
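A minimal constant-time version might look like this in C (a sketch; `ct_mac_equal` is a hypothetical name - production code should generally prefer a vetted primitive such as OpenSSL's CRYPTO_memcmp):

```c
#include <stdbool.h>
#include <stdint.h>

/* Constant-time MAC comparison: OR together the XOR of every byte
 * pair, so the loop always runs all 32 iterations and the timing
 * does not depend on where (or whether) the inputs differ. */
static bool ct_mac_equal(const uint8_t actual[32], const uint8_t expected[32])
{
    uint8_t diff = 0;
    for (int i = 0; i < 32; i++)
        diff |= actual[i] ^ expected[i];
    return diff == 0;
}
```

The accumulator replaces the early return: a mismatch only flips bits in `diff`, and the single data-dependent decision happens once, after the whole array has been read.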
Why is it called constant time if it isn't constant with respect to array length? It just seems confusing, because the algorithm is linear, only without a short circuit.
It's constant time in that it always takes the same amount of time regardless of the extent to which the two strings are equal. It is a different concept than constant time in complexity analysis.
What's even more confusing is that it is also constant time in the complexity analysis sense given that the mac is usually a fixed-size string after choosing a hashing algorithm.
Isn't it sufficient to compare 64 bits at a time? Then the oracle becomes rather useless.
Many current memcmp implementations use such large comparisons because they avoid hard-to-predict data-dependent branches for extracting the specific point of mismatch.
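A word-wise variant of the same idea can be sketched like this (a hypothetical `ct_mac_equal64`; `memcpy` is used for the loads to sidestep alignment and strict-aliasing issues - this is not what a libc `memcmp` does, since `memcmp` returns early):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Compare a 32-byte MAC in four 64-bit chunks. Differences are
 * accumulated into a single word, so timing is still independent
 * of where a mismatch occurs - the wide loads just mean fewer
 * iterations, not an early exit. */
static bool ct_mac_equal64(const uint8_t a[32], const uint8_t b[32])
{
    uint64_t diff = 0;
    for (int i = 0; i < 32; i += 8) {
        uint64_t wa, wb;
        memcpy(&wa, a + i, 8);
        memcpy(&wb, b + i, 8);
        diff |= wa ^ wb;
    }
    return diff == 0;
}
```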
Don't really buy it. Seems to be both "spherical cow optimistic assumptions" and "anyone who could seriously think about pulling this off has nation-state-level resources and already 0wnz you and/or already has the rubber hose at hand".
Not really. It doesn't rely on that big an assumption, nor does it require nation-state resources[0]. When you're trying to find the secret, you can make a bunch of requests and measure for a statistically significant change, which can still be detected above jitter and web-server load.
Also, this ignores the fact that calling constant_strcompare(string, string) instead of strcompare(string, string) when working with secrets isn't that big an ask.
If you could measure the time that granularly as a client requesting some resource on the server, how exactly would you know the time corresponds to the comparison and not to some tangential task?
If they're modular then they'll be intended to run more than one. You get the land, grid tie-in, permits and staff and then... stack 4 of these instead of 1. You use the same benefits from economies of scale to have as many of these as can be safely crammed into your building and go from there.
I was fed up with having to substitute every dependency manually whenever I wanted to write a test, and found it hard to apply clean software design principles to test suites.
It's laid out further in the thread [1]. The key quote comes from Thorsten:
> All that afaics doesn't matter. If a new kernel breaks things for people
> (that especially includes people that do not update their userland)
> then it's a kernel regression, even if the root of the problem is in
> usersland. Linus (CCed) said that often enough
Another example is what happened when Linux moved to 3.0. Some programs expected a 2.x version, or even 2.6.x. These programs were clearly buggy: they should have checked that the version was greater than 2.x. But the bugs were already there, people didn't want to recompile their binaries, and they might not even have been able to. It would be stupid for Linux to report 2.6.x when in fact it's 3.x, but that's exactly what they did: they added an option so the kernel would report a 2.6.x version, giving users the option to keep running these old buggy binaries. Link here.
You're welcome to assume forever that there will be a version field; if it ever becomes the case that this simply makes no sense anymore, then a sensible dummy value will be put in its place[0]
"There's a number of fields in /proc/<pid>/stat that are printed out as zeroes, simply because they don't even *exist* in the kernel any more, or because showing them was a mistake (typically an information leak). But the numbers got replaced by zeroes, so that the code that used to parse the fields still works."
When the Opera browser updated from 9.60 to 10.0, they found that some sites stopped working because they blocked user-agents with low versions of Opera... and they checked versions by looking at only the first digit after "Opera." So version 10 looked to them like version 1.
Opera "fixed" the problem by having version 10 report itself as version 9.80 in the user-agent string.
Because time is a flat circle, the same thing happened when Microsoft was planning the successor to Windows 8. Too many programs saw "Windows 9" and thought the OS was Windows 95 or 98, and tried to use outdated versions of APIs (or refused to launch). So Windows skipped a version and that's why the current version is called Windows 10.
So to answer your question: There is precedent for that scenario.
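The class of bug behind both the Opera and Windows stories can be sketched in a couple of lines of C (a hypothetical function name; the real checks in the wild varied, but the prefix-matching mistake is the same):

```c
#include <stdbool.h>
#include <string.h>

/* The prefix-matching mistake: testing the version string's first
 * characters instead of parsing the number. "Windows 9" matches
 * Windows 95 and Windows 98 -- and would also have matched a
 * hypothetical "Windows 9". */
static bool looks_like_win9x(const char *os_name)
{
    return strncmp(os_name, "Windows 9", 9) == 0;
}
```

The same shape of check, applied to "Opera 1", is what made Opera 10 look like Opera 1 to those sites.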
You're just being disingenuous, what does that help?
It's clearly an error on the part of the kernel if the owner of said kernel has specific expectations of behaviour on upgrade with respect to userspace, and those are not being met.
It is just as disingenuous as saying "details don't matter". bonzini's comment was saying a line needs to be drawn, and I agree with that. It's pretty clear the line needs to exclude "breaking on version number updates"; it's less clear what else it needs to exclude.
Well, I do kernel dev. I am the maintainer of KVM. :)
> anyone who expects it not to change is an idiot or just being obtuse
If this was the case, Microsoft would have never had to skip from Windows 8 to Windows 10 (see below in another comment: "starts with Windows 9" was used in the wild to detect Windows 95/98).
I still don't get it; either I'm remarkably dense today, or there's a disconnect. I wrote:
> anyone who expects it not to change
And you appear to be using an example of a version-change curiosity as a counterexample. It is not a counterexample, because the version number in question changed from 8 to 10. In my original formulation, anyone who claimed to worry that the number would stay 8 forever was being an idiot or being obtuse.
I think that holds up. It was not part of my comment, but someone who expected it to be 9 would not be an idiot or obtuse, because that would have been a reasonable guess, absent additional information. They were expecting it to change but were surprised by an exceptional circumstance.
But this is getting a bit silly; there may be someone out there who thinks version numbers should be immutable across versions, but I bet they're pretty lonely. It was an example picked up while making a wider point.
Windows contains code like "If SimCity is running, allow it to use memory after it's been freed"[1]
So yes, if you had a significantly deployed app that segfaults because the version number >= 4.14, Linux probably would seriously consider workarounds for your app.
Also, Microsoft skipped releasing Windows 9 because so much code out there thinks anything labeled "Windows 9..." is of the Windows 95/98 lineage.
Damn it, you can still install Windows 10 in 32-bit form on a modern x86 CPU and expect win16 binaries to work.
The only reason there is a problem with the 64-bit form is that AMD made the 64-bit mode and the 16-bit mode mutually exclusive. You can jump from 64-bit to 32-bit mode, or from 32 to 16, but not from 64 to 16.
And the other key quote from John: "It is a userspace configuration issue. Your userspace is set up to basically do policy development".
The problem is that apparently this is true for all of OpenSUSE, Debian and Ubuntu, and that's when a userspace bug becomes in practice a kernel regression.