- This is currently solved by inference pipelines.
- Models and techniques improve over time.
The ability of different agents with different specialties to add content while taking advantage of existing context is what makes the pipeline work.
Storing that content in the format could allow us to continue to refine the information we get from the image over time. Each tool that touches the image can add new context or improve existing context, and the image becomes more and more useful over time.
> Each tool that touches the image can add new context or improve existing context, and the image becomes more and more useful over time.
This is literally the problem solved by chunk-based file formats. "How do we use multiple authoring tools without stepping on each other" is a very old and solved problem.
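To make that concrete, here is roughly the shape of such a layout (struct names and tags are invented for illustration; PNG and RIFF work essentially like this): each tool appends or rewrites its own tagged chunk and copies through the chunks it doesn't understand.

```cpp
// Illustrative only: a minimal PNG/RIFF-style chunk container. Tools that
// don't recognize a tag simply preserve the chunk, so nobody steps on
// anyone else's data.
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

struct Chunk {
    char                      tag[4];   // e.g. "CAPT" for a caption, "OCRT" for OCR text
    std::vector<std::uint8_t> payload;  // tool-specific data, opaque to everyone else
};

struct ImageFile {
    std::vector<std::uint8_t> pixels;   // the actual image data
    std::vector<Chunk>        chunks;   // accumulated context from each tool
};

// A captioning tool adds its result as one more chunk and touches nothing else.
void add_caption(ImageFile& img, const std::string& caption) {
    Chunk c{{'C', 'A', 'P', 'T'}, {caption.begin(), caption.end()}};
    img.chunks.push_back(std::move(c));
}
```

On disk each chunk is just length + tag + payload (often plus a checksum), which is essentially what PNG's ancillary chunks or EXIF/XMP segments already provide.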
“A student asked, ‘Yeah, but do the wrinkles always form in the same way?’ And I thought: I haven’t the foggiest clue!” said German, a faculty member at the Thomas J. Watson College of Engineering and Applied Science’s Department of Biomedical Engineering. “So it led to this research to find out.”
I wish the authors had mentioned the kid by name in the acknowledgements section of the paper. I bet the kid would have felt very proud and inspired by having their name published in a scientific journal.
That makes me wonder how many important results actually came from inquiries by curious students whose names were forgotten or purposely omitted, lost to history. Any other similar examples? The "Student" of Student's t-distribution was actually an engineer who had to adopt a pseudonym because of his employer, so I think it doesn't count.
It's because such research has no obvious initial use that the public must pay for it; no private enterprise will fund it, and often it will be useless knowledge, but occasionally someone will figure something out that unlocks a whole new understanding of the world.
I experimented with the proposed parallel data type extensions to the C++ standard library. I got impressive performance gains calculating APFS Fletcher checksums without resorting to compiler intrinsics or inline assembly.
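For a flavour of what that looks like, here is a stripped-down sketch using the TS v2 data-parallel types from `<experimental/simd>` (available in recent libstdc++). To be clear, this is not the real APFS Fletcher-64 (it skips the modular reduction and the final packing of the two sums); it only shows how the per-lane running sums fall out of the API without any intrinsics.

```cpp
// Sketch: Fletcher-style double sum over 32-bit words with std::experimental::simd.
// Not the exact APFS algorithm: 64-bit accumulators, no modular reduction.
#include <cstddef>
#include <cstdint>
#include <experimental/simd>
#include <utility>

namespace stdx = std::experimental;

std::pair<std::uint64_t, std::uint64_t>
fletcher_sums(const std::uint32_t* words, std::size_t n)
{
    using vec = stdx::fixed_size_simd<std::uint64_t, 8>;
    constexpr std::size_t lanes = vec::size();

    vec a = 0, b = 0;                     // per-lane first / second sums
    std::size_t i = 0;
    for (; i + lanes <= n; i += lanes) {
        // widen the next block of 32-bit words into 64-bit lanes
        vec v([&](auto j) { return std::uint64_t(words[i + j]); });
        a += v;                           // sum1, lane-wise
        b += a;                           // sum2, lane-wise
    }

    // Merge the lanes: sum1 is just the total; sum2 needs a correction,
    // because lanes * reduce(b) over-weights the word in lane j by exactly j.
    std::uint64_t sum1 = stdx::reduce(a);
    std::uint64_t sum2 = lanes * stdx::reduce(b);
    for (std::size_t j = 0; j < lanes; ++j)
        sum2 -= j * a[j];

    for (; i < n; ++i) {                  // scalar tail
        sum1 += words[i];
        sum2 += sum1;
    }
    return {sum1, sum2};
}
```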
Farm subsidies are largely a myth and a misconstruction of "externalities" as subsidies. If you knew how to get the alleged subsidies that the media likes to trot out to disparage farmers into the hands of actual farmers, while charging a small percentage as a consulting fee, you'd be unimaginably wealthy.
https://data.ers.usda.gov/reports.aspx?ID=17833
For the past decade it has been only about $10 billion a year, with fixed direct payments largely eliminated in 2014. 2020 will obviously be an exception due to Covid-19. Most of the recent payments are Market Facilitation Program payments to prevent exports from being wrecked by retaliatory tariffs under Trump.
https://www.farmers.gov/manage/mfp
That's very much a means-tested program to prevent farmers from being driven into bankruptcy by tariffs on things they've already produced. It's not a magic money fountain.
This doesn't really answer my question. I knew the money was given for reasons. The question is who is really getting it.
Note that the same report says that excluding subsidies, farmer net income increased in 2020 over 2019, so I'm not convinced the Covid pandemic is a good reason.
Hard to say. Farm subsidies have limits to discourage this. There are a ton of loopholes, and not all crops qualify for subsidies. There is also debate about what even counts as a subsidy.
It is hard to explain what `beta8` and `curve.P` are specifically, but they are arbitrary-precision integers, so a bit of pseudocode shows what went wrong:
Essentially we want to compute `(alpha * alpha - beta * 8) % curve.P`, so to speak. The modulo is expensive, though, so for typical cases we can compute it by just repeatedly adding `curve.P` a few times. This is indeed a valid optimization when we are sure of the range of `alpha` and `beta`, but `beta` can be controlled from outside. So a very large `beta` from an attacker will cause the while loop to run forever, which is a denial-of-service attack.
I think you are right. Go's `big.Mod` should be Euclidean (i.e. `x % y` follows `y`'s sign), so the code is redundant. It doesn't seem to be required to run in constant time (if it were, we wouldn't have the `if` at all); probably the committer wanted a minimal change?
I'm not sure when you last tried, but in the last year VSCode has come a long way towards "just working" out of the box for C++. They've specifically focused on it. If you've got some time, I'd recommend that you check out Rong Lu's CppCon 2018 talk. https://www.youtube.com/watch?v=JME1i3vCRR8
Believe it or not, I wasn't trying to be snarky. I honestly didn't know if I misunderstood your argument. It was possible that you were talking about using a flawed premise or something that I wasn't understanding.
This is analogous to adding uint8_t when we already had unsigned char. In C these would be exactly the same; in C++ they are different types. Same with uint8_t vs. byte: the former is an integer type, the latter is not. (Thus, a better question would be: why introduce byte when we already had unsigned char? I think the answer to that lies in a general tendency to move away from the C way of looking at types and to make code better reflect the intent in a more type-safe manner.)
Overloading and templates. I can now use unsigned char, uint8_t and byte as distinct types, meaning they can be separately overloaded and used as separate template specialisations.
That's not a purely hypothetical point; I already create custom types to do this. Not every 8-bit type is a character, nor is it necessarily an integer. I always found it frustrating that the default stream output was a character when using numerical quantities; now we can specialise raw output accordingly.
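For example (a toy snippet of my own, not from the standard or the paper); note that on most implementations `uint8_t` is just an alias for `unsigned char`, so in practice it's `std::byte` that buys you the genuinely separate overload:

```cpp
#include <cstddef>   // std::byte, std::to_integer
#include <iostream>

void describe(unsigned char c) { std::cout << "character: " << c << '\n'; }
void describe(std::byte b)     { std::cout << "raw byte: 0x" << std::hex
                                           << std::to_integer<int>(b) << '\n'; }

int main() {
    unsigned char letter = 'A';
    std::byte     raw{0x41};

    describe(letter);   // picks the character overload, prints 'A'
    describe(raw);      // picks the byte overload, prints 0x41
}
```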
> Many programs require byte-oriented access to memory. Today, such programs must use either the char, signed char, or unsigned char types for this purpose. However, these types perform a “triple duty”. Not only are they used for byte addressing, but also as arithmetic types, and as character types. This multiplicity of roles opens the door for programmer error (such as accidentally performing arithmetic on memory that should be treated as a byte value) and confusion for both programmers and tools.
>
> Having a distinct byte type improves type-safety, by distinguishing byte-oriented access to memory from accessing memory as a character or integral value. It improves readability. Having the type would also make the intent of code clearer to readers (as well as tooling for understanding and transforming programs). It increases type-safety by removing ambiguities in expression of programmer’s intent, thereby increasing the accuracy of analysis tools.
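A quick illustration of the distinction the paper is drawing (my sketch, not the paper's):

```cpp
#include <cstddef>

int main() {
    std::byte b{0b1010'0001};

    b |= std::byte{0x0F};               // bitwise operations are defined
    b <<= 1;                            // shifts too
    int n = std::to_integer<int>(b);    // conversion must be explicit and intentional

    // b += 1;                          // does not compile: no arithmetic on std::byte
    // int m = b * 2;                   // likewise rejected
    return n != 0 ? 0 : 1;
}
```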
> such as accidentally performing arithmetic on memory that should be treated as a byte value
My reaction to that can be summed up succinctly as "WTF!?" The whole point of uint8_t or (signed/unsigned) char is an 8-bit quantity that you can do arithmetic and bitwise operations on. To put it more bluntly, "have C++ programmers forgotten how computers work?"
The proposed solution is to add yet another same-yet-subtly-different type, with its own set of same-yet-subtly-different rules? If anything, that would cause even more confusion, given the complexity it adds to interactions with all the other parts of the language.
IMHO this "let's do everything we can to stop people from even the very slightest change of possibly doing something wrong" line of thinking is ultimately unproductive... and actually rather dystopian. The end-result is quite scary to contemplate.
(The fact that an 11-page, text-only PDF somehow turns out to be over 800KB is somewhat less disturbing, but still notable.)
I like the idea.