
The answer may be in your question.

- This is currently solved by inference pipelines.
- Models and techniques improve over time.

The ability of different agents with different specialties to add content while taking advantage of existing context is what makes the pipeline work.

Storing that content in the format could allow us to continue to refine the information we get from the image. Each tool that touches the image can add new context or improve existing context and the image becomes more and more useful over time.

I like the idea.


Said it better than I could have

Also, the idea is to integrate the conversion processes/pipelines with other data that will help with customized workflows.


> Each tool that touches the image can add new context or improve existing context and the image becomes more and more useful over time.

This is literally the problem solved by chunk-based file formats. "How do we use multiple authoring tools without stepping on each other" is a very old and solved problem.
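
For concreteness, a rough sketch of what one of those records looks like, in the spirit of PNG/RIFF (illustrative layout and names, not any specific format):

    #include <cstdint>
    #include <string>
    #include <vector>

    // A chunk is a short tag plus an opaque payload of known length.
    // A tool that doesn't recognize a tag simply skips payload.size() bytes,
    // so different tools can append their own chunks without stepping on
    // each other's data.
    struct Chunk {
        std::string               tag;      // e.g. "EXIF", "anno"
        std::vector<std::uint8_t> payload;  // tool-specific data
    };

    // A file is just a sequence of chunks; appending never disturbs earlier ones.
    using ChunkedFile = std::vector<Chunk>;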


  “A student asked, ‘Yeah, but do the wrinkles always form in the same way?’ And I thought: I haven’t the foggiest clue!” said German, a faculty member at the Thomas J. Watson College of Engineering and Applied Science’s Department of Biomedical Engineering. “So it led to this research to find out.”
I wish the authors had mentioned the kid by name in the acknowledgements section of the paper. I bet the kid would have felt very proud and inspired to have their name published in a scientific journal.


That makes me wonder how many important results actually came from inquiries by curious students whose names were forgotten or purposely omitted, lost to history. Any other similar examples? The "Student" of Student's t-distribution was actually a chemist at Guinness who had to adopt a pseudonym because of his employer, so I think it doesn't count.


[flagged]


It's because such research has no obvious initial use that the public must pay for it; no private enterprise will fund it, and often it will be useless knowledge, but occasionally someone will figure something out that unlocks a whole new understanding of the world.

It's publicly-funded venture capital for ideas.


IIRC even the laser was seen as a novelty demonstration of quite an obscure effect…


Gladstone once asked Faraday about the usefulness of electricity, just saying.

Faraday's response: "Why sir, there is every possibility that you will soon be able to tax it!"


I experimented with the proposed parallel data type extensions to the C++ standard library. I got impressive performance gains for calculating APFS Fletcher checksums without resorting to compiler intrinsics or inline assembly.

Gains were even more impressive when adding some simple loop unrolling: https://jtsylve.blog/post/2022/12/24/Blazingly-Fast-er-SIMD-...
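
For the curious, here's roughly what that style of code looks like with the proposed std::experimental::simd types. This is a plain data-parallel sum rather than the Fletcher checksum from the post, and it assumes the input length is a multiple of the vector width:

    #include <cstddef>
    #include <cstdint>
    #include <experimental/simd>

    namespace stdx = std::experimental;

    // Sum 32-bit words one vector register's worth at a time, with no
    // intrinsics or inline assembly in sight.
    std::uint64_t simd_sum(const std::uint32_t* data, std::size_t n) {
        using vec = stdx::native_simd<std::uint32_t>;
        std::uint64_t total = 0;
        for (std::size_t i = 0; i < n; i += vec::size()) {
            vec v(&data[i], stdx::element_aligned);  // load vec::size() lanes
            total += stdx::reduce(v);                // horizontal add of the lanes
        }
        return total;
    }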


I think it just causes the type loophole_t<std::string, 0> to be defined (and thus the loophole friend function).
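
For anyone who hasn't seen the friend-injection trick before, here's a guess at its general shape; the names mirror the comment above, but the real definitions aren't shown here, so treat this as an illustrative sketch (some compilers warn about the non-template friend):

    #include <string>
    #include <type_traits>

    // Declares loophole() for a given key, but doesn't define it yet.
    template <class T, int N>
    struct loophole_tag {
        friend auto loophole(loophole_tag<T, N>);
    };

    // Instantiating loophole_t<T, N> is what defines that friend function:
    // the definition is injected into the enclosing namespace as a side effect.
    template <class T, int N>
    struct loophole_t {
        friend auto loophole(loophole_tag<T, N>) { return T{}; }
    };

    // Forcing loophole_t<std::string, 0> to be defined also defines
    // loophole(loophole_tag<std::string, 0>), whose return type is now std::string.
    template struct loophole_t<std::string, 0>;
    static_assert(std::is_same_v<decltype(loophole(loophole_tag<std::string, 0>{})),
                                 std::string>);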


Does this also mean that they're the largest private recipients of farm subsidies?


Farm subsidies are largely a myth and a misconstruction of "externalities" as subsidies. If you knew how to get the alleged subsidies that the media likes to trot out to disparage farmers into the hands of actual farmers, while charging a small % as a consulting fee, you'd be unimaginably wealthy.


USDA estimated farmers received $46.5 billion in direct payments in 2020. Where did this money actually go?


https://data.ers.usda.gov/reports.aspx?ID=17833

For the past decade it has been only about $10 billion a year, with fixed direct payments largely eliminated in 2014. 2020 will obviously be an exception due to Covid-19. Most of the recent payments come from the Market Facilitation Program, meant to prevent exports from being wrecked by retaliatory tariffs under Trump. https://www.farmers.gov/manage/mfp

That's very much a means-tested program to prevent farmers from being driven into bankruptcy by tariffs on things they've already produced. It's not a magic money fountain.


This doesn't really answer my question. I knew the money was given for reasons. The question is who is really getting it.

Note that the same report says that excluding subsidies, farmer net income increased in 2020 over 2019, so I'm not convinced the Covid pandemic is a good reason.


I'm only upvoting b/c you clearly have navigated the USDA ERS website before... (or your Google skills are lvl 100)

That type of info should be a whole lot easier to access and digest than it currently is.


Hard to say. Farm subsidies have limits to discourage this. There are a ton of loopholes, and not all crops qualify for subsidies. There is also debate about what even counts as a subsidy.



Is there anyone who can ELI5 what `beta8.Mod(beta8, curve.P)` does?


It is hard to explain what `beta8` and `curve.P` are specifically, but they are arbitrary-precision integers, so you can see what went wrong from the following pseudocode:

    x3 = alpha * alpha
    beta8 = beta << 3
    // beta8 %= curve.P
    x3 -= beta8
    while x3 < 0 {
        x3 += curve.P
    }
Essentially we want to compute `(alpha * alpha - beta * 8) % curve.P`. The modulo is expensive, though, so for typical inputs we can instead repeatedly add `curve.P` a few times to get the same result. This is a valid optimization when we are sure of the range of `alpha` and `beta`, but `beta` can be controlled from outside. A very large `beta` from an attacker will make the while loop run effectively forever, a denial-of-service attack.


Here's the new code:

    beta8.Mod(beta8, curve.P)
    x3.Sub(x3, beta8)
    if x3.Sign() == -1 {
        x3.Add(x3, curve.P)
    }
    x3.Mod(x3, curve.P)
I don't understand why it's all necessary. This is shorter and seems to do the exact same thing:

    beta8.Mod(beta8, curve.P)
    x3.Sub(x3, beta8)
    x3.Mod(x3, curve.P)
Also, beta8 is never used after this code. So this should do the same thing as well:

    x3.Sub(x3, beta8)
    x3.Mod(x3, curve.P)


I think you are right. Go's `big.Int.Mod` is Euclidean (for `x % y` the result is always in `[0, |y|)`), so the extra code is redundant. It doesn't seem to be required to run in constant time (if it were, we wouldn't have the `if` at all); probably the committer wanted a minimal change?


I cannot ELI5 this for you, but here’s more context:

  beta8 := new(big.Int).Lsh(beta, 3)
  beta8.Mod(beta8, curve.P)
https://golang.org/pkg/math/big/#Int.Mod


I'm not sure when you last tried, but in the last year VSCode has come a long way towards "just working" out of the box for C++. They've specifically focused on it. If you've got some time, I'd recommend that you check out Rong Lu's CppCon 2018 talk. https://www.youtube.com/watch?v=JME1i3vCRR8


The Japanese bombed Pearl Harbor, not the Germans. Am I missing something from your argument or is this just a silly mistake?


It was an unnecessarily snarky comment (not trying to be mean), just a way to illustrate that I viewed OP's comment as a non sequitur.

http://bfy.tw/C1Mr


Believe it or not, I wasn't trying to be snarky. I honestly didn't know if I misunderstood your argument. It was possible that you were talking about using a flawed premise or something that I wasn't understanding.

Sorry to offend.


Nope, the spec seems to suggest it's unsigned char. This is also what gcc has done. https://patchwork.ozlabs.org/patch/737032/


Can anyone express plainly what the point of the std::byte circle jerk is when we already have uint8_t?


This is analogous to adding uint8_t when we already had unsigned char. In C these would be exactly the same; in C++ they are different types. Same with uint8_t vs. byte: the former is an integer type, the latter is not. (Thus, a better question would be, why introduce byte when we already had unsigned char. I think, the answer to that is in a general tendency of moving away from the C way of looking at types and making code better reflect the intent and do it in a more type-safe manner.)


Overloading and templates. I can now use unsigned char, uint8_t and byte as distinct types, meaning they can be separately overloaded and used as separate template specialisations.

That's not a purely hypothetical point; I already create custom types to do this. Not every 8-bit type is a character, nor is it necessarily an integer. I always found it frustrating that the default stream output was a character when using numerical quantities; now we can specialise raw output accordingly.
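
A minimal sketch of the overloading point (the dump name is just for illustration):

    #include <cstddef>
    #include <iostream>

    // std::byte is a distinct type, so it gets its own overload instead of
    // falling into the character overload that unsigned char picks up.
    void dump(unsigned char c) { std::cout << "char: " << c << '\n'; }
    void dump(std::byte b)     { std::cout << "byte: " << std::to_integer<int>(b) << '\n'; }

    int main() {
        dump(static_cast<unsigned char>(65));  // prints "char: A"
        dump(std::byte{65});                   // prints "byte: 65"
    }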


std::uint8_t is not required to exist on a particular implementation, for example, if the machine byte is not 8-bit.

std::byte still seems pretty useless, though. There already is a built-in type for designating bytes: unsigned char.


It's for clarity around fixed-size bitwise operations. Here's how the spec doc describes its motivation:

(http://open-std.org/JTC1/SC22/WG21/docs/papers/2017/p0298r3....)

Motivation and Scope:

Many programs require byte-oriented access to memory. Today, such programs must use either the char, signed char, or unsigned char types for this purpose. However, these types perform a “triple duty”. Not only are they used for byte addressing, but also as arithmetic types, and as character types. This multiplicity of roles opens the door for programmer error – such as accidentally performing arithmetic on memory that should be treated as a byte value – and confusion for both programmers and tools.

Having a distinct byte type improves type-safety, by distinguishing byte-oriented access to memory from accessing memory as a character or integral value. It improves readability. Having the type would also make the intent of code clearer to readers (as well as tooling for understanding and transforming programs). It increases type-safety by removing ambiguities in expression of programmer’s intent, thereby increasing the accuracy of analysis tools.
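
A short illustration of that restriction (C++17 sketch):

    #include <cstddef>

    int main() {
        std::byte b{0xA0};
        b >>= 4;                          // bitwise operations are defined
        b |= std::byte{0x01};
        // b += 1;                        // error: no arithmetic on std::byte
        // int i = b;                     // error: no implicit integer conversion
        int i = std::to_integer<int>(b);  // conversion must be explicit
        return i == 0x0B ? 0 : 1;         // b is now 0x0B
    }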


such as accidentally performing arithmetic on memory that should be treated as a byte value

My reaction to that can be summed up succinctly as "WTF!?" The whole point of uint8_t or (signed/unsigned) char is an 8-bit quantity that you can do arithmetic and bitwise operations on. To put it more bluntly, "have C++ programmers forgotten how computers work?"

The proposed solution is to add yet another same-yet-subtly-different type, with its own set of same-yet-subtly-different rules? If anything, that would cause even more confusion, due to the complexity it introduces in its interactions with all the other parts of the language.

IMHO this "let's do everything we can to stop people from even the very slightest chance of possibly doing something wrong" line of thinking is ultimately unproductive... and actually rather dystopian. The end result is quite scary to contemplate.

(The fact that an 11-page, text-only PDF somehow turns out to be over 800KB is somewhat less disturbing, but still notable.)


It's just a bit of type safety. Calm down.


I happen to be in the Netherlands today and have had no problem using Google on OS X & Chrome.

