The fine details resemble the analysis of correctness - all the evidence shows people expect per-iteration semantics with considerable frequency, and rely on per-loop semantics only rarely. But it’s impossible to completely automate that assessment. Likewise, it’s impossible to automatically detect code that will spuriously allocate because of the semantic transition.
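For anyone not following the proposal, a minimal Go sketch of the difference being analyzed - closures capturing the loop variable are the classic case (this is the textbook example, not code from the proposal's corpus):

    package main

    import "fmt"

    func main() {
        var prints []func()
        for _, v := range []string{"a", "b", "c"} {
            // Per-loop semantics (old): every closure shares one v,
            // so all three calls print "c".
            // Per-iteration semantics (new): each iteration gets its own v,
            // so the calls print "a", "b", "c" - but v may now escape and
            // allocate on every iteration, which is the "spurious allocation"
            // concern above.
            prints = append(prints, func() { fmt.Println(v) })
        }
        for _, p := range prints {
            p()
        }
    }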
Wikipedia suggests Modern begins in the 1860s; the parent post is absolutely correct that it’s more the patronage rather than the artists or styles that drive classification in this case. Lots of things are disrupting the patronage of art around this time, especially photography, and changing markets for art result in very different kinds of art being produced.
Names would be named if that were really the problem. I don’t think it is. The common thread between this and the Foundation trademark drama is that the Project sucks at coordinating and communicating - they have been bigfooters late in the process and not servants earlier on.
I don't think enabling a witch hunt is the right play, but it is worth noting that the longer the people who know what's going on behind the scenes keep alluding to it without actually saying anything, the more fuel gets added to the speculation fire.
Which is really all to say: fuck your concept of a weekend, the Rust ecosystem should've cut a response Friday night or Saturday.
FWIW: slog has been pretty stable for a month or two, and should officially land in the standard library in go1.21
There was a last round of changes mostly revisiting use of contexts a few months ago - hats off to jba for taking a lot of time to work out the best fit
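For reference, a minimal sketch of what using it looks like as of go1.21 (the handler choice and fields here are just illustrative, not anything from that last round of changes specifically):

    package main

    import (
        "context"
        "log/slog"
        "os"
    )

    func main() {
        // Structured JSON output to stdout; NewTextHandler is the other built-in.
        logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
            Level: slog.LevelDebug,
        }))

        // Alternating key/value args become structured fields.
        logger.Info("server started", "port", 8080, "tls", false)

        // The *Context variants are where the context revisions ended up.
        logger.InfoContext(context.Background(), "handling request",
            slog.String("route", "/healthz"))
    }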
I’m left wing and think it’d be catastrophic to reject AI for military applications.
I do worry that hyping something as “ChatGPT-like” distorts better analysis, and makes political discussions about employing AI more opaque than they should be. Maybe the way I’m more left wing is that the profiteering angle worries me.
I’m not sure text prompts offer the kind of plasticity needed for artists to really do their thing.
Frank Gehry used to do really chaotic abstract drawings and hand them off to aerospace CAD operators to turn them into something that could be engineered and machined and assembled. Lots of people could have done the drawings and produced magic mushroom facades, but Gehry understood the resulting interior spaces as well … I think there is a ton of potential for great tools employing the past decade or so of ML/AI, but ISTM it should empower the process rather than replace it.
I can only find one source on this that isn't extremely recent[1]. It seems like it would only brick the device if you pirated and used an illegitimate copy of the custom firmware.
Well that definitely makes him an asshole, but not a prison-and-then-life-of-crushing-debt asshole. Is copyright law really this fucked, that someone was extradited and made an example for selling a ransomware chip?
Time constraints are a bit of state that can influence how both humans and engines search for moves. There was some contention over how to understand the training time of AlphaZero in relation to more conventional engines - even as just a physical process this seemed pretty interesting.