Because the Linux kernel is incredibly complicated and shouldn't be trusted as a strong security boundary. A simple hypervisor has far fewer vulnerabilities, so it is an easier boundary to trust. They are in very different security tiers.
I would summarize it as: containers are good for mostly trusted isolation (teams within a company, purchased software), VMs are good for generally untrusted software (different tenants in a cloud provider), and separate physical hardware is for scenarios where attacks are likely (military, known malicious code). Of course the use cases are very fuzzy, but I wouldn't run fully untrusted code in the same kernel as anything of value.
I’m having a hard time understanding what your problem with the “sustainability” concept is. Don’t you think that “efficiency” was the tyrant of the 20th century yet no one complains about it in the 21st century? What about “AI”?
Yes, sustainability will mean different things across different domains, because for some reason people are starting to realize the consequences of their actions in the real world. What else would you expect?
One should not confuse energy efficiency with sustainability. Yes, pushing simpler frameworks to their limits is good, but it is not the main issue, nor is it enough given the exponential growth in demand that we have been witnessing.
I tend to disagree: in the same way that it is better to use a higher-efficiency-class dishwasher or washing machine, it is better to use code that achieves the same result with less computing power.
That energy demand increases for reason x does not mean that efficiency efforts are in vain.
HE washers have been known to fail more quickly, are more complex and expensive to repair, and don't do as good a job as the machines they replaced, so I don't think that's a good analogy.
Yes, which is very sad. Nearly all things go into this direction. I still remember the old days of jail breaking the iPhone, and the cat and mouse chase with Apple.
> It is very, very simple, and I would say easier to learn than Python, as long as you remember that it is not programming but maths. For example, suppose you specify this function on the integers:
> f ≜ CHOOSE f ∈ [Int → Int] :
> ∀ x ∈ Int : f[x] = -f[x]
> What function is it? Clearly, it's the zero function
Did you mean your example is the constant function [1], rather than a zero function [2] (where c = 0)?
* Never mind, I just saw you used the ">" sign in the definition. Is that why the definition only applies to positive numbers? In any case, you did not write it in your textual description, which looked confusing to me. I think it would be easier if one could define it as ℤ+ or something like that.
f ≜ CHOOSE f ∈ [Int → Int] :
∀ x ∈ Int : f[x] = -f[x]
You added the > in your quote of pron; he didn't have it in the original. There is no c in ℤ with c != 0 s.t. f(x) = c and f(x) = -f(x): that would imply c = -c for a non-zero integer, which is false. The only function that can satisfy pron's constraint is f(x) = 0, since c = 0 is the only case where c = -c, i.e., 0 = -0.
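That arithmetic step can even be spelled out as a machine-checked proof. Here is a sketch in Lean 4 (the theorem name is my own choice; the built-in `omega` tactic handles the linear integer arithmetic):

```lean
-- Over the integers, f x = -f x forces 2 * f x = 0, hence f x = 0.
theorem zero_of_self_neg (f : Int → Int) (h : ∀ x, f x = - f x) :
    ∀ x, f x = 0 := by
  intro x
  have hx := h x   -- hx : f x = - f x
  omega            -- linear integer arithmetic closes the goal
```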
That’s true, my mistake. Thank you for the clarification! In this case, I have another question.
Why is the original definition different from, say,
f ≜ CHOOSE f ∈ [Int → Int]:
∀ x ∈ Int : f[x] = 0
If you want some function to be 0, just specify it. Why does one need this broader but more complex way of specifying the possible “input” space in TLA+? How does it help, is my question, I guess.
Because sometimes you're not sure whether your belief is correct. To show what TLA+ can do, I used a simple example where the truth of the proposition (f is the zero function) is pretty obvious, yet one that behaves very differently from how programming works, to demonstrate how you can prove things in TLA+:
THEOREM fIsTheZeroFunction ≜
f = [x ∈ Int ↦ 0]
PROOF
⟨1⟩ DEFINE zero[x ∈ Int] ≜ 0
⟨1⟩1. ∃ g ∈ [Int → Int] : ∀ x ∈ Int : g[x] = -g[x] BY zero ∈ [Int → Int]
⟨1⟩2. ∀ g ∈ [Int → Int] : (∀ x ∈ Int : g[x] = -g[x]) ⇒ g = zero OBVIOUS
⟨1⟩3. QED BY ⟨1⟩1, ⟨1⟩2 DEF f
The TLAPS TLA+ proof-checker verifies this proof instantly.
You can then use that proof like so:
THEOREM f[23409873848726346824] = 0
BY fIsTheZeroFunction
But when you specify a non-trivial thing, say a distributed system, you want to make sure that your proposition -- say, that no data can be lost even in the face of a faulty network and crashing machines -- is true but you're not sure of it in advance.
Writing deductive proofs like above can be tedious when they're not so simple, and the TLA+ toolbox contains another tool, called TLC, that can automatically verify certain propositions (with some caveats), especially those that arise when specifying computer systems (though it cannot automatically prove that f is zero everywhere).
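The finite-checking idea behind a tool like TLC can be illustrated in a few lines of Python. This is not TLC, just a toy sketch: it exhaustively enumerates every function f over a small finite domain and keeps the ones satisfying f[x] = -f[x], which is exactly why such a checker cannot prove the property over all of Int:

```python
from itertools import product

# Toy exhaustive check over a small finite domain D (standing in for Int).
D = [-2, -1, 0, 1, 2]

satisfying = []
for values in product(D, repeat=len(D)):
    f = dict(zip(D, values))            # one candidate function D -> D
    if all(f[x] == -f[x] for x in D):   # the property from the spec
        satisfying.append(f)

# Only the all-zero function survives the check over D.
print(satisfying)  # -> [{-2: 0, -1: 0, 0: 0, 1: 0, 2: 0}]
```

Over the 3125 candidate functions on this domain, exactly one passes: the zero function. A deductive proof, by contrast, covers the infinite domain in one stroke.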
So the purpose of my example wasn't to show something that is useful for engineers in and of itself, but to show that TLA+ works very differently from programming languages, and it is useful for different things: not to create running software but to answer important questions about software.
This was just an example that TLA+ is not executable.
You didn't realise that f[x] = -f[x] implies f[x] = 0, and that is often how it is: you have some property, but you don't know exactly what it entails. TLA+ allows you to reason about that.
I think the comparisons to images are a bit misleading, where the author implies that generating images is somehow more energy-efficient than taking the photos yourself.
It basically ignores the fact that the very data these large models were trained on came from real photos taken by real cameras, which used real energy that should be counted in the total training cost, i.e., in addition to the electricity consumed by the GPUs. This would affect how the total cost can be amortised with usage. The impact might only be to slow the amortisation a little, but it should still be included for a fair comparison.
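The shape of that amortisation argument can be sketched in a few lines of Python. Every number below is a hypothetical placeholder, chosen only to show how including data-collection energy shifts the per-image figure, not a real measurement:

```python
# All figures are hypothetical placeholders, not real measurements.
TRAINING_ENERGY_KWH = 1_000_000       # GPU electricity for training (hypothetical)
DATA_COLLECTION_KWH = 250_000         # energy behind the source photos (hypothetical)
ENERGY_PER_GENERATION_KWH = 0.01      # marginal cost of one generated image (hypothetical)

def amortised_kwh_per_image(num_images: int, include_data_cost: bool) -> float:
    """Energy per generated image once fixed costs are spread over usage."""
    fixed = TRAINING_ENERGY_KWH + (DATA_COLLECTION_KWH if include_data_cost else 0)
    return fixed / num_images + ENERGY_PER_GENERATION_KWH

n = 100_000_000  # total images ever generated (hypothetical)
print(amortised_kwh_per_image(n, include_data_cost=False))
print(amortised_kwh_per_image(n, include_data_cost=True))
```

With these made-up numbers the data-collection energy raises the per-image figure but does not change its order of magnitude, matching the point above: it mainly slows the amortisation rather than reversing the comparison.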
Same applies to some of the other examples they give.