piannucci's comments

AIUI, there is a technical criterion for an ambient EM field to imbue circuits within it with broken time-reversal symmetry.

One example of a system that meets the criterion is a ferromagnet. Another is this altermagnet.

One example of a system that doesn’t meet the criterion is a diamagnet. Another is the anti-ferromagnet.

Roughly speaking, some systems are microscopically “asymmetric enough” to be useful in a certain way, and others are “too symmetrical.”

Ferromagnets have a downside that altermagnets avoid: their microscopic fields don’t average out to zero over macroscopic distances.

I think, but honestly don’t really understand, that the goal is to cause the material to treat currents of spin-up and spin-down charge carriers (think electrons or holes) dissimilarly. Constructing materials that distinguish between charge carriers of differing spin is a step towards spintronics. Again, I don’t know why that’s important, but it is what it is.


To add a bit more to this: it's really a new class of magnetism. Traditionally we think of ordered magnetic materials as one of two types: ferromagnets, which are what you picture when you think of a magnet, and antiferromagnets. Antiferromagnets order their magnetic moments antiparallel locally, so whichever way you measure, the net magnetization is zero.
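A toy picture of that cancellation (my own cartoon of alternating moments on a 1-D chain, not a physical model):

```python
# Toy 1-D spin chains: each lattice site carries a moment of +1 (up) or
# -1 (down); the net magnetization is the average moment over the chain.
N = 1000
ferromagnet = [+1] * N                            # all moments parallel
antiferromagnet = [(-1) ** i for i in range(N)]   # alternating up/down

net_fm = sum(ferromagnet) / N      # nonzero: the source of a stray field
net_afm = sum(antiferromagnet) / N  # zero: the moments cancel pairwise
```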

The application is this: We would like to use ferromagnets and spin currents to make spin-electronic devices ("spintronic") where only the spin information is transferred without any large electrical currents. The goal of this is to save energy from Joule heating as spin can flow with significantly lower energy dissipation.

Ferromagnets run into a lot of problems: they have a stray field, so patterned elements interact and interfere with each other, which limits how densely the nanostructures can be packed. Antiferromagnets have a different big problem: they are extraordinarily difficult to measure, and that remains a challenge to overcome.

So the appeal of altermagnetic materials is that they aim to overcome the problems of both classes while retaining the strengths of both.

The exact definition of the ordering of an altermagnet is a bit subtle; it mostly comes from understanding how the electronic band structure differs from that of ordinary antiferromagnets.


:shocked_pikachu:

Renegadry aside, for those who are more interested in the Information Theory perspective on this:

Kolmogorov complexity is a good teaching tool, but hardly used in engineering practice because it contains serious foot-guns.

One example of defining K complexity(S, M) is the length of the shortest initial tape contents P for a given abstract machine M such that, when M is started on this tape, the machine halts with final tape contents P+S. Obviously, one must be very careful to define things like “initial state”, “input”, “halt”, and “length”, since not all universal machines look like Turing machines at first glance, and the alphabet size must either be consistent for all strings or else appear as an explicit log factor.
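As a cartoon of how strongly the choice of M matters (the two "machines" and the "!" instruction below are invented for illustration; they are not real universal machines):

```python
TARGET = "the quick brown fox jumps over the lazy dog " * 20

def machine_generic(program: str) -> str:
    # A bare-bones machine: the program is simply the literal output.
    return program

def machine_rigged(program: str) -> str:
    # A machine with one extra instruction: the single symbol "!"
    # prints the hard-coded TARGET; anything else is literal output.
    return TARGET if program == "!" else program

def shortest_program(s: str, machine) -> str:
    # True K complexity is uncomputable; for these trivial machines the
    # shortest program is either "!" or the string itself, so we need
    # only check those two candidates.
    candidates = ["!", s]
    return min((p for p in candidates if machine(p) == s), key=len)

# Under machine_generic, TARGET needs all of its own characters;
# under machine_rigged, a single symbol suffices.
```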

Mike’s intuitive understanding was incorrect in two subtle ways:

1. Without specifying the abstract machine M, the K complexity of a string S is not meaningful. For instance, given any S, one may define an abstract machine with a single instruction that prints S, plus other instructions to make M Turing complete. That is, for any string S, there is an M_S such that complexity(S, M_S) = 1 bit. Alternatively, it would be possible to define an abstract machine M_FS that supports filesystem operations. Then the complexity using Patrick’s solution could be made well-defined by measuring the length of the concatenation of the decompressor P with a string describing the initial filesystem state.

2. Even without adversarial examples, and with a particular M specified, uniform random strings’ K complexity is only _tightly concentrated around_ the strings’ length plus a machine-dependent constant. As Patrick points out, for any given string length, some individual string exemplars may have much smaller K complexity; for instance, due to repetition.
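A quick empirical illustration of point 2, using zlib's compressed length as a crude, machine-specific upper bound on K complexity (a stand-in for the real, uncomputable quantity):

```python
import os
import zlib

def approx_k(data: bytes) -> int:
    # Compressed length: an upper bound on description length under the
    # "machine" consisting of zlib's decompressor.
    return len(zlib.compress(data, 9))

n = 10_000
random_bytes = os.urandom(n)    # a typical uniform random string
repetitive = (b"abc" * n)[:n]   # an atypical, highly regular exemplar

# Random data barely compresses: approx_k stays near n.
# The repetitive exemplar collapses to a tiny description.
```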


>uniform random strings’ K complexity is only _tightly concentrated around_ the strings’ length plus a machine-dependent constant

What is the distribution of the complexity of a string? Is there some Chernoff-like bound?


I think you’re putting the right spin on this.


I love the drama of how the abstract is written, but TBH I don't think this is a surprise. I believe it's well-known among color theorists that large perceptual distances are inconsistent with sums of small differences. So maybe the most generous thing to say here is, good on them for bringing awareness of this subtlety to a broader audience.


Not only that, it is also well known that the smallest perceptible color difference (ΔE=1) is not actually consistent, even in the "perceptually uniform" color spaces.

So the just-noticeable difference corresponds to ΔE = 1 in some parts of the space, but up to ΔE = 4 in others. However, that is "good enough" for the purpose for which these color spaces were created: quality standards for color ("can I buy more of the same color and it will look the same?"). If your measured and computed ΔE is below one, the difference will not be perceptible to most humans regardless.
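For concreteness, the original CIE76 formula is just Euclidean distance in CIELAB (a sketch; the specific Lab coordinates below are arbitrary examples, and later formulas like CIEDE2000 add correction terms precisely because equal CIE76 distances are not equally visible everywhere):

```python
import math

def delta_e_cie76(lab1, lab2):
    # CIE76 Delta E: straight Euclidean distance in (L*, a*, b*).
    return math.dist(lab1, lab2)

# The same numeric Delta E = 1 in two different regions of the space,
# even though human sensitivity differs between those regions:
near_gray = delta_e_cie76((50.0, 0.0, 0.0), (51.0, 0.0, 0.0))
saturated = delta_e_cie76((32.0, 79.0, -107.0), (32.0, 79.0, -106.0))
```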

And last I checked, new and improved "perceptually uniform" color spaces are proposed every couple of years.


Yep: color space is a pragmatic kludge not a “real thing” divorced from the human neural network that is its fundamental basis.


I don’t see how the author’s arguments about impossibility results pertaining to “distributed sub-symbolic architectures” apply any more strongly to LLMs or DNNs than they do to human brains. Human programmers aren’t magically capable of solving the halting problem either, but we muddle through somehow.


Yea. Most of what we do isn't that rigorous and it's fine. When we do need rigor, we use external tools, like classical computer programs or writing math down on paper. LLMs can use external tools too. We're also hopeless at explainability - people usually have no idea why they make most of the decisions they do. If we have to, we try to rationalize but it's not really correct because it doesn't reproduce all the intuition that went into it. Yet somehow we can still write software!


Book golems


Wiki says they're incorporated in the Netherlands.


<ironic> Finally Europe has a European Google! </ironic>


Europe stretches to the Urals.


Yandex has been there for many years now. It's just that this is now the "European" Yandex, and in Russia there is a "Moscow Yandex".

Also - search tech will remain in Moscow. EU Yandex is mainly about cloud solutions, self-driving cars etc.


Probably just to avoid paying taxes (the famous "Dutch sandwich" trick).


So many footguns in this proposal! From the misleadingly symmetric symbols <= => >=, to the unintuitive behavior (for non-mathematicians) in the case of negative left argument, to the similarity to operators in other languages. Short-circuiting is already possible with the well-understood (!A || B) notation, which has the added bonus of allowing (B || !A) as an alternative with the same truth table but opposite short-circuiting. This proposal saves one “!” character at a very high cost. Just… why?
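The point about opposite short-circuiting can be seen with side-effecting operands (sketched in Python, since the proposal's host language isn't specified here; `a` and `b` are hypothetical predicates that record when they run):

```python
evaluated = []

def a() -> bool:
    evaluated.append("A")
    return True

def b() -> bool:
    evaluated.append("B")
    return True

# "A implies B" written as (!A || B): A is evaluated first,
# and B only when A is true.
result1 = (not a()) or b()
order1 = list(evaluated)

evaluated.clear()

# Same truth table, opposite short-circuiting: B is evaluated first,
# and A only when B is false.
result2 = b() or (not a())
order2 = list(evaluated)
```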


I haven't studied antitrust policy in ages, but IIRC one aspect of this in the past has been to forbid the monopolist from peeking at their competitors' prices.


"In re" ("in the matter of", a legal term) != "re:" (regarding/reply).


I'm sorry, but I don't see how that difference is material to the HN title.

