Anyone else think that there is already primitive image data encoded in biological data? Essentially basic shapes and patterns which are passed down semi-generationally.
Anton Petrov has a recent video on YouTube; I haven't watched it yet, but its title is "Could Life Be Transmitted Via Radio Waves? Information Panspermia". Just a bit of fun; I'm sure Anton isn't too wild. He puts out some interesting videos, but not in a way that pushes quackery.
Recently here on HN someone posted a quote saying something like “if you shine light at something for a long enough time, don’t be surprised if you end up getting a plant”
It was about how the environment seems to reorganize in certain ways to dissipate energy (the latest Veritasium video about entropy also talks about this).
I guess it's possible if this conferred some survival advantage.
It can be useful to work from the evidence to a conclusion instead of the other way round.
But wondering and philosophising can be fun :]
It would be cool if humans could pass knowledge on to their offspring. But I always get worried thinking: if I'm the asshole, I wouldn't want my kid to be one too.
I think it would have very high energy requirements. For this trait to survive over generations, there would need to be a tremendous evolutionary benefit. What would that be for “primitive image data”?
Maybe things like "long green shape" (cats' fear of cucumbers because they resemble snakes), or "a series of black and yellow stripes", or even "a black blob with many appendages" to watch out for spiders? Encoding some primitive image data so that further generations know what to avoid or pursue seems like a tremendous evolutionary benefit.
Yeah, I expect this isn't going to be how that sort of mechanism works, but it's always been an interesting concept for me. “Genetic memory” as presented in much fiction is extremely unlikely, just from the sheer entropic hill such mechanisms would have to climb evolutionarily to pass on so much information (on top of the baseline information necessary for reproduction, most memories won't on average confer much reproductive advantage, so they're statistically more likely to get optimised out by the random mistakes of evolution, hence entropically “uphill”)…
Yet while this fictional form is unlikely, we have quite a lot of good examples and evidence for “inherited information”. You have to be careful with it, since it's too easy to accidentally include side channels through which organisms learn the information and thus break the test, such as insects being genetically driven towards food by smell at the level of molecular chemical interactions, with the smell becoming associated with the information you wish to test. If you want to see whether bees genetically know that the shape of a flower is associated with food, a colony can't be reliably tested unless you raise it from a new queen in an odourless environment. It's tough to subtract the possibility that a colony has learned and “programmed” later generations of bees, via things like the classic waggle dance, to gather food more efficiently.
We do have good ones though, like cats and snake-shaped objects; it's surprisingly consistent, and it pops up in some other animal species. It's wired into our brains a bit to watch out for such threats. There's a significant bias towards pareidolia in human brains, and it's telling how deeply some of these things are wired in; studies show it seems to form well before our cognitive abilities do… these all have some obvious reproductive advantages, however, so it makes sense that the “instinct” would be preserved over generations as it confers an advantage. But it's still impressive that it can encode moderately complex information like “looks like the face of my species” or “cylindrical looking objects on the ground might be dangerous”… even if it's encoded at a lossy, subconscious, instinctual level.
> But it's still impressive that it can encode moderately complex information like “looks like the face of my species” or “cylindrical looking objects on the ground might be dangerous”… even if it's encoded at a lossy, subconscious, instinctual level.
I think it helps that the encoding does not have to be transferable in any way. This kind of "memory" has no need for portability between individuals or species - it doesn't even need to be factored out as a thing in any meaningful sense. I.e. we may not be able to isolate where exactly the "snake-shaped object" bit of instinct is stored, and even if we could, copy-pasting it from a cat to a dog likely wouldn't lead the (offspring of the) latter to develop the same instinct. The instinct encoding only ever has to be compatible with one's direct offspring, which is a nearly-identical copy, and so the encoding can be optimized down to some minimum tweaks - instructions that wouldn't work in another species, or even if copy-pasted down a couple of generations of one's own offspring.
(In a way, it's similar to natural language, which rapidly (but not instantly) loses meaning with distance, both spatial/social and temporal.)
In discussing this topic, one has to also remember the insight from "Reflections on Trusting Trust" - the data/behavior you're looking for may not even be in the source code. DNA, after all, isn't a universal, abstract descriptor of life. It's code executed by a complex machine that, as part of its function, copies itself along with the code. There is a lot of "hidden" information capacity in organisms' reproduction machinery, being silently passed on and subject to evolutionary pressures as much as DNA itself is.
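To make that concrete for the programmers: here's a toy sketch I made up (illustrative Python only, nothing from the paper or from biology) where the behaviour of a program is not fully described by its source, because the "compiler" supplies some of it. Thompson's real trick goes further and has the compiler re-insert that logic whenever it compiles a compiler, so the subversion survives with no trace in any source.

    # Toy illustration only: the output's behaviour isn't fully described by
    # the source handed in; part of it lives in the machinery doing the work.
    PAYLOAD = "\nprint('behaviour that appears in no source file')"

    def subverted_compile(source: str) -> str:
        """Pretend 'compiler': returns the source plus behaviour it never asked for."""
        return source + PAYLOAD

    innocent = "print('hello')"
    exec(subverted_compile(innocent))  # prints 'hello', then the injected line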
Oh absolutely... and that's a great analogy for the more computer-oriented. "Reflections on Trusting Trust" highlights how it can be the supporting infrastructure of replication that passes on the relevant information... a compiler attack like that is equivalent to things like epigenetic information transfer... and as a fun bonus, since it came to mind: the short story Coding Machines is great for making sure you never forget the idea behind "Reflections on Trusting Trust" https://www.teamten.com/lawrence/writings/coding-machines/
It definitely would be minimised data transfer, whether via an epigenetic nudge that just happens to work by sheer dumb luck because of some other existing mechanism, or via sophisticated DNA-driven growth of some very specific part of the mammalian connectome that we do not yet understand (we've barely got full connectome maps of worms and insects; mammals are a mile away at the moment)… whatever the mechanism, evolution will have optimised it pretty heavily for simple information-robustness reasons: fragile genetic/reproductive transfer mechanisms that happen to work will break and get optimised out in favour of more robust ones that don't break and more reliably pass on their advantage.
You need to compare that with an alternative solution where this information is learned by each generation, and then assess the survival advantage of having it encoded in DNA. This is outside my field and I don't have a strong opinion.
Category theory isn't the only way to solve this. Arguably a purely continuous dynamical system, like differential equations with certain boundary conditions, would work similarly. Discrete dynamical systems, however, are much better at representing finite, discrete relationships, particularly with recursion. I'm only just learning about these topics, but simple rules can model arbitrarily complex behaviour (Rule 30, the logistic map). These can be viewed quite clearly through the lens of category theory as functors and fixed points. However, the gaps between automata theory, discrete (and non-linear) dynamical systems and finally category theory are still very wide at the moment.
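For concreteness, here's a quick toy sketch of the two examples I named (plain Python, parameters and names are just my own choices, nothing to do with the category-theoretic view): Rule 30 as a discrete dynamical system and the logistic map as an iterated map, both showing a trivially simple rule generating complex behaviour.

    def rule30_step(cells):
        """One Rule 30 update on a ring of 0/1 cells: new = left XOR (centre OR right)."""
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

    def logistic_map(r, x0, steps):
        """Iterate x -> r*x*(1-x); chaotic for r around 3.9."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    # Rule 30 grown from a single live cell, printed as a small triangle.
    row = [0] * 31
    row[15] = 1
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

    print(logistic_map(3.9, 0.5, 10))  # logistic map in the chaotic regime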
CIM/WBEM goes all the way back to 1996. They essentially wanted a management infrastructure on all kinds of devices (including different architectures, so actually C made sense then), but that also notably included remote access. At the time, SOAP was still popular, so here we are with a rather silly transport protocol and all kinds of overhead reinventing things like SSH. However, the overall goal still makes sense, it was essentially a way of 'object'-ifying everything from logs to other metrics. This fit in with the overall mode of thinking in MS with DCOM and COM (and registry), and structured configuration/management. I'm sure it's paid massive dividends on Azure Linux infrastructure. For highly structured objects, SOAP and XML aren't a terrible fit, but I doubt many people would do the same thing again today.
Honestly, they just needed to rewrite it in a safer stack. However, that still may not have saved them from all these vulnerabilities, given the scope of what they're implementing as remote management protocols. The relative scrutiny, fuzzing and manpower just haven't been there, especially when it's obfuscated by various layers.
Not to take away from the rest of what you said, but I don’t think SOAP was _still_ popular in 1996. I don’t think it had become popular yet. I don’t think I even heard of SOAP before 1999 or 2000. I’m not a trend setter or anything, but if it was popular, I probably would have at least heard of it.
That's fair, I was speaking more about XML and its use as a transport format. Things like WS-Management and explicit SOAP obviously came a little bit later, and SOAP-like technologies were popularized for more general use in the 2000s. I think it's fair to say my experiences in general lean more towards observing standards groups.
Implementing it with a hylomorphism is actually quite clean; understanding the plumbing to get there, however, is not. The principles are generally quite simple, but there's no non-mathematical wording to describe them.
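If it helps anyone reading along, here's a rough Python sketch of the general hylomorphism pattern (the names hylo, coalg, alg and the factorial example are mine, purely for illustration; this isn't the parent's actual implementation): an unfold that generates structure from a seed, consumed immediately by a fold, without materialising the intermediate structure as a value.

    from typing import Callable, Optional, Tuple, TypeVar

    A = TypeVar("A")  # seed type
    B = TypeVar("B")  # folded result type
    X = TypeVar("X")  # element produced at each unfold step

    def hylo(coalg: Callable[[A], Optional[Tuple[X, A]]],
             alg: Callable[[X, B], B],
             base: B,
             seed: A) -> B:
        """Unfold from `seed` via `coalg`, folding each element with `alg` as we go."""
        step = coalg(seed)
        if step is None:
            return base
        x, next_seed = step
        return alg(x, hylo(coalg, alg, base, next_seed))

    # Example: factorial as a hylomorphism over the virtual list [n, n-1, ..., 1].
    def count_down(n: int) -> Optional[Tuple[int, int]]:
        return None if n == 0 else (n, n - 1)

    print(hylo(count_down, lambda x, acc: x * acc, 1, 5))  # 120

The plumbing the maths hides is exactly that coalg/alg split and the recursion tying them together.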
Have you seen any signs of which compression algorithm might be the one affected? Presumably it's one of their more snowflake ones. If it is in a library, surely SMB isn't the only affected resource? Perhaps it's not the library, but the plumbing or the headers and such.
It's not actually clear that they prioritize their own products all that much. Certainly in regards to Project Zero, it seems the point is that they are detached from the rest of the product teams. (Correct me if I'm wrong)
Outside of the security teams, I think it's actually that Chromium is much better fuzzed and scrutinized. They just have so many more resources, including those for security.
> It's not actually clear that they prioritize their own products all that much. Certainly in regards to Project Zero, it seems the point is that they are detached from the rest of the product teams. (Correct me if I'm wrong)
Most of what I've read relates to Microsoft products - and I'm not saying that Microsoft is better/worse than the rest when it comes to security.
Depends on what you mean by Linux. The ABI Microsoft were attempting to emulate was absolutely Linux. (This API business is exactly the issue with Oracle vs Google over Java).
No, IIRC a lot of cross-platform applications are still going to be using this infrastructure. In this case, they only need a subset of the Linux APIs and they can optimize it to produce the right translations.
Because they aren't starting from nothing? They're starting from humans -- the more we can usefully encode about our knowledge of ourselves, the more time we can skip in evolutionary effort. The mutation rate is also rapidly accelerated, and although we drop fidelity in simulation, for the most part we can run orders of magnitude faster than real time (and certainly so if a limiting factor was human decision-making time).
As a counterpoint, I would like to say that a lot of research is still 'routine intellectual work'. Movers and shakers are few and far between. The vast majority of academia is collectively and slowly boiling over problems, rather than taking bold and independent strides.
At a certain level of abstraction, perhaps, but someone who's only good at learning and regurgitating existing knowledge is still not going to do well at even routine research. The "routine" of bulk research is still a higher-order routine than standardized tests.
> As a counterpoint, I would like to say that a lot of research is still 'routine intellectual work'. Movers and shakers are few and far between.
While this is very true, the "movers and shakers" are the ones who set the standard of a research culture. Frankly, that's why the US has a major research advantage over most (probably all) countries that strongly embrace tiger parenting.
It's fairly routine, but by the standards of, say, the LSAT or GRE, it's hardly signposted at all. I kinda understated my point in the original post by not making it clear we're not talking about discoveries of breathtaking originality; more "do your first independent work" at the honors or 'new senior dev' level.