But will they be exchanging anything, and would that anything be worth calling "information"? There are a lot of assumptions in this sentence, to the point of magical thinking. (Our) AI is not magic, and is only interpolating human knowledge, so I don't think it can figure anything out by itself. In any case, nothing novel. Note I'm not saying "impossible", but it's definitely not just "let them chat" - which already assumes they found a way to do (what we define as) chat.
> But will they be exchanging anything, and would that anything be worth calling "information"?
If I'm understanding correctly, the main idea is for it to figure out language/grammar rules, which would then enable communication between the two species. I think it makes sense to call that information, but its utility is independent of what we call it.
Such a device might make sense to include on probes we send out in case they encounter life, since in many cases round-trip communication with Earth would take a long time.
> (Our) AI is not magic, and is only interpolating human knowledge, so I don't think it can figure anything out by itself.
I don't think deep learning models are interpolating in any literal sense - the curse of dimensionality means essentially no real inputs or outputs fall within the convex hull of the training samples.
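You can see this numerically with a small experiment: draw training points from a Gaussian, then check (via a linear feasibility problem) how many fresh points from the same distribution land inside their convex hull. A minimal sketch - the sample counts and the Gaussian data are arbitrary choices for illustration, not anything specific to real models:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(point, samples):
    """Is `point` a convex combination of `samples`?
    Solve the LP feasibility problem: find weights >= 0 that
    sum to 1 with weighted samples equal to the point."""
    n, d = samples.shape
    A_eq = np.vstack([samples.T, np.ones(n)])   # (d+1, n)
    b_eq = np.concatenate([point, [1.0]])       # (d+1,)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0  # feasible => inside (or on) the hull

rng = np.random.default_rng(0)
n_train, n_test = 1000, 100
for d in [2, 5, 10, 20, 50]:
    train = rng.standard_normal((n_train, d))
    test = rng.standard_normal((n_test, d))
    inside = sum(in_convex_hull(p, train) for p in test)
    print(f"dim={d:3d}: {inside}/{n_test} test points inside the hull")
```

In low dimension almost every test point is inside the hull; by dimension 50 essentially none are, even with 1000 training samples - which is the sense in which "interpolation" stops being literal.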
LLMs aren't currently as good at reasoning as humans are, but figuring out language rules does feel within their wheelhouse. Granted, our architecture choices, like attention, do convey some priors that hold for human languages but might not hold for alien ones, so we shouldn't expect models to pick up alien languages as well as they pick up human ones - but the same can be said of our own brains.
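One concrete example of such a prior (or rather, the absence of one): plain self-attention has no built-in notion of word order - it's permutation-equivariant - and real transformers get order sensitivity from positional encodings, a design choice made with human language in mind. A toy numpy sketch of a single head, no masking, all sizes arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 8, 5  # embedding width and sequence length (arbitrary)

# Random projection matrices for one attention head.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def self_attention(X):
    """Scaled dot-product self-attention, no positional encoding."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

X = rng.standard_normal((T, d))  # a toy "sentence" of T token vectors
perm = rng.permutation(T)

# Permuting the input tokens permutes the outputs identically:
# bare attention treats the sequence as a set.
print(np.allclose(self_attention(X[perm]), self_attention(X)[perm]))  # True
```

So whatever order-sensitivity a transformer has comes from encodings we bolted on, tuned to the statistics of human language; an alien language might call for different priors entirely.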