Hacker News | glomgril's comments

Check out this recent benchmark MTOB (Machine Translation from One Book) -- relevant to your comment, though the book does have parallel passages so not exactly what you have in mind: https://arxiv.org/pdf/2309.16575

In the case of non-human communication, I know there has been some fairly well-motivated theorizing about the semantics of individual whale vocalizations. You could imagine a first pass at something like this if the meaning of (say) a couple dozen vocalizations could be characterized with a reasonable degree of confidence.

Super interesting domain that's ripe for some fresh perspectives imo. Feels like at this stage, all people can really do is throw stuff at the wall. The interesting part will begin when someone can get something to stick!

> that's basically a science-fiction babelfish or universal translator

Ten years ago I would have laughed at this notion, but today it doesn't feel that crazy.

I'd conjecture that over the next ten years, this general line of research will yield some non-obvious insights into the structure of non-human communication systems.

Increasingly feels like the sci-fi era has begun -- what a time to be alive.


Very cool. Got a silly sci-fi question for you. IIUC, with current technology it would take on the order of tens of thousands of years for a vessel to physically travel to the closest known Earth-like planet (correct me if I'm wrong).
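A back-of-envelope check of that figure, using numbers I'm supplying (Proxima Centauri b's distance and Voyager 1's speed; neither appears elsewhere in the thread):

```python
# Rough sanity check: Proxima Centauri b is ~4.24 light-years away;
# Voyager 1, the fastest craft we've sent on an escape trajectory,
# travels at ~17 km/s relative to the Sun.

LIGHT_YEAR_KM = 9.4607e12          # kilometres per light-year
distance_km = 4.24 * LIGHT_YEAR_KM
speed_km_s = 17.0                  # Voyager 1, approximately

seconds = distance_km / speed_km_s
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:,.0f} years")      # on the order of 75,000 years
```

So "tens of thousands of years" is about right at current chemical-propulsion speeds.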

So any thoughts on what kinds of hypothetical breakthroughs would be needed to make the trip doable in (say) less than a human lifetime?

And related, what do you think about the plausibility of the [Breakthrough Starshot](https://en.wikipedia.org/wiki/Breakthrough_Starshot) initiative? Aware of any alternative approaches?


A different stab at this is to ask what it would take to build a telescope that could image some of these Earth-like planets, a project that turns out to be easier (in a very loose sense of that word) than sending cameras there.

The idea is you send a camera very, very far out in the Solar System (hundreds of AU) and then use the Sun's gravity well as your lens. Neat stuff and, unlike the interstellar probes, potentially doable in our lifetime.

https://en.wikipedia.org/wiki/Solar_gravitational_lens


Normally, diffraction and the effective aperture are what limit optical resolution. How does that work with gravitational lensing? Does the effective aperture become the diameter of the sun?


I'm too ignorant to answer that, but the technical paper here [https://arxiv.org/pdf/2002.11871] goes into a wealth of detail, and includes an image of Earth as it would appear to such a telescope (before and after post-processing) from 30 parsecs away. The optical properties of the solar gravitational lens are pretty astonishing.


Self-replicating automata, as described by von Neumann: machines able to repair and duplicate themselves and other things like electronic components. ICs keep getting faster (so far), but they rely on ever-smaller silicon features, can wear out from metal migration, and every component would face much more cosmic radiation than on Earth. That argues for a large shield of heavy material on the front of the vehicle to minimize the effect, but the shield increases the energy/fuel needed. The Space Shuttle only took roughly week-long trips, yet it had four redundant flight-control computers housed in different parts of the orbiter, along with IIRC a separate fifth backup computer for use as a last resort.


* Research faster interstellar travel, especially something like a Buzzard engine that uses interstellar hydrogen as reaction mass. This requires nuclear fusion power plants / engines and ridiculously strong magnetic fields; both seem attainable.

* Slow down human body metabolism and allow humans to stay asleep at near-freezing temperatures for a long time. If bears and chipmunks can do it, chances are humans could learn it, too.

* Invent sets of machines that can reliably self-replicate, given most basic inputs like minerals, water, and sunlight. Advanced semiconductors are going to be the tricky part.

* Study psychology, sociology, history, game theory, etc., so that the early society forming on the new planet, isolated from Earth, avoids at least some of the pitfalls that plagued human history on its home planet.


> Buzzard

That's a bird, the engine is named after a person and is spelled differently:

https://en.wikipedia.org/wiki/Bussard_ramjet

Also, it won't work unless scaled up to something only a Kardashev type II civilization could build (a scoop roughly 4000 km in diameter), and at that level you've got better options, so they probably wouldn't bother:

https://arstechnica.com/science/2022/01/study-1960-ramjet-de...


The reason being the interstellar medium is way less dense than we thought in 1960.


>Slow down human body metabolism and allow humans to stay asleep at near-freezing temperatures for a long time. If bears and chipmunks can do it, chances are humans could learn it, too.

The thing is, our current bodies can't survive in space for long. So either we somehow build new bodies for ourselves, or we build a ship with gravity inside and protection from space outside (and we are talking about very heavy protection here).

In any other case there is no point in slowing down metabolism or whatever. You will die rather soon.


With a big enough ship, pseudo-gravity can be easily produced by rotation, especially if we expect the crew to spend 95% of time asleep.
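For scale, here is a quick sketch of the spin-gravity math. The numbers are mine, not the commenter's; the ~2 rpm comfort limit is a commonly cited assumption:

```python
import math

g = 9.81      # target pseudo-gravity in m/s^2 (Earth-normal)
rpm = 2.0     # assumed comfort limit for rotation rate

# Centripetal acceleration: a = omega^2 * r, so r = g / omega^2
omega = rpm * 2 * math.pi / 60      # angular speed in rad/s
radius = g / omega**2
print(f"radius ≈ {radius:.0f} m")   # ≈ 224 m
```

So "big enough" here means a rotating section a few hundred metres in radius, which is large but not absurd for a generation-scale ship.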


Time dilation means that the closer you get to the speed of light the less time you experience passing. So even a 12000 year long journey as seen from earth, if moving fast enough, could feel to the travelers like a much shorter amount of time.


Yes, but practically, with today's technology there is no feasible way to reach a speed where time dilation matters over that distance. We'd run out of fuel, so we'd need some external power source like a laser or the solar wind, which come with their own issues. And the effect kicks in slowly: you only get about 2x time dilation at roughly 0.87c. That's a lot of acceleration.
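To make that concrete, a minimal sketch of the standard Lorentz-factor calculation (my numbers; the 4.24 light-year figure is the distance to Proxima Centauri, which I'm supplying as an example):

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Time-dilation factor gamma = 1/sqrt(1 - (v/c)^2) for beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

# gamma = 2 (ship clocks at half rate) requires v ≈ 0.866c
print(round(lorentz_gamma(0.866), 2))   # ≈ 2.0
print(round(lorentz_gamma(0.9), 2))     # ≈ 2.29

# A 4.24 light-year trip at a constant 0.9c:
earth_years = 4.24 / 0.9                        # ~4.7 years on Earth
ship_years = earth_years / lorentz_gamma(0.9)   # ~2.1 years on board
```

The factor grows very slowly until you're well past 0.8c, which is why it's irrelevant for any propulsion we can currently build.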


We need to think about where we want the knowledge to be and what carries it. We could use humanoid AIs. We could hatch humans "just in time": run them in a sim to age 18, then release them on their mission. Ethics would need to accept this. Or maybe we'd be happy slowly expanding across the universe, with a descendant talking to 'the aliens'.

I am not totally serious. But you wanna meet aliens? Gotta do something a bit radical.


If you haven't, you should read Accelerando; IIRC it's a collection of short stories that the author later assembled into a novel. I didn't want to spoil it, but that idea is in there. :)


If I may suggest another read: Perfect Imperfection by Polish author Jacek Dukaj. It's definitely weirder than Accelerando, as the book drops you straight into the last stretch of the evolutionary curve, but it's definitely worth reading if you liked Accelerando.

The story is super weird, but piecing together a picture of a far-future society from it was very exciting.


My one-sentence review of Accelerando is "VASTLY better than the first couple chapters will make you believe."


I'm reading Accelerando right now and there's some unnecessary weird sex stuff at the start.

Good book despite that though, some very interesting ideas.


I imagine fuel isn't that big a problem until you care about being able to decelerate once you arrive.


And we don't have to send people; we could do our job as a Von Neumann probe and send frozen RNA to distribute across the surface.


In space culture this is widely considered a dick move.


and in that 10,000 year blink, a civilization progresses from bronze metalworking to digital computers, awaiting our arrival


Can't pulse nuclear get there? Or does it require antimatter catalyzed fission?


looks like it's there now


Models like this are experimentally pretrained or tuned hundreds of times over many months to optimize the datamix, hyperparams, architecture, etc. When they say "ran parallel trainings" they are probably referring to parity tests that were performed along the way (possibly also for the final training runs). Different hardware means different lower-level libraries, which can introduce unanticipated differences. Good to know what they are so they can be ironed out.

Part of it could also be that they'd prefer to move all operations to the in-house trn chips, but don't have full confidence in the hardware yet.

Def ambiguous though. In general reporting of infra characteristics for LLM training is left pretty vague in most reports I've seen.


He is coming from the perspective of a long-running debate on symbolic versus statistical/data-driven approaches to modeling language structure and use. It seems in recent years he has had trouble coming to terms with the fact that at least for real-world applications of language technology, the statistical approach has simply won the war (or at worst, forms the core foundation on top of which symbolic approaches can have some utility).

I come from the same academic tradition, and have colleagues in common with him. He has been advocating for a quasi-Chomskyan perspective on language science for many years -- as have many others working at the intersection of linguistics and psychology/cog sci.

TBH I suspect he himself is a large part of his target audience. A lot of older school academics raised in the symbolic tradition are pretty unsettled by the incredible achievements of the data-driven approach.

Personally I saw the writing on the wall years ago and have transitioned to working in statistical NLP (or "AI" I suppose). Feeling pretty good about that decision these days.

FWIW I do think symbolic approaches will start to shine in the next several years, as a way to control the behavior of modern statistical LMs. But doubtful they will ever produce anything comparable to current systems without a strong base model trained on troves of data.

edit: Worth noting that Marcus has produced plenty of high-quality research in his career. I think his main problem here is that he seems to believe that AI systems should function analogously to how human language/cognition functions. But from an engineering/product perspective, how a system works is just not that important compared to how well it works. There's probably a performance ceiling for purely statistical models, and it seems likely that some form of symbolic machinery can raise that ceiling a bit. Techniques that work will eventually make their way into products, no matter which intellectual tradition they come from. But framing things in this way is just not his style.


Savor it while you can. As a former academic, for me the lack of intrinsic motivation to "create value for shareholders" is the hardest part of working in industry.


As painful as it can be at times, it is a truly beautiful phase of life during which your main obligations are to become an expert in something that interests you and to make enough money to not starve and have a place to live. If you are single, coming directly from the "broke college student" lifestyle, and end up at a university with a good stipend, it won't even feel like you are "poor" and the money is mostly enough. But the life of a grad student in a large public university can come with much more financial instability and heavier teaching loads from day one, with less time for slacking off and letting ideas marinate. Less so if you are in a field/have an advisor with good/consistent funding. The devil is in the details.

Wouldn't change it for the world though, and anecdotally most people I know who ended up finishing the PhD feel the same way.

Main shortcoming of the (American) grad school experience imo is lack of preparation to join the corporate workforce (in my field, there are easily >10x as many graduating PhDs each year as available university jobs). Academia has done a terrible job preparing grad students for the harsh reality of a non-academic career. Keeping this in mind throughout grad school helps a lot -- you can see the difference in non-academic career trajectory between people who had a backup plan and those who didn't.


> But the life of a grad student in a large public university can come with much more financial instability and heavier teaching loads from day one

Depends on whether you won federal grants, though most of those end up going to PhD students from the Ivies and Stanford (sadly?).


This is just brilliant. Brings back memories, some fond others less so. Only addition I'd suggest is a subplot involving teaching/TAing duties and/or money problems.

Good to be occasionally reminded that slacking off is a legitimately important part of the scientific process. Wish this view was more popular in the industry.


That's an insane story. As much as I hate flying, modern aviation infrastructure is one of mankind's most impressive feats.


For those interested, there is a podcast about this book and some cases it's been relevant in: https://podcasts.apple.com/us/podcast/hit-man/id1449636432

