
> Rather - we had fundamental strong reasoning almost flawless rigorous (formally documented) capabilities from the get go.

What are you talking about? This doesn't describe the vast majority of human knowledge.

You're dismissing the possibility of new technology via a really loose analogy and insulting anyone who disagrees. This isn't what good reasoning looks like.

GANs already exist. The linked project already exists.

> A model will generate its own input and will watch its own output and, in the process, will become more intelligent than it really is.

If you sub out the needless contradiction of "more intelligent than it really is" for "more intelligent than it was", then this is something that already happens. It will continue to happen whether you believe in it or not.
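To be concrete about what "already happens" means: the usual setup is a generate-filter-train loop, where the model samples candidates, an external check keeps the good ones, and the model is updated on those. A rough sketch of that shape, with every step stubbed out (none of this is the linked project's code):

    import random

    def generate(model):
        # Stand-in for sampling from the model; a real loop would call an LLM here.
        return model["skill"] + random.uniform(-1, 1)

    def passes_check(candidate):
        # Stand-in for the external filter: unit tests, a verifier, a reward model, etc.
        return candidate > 0.5

    def update(model, accepted):
        # Stand-in for fine-tuning on the accepted samples.
        model["skill"] += 0.05 * len(accepted)
        return model

    model = {"skill": 0.0}
    for step in range(20):
        candidates = [generate(model) for _ in range(8)]
        accepted = [c for c in candidates if passes_check(c)]
        model = update(model, accepted)

The point of the sketch is the filter: the loop only moves if something outside the raw generations decides what gets kept.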

You're acting like you're arguing against people with sketches of perpetual motion devices. You're not. This is like arguing against heavier-than-air flight in 1905. You are way behind.




> What are you talking about? This doesn't describe the vast majority of human knowledge.

I hope you don't mean to say that the vast majority of knowledge is devoid of any consistent reasoning and is just hallucinated along the way, the way an LLM does it.

> It will continue to happen whether you believe in it or not.

We'll see. I believe that, post-transformers, the next AI winter is around the corner - for a while.

What I see above is an LLM chewing its own lightly modified output as input, and that's not going to lead anywhere, as the README of the project itself clearly notes: it gets stuck.
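To illustrate the "gets stuck" failure mode: if the loop is just output -> light rewrite -> input, with no external signal deciding what to keep, it collapses to a fixed point or a short cycle. A toy sketch (the rewrite function here is a stand-in of my own, not anything from the project's README):

    def rewrite(text):
        # Stand-in for the model lightly modifying its own output.
        return text.lower().strip()

    seen = set()
    text = "Some seed prompt"
    for step in range(100):
        text = rewrite(text)
        if text in seen:  # revisited an earlier state: the loop is stuck
            print(f"stuck after {step + 1} steps")
            break
        seen.add(text)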


It's certainly not "almost flawless rigorous (formally documented) capabilities".

> We'll see.

I guess people could just... stop. Seems unlikely. Again, you're saying that something already happening is impossible.



