
We’ve seen models improve for years now too. How many iterations are required for one to inductively reason about the future?



How many days does it take before the turkey realizes it's going to get its head cut off on its first Thanksgiving?

Less glibly, I think models will follow the same sigmoid curve as everything else we've developed: at some point improvement will start to taper off, and the effort required to achieve better results will grow exponentially.

I look at these models as a lossy compression algorithm with elegant query and reconstruction. Think of a JPEG quality slider: over the first 75% of the slider the quality is okay and the file size barely changes, so small deltas in size yield big wins in quality. And, like an ML hallucination, the JPEG decompressor doesn't know which parts of the image it filled in versus got exactly right.

But to get from 80% to 100% you basically need all of the input data. There's going to be a Shannon's-law-style result that quantifies this relationship for ML, worked out by someone who (unlike me) knows what they're talking about. Maybe it already exists?
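The quality-vs-size relationship is easy to poke at directly. A minimal sketch (assuming Pillow is installed, and using a synthetic noise image as a stand-in for real content, so the exact numbers will differ from a real photo):

    # Sweep the JPEG quality setting and print the resulting file size.
    # The synthetic noise image is just a stand-in; the exact curve
    # depends heavily on the image content.
    import io
    from PIL import Image

    img = Image.effect_noise((512, 512), 20).convert("RGB")

    for quality in (10, 25, 50, 75, 85, 95, 100):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        print(f"quality={quality:3d}  size={len(buf.getvalue()) // 1024} KiB")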

These models will get better, yes, but only when they have access to Google's and Bing's full web indices.


I don't think access to Bing or Google solves this problem. Right now, there are many questions that the internet gives unclear answers to.

Try to find out via Google whether a plant is toxic to cats. Many times the results say both yes and no, and it's impossible to tell which one is true from the count of results on each side.

Feeding the models more garbage data will not make the results any better.


Quite. Look at Google offering a prize pot for 'forgetting'. Also, sorry, but it's typical engineer-think that this comes after the creation, like forever plastics or petroleum; for some reason, great engineers often seem to struggle with second- and third-order consequences, or believe externalities are someone else's problem. Perhaps if they had started with how to forget, they could have built the models from the ground up with this capability, rather than tacking it on afterwards once they realise the volume of bias and wrongness their models have ingested...


We watched Moore's law hold fast for 50 years before it started to hit a logarithmic ceiling. Assuming a long-term outcome in either direction based purely on historical trends is nothing more than a shot in the dark.


Then our understanding of the sun is just as much a shot in the dark (for it too will fizzle out and die some day). Moore's law was accurate for 50 years. The fact that it's tapered off doesn't invalidate the observations in their time; it just means things have changed and the curve is different than originally imagined.


While my best guess is that AI will improve, a common example against induction is a turkey's experience of being fed by a farmer every day, right up until Thanksgiving.


As a general guideline, I tend to believe that anything that has existed for X years will likely continue to exist for X more years.

It is obviously very approximate and will be wrong at some point, but there isn't much more to rely on.


> I tend to believe that anything that has existed for X years will likely continue to exist for X more years.

I, for one, salute my 160-year-old grandma.


With humans, there is a lot of information available on how long a normal lifespan is. After all, people die all the time.

But when you try to predict a one-off event, you need to use whatever information is available.

One very valid application of the principle above is to never make plans with your significant other that are further off in the future than the duration of the relationship. So if you have been together for two months, don't book your summer vacation with them in December.


May she go to 320.


420



