Hacker News

I have to say, having to tell it to ask me clarifying questions DOES make it really look smart!
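For anyone curious, the trick is nothing more than a standing instruction in the system prompt. A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0); the model name and the exact wording are placeholders, not anything specific to o1:

    # Sketch only: tell the model to ask clarifying questions before answering.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat model works for this trick
        messages=[
            {"role": "system",
             "content": "Before answering, ask me clarifying questions "
                        "until the requirements are unambiguous."},
            {"role": "user",
             "content": "Help me design a caching layer for my API."},
        ],
    )
    print(resp.choices[0].message.content)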



Imagine if you could make it keep going without having to reprompt it.


Isn't that the exact point of o1, that it has time to think for itself without reprompting?


Yeah, but they aren't letting you see the useful chain-of-thought reasoning that is crucial for training a good model. Everyone will replicate this over the next 6 months.


>Everyone will replicate this over the next 6 months

Not without a billion dollars' worth of compute, they won't.


Are you sure it's a billion? That would help with estimating the size of the training run.
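For a rough sense of what a billion dollars buys, here's a back-of-envelope sketch; every number in it (GPU price, utilization, model size) is an assumption, not a leaked figure:

    # Back-of-envelope training-run estimate; all inputs are assumptions.
    budget_usd = 1e9                          # the "billion dollars of compute" claim
    gpu_hour_usd = 2.0                        # assumed cloud price per H100-hour
    flops_per_gpu_hour = 1e15 * 0.4 * 3600    # ~1 PFLOP/s peak at ~40% utilization
    params = 1e12                             # assumed model size (1T parameters)

    total_flops = (budget_usd / gpu_hour_usd) * flops_per_gpu_hour
    # Common rule of thumb: training FLOPs ~ 6 * params * tokens
    tokens = total_flops / (6 * params)
    print(f"~{tokens:.2e} training tokens at that budget")

Under those assumptions it comes out to roughly 1e14 tokens, which is only meant to show how sensitive the answer is to the inputs, not what anyone actually spent.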



