
DeepSeek shows you the details of its reasoning, so you can trust its answers more and correct it when it takes a wrong turn.



o1 also does this.


o1 does not show the reasoning trace at this point. You may be confusing the final answer with the <think></think> reasoning trace in the middle; it's shown pretty clearly in r1.
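
For anyone who wants to separate the two programmatically, here's a minimal Python sketch. It assumes the raw completion text wraps the reasoning in literal <think></think> tags ahead of the answer, which is how r1's output reads; the helper name and the sample string are illustrative, not an official API:

    import re

    def split_reasoning(completion: str) -> tuple[str, str]:
        # Assumes the model emits its chain of thought inside a single
        # <think>...</think> block before the final answer, as r1 does.
        match = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
        if not match:
            return "", completion.strip()  # no trace present
        trace = match.group(1).strip()
        answer = completion[match.end():].strip()
        return trace, answer

    trace, answer = split_reasoning(
        "<think>First check what the user actually asked...</think>The answer is 42."
    )
    print(trace)   # the reasoning trace
    print(answer)  # "The answer is 42."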


Is this different data or different annotation?


I wasn't referring to the UI so much as to the fact that it does this at all. The thinking in DeepSeek trails off into its own nonsense before it answers, whereas I feel OpenAI's is far more structured.


All you get out of o1 is

    Reassessing directives

    Considering alternatives

    Exploring secondary and tertiary aspects

    Revising initial thoughts

    Confirming factual assertions

    Performing math

    Wasting electricity
... and other useless (and generally meaningless) placeholder updates. Nothing like what the <think> output from DeepSeek's model demonstrates.

As Karpathy (among others) has noted, the <think> output shows signs of genuine emergent behavior. Presumably the same thing is going on behind the scenes in OpenAI's o1 reasoning models, but we have no way of knowing, because they consider revealing the CoT output to be "unsafe."


o1 does not output the full CoT tokens, so the two are not comparable.



