Hacker News

It does not have access to the Excel app. You may be able to generate the .xlsx file using Python libraries, but you would need to run the Python code on your own. ChatGPT can run the code it generates, which is probably why it works there.


I don't expect it to have the Excel app, I expect it to run the code it is capable of generating.

This is what I mean by their strategy being a jumble. Claude can do the hard part of figuring out what code to write and writing it, but then refuses to do the easier part of executing the code.


The Claude web UI cannot generate binary files, it's (currently) restricted to plain text.

If you want binary files you'd be better off with ChatGPT with Code Interpreter mode, which can run Python code that generates binary content.

Or ask Claude to write you Python code that generates Excel files, then copy and paste it onto your own computer and run it yourself.
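That workflow looks something like this. In practice you'd ask Claude for code using a library like openpyxl, but as a minimal self-contained sketch (names like `write_minimal_xlsx` are my own, and the output is deliberately bare-bones), here's a standard-library-only version. It also shows why the "plain text only" restriction bites: an .xlsx file is just a zip archive of XML parts, so the finished artifact is binary even though every ingredient is text.

```python
# Minimal .xlsx writer using only the standard library, to illustrate
# the "Claude writes it, you run it" workflow. A real script would use
# openpyxl; this hand-rolls the zip-of-XML structure instead.
import zipfile

CONTENT_TYPES = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">'
    '<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>'
    '<Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/>'
    '<Override PartName="/xl/worksheets/sheet1.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml"/>'
    '</Types>'
)
ROOT_RELS = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
    '<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/>'
    '</Relationships>'
)
WORKBOOK = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" '
    'xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">'
    '<sheets><sheet name="Sheet1" sheetId="1" r:id="rId1"/></sheets></workbook>'
)
WORKBOOK_RELS = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
    '<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet" Target="worksheets/sheet1.xml"/>'
    '</Relationships>'
)

def write_minimal_xlsx(path, rows):
    """Write a one-sheet .xlsx where `rows` is a list of lists of strings."""
    body = []
    for r, row in enumerate(rows, start=1):
        cells = "".join(
            # inlineStr cells avoid needing a shared-strings part;
            # chr(64 + c) yields column letters A..Z, enough for a demo
            f'<c r="{chr(64 + c)}{r}" t="inlineStr"><is><t>{val}</t></is></c>'
            for c, val in enumerate(row, start=1)
        )
        body.append(f'<row r="{r}">{cells}</row>')
    sheet = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">'
        f'<sheetData>{"".join(body)}</sheetData></worksheet>'
    )
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("[Content_Types].xml", CONTENT_TYPES)
        z.writestr("_rels/.rels", ROOT_RELS)
        z.writestr("xl/workbook.xml", WORKBOOK)
        z.writestr("xl/_rels/workbook.xml.rels", WORKBOOK_RELS)
        z.writestr("xl/worksheets/sheet1.xml", sheet)

write_minimal_xlsx("demo.xlsx", [["name", "score"], ["alice", "3"]])
```

The point isn't to roll your own spreadsheet writer; it's that the hard part (knowing this structure, or knowing to reach for openpyxl) is exactly what the model is good at, and the run step is a single local command.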


> you'd be better off with ChatGPT with Code Interpreter mode

Yes, this is what I am saying. Why go to the trouble to build something as capable as Claude and then hamstring it from being as useful as ChatGPT? I have no doubt that Claude could be more useful if the Anthropic team would let it shine.


They've been investing engineering effort in Claude Artifacts instead, which I find incredibly useful: https://simonwillison.net/2024/Oct/21/claude-artifacts/

I'd love to see them produce their own Code Interpreter alternative, but in the meantime it's open for third parties to offer that (and a few do).


I have used Artifacts a couple of times and found them useful.

But now I am even more confused. They make an LLM that can generate code. They make a sandbox to run generated code. They will even host public(!) apps that run generated code.

But what they will not do is run code in the chatbot? Unless the chatbot context decides the code is worthy of going into an Artifact? This is kind of what I mean by the offering being jumbled.

BTW saw your writeup on the LLM pricing calculator -- very cool!


Yeah, I can't imagine Claude will be without a server-side code execution platform forever. Both OpenAI (Code Interpreter) and Gemini (https://ai.google.dev/gemini-api/docs/code-execution) have had that for a while now, and it's spectacularly useful. It fills a major gap in a chatbot's skill set too, since it lets the model reliably run calculations.

Sandboxing is a hard problem, but it's not like Anthropic are short on money or engineering talent these days.


See, I think this is a case of personal preference. I much prefer Claude's approach of figuring out the code and writing it for me to execute myself, rather than having it all in one box. Apart from anything else, it makes me think a little more about the process, and the desired outcome, rather than just iterate, click, iterate, click.

It's marginally less efficient, for sure, but it gives me greater visibility into the process, and more confidence that whatever it's doing is what I want it to do.

But maybe that's some weird luddite-ism on my part, and I should just embrace an even blacker box where everything is done in the tool.


YMMV obviously. If I ask the magic box to make a spreadsheet, I don't need to see the Python for that any more than I need to see the code it uses to summarize something I paste in. I don't really even care that it has to write code to make the spreadsheet at all.



