Claude Skills is bad, mate; it needs to be put to bed. The approach this guy is talking about is better because it's open and explicit. https://sibylline.dev/articles/2025-10-20-claude-skills-cons...




That article appears to be very confused about how skills work.

It seems to think they are a vendor lockin play by Anthropic, running as an opaque black box.

To rebut their four complaints:

1. "Migrating away from Anthropic in the future wouldn't just mean swapping an API client; it would mean manually reconstructing your entire Skills system elsewhere." - that's just not true. Any LLM tool that can access a filesystem can use your skills, you just need to tell it to do so! The author advocates for creating your own hierarchy of READMEs, but that's identical to how skills work already.
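To make the "it's just files" point concrete: a skill is a folder containing a SKILL.md file with a short frontmatter header and a markdown body. A minimal sketch (the frontmatter field names follow Anthropic's published skills format; the skill itself is a made-up example):

```markdown
---
name: csv-cleanup
description: Use when the user asks to deduplicate or normalize CSV files.
---

# CSV cleanup

1. Read the file with a proper CSV parser, not ad-hoc string splitting.
2. Normalize headers to snake_case.
3. Drop exact-duplicate rows, preserving the first occurrence.
```

Any agent that can read files can be pointed at a directory of these, which is exactly the README-hierarchy pattern the article proposes.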

2. "There's no record of the selection process, making the system inherently un-trustworthy." - you can see exactly when a skill was selected by looking in the tool logs for a Read(path/to/SKILL.md) call.

3. "This documentation is no longer readily accessible to humans or other AI agents outside the Claude API." - it's markdown files on disk! Hard to imagine how it could be more accessible to humans and other AI agents.

4. "You cannot optimize the prompt that selects Skills. You are entirely at the mercy of Anthropic's hidden, proprietary logic." - skills are selected by prompting driven by the system prompt. Your CLAUDE.md file is injected into that same system prompt. You can influence that as much as you like.

There's no deep, proprietary magic to skills. I frequently use claude-trace to dig around in Claude Code internals: https://simonwillison.net/2025/Jun/2/claude-trace/ - here's an example from last night: https://simonwillison.net/2025/Oct/24/claude-code-docs-map/

The closing section of that article reveals where the author got confused. They said: "Theoretically, we're losing potentially "free" server-side skill selection that Anthropic provides."

Skills are selected by Claude Code running on the client. They seem to think it's a model feature proprietary to Anthropic - it's not, it's just another simple prompting hack.

That's why I like skills! They're a pattern that works with any AI agent already. Anthropic merely gave a name to the exact same pattern that this author calls "Agent-Agnostic Documentation" and advocates for instead.


This isn't an accurate take either. You can load client skills, sure, but the whole selling point from Anthropic is that skills are like memory, managed at the API layer "transparently", and that those skills are cross-project. If you forced Claude to only use skills from the current project to try and be transparent, it would still be a black box that makes it harder to debug failures, and it'd still be agent- and vendor-specific rather than human-friendly and vendor-agnostic.

What do you mean by "managed at the API layer"?

I'm talking about skills as they are used in Claude Code running on my laptop. Are you talking about skills as they are used by the https://claude.ai consumer app?


Skills are just lazy-loaded prompts. The system prompt includes the summary of every skill, and if the LLM decides it wants to use a specific skill, the details of that skill are added to the system prompt. It's just markdown documents all the way down.
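The lazy-loading pattern described above can be sketched in a few lines. This is an illustration of the idea, not Claude Code's actual implementation; it assumes each skill lives at `<skills_dir>/<name>/SKILL.md` with a `description:` line in its frontmatter:

```python
import pathlib

def skill_summaries(skills_dir: str) -> str:
    """Build the always-present system-prompt section: one line per skill."""
    lines = []
    for path in sorted(pathlib.Path(skills_dir).glob("*/SKILL.md")):
        description = ""
        for line in path.read_text().splitlines():
            # Naive frontmatter scan: take the first "description:" line.
            if line.startswith("description:"):
                description = line.split(":", 1)[1].strip()
                break
        lines.append(f"- {path.parent.name}: {description} ({path})")
    return "Available skills:\n" + "\n".join(lines)

def load_skill(skills_dir: str, name: str) -> str:
    """Only when the model asks for a skill is its full body read from disk."""
    return (pathlib.Path(skills_dir) / name / "SKILL.md").read_text()
```

The summaries go into every request; a full SKILL.md body only enters the context once the model chooses that skill, which is why the token cost stays low.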


