
comparing kubernetes to what amounts to a subdirectory of shell scripts and their man pages is... brave?




Shell scripts written by nearly every product company out there.

There are lots of small and niche projects under the Linux Foundation. What matters for MCP right now is the vendor neutrality.


Are you saying nearly every product company uses MCP? What a stretch

Welcome to the era of complex relationships with the truth. People comparing MCP to k8s is only the beginning.

Truth Has Died

Lemme ask an AI to double check that vibe.

I'd say this thread is both comparing and contrasting them...

Quaint. People 1%, AI 99%.

I meant to say every enterprise product

It doesn't matter, because only a minority of product companies worldwide (enterprise or not) use MCP. I'd bet only a minority use LLMs in general.

Oh so is that "truth" or "vibes" as the sibling comments are laughing about?

No, it's just another statement with no sources, just like yours :)

For what it's worth, I don't write MCP servers that are shell scripts. Mine are HTTP servers that load data from a database. It's nothing much more exciting than a REST API with an MCP front end thrown on top.
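To make "an MCP front end thrown on top" concrete: MCP speaks JSON-RPC 2.0, so at the wire level a tool invocation is roughly the request/response pair sketched below. The tool name, its arguments, and the fake database lookup are all invented for illustration; a real server would use an MCP SDK rather than hand-rolled dicts.

```python
import json

# Hypothetical MCP "tools/call" request, as a client would send it.
# MCP is JSON-RPC 2.0; "lookup_order" and its arguments are made up here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_order",
        "arguments": {"order_id": "A-1042"},
    },
}

def handle(req: dict) -> dict:
    # The handler is ordinary application code -- e.g. a database query --
    # whose output is wrapped as MCP "content" blocks in the JSON-RPC result.
    args = req["params"]["arguments"]
    row = {"order_id": args["order_id"], "status": "shipped"}  # stand-in for a DB lookup
    return {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": json.dumps(row)}]},
    }

response = handle(request)
print(response["result"]["content"][0]["text"])
```

The point being: the MCP layer is thin. The actual work is the same code a REST endpoint would run; MCP just fixes the envelope.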

Many people only use local MCP resources, which is fine... it provides access to your specific environment.

For me, however, it's been great to have a remote MCP HTTP server that responds to requests from more than just me. Or to make the entire chat server (with pre-configured remote MCP servers) accessible to a wider (company-internal) audience.


Honest question: Claude can understand and call REST APIs given their docs, so what's the added value? Why should anyone wrap a REST API in another layer? What does it unlock?

I have a service that other users access through a web interface. It uses an on-premises open model (gpt-oss-120b) for the LLM and a dozen MCP tools to access a private database. The service is accessible from a web browser, but this isn’t something where the users need the ability to access the MCP tools or model directly. I have a pretty custom system prompt and MCP tool definitions that guide their interactions. Think of a helpdesk chatbot with access to a backend database. This isn’t something that would be accessed with a desktop LLM client like Claude. The only standards I can really count on are MCP and the OpenAI-compatible chat completions API.

I personally don’t think of MCP servers as having more utility than local services that individuals use with a local Claude/ChatGPT/etc client. If you are only using local resources, then MCP is just extra overhead. If your LLM can call a REST service directly, it’s extra overhead.

Where I really see the benefit is when building hosted services or agents that users access remotely. Think remote servers more than local clients, or something a company might use for a production service. For this use case, MCP servers are great. I like having a set protocol that I know my LLMs will be able to call correctly. I’m not able to monitor every chat (nor would I want to) to help users troubleshoot when the model didn’t call the external tool correctly. I’m not a big fan of the protocol itself, but it’s nice to have some kind of standard.

The short answer: not everyone is using Claude locally. There are different requirements for hosted services.

(Note: I don’t have anything against Claude, but my $WORK only has agreements with Google and OpenAI for remote access to LLMs. $WORK also hosts a number of open models for strictly on-prem work. That’s what guided my choices…)
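Since the two standards named above are MCP and OpenAI-compatible chat completions, the glue between them is worth sketching. Chat completions endpoints expect tools as JSON Schema under a `function` wrapper, while MCP tool definitions carry their schema under `inputSchema`; a minimal (and simplified) bridge might look like this. The tool name and schema are invented, and a real mapping has more edge cases:

```python
# Hypothetical MCP tool definition; "search_tickets" is invented for illustration.
# MCP describes tool parameters as JSON Schema under "inputSchema".
mcp_tool = {
    "name": "search_tickets",
    "description": "Search the helpdesk ticket database.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def to_chat_completions_tool(tool: dict) -> dict:
    # Re-wrap the same JSON Schema in the shape OpenAI-compatible
    # chat completions endpoints expect under "tools".
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool["inputSchema"],
        },
    }

converted = to_chat_completions_tool(mcp_tool)
print(converted["function"]["name"])
```

This is the kind of adapter a hosted service ends up writing once, after which any MCP server's tools can be offered to any OpenAI-compatible model.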


Gatekeeping (in a good way) and security. I use Claude Code in the way you described but I also understand why you wouldn’t want Claude to have this level of access in production.

Ironically, models are sometimes more adept at calling REST or web APIs in general, because those make up a huge part of their training data.


