Hacker News | comments

This article feels like an old timer who knows WebSockets and just doesn't want to learn what SSE is. I support the decision to ditch WebSockets, because WebSockets would only add extra bloat and complexity to your server, whereas SSE is just HTTP. I don't understand, though, why there's a "stdio" transport when you could just run an HTTP server locally.
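To make the "SSE is just HTTP" point concrete, here's a minimal sketch of an SSE endpoint in Node (illustration only, not MCP's actual transport code): it's an ordinary HTTP response with `Content-Type: text/event-stream` and messages framed as `data: ...\n\n`.

```typescript
import http from "node:http";

// SSE really is "just HTTP": a long-lived response with
// Content-Type: text/event-stream, and each message framed as "data: ...\n\n".
const server = http.createServer((_req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });
  res.write("data: hello\n\n"); // one event; a browser EventSource fires onmessage with "hello"
  res.end();                    // a real server would keep the stream open
});
```

Any plain HTTP client can consume this; no upgrade handshake or separate protocol like WebSockets requires.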





I'm confused by the "old timer" comment, as SSE not only predates WebSockets, but the techniques surrounding its usage go really far back (I was doing SSE-like things--using script blocks to get incrementally-parsed data--back around 1999). If anything, I could see the opposite issue, where someone could argue that the spec was written by someone who just doesn't want to learn how WebSockets works, and is stuck in a world where SSE is the only viable means to implement this? And like, in addition to the complaints from this author, I'll note the session resume feature clearly doesn't work in all cases (as you can't get the session resume token until after you successfully get responses).

That all said, the real underlying mistake here isn't the choice of SSE... it is trying to use JSON-RPC -- a protocol which very explicitly and very proudly is supposed to be stateless -- and to then use it in a way that is stateful in a ton of ways that aren't ignorable, which in turn causes all of this other insanity. If they had correctly factored out the state and not incorrectly attempted to pretend JSON-RPC was capable of state (which might have been more obvious if they used an off-the-shelf JSON-RPC library in their initial implementation, which clearly isn't ever going to be possible with what they threw together), they wouldn't have caused any of this mess, and the question about the transport wouldn't even be an issue.
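For readers unfamiliar with why JSON-RPC is called stateless: each request is self-contained and responses are correlated only by `id`, so any session state has to live outside the protocol. A toy dispatcher (illustration only; the type and method names here are made up, not MCP's actual schema) shows the shape:

```typescript
// Toy JSON-RPC 2.0 dispatcher. Every request carries everything the server
// needs; the response is matched back purely by "id". Nothing in the protocol
// itself remembers anything between calls.
type RpcRequest = { jsonrpc: "2.0"; id: number; method: string; params?: unknown[] };
type RpcResponse =
  | { jsonrpc: "2.0"; id: number; result: unknown }
  | { jsonrpc: "2.0"; id: number; error: { code: number; message: string } };

function handle(req: RpcRequest): RpcResponse {
  if (req.method === "add") {
    const [a, b] = (req.params ?? []) as number[];
    return { jsonrpc: "2.0", id: req.id, result: a + b };
  }
  // -32601 is the standard "method not found" code from the JSON-RPC 2.0 spec.
  return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Method not found" } };
}
```

Layering session resumption and server-initiated messages on top of this request/response shape is exactly the kind of statefulness the spec has to bolt on elsewhere.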


> I'm confused by the "old timer" comment

SSE only gained traction after HTTP/2 came around with multiplexing: under HTTP/1.1, browsers cap concurrent connections per origin (typically six), so every long-lived SSE stream ate into that small budget.


HTTP calls may be blocked by firewalls even internally, and it's overkill to force stdio apps to expose HTTP endpoints for this case alone.

As in, how can an MCP client access the `git` command without stdio? You could run a wrapper server for that, or just use stdio instead.
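The stdio transport amounts to spawning the server as a child process and exchanging newline-delimited JSON-RPC over its stdin/stdout, with no port and no firewall rules involved. A rough sketch (a trivial echo process stands in for a real MCP server here; the `tools/list` payload is just an example):

```typescript
import { spawn } from "node:child_process";

// stdio transport sketch: the client spawns the server and speaks
// newline-delimited JSON-RPC over the child's stdin/stdout.
// This echo process is a stand-in for a real MCP server binary.
const child = spawn(process.execPath, ["-e", "process.stdin.pipe(process.stdout)"]);

child.stdout.once("data", (chunk) => {
  const reply = JSON.parse(chunk.toString()); // the echo sends our request straight back
  console.log(reply.method);
  child.kill();
});
child.stdin.write(JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }) + "\n");
```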


> As in, how can an MCP client access the `git` command without stdio?

MCP clients don't access any commands. MCP clients access tools that MCP servers expose.


Here's a custom MCP tool I use to run commands and parse stdout / stderr all the time:

    import { exec } from "node:child_process";
    import { promisify } from "node:util";

    const execPromise = promisify(exec);

    try {
        // Run the command and capture both output streams.
        const { stdout, stderr } = await execPromise(command);
        if (stderr) {
            // Treat anything on stderr as a failure and surface it to the model.
            return {
                content: [{
                    type: "text",
                    text: `Error: ${stderr}`
                }],
                isError: true
            };
        }
        return {
            content: [{
                type: "text",
                text: stdout
            }],
            isError: false
        };
    } catch (error: any) {
        // execPromise rejects on spawn failures and non-zero exit codes.
        return {
            content: [{
                type: "text",
                text: `Error executing command: ${error.message}`
            }],
            isError: true
        };
    }
Yeah, if you want to be super technical, it's Node that does the actual command running, but in my opinion, that's as good as saying the MCP client is...

The point is that MCP servers expose tools that can do whatever MCP servers want them to do, and it doesn’t have to have anything to do with stdio.


