Hacker News

I am scared that this is vibe coded and not audited in any way. tsnet is good software, but wrapping it in this way is a recipe for disaster. Please reconsider.





I agree and had the same thought. Tailscale SSH is good and I was interested in something like this, but absolutely not if it's AI-generated garbage.

> I am scared that this is vibe coded

Totally serious question: would you feel better about this piece of software if you didn't know that it was vibe coded?

Do we need "build without AI" stickers on every piece of software created these days?


I looked at the code and the documentation, and it's definitely vibe coded. Also, the presence of CLAUDE.md is pretty telling. I have no issue with vibe coding in general, but I am skeptical of the usefulness of LLMs with security code.

Yes, I think projects that are coded wholly or in part by LLMs should be noted as such.


Why would you trust a random person's project any more than an AI project? I'd say the vast majority of the population is vastly less skilled than Claude Code.

I.e., just because it's written by a human doesn't mean it's any more secure.


[flagged]


I don't really care if "AI assistance" was used as long as a human is actually reviewing the output, which just doesn't seem to be the case here (and usually isn't the case when it comes to "vibe coding").

I feel fine if AI was used to add features to established software. Let it loose on the Linux kernel for all I care. It still somehow feels icky to use it to build something from scratch.

Ironically, it wouldn't be very useful for Linux kernel development (it would be very hard to fit it in context), while it is more suitable for new projects written from scratch.

That's of course not considering the quality or anything else.


Somewhat off-topic question, but I ask this from time to time, and maybe now is that time. Has AI started fixing everyone's software bugs and closing out all the CVEs yet?

I saw a YouTube video recently of a few CVEs being closed out by some automated AI bot they were talking about; I can't remember the channel, sorry :(

That said, CodeRabbit's pretty damn impressive for just generally reviewing PRs and catching shit for human review.


No one is against using AI or coding with agents, unless you don't understand what it's doing and you're incapable of reviewing the output. The problem isn't the tool; it's "coders" who unthinkingly trust it without verification.

Can you explain what the possible risks are?




