I watched the most recent Avatar and it was some HDR variant that had this effect turned up. It definitely dampens the experience. There's something about that slightly fuzzed movement that just makes things on screen look better.

From what I heard, the action scenes are shot in 48 fps and the others in 24 fps, or something along those lines. You might be talking about that?

This is a valuable lesson I learned from someone I worked with, not at Elastic, but who had previously worked at Elastic. Elastic was one of the original companies to make the FOSS-with-enterprise-licensing model work well. We were in a meeting at this place discussing how to design license checking into the product.

What he said I found very insightful: you don't really need to spend a bunch of time and effort creating sophisticated license checks. You just need perhaps a single phone call to a server, or something else that can be trivially defeated by anyone with a reasonable amount of technical knowledge. Why? Because the people who would defeat it are the kind of people who make horrible enterprise customers anyway. So in a way it's just like a cheap lock: it won't stop anyone determined, because it's not designed to. It's designed to keep already honest people honest.


I did something that was almost the same. Used to work for an educational software company that almost solely sold to schools, universities, and government institutions. Sometimes to corporate learning centers. Every sale was on a per-seat basis.

Every single customer we had wanted to stay legal. They didn't want to exceed their seats or do anything that would violate their sales agreement. In the case of our government clients, such violations could expose them to legal penalties from their employer.

Despite having an unusually honest customer base, the company insisted on horridly strict and intrusive DRM. Even to the point of using dongles for a time. It frequently broke. Sometimes we had to send techs out to the schools to fix it.

I ended up just ripping all of that out and replacing it with a simple DLL on the Windows client. It talked to a tiny app on the server side, which used a barely encrypted little database holding two numbers: seats in use and total seats available. If for some reason the DLL couldn't make contact with the server, it would just launch the software anyway. No one would be locked out due to the DRM failing or because the creaky school networks were on the blink again.
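
A rough sketch of the fail-open idea in Python (the real thing was a Windows DLL talking to a server; the endpoint, field names, and function name here are made up purely for illustration):

  import json
  import urllib.request

  # Hypothetical endpoint; the real system was a barely encrypted
  # database sitting behind a tiny server-side app.
  LICENSE_SERVER = "http://license.example.internal/seats"

  def may_launch(timeout=3):
      # Fail open: any network or parsing error means "launch anyway",
      # so a broken check never locks out an honest customer.
      try:
          with urllib.request.urlopen(LICENSE_SERVER, timeout=timeout) as resp:
              data = json.loads(resp.read())
          return data["seats_in_use"] < data["total_seats"]
      except Exception:
          return True

  if __name__ == "__main__":
      print("launch allowed:", may_launch())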

This system could have been cracked in five seconds by just about anyone. But it didn't matter since we knew everyone involved was trying to be honest.

Saved a massive amount of time and money. Support calls dropped enormously. Customers were much happier. It's probably my weakest technical accomplishment, but it's still one of my proudest.


A totally understandable and even reasonable position, but the paying customer gets the worse treatment, which does not sit right.

I got hooks working pretty well for simpler things; a very common hello-world use case for hooks is running gitleaks on every edit. One of the use cases I worked on for quite a while was a hook that ran all unit tests before the agent could stop generating. This forces the LLM to fix any unit tests it broke, and I also enforce 80% unit test coverage in the same commit. It took a bit of finagling to get the hook to render results in a way that was actionable for the LLM, because if you block it and it doesn't know what to do, it will basically loop endlessly or try random things to escape.
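
A rough sketch of what that kind of stop hook could look like as a standalone script. It assumes pytest with the pytest-cov plugin, and a hook runner that treats exit code 2 as "block and show stderr to the agent"; check the Claude Code hooks docs for the exact contract:

  #!/usr/bin/env python3
  # Sketch: run tests and a coverage gate before the agent may stop.
  import subprocess
  import sys

  result = subprocess.run(
      ["pytest", "-q", "--cov", "--cov-fail-under=80"],
      capture_output=True,
      text=True,
  )
  if result.returncode != 0:
      # Keep the feedback short and actionable; dumping the whole
      # pytest log tends to send the agent off the rails.
      tail = "\n".join(result.stdout.splitlines()[-15:])
      sys.stderr.write(
          "Unit tests or the 80% coverage gate failed. Fix the failing "
          "tests and add coverage in this same commit, then stop again.\n"
          + tail + "\n"
      )
      sys.exit(2)  # block the stop; the agent keeps working
  sys.exit(0)      # tests and coverage pass; allow the agent to stop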

FWIW I think your approach is great. I had definitely thought about leveraging OPA in a mature way; I think this kind of thing is very appealing for platform engineers looking to scale AI codegen in enterprises.


Part of my initial pitch was to automate linting. Interesting insight on the stop loop; I've been wanting to explore that more. I think there is also a lot to be gained with llm-as-a-judge hooks (they do enable this today via `prompt` hooks).

I've had a lot of fun with random/creative hook use cases: https://github.com/backnotprop/plannotator

I don't think the team meant for the hooks to work with plan mode this way (it's not fully complete with the approve/allow payload), but it enabled me to build an interactive UX I really wanted.


I think the key thing you point out is worth observing more generically: if the LLM hits a wall, its first inkling is not to step back, understand why the wall exists, and then change course. Its first inkling is to continue assisting the user on its task by any means possible, so it will instead try to defeat the wall in any way it can. I see this all the time when it hits code coverage constraints; it would much rather just lower the thresholds than actually add more coverage.

I experimented with hooks a lot over the summer, the kind of deterministic hooks that run before commit, after tool call, after edit, etc., and I found they are much more effective if you are (unsurprisingly) able to craft and deliver a concise, helpful error message to the agent in the hook failure feedback. Even giving it a good howToFix string in the error return isn't enough: if you flood the response with too many of those at once, the agent will view the task as insurmountable and start seeking workarounds instead.
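
As a toy illustration, a hook that finds ten problems is better off surfacing the single most important one than all ten at once. The howToFix field name below just follows the comment above, not any official schema:

  # Sketch: collapse many hook findings into one concise, actionable
  # message instead of flooding the agent with everything at once.
  findings = [
      {"check": "coverage", "howToFix": "Coverage is 71%, gate is 80%; add tests for the files you just edited."},
      {"check": "lint", "howToFix": "Run the linter with autofix on the files you just edited."},
      # ...often many more
  ]

  def feedback(findings, limit=1):
      lines = [f"[{f['check']}] {f['howToFix']}" for f in findings[:limit]]
      hidden = len(findings) - limit
      if hidden > 0:
          lines.append(f"({hidden} more issues held back; fix the above first.)")
      return "\n".join(lines)

  print(feedback(findings))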


> ... if the LLM hits a wall, its first inkling is not to step back, understand why the wall exists, and then change course. Its first inkling is ...

LLMs do not "understand why." They do not have an "inkling."

Claiming they do is anthropomorphizing a statistical token (text) document generator algorithm.


The more concerning algorithms at play are in how the models are post-trained, and then the concern of reward hacking, which is what he was getting at: https://en.wikipedia.org/wiki/Reward_hacking

100% - we really shouldn't anthropomorphize. But the current models are capable of being trained in a way to steer agentic behavior from reasoned token generation.


> But the current models are capable of being trained in a way to steer agentic behavior from reasoned token generation.

This does not appear to be sufficient in the current state, as described in the project's README.md:

  Why This Exists

  We learned the hard way that instructions aren't enough to 
  keep AI agents in check. After Claude Code silently wiped 
  out hours of progress with a single rm -rf ~/ or git 
  checkout --, it became evident that "soft" rules in an 
  CLAUDE.md or AGENTS.md file cannot replace hard technical 
  constraints. The current approach is to use a dedicated 
  hook to programmatically prevent agents from running 
  destructive commands.

Perhaps one day this category of plugin will not be needed. Until then, I would be hard-pressed to employ an LLM-based product having destructive filesystem capabilities based solely on the hope of them "being trained in a way to steer agentic behavior from reasoned token generation."

I wasn’t able to get my point across, but I completely agree.

There’s a reason they call working at Google “shuffling protobufs” for the vast majority of engineers. Most software work isn’t innovative compression algorithms; it’s moving data around, which is a well-understood problem.

It’s worth pointing out that it can be both. The hub-and-spoke model (relays) is often used for cloud setups where the overhead of installing clients on every node is not worth the tradeoff.

I believe strongly that people have zero problem paying their knuckle-dragging police fuckwad of the day $150k if they would actually do the job they signed up for. It’s the fact that 99% of them can’t handle it that pisses people off.

I’m glad Benn has gone into the YouTube space. He has demonstrated a great balanced view on how to sell your soul for advertisement money in YouTube land.

I’ve known of him for a long time simply because of his extremely progressive views on releasing his own music. In other words, I would not care about Benn Jordan but for the fact that he was releasing his own music as torrents on WCD 15 years ago.


Are you saying that because you fundamentally just don’t believe the db is a good place for auth, or because these low-code frameworks tend to roll it in and as such you see a lot of low quality implementations of auth from these systems simply because using them is within reach of someone who has no idea what they are doing?

To me it’s important to make this distinction. One take says that auth in the db is itself a problem. The other take says “auth in the db is a symptom of low-code garbage.”


I like to separate concerns. Unix philosophy and all that. That was the primary concern on my mind when writing my comment above.

I think the feature is there not necessarily because it’s the best technical idea but instead because of its ability to pull in less educated developers. That makes sense financially because there are fewer people out there with a higher degree of expertise. But from my perspective it shows that it’s not meant for me.


FWIW firebase auth and firebase DB are two separate things, and you can use them completely separately. However "Firebase" is a PaaS so I see how it gets confusing.

Fair call out, but if I am a Firebase customer, as I have been in the past (though less frequently now), I treat them as a singular entity. In other words, there’s no situation where I would use Firebase and not use its auth, because the reason I might use Firebase is Because Of the auth, not In Spite Of it. There’s no world for me where Firebase is the preferred option without the auth; integration like that is literally the only reason I would ever consider ClosedSourceOwnedByGoogle over alternatives.

It’s even worse than having No Idea what you are Doing. One can, as has been alluded to in other comments, be a completely naive rube who is using Supabase under the hood with v0 or Lovable and not have any idea that they’re even using it, or that it exists at all.
