Honestly the same is true for human devs. As frustrating as strict linting can be for newer devs, it’s way less frustrating than having all the same issues pointed out in code review. That’s interesting because I’ve been finding that all sorts of stuff that’s good for AI is actually good for humans too: linting, fast and easy-to-run tests, standardized code layouts, etc. Humans just have more ability to adapt to oddities at the moment, which leads to slack.
My rule of thumb is that if I get a nit, whitespace, or syntax preference as a PR comment, that goes into the linter. Especially for systemic issues, e.g. not awaiting functions that return a promise, any kind of alphabetization, import styles, etc.
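For example, the un-awaited-promise and import-ordering nits map directly onto existing lint rules. A minimal sketch, assuming a TypeScript codebase using typescript-eslint and eslint-plugin-import (my assumption, not something the parent comment specifies):

    // eslint.config.mjs -- hypothetical config for a TypeScript project
    import importPlugin from 'eslint-plugin-import';
    import tseslint from 'typescript-eslint';

    export default tseslint.config({
      files: ['**/*.ts'],
      languageOptions: {
        parser: tseslint.parser,
        parserOptions: { projectService: true }, // type info is required by no-floating-promises
      },
      plugins: {
        '@typescript-eslint': tseslint.plugin,
        import: importPlugin,
      },
      rules: {
        // systemic nit: promises that are never awaited become lint errors, not review comments
        '@typescript-eslint/no-floating-promises': 'error',
        // systemic nit: import grouping and alphabetization
        'import/order': ['error', { alphabetize: { order: 'asc', caseInsensitive: true } }],
      },
    });

Once a rule like this runs in CI, the reviewer never has to type that comment again.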
Yeah I find it pretty funny that so many of us (myself included) threw out strict documentation practices because “the code should be self-documenting!”
Now I want as much of it as I can get.
thelab (https://www.thelab.co) | Senior Backend Engineer | ONSITE (NYC) or REMOTE (US or Türkiye) | Full-Time
We are looking for two experienced back-end engineers: (1) one to lead work with our client’s e-commerce business; (2) a second to lead a greenfield project involving ML- and NLP-driven workflow automation tools. The ideal candidate will have experience working with large Python and TypeScript codebases. We are looking for someone who is self-motivated and able to work with a team early in a project, plan and identify requirements, see a project through to completion, and mentor junior members of the team along the way.
---
About thelab
We’re an agency of makers with deep expertise in solving creative and technology challenges. Our focus is on making better work to help brands work better. From branding & design, to software builds, large-scale e-commerce or anything in between, thelab mixes inspiration and hard work to produce results that mean business. Oh, and we host a mean barbecue too.
Luxury. My washer and dryer both display numbers that appear to be minutes remaining, but do not tick down with any discernible relation to time. They start around 55, then over the course of 90 minutes decrement to 10 in random steps. Then it hangs at 10 for another 10-20 mins, then eventually goes to 0 and shuts off. Are these supposed to be minutes? Is there any reason for their disconnection from the passage of time? Who knows? The manual certainly doesn’t say.
Of all the services AWS has, RDS is one of the best values and lowest risks. The peace of mind and reliability it gives you on something as important as a DB is well worth the cost.
If you’re trusting a party to verify the identity of the party you’re communicating with, doesn’t that imply trusting them with the correspondence itself? After all, they could just “verify” themselves as the other party of your correspondence.
It’s the same principle as user permission management—if someone has the ability to grant permissions then they are a superuser, because they can grant any additional permission to themselves.
Just because I trust you to recommend a therapist doesn't mean I'll trust you to listen to my therapy session and keep it private. Or that you wouldn't try to decode messages.
I believe that you are supposed to use identity providers that aren't involved in the actual transmission of the message.
For example suppose I want to send you an encrypted message. Your email address is in your HN profile. I ask an identity provider for a public key for that email address, encrypt my message using that key, and send it to that email address.
Identity provider shenanigans might result in me encrypting that message with a public key whose private key is known to the identity provider or other third parties, but unless they can intercept my mail in transit or gain access to it in your mailbox, they can't make much use of that.
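To make that concrete, here's a minimal sketch in TypeScript (Node). The identity-provider endpoint and key format are made up for illustration; nothing here is the article's actual API:

    import { publicEncrypt, constants } from 'node:crypto';

    // Encrypt a message for whoever controls the key that the identity
    // provider has bound to this email address.
    async function encryptFor(email: string, message: string): Promise<Buffer> {
      // 1. Ask the identity provider for the recipient's public key (hypothetical endpoint).
      const res = await fetch(`https://idp.example/keys?email=${encodeURIComponent(email)}`);
      const { publicKeyPem } = (await res.json()) as { publicKeyPem: string };

      // 2. Encrypt with that key; only the holder of the matching private key can decrypt.
      return publicEncrypt(
        { key: publicKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING },
        Buffer.from(message, 'utf8'),
      );
    }

    // 3. The ciphertext then goes out over ordinary email. A dishonest provider that
    //    handed back its own key still has to get at the mail itself to read anything.

The point is that the provider's power is limited to the key-lookup step; it never touches the message in transit.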
> After all, they could just “verify” themselves as the other party of your correspondence.
They can't do this at any real scale, because they will be caught. They can't use the same key as the real one (they don't have the private portion, if the system is designed sanely), so anyone verifying the key manually out of band will see that the trust has been violated. They would also have to continuously modify future communications, which is difficult and is yet another way to get caught.
If that were possible, we wouldn't need Certificate Transparency (whose purpose is to detect, at scale, Certificate Authorities doing this and other shenanigans).
Also "being able to catch them" is strictly better than "basically not able to catch them", which is the case for Twitter DMs (and most common DM systems), which was mentioned up-thread.
Why/how would they get caught in the identity-based scheme mentioned in the article? What are you even verifying out of band in this context?
Everything you wrote is true of the webPKI we use on the internet for TLS, but the article is talking about an alternative that is not PKI and does not work the same way.
Even if we had a backup probe to launch right now, it would take 46 years for it to get to where Voyager 1 is now. How is the cost of repairing Voyager 1 greater than 46 years of lost data?
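Rough back-of-envelope behind that 46-year figure (the distance and speed here are my approximations, not from the parent comment):

    Voyager 1 distance   ~ 165 AU  ~ 2.5e10 km   (mid-2020s)
    typical probe speed  ~ 17 km/s               (about what Voyager itself does)
    2.5e10 km / 17 km/s  ~ 1.45e9 s ~ 46 years

And that only gets a new probe to where Voyager 1 is *now*; since Voyager keeps receding at roughly the same speed, a probe that isn't meaningfully faster never actually catches it.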