Seems weird to be OK with slave labour and poor practices, simply because you want to ignore the negative externalities and consume cheap things (i.e. undermining local economic production).
The article mentions return to office policies as a reason, but you have to wonder what the severance package was.
Anecdotally, a company my friend works at just laid people off, and a majority of those who were not let go wished they had been given the option.
You have to wonder if there's some interesting data with regards to tight (?) labour markets, policy that has nudged severance packages higher, etc.
1.5 months of salary for each year at SAP. And people who have been there for 20 years get 33.5 months.
That’s quite high, even for Germany, where it is difficult to dismiss people. Because it is so difficult, a method often used is a mutual agreement with a severance package above what a court would award you.
Is this a sarcastic quip or are you able to expand on this?
I use a lot of serverless daily, handling events (even ML inference), and it seems to work great, but would love to understand the alternatives and your perspective.
The overhead of abstracting away the servers is a luxury in many ways. This extra cost, I believe, was heavily funded by low interest rates that left the VC world flush with dough. There's been a lot less serverless talk since the Fed started cranking up rates.
Sorry, but this feels like a total non sequitur. Serverless or FaaS is pretty mature now. People get the concept, businesses understand the savings, and the services and tooling are stable. We don't talk about it because it's boring.
Maybe the backend is, but the frontend aspect is very bad
Both GCP and AWS have terrible web UIs for their cloud functions offerings, and every deploy is painfully slow. I'm lucky next Monday is a holiday so I can rest from the stress of having to use GCP last Friday (on a deadline).
Codesandbox should offer their own serverless functions so I can actually have serverless for the whole development cycle
I've used serverless for the past 3 years in production. Unfortunately my experience with it is that it's several orders of magnitude more expensive than a k3s cluster on a cheap provider like Hetzner, and it's slower.
When I last calculated the cost of serverless, it was ~500-5,000x more expensive for the compute compared to k3s and ~10x more expensive for bandwidth at a minimum. To me, removing the burden of maintaining infra didn't justify that level of cost.
Some examples:
- Upstash latency was ~70ms for Redis. Cost was prohibitive.
- AWS Lambda / Cloudflare Worker / Firebase Function cost becomes prohibitive. At least cold starts aren't as bad as they used to be.
- Firebase Realtime Database performance didn't scale and ended up maxed out because of the way it handles nested key updates. Replaced it with a Redis instance in k3s, which is now running at <2% of max capacity and is ~1,000x cheaper (rough sketch of the new write pattern below).
- Tried Planetscale. Cost was much higher than PostgreSQL.
- Tried Vercel. Bandwidth costs are very scary ($400 / TB egress, or ~350x the cost of Hetzner if you don't count Hetzner's free 20 TB per node)
That being said, I don't know of any good, reasonably-priced GPU offerings.
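On the Firebase-to-Redis swap above: a minimal sketch of the write pattern it turned into, assuming redis-py; the entity names and schema here are made up for illustration.

    # One HSET per entity instead of deep per-path updates (names made up).
    import json
    import redis

    r = redis.Redis(host="redis.default.svc.cluster.local", port=6379)

    def update_entity(entity_id: str, updates: dict) -> None:
        # Each nested sub-document is serialized to JSON and stored as a
        # single field on the entity's hash, so one update touches one key.
        r.hset(f"entity:{entity_id}",
               mapping={field: json.dumps(value) for field, value in updates.items()})

    def read_entity(entity_id: str) -> dict:
        raw = r.hgetall(f"entity:{entity_id}")
        return {k.decode(): json.loads(v) for k, v in raw.items()}

    update_entity("42", {"score": 1337, "inventory": {"gold": 10}})
    print(read_entity("42"))

The point is just that the update granularity becomes one key per entity rather than one write per nested path.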
I don't dispute your claim (and am personally swayed by the argument that technical > admin in most situations) - do you have data that shows this disparity?
Purely conjecture (which means absolutely nothing), but I've always assumed that admin and faculty pay rates are similar, and that the volume is where it matters (i.e. 2x more admin than faculty in headcount).
I don't, but suppose on a pure hourly basis they're paid the same. I think that's still unfair, given the amount of value a CS lecturer delivers over an admin.
The SAM mask decoder is small (~4M params), but it requires an image embedding computed by what is, I think, a ~600M-param encoder. Right now the demo uploads the image to get the embeddings, then runs the actual segmentation locally.
I downloaded the code from their repo, exported their PyTorch model to ONNX, and ran a prediction against it. Everything ran locally on my system (CPU, no CUDA cores), and a prediction for the item to be annotated was made.
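Roughly what that looked like, as a simplified sketch: the checkpoint and ONNX file names are whatever you downloaded/exported, and the decoder's exact input names come from their export script, so double-check against that.

    # Rough sketch of running SAM fully on CPU: the heavy image encoder in
    # PyTorch, the small exported prompt decoder via onnxruntime.
    import numpy as np
    import cv2
    import onnxruntime
    from segment_anything import sam_model_registry, SamPredictor

    # Heavy part: the ~600M-param image encoder, run once per image.
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)
    image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
    predictor.set_image(image)
    embedding = predictor.get_image_embedding().cpu().numpy()

    # Light part: the small prompt decoder exported to ONNX, queried per click.
    # Their example notebook pads the prompt with a dummy point and maps the
    # coords into the resized-image frame before calling the decoder.
    session = onnxruntime.InferenceSession("sam_decoder.onnx")
    point = np.array([[500.0, 375.0]])   # one foreground click (x, y)
    label = np.array([1])
    coords = np.concatenate([point, np.array([[0.0, 0.0]])], axis=0)[None, :, :]
    labels = np.concatenate([label, np.array([-1])], axis=0)[None, :].astype(np.float32)
    coords = predictor.transform.apply_coords(coords, image.shape[:2]).astype(np.float32)
    masks, scores, _ = session.run(None, {
        "image_embeddings": embedding,
        "point_coords": coords,
        "point_labels": labels,
        "mask_input": np.zeros((1, 1, 256, 256), dtype=np.float32),
        "has_mask_input": np.zeros(1, dtype=np.float32),
        "orig_im_size": np.array(image.shape[:2], dtype=np.float32),
    })
    mask = masks[0] > 0.0   # boolean mask(s) for the clicked object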
This was my first thought too - I'm starting to question it, because they do have a product. Whether they have fit will be interesting to see.
More than most of the hype seed rounds in 2021.
Anaconda automatically handles things like making sure the correct version of cuDNN for your graphics card is installed. When I tried doing this myself with venv it was really painful.
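(For what it's worth, a quick env-agnostic way to check whether PyTorch in a given env actually picked up CUDA/cuDNN, whichever tool built the env:)

    # Sanity check on the installed stack (works the same for conda or venv+pip).
    import torch
    print(torch.__version__)               # e.g. 2.2.1+cu121
    print(torch.version.cuda)              # CUDA version the wheel was built against
    print(torch.cuda.is_available())       # False usually means a driver/toolkit mismatch
    print(torch.backends.cudnn.version())  # e.g. 8902 -> cuDNN 8.9.2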
I use venv this way. I download and compile specific Python versions and install them in a non-system dir with all the other versions, then just run the specific binary to create a venv, and it seems to work as expected.
Why cause yourself difficulty by drifting towards optionality vs. following the OP's suggestion and using venv?
This topic gets posted to HN far too often - I'm starting to think people are deliberately avoiding venv for some reason, because otherwise it's a perfectly capable system for package management.