> Now it feels like many companies have made attempts to conform to the standards, which is no simple feat, especially for multi-nationals.
You're right, actually conforming to the standard is no easy feat, but luckily, you can do the bare minimum[0] and continue your greenwashing practices.

[0] https://www.bcorporation.net/en-us/find-a-b-corp/company/nes...
Your comment is disingenuous. While there are certainly many businesses that do not undertake sustainability efforts in earnest, or at all, we (as consumers) have to want companies to try. Otherwise they are even further disincentivized to make an effort. I agree that it’s a bad look when a Nestle company can obtain a B Corp score, but that’s not to discredit the entire program.
B Corp certification makes a critical distinction for consumers. Is it a panacea for the climate crisis? Of course not. Will it single-handedly fix profit-incentivized business models that exploit natural resources? Of course not. It’s a start, and you shouldn’t assume that all companies undertaking its certification are merely greenwashing.
You could try to make the argument that enabling greenwashing is detrimental to the environmentalist effort, but then I’d have to get a sense of what you think is a meaningful step toward adjusting business practices.
> I agree that it’s a bad look when a Nestle company can obtain a B Corp score, but that’s not to discredit the entire program.
Can you explain why this shouldn't discredit the entire program? What distinction does B Corp certification actually make? I realize the point is that it's supposed to signify that the corporation intends to make a positive impact, but clearly that's not actually a requirement.
> Will it single-handedly fix profit-incentivized business models that exploit natural resources?
Indeed, it will do this not at all. It's the materials scientists and the VCs and the investors and the logisticians and the engineers who will solve the problems. B Corp is a marketing thing to allow companies to access middle-class wallet share.
And fair enough; whatever differentiates you. But it's not going to solve anything.
To be clear, I did not write my comment to discredit the entire system. B Corp certification is the best thing we have to make these decisions a little easier on the consumer.
I made the comment to make people aware that it should still be looked at with caution, as corporations are going to try to game the system.
Kinda. Meta and Twitter want you to join their platforms; they aren't general-purpose search engines scraping the entire Internet - they're scraping the people that join them. Requests from Meta/Twitter are probably from a link someone put in a post.
ChatGPT can't be an impolite Internet citizen (spoofing UAs) and claim to be using AI for the good of humanity, so they're not going to be dishonest with their user-agent.
> ChatGPT can't be an impolite Internet citizen (spoofing UAs) and claim to be using AI for the good of humanity, so they're not going to be dishonest with their user-agent.
That reads an awful lot like "Google can't be evil and claim that their motto is 'Don't be evil', so they're not going to be evil", but here we are. The profit motive eventually undoes any principled claim by a company.
Absolutely, but until then, adding bot UAs to a blocklist is somewhat useful.
Like anything else in IT security, it's never "set and forget" permanently; the effectiveness of things like that decays over time and must be periodically re-evaluated.
But if something can be used to your advantage now, even if only for a while, then why not use it?
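For illustration, here is a minimal sketch of what such a UA blocklist check might look like (the function name and threshold are my own; the tokens are the self-reported UA strings these crawlers are known to use):

```typescript
// Known crawler tokens; GPTBot (OpenAI), facebookexternalhit (Meta) and
// Twitterbot all identify themselves honestly in the User-Agent string.
const BOT_TOKENS = ["GPTBot", "facebookexternalhit", "Twitterbot", "CCBot"];

/** Returns true when the User-Agent header matches a blocklisted crawler. */
function isBlockedBot(userAgent: string | undefined): boolean {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return BOT_TOKENS.some((token) => ua.includes(token.toLowerCase()));
}

// Usage: reject the request before doing any real work.
console.log(isBlockedBot("Mozilla/5.0 (compatible; GPTBot/1.0)")); // true
console.log(isBlockedBot("Mozilla/5.0 (X11; Linux x86_64)"));      // false
```

It decays exactly as described above: the list only catches bots that keep identifying themselves.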
A long time ago, I was in college (UK college, i.e., pre-university) and still learning.
I discovered a classmate was involved in some event, and found the event's website. They didn't have a captcha. By your logic, this was the right choice.
In reality, my dumb ass decided it would be fun to script something that would register millions of users (another classmate ran the script with me). After a few hundred thousand registrations, the website was brought to its knees. I was a bit shook, but didn't think much of it.
The next morning I came into class and was reprimanded by my teacher. Turns out, the owner of said event had threatened to sue the school and me, among other things. What had happened was their servers were down: their email server was brought to its knees, their web servers had died, and generally I had caused a lot of damage without even thinking about it. It potentially caused them to lose some money. None of this was my intention, of course, but I didn't know much better.
Point is, kids will kid, and spammers will spam. There are plenty of bots that just scrape the internet and fill out forms indiscriminately.
Captcha may or may not be the best option here (I'm always of the opinion it's not, especially not reCAPTCHA), but something has to be put in place, even if only to stop the majority of bad actors.
You can, but then you discover that places like Bangladesh and Cambodia, which do a fair bit of freelance work on the 'net, use a surprisingly tiny number of IPv4 addresses to do it.
For lots of these countries, the total allocation of IPv4 addresses is < 20 per 1000 people, and the nature of their access (through glorified internet cafes) means that you will have some IP addresses that really are totally legit, yet have LOTS of users.
One size fits all is very dangerous on the Internet.
On the one hand, I assume bad due to cheap equipment. On the other, it's not like v6 addresses are expensive, and you need some way of addressing every subscriber anyway. As more people sign up (as the country gets more people with internet access), you need more equipment, which could support v6 out of the box; and the excuse for CGNAT I've always heard is old equipment that is harder to upgrade than to put a NAT router in front of. Could go either way from my POV.
If the roll-out is good, then all those people are already taken care of and the minority left on v4 CGNAT aren't bothered by the collective rate limit.
(To preempt the eventual remark that users can generate a billion addresses in v6: rate limiting on v6 works by limiting whatever prefix the ISP gives out to subscribers, like /56, not individual addresses the way it's often done with v4.)
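To make that concrete, here's a rough sketch of deriving a rate-limit bucket from the delegated prefix rather than the individual address (entirely my own illustration: it assumes the ISP hands out /56s and skips edge cases like IPv4-mapped addresses and zone IDs):

```typescript
// Bucket an IPv6 address by its assumed ISP-delegated prefix (/56 here),
// so a subscriber can't dodge the limit by rotating through the
// ~2^72 addresses inside their own prefix.
function v6BucketKey(addr: string, prefixLen = 56): string {
  // Expand the "::" shorthand into the full eight hextets.
  const [head, tail = ""] = addr.split("::");
  const headParts = head ? head.split(":") : [];
  const tailParts = tail ? tail.split(":") : [];
  const zeros = addr.includes("::")
    ? Array(8 - headParts.length - tailParts.length).fill("0")
    : [];
  const hextets = [...headParts, ...zeros, ...tailParts]
    .map((h) => parseInt(h, 16));
  // Zero every bit below the prefix boundary.
  const masked = hextets.map((h, i) => {
    const bitsBefore = 16 * i;
    if (bitsBefore + 16 <= prefixLen) return h; // hextet fully inside prefix
    if (bitsBefore >= prefixLen) return 0;      // hextet fully below it
    return h & (0xffff << (16 - (prefixLen - bitsBefore))) & 0xffff;
  });
  return masked.map((h) => h.toString(16)).join(":") + `/${prefixLen}`;
}

// Two addresses from the same /56 land in the same bucket:
console.log(v6BucketKey("2001:db8:1234:5601::1")); // 2001:db8:1234:5600:0:0:0:0/56
console.log(v6BucketKey("2001:db8:1234:56ff::2")); // 2001:db8:1234:5600:0:0:0:0/56
```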
As an aside, it should also be kept in mind that not every use case involves signing entire countries up for their service, even in an ideal case.
That has been my experience on any mobile network, also in 2007 or so when v4 addresses were still available (because my 15-year-old self wanted to seed torrents with my unlimited data bundle ...on GPRS). It's a fair point that one has to consider this part of the market, though I was primarily thinking of wired connections.
It isn't good, and being purely on IPv6 is still a terrible web experience in any event. A huge percentage of major websites don't properly support IPv6 yet. It's ridiculous.
What you could do is use both: one signup from each IP per day before you get a CAPTCHA. Then you're not subjecting 99% of your users to training Google's AI for free, but the people at a cafe in Bangladesh can still sign up.
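As a sketch of that hybrid (the threshold, in-memory counter, and names are illustrative; real code would use a store with TTLs, e.g. Redis):

```typescript
// Count signups per IP per calendar day; the first one is free,
// everyone after that from the same IP has to pass a CAPTCHA.
const signupsToday = new Map<string, number>();

function needsCaptcha(ip: string, freePerDay = 1): boolean {
  const key = `${new Date().toISOString().slice(0, 10)}:${ip}`; // YYYY-MM-DD:ip
  const count = (signupsToday.get(key) ?? 0) + 1;
  signupsToday.set(key, count);
  return count > freePerDay;
}

// First visitor from the cafe's IP signs up unchallenged...
console.log(needsCaptcha("203.0.113.7")); // false
// ...everyone after them that day just has to solve a CAPTCHA.
console.log(needsCaptcha("203.0.113.7")); // true
```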
There's nothing wrong with that. VSCode will certainly see a lot more development than a fork ever would. So the best thing to do here is simply build VSCode with all the proprietary stuff removed.
Clangd and rust-analyzer plugins aren't made by MS. Python has a couple of separate ones, one of which the official Microsoft plugin uses. I think the only language needing an MS plugin is TypeScript, which is an MS language anyway.
None of these clients filter ads. They just return the API results. Reddit could have in theory returned ads and blocked clients who didn't (or required them to pay some/more money). Instead, they decided to charge extortionate amounts, essentially causing 3rd party apps to be unable to afford them.
It's very obvious their aim isn't necessarily charging; they're more interested in getting rid of 3rd party clients so that people would be forced to use their horrendous app. As a company, they have a right to go down that route, but they should just say so.
> None of these clients filter ads. They just return the API results. Reddit could have in theory returned ads and blocked clients who didn't (or required them to pay some/more money). Instead, they decided to charge extortionate amounts, essentially causing 3rd party apps to be unable to afford them.
Or, you know, just a reasonable price for API access. Even $2/mo would compensate them for the lost ads multiple times over.
Shouldn't moderators and active users (through posting, commenting, voting/curating, etc.) then at the very least get a discount? After all, those people are and have been compensating the platform for any lost ad income far beyond even that $2/mo.
Why is it that companies go towards bad UX when they start ramping up profits? Download size is one thing, but ignoring that, how is it that they thought the new (Reddit) website was a good idea? Sure, it's mobile friendly, but it's barely usable on both mobile and desktop. For example: why do I have to constantly click "read more"?
This seems to be a common thing with all kinds of companies: Digg, Facebook, many others. It's not like you can't build ads into the current site, right? Tracking can easily be built in these days. Sure, it'll decrease initial download speeds, but the UX won't be as terrible as what they keep coming up with.
Not based on anything I can verify, but my feeling is that when a company gets to a certain size and becomes high profile, it will attract certain types of employees as well. The more high profile, the more a UI/UX department will want to put their stamp on it. Instead of looking at what would make the product more user friendly and accessible, they'll be attracted to the shiny new things that they can do, simply because they can. The little nuances that are present in the old version are allowed to fester until all of them combined lead to a general view that the whole thing is crap.

Instead of doing small incremental improvements, they'll convince everybody (up to C-level) that a complete overhaul is the only way to move forward. This usually tends to happen in an echo chamber, where everybody who doesn't share the same vision of grandeur is excluded from the conversation. That, combined with business goals around profitability, is a dangerous mix, and in my experience it usually ends in tears.
But a complete revamp doesn't necessarily mean bad UI/UX. It just means something different; it can be done in an equally good or better way. In comparison, the new Reddit UI was just horrendous. It's barely usable. Without old.reddit or 3rd party apps, I would probably have stopped using Reddit by now due to how painful it is to use.
It used to be somewhat difficult to track a reader's progress solely from the browser's scrollbar. Not impossible, but it wasn't something the average code-camp newbie could figure out. So sticking a stupid "Read More" button on the page was the default solution.
That still seems to be the solution for a whole lot of websites, even though the Intersection Observer API now exists.
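For reference, the modern version is only a few lines; a hypothetical sketch (the sentinel id and metrics endpoint are made up) of tracking "read to the end" without any "Read More" button:

```typescript
// Fire once when a sentinel element at the end of the article scrolls
// into view, instead of forcing the reader to click through.
const sentinel = document.querySelector("#end-of-article");
if (sentinel) {
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      // The reader actually reached the end; report it and stop watching.
      navigator.sendBeacon("/metrics/article-read");
      observer.disconnect();
    }
  });
  observer.observe(sentinel);
}
```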
Later in the life of these social media websites, governed by a profit-oriented company, the new metric becomes engagement (Ka-ching! $$). It does not matter to them how the UX suffers ... until it starts to matter. But by then it might be too late.
I find it really wild that anyone would ever recommend ECS. A developer deploying a service involves:
- Setting up certs (managed as TF)
- Setting up ALBs (managed as TF)
- Setting up the actual service definition (often done as JSON that is passed into TF)
Possibly other things I'm forgetting.
And it requires a *developer* to know about certs and ALBs and whatever else.
With EKS, this can all be automated. The devops engineer can set it up so that deploying a service automatically sets up certs, LBs, etc. Why are we removing such good abstractions for a proprietary system that is *supposed* to mean less management overhead, when in reality it causes devs to do so much more, and to understand so much more?
I honestly don't understand where you're coming from. If a devops engineer can set things up on eks for people to launch without thinking of those things, what's stopping that same engineer from doing similar for ecs?
When I was at Rad AI we went with ECS. I made a terraform module that handled literally everything you're talking about, and developers were able to use that to launch to ECS without even having to think about it. Developers literally launched things in minutes after that, and they didn't have to think about any of those underlying resources.
Handing Terraform to developers has its own host of issues.
A major benefit of k8s that is usually massively overlooked is its RBAC system, and specifically how nice a namespace-per-team or namespace-per-service model can be.
It's probably not something a lot of people think about until they need to handle compliance and controls for SOC 2 and friends, but as someone that has done many such audits, it's always been great to be able to simply show exactly who can do what on which service in which environment, in a completely declarative way.
You can try to achieve the same things with AWS IAM, but the sheer complexity of it makes it hard to sell to auditors, who have come to associate "Terraform == god powers", and convincing them that you have locked it down enough to safely hand it to app teams is... tiresome.
What you say may make sense for a large corporation with hundreds of developers from many teams, all sharing a single cluster, but remember this is a pre-revenue startup with a single dev team of less than a dozen people.
But then with a large cluster you will struggle to split the costs. In such scenarios I'd rather give each team its own AWS account and have some devops people set up everything from the landing zone.
In this particular case, every service is set up from less than 100 lines of Terraform, which includes Docker image build and push, as well as the task and service definition that deploys that docker image.
Yes, they need to handle Terraform, but it's really not so different from the previous docker-compose YAML file, not to mention how it would look if converted to K8s YAML.
Make Terraform run only off git repos, and control commit rights to that repo. That's been a successful approach for me in the past when dealing with auditors.
Why does the developer need to care about the certs and ALBs? The devops engineer you'd need to set up all those controllers could just as well deploy those resources from Terraform.
As I showed in the diagrams from the article, this application has a single ALB and a single cert per environment, and the internal services only talk to each other through the RabbitMQ queue.
DNS, ALB and TLS certs could be easily handled from just a few lines of Terraform, and nobody needs to touch it ever again.
With EKS you would need multiple controllers and multiple annotations controlling them, and then each controller will end up setting up a single resource per environment.
The controllers make sense if you have a ton of distinct applications sharing the same clusters, but this is not the case here, and would be overkill.
> DNS, ALB and TLS certs could be easily handled from just a few lines of Terraform, and nobody needs to touch it ever again.
Welcome to reality, where this is not the case.
I'm currently working at a company where we're using TF and ECS, and app-specific infra is supposedly owned by the service developers.
In reality, what happens is devs write up some janky terraform, potentially using the modules we provide, and then when something goes wrong, they come to us cos they accidentally messed around with the state or whatever. DNS records change. ALB listener rules need to change.
That seems a strange way to look at things to me. If you're going to give credit for things that a devops engineer can do inside the Kubernetes platform, why not give equivalent credit for what a devops engineer can do with a Terraform module that would achieve substantially similar levels of automation and integration with ECS?
It's also weird to leave out which things are versioned components that must be installed, maintained, and upgraded by you (e.g. cert-manager, an ALB controller, the Kubernetes control plane) but that do not apply to a Terraform (or CloudFormation)-based deployment to ECS.
It was definitely not about being contrarian, but about offering, first and foremost, a more cost-effective but still relatively simple, scalable and robust alternative to their current setup.
They have a single small team of less than a dozen people, all working on a single application, with a single frontend component.
Imagine instead this team managing a K8s setup with DNS, ALB and SSL controllers that each set up a single resource. I personally find that overkill.