If both password and MFA are stored in the same shared vault, then MFA's purpose is compromised. Anyone getting access to that shared vault has the full keys to the kingdom, the same as if MFA wasn't enabled.
Also, in this day and age there's no reason to have the root account creds in a shared vault. No-one should ever need to access the root account; everyone should have IAM accounts with only the necessary permissions.
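For illustration, each person gets their own IAM user or role scoped to just what they need. A minimal least-privilege policy sketch (the bucket name and actions here are only placeholders, not a recommendation for any specific setup) looks something like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-app-bucket/*"
    }
  ]
}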
> If both password and MFA are stored in the same shared vault, then MFA's purpose is compromised. Anyone getting access to that shared vault has the full keys to the kingdom, the same as if MFA wasn't enabled.
absolutely
> no-one should ever need to access the root account
someone has to be able to access it (rarely)
if you're a micro-org, having three people with the ability to get it doesn't seem that bad
everything else they did is, however, terrible practice
That email screenshot is pretty bad for Arko. It clearly shows intent to sell PII to a third party at a time when Ruby Central had diminished funds and needed help affording basic services.
I think you are probably right that a lot of engineering burn-out comes from things managers require engineers to do.
But I think it's also true that a lot of what managers say and do is often a lossy representation of things engineers would need to do anyway if they didn't have management.
Remove the managers and the bureaucracy and the things that make programming hard and likely prone to burn-out still exist.
That doesn't mean managers aren't contributors of their own unique frustrations, but I don't think it accounts for the high amount of burn-out in our field.
> Remove the managers and the bureaucracy and the things that make programming hard and likely prone to burn-out still exist.
I don't get burned out working on personal projects. They are written exactly how I want them to be, and can be worked on at a leisurely pace. They don't have scaffolding and ladders littered all over the place, which is equivalent to the output detritus of middle managers and scrum masters. They don't have some coach shouting from the top to "go faster", while they recline on a lawn chair.
Working on a project as a solo dev or in a self-organized group is like scaling a rock wall. You are free to choose how to climb the wall. You can do so without a harness. You can sit at the base and sip on lemonade. You can walk over to a different wall and stare at it for an hour, before deciding not to climb it.
This is compared to being forced to climb the rickety scaffolding and ladders put in place by "people who know better", unable to detach your harness for fear that you'll fall to your death. Even though you can clearly see a much better path to the finish line.
Is one approach theoretically safer than the other? Sure. But when you're bouldering a 20 foot wall with thick pads at the base, all that scaffolding just looks silly.
> That doesn't mean managers aren't contributors of their own unique frustrations, but I don't think it accounts for the high amount of burn-out in our field.
That would require an actual survey.
But I would say that inexplicable direction changes and constant out-of-order requests are major contributors to these frustrations, and those don't come from the practice of writing software.
Using it in practice, I find the sheer quantity of suggestions (often one for every line) fatiguing, especially when 99% of the time they seem fine.
I posit that, over long periods of time and across many engineers, it becomes increasingly likely that a severe bug or security issue will be introduced via an AI-provided suggestion.
This risk, to me, is inherently different from the accepted risk that engineers will use bad code from Stack Overflow. Even Stack Overflow has social signals (upvotes, comments) that allow even an inexperienced engineer to quickly estimate quality. And the amount of code engineers take from Stack Overflow, blogs, etc. is much smaller.
GitHub Copilot is constantly recommending things and does not give you any social signals that less experienced engineers can use to discern quality or correctness. Even worse, these suggestions are written by an AI that does not have any self-preserving motivations.
Copilot's default behavior is stupid. You can turn off auto-suggest so that it only recommends something when you prompt it to, and that should really be the default behavior. This would encourage more thoughtful use of the tool, and solve the fatigue problem completely.
In IntelliJ, disabling auto-complete just requires clicking the Copilot icon at the bottom and disabling it. Alt+\ will then trigger a suggestion on demand. I know there's a way to do this in VSCode as well, but I don't know how.
> I know there's a way to do this in VSCode as well, but I don't know how.
I dug into this a bit since I want the same functionality. I found I needed an extension called settings-cycler (https://marketplace.visualstudio.com/items?itemName=hoovercj...), which lets one flip the 'github.copilot.inlineSuggest.enable' setting on and off with a keybind.
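The keybinding entry ends up looking roughly like this (I'm going from memory on the settings-cycler args schema, so double-check its README; the key chord is just an example):

{
  "key": "ctrl+alt+c",
  "command": "settings.cycle",
  "args": {
    "id": "copilotInlineSuggest",
    "values": [
      { "github.copilot.inlineSuggest.enable": true },
      { "github.copilot.inlineSuggest.enable": false }
    ]
  }
}

With inline suggestions off, you can still trigger one on demand (Alt+\ by default, if I remember right).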
Not sure who's in charge of the Copilot extension for VS Code, but if you're out there reading this, the people definitely want this :) Otherwise of course, your tool rocks!
"...does not gives you any social signals lower experienced engineers can use to discern quality or correctness" is very astute.
I experienced this in practice. I was pairing with an inexperienced engineer who was using Copilot. He was blindly accepting every Copilot suggestion that came up.
When I expressed doubt in the generated code (incorrect logic + unnecessarily complex syntax), he didn't believe me and instead trusted that the AI was right.
I would argue that this kind of problem is going to become less of an issue over time, since they're also going to have to solve the issue of suggesting code samples from deprecated API versions. It's likely that they'll eventually figure out a way to promote more secure code in the suggestions, based on Stack Overflow or other ranking systems.
Yes, they will surely improve a lot and also train users to write better prompts and comments. With millions of users accepting suggestions and then fixing them, they get tons of free labeling. If they monitor execution errors, they get another useful signal. If they use an execution environment, they could use reinforcement learning, like AlphaGo, to generate more training data.
As programmers we take pride in being DRY. Copilot is helping us not reinvent the same concept 1000 times. It also makes developers happier, reduces the need to context switch, increases speed and reduces frustration.
> GitHub Copilot is constantly recommending things
It's only a momentary problem; it will be fixed or worked around. And is it a bad thing to get as many suggestions as you can? I think it's ok as long as you can control its verbiage.
> does not give you any social signals
I don't see any reason it could not report on the number of stars and votes the code has received. It's a problem of similarity search between the generated code and the training set: find the attribution, and from there check the votes and even the license. All doable.
> an AI that does not have any self-preserving motivations
Why touch on that? People have bodies, and AIs like Copilot have only training sets. We can explore and do new things; AIs have to watch and learn but never make a move of their own.
> Copilot is just copy / paste of the code it was trained on.
Every time I hear someone say this, I hear "I've never really tried Copilot, but I have an opinion because I saw something on Twitter."
Given the function name for a test and 1-2 examples of tests you've written, Copilot will write the complete test for you, including building complex data structures for the expected value. It correctly uses complex internal APIs that aren't even hosted on GitHub, much less publicly.
Given nothing but an `@Test` annotation, it will actually generate complete tests that cover cases you haven't yet covered.
There are all kinds of possible attacks on Copilot. If you had said it can copy/paste its training data I wouldn't have argued, but "it just copy/pastes the code it was trained on" is demonstrably false, and anyone who's really tried it will tell you the same thing.
EDIT: There's also this fun Copilot use I stumbled across, which I dare you to find in the training data:
/**
Given this text:
Call me Ishmael. Some years ago - never mind how long precisely - having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world.
Fill in a JSON structure with my name, how much money I had, and where I'm going:
*/
{
"name": "Ishmael",
"money": 0,
"destination": "the watery part of the world"
}
It can even read an invoice: you can ask it "what is the due date?" It's a system that solves due-date and Ishmael questions out of the box, and everything in between.
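For instance, something in the spirit of the Ishmael example above (the invoice text is made up, and the JSON is the kind of answer you'd hope to get back, not a guaranteed output):

/**
Given this invoice:
Invoice #1042, issued 2022-06-01. Payment is due within 30 days of issue.
Fill in a JSON structure with the invoice number and the due date:
*/
{
  "invoiceNumber": 1042,
  "dueDate": "2022-07-01"
}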
>> It can even read an invoice: you can ask it "what is the due date?" It's a system that solves due-date and Ishmael questions out of the box, and everything in between.
That's cool.
But emitting copyrighted code without attribution and in violation of the code's license is still copyright infringement.
If I created a robot assistant that cleaned your house, did the shopping, and occasionally stole things from the store, it would still be breaking the law.
It's fascinating to see how stretchy the word "steals" is nowadays. You can make anything into theft: copying openly available online content and sharing it? Theft. Learning from data and generating? Also theft. Stealing from a physical store? You guessed it.
While I do enjoy everybody acting as armchair lawyers.... until we get an actual legal ruling, the general consensus seems to be that it is sufficiently transformative as to be considered fair use.
>> If you had said it can copy/paste its training data I wouldn't have argued, but "it just copy/pastes the code it was trained on" is demonstrably false, and anyone who's really tried it will tell you the same thing.
So if "it could commit copyright infringement, but does not always do so" is good enough for your company's legal review team, then go for it.
Has anyone tried to see how similar their manually written code is to other code out there? I bet small snippets 1-2 lines long are easy to find. It would be funny to realise that we're more "regurgitative" than Copilot by mere happenstance.
> I posit that, over long periods of time and across many engineers, it becomes increasingly likely that a severe bug or security issue will be introduced via an AI-provided suggestion.
AI can also do code review and documentation, helping us reduce the number of bugs. Overall it might actually help.
"I posit it becomes increasingly likely over large periods of time over many engineers that severe bug or security issue will be introduced via an AI provided suggestion."
I'll go one further than the "Copilot is stupid" take.
It's supposed to be artificial intelligence. Why in the eff is it suggesting code with a bug or security issue? Isn't the whole point that it can use that fancy AI to analyze the code and check for those kinds of things on top of suggesting code?
For IT/security folks looking for a good rundown of what's new, we put this together; it talks about Passkeys, RSR, Gatekeeper improvements, and Lockdown Mode.
They’ve been listing it as a Ventura upgrade, and all the marketing (more or less) points to this as a Ventura-and-later feature, but it’s on Big Sur and Monterey too.
As an infosec person, I'm trying to get us disentangled from this mess. Lots of orgs install surveillance under the guise of security reqs, but let's be honest, they are doing it because they're afraid folks aren't working. IMO this stuff hurts the security team's mission.
While researching this article, we rewatched this interview with Jobs at the "All Things D" D3 conference, which gives a lot of interesting insight into Jobs' mindset about the evolution of macOS at the time. https://youtu.be/iGXdnLMbnds?t=1798
My favorite Jobs quote (which we featured in the article) is:
"Avie Tevanian, the person that was running software at the time, showed us OS X and every time you wanted to load an application into OS X, whether it was off the internet or even off a disc, you had to type your name and password–you had to authenticate. And we gave him incredible shit for that. We said ‘Avie, are you nuts? This is the Mac!’ And he said, ‘trust me.’ And so we deferred to Avie on that after trying to twist his arm for a year. And boy, was he ahead of his time."
I'm sure that's absolutely a very liberal use of the word "we", and it was likely Steve himself banging on Avie's door trying to get him to capitulate and remove the prompt, which would have set a fundamentally different tone for OS X security going forward.
> Every child I know diagnosed with ADHD had parents who didn't want to deal with them
You must not know many parents then.
Parents I know that have children with ADHD recognize their children are struggling beyond simple hyperactivity. These are children that are markedly behind their peers in childhood milestones regardless of their family upbringing, education, and socio-economic status. These are children that have a deficiency in the executive function of their brains where hyperactivity is one of many symptoms, and is not even necessarily the most worrying.
These are children that struggle with simple tasks that other children do not.
Parents of these children are no less loving, caring, or capable than parents without ADHD children. Parents should not be shamed for using effective medications (like methylphenidate) so their children can have positive outcomes in their development and adult life.
> What we really need is a strong emphasis on family development, courses built around it and support groups
ADHD is generally a disorder that you are born with. No amount of family development can prevent it.