Entry-level devs will need to be much more skilled than I was when I entered the field a few years ago.
The work from my internships and the first 6 months of my first full-time job is trivial for ChatGPT. The work since then would need Copilot (as it's specific to the codebase), but even so, I am far more productive with them.
One of my first internships was hacking together a mobile demo of a digital ID concept. I’d be surprised if it took more than a few hours to replicate a month of vanilla HTML/CSS/JS effort from back then.
I would prefer ChatGPT to me as a co-worker up until about 1.5 years of experience, if only because it replies instantly and doesn't forget stuff.
Right - I think when the equivalent of CoPilot shows up in incident response, the security employment market changes for good. When a "cleared" CoPilot (for government-supporting work) shows up, it changes entirely.
If you don’t operate in the approach I describe, or aren’t just an all-around tech expert who likes security for some reason, the stable high-paying market is around digital forensics/incident response firms. Those folks have a lock because there’s only a small group who knows assembly and OSs across multiple systems very well and knows it from a security context. Trivial work for an LLM soon enough, as it’s just parsing opcodes and stitching across log sources at the end of the day. Scary stuff; I'm glad I’m past entry level, and I’m no fool thinking that I don’t have to worry too.
I'm not sure I see this as a reality anytime soon.
> Those folks have a lock because there’s only a small group who knows assembly and OSs across multiple systems very well and knows it from a security context.
There are two parts to this. The first is that, for some of the businesses in that arena, I'm sure that if they could speed up analysis to take on more client jobs with less labor, they would have done so already. The second is: what output are you going to provide that wouldn't need the very same people to decipher, validate, or explain what is going on?
As an example, if you get hacked and make a cyber insurance claim, you are going to have to explain to the insurance company what happened in enough detail for them to try to get out of paying you, and you won't be able to say "XYZ program says it found malware, just trust what it says." If people don't understand how a result was generated, they could be implementing fixes that don't solve the problem, because they are depending on the LLM/decision tree to tell them what the problem is. All these models can be gamed just like humans.
I'm not quite sure I agree that the lack of a better LLM is what has been keeping people from implementing pipeline logic to produce actionable, correlated security alerts. Maybe it does improve things, but my assumption is that, much like we still have software developers, any automation will just create a new field of support or inquiry that will need people to parse it.
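For what it's worth, that kind of correlation logic is mostly plain deterministic plumbing rather than anything waiting on a smarter model. A minimal sketch, assuming two made-up log sources and thresholds (the field names, window, and `known_domains` set are all illustrative, not anyone's real pipeline):

```python
# Toy correlation rule: flag a host with a burst of failed logins that is
# followed shortly by an outbound lookup of a never-before-seen domain.
# Log shapes, field names, and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

FAIL_THRESHOLD = 10              # failed logins per window
WINDOW = timedelta(minutes=15)   # correlation window

def correlate(auth_events, dns_events, known_domains):
    """auth_events: [{'host', 'ts', 'outcome'}]; dns_events: [{'host', 'ts', 'domain'}]."""
    failures = defaultdict(list)
    for ev in auth_events:
        if ev["outcome"] == "failure":
            failures[ev["host"]].append(ev["ts"])

    alerts = []
    for ev in dns_events:
        if ev["domain"] in known_domains:
            continue
        # Count failed logins on the same host inside the window before the lookup.
        recent = [t for t in failures[ev["host"]] if ev["ts"] - WINDOW <= t <= ev["ts"]]
        if len(recent) >= FAIL_THRESHOLD:
            alerts.append({
                "host": ev["host"],
                "domain": ev["domain"],
                "failed_logins": len(recent),
                "window_end": ev["ts"].isoformat(),
            })
    return alerts

if __name__ == "__main__":
    now = datetime(2024, 1, 1, 12, 0)
    auth = [{"host": "ws-07", "ts": now - timedelta(minutes=i), "outcome": "failure"} for i in range(12)]
    dns = [{"host": "ws-07", "ts": now, "domain": "weird-new-cdn.example"}]
    print(correlate(auth, dns, known_domains={"corp.example"}))
```

Writing rules like this has always been possible without an LLM; the open question is who triages what they emit.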
Worse, maybe. I used to be able to tell when someone was using SO because everyone was blindly copying the same email regex answer. Now you can run Mixtral on a personal computer and produce novel output. It’s so much harder to detect from just looking at a PR.