Hardcoded password in Confluence app has been leaked on Twitter (arstechnica.com)
216 points by duxup on July 22, 2022 | 70 comments



> To figure out if a system is vulnerable, Atlassian advised Confluence users to search for accounts with the following information:

    User: disabledsystemuser
    Username: disabledsystemuser
    Email: dontdeletethisuser@email.com
Why does this even exist at all? It doesn't even seem like a default admin user. Is this for automated testing that somehow ended up part of the deployed codebase?
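
For illustration, here's a minimal way to run the advised check directly against the database. This is a sketch only: the cwd_user table and its columns are an assumption about Confluence's embedded Crowd directory schema, and the JDBC URL and credentials are placeholders you'd adjust for your own installation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class VulnCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL/credentials; needs the PostgreSQL driver on the classpath.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/confluence", "confluence", "changeme");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT user_name, email_address FROM cwd_user WHERE user_name = ?")) {
                ps.setString(1, "disabledsystemuser");
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("Account present: " + rs.getString("user_name")
                                + " <" + rs.getString("email_address") + ">");
                    } else {
                        System.out.println("Account not found.");
                    }
                }
            }
        }
    }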


I dunno how a security audit of a staging environment wouldn't pick this up. What security professional wouldn't go "hmm, let's see what users this thing created and what their permissions are" as one of the top 10 things...

I dunno, maybe I'm in the wrong field, but at least I would check that out.


Security audit means someone ran Nessus and checked that the version strings were in the acceptable versions spreadsheet.


No, it doesn't. You're trivialising a whole industry with a whole range of skills and professionalism. A security audit means whatever you agree on between you and the contractor / agency / your internal team. (Specifically, if you don't want the service just for the corporate checkbox, you can find a company which will do what you're after.) Similarly, if I say I'm a bank and you should give me money, it doesn't mean that the whole concept of banking is a scam.


It's correct, AND trivializing a whole industry.


Do you have a source on this? This is quite an accusation, doubling down on it must indicate some certitude...


It’s just absolutely true in my experience. Special agents that gave talks to us in grad school, FBI directors that spoke at security conferences I’ve been to, the people I grew up with that went into security - all morons! The professor I had that was into security and invited some of these people was relatively legit, but literally zero people I’ve met in industry are. One of my heuristics is people that use the word “cybersecurity” don’t know what they’re talking about.


So just to be clear... you listed your personal biases from various experiences that were not security audits as a response to, essentially: have you got a source for audits being just Nessus scans? For the record, as a person both involved in running security reviews of webapps and dealing with good and bad companies doing the same, I'm kind of annoyed / disappointed by how many people here can't handle nuance or understand things in context.


This is frustrating to read. The parent is telling the truth, I can’t give you my internal audits because they’re extremely confidential and sometimes a little bit shameful.

It also would affect business relationships, which every employment contract I've ever signed states is a violation of my employment terms.

So, to pile on a little: yes, the parent is right in a lot of cases, there are exceptions, but of 12 audits maybe 10 were essentially Nessus version checks. This was needed mostly for rubber stamping.


I'm not disagreeing with you. There are lots of companies doing crap work, and lots of companies that need only the compliance checkmark and hope for crap work with no hard-to-fix findings. It's sad, but that wasn't even my issue. I think it's a problem when people start from the expectation on the other side: if you had a security audit, it was just a rubber stamp. It's unfair to people who may have invested time and money in an engagement, and it doesn't help people who want to run a good audit and hear that such a thing doesn't exist.

Better ways to say the same thing without undermining people's trust in the whole idea: there are lots of scammy security audit companies, and Confluence should make sure to engage a good quality one. Companies should invest in serious pen tests rather than just org compliance for security. Here are companies that actually did a good job...

Unless the goal is to bury the whole idea under a list of issues with bad actors and make sure nobody knows there are alternatives?


This is both hilarious and alarming


I used to let people like that scare me away from using the term “cybersecurity” professionally, and I used to be a bit of a curmudgeon about the term myself.

But when that line of work started putting food on the table, it really softened the ground. “Cybersecurity” is something people understand better than “infosec,” and a part of being a professional is being able to communicate about your work in relatable terms.

So I stopped worrying about those people. I came to discover few of them knew their way around my field, so I stopped being insulted by them. They didn’t know what they were talking about.


I feel like many security professionals could have thought "this is necessary for some internal system operation so I won't touch it, and it's not like it has a hardcoded password or anything" and yet it has a hardcoded password.


I’m a security auditor. I don't think many security auditors think like that.


Security people seem to vary from proper devs with an interest in security to "I used to be a marine now I'm a security contractor" methodology-obsessed types.

My faith in the former is strong, the latter category worries me.


Actually, "devs" are typically not technically qualified either. You have to really understand how things work, not just be able to write a program.

But you're basically right. And now there seems to be a meme going around that it's elitist and inappropriate to expect people to have any technical understanding at all before they start dictating technical decisions around security, and all you need is a willingness to learn the alphabet soup of requirements and mindlessly apply the checklists.


I said proper devs, in my defense. The people I hang around with tend to know how things work.


The ideal is you have both types taking different approaches


I came from dev. I agree.


That's a really sad statement, and proves to me that security audits are just another scene within the grand play that is security theater


Having been in a number of audits (IT, insurance, financial, etc.) over the years... all audits are theater.


Yeah, but security theater is the only one that makes you take all of your stuff out of your bags and then take off your shoes.


Yes, in air travel alone it has given us the war on liquid volumes above 3.4oz and the war on shoes. Luckily, by 2009 the fervor and public willingness to go along had died down a bit, so we all got to keep our underwear after NWA Flight 253.


Audience participation!

………..


I don’t think it’s theatre. It’s difficult to audit applications when you lack context; this could also have been seen and assumed to be benign / intentional. I think the industry is immature and severely conflicted on the quality:profit ratio.


Is it really that difficult to think of looking at the capabilities of the default users created by the software after a fresh install, and auditing them to ensure nothing hinky has occurred? Fuck me, if that's truly the case, we're all doomed. If the installer modified anything, then those modifications should obviously be checked out to have done exactly and only what they were expected to do. Installers typically have elevated privileges, and are known vectors for oopsies, let alone maliciousness. Hell, Zoom left an elevated binary behind because it was simply easier for them, not because they were malicious. For an "audit" not to inspect these things is such an obvious error on the auditor's part.


Applications have default admin users. Nobody saw log4j for 15 years. Hindsight is a great thing to have.

This has nothing to do with elevated binaries or anything else.

To be clear, security is assurance. It’s not just security who screwed up here; it’s also the devs that shipped it, the testers for not raising it, and the product managers for not ensuring better quality assurance occurred.


>Applications have default admin users.

Yup. And after install, they should be tested. Default non-admin users should also be tested, at the very least, to ensure they are actually restricted and can't do admin things.

>Nobody saw log4j for 15 years. Hindsight is a great thing to have.

You are moving goalposts here. Nobody mentioned log4j issues. The issues being discussed here do not require hindsight.

Confluence left a hardcoded password. At the point a dev is hardcoding an f'ing PASSWORD, alarms should be going off, with flashing lights and everything. If an auditor isn't searching the codebase for something as simple as 'password = ' to spot a hardcoded string (a naive version of that scan is sketched below), that's a weak audit. The fact that no internal code review caught this is also not a good sign.

100% agree no single person can imagine every single scenario that would potentially cause problems down the road. However, when new things pop up, they should be added to the list of things to check for, not met with an immediate throwing of hands in the air and a "we don't do that kind of thing". Admitting it wasn't checked for because it was such an out-of-consideration thing, but then saying "we'll keep that in mind for future testing", would have been a much better response than a bunch of whataboutisms.
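
For what it's worth, the kind of scan mentioned above doesn't need a fancy tool. A naive sketch (the regex and the .java file filter are illustrative, not what any particular auditor actually uses):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.regex.Pattern;
    import java.util.stream.Stream;

    public class NaiveSecretScan {
        // Crude pattern for string literals assigned to password-like names.
        private static final Pattern HARDCODED =
                Pattern.compile("(?i)(password|passwd|pwd|secret)\\s*=\\s*\"[^\"]+\"");

        public static void main(String[] args) throws IOException {
            try (Stream<Path> files = Files.walk(Path.of(args[0]))) {
                files.filter(p -> p.toString().endsWith(".java"))
                     .forEach(NaiveSecretScan::scan);
            }
        }

        private static void scan(Path file) {
            try {
                List<String> lines = Files.readAllLines(file);
                for (int i = 0; i < lines.size(); i++) {
                    if (HARDCODED.matcher(lines.get(i)).find()) {
                        System.out.printf("%s:%d: %s%n", file, i + 1, lines.get(i).trim());
                    }
                }
            } catch (IOException e) {
                // Skip unreadable files rather than abort the whole scan.
            }
        }
    }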


Take it from someone who works in the industry: this stuff is -hard-. I could go and be a developer with similar pay, think half as much, and have half as much responsibility. The industry is broken, and the subject matter is highly complex. Acting as if everyone is incompetent because nobody noticed a hardcoded cred in a third-party plug-in (until someone -did- notice it) alienates people from getting into the industry in the first place.

Security people aren't infallible, nor are developers, nor are you.


I'm saying the dev that hardcoded a password is near incompetent. The fact that nobody else working with that dev caught it in code review is near incompetent. And I'm saying that you, as an auditor, telling me it's normal not to check the changes made to the system after the software being audited is installed (not testing users etc., as was claimed) seems ridiculous.

Yes, we're all error prone. Some mistakes are innocent and triggered by multiple layers of things aligning, some mistakes are from not enough experience, some are malicious, some are just other things. Hard coding a password is damn near unforgivable though.


Very well could have, and then someone said "Yeah but we neeeeeed it"


Why would you expect a “questions for confluence” plugin to create a user, or have the ability to create a user…


Does that also mean that whoever owns email.com could reset the password to that account?


https://nitter.42l.fr/fluepke/status/1550471087560982531 or anyone who signed up for this previously nonexistent email account...


The user name is so bad you would almost think it is designed to intentionally mislead.

Anyone in their right mind would otherwise call this functional-user-questions-plugin.

On the other hand, if someone intentionally wanted to mislead they would have named the user James to mislead even more.


I'd agree, most likely part of the seed data that's needed for install or bootstrapping.


I have disabled users in my project as well - it works great for an importer when something just doesn't quite import as a resource belonging to something (flukes happen).

However, why is it so hard to make sure that a disabled user is actually disabled... I mean, even just setting the password to NULL would result in no possible SHA-256 hash matching (and I use hashing more advanced than SHA-256 alone BTW, just saying for sake of argument here). Instead, some idiot set the password to disabled1system1user6708 hoping that nobody would ever figure it out. Which might have somehow still actually worked because reversing a hash is hard, had they not left it in plain text in the package.
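
A minimal sketch of the fail-closed check being described, with hypothetical names (the point is that a disabled flag or a NULL hash short-circuits verification before any hash comparison can succeed):

    import java.util.Optional;

    // Sketch: verification fails closed for disabled accounts,
    // regardless of what (if anything) is stored in the hash column.
    final class PasswordVerifier {
        record Account(String username, boolean disabled, String passwordHash) {}

        boolean verify(Optional<Account> account, String candidate) {
            if (account.isEmpty()) return false;      // unknown user
            Account a = account.get();
            if (a.disabled) return false;             // disabled means no login, ever
            if (a.passwordHash == null) return false; // NULL hash can never match
            return slowHashMatches(candidate, a.passwordHash);
        }

        private boolean slowHashMatches(String candidate, String storedHash) {
            // Placeholder: use a proper password hash (bcrypt/argon2) in practice,
            // not plain SHA-256.
            return false;
        }
    }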


Note that this isn't in default Confluence installations. It applies to installations that have installed the Questions for Confluence plugin, which is an official Atlassian plugin.

The plugin page shows 8K installs when I checked: https://marketplace.atlassian.com/apps/1211644/questions-for...

Disappointing to see this coming from an Atlassian official plugin. I wonder if they outsourced this to some contractors and didn't review it closely, or if they developed this in-house.


Considering all the security issues in Confluence and JIRA lately, I think they have extensive expertise in this area. No need to outsource.


This is the problem with big companies and big software projects. Your security is as weak as your dumbest / least trained dev team. This was probably thought of as a secondary-priority project, so the B team or C team got assigned to it.


Now that the cat's out of the bag, might as well link to the tweet: https://twitter.com/fluepke/status/1549892089181257729


For reference, the tweets are as follows:

--

Discovered by a friend of mine:

CVE-2022-26138: A remote, unauthenticated attacker with knowledge of the hardcoded password could exploit this to log into Confluence and access all content accessible to users in the confluence-users group

The password is disabled1system1user6708

Proof: https://packages.atlassian.com/maven-atlassian-external/com/...

Also saved to the @internetarchive just to make sure, it stays online: https://web.archive.org/web/20220720225515/https://d34y9yt11...


A pet peeve of mine is when news articles cite public sources like social media but don't bother to link to them. I can understand it for NSFW content, but that should be the exception, not the rule!


Why would you want them to give publicity to a description on how to exploit something? At least when you don't publish the "hack" you can let people know to take action without trivializing the exploit. Yes, I know it's still trivial to find it by searching Twitter.


It's a matter of responsible disclosure. Increased terror causes increased publicity, causing more systems to be fixed, and pushes more people to take urgent action against the vulnerabilities. Your PHB cannot say "hackers would need a password, don't worry".


What's PHB?


The Pointy Haired Boss is the main "villain" in the Dilbert comic strip by Scott Adams.


Yeah, this was the initial discovery/disclosure tweet, as far as I know. Notice it was posted two days ago, same as the announcement from Atlassian.


At least the password isn’t “12345.”


Hey, that's the combination on my luggage!


audits, soc 2, and all the other bs hoops the industry jumps through… for what?

when stuff like this exists it’s pretty clear that large enterprises’ idea of security and risk management is all made up to sound good and lacks teeth.


Well I was forced to add password rotation to an enterprise product by a large enterprise user only a couple of years ago despite password rotation being rejected by research for a decade and no longer recommended even by Microsoft. So, yeah. Like most enterprise stuff it’s all just a bunch of boilerplate.


Your opportunity for vengeance on something like that is to have it put up a big warning that says "This setting is dangerous and violates the guidance of the following long list of experts and government bodies: list goes here. Are you SURE you want to take this security risk?".

And don't forget to log the insecure configuration every time the system starts up, too.
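
Something like this sketch, say (names are hypothetical; the guidance referenced is NIST SP 800-63B, which recommends against forced periodic rotation):

    import java.util.logging.Logger;

    // Sketch: surface a deliberately-enabled insecure setting at every startup,
    // so it shows up in logs and audits rather than being silently forgotten.
    final class InsecureConfigWarnings {
        private static final Logger LOG = Logger.getLogger("security.config");

        static void warnOnStartup(boolean passwordRotationEnabled) {
            if (passwordRotationEnabled) {
                LOG.warning("Forced password rotation is ENABLED. This contradicts "
                        + "current NIST SP 800-63B guidance and was enabled by an "
                        + "administrator who accepted the associated risk.");
            }
        }
    }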


This is the germ of a good idea. I don’t do that kind of work any more (building and selling enterprise applications) but I suspect that the way to push back is to have a document they would need to sign that transfers all associated risk to the customer.

The people in charge of these companies couldn’t give a shit about research or actual good practice, but dear god they hate taking responsibility for stuff.


If you do that, you'll get dinged in your performance review for being insubordinate.


>> when stuff like this exists it’s pretty clear that large enterprises’ idea of security and risk management is all made up to sound good and lacks teeth.

For some reason your comment reminded me of a huge human failing ;-)

Imagine some security consultant running down a checklist of common problems. Number 15: does your software have any hardcoded passwords? The engineer actually thinks of one briefly, but dismisses it in his mind because "it's there for reason XYZ, and this guy doesn't know anything about our XYZ need, so it's not what he's talking about", and verbally says "no."

I can't tell you how many apparently smart people have similarly failed to plainly answer plain questions due to this kind of thinking. Not sure how guilty I am of course, but it drives me nuts when other people do it and I notice.


The problem is that the security checklists are often flawed in the other direction - they're poorly targeted towards the system under review, technically out-of-date, and wielded by rigid and inflexible thinkers who won't take anything but "No" for an answer.

A few years ago I filled out a checklist for a customer (for my entirely cloud-based business) that had the question "The company's file server is secured against physical intrusion: Yes or No?"

then...

"All physical access keys to the file server are under control of company management: Yes or No?"

Blech - what am I supposed to answer here?

In another example, a checklist asked us to verify that our codebase did not contain any instances of libraries implementing the MD5 algorithm. Of course, it was used in a number of places for innocuous, non-cryptographic purposes, which were hard to change due to backward compatibility. This one we couldn't squirm out of - and it took us three months to overcome the fact that we "failed" the security checklist because of that one question.

So, nearly every engineer who is forced to go through one of these stupid checklists learns they have to first transform the question into the mental space of their system, and then transform the technically correct answer into the mental space of the checklist author before determining exactly what to write down.


I think your MD5 scenario is a good example. Just answer the question honestly. The fact that the auditor doesn't understand that it isn't a security concern is a different problem that needs to be addressed. Yes, it's painful, but working through these issues is IMHO the best long-term solution.


You learn to just tell people what they want to hear. These checklists are basically useless.


What are some other questions others have failed to answer properly? (In addition to the password one)


I work for a security company that helps with the audit process.

Nobody on my team can tell you the first thing about SOC 2. It is an external selling point, not something actually adopted by the org.


definitely. i’ve been doing startups selling to enterprise customers for a while. audits and soc 2 compliance generally do lead to improvements or caught some things we’ve overlooked, but it certainly is not comprehensive. you’re checking boxes in some horrific xlsx file so some folks at Acme Corp can check boxes in another horrific system of theirs.

earlier on i tried to fight the good fight on some points and explain why it was unimportant or nonsensical in our context. that’s a brutal way to live though. i try to go with the flow and have more of an open mind these days. it’s easier for all parties involved. i just want them to write us a check.


The password doesn't look randomly-generated, suggesting it was manually generated - someone thought a hardcoded password is OK in this day and age (presumably this is relatively new).


Good reminder to run https://gitleaks.io on your projects


Would gitleaks have found this? I assume it would have, because the username contains ‘system’ and ‘user’.


The password was pretty low entropy; I wonder if that makes it harder for tools like GitLeaks to find? But the email address, yes, I guess.
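
For the curious: per-character Shannon entropy is one of the heuristics secret scanners lean on, and the leaked password does score noticeably below a random token of the same length, so an entropy-only rule could plausibly have missed it. A quick sketch:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: per-character Shannon entropy, the kind of heuristic
    // secret scanners use to spot high-entropy strings.
    final class Entropy {
        static double shannonBitsPerChar(String s) {
            Map<Character, Integer> counts = new HashMap<>();
            for (char c : s.toCharArray()) counts.merge(c, 1, Integer::sum);
            double h = 0.0;
            for (int n : counts.values()) {
                double p = (double) n / s.length();
                h -= p * (Math.log(p) / Math.log(2));
            }
            return h;
        }

        public static void main(String[] args) {
            // The leaked password repeats several characters: ~3.9 bits/char.
            System.out.println(shannonBitsPerChar("disabled1system1user6708"));
            // A 24-char token of all-distinct characters: ~4.6 bits/char (the maximum).
            System.out.println(shannonBitsPerChar("q9X!vR2#mL7$pK4@wZ8%nB6^"));
        }
    }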


We spend all this time inventing coding practices and even languages to make our software more secure and then we're reminded that even at the limit, shit like this would still be happening. Sigh.


I doubt anything Atlassian builds is anywhere near that limit.


This says that Atlassian are not running good static analysis on their app source code.

It also says they have either no good peer review, or their developers have no awareness of basic security.

Any decent SAST tool will flag up things like hard coded passwords in most instances.

A normal security audit of the app without access to the source code is unlikely to determine that. You would have to reverse engineer the app.
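
To make the SAST point concrete, this is roughly the anti-pattern such tools flag, and one common remediation (a sketch only; the class, field, and environment variable names are made up):

    // The anti-pattern most SAST rules look for: a credential in a string literal.
    class Before {
        private static final String SYSTEM_USER_PASSWORD = "disabled1system1user6708"; // flagged
    }

    // One common remediation: read the secret from the environment (or a vault),
    // so nothing sensitive is present in source or in the shipped artifact.
    class After {
        static String systemUserPassword() {
            String pw = System.getenv("SYSTEM_USER_PASSWORD");
            if (pw == null || pw.isEmpty()) {
                throw new IllegalStateException("SYSTEM_USER_PASSWORD is not set");
            }
            return pw;
        }
    }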


Is it even possible to write safe software in big teams?



