People here dislike HackerOne, but afaiu it solves this exact problem. It's the first line of ‘support’ for security reporters.
The fact that the industry currently needs this kind of solution is absurdly comedic. It would genuinely make sense to require people to pay ten bucks when posting a report: they'd only post if they thought the report was reasonable and that they'd get paid for it.
It’s a good platform, but it really doesn’t solve the time-sink problem, even if you pay for triage services. Triage can knock down well-known patterns of bogus stuff, but so can you; the real problem is the truly wacky stuff people come up with.
Hi @fractionalhare, apologies for the spam but I am trying to read a post you made a few months ago on implementing Exp [1] but it seems the site is down. Is there a different source perhaps? Many thanks for your time.
I'd never enter into a paid interaction without some sort of escrow that will make sure I'm not screwed over for the crime of being a good citizen. The victim could just keep the cost of the report without ever refunding it. Innocent people will keep getting caught by this even long after the company acquires a reputation for never providing refunds.
Yeah, so? Think about it. How do you make a blob of data unreadable by people on Thursday but readable on Friday? You can't do it without a trusted third party.
If Eth has a solution for this, it would mean it basically solves trust in general, which is a persistent pain in all of infosec.
Posting your email in a not-easily-parsable way can save you a lot of spam (rot13 it, break up the characters, etc.). At least that should cut most of the spam. This might not be standard, but I don't really see why we would need security.txt to be parsable by robots.
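For example, rot13 is a one-liner (a quick Python sketch; the address is made up):

```python
import codecs

email = "security@example.com"  # made-up address
obfuscated = codecs.encode(email, "rot13")  # rot13 shifts letters only; '@' and '.' survive
print(obfuscated)  # -> frphevgl@rknzcyr.pbz: trivial for a human, annoying for a scraper
```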
security.txt is a flag that you may have a bug bounty program, and as a result are a potential source of revenue.
It is time arbitrage between big companies taking security seriously (willing to pay large bounties) and that amount being higher than a monthly or yearly wage in some internet-connected regions. If they throw enough nets into the sea all year, eventually one pays off and they end up living quite well.
The proposal is to place the file at /.well-known/security.txt.
And even if it wasn't, there is plenty of namespace room to put every file someone argues for in a 2-page RFC at root. After all, there are only 1024 low-numbered TCP ports and we haven't run out of those yet.
Don’t get me wrong; I dig file-based interfaces, but each time they add another file, it’s another request.
And it’s Anglocentric to keep unnecessarily putting multiple English words into the path; unlike the file's content, the path can't be touched up by a later RFC to support, say, Japanese via a Lang attribute.
The whole thing is shit, I’m sorry. Just come up with something that makes fucking sense for once.
> A link or e-mail address for people to contact you about security issues. Remember to include "https://" for URLs, and "mailto:" for e-mails
Using a landing page should improve the signal/noise ratio. Google, for example, points to a landing page [0] and GitHub points to their HackerOne profile.
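For reference, a minimal file under the draft could look something like this (all values illustrative):

```
Contact: mailto:security@example.com
Contact: https://example.com/security
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/security-policy
Expires: 2026-01-01T00:00:00.000Z
```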
Honestly I think a "last reviewed" date, or a log of such dates, would be better because it aligns with the actual action the hostmaster takes, and thus gives the reader the most relevant facts instead of an arbitrary promise of future validity.
This. They did a bad job of explaining why they chose an expiration date in the draft RFC[1].
> If information and resources referenced in a "security.txt" file are incorrect or not kept up to date, this can result in security reports not being received by the organization or sent to incorrect contacts, thus exposing possible security issues to third parties.
Yes, the information could change after you write the file. No, it is not possible to know, when you write the file, at what future point the information will become incorrect. The document should have a "last reviewed" date, then the consumer can decide for themselves if it has been updated recently enough to be trustworthy.
I was literally going to craft a file and plop it on my site until I hit the "required expiration". I understand why it is there, but I think it should be optional. A better idea would be to steal from DNS and use TTLs and serial numbers (maybe standard HTTP Last-Modified is enough?) - the point is "this stuff might be stale, reprocess it".
The last thing I need is one more thing to have to remember and update.
By the looks of it, a few others feel it is non-critical and have just skipped it too.
Seems like a better solution that would accomplish the actual goal would be "refresh-after" where you could specify how many days a client should wait until asking again.
Zero maintenance required but still gives a rate-limiting and time window function.
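Something like this, say (the field is entirely hypothetical, not in the draft):

```
Contact: mailto:security@example.com
# Hypothetical: days a consumer should wait before re-fetching this file
Refresh-After: 90
```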
I'm not sure I buy into the idea, but it couldn't have been sold any better. That security.txt generator is such a great way to get people on board. The whole website is really good at explaining the project.
It’s cute, but it generates a five-line plain text file. I would argue that a better way to sell the idea would be to create Apache and nginx modules so you could specify this stuff from those config files. It would make adoption seem easier to more people.
Serving a 'text file' from a web server module seems to overcomplicate things in my view.
More code || complexity == greater likelihood of bugs (including security bugs).
As ironic as a security bug in a security.txt serving module would be, it's probably best we avoid that possibility and let the ordinary, highly scrutinised file serving code handle it instead.
If you want adoption, make it so easy that it’s harder not to include it. If you run an application, it will take lines of config to serve this file anyway. Might as well make it easy. And if you can’t make a bug-free module that writes out five lines into a static file… you probably shouldn’t be defining web standards.
Conversely, if you are providing a web based service on the public Internet and can't be assed to drop a five line text file in your home directory, you probably should not be running a server on the public Internet.
I think the text file generation is great - it's a standardized "syntax", so being able to just fill out your info on a webpage and get a .txt to upload to your server (instead of having to "learn" the couple of keys to use for your values) really does make it painless.
A lot of people have web hosting packages that don't give them direct access to the webserver configuration, but do let them upload arbitrary text files to their webroot.
Nothing prevents you from writing this file by hand. But for people who use shared hosting having a security.txt file is likely not as important as companies that run their own infrastructure. And adding yet another file to be deployed into the web root or to be served by your application is likely a touch more work than enabling the module and adding a line or three to the web server config. None of it is a lot of work, but in a sense using the web server’s config file is a lot less friction.
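For what it's worth, the "line or three" could be as small as this nginx sketch (the address is made up; an Apache stanza would be similarly short):

```nginx
# Serve security.txt straight from the server config - no file on disk.
location = /.well-known/security.txt {
    default_type text/plain;
    return 200 "Contact: mailto:security@example.com";
}
```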
One aspect that is not reflected in this format is that the site/company might have a specific routine for reporting vulns. When I happened to write to Node (iirc) about some potential problem, the mail was just redirected to HackerOne, converted to some kind of a draft, and I got an automatic response saying I need to create an account there. In true marginal-nerd fashion, I have some opinions on which of my email addresses go where, so the account remains uncreated and the problem unreported by me. And Node didn't specify anywhere that this reporting works through HackerOne.
(I also realize that this comment is probably not the right place to complain about the format, but eh.)
Yeah, worst part is when some support engineer asks you to please post the bug to a bug tracker, but the bug tracker requires an account, and when you try to sign up they make you wait for someone to review your account, and at some point you wonder if these people ever get a single bug report from a customer.
Though the UX designer in me thinks that if the policy is important, it would be better to put it up on a webpage and slap that into the ‘contact’ field, as a neighbor comment suggests. At least when the whole process turns out to sidestep email completely.
We've been indexing it for a while now and we haven't seen the number of websites that support it change significantly. It would make notifying organizations easier if this were a more widely adopted standard. This is what it looks like when you pull an IP that has a service with that file:
A top-level security.txt sounds like a better idea than hiding it under .well-known. I wouldn’t want anyone without access to
the web server’s root to be telling me what the security policy is, anyway.
Having it at top level makes it a sibling / analogous to robots.txt so there is some consistency to the pattern.
`.well-known` is already used for validation for many things, perhaps most crucially acme-challenge, which is used by Let's Encrypt to issue domain-validation certificates. Let's Encrypt is trusted by all major browsers at this point, so the consensus seems to be that .well-known must be kept secure at any cost. So even if you disagree with `.well-known`, it must de facto be kept in the innermost ring of your security model.
Right, which is why putting this file under .well-known is a small inconvenience.
It's increasingly common for server configurations to have a reverse proxy routing requests to internal containers or servers. Things like SSL renewals are often handled by the reverse proxy (because reasons [1]), so those requests don't get routed to the internal hosts by default.
Site-specific stuff, like this file, probably belongs in the site's root directory.
This is a bit of bike-shedding, though. It's only a small aggravation to work around.
[1]: Because you want to automate as much of the configuration as possible, so when a new hostname is added to a container or internal server, an SSL certificate just magically happens for it. This requires changes to the reverse proxy's configuration, and that's not something you want the internal containers doing, so it falls to the proxy to handle these itself. Letting the containers handle their own SSL setup means you have to have some kind of privileged communications channel from the container up to the reverse proxy, which is undesirable.
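In nginx terms the pattern looks roughly like this (hostnames and paths are made up):

```nginx
# The proxy answers ACME challenges itself and forwards everything else
# to the internal container, which never sees /.well-known requests.
location /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;  # written to by the proxy's cert automation
}
location / {
    proxy_pass http://app-container:8080;
}
```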
The problem is when there starts to be a lot of "site-specific stuff". It's easy to remember not to route .well-known to user-generated content in your app. It's less easy to remember a list of endpoints like robots.txt, security.txt, and so on. And what happens when that list grows? What if you already have a user called "security.txt" (or whatever the next one is)? This is why a .well-known prefix is valuable.
Right. This seems like a good pattern to have. Much like app/framework specific config, I much prefer to have everything under ~/.local/ than a myriad of ~/.foo dotfiles. It's only one endpoint to route, and you don't need to change configs each time you add some new static file.
IMHO, index.html and favicon should be the only files living at top level, and I guess robots.txt since that's a de facto standard (and soon to be an actual standard, iirc).
My point exactly - validation usually involves write permissions to put a challenge or something else as required by the protocol (ACME in your example). If I put the security.txt file there and certbot gets compromised, there goes my security policy. Putting security.txt one level up, so only root (i.e. me) can update it, lets me keep .well-known writable by robots only.
Ha. I see what you did there. But I'll take it literally and for others who aren't aware there's an IANA site describing all 'legal' and 'official' .well-known URIs[0].
Not just Let's Encrypt, if you want to demonstrate control over a web server to get your certificates in the Web PKI ("SSL Certificates") rather than doing something with DNS or emails or whatever, both your options are in the well-known registry.
Certificate Authorities doing ACME (like Let's Encrypt) use /.well-known/acme-challenge/ while those who've rolled their own solution or maybe just adapted some manual process for this modern era are required to use paths starting /.well-known/pki-validation/
Previously to this the problem was if you hand roll solutions a customer eventually says "Oh, I can't create http://example.com/proof.txt you asked for because of [some stupid corporate reason] how about if I name it http://example.com/proof.doc ? And also it will be a JPEG screenshot of an Excel spreadsheet with your text file in it". And eventually your frustrated employee says "Fine, fine, whatever it takes" and next thing you know your policies are so relaxed that a bad guy is able to get a certificate for example.com by uploading the proof file to http://pastebin.example.com/XQGBLP. Oops.
So the Ten Blessed Methods fix a bunch of stuff in this space, clearly control over pastebin.example.com shouldn't get you an example.com certificate for example, but also requiring the files go in specified places in /.well-known/ means chances are if other people can create those files you likely also have a dozen other directory traversal type vulnerabilities and maybe you need to check yourself before you wreck yourself.
Such level playing field rules prevent a race to the bottom for CAs because they don't need to worry that their competitors get an edge by being more lenient than they are, if a competitor breaks the rules you can just rat them out.
Dammit I was hoping this would include password requirements. I remember reading about a similar proposal on HN before.
The way I saw it described was a field for password security requirements, a field for an API URL to let you change a password, etc. This would allow password managers to easily change account passwords en masse. I suppose there are security risks as well, so maybe an email goes out saying "an automated password change request was made; these often come from your password manager, but only if you initiated it. If you want to approve this change, click here."
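Purely as a sketch of that idea (these fields are hypothetical; nothing like them exists in the draft):

```
Password-Requirements: min-length=12; classes=upper,lower,digit
Password-Change-API: https://example.com/api/v1/password-change
```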
But mainly, if you are responsible for a system and you're willing to do work to improve security, your first focus should be "implement WebAuthn so my users can stop worrying about passwords entirely", not "I wonder if more complicated password handling would help somehow?"
Also, how do I know whether to believe the link to the encryption key? That stuff should be in the HTTPS certificate, not a text file. Just use the server's public key to encrypt communications to the website owners.
Whois is not a good place for this data. Whois data is typically abused by spam bots (and most people don’t look there), it can’t be easily extended with security-specific info (a link to the encryption key? a link to the full security policy?), it works only for the registered domain (you can’t have different whois for maps.google.com and mail.google.com), and some registries might have policies that make it difficult to fetch WHOIS data (e.g. by blocking IPs of cloud providers, or by forcing you to go to a website to see full subscriber information).
> it can’t be easily extended with security-specific info
Just put a public key into the address field, for example. More abuse of field names is good because it will keep tripping up the bots that use e.g. the address field as a spam mail address or pass it to data brokers.
I'd love to see a data broker say "John Doe lives at === BEGIN PGP KEY === 0xA3243ABC3F... Do you want to dox them? Yes/No" and more spam mailers waste their money attempting to send ad mailers to "=== BEGIN PGP KEY === ..."
This is all done over TLS connections, including the link to the encryption key. So the provenance is already at the certificate level. Using PGP means that this provenance can be increased past that level if required.
I'm sure they'd be very excited if I sent them a PGP-encrypted email using a public key extracted from some possibly stale public cert of their HTTP server.
Public keys can be big, and why keep the key there if it is already available somewhere else? The draft allows three ways to reference a key:
- by URI
- by OPENPGPKEY DNS record
- by key fingerprint (assuming it's on a public keyserver)
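Roughly, the three forms look like this (values illustrative; exact syntax per the draft):

```
Encryption: https://example.com/pgp-key.txt
Encryption: dns:5d2d37ab76d47d36._openpgpkey.example.com?type=OPENPGPKEY
Encryption: openpgp4fpr:5f2de5521c63a801ab59ccb603d49de44b29100f
```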
IMO this totally solves the wrong problem. It's not really so much about "who do I contact if I find a security problem on a website"; it's more about the problem on the other side: "How do I separate the spam and low quality bug reports from actual defects, especially if I have a bug bounty program that attracts low quality bug reports."
I think what is needed more is a community-managed "reputation score" for security researchers that could be used to indicate who has submitted high quality defects in the past. I shit on pie-in-the-sky blockchain schemes all the time, but this actually seems like one where I could imagine it being useful, i.e like this:
1. A site owner publishes their security team's public key in a well known location, similar to what is described in the security.txt proposal
2. When a user submits a bug report that the site owner deems a real bug, the owner can sign a statement on the blockchain: "User X found a [high, medium, low] defect on our site."
3. Then, when a user wants to submit a defect to another site, they could show their past history of high-quality bug submissions, and those submissions could even be weighted by the prominence of the site (e.g. finding a high-impact defect on Apple would yield a high "reputation score").
> It's not really so much about "who do I contact if I find a security problem on a website"
Cannot confirm. I bought a windshield for my '01 Ford Focus and found a major security bug on their site [1] (they linked a JS file from a non-existent domain).
I talked to CERT and the clerks at the store, and tried to contact the owner on LinkedIn; heck, it was even published in one of the largest newspapers in my country, but I never found anyone who understood the problem or cared.
In the end the bug was fixed because I wrote to them on Facebook, and the kid whose job it was to manage their Facebook page was also the web admin.
Haha. It gets worse when the Facebook guy says "Oh, I get it, but I can't fix it and I won't try to get it fixed either," and you so wanna teach them a lesson but just can't.
One is a problem for the reporter, the other is a problem for the recipient.
As a reporter, if I can't find where to report it, you'll find out about your issue when someone forwards my blog post to you. If you ignore my e-mailed report because you don't want to spend the resources on it, e.g. because I haven't built a reputation score, same thing.
Most importantly: If the only way to report a security issue is through a platform with a "community managed reputation score", I'm much more likely to ignore that platform and again, you'll find out about your vulnerability from a blog post.
security.txt actually told me about a contact address for Cloudflare that isn't HackerOne. (HackerOne, in particular, is on my shitlist because they impose terms of service that deter disclosure unless the vendor agrees, and they don't let you publish through their platform if the vendor is unresponsive. If the only way to report to you is through HackerOne... see above.)
I'd be a bit careful if I were you. Bug bounty programs are after all the exception rather than the rule - the security equivalent of open sourcing software - an active decision by the company to sign away normal rights and normal legal protection.
If you find a vuln and publish it and the company does not have an explicit bug bounty program allowing such things, you may be sued or face other legal action.
I know several security researchers who have been sued for hacking, in many countries (mostly across US and Europe), because they assumed they were doing a "good thing", whereas the law doesn't care - it only cares about what is legal or not legal. Apart from the hacking charges, the very nature of bug bounties means it's pretty easy for the lawyers to add a coercion/blackmailing charge as well, which makes it more serious.
There are strong protections in the US regarding vulnerability disclosure due to freedom of speech. If you are able to run software that you own which doesn't have any anti-reverse-engineering ToS on your own computers, you are generally in the clear to publish knowledge of flaws that you find while inspecting the software on your computer.
This doesn't mean that you won't get sued, but it does increase your likelihood of winning such lawsuits when you haven't committed any crimes during your security research & disclosure.
You are not required to ever tell the affected parties at all, and afaik you are also free to stockpile and sell exploits as long as you only sell them domestically (IANAL & TINLA).
That's true, but in my experience vulnerability researchers mostly focus on the online presence/product of internet-active companies (the FAANGs of the world, and their smaller competitors - companies that could realistically be on HackerOne/BugCrowd without standing out like a sore thumb).
If you've bought some software you install on your computer - like the good old days ( :) ), it's more fair game as you said.
"The law" doesn't sue anyone so the company is the one that "doesn't care". The law (if in a functional system) doesn't follow the letter of the law but the spirit of the law. Otherwise we wouldn't need a court but just a clerk or a low-level AI.
There're only three categories IMO (besides black hat):
1) The researcher discloses a vulnerability the proper way and all's good
2) The "researcher" did something that could cause harm and was punished
3) The system is utterly broken
I have seen all happen and the ones people are up in arms about have always been in category 2 or 3. Your last sentence about blackmail is in category 2 as demanding money for a proper disclosure from someone without a bug bounty program is the definition of blackmail.
For anyone else who was wondering what the legal definition of blackmail might look like, 18 U.S.C. § 873:
"Whoever, under a threat of informing, or as a consideration for not informing, against any violation of any law of the United States, demands or receives any money or other valuable thing, shall be fined under this title or imprisoned not more than one year, or both."
Coercion/blackmail is going to be hard to argue when you've never asked the company for anything, nor made any offer that would involve them giving you anything.
I consider going public directly less risky than talking to the company first: They're much more likely to make legal threats/try to sue you if they think it can help hide their embarrassment. Once public, that incentive goes away, they're in the public eye, if they do go after you they can no longer prevent the disclosure, and you have a decent chance that the additional attention this generates will make them reconsider before you have to spend money on lawyers.
Also you are assuming that the recipients of those reports, the websites, would be truthful in their acknowledgement of issues and would not abuse their new power over researchers' reputations. Especially when each accepted bug leaves a permanent public record.
Right. Not to mention, what's to prevent an unscrupulous "researcher" from paying companies - or starting their own fake ones - to enter positive feedback, etc?
Sorry to burst your bubble, hn_throwaway_99, but this is the same kind of nonsense idea that you typically scoff at. It offers no new benefits, while adding new problems.
I don’t believe that paying for your reputation was what the poster meant. A blockchain can be used for basically “write once”, non-alterable records that you can prove you “own”, without any currency aspects. That is basic blockchain tech. However, it would cost money to run the machines that manage the blockchain, and people would not do that without an economic incentive, so maybe you’re right?
How did you arrive at a blockchain? That's putting the cart before the horse, so to speak. Just do what these real-world sites already do, put up a list of researchers and their bounties.
Nobody is going to download a blockchain client from a bounty website in order to look at a text file. That's like the security researcher interviewing an employer.
I think what you are actually referencing is a distributed database, not a blockchain. But, the database you mentioned in your post isn't distributed, it's centralized...
> especially if I have a bug bounty program that attracts low quality bug reports
IMO security.txt provides value for when you DON'T have a bug bounty program. If you don't already have a 'front door' for how to send you security information, you're not going to get it.
Also, I would put some money on someone finding a show stopper bug like shellshock/heartbleed/deserialization-bug-of-the-week/etc and contacting folks from their security.txt in sync with their public release sometime within the next few years.
It always intrigued me: why is it somebody else's job to secure a company's website? This is completely backwards. Rather than investing into security, they let somebody else fix the problems while leaving their users exposed. At this point, it is more ethical to sell whatever vulnerabilities you find to the black market than to "ethically" disclose them.
>why is it somebody else's job to secure a company's website?
Some people find bug bounties to be lucrative, especially in low cost of living countries. Other people find them fun. Other people find they look good on resumes. But no one is required to participate in them. If you don't want to spend your time looking for vulnerabilities in other people's software, don't do it.
>Rather than investing into security
Running a bounty program costs money, both to pay the participants as well as to pay employees to investigate the reports (most of which are junk). Also it's not a one or the other. You can run your own internal red team while also running a bug bounty program.
>they let somebody else fix the problems while leaving their users exposed
It's usually not the bug bounty participants who fix the problem. Usually the bug bounty participant reports the problem, then the company fixes it.
>At this point, it is more ethical to sell whatever vulnerabilities you find to the black market than to "ethically" disclose them.
Why is it more ethical to sell them on the black market? The black market is composed of people who actively want to harm others for their own benefit. Seeking them out and selling them tools specifically for that purpose is unethical. I don't see what's ethically wrong with reporting a vulnerability to a company through its bug bounty program.
There are of course other options besides those 2. Full disclosure for example.
Also you could sell it on the grey market to people who promise to only use it for legal purposes (e.g. for governments to legally hack people). https://zerodium.com/
I don't think this should be a standard per se; maybe more of a best fit for your risk appetite. I could easily see this as a flag welcoming people to attack your website looking for bounties. If that is the case, how are your blue team people going to know the difference?
As far as the info contained within the txt file, there should be only an email address or contact info for if you find something serious, and absolutely nothing more. No reason to intentionally or unintentionally provide information useful for recon.
About the automated scanners... adjust your scope to avoid the file.
Seems like a reasonably good idea. More of a question about RFCs than the spec itself: I noticed while reading the RFC that it mentions this:
> By convention, the file is named "security.txt".
Isn’t the point of the RFC so that it wouldn’t be by convention anymore? Or are they saying the idea for the name was from prior conventions? Or maybe I’m just reading into it too much and it doesn’t really matter.
I agree; the phrasing there definitely needs work. Overall I would say most RFCs are substandard, especially when compared to ISO standards, which tend to be extremely precise. That being said, interoperability within tech is amazing, so perhaps there is something to be said for loose standards and working code after all.
On a more serious note, all these sitemap and bot-friendly standards for webpages always tend to fail. Even RSS, which is probably the most important standard in this space, has issues getting more adoption.
Since it has a structured data format, computers will parse it sooner or later. Why bother setting off comments with `#` at all if it's purely for human consumption? Why even have a standard? security.txt could just be "this is the place where you write whatever you want to about security; here are some suggestions as to what other people might find useful".
I'm not hearing good arguments for not making it TOML...
Barring that (not much space for public keys and preference flags, I guess), how about DNS TXT records?
Just some file at the root of the webserver seems like a security issue in itself. Typically, fewer people have keys to DNS. Also, rare today, but not all domains have webservers.
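To sketch the DNS idea (entirely hypothetical - no such record is standardized):

```
; zone-file sketch, made-up record content
example.com.  3600  IN  TXT  "security-contact=mailto:security@example.com"
```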
In my experience WHOIS is basically useless now. All of my domains are registered through a privacy service and every time I've gone to check some sketchy domain's WHOIS out in recent memory I've found the same thing.
Well, .well-known should be well guarded anyway, since it is used for things like issuing SSL certs by Let's Encrypt (and possibly more CAs). Giving anyone access to the root of your webserver is giving them access to your website.
.txt is a file extension designating a plain text file. But yes, new well-known files of this sort intended to be included on websites, like humans.txt and now security.txt, do follow the naming convention of robots.txt.
> This site is only using mm/dd/yyyy for the generator input
If this is intended to be some kind of standard, it should follow other standards. I understand that this is "only" in the generator, but why even there?
No, as @madeofpalk mentioned in response to my comment, it is using the standard date input. What you actually see depends on the locale of the browser (I see mm/dd/yyyy in Chrome set to US English, and dd/mm/yyyy in FF set to French)
The language used in this draft is way too political and not technical/objective enough; therefore I think it causes more security risks than it resolves.
Example: from line #1, "proper channels" - this is subjective as hell.
Part of me thinks that the root index.html should have link or meta tags for any and all of these. HTTP headers would also work. I get that robots.txt and favicon.ico are historical and humans.txt is cutesy, but this is getting kind of polluted. Perhaps I'm missing some obvious reason for why we're still doing the .txt thing?
> Part of me thinks that the root index.html should have link or meta tags for any and all of these. HTTP headers would also work.
As it is, anything interested in MTA-STS can get the associated well-known URL, and everything else doesn't care. If we put that into index.html or every HTTP header, then everybody is burdened even if they don't care.
> Perhaps I'm missing some obvious reason for why we're still doing the .txt thing?
If there's a significant software stack in which you're more likely to get what you expected from foo.txt than from naming it just FOO or foo then unless there is also a significant stack in which foo.txt can't work you'd be crazy to insist on not having the extension.
The IETF and W3C do not have enforcement arms, so if they standardise something that's harder to get right, fewer people get it right, harming its adoption. Sometimes that's just the price you must pay for something that actually delivers on its intended purpose - but often it is not, and you should look for the simplest thing that works.
It apparently attracted automated scanners and the signal to noise ratio was atrocious.