Security.txt (securitytxt.org)
746 points by tosh on March 14, 2021 | 165 comments



The last time this came up on HN it got quite a negative review from someone who had tried it on several sites: https://news.ycombinator.com/item?id=19152145

It apparently attracted automated scanners and the signal to noise ratio was atrocious.


People here dislike HackerOne, but afaiu it solves this exact problem. It's the first line of ‘support’ for security reporters.

The fact that the industry currently needs this kind of solution is absurdly comedic. Basically, it would make actual sense to require people to pay ten bucks when posting a report, so that they only post if they think the report is reasonable and that they would get paid for it.


It’s a good platform, but it really doesn’t solve the time-sink problem, even if you pay for triage services. Triage can knock down well-known patterns of bogus stuff, but so can you; the real problem is the truly wacky stuff people come up with.


I can see and read the JavaScript source code if I intercept the requests! Please pay me $100.


Hi @fractionalhare, apologies for the spam but I am trying to read a post you made a few months ago on implementing Exp [1] but it seems the site is down. Is there a different source perhaps? Many thanks for your time.

[1] https://www.pseudorandom.com/implementing-exp


I'd never enter into a paid interaction without some sort of escrow that will make sure I'm not screwed over for the crime of being a good citizen. The victim could just keep the cost of the report without ever refunding it. Innocent people will keep getting caught by this even long after the company acquires a reputation for never providing refunds.


Since the escrow sees your report, it sounds like HackerOne.

HO will just create paid tiers where, for a smallish subscription price, people actually take your reports seriously.


Smart contracts on the Ethereum blockchain... force them to set a date to pay you and go public. They can't back out.

It's going to need a lot of work in the next few years to make something like this viable, but ETH could make it possible.


Are you saying that Eth can somehow withhold info until a future time when it automatically becomes public? How would that work?


Ethereum has a Turing-complete programming language inside


Yeah, so? Think about it. How do you make a blob of data unreadable by people on Thursday but readable on Friday? You can't do it without a trusted third party.

If Eth has a solution for this, it would mean it basically solves trust in general, which is a persistent pain in all of infosec.


Posting your email in a not-easily-parsable way can save you a lot of spam (rot13 it, break up characters, etc.). At least that should cut most of the spam. This might not be standard, but I do not really see why we would need security.txt to be parsable by robots.
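
For example, a quick sketch of the rot13 trick in Python (the address is a placeholder):

    import codecs

    # rot13 the address before publishing it; bots scraping for
    # mailto-looking strings won't match it.
    obfuscated = codecs.encode("security@example.com", "rot13")
    print(obfuscated)  # frphevgl@rknzcyr.pbz

    # rot13 is its own inverse, so a human reverses it the same way.
    print(codecs.encode(obfuscated, "rot13"))  # security@example.com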


I just use GnuPG and encrypt it.


encrypt with my private key?


security.txt is a flag that you may have a bug bounty program, and as a result are a potential source of revenue.

It is time arbitrage: big companies take security seriously (and will pay large bounties), and those bounties are higher than a monthly or yearly wage in some internet-connected regions. If they throw enough nets into the sea all year, eventually one pays off and they end up living quite well.


What does it mean when I enabled this on my personal (for fun) websites years ago on a whim? I don't have a bug bounty program.


Putting all of these files at root is going to be like having old rusty cars and stained mattresses in front of your house years from now.

The web is not a junkyard.


The proposal is to place the file at /.well-known/security.txt.

And even if it wasn't, there is plenty of namespace room to put every file someone argues for in a 2-page RFC at root. After all, there are only 1024 low-numbered TCP ports and we haven't run out of those yet.


Don’t get me wrong; I dig file-based interfaces, but each time they add another file, it’s another request.

And it's Anglocentric to continue to unnecessarily put multiple English words into the path; unlike the file content, which a later RFC could localize (say, adding Japanese via a Lang attribute), the path can't be touched up afterwards.

The whole thing is shit, I'm sorry. Just come up with something that makes fucking sense for once.


Trap them with a honeypot: disable HSTS and automatically delete all messages containing the string "HSTS".


Does this work?


This would be my concern. It seems like a really good tool to attract the attention of bad actors.


The Contact field can point to a landing page.

> A link or e-mail address for people to contact you about security issues. Remember to include "https://" for URLs, and "mailto:" for e-mails

Using a landing page should improve the signal/noise ratio. Google, for example, points to a landing page [0] and GitHub points to their HackerOne profile.

[0] https://g.co/vulnz

[1] https://hackerone.com/github
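
A minimal security.txt along those lines might look like this (the Expires value is a placeholder, in the format the generator emits):

    Contact: https://g.co/vulnz
    Expires: Tue, 14 Mar 2022 00:00 +0000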



The beautiful part of these is they show exactly what happens with these types of files, in that only one of them implements the spec as linked.

(Expires isn’t optional in the proposal on the website.)


Their own security.txt also fails to do this:

https://securitytxt.org/.well-known/security.txt

> # If you would like to report a security issue

> # you may report it to us on HackerOne.

> Contact: https://hackerone.com/ed

> Encryption: https://keybase.pub/edoverflow/pgp_key.asc

> Acknowledgements: https://hackerone.com/ed/thanks



I bet these "expired" security.txts will become more common than unexpired ones in the near future. Updating a date every year sounds annoying.


Nah, just write a script to update it every 3 months!
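
A sketch of such a script in Python, suitable for cron (the file path and the 90-day window are assumptions; the date format mirrors the generator output quoted elsewhere in this thread):

    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    path = Path("/var/www/.well-known/security.txt")

    # Push the expiry 90 days out from whenever the script runs.
    expires = (datetime.now(timezone.utc) + timedelta(days=90)).strftime(
        "%a, %d %b %Y %H:%M %z")

    # Drop any existing Expires line and append a fresh one.
    lines = [l for l in path.read_text().splitlines()
             if not l.startswith("Expires:")]
    lines.append("Expires: " + expires)
    path.write_text("\n".join(lines) + "\n")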


In fact I think requiring an expiry date is a huge negative of the spec and will likely hinder adoption.

An expiry date brings along with it yet another maintenance burden for questionable benefit.


Well, the spec only recommends that the date be no further than a year into the future.

So if you really don't want the burden, just set a date in the year 9999 or something.


It would be way better to not have to "game" the value there if it is going to be garbage data.


Forcing workarounds in implementations so your spec is simpler is the epitome of why standards and specs fail so much.

Design is hard. Good design makes implementation simple.


Honestly I think a "last reviewed date" or log of dates would be better because it aligns with the actual action that the hostmaster takes, and thus provides the reader with the most relevant facts instead of an arbitrary promise of future validity.


This. They did a bad job of explaining why they chose an expiration date in the draft RFC[1].

> If information and resources referenced in a "security.txt" file are incorrect or not kept up to date, this can result in security reports not being received by the organization or sent to incorrect contacts, thus exposing possible security issues to third parties.

Yes, the information could change after you write the file. No, it is not possible to know, when you write the file, at what future point the information will become incorrect. The document should have a "last reviewed" date, then the consumer can decide for themselves if it has been updated recently enough to be trustworthy.

1: https://tools.ietf.org/html/draft-foudil-securitytxt-11#sect...


I was literally going to craft a file and plop it on my site until I hit the "required expiration". I understand why it is there, but think it should be optional. I think a better idea would be to steal from DNS and use TTLs and serial numbers (maybe the standard HTTP Last-Modified header is enough?) - the point is "this stuff might be stale, reprocess it".

The last thing I need is one more thing to have to remember and update.

By the looks of it, a few others feel it is non-critical and have just skipped it too.


> I think a better idea would be to steal from DNS and use TTL and serial numbers (maybe just standard http last-modified is enough?)

HTTP already has an Expires header: https://tools.ietf.org/html/rfc7234#section-5.3
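
For example (an illustrative response header, date made up):

    Expires: Thu, 01 Apr 2021 00:00:00 GMT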


It seems like a better solution, one that would accomplish the actual goal, would be a "refresh-after" field where you could specify how many days a client should wait until asking again.

Zero maintenance required, but it still gives a rate-limiting and time-window function.


Expires was optional until draft 10 (August 2020)


What a beautiful world we would be in if RFCs were required to include some sort of test suites wherever possible.


It should be noted that this is not yet an RFC, so compliance with it cannot be tested against an RFC.


Your comment made me think about using Google Search in an alternative way, to figure out who is hiring at a glance:

https://www.google.com/search?q=hiring+well+known+security+f...

Edit: If one is not up for the two minutes it takes to parse some publicly available list.


Facebook and LinkedIn do as well (but Microsoft, Apple, Amazon, Twitter, Yahoo, Netflix, Stackoverflow & Salesforce do not).


I'm not sure I buy into the idea, but it couldn't have been sold any better. That security.txt generator is such a great way to get people on board. The whole website is really good at explaining the project.


It's cute, but it generates a five-line plain text file. I would argue that a better way to sell the idea would be to create Apache and nginx modules so you could specify this stuff from those config files. It would make adoption seem easier to more people.


Serving a 'text file' from a web server module seems to overcomplicate things in my view.

More code || complexity == greater likelihood of bugs (including security bugs).

As ironic as a security bug in a security.txt serving module would be, it's probably best we avoid that possibility and let the ordinary, highly scrutinised file serving code handle it instead.


If you want adoption, make it so easy that it's harder to not include it. If you run an application, it will take lines of config to serve this file anyways. Might as well make it easy. And if you can't make a bug-free module that writes out 5 lines into a static file… probably shouldn't be defining web standards.


Conversely, if you are providing a web based service on the public Internet and can't be assed to drop a five line text file in your home directory, you probably should not be running a server on the public Internet.


I think the text file generation is great - it is a standardized "syntax", so being able to just fill out your info in a webpage and getting the .txt to upload to a server (instead of having to "learn" the couple of keys to use for your values) really does make it painless.


A lot of people have web hosting packages that don't give them direct access to the webserver configuration, but do let them upload arbitrary text files to their webroot.


Nothing prevents you from writing this file by hand. But for people who use shared hosting having a security.txt file is likely not as important as companies that run their own infrastructure. And adding yet another file to be deployed into the web root or to be served by your application is likely a touch more work than enabling the module and adding a line or three to the web server config. None of it is a lot of work, but in a sense using the web server’s config file is a lot less friction.
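
For illustration, a sketch of serving it straight from nginx config, no module needed (the contact and date are placeholders):

    location = /.well-known/security.txt {
        default_type text/plain;
        return 200 "Contact: mailto:security@example.com\nExpires: Tue, 14 Mar 2022 00:00 +0000\n";
    }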


That would make things much easier, especially having the expiration date automatically update. But also to set a default for all sites on a server.

It would also be nice to have libraries for the popular frameworks


Yes! Make this a Django app and I will use it by default. Make me add it as a text file and I’ll forget to add it.
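
As a sketch of what such an app might boil down to (a hypothetical Django view; values are placeholders):

    from django.http import HttpResponse
    from django.urls import path

    def security_txt(request):
        # Serve security.txt from code so it deploys with the app.
        body = ("Contact: mailto:security@example.com\n"
                "Expires: Tue, 14 Mar 2022 00:00 +0000\n")
        return HttpResponse(body, content_type="text/plain")

    urlpatterns = [
        path(".well-known/security.txt", security_txt),
    ]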


Not everyone runs their own nginx and Apache servers, like the gazillion WordPress hosting sites, so it's better the cute way.


One aspect that is not reflected in this format is that the site/company might have a specific routine for reporting vulns. When I happened to write to Node (iirc) about some potential problem, the mail was just redirected to HackerOne, converted to some kind of a draft, and I got an automatic response saying I need to create an account there. In true marginal-nerd fashion, I have some opinions on which of my email addresses go where, so the account remains uncreated and the problem unreported by me. And Node didn't specify anywhere that this reporting works through HackerOne.

(I also realize that this comment is probably not the right place to complain about the format, but eh.)


I'd have responded the same way.

I'm not creating an account just to do someone else a favour.

I will send an email and that's it. If you don't have an email address then you're not getting a message from me.

It's disappointing how frequently this comes up.


Yeah, worst part is when some support engineer asks you to please post the bug to a bug tracker, but the bug tracker requires an account, and when you try to sign up they make you wait for someone to review your account, and at some point you wonder if these people ever get a single bug report from a customer.


This is what the Policy: key in the format is for.


You're right, that would work.

Though the UX designer in me thinks that if the policy is important, it would be better to put it up on a webpage and slap that into the 'contact' field, as a neighbor comment suggests. At least when the whole process turns out to sidestep email completely.


You can put a URL on the contact field.


Ah! Indeed, I missed that alternative.


This is already used by quite a few organizations/websites:

https://beta.shodan.io/search/report?query=http.securitytxt%...

We've been indexing it for a while now and we haven't seen the number of websites that support it change significantly. It would make notifying organizations easier if this was a more widely-adopted standard. This is how it looks when you pull an IP that has a service with that file:

https://beta.shodan.io/host/5.28.193.120


(One of the authors here)

Make sure you read through the actual latest draft (especially section 6): https://tools.ietf.org/html/draft-foudil-securitytxt-11

Also, we are in the end stages of the IETF approval process so this should be official later this year (if all goes well): https://datatracker.ietf.org/doc/draft-foudil-securitytxt/


why is there no expires field on https://securitytxt.org/.well-known/security.txt


Strangely, the draft just shows up as an empty page for me in Firefox, but Chromium works fine.


It's likely some kind of caching issue. tools.ietf.org at 4.31.198.62 responds with the draft, but 64.170.98.42 404s



Yes, it's throwing a 404 for me on Firefox as well.


It did that for me at first, but it was due to my impatience.

I tried again and waited, and after 10 or 15 seconds, the page finally loaded.


That's not impatience, the page is broken, unless your network is super slow.


Same here, but then it started working once I opened Developer Tools and refreshed the page.


Not sure if there's something new here, but this has popped up before on HN.

https://news.ycombinator.com/item?id=19151213 (2019) https://news.ycombinator.com/item?id=15416198 (2017)


A top-level security.txt sounds like a better idea than hiding it under .well-known. I wouldn’t want anyone without access to the web server’s root to be telling me what the security policy is, anyway.

Having it at top level makes it a sibling of / analogous to robots.txt, so there is some consistency to the pattern.


`.well-known` is already used for validation for many things, perhaps most crucially acme-challenge which is used by LetsEncrypt to issue domain validation certificates. LetsEncrypt is trusted by all major browsers at this point so it seems that the consensus is that .well-known must be kept secure at any cost. So even if you disagree with `.well-known` it must de facto be kept in the inner-most ring of your security model.


Right, which is why putting this file under .well-known is a small inconvenience.

It's increasingly common for server configurations to have a reverse proxy routing requests to internal containers or servers. Things like SSL renewals are often handled by the reverse proxy (because reasons [1]), so those requests don't get routed to the internal hosts by default.

Site-specific stuff, like this file, probably belongs in the site's root directory.

This is a bit of bike-shedding though. It's only a small aggravation to work around.

[1]: Because you want to automate as much of the configuration as possible, so when a new hostname is added to a container or internal server, an SSL certificate just magically happens for it. This requires changes to the reverse proxy's configuration, and that's not something you want the internal containers doing, so it falls to the proxy to handle these itself. Letting the containers handle their own SSL setup means you have to have some kind of privileged communications channel from the container up to the reverse proxy, which is undesirable.


The problem is when there starts to be a lot of "site-specific stuff". It's easy to remember not to route .well-known to user-generated content in your app. It's less easy to remember a list of endpoints, like robots.txt, security.txt, and so on. And what happens when that list grows? What if you already have a user called "security.txt" (or whatever the next one is)? This is why a .well-known prefix is valuable.


Right. This seems like a good pattern to have. Much like app/framework specific config, I much prefer to have everything under ~/.local/ than a myriad of ~/.foo dotfiles. It's only one endpoint to route, and you don't need to change configs each time you add some new static file.

IMHO, index.html and favicon should be the only files living at top level, and I guess robots.txt since that's a de facto standard (and soon to be actual standard iirc)


It would be great to move robots.txt to .well-known instead.

Same with favicons, although you can already point to an arbitrary favicon path from index.html.


My point exactly - validation usually involves write permissions to put a challenge or something else as required by the protocol (ACME in your example). If I put the security.txt file there and certbot gets compromised, there goes my security policy. Putting security.txt up one level so only root (i.e. me) can update it allows me to keep .well-known writable by robots only.


It was defined in an RFC before Let's Encrypt existed: https://tools.ietf.org/html/rfc5785


How widely adopted is .well-known? I had never heard of it before.


Ha. I see what you did there. But I'll take it literally and for others who aren't aware there's an IANA site describing all 'legal' and 'official' .well-known URIs[0].

[0] - https://www.iana.org/assignments/well-known-uris/well-known-...


Pretty common. Let's Encrypt uses it, a few other things I've run across, Matrix homeservers, etc.


Not just Let's Encrypt, if you want to demonstrate control over a web server to get your certificates in the Web PKI ("SSL Certificates") rather than doing something with DNS or emails or whatever, both your options are in the well-known registry.

Certificate Authorities doing ACME (like Let's Encrypt) use /.well-known/acme-challenge/ while those who've rolled their own solution or maybe just adapted some manual process for this modern era are required to use paths starting /.well-known/pki-validation/

Before this, the problem was that if you hand-roll solutions, a customer eventually says "Oh, I can't create http://example.com/proof.txt you asked for because of [some stupid corporate reason] how about if I name it http://example.com/proof.doc ? And also it will be a JPEG screenshot of an Excel spreadsheet with your text file in it". And eventually your frustrated employee says "Fine, fine, whatever it takes" and next thing you know your policies are so relaxed that a bad guy is able to get a certificate for example.com by uploading the proof file to http://pastebin.example.com/XQGBLP. Oops.

So the Ten Blessed Methods fix a bunch of stuff in this space, clearly control over pastebin.example.com shouldn't get you an example.com certificate for example, but also requiring the files go in specified places in /.well-known/ means chances are if other people can create those files you likely also have a dozen other directory traversal type vulnerabilities and maybe you need to check yourself before you wreck yourself.

Such level playing field rules prevent a race to the bottom for CAs because they don't need to worry that their competitors get an edge by being more lenient than they are, if a competitor breaks the rules you can just rat them out.


it's defined in RFCs 5785 and 8615. not our fault you don't know .well-known


Dammit I was hoping this would include password requirements. I remember reading about a similar proposal on HN before.

The way I saw it described was a field for password security requirements, a field for an API URL to let you change a password, etc. This would allow password managers to easily and simply change account passwords en masse. I suppose there are security risks as well, so maybe an email going out saying "an automated password change request was made; these often come from your password manager, but only if you initiated it. If you want to approve this change, click here".


You might be thinking about:

https://github.com/apple/password-manager-resources

or the related:

https://github.com/w3c/webappsec-change-password-url

But mainly if you are responsible for a system and you're willing to do work to improve security your first focus should be "implement WebAuthn so my users can stop worrying about passwords entirely" not "I wonder if more complicated password handling would help somehow?"


There's /.well-known/change-password to redirect to the feature on your website to help password managers. https://web.dev/change-password-url/


If curious, past threads:

Show HN: Scanning the Web for Security.txt Files - https://news.ycombinator.com/item?id=25605164 - Jan 2021 (33 comments)

The “security.txt” proposal reached last step in the IETF process - https://news.ycombinator.com/item?id=21766868 - Dec 2019 (89 comments)

Security.txt (2017) - https://news.ycombinator.com/item?id=19151213 - Feb 2019 (54 comments)

Security.txt - https://news.ycombinator.com/item?id=15416198 - Oct 2017 (141 comments)


Why not just put this info into whois?

Also, how do I know whether to believe the link to the encryption key? That stuff should be in the HTTPS certificate, not a text file. Just use the server's public key to encrypt communications to the website owners.


Whois is not a good place for this data. Whois data is typically abused by spam bots (and most people don’t look there), it can’t be easily extended with security-specific info (a link to the encryption key? a link to the full security policy?), it works only for the registered domain (you can’t have different whois for maps.google.com and mail.google.com), and some registries might have policies that make it difficult to fetch WHOIS data (eg. by blocking IPs of cloud providers, or by forcing you to go to a website to see full subscriber information).


If security.txt takes off it will be abused by spam bots also


> it can’t be easily extended with security-specific info

Just put a public key into the address field, for example. More abuse of field names is good because it will keep tripping up the bots that use e.g. the address field as a spam mail address or pass it to data brokers.

I'd love to see a data broker say "John Doe lives at === BEGIN PGP KEY === 0xA3243ABC3F... Do you want to dox them? Yes/No" and more spam mailers waste their money attempting to send ad mailers to "=== BEGIN PGP KEY === ..."


This is all done over TLS connections, including the link to the encryption key. So the provenance is already at the certificate level. Using PGP means that this provenance can be increased past that level if required.


I'm sure they'd be very excited if I sent them a PGP-encrypted email message using a public key extracted from some possibly stale public cert of their HTTP server.


Why not just put the PEM encoded key straight into this file instead of putting it at a separate URL?


Public keys can be big, and why keep the key there if it is already available somewhere else? Also, the draft allows three ways to add a key: by URI, by OPENPGPKEY DNS record, or by key fingerprint (assuming it's on a public keyserver).
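
For illustration, the three forms might look like this (values modeled on the draft's examples; placeholders here):

    Encryption: https://example.com/pgp-key.txt
    Encryption: dns:5d2d3ceb7abe552344276d47d36a8175b7aeb250a9bf0bf00e850cd23ecf5727._openpgpkey.example.com?type=OPENPGPKEY
    Encryption: openpgp4fpr:5f2de5521c63a801ab59ccb603d49de44b29100f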


IMO this totally solves the wrong problem. It's not really so much about "who do I contact if I find a security problem on a website", it's more about the problem on the other side "How do I separate the spam and low quality bug reports from actual defects, especially if I have a bug bounty program that attracts low quality bug reports."

I think what is needed more is a community-managed "reputation score" for security researchers that could be used to indicate who has submitted high-quality defects in the past. I shit on pie-in-the-sky blockchain schemes all the time, but this actually seems like one where I could imagine it being useful, i.e. like this:

1. A site owner publishes their security team's public key in a well known location, similar to what is described in the security.txt proposal

2. When a user submits a bug report to some site that the owner deems is a bug, the user can sign something on the blockchain that states "User X found a [high,medium,low] defect on our site."

3. Then, when a user wants to submit a defect to another site, they could show their past history of high-quality bug submissions to past sites, and those submissions could even be scored based on the stature of the site owner (e.g. finding a high-impact defect on Apple would result in a high "reputation score").


> It's not really so much about "who do I contact if I find a security problem on a website"

Cannot confirm. I bought a windshield for my '01 Ford Focus and found a major security bug on their site [1] (they linked a JS file from a non-existent domain).

I talked to CERT, talked to the clerks at the store, tried to contact the owner on LinkedIn; heck, it was even published in one of the largest newspapers of my country, but I never got anyone who understood the problem or cared.

In the end the bug was fixed because I wrote them on Facebook, and the kid whose job it was to manage their Facebook page was also the web admin.

[1] https://blog.haschek.at/2019/threat-vector-legacy-static-web...


Haha, it gets worse when the Facebook employee says "Oh I get it, but I can't fix it and I won't try to get it fixed either." And you so wanna teach them a lesson but just can't.


Or when you submit the bug report, and get a response from a lawyer threatening you with legal action, or worse.


sounds familiar too :D


One is a problem for the reporter, the other is a problem for the recipient.

As a reporter, if I can't find where to report it, you'll find out about your issue when someone forwards my blog post to you. If you ignore my e-mailed report because you don't want to spend the resources on it, e.g. because I haven't built a reputation score, same thing.

Most importantly: If the only way to report a security issue is through a platform with a "community managed reputation score", I'm much more likely to ignore that platform and again, you'll find out about your vulnerability from a blog post.

security.txt actually told me about a contact address for Cloudflare that isn't HackerOne. (HackerOne, in particular, is on my shitlist because they impose terms of service that deter disclosure unless the vendor agrees, and they don't let you publish through their platform if the vendor is unresponsive. If the only way to report to you is through HackerOne... see above.)


I'd be a bit careful if I were you. Bug bounty programs are after all the exception rather than the rule - the security equivalent of open sourcing software - an active decision by the company to sign away normal rights and normal legal protection.

If you find a vuln and publish it and the company does not have an explicit bug bounty program allowing such things, you may be sued or face other legal action.

I know several security researchers who have been sued for hacking, in many countries (mostly across US and Europe), because they assumed they were doing a "good thing", whereas the law doesn't care - it only cares about what is legal or not legal. Apart from the hacking charges, the very nature of bug bounties means it's pretty easy for the lawyers to add a coercion/blackmailing charge as well, which makes it more serious.


There are strong protections in the US regarding vulnerability disclosure due to freedom of speech. If you are able to run software that you own which doesn't have any anti-reverse-engineering ToS on your own computers, you are generally in the clear to publish knowledge of flaws that you find while inspecting the software on your computer.

This doesn't mean that you won't get sued, but it does increase your likelihood of winning such lawsuits when you haven't committed any crimes during your security research & disclosure.

You are not required to ever tell the affected parties at all, and afaik you are also free to stockpile and sell exploits as long as you only sell them domestically (IANAL & TINLA).


That's true, but in my experience vulnerability researchers mostly focus on the online presence/product of internet-active companies (the FAANGs of the world, and their smaller competitors - companies that could realistically be on HackerOne/BugCrowd without standing out like a sore thumb).

If you've bought some software you install on your computer - like the good old days ( :) ), it's more fair game as you said.


"The law" doesn't sue anyone so the company is the one that "doesn't care". The law (if in a functional system) doesn't follow the letter of the law but the spirit of the law. Otherwise we wouldn't need a court but just a clerk or a low-level AI.

There're only three categories IMO (besides black hat):

1) The researcher disclose a vulnerability the proper way and all's good

2) The "researcher" did something that could cause harm and was punished

3) The system is utterly broken

I have seen all happen and the ones people are up in arms about have always been in category 2 or 3. Your last sentence about blackmail is in category 2 as demanding money for a proper disclosure from someone without a bug bounty program is the definition of blackmail.


For anyone else who was wondering what the legal definition of blackmail might look like, 18 U.S.C. § 873:

"Whoever, under a threat of informing, or as a consideration for not informing, against any violation of any law of the United States, demands or receives any money or other valuable thing, shall be fined under this title or imprisoned not more than one year, or both."


Coercion/blackmail is going to be hard to argue when you've never asked the company for anything, nor made any offer that would involve them giving you anything.

I consider going public directly less risky than talking to the company first: They're much more likely to make legal threats/try to sue you if they think it can help hide their embarrassment. Once public, that incentive goes away, they're in the public eye, if they do go after you they can no longer prevent the disclosure, and you have a decent chance that the additional attention this generates will make them reconsider before you have to spend money on lawyers.


> I shit on pie-in-the-sky blockchain schemes all the time

And you're right to, because they're mostly nonsense.

One of the issues with your proposal is that if I'm a security researcher, I now have to pay to maintain my reputation using cryptocurrency.


Also you are assuming that the recipients of those reports, the websites, would be truthful in their acknowledgement of issues and would not abuse their new power over researchers' reputations. Especially when each accepted bug leaves a permanent public record.


Right. Not to mention, what's to prevent an unscrupulous "researcher" from paying companies - or starting their own fake ones - to enter positive feedback, etc?

Sorry to burst your bubble, hn_throwaway_99, but this is the same kind of nonsense idea that you typically scoff at. It offers no new benefits, while adding new problems.


I don't believe that paying for your reputation was what the poster meant. Blockchain can be used for basically "write once" non-alterable records that you can prove you "own" without any currency aspects. That is basic blockchain tech. However, it would cost money to run the machines that manage the BC, and people would not do that without an economic incentive, so maybe you're right?


Right. If you want to put things on the blockchain, you need to pay for the privilege somehow - mining blocks, executing contracts on ethereum, etc.


You don’t need a blockchain for that. The thing you’re describing is basically Hackerone: https://hackerone.com/leaderboard


How did you arrive at a blockchain? That's putting the cart before the horse, so to speak. Just do what these real-world sites already do, put up a list of researchers and their bounties.

Nobody is going to download a blockchain client from a bounty website in order to look at a text file. That's like the security researcher interviewing an employer.

I think what you are actually referencing is a distributed database, not a blockchain. But, the database you mentioned in your post isn't distributed, it's centralized...


> especially if I have a bug bounty program that attracts low quality bug reports

IMO security.txt provides value for when you DON'T have a bug bounty program. If you don't already have a 'front door' for how to send you security information, you're not going to get it.

Also, I would put some money on someone finding a show-stopper bug like shellshock/heartbleed/deserialization-bug-of-the-week/etc and contacting folks from their security.txt in sync with their public release sometime within the next few years.


Today I learned about .well-known/ :

https://en.wikipedia.org/wiki/List_of_%2F.well-known%2F_serv...

I remember such files used to go into the root of the website's hierarchy; I guess it's nice that now they're mostly bunched in a subfolder.


It always intrigued me: why is it somebody else's job to secure a company's website? This is completely backwards. Rather than investing into security, they let somebody else fix the problems while leaving their users exposed. At this point, it is more ethical to sell whatever vulnerabilities you find to the black market than to "ethically" disclose them.


>why is it somebody else's job to secure a company's website?

Some people find bug bounties to be lucrative, especially in low cost of living countries. Other people find them fun. Other people find they look good on resumes. But no one is required to participate in them. If you don't want to spend your time looking for vulnerabilities in other people's software, don't do it.

>Rather than investing into security

Running a bounty program costs money, both to pay the participants as well as to pay employees to investigate the reports (most of which are junk). Also it's not a one or the other. You can run your own internal red team while also running a bug bounty program.

>they let somebody else fix the problems while leaving their users exposed

It's usually not the bug bounty participants who fix the problem. Usually the bug bounty participant reports the problem, then the company fixes it.

>At this point, it is more ethical to sell whatever vulnerabilities you find to the black market than to "ethically" disclose them.

Why is it more ethical to sell them on the black market? The black market is composed of people who actively want to harm others for their own benefit. Seeking them out and selling them tools specifically for that purpose is unethical. I don't see what's ethically wrong with reporting a vulnerability to a company through its bug bounty program.

There are of course other options besides those 2. Full disclosure for example.

Also you could sell it on the grey market to people who promise to only use it for legal purposes (e.g. for governments to legally hack people). https://zerodium.com/


I don't think this should be a standard per se. Maybe it's more a matter of what best fits your risk appetite, because I could easily see this as a flag welcoming people to attack your website looking for bounties. If that is the case, how are your blue team people going to know the difference?

As far as the info contained within the txt file, there should only be an email address or contact info for when you've found something serious, absolutely nothing more. No reason to intentionally/unintentionally provide information useful for recon.

About the automated scanners... adjust your scope to avoid the file.




Doesn’t for me.


Seems like a reasonably good idea. More of a question about RFCs than the spec itself. I noticed while reading the RFC it mentions this:

> By convention, the file is named "security.txt".

Isn’t the point of the RFC so that it wouldn’t be by convention anymore? Or are they saying the idea for the name was from prior conventions? Or maybe I’m just reading into it too much and it doesn’t really matter.


I think the latter - they are saying the idea for the name was from prior conventions. Section 4 spells it out explicitly:

> For web-based services, organizations MUST place the security.txt file under the "/.well-known/" path;


I agree, it definitely seems that the phrasing there needs work. Overall I would say most RFCs are substandard, especially when compared to ISO standards, which tend to be extremely precise. That being said, interoperability within tech is amazing, so perhaps there is something to be said for the idea of loose standards and working code after all.


Seems like there would be a natural tie-in with DNS to publish a PGP key to authenticate the txt file (per the advice to validate the key).


Seems a bit redundant, if the page is being served over HTTPS, which ultimately relies on DNS records.

The advantage of PGP is that it can be verified entirely out-of-band, and distributed widely.



.txt file, uses markdown

On a more serious note, all these sitemap and bot-friendly standards for webpages always tend to fail. Even RSS, which is probably the most important standard in this space, has issues getting more adoption.


I'm pretty sure the hash is meant to symbolize a comment, not a header.


I don't even get why the standard needs to be bot-friendly; bots reading my security.txt and spamming my inbox are literally the last thing I want.


# marks a comment in the text file, not a headline


>A link to a web page where you say thank you to security researchers who have helped you. Remember to include "https://".

This sounds like the mafia


This could be TOML. It very nearly is TOML.

Is there a good argument for not making it TOML? You'd get parsers for basically every platform, for free.
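
For the sake of argument, a TOML-ified security.txt is only a few quote marks away, and you would indeed get a parser for free (values are placeholders; tomllib shown here is Python 3.11+ stdlib):

    import tomllib  # stdlib TOML parser, Python 3.11+

    # A hypothetical TOML-ified security.txt: only the quoting differs.
    doc = '''
    Contact = "mailto:security@example.com"
    Expires = "2022-03-14T00:00:00Z"
    '''
    print(tomllib.loads(doc))
    # {'Contact': 'mailto:security@example.com', 'Expires': '2022-03-14T00:00:00Z'}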


I thought it's aimed at humans, not robots.


Humans aren't going to trip up on some extra quote marks.

Since it has a structured data format, computers will parse it sooner or later. Why bother setting off comments with `#` at all if it's purely for human consumption? Why even have a standard? security.txt could just be "this is the place where you write whatever you want to about security, here are some suggestions as to what other people might find useful".

I'm not hearing good arguments for not making it TOML...


You might not want robots to parse it. See the spam complaints in the rest of the thread.


Well, it’s not .txt for one?


It's not illegal to put TOML in a .txt file.

It clearly has some INI-style format. Why not a well-specified one?


Funnily enough, I was thinking about this not long ago. I have a business idea which would utilize a concept like this.


Wasn't the semantic web supposed to generalize the hosting of structured info? This seems like just a special case.


What about the alternative, hackers.txt?


It looks like one is the evolution of the other: https://stackoverflow.com/a/56332115/3266847


isn't this what whois is for?

Barring that (not much space for public keys and preference flags, I guess), how about DNS TXT records?

Just some file on the root of the webserver seems like a security issue itself. Typically fewer people have keys to DNS. Also, rare today, but not all domains have webservers.


In my experience WHOIS is basically useless now. All of my domains are registered through a privacy service and every time I've gone to check some sketchy domain's WHOIS out in recent memory I've found the same thing.


sure, what about TXT records then?

It seems dangerous that anyone who can modify the root of a webserver's content can impersonate security contacts/pubkeys.


Well, .well-known should be well guarded anyway, since it is used for things like issuing SSL certs by Let's Encrypt (and possibly more CAs). Giving anyone access to your webserver root is giving them access to your website.


The policy link is broken


security.txt gets its naming from robots.txt?


.txt is a file extension designating a plain text file. But yes, new well-known files of this sort intended to be included on websites, like humans.txt and now security.txt, do follow the naming convention of robots.txt.


The date in the stupid US format (mm/dd/yyyy) will confuse the hell out of the rest of the world. Please use the non-US-centric ISO date format: yyyy-mm-dd.


Your browser is misconfigured. The site uses the YMD "standard" but your computer is configured with a locale that prefers US style for date inputs.

   <input type="date" placeholder="YYYY-MM-DD" class="input" required>


This site is only using mm/dd/yyyy for the generator input, the standard itself is not. The actual format in the security.txt file looks like:

Expires: Tue, 16 Mar 2021 05:09 -0400

Maybe slow down, read the spec, and proofread your comment for obvious errors before trying to troll for no reason.


> This site is only using mm/dd/yyyy for the generator input

If this is intended to be some kind of standard, it should follow other standards. I understand that this is "only" in the generator, but why even there?


It's just using the standard date input. Formatting of the input UI will depend on the browser, OS, and locale of the user's device.


Ahhhh! Thanks for the update - I stand corrected.

I am French and my Chrome is set to US English. I just opened a FF session, switched to French and the input is dd/mm/yyyy.


I still think iso standard would be better: https://github.com/securitytxt/securitytxt.org/issues/72


Has the generator been updated? For me (UK) it's localised to dd/mm/yyyy, as I would expect in my region.

I like that the spec uses a universal and unambiguous format. I especially like that this generator is localised to my region though.


No, as @madeofpalk mentioned in response to my comment, it is using the standard date input. What you actually see depends on the locale of the browser (I see mm/dd/yyyy in Chrome set to US English, and dd/mm/yyyy in FF set to French).


The language used in this draft is way too political and not technical/objective enough; therefore I think it causes more security risks than it resolves.

Example: from line #1, "proper channels". This is subjective as hell.


So we have all of these implicit URIs:

    robots.txt
    humans.txt
    .well-known/security.txt
    favicon.ico
    ads.txt
    app-ads.txt
Am I missing any?

Part of me thinks that the root index.html should have link or meta tags for any and all of these. HTTP headers would also work. I get that robots.txt and favicon.ico are historical and humans.txt is cutesy, but this is getting kind of polluted. Perhaps I'm missing some obvious reason for why we're still doing the .txt thing?


> Am I missing any?

As is implied by .well-known/security.txt there is in fact a whole registry for such well known URIs living under the /.well-known/ path https://www.iana.org/assignments/well-known-uris/well-known-...

> Part of me thinks that the root index.html should have link or meta tags for any and all of these. HTTP headers would also work.

As it is, anything interested in MTA-STS can get the associated well-known URL, and everything else doesn't care. If we put that into index.html or every HTTP header, then everybody is burdened even if they don't care.

> Perhaps I'm missing some obvious reason for why we're still doing the .txt thing?

If there's a significant software stack in which you're more likely to get what you expected from foo.txt than from naming it just FOO or foo, then, unless there is also a significant stack in which foo.txt can't work, you'd be crazy to insist on not having the extension.

The IETF and W3C do not have enforcement arms, so if they standardise something that's harder to get right, less people get it right, harming its adoption. Sometimes that's just the price you must pay for something that actually delivers on its intended purpose - but often it is not and you should look for the simplest thing that works.


That's the purpose of .well-known going forward: prevent pollution and allow access without discovery.



