At $previous_job we once turned on HTTPS for our entire customer website and online store, only to have our customer support team be bombarded by phone calls claiming that our "website was down."
After much teeth gnashing and research, we determined that a large segment of our user base was still using WinXP and the encryption protocols we offered weren't available to them.
We didn't think this would be a problem because the current version of the software wasn't compatible with WinXP any longer.
There was some debate internally whether the better fix was to include the legacy encryption protocols or just leave the HTTP version of the site running and use Strict-Transport-Security to move capable browsers to HTTPS.
In the end we had to include the legacy protocols so those customers could use our online store.
At $current_job we're currently in the middle of the same thing, but took the precaution of checking logs to see which customers use older encryption protocols (we're B2B), and have given them X months to upgrade their systems before we make the switch on our side.
The logic that was communicated to them was that, as a service provider, security is a prime concern for us (as it should be for them as well), so we can't keep lagging on this forever. Currently, we have $single_digit merchants we're still waiting on to make the switch.
It's made the whole switch process much easier and made customers actually appreciate our pro-activeness in this! :)
I know it's hindsight and all that, but why didn't you check your website analytics first? Seems a fairly massive assumption that should have taken 10 seconds to check.
That would have been really smart. However, this move was driven by the product owner, including the requirement that we must score an "A" on the SSL Test site. I had just assumed he knew what he was asking for.
The scanning of the server logs occurred to us in hindsight as well.
I completely understand where you're coming from, but the User-Agent string is included in regular HTTP requests and you don't need to resort to overbearing client-side analytics to aggregate it; it's right there in the access logs on the server.
It's "spying" when you're gathering data they didn't consent to give, like mining through their contacts, scanning running processes or uploading unrelated content from their computer. The browser User-Agent string is hardly classified information.
In this case, the absurdity and nonsensical character of the 'spying' claim is fairly self-evident.
When a client voluntarily makes a request to a server, it presents a bunch of information for the server to see and consume. This information is not meant to be kept secret from the server. Among such pieces of information can be some about the characteristics of the user agent, including the OS. It is disingenuous at best to call the collection of such voluntarily presented, clearly transmitted data "spying" on a user.
A basic requirement for spying is that the collecting party obtains information that can reasonably be considered confidential or restricted. Details about the system from which you send a request are, by definition of the protocol, not confidential or restricted to the recipient of your request. It is not reasonable to expect a server not to look at or use information you present to it. Therefore, it isn't "spying" for the recipient to consume the information. The information might be used in ways some people (e.g., OP) don't like, but that does not make obtaining it "spying".
I only posted this because it was the second time that day I saw "absurd nonsense" used as a comment with no additional content. It annoyed me enough the first time that it stuck out like a sore thumb the second time, then I noticed it was the same user and it was their last 2 comments.
The whole point is to extract meaning from analysis but not spy on personal information. Knowing which clients support what kind of SSL isn't personal; it's part of the request transaction.
Mere server-side logging can pick out something like this via User-Agent. Is it spying to count the number of times a request with "Windows NT 5.1" is sent to your server?
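For example, a rough sketch of that kind of tally over a combined-format access log (the log path and field layout are assumptions; adjust for your server):

```python
# Rough sketch: count requests whose User-Agent mentions Windows XP
# ("Windows NT 5.1") in a combined-log-format access log.
# The log path and field layout are assumptions; adjust for your server.
import re
from collections import Counter

UA_RE = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"\s*$')  # last quoted field = User-Agent

counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        m = UA_RE.search(line)
        if m:
            counts["XP" if "Windows NT 5.1" in m.group("ua") else "other"] += 1

print(counts)
```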
Aside from my personal opinion (which largely agrees with you), there are jurisdictions where a specific IP address is considered enough to make it (and the rest of the data) personal information, requiring a justification, informing the user (or even obtaining consent), and other processes to protect privacy.
That's why Google Analytics has an option to anonymize IPs by zeroing out the last octet.
Well, you can use the Qualys SSL Labs tool to check your SSL setup, and all the main search engines have said they will start flagging sites that use unsafe HTTPS or showing a warning page before letting you proceed.
> We didn't think this would be a problem because the current version of the software wasn't compatible with WinXP any longer.
> There was some debate internally whether the better fix was to include the legacy encryption protocols or just leave the HTTP version of the site running and use Strict-Transport-Security to move capable browsers to HTTPS.
Where can I read about this? Is there any way to display a special "Your browser is outdated" page for the users on WinXP?
Sorry if these seem like basic questions. I am just curious and would like to hear some expert advice.
At our place, we put a redirect on the front end networking device that detected if a browser couldn't support more modern encryption protocols, and sent them to an HTTP information page (instead of to the application itself) if so. This allowed us to update the core app to force newer protocols, while still providing some sort of UX for those left behind. We used Piwik to track the hits on the redirect page to get a sense for how many users were left behind.
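To illustrate the idea (not our actual device config), here's a minimal application-level sketch. It assumes the terminating proxy forwards the negotiated protocol in a hypothetical X-TLS-Protocol header and that the plain-HTTP info page lives at a made-up URL:

```python
# Minimal sketch of the legacy-client redirect idea at the application layer.
# Assumes the edge proxy sets a hypothetical "X-TLS-Protocol" header with the
# negotiated protocol version; the info-page URL below is also hypothetical.
MODERN = {"TLSv1.2", "TLSv1.3"}
INFO_PAGE = "http://legacy-info.example.com/"

def legacy_redirect(app):
    def middleware(environ, start_response):
        proto = environ.get("HTTP_X_TLS_PROTOCOL", "")
        if proto and proto not in MODERN:
            # Send clients on old protocols to a plain-HTTP information page.
            start_response("302 Found", [("Location", INFO_PAGE)])
            return [b"Your browser does not support modern encryption.\n"]
        return app(environ, start_response)
    return middleware
```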
We did a similar thing, but folded it into two tiers, unsupported and deprecated - unsupported browsers will get an HTML page extolling the virtues of updating your browser once a decade, whilst deprecated browsers (basically IE10 at the time, tbh) were treated to a popup explaining that whilst the site probably works just fine, their browser wasn't fully up to date and the experience might suffer.
Eventually, and I doubt we had anything to do with it, IE10 usage dipped below the magic 0.5% (the point where it costs us more money to support than it earns us) and it was finally unsupported.
The only crappy browsers we still officially support are ancient safari and IE11, both of which are still going relatively strong for reasons we've never been able to fully explain!
IE11 is the most recent version of IE; it's not like it's old or unsupported. And it has way more compatibility tweaks than Edge, so lots of people haven't switched.
Corporate environments often require employees to use IE 11 because of outdated internal web apps. Where I work, the Windows laptops, even on Windows 10, only allow IE 11, not even Edge.
Had a similar one at my last role. It was an HTML5 remote desktop thing with WebSockets, TLS 1.2, etc. Got a bug report from a user that it didn't work in Safari. We didn't have a Mac in the office to test with, so we asked the user for more details.
"Oh no, this isn't a Mac, it's Windows"
This is a user of a highly secure system, containing user PII, who expected to use it on a 5-year-old browser on XP.
The software used was a commercial tool built on Apache Guacamole, called Inuvika. It's pretty awful, but having Linux and Windows apps on the same virtual desktop is quite cool. I don't know how much of that functionality comes from Guacamole and how much from the Inuvika add-ons.
Inuvika/Guacamole also supports plain RDP, but we didn't use that, just the HTML5 (browser) client.
If you want to see what open-source can do then look at Guacamole and go from there.
The frustrating point about a similar experience was...
You can support HTTP and the occasional knowledgeable person will suggest you should upgrade. Or you can force TLS with SSLv3 enabled, and suddenly you'll hit a flood of people letting you know you're about to be hacked, based on online scanners. Often complete with requests for a bug bounty.
The other problem with Windows XP and HTTPS is SNI. Without it, you can't serve more than one domain with different SSL certificates from the same IP address; you have to either use SANs or separate IP addresses. This doesn't only affect IE on XP, but every browser.
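For what it's worth, here's roughly what SNI-based certificate selection looks like server-side in Python 3.7+ (file paths and hostnames are hypothetical); a client that sends no SNI extension just gets the default context's certificate:

```python
import ssl

# Sketch of SNI-based certificate selection (Python 3.7+).
# Clients that don't send SNI get the default context's certificate,
# so they'd need a SAN cert or a dedicated IP instead.
default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain("default.crt", "default.key")        # hypothetical paths

shop_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
shop_ctx.load_cert_chain("shop.example.crt", "shop.example.key")  # hypothetical paths

def pick_cert(sock, server_name, ctx):
    # server_name is None when the client sent no SNI extension.
    if server_name == "shop.example.com":
        sock.context = shop_ctx

default_ctx.sni_callback = pick_cert
```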
This is incredibly detailed; in short, CDNs, cookies/authentication, tons of subdomains, and 3rd-party/user-generated content make it a pain to move onto HTTPS.
I was chatting with a non-engineer friend about why it's hard to estimate how long tasks often take, and this seems like a prime illustration: the dependencies are endless.
I also love the Easter egg:
"The password to our data center is pickles. I didn’t think anyone would read this far and it seemed like a good place to store it."
Stack Exchange is no longer available from my workplace due to this change. We have a strict no-posting-code-fragments policy, and SE was viewed as too risky to allow without some restriction in place to make it read only. Before HTTPS, the IT department had worked out such a read-only restriction by blocking the SE login with firewall rules. But with HTTPS that kludge is no longer possible, so the site is blocked.
Many banks have very strict IT policies on posting things on the internet, and they have valid business reasons for that. Not saying you meant that, but it's not like they're some dark, silly workplaces that people should get away from asap.
The enforcement is stupid (both the previous hack and now the block). For me this actually would be a sign that the workplace isn't quite the right fit for me, if the basic assumption is that I ignore the policies anyway - because that's what this seems to indicate?
> The enforcement is stupid (both the previous hack and now the block)
Hack indeed. Seems like blocking POST requests would block posting stuff, whereas blocking the login just means you can copy your cookie over, and it doesn't let you view your notifications.
> Many banks have very strict IT policies on posting things on the internet
Yes, they do. And I really love it. Because it means that MY bank eats their lunch, because the bank I work for actually UNDERSTANDS how to use technology, while still keeping (very!) strict controls.
Would be curious which bank you work for. Most do not seem to value technology--which is odd, since most "cash" only exists as data in a computer somewhere. I'd much prefer to patronize a bank that understands and takes seriously their tech.
I work at Capital One. We have been a bank (and are regulated as one), but are trying hard to become a technology company that is specifically focused on banking.
And I'm probably biased, but I think we have some pretty great products also (checking accounts with no fees that pays some interest, savings accounts with very good rates, and so forth), so maybe you'll get a good deal as well as a technical focus.
I don't know how you'd get anything done since there are answers on Stack Overflow that solve problems that otherwise would involve hours to days of fussing to come up with the same non-intuitive solution.
All roads lead to Stack Overflow these days for programming problems.
Just out of curiosity, do those 7.5+ million accepted answers include those closed as duplicates? Because by far my biggest complaint is finding the exact question I have was closed as a duplicate and links to a question that is useless at answering my question.
In that case you can vote to re-open and perhaps even post a bounty. Although bounties tend to invite lots of low-quality, low-effort answers just on the off chance that they might be the top-voted one once the bounty runs out.
I feel you. I taught myself programming between 13 and, well, now (I'm 23); so by the time Stack Overflow came around I had figured out how to solve things myself. When I have a question, it's usually either opinion-based (a bad fit for SO) or not a common question.
I'd say 1:20 is a good estimate if I ignore answers that didn't read my question (which is most of them), but indeed the facts disagree.
Back then I didn't speak proper English, and how many questions were actually covered on SO in the beginning? It took some years to get to where we are, both for SO and for my English ;)
I have had the same experience with embedded programming questions. I suppose they depend too much on the hardware. I do quite a bit of programming with the BeagleBone Black (or at least the same processor), and it seems the best resource is the mailing list.
This sounds beyond absurd to me.
Do they also block USB ports to prevent you from copying everything onto a USB drive, external hard drive, or phone?
Do they lock/solder your machines shut to prevent you from taking out a hard drive / plugging in a new one and then taking it out?
Do they prevent you from... printing the code? In what parallel world do they exist that they think this would make a difference?
As someone who works at a finance related company: yes. No USB storage is allowed, all cloud hosting sites are blocked (not SO, thankfully, they're more worried about us stealing SSNs and other PII than code), and all printers are logged and have drivers that detect if you're printing PII and censor it by default (or so I've been told, I don't really feel the need to test that).
A friend works at an investment firm, and has similar restrictions as the above commenter mentioned (no SO, no USB, no printing, etc), as well as pulling his phone out while at his desk or around any other computer being an immediate fireable offense.
A few years ago, I interviewed at a company called 'G Research' and the security procedures I noticed included:
* A 'secure zone' where work took place.
* All desktops virtualised, using thin clients.
* All Windows, no admin access.
* Screens, filesystem snapshots, and web access recorded, all the time.
* All software installation subject to approval (e.g. Firefox not permitted, only Chrome).
* Desks fixed in place, all cables in locked cable trays.
* Separate internal-only e-mail system.
* No printers.
* Specially printed notepads & other stationery in the 'secure zone'; no secure-zone stationery to leave, and no non-secure-zone stationery to enter.
* No cell phones, cameras or laptops permitted (lockers were provided).
* Entry points with human guards and metal detectors.
* No late working outside guards' hours.
While it would have been possible to get around the security if you were inventive enough (e.g. camera with no metal parts) it would be difficult to do so then believably claim it was an accident.
I didn't take the job, because I didn't feel I could be productive with so much bureaucracy.
I've worked in financial software and they do block USB ports for any storage device. They block SD card slots too. All work was done on a VM that could only be accessed from the company network and was remotely hosted.
Leaving aside all the reasons why this policy is super dumb (which I'm sure others will cover quite adequately), I guess your IT department can't figure out how to create their own CA certificate and do SSL interception?
Yeah, I'm amazed and concerned that you have a security team so paranoid that they would make SuperUser read-only but apparently lack the ability to perform SSL interception. Considering the huge value the latter has in any kind of post-compromise scenario and, increasingly, to prevent compromise in the first place... there needs to be a real discussion about getting priorities in order.
There are many enterprise "solutions" that basically do this "out of the box". Yeah it shouldn't be done and a lot of employees are likely unaware that IT can see all of their SSL traffic but it's a big business.
This is nothing that can't be addressed through training. Questions on Stack Overflow with generic code actually get better responses than those bogged down with irrelevant details. You should strip out all labels, names, even extraneous fields that don't matter. It makes for a more generic problem-and-solution pair that can help others as well, and eliminates the problem of leaking proprietary information.
Maybe about half the time I end up answering my own question during this step. The act of genericizing the question ends up giving me some new approach, which either works, or leads me to new existing questions-and-answers.
IP protection. In a prior life I saw someone fired for mailing a model to a home account. Pasting code to a public website would violate similar protocols.
A company that trusts its employees.
There are so many ways to get around this anyway that it doesn't make sense to try to enforce it in the first place (considering the issues that follow).
Seems obvious that someone high up on the corporate ladder, with no practical knowledge in how the nitty-gritty work gets done, made the decision. Probably to "minimize IP theft".
Why don't they just recompile chromium without support for the textarea element, make that the only officially permitted browser, and call it a day? :-)
Sorry, but such a policy is just stupid. There are many, many ways one could get a snapshot of code without posting it online. I respect SE for their decision to make things right, not kneel to customers and their faulty "security" practices, which can often be seen.
I honestly wonder how exactly places like this want to enforce policies like this. Do they allow you to take a phone into your workplace? Aren't they scared you will take a photo and upload the code fragment?
Well, for sites doing TS work you can sort of understand that.
I knew someone who worked for the scientific civil service and they were not allowed to have a phone with a camera.
I have also been for an interview at a site (HMGC) where you have to hand in all electronics at reception. This was an avowed role, btw, so I am not breaking any laws; the organisation even has job adverts on the local buses.
Yep - we're aware. I thought about putting in our Content-Security-Policy-Report-Only findings about what all would break, but the post was already a tad long. It's quite a long list of crazy things people do.
As the headers go, here are my current thoughts on each (a rough sketch of example values follows the list):
- Content-Security-Policy: we're considering it, Report-Only is live on superuser.com today.
- Public-Key-Pins: we are very unlikely to deploy this. Whenever we have to change our certificates it makes life extremely dangerous for little benefit.
- X-XSS-Protection: considering it, but there are a lot of cross-network, many-domain considerations here that most other people don't have, or don't have as many of.
- X-Content-Type-Options: we'll likely deploy this later, there was a quirk with SVG which has passed now.
- Referrer-Policy: probably will not deploy this. We're an open book.
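For reference, a rough sketch of what a few of these might look like as concrete header values, expressed as a Python dict; the values are purely illustrative, not what we actually ship:

```python
# Purely illustrative values -- not our actual policy.
SECURITY_HEADERS = {
    # Report-Only first, so violations are logged without breaking anything:
    "Content-Security-Policy-Report-Only":
        "default-src https: 'unsafe-inline'; report-uri /csp-reports",
    "X-Content-Type-Options": "nosniff",
    "X-XSS-Protection": "1; mode=block",
    # Only if you decide you want it; we probably won't:
    "Referrer-Policy": "no-referrer-when-downgrade",
}
```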
> - Public-Key-Pins: we are very unlikely to deploy this. Whenever we have to change our certificates it makes life extremely dangerous for little benefit.
Is it possible to pin to your CA's root instead of to your own certificate? That would make rotating certs from the same CA easy but changing CAs hard (but changing CAs is already a big undertaking for big orgs).
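HPKP pins are just base64-encoded SHA-256 hashes of a certificate's SubjectPublicKeyInfo, and a match against any certificate in the chain counts, so in principle you can compute one for an intermediate or root. A sketch of that computation (using the third-party cryptography package; the file path is hypothetical):

```python
# Sketch: computing an HPKP pin (base64 of the SHA-256 of the SPKI) for any
# certificate in the chain -- a leaf, an intermediate, or the CA root.
# Requires the third-party "cryptography" package; the file path is hypothetical.
import base64, hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def spki_pin(pem_path):
    cert = x509.load_pem_x509_certificate(open(pem_path, "rb").read())
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

# Example header: Public-Key-Pins: pin-sha256="<spki_pin('ca-root.pem')>"; max-age=5184000
```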
Many headers presented here are questionable. X-Frame-Options should be replaced by CSP frame-ancestors. X-XSS-Protection: 1 has been the default for a long time in browsers that support it, and Chrome has blocked by default for the last two releases. Referrer-Policy is a matter of choice. It's useful information for the target site as long as the referrer doesn't contain sensitive information. IMO, most sites shouldn't set this header.
> X-XSS-Protection: 1 has been the default for a long time in browsers that support it, and Chrome has blocked by default for the last two releases.
Do you have references to back this up?
> Referrer-Policy is a matter of choice. It's useful information for the target site as long as the referrer doesn't contain sensitive information. IMO, most sites shouldn't set this header.
Exactly. I think its primary use is when the original site's URL contains user-supplied input, like the Google Search page.
The only header that I can think of that might slow down a site is Content-Security-Policy. Even that is negligible as long as you don't have 1000 entries.
Those are 1220 bytes. I'm not sure what they'll compress down to, but it's still non-trivial and not near 0 (anyone want to run the numbers?).
The same pair of headers are 969 bytes for facebook.com and 2,772 for gmail.com.
I don't know what ours would be - since we're open-ended on the image domain side it's a bit apples-to-oranges compared to the big players.
When you take into account that you can only send 10 packets down the first response (in almost all cases today) due to TCP congestion window specifications (google: CWND), they get more expensive as a percentage of what you can send. It may be that you can't send enough of the page to render, or the browser isn't getting to a critical stylesheet link until the second wave of packets after the ACK. This can greatly affect load times.
Does HPACK affect this? Yeah absolutely, but I disagree on "negligible". It depends, and if something critical gets pushed to that 11th packet as a result, you can drastically increase actual page render time for users.
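To put rough numbers on that (assuming a typical ~1460-byte MSS and the ~1,220 bytes of headers mentioned above):

```python
# Back-of-the-envelope: how much of the first-flight budget do headers eat?
mss = 1460                 # typical Ethernet-path MSS -- an assumption
initcwnd = 10              # initial congestion window of 10 segments (RFC 6928)
budget = mss * initcwnd    # ~14,600 bytes in the first round trip
headers = 1220             # the header pair measured above
print(f"{headers / budget:.1%} of the first-flight budget")  # ~8.4%
```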
Oh, I wasn't clear - I meant that on the same connection, headers are not re-sent in full for every page, just references to previous values (see [0]). The initial page load is a different matter, but that's part of the cost/risk analysis if you need CSP or HPKP (I agree it's not necessary and very easy to mess up).
> When you take into account that you can only send 10 packets down the first response (in almost all cases today) due to TCP congestion window specifications (google: CWND), they get more expensive as a percentage of what you can send. It may be that you can't send enough of the page to render, or the browser isn't getting to a critical stylesheet link until the second wave of packets after the ACK. This can greatly affect load times.
I wonder how much of the page can be rendered in 10 packets...
I explicitly try to ensure that for my sites the first 10kB sent (so less than 10 packets typically) is enough to render all the information above the fold. Anything essential should make it out in the first 2 packets for old TCP slow-start rules. (Lipstick and ads can arrive later, once the user is happy reading or whatever, IMHO.) Has been my policy since about the mid '90s!
The other issue with subdomains is that some customers will insist on typing "www." in front of every domain. Since the wildcard cert won't match, those customers will see an error.
I feel like TLS certificates are fundamentally misdesigned there. It should be possible to have a wildcard certificate that matches all subdomains under a domain, no matter how many layers deep.
Well, if it wasn't for someone buying *.com back in the day, we probably could have them. Oh, and then buying *.*.com after browsers banned that one, which led to the RFC 6125 rule clarifications and restrictions.
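To illustrate the resulting single-label rule, a simplified sketch (real certificate validation does considerably more than this):

```python
# Simplified sketch of the RFC 6125 single-label wildcard rule; real certificate
# validation checks much more than this.
def wildcard_matches(pattern, hostname):
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    # The wildcard may only stand in for ONE left-most label.
    p_rest = pattern.lower().split(".")[1:]
    h_labels = hostname.lower().split(".")
    return len(h_labels) == len(p_rest) + 1 and h_labels[1:] == p_rest

print(wildcard_matches("*.example.com", "shop.example.com"))      # True
print(wildcard_matches("*.example.com", "www.shop.example.com"))  # False
```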
Hey, I'm pretty sure that the first real domain name hack was sex.net, which as the proud owner of ex.net [PS: or was it sexnet.com, as we also have exnet.com?] caused some upset for a while, though mainly to disappointed one-handed typists I believe... B^>
BTW, did I blink and miss the "It really is all faster over HTTP/2, even given TLS" bit? My testing for my tiny lightweight sites close to their users (the opposite of what you're dealing with) is that HTTP/2 is slightly slower overall. Even with Cloudflare's advantages such as good DNS. And with the pain of cert management...
> Use the Java applet below to search ExNet's main Web pages.
> When the "Status" indicator stops flashing and says "Idle", type key words in the "Search for:" box.
> The "Results:" box will show you the documents that matched your key words, the best matches coming first in the list. Click on any line in the "Results:" box, and that document should appear in a new browser window in a few seconds. When you are finished with that document, you can close it without killing your browser.
That code did search-by-word from (IIRC) before Google existed, i.e. Netscape 2 days, right up until Java applets were dropped, across all compliant browsers AFAIK. It did roughly what G's live search now does.
I would imagine the more resources your page has, the more benefit you can get from HTTP/2 because of Server Push. So if you're comparing a tiny lightweight site, I'm guessing you can't benefit as much from Server Push.
I have relatively little that would benefit from push; basically a tiny hand-crafted CSS file that I currently inline because HTTP/1.1 and even HTTP/2 overhead for having it separate may be too high.
It's a long answer that completely fails to address the possibility of validating ownership of the domain itself by e.g. adding a TXT record, which the ACME protocol already supports.
The general point is that being able to control the parent domain doesn't necessarily mean you control all possible subdomains as well. You need to prove ownership, not just control. Here's the relevant bit from the SO answer:
> If I have ownership of the parent domain example.com then I can freely create and control anything as a subdomain, at any level I choose. Note that here "ownership" is distinct from "control", which is what is validated by the ACME protocol.
Subdomains were killed by SEO a long time ago (afaik, Google does not transfer domain PageRank credit to subdomains), so this is not limited solely by the cost of wildcard certs.
Way less than that. I've got a wildcard SSL cert for my domain for $60, although that was an add-on to the domain itself and hosting, purchased from the provider of the latter.
The Let's Encrypt process is about validating control of the content on a domain, not about OWNERSHIP of the domain. To get a cert, you just have to be able to update a file at a Let's Encrypt specified location on the domain. This is only proving that you are in control of the website for that specific domain, not that you are in control of the DNS for the entire domain and all subdomains.
Of course if I own a domain, I own all the subdomains. However, being in control of the site served at port 80 for a domain does not mean I own it.
But the ACME protocol, the automation underpinning Let's Encrypt, supports validation via a DNS challenge (adding a specific TXT record to the domain). Would it not be possible to issue wildcards if-and-only-if a DNS challenge succeeds?
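For context, the DNS-01 check conceptually boils down to "is the CA's token published as a TXT record at _acme-challenge.<domain>?". A rough sketch of that lookup, using the third-party dnspython package (names and token are hypothetical):

```python
# Sketch of the idea behind ACME's DNS-01 challenge (names/token are hypothetical).
# Requires the third-party dnspython package.
import dns.resolver

def dns_challenge_satisfied(domain, expected_token):
    answers = dns.resolver.resolve(f"_acme-challenge.{domain}", "TXT")
    published = {b"".join(rr.strings).decode() for rr in answers}
    return expected_token in published

# Proving control of DNS for example.com (and hence, arguably, its subdomains):
print(dns_challenge_satisfied("example.com", "some-token-from-the-CA"))
```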
Not any immediate plans. Decent amount of development is necessary there. There are so many places in our various systems that work with IP addresses, and many of them don't support v6 addresses.
Given the scale of Stack Overflow, you'd think they could set up AAAA records that point to a proper TLS 1.3+ server and leave the peasants on IPv4 going to one that's more...accommodating.
We could - but the network side isn't the problem. There's a lot of logging, user banning, etc. pieces that need IPv6 love first. We just haven't had the time yet.
There are network bits we'd have to evaluate heavily as well, e.g. firewall rules - basically the very limited benefits don't make it a priority, yet. When things change there, we'll do it.
Despite the "Google gives a boost to https" reasoning, which comes from Google itself, in practice I've read several first-hand accounts of how traffic (non XP) dropped significantly right after the switch.
It would be better if scripts like jquery were not encrypted. This forces users to use e.g. a google service instead of caching/hosting the scripts themselves or getting them from another CDN. I do not understand why so many people do not consider the privacy implications of every single webpage requiring calls to google services. There are ways to avoid this, but it gets a lot more complicated when that requires MITM methods for SSL. Please: use a non-tracking CDN, host it yourself, or at least leave it HTTP.
It very much depends on the complexity and scale of your site. StackOverflow is a bit of an extreme case.
For example, if instead of having hundreds of domains serving millions of users with tons of user-generated content you're just serving static content from a single server on a small site, the entire process for you might actually be as simple as just running `certbot-auto` on the production server.
I suspect the difficulty of switching for most sites will fall somewhere between these two extremes.
Yeah, we've been working on this for about a year (not continually, but as we have time to try to work through the problems). We do use subdomains though, so that is part of the problem. We keep feeling like we are getting close, but then we run into another issue. It's like a rabbit hole that has no bottom.
Split horizon would point you at the same data center, rather than the writeable one. So that's more of a .local than a .internal. We discussed this, but ultimately, on the AD version we're on (pre-2016 Geo-DNS), it's not actually supported the way you'd need, and it's a nightmare to debug.
We'd consider it for a .local once the support is properly there in 2016. Even subnet prioritization is busted internally, so that's a bit of an issue. Evidently no one tried to use a wildcard with dual records on 2 subnets before (we prioritize the /16, which is a data center) and it's totally busted. Microsoft has simply said this isn't supported and won't be fixed. A records work, unless they're a wildcard. So specifically, the *.stackexchange.com record, which we mirror internally as *.stackexchange.com.internal for that IP set, is particularly problematic.
TL;DR: Microsoft AD DNS is busted and they have no intention of fixing it. It's not worth it to try and work around it.
I work at a government facility. Stack Overflow and github are now both blocked (in addition to all social media and webmail). But Hacker News is apparently ok.