This is good. People are getting fed up with replacing their credit card every six months because some online retailer had a breach. You can outsource payment processing to Stripe, Paypal, Square, Yahoo Store, etc. There's no reason every web merchant should see credit card numbers.
Stripe is in Visa's doghouse right now.[1] Their entry on the Visa Global Registry of Service Providers has turned yellow, with an expiration date of Mar 31, 2015. This means they're having some PCI compliance problem.[2] Visa gradually cranks up penalties until the problem is fixed, or, after about 9 months, just pulls the plug. Visa says Square and PayPal are OK right now. Yahoo is also in the yellow doghouse. (If you're a Stripe or a Yahoo Store merchant, they were supposed to inform you that Visa put them in the doghouse, so you can change vendors. Did they?)
The real solution to the problem is the use of an integrated circuit card (ICC), usually through EMV.
If a web merchant uses 3-D Secure (branded SecureCode, Verified by Visa, and SafeKey by MasterCard, Visa, and AmEx respectively), the issuing bank can implement the same level of security in a web transaction as in a card-present chip transaction: proof that the transaction was originated by someone who has control of the card, and proof that it was originated by someone who knows the PIN.
In these schemes you can store the PAN all you want. As long as the 3DES key is never read from the card, the PAN does you no good. Hopefully, when the USA catches up to the rest of the world in this regard, PCI will relax security requirements for merchants/acquirers.
Verified by Visa is bad for the consumer: it shifts all the risk of fraud onto them. If the PIN is intercepted and subsequent purchases are made with it, the owner of the card is liable for all of those purchases; they are considered to have made them because their PIN was present at the time of purchase.
That's generally been the case in Europe, but as I understand it this "liability shift" would be more difficult to enact in the U.S., because U.S. law has blanket limits on consumer liability for credit-card fraud (maximum $50 for card-present transactions and $0 for card-not-present). There are exceptions if the bank can show that you were negligent, e.g. you knew a card was stolen but failed to report it in a timely manner, but the burden of proof remains on the bank in those cases.
There is a major use case around "card not present" transactions.
Anything recurring, such as AWS, hotel/car-rental express check-out and return, Amazon 1-Click, Uber, and similar mobile payment use cases, would see significantly higher friction if you had to complete a chip-and-PIN step for each transaction.
A fairly simple fix would be to allow storage of a token linked to the PAN that is locked to a specific merchant: worthless if stolen, but usable like the PAN is today to perform "card not present" transactions with that merchant.
Most interchange protocols contain flags for recurring payments and standing authorizations. Only the first such transaction contains chip data to prove that the cardholder actually wants to authorize a standing auth/recurring auth.
In these cases, the standing authorization is already tied to the merchant + PAN + address details. Using chip in the first place is what allows a database compromise which leaks the PAN to not enable a criminal to authorize at another merchant: they won't be able to generate the ARQC needed to authorize.
All subsequent standing auths are card not present anyways.
This solution is already implemented in large parts of the world and good to go! But the incentives are sometimes not right. As a merchant, I ultimately want IC payments to be cheaper. I want my incoming IC payments booked separately from the non-IC ones: raise fees on the latter if you must, but let me keep my rates on the former.
Ultimately, I can then pass these savings on to the customers.
But as long as there is no incentive to ask for IC, why would I? It's just an annoyance.
Physical merchants eliminate liability for fraud and get reduced interchange fees by accepting chip.
Web merchants get reduced fees by using 3-D Secure (and the other schemes' versions). It is the issuing bank's decision whether the 3-D Secure step uses a chip or not, not the merchant's. Many banks use SMS push, RSA tokens, OTPs sent in an envelope, or just passwords.
- The liability shift will pressure merchants just to use EMV-capable terminals. As long as they're using one they won't be penalized for swiping.
- However, the networks require certified EMV terminals to reject swipes from chip cards. (They can tell from the service code in the card's track data.) Unless dipping fails a certain number of times, in which case the terminal can allow swiping as a fallback.
So merchants are incentivized to get an EMV-capable terminal, issuers are incentivized to replace magstripe cards with ICCs, and terminal requirements (mostly) prevent swiping ICCs.
>If a web merchant uses 3-D Secure (branded SecureCode, Verified by Visa, and SafeKey by MasterCard, Visa, and AmEx respectively)
then he'd better hope that all his competitors do too. It is a major hassle and risk for the consumer, so I haven't set my card up to work with those. If it gets stolen, it gets stolen, but that can happen even if I get it verified.
Actually, Visa's Global Registry is in the doghouse right now. They have a very slow, antiquated listing process (they only update it once a month, you must be ready to go two weeks prior to that update, and you go into a black hole and never know if you'll actually get swept up in the latest update) which used to be OK, but this year they began a more aggressive approach of marking you orange and delisting you. It's impacting a lot of folks right now, and I wouldn't read anything into this and Stripe's status.
So long as stripe.js is linked from your site, there is nothing preventing someone who can breach your server from seeing all CC numbers going through that page (namely by modifying the served stripe.js). It is just as insecure as processing the CC numbers on the server yourself, but deleting them after confirming the transaction.
I know for a fact that Stripe is staffed by some really brilliant people, so maybe I am missing something, but as far as I understood, their business model has always been: "ease the legal requirements on merchants by making use of the technicality of not sending CC info to their servers, while still not significantly adding security to CC processing".
But this is kind of a fundamental issue with the whole CC# system, one that redirecting to "trusted processors" only marginally improves. It stretches belief sometimes that in 2015 we have full-disk encryption and TLS everywhere, but not a sane financial transaction system based on public-key signatures (hopefully we are moving in that direction now, and getting chips in US cards?).
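The attack described in the first paragraph can be sketched. Here realTokenize is a stand-in for the legitimate stripe.js behavior, not its actual API; the point is that a tampered copy served from a breached server is indistinguishable to the page:

```javascript
const exfiltrated = []; // in reality, this ends up on the attacker's server

// Stand-in for what the legitimate script does (tokenize, never store).
function realTokenize(card) {
  return 'tok_' + card.number.slice(-4);
}

// What a breached server could serve in place of the real stripe.js:
function tamperedTokenize(card) {
  exfiltrated.push(card.number); // silently copied out
  return realTokenize(card);     // the page keeps working normally
}

const token = tamperedTokenize({ number: '4111111111111111' });
console.log(token);              // 'tok_1111': nothing looks wrong
console.log(exfiltrated.length); // 1: the number leaked anyway
```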
> So long as stripe.js is linked from your site, there is nothing preventing someone who can breach your server from seeing all CC numbers going through that page (namely by modifying the served stripe.js). It is just as insecure as processing the CC numbers on the server yourself, but deleting them after confirming the transaction.
This has always been a weak argument though, because if someone can breach your server, they can impersonate whatever they like from a typical visitor's point of view and see whatever data anyone enters. They can do this even if you're not a legitimate merchant at all and don't even use a credit card payment service.
Redirecting to an external processor still works to some degree, assuming people check for, say, "PayPal, Inc [US]" in their address bar. I am not arguing that the iframe thing is really better (especially not an invisible iframe). I am arguing that there are good reasons for requirements to be more demanding than "you must send your POST requests to a trusted server".
I suppose ideally we should have platform support for this sort of thing, though. Perhaps something like a payments browser API, hopefully supporting multiple processors (like most browsers' search bars). After set-up it should be as simple as a browser-level pop-up asking you to confirm or cancel the amount (plus whatever auth the processor requires, which hopefully would be nothing for small transactions and tapping your card on the NFC reader for large ones).
> they are changing Stripe.js to now serve up the data in an iFrame so you can keep using their product more or less like before but without heightened requirements
I imagine they're changing things to be like Google Wallet, where you use a pop-out window to type your credit card number into (just the first time; it's saved on their side after that). That way you know you're giving your CC only to google.com, by looking at the URL of the new window.
Stripe's current JS-based version has better UX, but it's a little scary if the merchant whose site you're entering your card into has no legal security requirements. On the other hand, it's a purely theoretical problem afaik - I haven't heard of any breaches resulting from merchants which use JS third party payment solutions having their websites hacked to serve up bad JS.
I've seen non-https sites serve up HTTPS iframes. The whole iframe thing just seems like a bad idea for processing credit information. Ignoring HTTP interception, it's difficult for customers to verify that the iframe is indeed coming from an HTTPS site.
With the old Stripe.js, you serve up the form but the Stripe javascript takes over the form and posts directly to Stripe, so your servers never see the data.
The new Stripe.js renders an iframe (Edit: through which Stripe sends the data), which again posts directly to Stripe.
They basically behave and look the same way; the only difference is that the iframe is in its own JavaScript "domain", so if your site is infected with malicious JavaScript it can't take over the POST to Stripe as easily (although that is debatable).
The former now comes with really high security requirements and the latter with almost none.
The new Stripe.js does not render the card fields in an iframe; the credit card information is still entered on your site, so $("#credit-card-number").val() would return the card number. The transmission of the card data happens through an iframe: the card number gets copied to the iframe, and the iframe makes the POST to Stripe.
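A simulation of that flow (in a browser the hand-off would use window.postMessage across origins; a plain function call stands in here, and all names are hypothetical):

```javascript
// Runs "inside" Stripe's iframe, in Stripe's origin: only this code
// performs the POST to Stripe's servers.
function stripeFrame(cardData) {
  return 'tok_' + cardData.number.slice(-4); // stand-in for the real POST
}

// Merchant page: the input still lives in the merchant's own DOM...
const cardNumber = '4242424242424242'; // e.g. $('#credit-card-number').val()

// ...and its value is copied into the iframe, which does the transmission.
const token = stripeFrame({ number: cardNumber });
console.log(token); // 'tok_4242'
```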
> the iFrame is in it's own Javascript "domain" so that if your site is infected with malicious javascript it can't take over the POST to stripe as easily (although that is debatable).
A malicious attacker could simply replace the entire iframe with something else that looks identical, but sends a copy of the CC details to some other server.
I used Google Checkout as a merchant once years ago and it was a horrible experience. Customer service was appalling. They ultimately ended up revoking our account and keeping the balance (it was low enough to not be worth fighting them over).
There are, of course, alternative technical solutions to this problem, one being virtual credit card numbers that can be used once per purchase. I would not trust that an outsourced payment processing system will never have a breach.
For what it's worth, PCI compliance as it stands today is complete BS. It provides a false sense of security and most of the PCI ASVs are the scourge of infosec. I can't tell you how many customers we have that use the cheapest possible PCI ASV for "compliance," but then use us in addition for "real security," despite the fact that we aren't an ASV. We've intentionally stayed away from becoming one thus far, actually, because that is a whole political game in itself. [1]
The new requirements are better. Stringent and hard to comply with, but better.
The real solution, as I see it, is to build automated security testing into your SDLC / Dev process. Penetration tests, when done by a good firm like Matasano, are incredibly useful, but lose their value the next time you push code. Building tools like Tinfoil into your CI process makes sure you don't get owned between pen tests.
PCI is unfortunately written by political minds and lawyers, not engineers or infosec folk. This is an unpopular comment, but is true in my estimation. Comply because you must, but please don't treat it as the end-all be-all. Care about your customers' data just a little bit more.
TinFoil, I remember I invited you to speak at the Boston Security Meetup several years ago!
>The real solution, as I see it, is to build automated security testing into your SDLC / Dev process. Penetration tests, when done by a good firm like Matasano, are incredibly useful, but lose their value the next time you push code. Building tools like Tinfoil into your CI process makes sure you don't get owned between pen tests.
This is false because you're suggesting that security testing of your SDLC, a subset of the compliance program, is a more diligent solution than the entire compliance program. I explain myself in a comment below and I'd be glad to debate with you: https://news.ycombinator.com/item?id=9510369
I'm not suggesting that there is absolutely no value to any of PCI. The fact that it forces you to think about security at all is already of some value. However, I am saying that passing a PCI audit is incredibly easy as compared to thorough automated testing, and especially compared to a (good) manual penetration test. Because you can pass a PCI audit relatively easily, people will do that and think it's enough, when in reality there is far more they should be doing to protect their customers.
Should you not do PCI? No, of course you should, as it's required by the processors. Is PCI going to protect you from getting breached? No, it's not enough, and you shouldn't pretend it is.
If you have to pick exactly /one/ thing to do in addition to (or instead of) PCI, building thorough automated security testing into your SDLC process is it.
>If you have to pick exactly /one/ thing to do in addition to (or instead of) PCI, building thorough automated security testing into your SDLC process is it.
I don't understand how secure SDLC testing is an addition to PCI when it's really a sub-requirement of PCI (Req 6, which addresses SDLC and secure code testing).
I'm going to rephrase because I'm still confused: you're saying that in addition to doing PCI, I should do a sub-requirement of PCI.
Show me where it suggests automated testing as a regular part of your SDLC. It recommends applying patches, coding to secure guidelines / best practices, doing code reviews, and running an automated or manual pen test at least once a year. Nowhere does it state in Requirement 6 that you should build automated testing into your SDLC.
Scheduled testing that happens on an automatic basis, or that occurs whenever code is deployed or committed. For example, whenever you would run your unit tests or integration tests, you should also run security tests.
It's the difference between doing an automated or manual penetration test every 12 months and testing your application for vulnerabilities with every deploy.
It's 6.5.1-6.5.10 of PCI v3.1. That's all security testing. You can use a static analyzer like Brakeman to speed things along instead of doing it manually.
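As a toy version of what a static analyzer like Brakeman automates, here is a check that flags queries built by string concatenation, a classic SQL-injection pattern (real tools work on the parsed AST; a regex is purely illustrative):

```javascript
// Flag lines where a SQL keyword is followed by a string literal that
// is concatenated with `+` -- a likely injection point.
function findSqlConcat(source) {
  const pattern = /(SELECT|INSERT|UPDATE|DELETE)[^"'`]*["'`]\s*\+/i;
  return source
    .split('\n')
    .map((text, i) => ({ line: i + 1, text }))
    .filter(({ text }) => pattern.test(text));
}

const code = [
  'db.query("SELECT * FROM users WHERE id = " + userId);',  // flagged
  'db.query("SELECT * FROM users WHERE id = ?", [userId]);' // clean
].join('\n');

console.log(findSqlConcat(code).map(f => f.line)); // [ 1 ]
```

Wired into CI next to the unit tests, a check like this runs on every push rather than once a year.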
> Train developers in secure coding techniques, including how to avoid common coding vulnerabilities, and understanding how sensitive data is handled in memory.
> Develop applications based on secure coding guidelines.
It says nothing about automated testing, which is precisely my point. The requirement is that you attempt to follow security best practices and train your employees well. My point is that even the best-trained employees make mistakes.
The reason unit tests exist is to make sure you didn't accidentally break stuff when you write code. I'm arguing that automated security tests should exist for the same reason, and that's what we're trying to offer and build.
Incidentally, section 6.6 does state that you should use manual or automated testing at least annually; PCI 3 added a new clause which states 'after any changes.' Also, it explicitly specifies that you need /either/ automated or manual testing, /or/ a WAF. WAFs, as you're likely well aware, miss things very often. [2] WAFs are a good stopgap, but should not be relied upon to provide actual security; they should only be relied upon to attempt to prevent an attack while you are in the process of fixing the vulnerabilities underneath.
Because penetration testing is so expensive, typically, I suspect it will be more common for people to go with the WAF than to do a pentest with every deploy. I don't have stats to back that up, since PCI 3.1 hasn't actually been enforced yet, but I strongly suspect that will be true.
> Penetration tests, when done by a good firm like Matasano, are incredibly useful, but lose their value the next time you push code.
I'd like to nicely but firmly push back on this one, and have longitudinal analysis of clients' applications to back it up. We put a lot of effort into helping our customers improve over time, both formally (writing helpful recommendations) and informally (educating developers during and after the test). There exist customers that ignore our advice, and don't improve, but most have a dramatic improvement in new code quality after the first assessment, and continue to year after year.
Ah, you misunderstood what I meant. I didn't mean to imply that penetration tests, when done well, have no lasting value. I simply meant to imply that without a code freeze, there is always the chance of a new vulnerability creeping in no matter how well you follow checklists, best practices, or retain knowledge about types of vulnerabilities and how not to build them.
For that reason, automated testing on a continuous basis is important.
This is the same reason that you don't QA an application once a year. UIs change, requirements change, and for that you write integration tests, unit tests, etc.
Does that clarify things a bit? I didn't mean to imply Matasano did a poor job of educating their customers; in fact, I think you're among the best.
The real security audit should be done by hackers, the same way browser and OS vendors do it. A vendor lists his website on some platform and specifies the bounty he's willing to pay per vulnerability found. Hackers try to find vulnerabilities. A third party verifies each vulnerability and ensures the hackers get paid. The more money a vendor offers per vulnerability, the more hackers try to crack his site, and the more confidence clients have.
Good. There's no love lost between me and companies that make significant money doing PCI assessments (they tend to be the bottom-feeding remora of the infosec economy), but the one criticism you could not level against the PCI certification program over the last 10 years is that it was too hard to get certified.
What happens when the iframe allowance is removed, and not even using Stripe can save you from the credit card companies? This seems like a transparent plan to make PCI assessors more money.
Judging from how proactive Stripe is being with respect to recent changes, they probably have a Plan B in case the iframe exception disappears. For example, they could provide a payment page hosted with them that you can customize (basically an enhanced version of Stripe Checkout), or even offer to iframe your webpage the other way around.
Yep, knowing Stripe, I'm not that worried that they won't be able to figure something out. What does worry me is that they need to figure something out at all.
Woah. Precisely the reason that card networks and issuers have been so lax previously is that they want consumers and issuers to use their damn cards. Being painful to use for merchants and consumers is incompatible with that. Until now losses were small enough... Now, PCI assessment industry is such a profoundly small thing next to the losses and risk carried by issuers - let alone the sheer core business volumes at stake here - forgive me for feeling skeptical that the reasons for this security bump are anything other than stakeholders wanting to stem losses and reduce risk.
Wouldn't that happen by just making rules that led to people using third-party services, though? Why is it a requirement that you change users' passwords every 90 days (something which I outright don't want to do), or get audited once a year (which is a considerable expense for no actual feedback, other than running an automated tool)?
Yes, we can argue that the content is less than perfect (are there really no permissible controls to get around 90-day passwords, such as 2FA?). I'm just taking issue with the assumption that this is a conspiracy designed to line the pockets of QSAs (it's news to me that they provide zero feedback and business value, but then I'm not so close to PCI stuff).
Edit: I'm sure I've read that PCI 3 wasn't written in a vacuum - surely there is some trend in the data that's not visible to us that prompted the 90 day password thing (keyloggers for one, certain POS manufacturers using the same default passcodes on all their products for over 20 years another).
Maybe I'm wrong, but the few times I've had to fix PCI-scanned sites for compliance, the feedback was just whatever an external automated tool could find, which was almost nothing, and when you fixed the few warnings in the otherwise abysmal codebase, you got the approval.
Well, what about Gratipay then? I think it's pretty obvious from their latest blog post that it's not just as easy as moving onto Stripe and you're done.
> The worst offenders however are the requirements that some businesses simply cannot comply with unless they have some serious cash laying around. Examples of this are
>> Quarterly external vulnerability scans must be performed by an Approved Scanning Vendor (ASV), approved by the Payment Card Industry Security Standards Council (PCI SSC).
> and
>> Is external penetration testing performed per the defined methodology, at least annually, and after any significant infrastructure or application changes to the environment (such as an operating system upgrade, a sub-network added to the environment, or an added web server)?
Have you ever had a penetration test done? They basically run a load of OSS automated tools, generate a PDF report, and then charge you $1000s. It gives you no real insight and reveals nothing unless you've been a total noob. Why is this so expensive?
Broadening of PCI scope + needlessly expensive compliance = Smells like a large opportunity.
I'm sorry that's been your experience. What should have happened is a primarily manual penetration test, administered by security engineers who themselves used to be fully competent software developers. Any automated tooling should have had strict manual verification and should not have been the focus of the test. Furthermore, superfluous results should not have been submitted in the PDF report.
Strictly "policy" audits such as PCI compliance differ a bit, but in general they should still involve a technical deep dive into your product's infrastructure, conducted by consultants with expertise in multiple tech stacks and overall experience in a variety of frontend and backend frameworks.
The final deliverable ("PDF report") also should have been hand-written, and in language that conveys technical expertise, complete with recommended steps towards remediation of any issues.
My employer, Accuvant, does this, as does Matasano (more well known here on HN).
As for why it's so expensive...well, I bill out at about $2000 per day. It really comes down to a lot of what people like patio11 and tptacek like to talk about here regarding consulting:
1. This is highly specialized work, with a much smaller population of competent engineers than typical web developers (for example). As such, it naturally receives a higher fee for supply and demand. Now, some people abuse this and run scans like Nessus and call it a day. These are not real infosec firms, they are parasites.
2. More specifically, we ask for it and we receive it, and we do exceedingly well. If people keep paying us five figures a week to perform a penetration test, we're not going to stop asking for it or reduce our prices.
>These are not real infosec firms, they are parasites.
The entire consulting penetration-testing market is set up to encourage this behavior. There is no way to prove you actually did anything correctly. Someone can write a wonderful PDF analysis by hand and still leave the system full of glaring holes. Customers can't tell a system is broken until it gets hacked.
>More specifically, we ask for it and we receive it, and we do exceedingly well. If people keep paying us five figures a week to perform a penetration test, we're not going to stop asking for it or reduce our prices.
Right, but many times I've seen companies do it because they are desperate to do it for compliance purposes. :/
Essentially there is a non-trivial portion of the market held up by regulatory demand.
>Strictly "policy" audits such as PCI compliance differ a bit, but in general they should still involve a technical deep dive into your product's infrastructure, conducted by consultants with expertise in multiple tech stacks and overall experience in a variety of frontend and backend frameworks.
I'm curious. Do you review every line of code in a customer's codebase? What about the code of every library they import? If you don't review imports, do you leave a big caveat in your report that says their code looks okay, but the libraries could be full of vulnerabilities?
Whilst I'm not the parent commenter, I do work in the same industry...
>>These are not real infosec firms, they are parasites.
>The entire consulting penetration testing market is setup to encourage this behavior. There is no way to prove you actually did anything correct. Someone can write a wonderful PDF analysis by hand and still leave the system full of glaring holes. Customers can't tell a system is broken until it gets hacked.
So a good company should be able to provide a methodology detailing the tests they do. You'll also see some who report the tests conducted and the results (positive or negative), so asking consultancies for sample reports would help you find one closer to your specific needs. Personally I prefer reporting all test results, as it keeps both parties straight on what has and has not been covered.
>>More specifically, we ask for it and we receive it, and we do exceedingly well. If people keep paying us five figures a week to perform a penetration test, we're not going to stop asking for it or reduce our prices.
>Right, but many times I've seen companies do it because they are desperate to do it for compliance purposes. :/ Essentially there is a non-trivial portion of the market held up by regulatory demand.
Yeah, where people are getting tests for purely compliance reasons, there is a tendency to go with cheap suppliers, as there's no real perceived benefit.
>>Strictly "policy" audits such as PCI compliance differ a bit, but in general they should still involve a technical deep dive into your product's infrastructure, conducted by consultants with expertise in multiple tech stacks and overall experience in a variety of frontend and backend frameworks.
>I'm curious. Do you review every line of code in a customer's codebase? What about the code of every library they import? If you don't review imports, do you leave a big caveat in your report that says their code looks okay, but the libraries could be full of vulnerabilities?
Heh, this is one of the huge gaping holes in security at the moment. Most applications are now constructed from piles of code acquired from repos (npm, NuGet, RubyGems, etc.) that provide absolutely no curation of content; anyone can put any code they like up there. There is (from what I've seen) very little appetite from companies to actually try to audit their whole stack, generally due to the cost of doing so. Manual code review is expensive, and when you start importing 100 kloc of third-party code into your solution it would not be a cheap exercise to validate...
Can you elaborate on what manual tests are and why they are better? Coming from a software QA perspective, automated tests provide repeatability and prevent omissions and aid with regression testing.
We went through a SAQ D Service Provider 3.0, and paying for an ASV didn't hurt nearly as much as filling out that 80-page questionnaire... In fact, it reminded us to apply some recent CVE fixes to our system before taking it to production.
We used Comodo HackerGuardian which is $250/y, so you don't have to pay $1000s.
I do have a real tool. It's on ComplianceChimp.com. But it's down right now because my TLS certs expired a couple days ago and I never recorded my key rotation procedures. I did think about temporarily disabling SSL, specifically for this HN post, but decided against it. We want to do compliance right.
We're fixing the certs now and updating our Key rotation procedures for you all to see in our publicly viewable compliance workbook.
There's only so much I can get done with my dwindling runway =(.
But you're right that getting key rotation procedures documented is the first thing I should have done.
This is good feedback, actually, because now I know that after scoping the assets in my TurboTax-like tool, the very next thing a person should do is write down their key rotation procedures. It's also easy to write out as a procedure, because it's such a common yet easily forgotten practice.
We're putting cycles into this right now.
We should have moved a long time ago to vendor-specific credit card numbers (e-commerce isn't exactly a new activity). Say I get from my bank a token which I provide to this vendor, and the first time the vendor uses it to accept a payment, the token locks in to that vendor, i.e. my bank will not allow any payment with this token to another vendor (that is, to another bank account). Then it doesn't matter if it's stolen; only that vendor can use it anyway. And I could have the option to tell my bank to make it a single-use token, to cancel a multiple-use token, or to set a payment cap on that token.
That doesn't seem very complex to implement and would alleviate the vast majority of the credit card related problems. I am sure it can be made simpler, have a protocol with redirects to the bank's website that eliminate the need for the client to copy-paste a token, or to have another mechanism with similar effects.
Banks are one of those many industries that don't seem to get technology. They employ massive IT and developer staffs but are run by people who don't get it (and to make things worse, they are most of the time massive bureaucracies, which means that even when they know what they need to do, they just can't execute).
This is already the case for instance in Portugal, for quite some time. In fact, a card holder in Portugal can generally just issue a new credit card number for personal use, tied to their account with whatever expiry they wish.
The big problem arises when you've booked your hotel on one of these temporary numbers and show up to check in. No physical card was actually issued, and some hotels have weird policies in that regard.
Of course, chip card based solutions that devalue the PAN are superior.
This is similar to how bank payments work in India. When I make a payment on a seller's website, it redirects me to the bank's website. I enter my credentials (including a phone- or device-based OTP) to confirm the payment and it's done. There is also an option to set up easy subscriptions.
Bank of America has ShopSafe which allows you to generate a temporary credit card number to use with the sketchy online merchant that has the particular gadget I want to buy.
Their implementation leaves much to be desired, but it's a step in the right direction.
Discover card has gone back and forth on this. They had a tool on their site to give you a throwaway CC #. I used it for almost all online purchases. It went away for a short time, then came back. Now it looks like it is gone for good. I quit using their card since that was the only reason I had to use it over others.
Now if they'd only stop sending me "checks" in the mail that are tied to my account... I'm just waiting for those to be spent by someone else.
It checks off the boxes for minimizing PCI scope; you store no payment information, and collect none on your website either. You can either do a transparent redirect (your payment form points to a URL on their domain, which redirects back to your site with a token) or an iframe.
Once you collect payment information, which they tokenize and store, you can run charges/auths/refunds against it using any of 81 different payment processors and gateways. Balanced one day, Stripe the next, and whatever startup is popular after them in a year -- without changing any of your payment code.
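The gateway-agnostic idea described above can be sketched as follows (a hypothetical adapter interface, not the vendor's actual API): your code charges a stored token through whichever gateway adapter is configured, so swapping processors never touches the payment logic itself.

```python
class Gateway:
    """Adapter interface that each processor integration implements (hypothetical)."""

    def charge(self, token, amount_cents):
        raise NotImplementedError


class FakeStripeGateway(Gateway):
    # Stand-in for a real Stripe adapter; returns a mock result.
    def charge(self, token, amount_cents):
        return {"gateway": "stripe", "token": token, "amount": amount_cents, "ok": True}


class FakeBalancedGateway(Gateway):
    # Stand-in for a real Balanced adapter.
    def charge(self, token, amount_cents):
        return {"gateway": "balanced", "token": token, "amount": amount_cents, "ok": True}


def run_charge(gateway, token, amount_cents):
    # The merchant's payment code is identical no matter which
    # processor sits behind the adapter.
    return gateway.charge(token, amount_cents)
```

Switching from one processor to the next is then a configuration change (which adapter you construct), not a code change.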
Does anyone know if using transparent redirects actually waives your responsibilities for PCI compliance? Even though the credit card details aren't sent to your backend, they are still collected on a form hosted on your infrastructure.
If your servers are compromised and malicious JS is added to your payment form, couldn't an attacker siphon credit card details via AJAX? It seems like the PCI documentation always uses terminology like "sites that collect credit card data", which I think sounds broad enough to include sites that use transparent redirects.
As of PCI-DSS v3 (January 2015), a transparent redirect qualifies you for SAQ A-EP, which is 100+ questions along with quarterly scans and annual pentesting. Basically, your website is "in scope" for securing.
The iframe option still qualifies you for SAQ A, which is the short questionnaire without scanning/testing requirements.
It's generally transparent redirects in the other direction. You load JS from their servers, the form posts to their servers, redirects back to yours. Often even the form isn't hosted on your infrastructure, but instead embedded into your page through some JS that is hosted elsewhere.
If your responsibilities weren't waived or significantly lessened, I'd imagine companies like Stripe would be significantly less successful.
You generally don't need to load any JS from the payment processor to use a transparent redirect, although I'm sure some work this way. Transparent redirects just require setting the form action property on your payment page to a URL on the processor's site. The processor then silently redirects back to the next step in your process.
Using an embedded payment form also shouldn't require any JS, as it is usually done using an iframe. This method should be safe, as the same-origin policy in the iframe would prevent JS on your domain from interacting with form elements in the iframe. But this is not typically what people are talking about when they talk about a transparent redirect.
This is a great article. I would add a couple of data points for context:
- Visa won't come after you. Your merchant account provider is on the hook. They let you process cards so they need to ensure you're PCI compliant. That's how the flow works.
- PCI 3.0 kicked in in January. People reassess annually, so if you reassessed last year under 2.0 standards, you're still good until your renewal comes due. That's why realization of this is only slowly creeping across the payments space.
- The card networks saw more sustained, ongoing fraud happening in online commerce from JS or transparent redirects than from hosted payment pages. So the big change from PCI 2.0 to 3.0 was the idea of making it harder to build completely custom payment pages versus using a hosted payment page. HPPs are SAQ A, and customized payment pages are SAQ A-EP.
- iFrames and checkouts are really trying to be the best of both worlds. That's why they're currently treated as SAQ A. There was definitely a lot of thrashing about how they would be treated while the 3.0 specs were being drafted and published.
Again, I really enjoyed the article and appreciated Spreedly being included as a reference. I agree with the major premise: in general, merchants are unaware that the way they implemented their payment pages now puts them in greater scope, and the providers aren't doing a good job educating them. It's an open secret in the industry that many payment gateways add a (pure margin) fine of $20 to $50 per month onto your account if you don't have valid certification. In a low-margin business, that reduces the motivation to push small and medium merchants to ensure they're PCI compliant.
Stripe plans to use the iFrame "loophole" to enable Stripe.js customers to qualify under SAQ A-EP as mentioned on their website[1]:
> The new version of Stripe.js meets these criteria by performing all transmission of sensitive cardholder data within an iframe served off of a stripe.com domain controlled by Stripe.
Can someone help me understand how this is practically any more secure than the way Stripe.js currently works? It sounds like the intention of the iFrame exception[2] is to allow a payment form to be loaded, completed, and submitted to a compliant server all within a visible iFrame. From what I can tell, the "compliant" version of Stripe.js just submits the data (similar to the traditional way) via an invisible iFrame - the form is still hosted and completed on the (likely) non-compliant server.
If that's the case, then I'd expect the "loophole" to go away soon and current Stripe.js users will have to adopt a payment flow similar to Stripe Checkout; in other words, it will be obvious that Stripe (a third party) is being used because the end user will be interacting with a form (or part of a form) completely hosted on Stripe's servers.
For companies using Stripe to avoid PCI compliance with self-hosted payment forms, this essentially transforms Stripe into another PayPal checkout-style service.
I'm actually working on a blog post about this. Basically, the argument is that whether you use Stripe.js with the invisible iframe or Stripe Checkout, as soon as you have some malicious JS in your DOM, all bets are off; and while stealing credit card info from Stripe Checkout may be harder than just reading $("#credit-card-number").value, it's not /that/ much harder.
(As part of my blog post, I actually use some malicious js on the merchant site to steal card info from a Braintree iframe (the drop in))
Very interested to see your blog post. I was under the impression that if the data is collected in an iframe with a same-origin policy, that malicious JS in the containing page wouldn't have access to form elements (or anything) inside the iframe.
Of course, if you have malicious JS in your DOM, there's nothing stopping it from rendering its own legit-looking credit card form that simply passes data off to an external URL.
That's basically the concept: once you have malicious JS, you can replace the iframe with a malicious one that looks the same. You can even have it still create a legitimate card token, so in theory the website would never know they were hacked. The other PCI SAQ A scenario is linking off-site. While malicious JS could change the link you redirect customers to, it would be noticed, because the customer may see a sketchy URL and the merchant would see a decrease in sales.
Historically, the thinking has been that the iframe and a redirect could both be treated as SAQ A (the easiest form of compliance) because if you changed the iframe that was displayed, or the page the customer was linked to, it would be extremely difficult to steal customer information in a silent way.
So if a merchant links to paypal.com/merchant and I inject JS to change it to paypal.com/matthewarkin, the merchant would immediately know something was wrong because they are no longer receiving money. The issue with how Stripe, Braintree, and others have implemented their JavaScript and iframe integrations is that it is pretty easy to replace the iframe with a malicious URL (paypal.com/matthewarkin) while still allowing the merchant to receive their funds.
A simple fix for this would be for the API keys used to instantiate the iframe to be usable only from within the iframe, so they could not be used to call the token-creation API directly.
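Server-side, the fix suggested above might look something like this sketch (names and the iframe origin are hypothetical): the tokenization endpoint rejects requests whose Origin isn't the provider's own hosted-iframe domain when the key is marked iframe-only. (The Origin header is set by the browser, which is the relevant case here, since in-page card theft runs in the victim's browser.)

```python
IFRAME_ORIGIN = "https://checkout.provider.example"  # hypothetical hosted-iframe domain


def create_token(request_origin, api_key, card_data, iframe_only_keys):
    """Reject direct token-creation calls made with a key marked iframe-only.

    iframe_only_keys is the set of publishable keys the merchant has
    opted in to restricting to the hosted iframe.
    """
    if api_key in iframe_only_keys and request_origin != IFRAME_ORIGIN:
        raise PermissionError("key may only be used from the hosted iframe")
    # Return an opaque token in place of the raw card data.
    return {"token": "tok_" + str(abs(hash(card_data)) % 10**8)}
```

With this in place, malicious JS that rewrites the merchant's form and posts card data directly to the API gets rejected, instead of silently minting valid tokens.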
At Braintree, we have been working on the approach you mentioned. We’ll soon update our iframe products to allow a merchant to opt-in to only ever receiving cardholder data via the Braintree iframe. With this change, we could actively block malicious JavaScript from rewriting the merchant form by rejecting data not from the Braintree iframe. Things like this aren't a panacea though which is why it’s important for merchants to use technologies like Content Security Policy and leverage as much of the browser security model as possible.
I think even more awesome is the hosted fields product you just launched, which lets me have a custom, stylized form where each credit card input is its own iframe.
I agree! I submitted it as a separate item because this conversation was about rewriting iframes. Although hosted fields doesn't directly address the rewriting for now, we're looking at it closely.
Credit cards are a bad platform to build on. The duopoly structure is a bad platform for gradual improvement and the regulatory environment is a bad platform for innovation.
We have deeply entrenched kick-it-forward allocation of responsibility and fixes to serious problems are characterized by firefighting, designed-by-committee compliance, cover-your-assness and such. All the hallmarks of a poorly functioning market, poorly functioning organization and general pathologies that occur whenever the way we organize is wrong.
Leaving bitcoin aside,^ I think the fundamental problem is having CCs play the role they do. Instead of customers sending merchants money, merchants request money from CC companies. That is a bad system.
^The reason bitcoin is difficult to insert into the conversation is that it has so many big, hairy goals: government power over money, macroeconomic theories of monetary policy baked in... It's a big, interesting project, but the problem discussed here is only really a subset of what bitcoin is about, so it's kind of a tangent.
As a small business taking card payments, we're far more concerned about the absurd rate of failure of perfectly legitimate charges than anything PCI-DSS says. We lose more customers to Stripe charges failing than to any other cause, by a considerable margin, and it seems even Stripe doesn't actually know why, because it's organisations further down the line making these decisions.
The whole card payments industry is broken, and the sooner the growing direct payments industry kills off the credit card giants, the better.
One of my side projects is a membership/subscription model Primary Care medical practice, and uses a third-party payment processor and we were recently audited by one of the large payment card issuers.
There was a finding that the third-party processor - which we specifically chose because many of their clients were major gyms with similar monthly membership models - was improperly processing our members' payments. If I recall correctly, there is one standard for one-time payments and a different standard to be used for recurring payments. A subscription model like ours allows our subscribers to use either, but the third-party processor used the one-time payment standard to process both one-time payments and monthly recurring payments. Even though recurring payments were a major selling point of the processor, when it came down to it, they were not even aware of the distinction or of the separate standard. We were actually quite fortunate in that we had original signature agreements for each and every instance of a member agreeing to the recurring automated payment, but as I recall, without those agreements there may have been some kind of repercussion.
Anyway it is a cautionary tale that just because you use a third-party, even a reputable one that serves national franchises, does not necessarily mean they know what they are doing.
The biggest thing to remember when dealing with PCI DSS is that it's not the law.
Your PCI obligations come out of a commercial agreement that you have with your processor, which comes out of commercial agreements they have with VISA/MC/et al. That's not to say that it's not a well-defined standard that you're going to end up having to follow in some way, but rather that statements like "Both Litle and Recurly flat out say that you need SAQ A-EP" have more wiggle-room than it would sound like, depending on the rest of the deal you're presenting them with.
If you're a Level 3, I'd argue the goal should be to keep yourself on SAQ-A - the methods of which are pretty well-understood now. Pick a vendor which has a tokenization service designed to be hit from JS (they all work the same way at their core - download JS which contains an implementation of RSA and a public key, browser-side encrypt the CHD using that, send it off, get back a token). Put your payment form inside an iFrame which is served from a PCI-compliant host (like S3). Once tokenization is complete, send the token from inside the form back out to the containing page using postMessage or in the querystring.
> Examples of e-commerce implementations addressed by SAQ A include...[merchant] website provides an inline frame (iFrame) to a PCI DSS compliant third-party processor facilitating the payment process...Examples of e-commerce implementations addressed by SAQ A-EP include...[merchant] website creates the payment form, and the payment data is delivered directly to the payment processor (often referred to as 'Direct Post')
Will they change PCI DSS again to remove the iFrame rules? Maybe, but given the speed the PCI council moves at (and the warnings they give before changing things), I'd deal with it then.
Lastly, if you're thinking of building a service which white-labels credit card processing and sells that processing as a service which your customers can then resell... don't forget about PCI PA-DSS.
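The browser-side tokenization flow described above can be sketched in Python in place of the vendor's JS (all names are hypothetical, and the mock encryption is a stand-in for the RSA implementation those libraries actually ship): encrypt the cardholder data with the processor's public key, send only ciphertext, and get back an opaque token.

```python
import base64


def mock_rsa_encrypt(public_key, plaintext):
    # Stand-in for real browser-side RSA encryption; with real RSA,
    # only the processor's private key could recover the plaintext.
    return base64.b64encode((public_key + ":" + plaintext).encode()).decode()


def tokenize(card_number, public_key, send_to_processor):
    ciphertext = mock_rsa_encrypt(public_key, card_number)
    # Only ciphertext leaves the payment page; the merchant's
    # servers never see the PAN, only the returned token.
    return send_to_processor(ciphertext)


def fake_processor(ciphertext):
    # Stand-in for the processor's tokenization endpoint.
    return "tok_" + ciphertext[:12]
```

The token that comes back out (via postMessage or the querystring, as described above) is all the merchant ever stores or handles.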
This move makes sense if you look at PCI's board of advisors[1] - it's a bunch of bank VPs plus the heads of security for both First Data and PayPal. The people who run PCI compliance are the ones who stand to lose if PCI compliance becomes moot, so they are doing all they can to make it seem like it's the be-all end-all of internet security, and that you'd be a fool to trust an online merchant that wasn't PCI compliant.
Interesting point in an earlier comment about making vendor-specific security tokens for internet transactions. That would quite obviously help tremendously in the case of a breach; however, it would put the onus for security on banks instead of merchants, and again, the bank representatives on PCI's board of advisors aren't going to go for that.
The problem with credit cards is that when you make a payment, you effectively give away your private key: the card number is both your identifier and your secret. No amount of downstream securing will remove this fundamental flaw.
This is one of Bitcoin's evolutionary advantages in this space. To send money with Bitcoin, there is no need to expose one's private key. A massive corporation could take millions of annual payments and their paying customers needn't be concerned about their money being at risk. If the entity has poor security, the only people they endanger are themselves.
You have to pentest only 1) if the web server touches sensitive data and 2) if the sensitive web server being deployed is configured differently from all the other sensitive web servers in that same sensitive environment.
If you're just adding web96 and it's configured exactly like web01 through web95, then it doesn't need pen testing.
Every time the legacy payment processors ratchet up compliance requirements, cryptocurrencies get another little boost. And while it's easy to forget in the US, where credit cards are handed out in boxes of cereal, getting a credit card is an insuperable hurdle for much of the world's population, e.g. the estimated 75% of Indians who work in the informal economy and thus can't prove (or don't have) regular income.
This is great that HN is talking about PCI. The problem with PCI and compliance in general is three fold.
1. People don't want compliance [1]
2. People think compliance is broadly applied
3. People don't know where to start
I'll answer these point by point.
1) The reason people don't want compliance is that the security industry claims compliance is the bare minimum and not enough. If they told you instead that compliance was simply doing your due diligence on your sensitive devices, I think the market would have had a software tool to easily get us through the process by now. (I built a "brilliant" PCI tool, btw; link below.)
So let me explain why I think compliance is just doing your due diligence, and we'll do that by simply asking this question: if compliance is the bare minimum and not enough, what comprehensive approach is available right now to reasonably protect our sensitive data? The security professionals will tell you that frequent risk assessments and pentesting are the best alternative. [2]
Their answer is to switch to frequent risk assessment and pentesting, which are Requirement 11 and Requirement 12 of PCI. Every one of the bullets they list is specifically covered by PCI DSS 3.1, including social engineering/phishing attacks, which are addressed through security awareness training. They're telling me that compliance is the bare minimum, yet their suggestion is to do a subset of compliance. It's circular logic. Since it's circular logic, and nobody has been able to provide me with a reasonable, approachable alternative for going above the bare minimum, I hypothesize that compliance is NOT the bare minimum but, in fact, due diligence.
Think of a fort. Forts had defined compliance checklists in the old times. In a fort, you go through a security rotation of making sure the pot of boiling oil tips over on time. You practice your smoke signaling so that the appropriate people are notified in the event of a wall breach. Were they spending a majority of their security drills taking half their army, launching it against the fort, fixing what fails, and then doing a risk assessment?
2) A compliance program, by definition, is only applicable to your sensitive environment. It cannot be applied broadly. It forces you to go through the decision-making process of asking yourself, "What's most important to protect right now?" Only you can classify the sensitivity of your devices. Only you can choose whether your code base is sensitive, or whether your employees' SSNs are sensitive. But whatever you've classified as sensitive must fall under compliance. Let's refresh: compliance is designed to be applied only to your sensitive data. If you choose to put a non-sensitive device under a compliance program, then you've chosen to apply compliance broadly.
3) To approach any compliance program, ask yourself 6 questions on any specific device.
1. Does this device store, process or transmit sensitive data (e.g. cardholder/health/SSN)?
2. Is there unrestricted access between this device and a sensitive device?
3. Does this device provide authorization, authentication, or access control to a sensitive device?
4. Can this device initiate a connection to a sensitive device?
5. Can a sensitive device initiate a connection to this device?
6. Can this device administer a sensitive device?
If you're able to answer yes to any of those questions, then your device is sensitive and due diligence is required. Let's go back to the fort example. Is there unrestricted access from that boiling pot to the sensitive gold chamber? Does the pot provide access control to the sensitive gold chamber? If yes, then the configuration settings of the pot, pulley, and oil need to be recorded and monitored periodically. If no, then it's possible you have a business justification for not putting as much rigor into recording and monitoring the correct functioning of that pot.
You've already started the compliance process by asking yourself, "Does my laptop initiate an outbound connection to a sensitive device?" If yes, then your laptop falls under compliance and due diligence is required.
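The six scoping questions above reduce to a simple predicate: a device is in scope if any answer is yes. A sketch:

```python
SCOPING_QUESTIONS = [
    "Does this device store, process, or transmit sensitive data?",
    "Is there unrestricted access between this device and a sensitive device?",
    "Does this device provide authorization, authentication, or access control to a sensitive device?",
    "Can this device initiate a connection to a sensitive device?",
    "Can a sensitive device initiate a connection to this device?",
    "Can this device administer a sensitive device?",
]


def in_scope(answers):
    """Return True if the device is sensitive (in compliance scope).

    `answers` maps each question to True/False; any yes puts the
    device in scope, and a missing answer is treated as no.
    """
    return any(answers.get(q, False) for q in SCOPING_QUESTIONS)
```

So the laptop that initiates outbound connections to a sensitive device answers yes to question 4 and lands in scope; a device with six nos stays out.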
Everything else is just record keeping. Create a network diagram of only your sensitive assets. Write down how you rotate your keys on those sensitive assets. Write down what log files you periodically review. Go through a practice run of your Incident Response in case there is a breach of your sensitive asset.
This is a pretty long post and I hope it helped.
Shameless plug: I'm building a tool called ComplianceChimp to guide you through this entire process, including recording your procedures with GitHub-flavored Markdown. You can see how I'm using our tool to get us through the PCI compliance process. [3]
What service might I use if I needed to fulfill the following requirements:
1) Say I already have a customer -- he already paid a signup fee and we charge him monthly, so he's already put in his credit card information. At a later date, we need to charge him for something other than his monthly subscription fee. This is something he can do himself by logging in, but also something the site administrator needs to be able to do by selecting his account and clicking a button. In this case, we do not want the user to have to re-enter his credit card information; we want this part to be seamless. Is there a payment API that can do this -- random one-off charges against an existing account without the user having to sign into the third-party service himself?
2) Is there a service that charges bank accounts directly -- as in, the customer enters their bank account number instead of a credit card number? Other than PayPal -- they seem to be the only one that does this.
It's only getting harder if you have "inline" payment. Honestly I'm glad to see that go away.
We always used "hosted payment page" solution, it's safer, and we've been expecting tougher rules for some time.
If you want to talk about online payment becoming harder, you should address the increasing number of payment options that online stores need to support. On top of debit and credit cards there are bank transfers (which work differently in every country), PayPal, invoicing, part-payments, financing, and an almost unlimited number of local options.
Well, I guess it won't hurt if I offer my services for PCI guidance for startups here :-)
One thing to keep in mind is that PCI is a bare-minimum of security "best practices" that aims at validating that a company transacting with payment cards has an understanding of data classification and protection.
> If compliance is bare minimum and not enough, what is a comprehensive approach available right now to reasonably protect our sensitive data? The security professionals will tell you Risk Assessments and Pentesting often is the best alternative [1]
> Their answer is to specifically switch to Risk Assessment and PenTesting often, which is Requirement 11 and Requirement 12 of PCI. Each one of the bullets written is specifically covered by PCI DSS 3.1, including social engineering/phishing attacks that are provided through security awareness training. They're telling me that compliance is bare minimum, yet their suggestion is to do a subset of compliance. Its circular logic. Since its circular logic and nobody has been able to provide me with a reasonable approachable alternative to going above bare minimum, I claim that compliance is NOT bare minimum, but in fact, due diligence.
> Think of a fort. Forts had defined compliance checklists in the old times. In a fort, you go through a security rotation of making sure the pot of boiling oil tips over on time. You practice your smoke signaling so that the appropriate people are notified in the event of a wall breach. Were they spending a majority of their security drills taking half their army, launching it against the fort, fixing what fails, and then doing a risk assessment?
The most comprehensive approach is to have an InfoSec policy portfolio which permeates into every corner of your organisation and dictates secure operating behaviours and mandates logical and physical security practices. This will include regular vulnerability scans on your code, your application stack and your infrastructure but it will also include instructions on how to classify data and how to handle data according to that classification.
Compliance is achieved by marking off a checklist, which is why it's fairly easy to botch. Sure, you can do a subset of the checklist and have compensating controls for everything you've missed, but the risk of non-compliance is not being able to do business (at best) and jail time (at worst), so you tell me: what is your motivation to fail to meet the bare minimums of security best practice in the card payment industry, aka PCI-DSS?
Think of a castle: it will have several walls, towers, heavy doors, guards, etc. It will also be placed on a hill, a mount, or some otherwise hard-to-access spot (never in a vale, for instance), and will put the largest possible distance between the treasure hall and the front door. The threats your castle faces will continuously evolve, and the walls that stood up against bows and arrows are useless against cannons, so if you want to keep your treasure you do your best to stay one step ahead. You don't get that by making sure your original walls are still in place or that any other base requirements are still met.
[1] http://www.visa.com/splisting/searchGrsp.do [2] http://usa.visa.com/download/merchants/Bulletin-PCIENFORCE-1...