>Emphasis mine. If pushing a door counts, so does changing the user agent header or auto-incrementing IDs. The key here is "without authorization".

I'm not so sure about that. If it's not pushing to request record 334, why is it pushing to request record 335?

But I digress. Normally making standard web requests is analogized to looking, without touching. You have explicit authorization to go through the front door, and anything 'bad' you did inside was restricted to what you looked at.
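
To make the 334/335 point concrete, here's a minimal sketch of what "auto-incrementing IDs" amounts to (the URL and parameter name here are invented):

    import requests  # third-party HTTP client: pip install requests

    BASE = "https://example.com/records"  # hypothetical endpoint

    # The request you were "meant" to make:
    print(requests.get(BASE, params={"id": 334}).text)

    # The disputed part: the exact same GET request, with the number
    # bumped. No header is forged and nothing is bypassed; the server
    # either answers or it doesn't.
    for record_id in range(335, 340):
        resp = requests.get(BASE, params={"id": record_id})
        if resp.ok:
            print(record_id, resp.text)

It's the same front door every time; only the number changes.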

>I believe this use/break distinction exists, but the distinction isn't something that's determined by the code or the vaguer "design of the code"; it's determined by the purpose of the service.

But then you get into the realm of having TOS be legally binding, no matter how inane they are. This seems a far worse alternative.

>To drive that home, the library's hapless database admin who foolishly decides to update the list of books using her own SQL injection bug is not hacking, because she is authorized to fiddle with the database, even though, in your terms, it's bypassing the design of the code.

That's why I only said they lose the presumption of authorization. If all you know is someone SQL injected, you have to resort to other means to figure out if it was authorized. For example, if they already have equivalent access through non-code-bug means, and they simply prefer SQL injection, then there is no problem. But if they were doing it to avoid audit logs, there might be a huge problem.

>In other words, authorization is not the same as the technical artifacts involved in authorization. More generally, I don't think being bad at making software justifies people accessing it when they know it's not meant for them.

When it comes purely to accessing non-HIPAA/etc. data, I don't think there needs to be very much justification.

And I don't see 'has no password' as a technical artifact. Details of web servers don't need to be involved here. The design is wrong on a fundamental, user-understandable level.




> I'm not so sure about that. If it's not pushing to request record 334, why is it pushing to request record 335?

For the same reason that it's OK for me to push in my door, but not to push in yours. Or why it's OK for me to type in my password, but not to type in your password.

> You have explicit authorization to go through the front door, and anything 'bad' you did inside was restricted to what you looked at.

There certainly isn't explicit permission; I think you mean implicit. Assuming you do, we just disagree here. You don't have a "presumption of authorization" when accessing something a reasonable person would know isn't meant for them to see. I think most of the rest of our disagreement flows from this.

I also don't see where you've made the case that the get-the-employee's-SSN SQL injection attack is relevantly different from the send-an-ID case.


>For the same reason that it's OK for me to push in my door, but not to push in yours. Or why it's OK for me to type in my password, but not to type in your password.

Okay, I'm confused by your analogy. I was thinking of the situation as having a single door at the entrance to the establishment. I don't think it makes any sense at all to treat each page as a separate household on private property.

And I meant explicit. There is explicit permission to contact the web server.

>I also don't see where you've made the case that get-the-employee's-SSN SQL injection attack is relevantly different from the send-an-id case.

If I send a non-secret ID into the system and get info back, the system is working as designed. If I use SQL injection, the system is not working as designed. I think that's important. In the former case, I may be doing something unexpected, but I am not exceeding the authority given to me.
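
Concretely, the difference is roughly this (endpoint and parameter name invented):

    import requests

    BASE = "https://library.example.com/lookup"  # hypothetical endpoint

    # Working as designed: an ID goes in, the matching record comes out.
    requests.get(BASE, params={"book_id": "335"})

    # Not working as designed: the "ID" carries SQL syntax, so the query
    # the server executes is one its author never wrote.
    requests.get(BASE, params={"book_id": "335 OR 1=1"})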


Er, right, the analogy is a bit confusing for me too; let me see if I can clear up what I meant here.

> I don't think it makes any sense at all to treat each page as a separate household on private property.

It's certainly true that some pages on a site might be private to someone else and others not. The page/site distinction is orthogonal to the authorized/unauthorized distinction. It's not clear to me why, in your story, it was OK to ask for ID 334, but even if it were, that doesn't automatically make it OK to grab 335.

The library is itself an analogy, and it's a bit broken here, because of course at a library the ISBNs and the books they correspond to aren't meant to be private, unlike the AT&T/Weev case. So let's imagine that your iPad automatically looked up your email address using its ICC ID because that's what the service was built for (I don't actually know if that's the case or not). It doesn't follow that you have permission to look up everyone else's. That's where I was going with the doors analogy. I hope that clears that up.

> There is explicit permission to contact the web server.

I deleted my counter to this because I think it's an irrelevant semantics issue. Let's let that one go.

> In the former case, I may be doing something unexpected, but I am not exceeding the authority given to me.

We keep stumbling on this concept of authority, and this thing about design. The design part I don't get. Didn't the library software's author design the system so that it took query parameters and directly formed SQL strings out of them? So isn't it working exactly as designed? Of course not, because the designer never intended it to be used that way [1]. So their intention is exactly what matters, and the same applies to trying incremental ICC IDs--clearly not the intent of the service. That's why this design/expectation dichotomy isn't there, or at least isn't relevant (it's also why I was distinguishing the "purpose of the software" from the "design of the code" earlier).
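
To sketch that point (sqlite3 in-memory database, invented table): the code below does exactly what its author wrote, and that behavior alone can't distinguish the intended lookup from the injection.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE books (id INTEGER, title TEXT)")
    conn.execute("INSERT INTO books VALUES (334, 'A'), (335, 'B')")

    def lookup(book_id: str):
        # The author really did write code that splices the query
        # parameter straight into SQL, and it executes exactly as
        # written: pass "335" and you get one row; pass "335 OR 1=1"
        # and you get every row. Only the author's intent separates
        # the two.
        query = f"SELECT title FROM books WHERE id = {book_id}"
        return conn.execute(query).fetchall()

    print(lookup("335"))         # [('B',)] -- the intended use
    print(lookup("335 OR 1=1"))  # [('A',), ('B',)] -- the injection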

As for authority, you continue to see it as something the webserver can provide by virtue of its technical characteristics, and I--and the law--simply don't. Authorization in the technical sense is a technological codification of an authorization policy, which may be implicit. If that policy is obvious (which I think it is here), then it's that policy that matters, not its technical enforcement. I fear we're retreading ground here, though, and this may just boil down to having different axioms.

[1] You could say that there are layers to design and that the flaws in the designs here ("don't do any access control" vs "don't sanitize strings") are at grossly different granularities. That's true, but I'm not sure how you plan to formalize the appropriate level of generality that makes such a design flaw equivalent to permission to enter, as opposed to simply not working as designed. I don't think you can without resorting to the designer's intent.


I'm talking about intentional design here. It is on purpose that the system has no authentication. It is on purpose that the system returns records solely in response to an ID request from any client. It is not on purpose that the system can be SQL injected.
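
As a rough sketch of that distinction (a hypothetical Flask handler with an invented schema): the first two properties are deliberate choices visible in the code; the third would only arise from a slip.

    from flask import Flask, jsonify, request
    import sqlite3

    app = Flask(__name__)

    @app.route("/record")
    def record():
        # On purpose: no authentication check anywhere -- any client may ask.
        # On purpose: the only input is an ID, the only output is that record.
        book_id = request.args.get("id", "")
        conn = sqlite3.connect("library.db")
        row = conn.execute(
            "SELECT title FROM books WHERE id = ?", (book_id,)
        ).fetchone()
        # Not on purpose: had this spliced book_id into the SQL string
        # instead of using the "?" placeholder, injection would be
        # possible -- a bug, not a decision.
        return jsonify({"title": row[0] if row else None})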

Intention of use is entirely different. The high-level intention of use / purpose is often opaque and contradictory. Using it as a threshold would be foolish: "it securely stores passwords but also mails you a reminder if you forget", "it sends marketing mails that don't get marked as spam", "it shows people images that they can't save", "people will stay signed up for 15 months and we will profit on the loss leader".



