I've been contemplating the same thing, actually. I'm not sure if this exact library belongs (maybe), but some of the functionality could definitely be useful.
Looks awesome! I don't have a use for it at the moment, but I'll keep it in mind for future projects.
Edit: Does anyone have more information on the license? I do plan to build amazing things, but it would be nice to know under what conditions I can release those amazing things to the public. I assume it means no restrictions, but I'd hate to go stepping on someone's toes.
E.g. a page giving an overview of BSD, GPLv2, GPLv3, LGPL, and the Creative Commons bunch (no idea if those are usable for source code). Basically I'm wondering if there's an "Intro to licensing your crap on github" post somewhere on the net...
Yeah, this just seems like rearranging the tokens to achieve the same result. Though the same sort of thing could be said about path.py, which I absolutely love using.
I was motivated to write it in response to one of Quora's Programming Challenges (http://www.quora.com/about/challenges). It seemed like a fun problem, and I had never had a good reason to read through the applicable RFCs. I wonder if furl was similarly motivated?
I don't find the standard library modules very difficult to use, however, so I haven't spent much more time on it since then.
This looks like a nice library. So is python-requests. Every time I see this sort of library, though, I can't help but wonder: why are the standard Python libraries so bad that people need to create these helpers?
They aren't "so bad"; you can't expect the standard library to meet everyone's requirements. Most of what this library achieves can be done with urlparse, it just adds some "nice to haves" on top. Also, see the comments about not using a MultiDict for a good indicator of why this problem isn't solved in the stdlib.
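For reference, here's a minimal sketch of that kind of URL manipulation using only the stdlib (this is `urllib.parse` in Python 3; in Python 2 the same functions live in `urlparse` and `urllib`). The example URL is made up:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Split a URL into its components with the stdlib alone
url = "http://example.com/path/page?a=1&b=2"
parts = urlparse(url)
print(parts.netloc)  # example.com
print(parts.path)    # /path/page

# parse_qs gives a plain dict of lists, not a MultiDict
query = parse_qs(parts.query)  # {'a': ['1'], 'b': ['2']}

# Modify the query string and rebuild the URL
query["c"] = ["3"]
new_query = urlencode(query, doseq=True)
new_url = urlunparse(parts._replace(query=new_query))
print(new_url)  # http://example.com/path/page?a=1&b=2&c=3
```

It works, but the round-trip through `parse_qs`/`urlencode`/`_replace` is exactly the boilerplate these helper libraries exist to hide.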
Much of the stdlib was written before niceties like keyword params and generators, others just don't have as nice a design as they could, hence the helpers.
I love anything that adds developer productivity and reduces developer pain, so I definitely love this. Here's hoping it catches on within the Python community.
Very interesting. I ran into urlparse's shortcomings while prototyping a web crawler just last week! I created something similar to identify the domain, subdomain, directories, pages, and FQDN. Note that if you run socket.getfqdn against most cloud servers you get some weird string with IP numbers, e.g. 182.43.210.102.static.cloud-server.com
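A naive version of that domain/subdomain split might look like the sketch below (the `split_host` helper is hypothetical, not from the library under discussion). It assumes a simple two-label registered domain, which breaks on suffixes like .co.uk; handling those properly requires the Public Suffix List:

```python
from urllib.parse import urlparse

def split_host(url):
    """Naively split a URL's hostname into (subdomain, domain).

    Assumes the registered domain is the last two labels
    (e.g. example.com); real crawler code should consult the
    Public Suffix List to handle cases like foo.co.uk.
    """
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    if len(labels) < 2:
        return "", host
    domain = ".".join(labels[-2:])
    subdomain = ".".join(labels[:-2])
    return subdomain, domain

print(split_host("http://blog.example.com/posts/1"))  # ('blog', 'example.com')
print(split_host("http://example.com/"))              # ('', 'example.com')
```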