This system doesn't replace Django; it complements it.
You could build 95% of an application with the traditional request-response model and add the 5% of real-time features with a system similar to my demo.
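To make that concrete, here's a rough sketch with illustrative names, assuming the standalone asyncio-based "websockets" package as a stand-in for the demo's own WebSocket handling (the handler signature varies slightly between releases of that package):

    import asyncio

    import websockets  # standalone asyncio WebSocket package

    async def realtime_handler(websocket):
        # Hypothetical handler: echo messages back to the client.
        # (Older releases of the package also pass a second `path` argument.)
        async for message in websocket:
            await websocket.send(message)

    async def main():
        # Django keeps serving regular pages on :8000 as usual; this small
        # server only handles the real-time 5% on a separate port.
        async with websockets.serve(realtime_handler, "localhost", 8765):
            await asyncio.Future()  # run forever

    if __name__ == "__main__":
        asyncio.run(main())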
(Of course, given your opinions on Django, I don't recommend you build anything with it.)
Well said =)
It's incredibly frustrating to build the entire app so easily with a "traditional" django stack, and then be faced with adopting a whole new stack just to avoid wasteful XHR polling for some simple server-side, event-driven UI updates.
We've implemented some SSE based solutions lately with gevent & nginx in front of django, and it's been great to keep it all in the django family.
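The core of it is surprisingly small -- roughly something like this (made-up names), a plain streaming view that a gevent worker keeps open per client:

    import json
    import time

    from django.http import StreamingHttpResponse

    def event_stream():
        # Placeholder event source: a real app would block on a queue,
        # Redis pub/sub or PostgreSQL NOTIFY instead of sleeping.
        while True:
            yield "data: {}\n\n".format(json.dumps({"ts": time.time()}))
            time.sleep(1)

    def updates(request):
        # text/event-stream is what the browser's EventSource API expects.
        response = StreamingHttpResponse(event_stream(),
                                         content_type="text/event-stream")
        response["Cache-Control"] = "no-cache"
        return response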
Maybe switching to py3k has some value after all... =)
The C10k problem was originally about serving static files -- quoting the page you linked to: "take four kilobytes from the disk and send them to the network".
The demo could certainly be extended to send larger amounts of data -- left as an exercise for the reader.
I'm not sure I understand your second paragraph -- if I used this system in a real application, I would serve the pages with the traditional handler (with template rendering, middleware, etc.) and then exchange messages over the websocket. These are different roles.
Regarding database connections, the default behavior isn't the one you're describing any more; I implemented persistent connections in Django a few weeks ago.
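(Concretely -- assuming the setting keeps its current name until release -- it's opt-in per database via CONN_MAX_AGE:)

    # settings.py (illustrative values)
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": "mydb",
            # 0, the default, closes the connection after every request;
            # a positive value keeps it open and reuses it for that long.
            "CONN_MAX_AGE": 600,
        }
    }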
By "implemented", do you mean "contributed to trunk", or "used it in my application"? If the former, thanks, if the latter, I was under the impression that it was on by default in 1.5?
Yes, I'm just referring to C10k because this demo uses a technique originally created to solve C10k.
Reaching 10 000 connections wasn't difficult in this case; it was just a matter of tuning a few system parameters. Exploring the APIs and studying how they can fit together was much more interesting, and sometimes challenging.
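Most of the tuning comes down to file descriptors, since each open WebSocket holds one; the process-level part is just a couple of lines (the kernel-level sysctls are left out here):

    import resource

    # The default soft limit (often 1024) caps you far below 10 000
    # connections; raise it to the hard limit for this process.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    print("fd limit:", resource.getrlimit(resource.RLIMIT_NOFILE))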
Well, regarding PgBouncer, I believe this says more about PostgreSQL than about Django + the PostgreSQL client libraries
(After all, Django still opens and closes its connections as before; PgBouncer just keeps the underlying connections to PostgreSQL open)
Now, for serving 10k connections with template libraries and middlewares and ORM, out of the box? Serving dynamic content for each connection? Impossible =)
Not without some kind of caching (but you can do that with the help of a middleware)
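Roughly, the Django side just points at PgBouncer instead of PostgreSQL itself (illustrative names; 6432 is PgBouncer's usual listen port), and PgBouncer maps those short-lived client connections onto a pool of long-lived server connections:

    # settings.py (illustrative values)
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": "mydb",
            "USER": "myuser",
            "HOST": "127.0.0.1",
            "PORT": "6432",  # PgBouncer, which pools connections to PostgreSQL
        }
    }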
The porting strategy was decided by rough consensus in the core team.
We wanted a solution that would be convenient for authors of pluggable apps, so they could use the same strategy as Django itself. This is why we used six rather than an ad-hoc compatibility library.
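For a pluggable app, that mostly means reaching for six's helpers wherever text handling differs between the two interpreters -- a trivial, made-up example:

    from __future__ import unicode_literals

    import six

    @six.python_2_unicode_compatible  # maps __str__ to __unicode__ on Python 2
    class Tag(object):
        def __init__(self, name):
            self.name = name

        def __str__(self):
            # Written once, returns text on both interpreters.
            return "#{}".format(self.name)

    # six.text_type is `unicode` on Python 2 and `str` on Python 3.
    assert isinstance(six.text_type(Tag("django")), six.text_type)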
In that case, I wouldn't worry about installing for all users. I'd build from source and configure with the --prefix argument. Then I'd start Django using that specific Python.
Or, since Django on Python 3 doesn't seem production-ready yet, I'd "upgrade" to Debian testing or unstable, and hope that by the time Django became production-ready, Py3.2 would be in Debian stable.
Obviously the Django ecosystem can't start supporting Python 3 until Django itself does.
The porting process (which is still in progress for Django) shouldn't be too difficult for most pluggable applications, as Django strongly encourages using unicode everywhere.
We are a small strategy consulting company focused on exploring new fields of activity. We have launched several spin-offs over the past few years.
We are creating a large-scale car-sharing service (several thousand electric vehicles), launching in Q4. We are looking for highly productive and motivated developers to join our backend development team.
Interns with strong programming skills and learning abilities are welcome.