
VJ talks about something called "Evolutionary Pressures" at the end of his Google Talk a few years ago. Watch the video if you haven't already.

SOPA, if it passes, may create such pressure. Its main target is DNS.

CDNs like Akamai rely on universal use of one DNS, the one SOPA aims to regulate, to accomplish their kludge.

Food for thought.

I prefer "Chicago" to any of the other alternatives.

End-to-end can be realised. Overlays work for small groups. Small groups can share with each other. Networks connecting to other networks. It's been done. ;)

Evolutionary pressures may help.




I stupidly downvoted you (SOPA! GAH!) but you made a good point, so, sorry.

Generally one of the reasons I don't freak out about laws regulating the Internet is that the Internet as we know it today is rickety anyways.

In the last 10 years we've seen the monumental shift (predicted in the late '90s but widely scoffed at) from ad hoc network protocols and native client/server implementations to a common web platform. Nerds still recoil a little at this, thinking about the native/ad-hoc stuff they still use (like Skype), but if you look at the industry as a whole, particularly in the line-of-business software development that actually dominates the field, it's all web now. TCP ports are mostly irrelevant. If you were going to start a new app today, would it be a native implementation of a custom protocol? Probably not!

One of the things that got us to this state was in fact regulatory: default firewall configurations that don't allow anything but web out, and disallow arbitrary inbound connections.

Over the next 10-15 years, I'm hoping we'll get similar nudges away from first-class addressing only for machines (which are less and less useful as an abstraction for completing tasks using technology) and towards first-class addressing for subjects, interests, content, &c. This is an insight lots of people have had, from David Cheriton & TIBCO in the '90s through the RON work at MIT through VJ's work at PARC & so on.
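A toy way to see what "first-class addressing for content" means (this is just an illustrative sketch, not VJ's actual CCN protocol or any real system): name data by a hash of its bytes. Then the name is self-certifying, and any node holding a copy can serve it, regardless of which machine it is.

```python
import hashlib

def content_name(data: bytes) -> str:
    # Name the content by its SHA-256 digest; the name refers to
    # the bytes themselves, not to any machine or location.
    return hashlib.sha256(data).hexdigest()

class Node:
    """A node caches named content; any node with a copy can serve it."""
    def __init__(self):
        self.store = {}

    def publish(self, data: bytes) -> str:
        name = content_name(data)
        self.store[name] = data
        return name

    def fetch(self, name: str):
        data = self.store.get(name)
        # Self-certifying: verify the bytes match the name, so it
        # doesn't matter which node answered the request.
        if data is not None and content_name(data) == name:
            return data
        return None

origin, cache = Node(), Node()
name = origin.publish(b"hello, interests")
cache.store[name] = origin.store[name]  # replicate to any other node
assert cache.fetch(name) == b"hello, interests"
```

The point of the sketch: once the name binds to the content rather than to a host, "where did this come from" stops being part of the trust model.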

I wrote off Lessig for a bunch of years after reading _Code_, but I think he fundamentally has it right in the sense that implementors have as powerful a say in how things are going to be regulated as legislators do. America had the Constitutional Convention after the Articles stopped scaling; the Internet will have little blips of architectural reconsideration when the impedance between the technology and what people want to legitimately do with the technology gets too high.

(I'm trying to make a technical point here, not a political one; I support copyright, and am ambivalent about SOPA.)


With the widespread use of anycast, does "first-class addressing for machines" even matter any more?

In situations where anycast is used, how do you even know what machine a given address is connecting you to?

RON was a step in the right direction, imo. With small overlays, MAC addressing comes into play and it becomes a little easier to know what machines (not simply what "addresses") you are really connected to.


Yes, because conceptually you're still talking to a computer (it just happens to be a globally load balanced computer). It's still the unicast service model, and it's still fundamentally about building a channel between two operating systems.
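A minimal simulation of that point (hypothetical site names and distances, not real routing code): with anycast, one address is announced from several machines, and the network delivers each client's packets to the topologically "nearest" announcer. Different clients using the same destination address can reach different computers.

```python
# One anycast address announced from three replica sites. The numbers
# are per-client "distances" (a stand-in for BGP path preference).

def route(client_distances: dict) -> str:
    # The network picks the closest announcer for this client;
    # the client itself just sees a single destination address.
    return min(client_distances, key=client_distances.get)

# Two clients send to the *same* address but reach different machines.
east_coast = {"nyc": 2, "lon": 5, "sfo": 7}
europe     = {"nyc": 6, "lon": 1, "sfo": 9}
assert route(east_coast) == "nyc"
assert route(europe) == "lon"
```

But note the conversation is still point-to-point once routing picks a replica: the client ends up with a channel to exactly one operating system, which is the grandparent's point about the unicast service model.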


Imagine if you dialled a full telephone number, including applicable country code and local regional code, and depending on where you were dialling from, you reached a different telephone in a different geographical location.

As long as it's an answering machine and the message is the same at every telephone reached by this number, it does not matter.

But as soon as you want to reach a live person, and not simply "content", then what?

Is end-to-end about accessing "content" or is it about communicating with someone operating another computer?


I wouldn't want to noodle on this too much. I take your point. Anycast abstracts away some of the notion that you're talking to a specific computer. But the unicast service model is inherently about computers talking to each other. Many, maybe most, of the most important Internet applications aren't about 1-1 conversations, or if they are, they're 1-1 conversations in special cases of systems that also work 1-many.


Importance is a subjective concept.

One opinion is that a very important Internet application will inevitably be 1-1.

Who did the FCC just hire as their new CTO? What is happening to POTS?

1-to-many systems, hacked to give an illusion of 1-1 conversations (e.g. SMTP middlemen, social networking's HTTP servers, or Twitter-like broadcast SMS), are what we settle for today, but, imo, this is a limitation, not a desired goal.


The video in question, great stuff: http://www.youtube.com/watch?v=gqGEMQveoqg





