But to others, traffic discrimination just makes sense. If the network is congested then the network has to drop some packets. Some types of traffic need speed but not throughput, while others need throughput but not speed. So customers benefit when a congested network drops packets based on whether each packet must arrive quickly to be valuable.
Both views are right. The NN defenders must realize that the treatment of traffic is not, itself, the goal. Rather, it is a means to an end. The end, the goal, the thing we defend is the felt human flourishing–economic and social–that results from a communication space of decentralized creativity and control. This mechanism sometimes shows up in the technical literature as some form of the end-to-end principle, but in social practice its implications are even more profound. When the net empowers people to control both the form and the substance of their conversations without asking the network or anyone else for permission, it lets people speak and coders code, so our internet ecosystem can march forward. We know that when a bit is just a bit–on a network that just hauls them 'round–we do reach this goal; we've seen it. But this uniform traffic treatment may not be the only or the best way to get there. We can do better.
Meanwhile, the pragmatic supporters of Cox must realize that the network should not discriminate. It should comply. This is a fine distinction but crucial. Instead of using patronizing assumptions and invasive guessing to classify our traffic for us–as either “time sensitive” or “time insensitive”–it should trust people to mark their own traffic as belonging to either class. If people mark too many packets as “high” (i.e., if they exceed some quota), then subsequent traffic can be treated as “low” for a stretch, until the quota renews.
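The mark-and-demote mechanism can be sketched concretely. What follows is my own illustration (not any ISP's actual scheme), modeled as a token bucket: the network honors a packet's “high” mark only while the sender has quota, and the quota refills continuously over time.

```python
# A sketch of a trust-but-verify priority quota. The names and numbers here
# are illustrative assumptions, not a real ISP's parameters.

class PriorityQuota:
    def __init__(self, quota_bytes, refill_bytes_per_sec):
        self.capacity = quota_bytes      # most "high" bytes a user can bank
        self.tokens = quota_bytes        # current balance
        self.refill_rate = refill_bytes_per_sec
        self.last_time = 0.0

    def classify(self, size, marked_high, now):
        """Return the class the network will honor for this packet."""
        # Refill the bucket for the time elapsed since the last packet seen.
        elapsed = now - self.last_time
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_rate)
        self.last_time = now
        if marked_high and self.tokens >= size:
            self.tokens -= size          # spend quota on an honored "high" mark
            return "high"
        return "low"                     # unmarked, or quota exhausted
```

A client that marks every packet “high” simply drains its bucket and gets “low” treatment until the refill catches up; the network never inspects what the traffic actually is.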
The virtue of this approach is that it deals with congestion but still leaves users in control of both form and content–all without any new protocols. There are already open standards for this kind of traffic management: IP already includes header bits that designate a traffic class; networks should simply start trusting them. Existing software would not need to do anything new to function, and new software that carefully marks its traffic will function even better.
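Those header bits are the IP Type of Service byte, whose top six bits now carry the DSCP code point. Ordinary applications can already set them through a standard socket option–no new protocol required. A minimal sketch, assuming a Unix-style sockets API:

```python
import socket

# The 6-bit DSCP code point sits in the top bits of the IPv4 ToS byte.
EF_DSCP = 46          # "Expedited Forwarding", a standard low-latency class
tos = EF_DSCP << 2    # shift into position within the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Every datagram this socket sends now carries the "time sensitive" mark;
# a network that trusts the mark can honor it while the sender has quota.
```

The point is that the marking machinery already ships in every OS; what is missing is a network willing to believe it.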
New programs and new devices will be disciplined by their users instead of by the network owners (a precondition for the decentralized flourishing we seek). If a BitTorrent client starts clobbering a network and marks all of its traffic as “high”, then it will quickly exceed the quota and the network will treat the traffic as “low”. So the network does not have to inquire about the substance of the traffic. This also encourages networks and users to cooperate: the user will want to classify his traffic accurately; otherwise, when he later needs the priority, he will already have “spent” his entire quota.
This spending (and renewing) of quota all happens in real time–we are not talking about monthly bandwidth caps, we are talking about how your BitTorrent client marks packets. We are also talking about how new as-yet-undeveloped software can grow. Without asking for the network's permission, new software can mark its traffic as it needs–preserving the decentralized creativity of form. And, without asking for permission, users can install this new software and decide its merits for themselves.
This is not a cure-all. In its wake, another set of issues emerges (e.g., does this undermine the ISP's incentive to build out, since it can just fiddle with quotas instead? how might backbone costs warp the network's incentives? is the set of traffic classes sufficiently standardized across providers to keep things simple for software developers? how should an ISP advertise such a product?). But if Cox (and any other ISP looking at “congestion management”) adopted this technique, it would be a step forward. It deals with reality while preserving principle.