A company called Steorn has been in the news recently. It claims to have invented a perpetual motion machine. Its recent demonstration of that machine, of course, failed. The laws of thermodynamics do not like to be trifled with.
There's another company much like Steorn. It also claims to have done the impossible–but instead of inventing a machine that gets 485% efficiency, it claims to have invented an “infallible” network filtration technology that generates “no false positives.”
Chest-puffing claims like SafeMedia's are easy to deflate: it would take only one false positive, or one file slipping through over P2P, to disprove them. Naturally, there has been no public demonstration of SafeMedia's technology–even Steorn put its credibility on the line with a public demo. SafeMedia could, for instance, release working demonstration versions of its products for testing by the Internet community.
A few months ago, in a very funny post, Peter Eckersley at EFF did a great job of tearing apart their unbelievable claims. Jackson West at NewTeeVee interviewed SafeMedia CEO Safwat Fahmy yesterday, and uncovered some more crazy talk.
But let's look past marketing self-aggrandizement: what kind of technology are they really selling? It appears to be nothing more than a variation on easily circumvented “deep packet inspection,” but this time, “adaptive.” It's so “adaptive,” Fahmy claims, that it can detect and start blocking any new P2P technology within three hours.
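To see why this style of filtering is so easily circumvented, here is a minimal sketch of signature-based deep packet inspection. The signature table and the XOR obfuscation are illustrative assumptions, not SafeMedia's actual method, though the BitTorrent handshake really does begin with the byte 0x13 followed by the ASCII string “BitTorrent protocol”:

```python
# Toy signature-based deep packet inspection (hypothetical signature table;
# real DPI products are more elaborate, but the principle is the same).
SIGNATURES = {
    "bittorrent": b"\x13BitTorrent protocol",  # real BitTorrent handshake prefix
}

def classify(payload: bytes) -> str:
    """Return the name of the first matching signature, or 'unknown'."""
    for name, sig in SIGNATURES.items():
        if sig in payload:
            return name
    return "unknown"

plain = b"\x13BitTorrent protocol" + b"\x00" * 8 + b"...infohash+peerid..."
print(classify(plain))       # the plaintext handshake matches: "bittorrent"

# Trivial obfuscation -- here, XOR with a single shared key byte -- erases the
# plaintext signature, and the very same filter no longer matches anything.
obfuscated = bytes(b ^ 0x55 for b in plain)
print(classify(obfuscated))  # "unknown"
```

Every time filter vendors add a signature for the obfuscated variant, protocol authors can change the obfuscation again; the “adaptive” arms race permanently favors the side that moves last.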
Once again, even if their technology did exactly what it claims, it would be a terrible thing. Deep packet inspection vendors love to claim that their technologies “protect privacy”–they do so by simply trying to block entire categories of traffic. We went into considerable detail in our recent filing about network filters on the problems with blocking entire categories of applications from the Internet–technologies like SafeMedia's block all uses of the proscribed technologies, even perfectly legitimate ones like the distribution of legal fan recordings, free software, “me to me” transfers, and other lawful uses. Every blocked lawful use is a “false positive.” Companies like SafeMedia treat P2P technologies as evil in themselves, so dangerous that ordinary civilians can't be trusted with them. To them, P2P is copyright plutonium.
But of course, their technology simply can't work as advertised. Clever P2P programmers–whom even Fahmy admits are “the smartest people in the world”–can not only invent new protocols that evade detection; they can invent protocols that are, in principle, indistinguishable from other kinds of traffic. The only way to block those protocols would be to block whole swaths of other traffic as well–say, all encrypted traffic, including secure web traffic (the kind you use to do online banking or to buy a book from Amazon) and email over SSL.
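The “indistinguishable in principle” point can be illustrated with a simple entropy measurement. Well-encrypted bytes look statistically like random noise regardless of which protocol produced them, so a filter has no payload-level feature to key on. This sketch (using random bytes as a stand-in for any encrypted payload, whether HTTPS, SSL email, or encrypted P2P) is an illustration of the idea, not a claim about any vendor's classifier:

```python
import math
import os

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (max 8.0)."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# Unencrypted protocol chatter is structured and repetitive: low entropy.
plaintext = b"plain HTTP headers and handshake text repeat themselves " * 200

# Stand-in for ANY well-encrypted payload -- secure web, SSL email, or
# encrypted P2P all produce byte streams like this.
encrypted = os.urandom(len(plaintext))

print(round(byte_entropy(plaintext), 2))  # low: visibly structured
print(round(byte_entropy(encrypted), 2))  # near 8.0, whatever the protocol
```

Since an encrypted P2P stream and an online-banking session both sit near the 8.0 ceiling, a filter that blocks one on the basis of its payload must block the other too.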
Companies like SafeMedia are preying on the credulity of policymakers and praying that no one looks too deeply into their claims. We should instead be working on practical ways to make sure that content creators get paid for their work, rather than chasing the impossible: perpetual motion machines, harnessed clouds, squared circles, ropes of sand, and “infallible” network filters.