Just when you figured the politicians had milked dry the issue of protecting fragile young minds from the wild wacky wilderness (www), Congress enacted the Children’s Internet Protection Act (CIPA). Tucked into the appropriations package for several Cabinet departments, including the Department of Education, the new law requires public schools and libraries to implement content-filtering technology on computers hooked up to the internet as one of the strings attached to universal service dollars. The filtering criteria spelled out in the legislation demand that the high-tech screening software installed to work with school web browsers be capable of blocking “material deemed to be harmful to minors.”

The Federal Communications Commission has until the middle of April to issue regulations for the law’s implementation, but by that time the law already will have met its first court challenges. Both the American Library Association and the American Civil Liberties Union have declared legal war on CIPA and are looking for armament in the free-speech provisions of the First Amendment. Even if you don’t agree with these groups’ “free range” approach to the wide-open spaces of the web, they have been very successful in defeating previous legislative attempts to fence in the wild wild web. At least one federal court (in Mainstream Loudoun v. Board of Trustees of the Loudoun County Library) has declared the mandatory use of content-blocking software unconstitutional.

But the filtering mandate faces an even larger challenge: filtering out material that is “harmful to minors” without gathering useful, or at least harmless, information in the same dragnet. Filtering software is notoriously clumsy at performing its appointed task. It’s like Alice in Wonderland trying to fit in by using the imperfect Eat Me, Drink Me size adjusters. Some software is too lenient, while other programs block sites that mention words like “breast” and nicknames for Richard. Blocking criteria, whether locally determined or established by vendors, are inherently subjective. But perhaps the most daunting problem faced by educators trying to soften the impact of the wicked web is that students have discovered that most filters can be circumvented with less effort than it takes to get a passing grade in wood shop.

Of course, one of the potential beneficiaries of the law is the filtering-software industry. Some school districts already pay tens of thousands of dollars to companies that charge big bucks to protect students from everything from nicotine to nudity. Beyond the high cost of purchasing content-blockers, some companies keep track of students’ web-surfing habits and sell the results to other companies, which use the information to guide their own advertising plans. The federal government even buys data from filter vendors.

School districts that object to the law or find complying with it too expensive or of little practical value can take refuge in one provision that has received little publicity in all the brouhaha. The filtering law includes a requirement for “local” definition of what internet material is “harmful to minors.” The decision is up to school officials or the local school board. In fact, the law forbids the FCC or any other governmental agency from establishing criteria for deciding what is harmful. Furthermore, once the school system certifies that it has installed and is using “technology” to “filter or block” the evil web stuff, the government is prohibited from reviewing that decision or questioning the criteria used to make the certification.

On this basis, school systems that see the law for what it really is — the exploitation of public fear by politicians — will spend as few scarce education dollars as possible on this boondoggle. For older students, put in place well-written acceptable-use policies and back them up with reasonable supervision and firm enforcement penalties for violations. For younger students, the solution is much simpler: all internet access should be supervised by adults, not software.