Nov 29 2011

How Content Filtering Software Protects Users

Businesses deploy content filters to help enforce acceptable-use policies.

The Internet is a dangerous place, and it’s getting more dangerous every day. Research from web security firm Dasient found that by the end of 2010, more than 1 million web pages were serving up malicious content, and that the average Internet user had a 95 percent chance of visiting a malicious page over the preceding three-month period. Those are scary statistics for organizations seeking to protect their users.

Organizations now realize that with the odds stacked against them, trying to influence user behavior alone is insufficient. Users will almost inevitably encounter undesirable web pages, and even a single incident can have a disproportionately damaging impact on the business. To address this risk, many state and local governments use content filtering software to automatically protect users from surfing to undesirable locations, whether accidentally or intentionally.

Enforcing a Use Policy

Enforcing an acceptable-use policy is the primary reason most organizations turn to content filters. They simply want to stop users from visiting websites containing content that violates the organization’s Internet use policies. The reasons for blocking content are many, ranging from preventing sexual harassment claims in the workplace to limiting users’ access to gaming or social media sites, which are notorious drains on productivity.

Most acceptable-use filters work by intercepting outbound web requests and extracting the requested URL. The filter then looks up the URL in the content filtering manufacturer's proprietary database of URL categories, compares the resulting category with the policy defined for the user or network, and takes the appropriate action, either allowing or blocking the request.

Content filtering manufacturers typically offer between 40 and 60 categories, allowing organizations to define policy with precision. Categories of filtered content may include:

Adult and mature content;
Illegal drugs;
Personals and dating;
Shopping;
Travel;
Web-based e-mail;
Violent, hateful or racist content.
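To make that lookup flow concrete, here is a minimal sketch of category-based filtering in Python, using a tiny hard-coded dictionary in place of a manufacturer's proprietary URL-category database (all hostnames and category names are illustrative):

```python
from urllib.parse import urlparse

# Illustrative stand-in for a manufacturer's proprietary URL-category database.
URL_CATEGORIES = {
    "casino.example.com": "gambling",
    "mail.example.com": "web-based e-mail",
    "deals.example.com": "shopping",
}

# Categories blocked by this organization's acceptable-use policy.
BLOCKED_CATEGORIES = {"gambling", "adult and mature content", "illegal drugs"}


def filter_request(url: str) -> str:
    """Decide whether an outbound web request should be allowed or blocked."""
    host = urlparse(url).hostname or ""
    category = URL_CATEGORIES.get(host, "uncategorized")
    return "BLOCK" if category in BLOCKED_CATEGORIES else "ALLOW"


print(filter_request("http://casino.example.com/poker"))  # BLOCK
print(filter_request("http://mail.example.com/inbox"))    # ALLOW
```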

While category-based filtering covers most enterprise needs, it is not sufficient by itself. Every organization has specific business requirements that may call for exceptions from the one-size-fits-all categorization schemes available in content filtering products. For this reason, administrators may create whitelists of sites that are explicitly allowed and blacklists of sites that are explicitly forbidden. These manual ratings override more-general category-based policies.
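As a sketch, those manual ratings can simply be checked before the category decision above (the list contents here are, again, invented):

```python
ALLOWLIST = {"partner-portal.example.net"}   # sites explicitly permitted
BLOCKLIST = {"timewaster.example.org"}       # sites explicitly forbidden


def filter_with_overrides(host: str, category: str, blocked_categories: set) -> str:
    # Manual whitelist and blacklist entries override the general category policy.
    if host in ALLOWLIST:
        return "ALLOW"
    if host in BLOCKLIST:
        return "BLOCK"
    return "BLOCK" if category in blocked_categories else "ALLOW"
```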

Policies also often vary for different user groups. For example, in an elementary school setting, students and teachers may have vastly different Internet access requirements. For students, the school's primary concern is protecting them from even the smallest chance of accessing inappropriate content, which warrants an extremely strict content filtering policy. Faculty members, by contrast, require broader access to websites for research and personal business. Content filters let administrators create different policies for different networks or classes of users.
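One simple way to model per-group policies, assuming just two groups and illustrative category names, is a mapping from each group to its own blocked-category set:

```python
# Each group of users gets its own set of blocked categories.
GROUP_POLICIES = {
    "students": {"adult and mature content", "violence", "gambling",
                 "social networking", "web-based e-mail"},
    "faculty":  {"adult and mature content", "gambling"},
}


def blocked_categories_for(group: str) -> set:
    # Unknown groups fall back to the most restrictive policy.
    return GROUP_POLICIES.get(group, GROUP_POLICIES["students"])
```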

Finally, users may sometimes need a one-time exception to a policy that allows access to a specific website for a limited time. For those cases, many products provide a supervisory override feature that lets administrators enter a password for temporary access to a blocked site. This override feature typically appears directly on the error page notifying users that a particular site is blocked, which allows for prompt use of the override when needed. Many products directly link the override to administrative credentials on the organization’s network, which lets administrators override the filter without remembering a second password.
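The override mechanism might be sketched as a time-limited exception granted only after a credential check; the credential function below is a placeholder, since real products typically defer to the organization's directory service:

```python
import time

# Maps a hostname to the time at which its temporary override expires.
active_overrides = {}


def check_admin_credentials(username: str, password: str) -> bool:
    # Placeholder: a real product would validate against the network directory.
    return (username, password) == ("admin", "s3cret")


def grant_override(host: str, username: str, password: str, minutes: int = 30) -> bool:
    """Temporarily allow a blocked site after an administrator authenticates."""
    if not check_admin_credentials(username, password):
        return False
    active_overrides[host] = time.time() + minutes * 60
    return True


def override_active(host: str) -> bool:
    expiry = active_overrides.get(host)
    return expiry is not None and time.time() < expiry
```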

Malware Filtering

When most people think of content filters, the first thought that comes to mind is enforcing an organization's acceptable-use policy by limiting the URLs users can access. This technology can also be applied to another significant information security problem: the prevalence of malicious software on the Internet.

Users can search for content that serves a legitimate business purpose and meets all of the requirements of the organization's acceptable-use policy, yet still put themselves at risk by visiting a website that is infected with malicious content. For example, a recent McAfee study found that one in every 10 searches for information about supermodel Heidi Klum returned web pages containing malicious content.

Content filters can contribute to the fight against malware embedded on websites in three ways: URL filtering, reputation tracking and signature detection. Each of these techniques plays an important role in protecting the enterprise against malicious software by safeguarding user web browsing.

First, the same URL filtering that prevents users from violating an organization’s acceptable-use policy can also protect them against accidental visits to sites containing malicious software. Content filtering manufacturers support this by tracking malicious websites and developing lists of those sites in their URL categorization database. Administrators can then block sites with malicious software in their security policies in the same manner that they might block pornography or gambling sites.
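In terms of the earlier category-lookup sketch, this amounts to adding the vendor's security categories (names illustrative) to the blocked set:

```python
# Security categories are blocked alongside acceptable-use categories.
BLOCKED_CATEGORIES = {
    "gambling", "adult and mature content", "illegal drugs",
    "malicious sites", "phishing",
}
```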

Web reputation filtering adds another dimension of protection against malicious software and other fraudulent activity. This rating system cuts across the categories used by normal URL filtering and lets administrators block access to sites that don’t meet certain trustworthiness thresholds.
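A sketch of how a reputation score might factor in, with the scale and threshold invented for illustration:

```python
# Hypothetical reputation scores, from -10 (known bad) to +10 (highly trusted).
REPUTATION_SCORES = {
    "news.example.com": 7.5,
    "compromised-blog.example.net": -6.2,
}

MIN_REPUTATION = -2.0  # requests to sites scoring below this are blocked


def filter_with_reputation(host: str, category_decision: str) -> str:
    # A poor reputation blocks the request even if its category is allowed.
    if REPUTATION_SCORES.get(host, 0.0) < MIN_REPUTATION:
        return "BLOCK"
    return category_decision
```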

These reputational filter settings apply regardless of any existing URL categorization policies, so a known malicious site is blocked even if it resides in a category that is otherwise allowed by the acceptable-use policy. Still, URL and reputation-based filtering alone cannot protect users against malicious software hosted on websites, because new sites become infected with this code daily. To protect against the risk of users encountering malware on sites not yet listed in the URL or reputation databases, content filtering providers also incorporate signature-based malware detection technology in their products.

This signature-based detection works much like traditional desktop antivirus software. It uses the same signature databases as those products, which contain the patterns associated with known malicious code, and searches for those patterns in the content that websites return. The filtering occurs in a slightly different manner than URL or reputation filtering because it takes place only after the remote site returns the requested content. This last layer of defense provides an added degree of security by screening the actual data returned by sites that have already cleared the other content filtering hurdles.
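A greatly simplified sketch of that last step, matching downloaded content against byte patterns that stand in for a real antivirus signature database:

```python
# Illustrative byte patterns standing in for a real malware signature database.
MALWARE_SIGNATURES = [
    b"eval(unescape(",                       # common in obfuscated script injections
    b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR",   # start of the EICAR test string
]


def scan_response(body: bytes) -> bool:
    """Return True if the content returned by a site matches a known signature."""
    return any(signature in body for signature in MALWARE_SIGNATURES)


# Scanning happens only after the site has cleared the URL and reputation checks.
page = b"<html><script>eval(unescape('%61lert(1)'))</script></html>"
if scan_response(page):
    print("BLOCK: malware signature detected in returned content")
```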

Content filtering plays an important role in the information security strategy of many organizations. In addition to enforcing an organization’s acceptable-use policy by blocking access to sites that contain undesirable web content, it also provides an added degree of protection against malicious code found on the web. With the prevalence of infected sites on the Internet today, this technology is essential to safe web browsing.
