Tuesday, April 27, 2010

Feds vs. Facebook: Just Give Up Now

It was inevitable, really: between the need of Congressional Democrats to rattle the media cage and the need of Facebook to turn itself into a viable ad platform at any cost, there was bound to be some kind of collision. Predictably, the issue is privacy. Equally predictable is the outcome: Facebook will have to offer at least a few real concessions to hold on to its core strategy.

Last week, Facebook announced a bold new audience and advertising strategy that will extend its presence across the Web through partnerships with sites like Pandora, Yelp and Microsoft Docs. The new strategy involves sharing user information from Facebook with these other Web sites. Today four U.S. Senators -- Sen. Charles Schumer, D-NY, Sen. Michael Bennet, D-CO, Sen. Mark Begich, D-AK, and Sen. Al Franken, D-MN -- are demanding that Facebook implement new controls that will make it easier for users to determine how much of their personal information is shared with other Web sites.

I'm no legal expert, so I couldn't say whether Facebook really runs a risk of crossing some regulatory authority on privacy issues (although MediaPost's Wendy Davis thinks the new strategy is problematic). I doubt very much that the company would undertake something of questionable legality at this point in its existence -- but that doesn't mean it will be smooth sailing. In the past, misconceived revamps and new programs like Beacon have encountered strident opposition from Facebook users, even though they were perfectly legal and covered by the site's existing terms of use. In keeping with the basic ethos of a social network, there are other criteria beyond legality to consider -- namely, public opinion.

That's where Congress comes in. Always ready to grandstand unhelpfully, members of Congress seem compelled to insert themselves into controversial media debates, whether or not they possess any actual expertise or useful advice. A good example is the recently concluded round of Congressional hearings on Arbitron's Portable People Meter for radio ratings. The hearings were held at the behest of minority broadcasters who asserted that Arbitron's audience samples failed to adequately represent minority listeners. Obviously this was a rather complex technical issue involving the firm's sampling methodology, audience sociology, and statistical validity, and it's unclear what a couple of sound bites from a full-time politician could contribute to this quantitative quagmire -- but that didn't stop the House Committee on Oversight and Government Reform from holding months of "inquiries."

The Arbitron example is instructive and cautionary for Facebook. After a lot of back and forth and around and around, Congress heard testimony from the Media Rating Council, which reminded Congress that it had created the MRC back in the 1960s for just this very purpose (i.e., maintaining quality standards in media ratings). Surprised at its own foresight, Congress joined MRC boss George Ivie in encouraging minority broadcasters to join the MRC, so they could gain access to confidential information regarding Arbitron's sampling techniques and also have a say in future reviews of PPM methodology. Last week Arbitron, the MRC, and the minority broadcasters announced that they had reached an agreement under which Arbitron will take steps to improve its minority sampling techniques.

Congress declared victory without really having done anything -- and that's the whole point. Congressional hearings don't seem designed to achieve much of anything; rather, they simply generate a continuous stream of negative publicity for companies accused of political (but not legal) transgressions, until the companies "voluntarily" agree to take whatever measures seem necessary to placate public opinion.

Frankly, Facebook would be well advised to skip this whole process by caving in to the Senators' demands now, because ultimately it can't win. The really dangerous thing about Congressional hearings is that they can go on forever: members of Congress get paid no matter what, and they positively revel in the publicity. They want the hearings to last forever, and the universe of subject matter they can address is effectively infinite.

Meanwhile, ironically enough, Facebook users have already come up with an effective response to the Facebook changes. A message has been making the rounds which reads: "There is a new privacy setting called 'Instant Personalization' that shares data with non-Facebook websites and it is automatically set to 'Allow.' Go to Account > Privacy Settings > Applications and Websites and uncheck 'Allow.' Please copy & repost."

Monday, April 19, 2010

IAB Gets Tough to Play Nice with FTC

The Interactive Advertising Bureau wants to play nice with the Federal Trade Commission. As the online ad industry braces for impending online privacy legislation, the trade association made a key move recently to help the FTC crack down on rogue companies. During the group's annual leadership summit in February, its board of directors and general membership body voted to establish a code of conduct intended to help the government enforce the IAB's privacy guidelines.

Once the IAB establishes the code, its members will be obligated to abide by it. If a company is accused of skirting the rules, the FTC can bring a deceptive practices case against that firm. The goal, explained Mike Zaneis, IAB's VP of public policy, is "to create a federal law enforcement hook."

According to Zaneis, there were no objections raised by board members, who approved the measure unanimously, or by general members attending the conference, who participated in a voice vote. The Self-Regulatory Principles for Online Behavioral Advertising, put forth last July by a broader industry coalition, will provide the basis for the code, which will be written in the next few months, Zaneis said.

The move has flown largely under the radar. Though it may seem a mere procedural change, the establishment of a conduct code will give the organization an enforcement mechanism -- one that some argue is needed to ward off regulatory pressure.

"The key is that we know for the principles to be successful, it has to be a partnership with the Federal Trade Commission," said Zaneis. "It's not necessarily out of the ordinary... It's a major commitment." The FTC mainly has been supportive of the industry's efforts to self-regulate.

Rather than create its own monitoring program, the IAB will rely on the Better Business Bureau to help patrol its members' practices, said Zaneis. The IAB partners with the Better Business Bureau's advertising review body, which is overseeing a broad industry self-regulation initiative, also involving the Association of National Advertisers, built around consumer education and disclosure.

Yet, as the IAB and the broader industry coalition take steps toward enforceable self-regulation, new privacy laws are looming that could have serious effects on online advertising. New privacy legislation is expected in the next few weeks, according to Zaneis, who told ClickZ earlier this week, "We absolutely expect to see a draft bill from Congressman [Rick] Boucher in the next several weeks. They seem to have language and are working through the last few substantive and procedural issues before sharing it more widely." Boucher, a Virginia Democrat, chairs the House Subcommittee on Communications, Technology and the Internet, a key body dealing with online ad-related issues and online privacy.

According to Zaneis, the IAB's board will draft the code of conduct over the next few months, and vote on final approval once it is written. "It's likely to be an evolving document," he said.

Friday, April 16, 2010

Young Adults Say Web Sites Should Be Required To Delete User Data

Debunking the oft-repeated assertion that young people don't care about privacy, new research shows that Web users between the ages of 18 and 24 are highly protective of certain information about themselves.

"With important exceptions, large percentages of young adults are in harmony with older Americans when it comes to sensitivity about online privacy and policy suggestions," states the study, authored by professors at UC Berkeley and the University of Pennsylvania's Annenberg School.

The study was submitted this week to the Federal Trade Commission, which recently concluded a series of three privacy roundtables. The report, which cost $55,000 to commission, was based on a telephone survey of 1,000 Americans.

One of the most significant findings is that 82% of people ages 18-24 say they have refused to give information to a business because they considered it too personal or unnecessary. Overall, 88% of respondents of all ages said the same.

In addition, 88% of respondents between the ages of 18 and 24 say that Web sites and ad companies should be required by law to delete all stored information about individuals. That figure compares to 92% of respondents of all ages who said the same.

What's more, 62% of 18- to 24-year-old respondents say they believe the law should give people the right to learn what information Web sites have about them.

While the findings appear to contradict popular wisdom about young people's attitudes, Berkeley Law School's Chris Hoofnagle says the results are consistent with previous research by social media experts like Danah Boyd. "People who have done qualitative research have said for many years now that young people care very much about their social networking privacy. That's evidenced by the fact that they spend so much time grooming their profile," says Hoofnagle, who was one of the study's authors.

He adds that one reason why young people are perceived as indifferent to privacy is because some say they're not concerned about the use of their data by institutions. "Young people's focus is more about who, among their peers, will access their data," he says, adding that it often isn't until people get older and apply for jobs, or products like health insurance, that they realize how corporations or other entities might use personal data.

Hoofnagle says he believes the findings could affect lawmakers' willingness to enact new online privacy protections. "There's been this assumption that future generations will care less. That has caused some inaction among regulators," he says. "One argument that's frequently employed is the idea that we shouldn't regulate now, because laws passed today would reflect the norms of the 35-year-old attorney who works in Washington, D.C., and not the teen users of Internet services."

Some of the findings seem especially relevant for companies that use behavioral advertising techniques. Thirty-three percent of 18- to 24-year-olds say they often delete cookies, while 25% say they do so sometimes. Among respondents of all ages, 39% say they often delete cookies, while 24% say they sometimes erase them.

Wednesday, April 14, 2010

Report: More Users Delete Flash Cookies

Some Web companies have touted the use of Flash cookies as an alternative to HTTP cookies, largely because Flash cookies tend to be harder to delete and therefore, more persistent. But a new study by start-up Scout Analytics suggests that far more consumers now remove Flash cookies than even one year ago.

In a report to be published today, Scout Analytics says that around 7% of the users who receive its Flash cookies delete them within 30 days. That proportion is more than double the 3% of people who deleted Flash cookies last July, when Scout started examining the issue.

Scout arrived at those figures by examining Web traffic for 55 business-to-business sites that draw a total of around 180,000 users. The company, which helps sites analyze their visitors, only tracks people's behavior within particular sites. Scout doesn't track people as they surf the Web.

"Originally we used HTTP cookies to help with tracking, but found that they were being deleted," says Senior Vice President For Strategy Matt Shanahan. "So we moved to Flash cookies. But we found out there was a deletion problem there as well."

He says that Scout is able to recognize users who have deleted their Flash cookies in two ways. First, he says, many have registered at the sites Scout works with and sign in when they visit those sites. Second, Scout also identifies users based on their browser configurations -- which often are unique enough to serve as digital fingerprints.
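
To make this concrete, here is a minimal sketch of the kind of configuration-based identifier Shanahan describes -- not Scout Analytics' actual method, just an illustration of hashing a few request attributes into a repeatable fingerprint. The header names, the screen-size parameter and the hashing choice are assumptions for illustration only.

```python
# Illustrative sketch only -- not Scout Analytics' actual method.
# Hashes a few browser/request attributes into a repeatable identifier,
# so a returning visitor can be recognized even after cookies are deleted.
import hashlib

def browser_fingerprint(headers: dict, screen: str) -> str:
    """Combine a few request headers and the reported screen size into one hash."""
    parts = [
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
        screen,  # e.g. "1920x1080x24", typically reported by a client-side script
    ]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

# Two visits with the same configuration map to the same fingerprint.
visit = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1) ...",
         "Accept-Language": "en-US,en;q=0.8",
         "Accept-Encoding": "gzip, deflate"}
print(browser_fingerprint(visit, "1920x1080x24"))
```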

Shanahan says the increase in Flash cookie deletions might be partly due to the newest version of Adobe's Flash Player, now in "pre-release," which offers a "private browsing" mode that automatically deletes such cookies at the end of a session.

Flash cookies initially were used to store people's preferences for Flash-based applications like online video players, but some companies began using such cookies for tracking in recent years.

Friday, April 9, 2010

New Study Finds Lower Click Fraud Rates on Social Networking Sites

Study of Ad Campaigns on Top Social Networks Found Click Fraud Rate of 11.5 Percent in Q1 2010; Overall Industry Click Fraud Rate Rises to 17.4 Percent

AUSTIN, Texas--(BUSINESS WIRE)--Click Forensics®, Inc. today released advertising audience quality figures for the first quarter 2010 from the industry’s leading independent online advertising and click fraud data reporting service. Now in its fourth year, the Click Forensics reporting service provides statistically significant data collected from Cost Per Click (CPC) advertising campaigns for both large and small advertisers across all leading search engines as well as comparison shopping engines and social networks. Traffic across more than 300 ad networks is reflected in the data. Key findings for Q1 2010 include:

  • A study of hundreds of online campaigns from a cross-section of advertisers and third-party ad networks showed traffic from leading social networking sites, including MySpace, Facebook, Twitter, and LinkedIn, to have an average overall click fraud rate of 11.5 percent, significantly lower than the industry average.
  • The overall industry average click fraud rate was 17.4 percent. That’s up from 15.3 percent for Q4 2009 and the 13.8 percent rate reported for Q1 2009.
  • In Q1 2010, the countries outside North America with significant CPC traffic producing the greatest volume of click fraud were the Philippines, Ukraine and China, respectively.

“While a handful of suspected click fraud schemes on social networking sites have been alleged by individual advertisers, it’s widely assumed that these sites are less vulnerable to click fraud schemes,” said Paul Pellman, CEO of Click Forensics. “The results of our new study corroborate this by tracking a lower overall click fraud rate on social networks than we’ve ever tracked on traditional PPC venues. Conversely, the overall industry rate seems to be creeping higher, so we recommend marketers continue to be watchful of their campaigns.”

Since 2006, Click Forensics has published online advertising industry data collected from the first independent third-party Cost Per Click (CPC) and online advertising fraud detection service. The service monitors online media traffic across over 300 ad networks as well as billions of clicks from top search engines, comparison shopping engines, social networks, leading publishers and advertiser web sites -- providing the most accurate view of online advertising audience quality.

For more details and to read the full report “Click Fraud Rate Q1 2010,” visit http://www.clickforensics.com/resources/click-fraud-index.html.

About Click Forensics, Inc.
Click Forensics® is the industry leader in audience verification and traffic quality improvement for the online advertising community. Click Forensics provides audience verification and traffic quality management solutions for leading online advertisers, publishers and ad networks, including companies such as Progressive Insurance, GenieKnows, Adknowledge, eZanga, Moxy Media, Turn, Traffic Engine, Vegas.com and many others. The company also regularly publishes industry data on online advertising audience quality. Click Forensics is headquartered in Austin, Texas, and is privately held with funding from Sierra Ventures, Austin Ventures, Shasta Ventures and Stanford University. More information on Click Forensics and its offerings is available at http://www.clickforensics.com/.

Click Forensics and Click Fraud Index are registered trademarks of Click Forensics, Inc. All other company and product names mentioned are used only for identification and may be trademarks or registered trademarks of their respective companies.

Click Fraud: Using Attribution to Mitigate Risk

Thanks to the rise of the Internet over the past decade, we have all heard nightmarish stories of the online world gone bad, from tales of major security breaches at top retailers to website hacks, spammers and related blunders. And fortunately, marketing departments have gone largely unscathed -- until now.

Click fraud is a growing Internet crime costing marketers across the globe millions in lost spend. In fact, Alex Mindlin wrote in The New York Times that 25.8% of the world’s fraudulent ad clicks originate in the United States, and that 44.1% of ad clicks originating from Vietnam are fraudulent.

Click fraud occurs when a person, automated script or computer program imitates a legitimate user’s click on an ad for the purpose of generating a per-click charge, without any actual interest in the ad itself. Sometimes it’s done by fraudsters who want to siphon off a portion of the marketer’s advertising budget.

They either manually click on the links from several computers, use a computer program to imitate manual clicks and deploy it on several computers, or, worst of all, use malicious programs to spread these click-imitating scripts across entire networks, using Trojan code to turn ordinary machines into zombie computers that run the scripts and generate revenue for the scammer.

In some cases, it’s not fraudsters but a known competitor. It’s sneaky and immoral, but these competitors see no problem in clicking away at ads to quickly deplete the daily budgets of others in the market, opening the door for them to bid at lower prices during prime hours, with less competitive pressure.

Most marketers know that click fraud exists, yet very few measure it and fully understand how it affects their campaigns. It is high time for marketers to understand how much is at stake.

 
Know If -- and How -- You're Affected

In analyzing Visual IQ’s customer base, we have learned that marketers are losing an average of about 16.7% of their PPC budgets to fraudsters every day. Some marketers lose up to 45% of their budgets without even knowing it.

Marketers can figure out whether they're being affected by looking at the performance of their campaigns. Low conversion rates, trivial ROI on PPC campaigns, and lagging behind the competition are all possible signs of click fraud.

The share of a campaign budget lost to click fraud varies from marketer to marketer, so each marketer should make their own assessment of how much click fraud affects their business.

Take Simple Steps Towards Prevention

 
Prevention is the best way to deal with click fraud: it can be quite difficult to prove, and it is even more difficult to get back the money lost. Instead of looking back with regret, be proactive about preventing click fraud using the following simple methods:

  • Make sure that your PPC campaigns are limited to the geographies where you sell your products and services.
  • Avoid PPC campaigns in geographical areas that are prone to more click fraud incidents, like Vietnam and Nigeria.
  • Set daily budgets. Watch how you spend your daily budget hour by hour and flag any activity that looks suspicious (see the sketch after this list).
  • Compare the conversion rates of your PPC campaign with those of your other campaigns.
  • Tune campaign parameters on a regular basis to avoid potential click fraud.
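
As a minimal illustration of the hour-by-hour monitoring suggested above, the sketch below flags hours whose click volume runs far above the campaign's hourly average. The data layout and the 3x threshold are assumptions for illustration, not a prescribed rule.

```python
# Minimal sketch (assumed data layout): flag hours whose click volume
# spikes well above the campaign's hourly average -- a possible sign of click fraud.
from statistics import mean

def flag_suspicious_hours(clicks_per_hour, threshold=3.0):
    """Return hours whose clicks exceed `threshold` times the hourly average."""
    avg = mean(clicks_per_hour.values())
    return {hour: n for hour, n in clicks_per_hour.items() if n > threshold * avg}

# Example: a quiet campaign with one suspicious spike at 3 a.m.
hourly = {0: 12, 1: 9, 2: 11, 3: 240, 4: 10, 5: 14}
print(flag_suspicious_hours(hourly))  # {3: 240}
```
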
Tackle Click Fraud With Attribution Technology

While the above methods are a good start, the best way to prevent click fraud is by tracking your clicks and analyzing their patterns. Many of today’s attribution technologies can implement click fraud measurement and analysis.

These technologies remove much of the manual work above, while backing up suspicions of fraud with hard evidence. The most effective technologies leverage attribution to prevent click fraud through the following steps (a minimal sketch of the first few steps follows the list):

1. Collecting detailed attributes of every click, including keywords, geographic locations, IP addresses, time of day, domains, ISPs and publishers.

2. Collecting the engagement stack of each user. An individual user’s engagement stack includes the timestamps of every impression, click, conversion and ad consumption across different channels such as email, online display, search and affiliates.

3. Analyzing the click attributes and engagement stacks to find clusters of potentially fraudulent clicks.

4. Grouping these clicks into categories such as high, medium or low propensity to be fraudulent.

5. Quantifying the damage done by each category -- including an assessment of the damage to the business and the losses to the campaigns.

6. Using advanced modeling techniques to find the publishers, geographical areas, IP address groups and ISPs that produce the most fraudulent clicks.

7. Based on the analytical insights from the models, generating recommendations for media planners to prevent fraudulent clicks. This step is vital if marketers are to take effective action to mitigate potential violations.
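
As a rough sketch of how steps 1 through 4 might look in practice, the snippet below groups clicks by a couple of the attributes listed above and scores each group's propensity to be fraudulent. The field names, thresholds and scoring rules are illustrative assumptions, not any vendor's actual model.

```python
# Illustrative sketch of steps 1-4 above (assumed fields and thresholds):
# group clicks by source and score each group's propensity to be fraudulent.
from collections import defaultdict

def score_click_groups(clicks):
    """clicks: list of dicts with 'ip', 'publisher', 'timestamp' (seconds), 'converted'."""
    groups = defaultdict(list)
    for c in clicks:
        groups[(c["ip"], c["publisher"])].append(c)

    scored = {}
    for key, group in groups.items():
        times = sorted(c["timestamp"] for c in group)
        gaps = [b - a for a, b in zip(times, times[1:])]
        rapid_fire = bool(gaps) and min(gaps) < 2          # clicks arriving seconds apart
        never_converts = not any(c["converted"] for c in group)
        if len(group) >= 10 and rapid_fire and never_converts:
            scored[key] = "high"
        elif len(group) >= 5 and never_converts:
            scored[key] = "medium"
        else:
            scored[key] = "low"
    return scored
```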

 
While it is impossible to prevent 100% of click fraud, the problem can be largely contained. Marketers with advanced attribution technologies are preventing up to 70% of it and saving millions of dollars every year.

John Prescott advocates Google click fraud

Followers of UK politics are well used to the major parties undertaking ‘dirty tricks’ campaigns ahead of general elections, but Labour MP John Prescott has gone a stage further in the run-up to this year’s election.

Prescott has urged his Twitter followers to go to Google, type in terms related to the election so that the Tory party’s AdWords ads appear, and then click on them to waste the Conservatives’ AdWords budget.

Clicks on AdWords cost the advertiser every time a click is made, and once the daily budget is exceeded, the ads stop appearing. However, deliberately clicking on AdWords ads to stop a competitor’s ads from appearing is click fraud, and Google takes this very seriously. There are also measures in place to detect when multiple clicks come from the same source, or when patterns of clicks emerge that are simply designed to use up an advertiser’s budget. It is therefore likely that Labour’s efforts wouldn’t have dented David Cameron’s AdWords budget too much.

According to the Financial Times, the Tories were bidding on parliamentary search terms, such as ‘budget’ and ‘hung parliament’. The FT also stated that the Tory party was bidding on specific geo terms for local constituencies, which works out much cheaper, as geo-targeting your ads is a better way to get results. For example, searching for ‘General Election Cheshire’ or ‘General Election Wirral’ is far less common, and as such those terms are less competitive and cheaper on AdWords than appearing for ‘General Election’.

Labour, meanwhile, has a smaller advertising budget than the Tory party, so it has reportedly been spending its money on SEO and on an effort to become a Google News publisher. Hopefully Labour started its SEO campaign some time ago, as in an area as competitive as politics it can take a long time to garner natural search rankings.

34% increase recorded in the attempted click fraud rate!

A new report from Anchor Intelligence shows that the average attempted click fraud rate jumped to 29.2% in Q1 2010, up from 25.7% in Q4 2009 -- and a 34% increase over Q1 2009 on a year-over-year basis.

Anchor Intelligence says that the current rate of attempted click fraud likely reflects a dramatic growth in botnet scale and volume around the globe.

The report reveals that the highest attempted click fraud rates were recorded in Vietnam (35.4%), Australia (35.2%), and the U.S. (35%). Most of this came from high-velocity botnet traffic and coordinated click fraud rings.

Another Quarter, Another Jump in Click Fraud

Anchor Intelligence has released its report on Q1 2010 traffic quality and as expected, click fraud has increased yet again across the web.

Anchor is reporting a fraud rate of 29.2 percent for the first three months of this year, up from the 25.7 percent click fraud rate in the final quarter of 2009 -- an increase of almost 14 percent. It also represents a 34 percent increase in click fraud from the first quarter of 2009.
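
For readers puzzling over the two percentages, the arithmetic works out as follows: the near-14 percent figure is the quarter-over-quarter change from Q4 2009's 25.7 percent, while the 34 percent figure is measured against Q1 2009, which Anchor's numbers imply was roughly 21.8 percent. A quick check:

```python
# Quick arithmetic check of the two increases cited in the Anchor report.
q1_2010, q4_2009 = 29.2, 25.7

qoq = (q1_2010 - q4_2009) / q4_2009 * 100
print(f"Quarter-over-quarter increase: {qoq:.1f}%")     # ~13.6%, i.e. "almost 14 percent"

# The stated 34% year-over-year rise implies a Q1 2009 rate of roughly:
implied_q1_2009 = q1_2010 / 1.34
print(f"Implied Q1 2009 rate: {implied_q1_2009:.1f}%")  # ~21.8%
```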

According to the report, the continued rise in click fraud is largely due to the dramatic growth of botnets in both scale and volume around the world, and we're inclined to agree. Click fraud rates have been rising steadily for as long as Ecommerce Junkie has been writing about them, and it's no coincidence that botnets have become an increasingly large menace over that time period as well.

The report from Anchor also includes data on traffic quality rates by country, with Vietnam (35.4 percent), Australia (35.2 percent) and the U.S. (35 percent) making up the top three for highest rates of click fraud among 30 countries across the globe. Again, botnets and click-fraud rings are likely the biggest cause of fraudulent traffic in these countries. The United Kingdom has been hit especially hard by botnet activity over the last six months, with click fraud rates there rising to 32 percent this quarter, up from only 18 percent in Q4 2009.

“As Internet usage has grown in countries lacking appropriate cybersecurity measures, more and more computers have become infected with malware and used as click fraud zombies,” said Ken Miller, CEO of Anchor Intelligence. “Through this report, we hope to convey the importance of advertising with ad networks and search engines that partner with third-parties such as Anchor to certify their traffic quality.”

Admittedly, it has been a rough few months for cyber security overall, which probably also explains the continued rise in fraud. There have been recent reports from McAfee and Google about a rise in cyber attacks against blogs in Vietnam that were critical of certain mining efforts. And of course, more than thirty companies (including Google) were victims of cyber security breaches originating out of China back in December and January.

Despite the fact that the U.S. economy is beginning to rebound, businesses continue to tread cautiously when it comes to their online advertising operations, and click fraud is a big reason why. We've heard of instances of advertisers being charged extra for multiple clicks from the same web user, which is just one example of how damaging and unfair the wrong kind of advertising activity can be to retailers and other web marketers. As always, we strongly recommend that you do your research before embarking on an online advertising campaign. Once you have a campaign going, we also suggest parsing the clicks and data from your traffic server logs yourself, instead of relying on the third parties you're advertising with, who may offer tracking software or tools as part of their packages.
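
If you do decide to parse your own server logs as suggested above, even a very simple pass can surface repeat clickers. The sketch below assumes a combined-format access log and simply counts ad-click requests per IP address; the log path, URL pattern and threshold are placeholders, not a recommended configuration.

```python
# Minimal sketch: count ad-click requests per IP in a combined-format access log.
# The path, URL pattern and threshold below are placeholders for illustration.
from collections import Counter

def repeat_clickers(log_path, click_path="/ad/click", min_clicks=20):
    """Return IPs that requested the ad-click URL at least `min_clicks` times."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) > 6 and click_path in parts[6]:  # request URL field
                counts[parts[0]] += 1                      # client IP field
    return {ip: n for ip, n in counts.items() if n >= min_clicks}

print(repeat_clickers("access.log"))
```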

We’ll keep tabs on click fraud data and cyber security news as it arises. Leave us your thoughts and comments below.

Thursday, April 8, 2010

Watchdogs Ask FTC To Probe 'Behavioral Targeting On Steroids'

A coalition of advocacy organizations led by the Center for Digital Democracy is asking the Federal Trade Commission to investigate companies that are merging online and offline data about Web users in order to serve targeted ads to highly specific audiences.

Advocates have filed other complaints about online behavioral advertising, or serving users ads based on sites visited, but they say that newer methods of ad targeting are especially problematic -- in part because of the extremely detailed user profiles that result from integrating online and offline information.

"It's behavioral targeting on steroids," says Jeff Chester, CDD's executive director. "Online marketers have made what was science fiction in 'Minority Report' now a reality. They have created a stunning and impressive -- but deeply disturbing -- automated system that, while boosting revenues and efficiency, has very serious consequences to consumers."

The complaint alleges that the "massive and stealth data collection apparatus" threatens privacy and also "robs individual users of the ability to reap the financial benefits of their own data."

Joining CDD in the filing are the U.S. Public Interest Research Group and the World Privacy Forum. The groups are asking for a probe of a host of Web companies, including Microsoft, Google and Yahoo.

They are asking the FTC to require companies that are "involved in real-time online tracking and auction bidding, including providing related data optimization" to obtain consumers' opt-in consent.

"The online marketing industry has to begin paying attention to this notion of consumer empowerment," Chester says.

In addition, the groups argue that consumers should reap some financial reward when data about them is used to convince marketers to pay a premium for ads. "The availability of so-called free content is an insufficient return to a consumer for their loss of privacy, including their autonomy," they allege.

The online ad industry generally argues that targeting poses no privacy threat because companies don't know individuals' names or other identifying information. Also, they say, self-regulatory principles require companies to allow users to opt out of targeting.

But the advocates specifically challenge the idea that targeting is anonymous simply because marketers don't have so-called personally identifiable information. The groups point to a recent spate of deals, such as eXelate's arrangement to add Nielsen's PRIZM data to users' cookies, to show that companies are compiling far more detailed profiles than in the past.

"The incorporation of outside data, including behavioral information, on a single user or set of users, illustrates the need for the FTC to quickly clarify to marketers that they can no longer hide behind the flimsy excuse that targeting is ok because it isn't so-called personally identifiable," the complaint alleges.

In 2006, the CDD and U.S. Public Interest Research Group filed an FTC complaint that kicked off much of the current debate in Washington about behavioral targeting and privacy.

Wednesday, April 7, 2010

Is Your Freedom of Speech at Risk? Well, Feds Say Bring On the Comments

At the order of the Obama administration, the Federal government has adopted a policy that will make it much easier for Federal agencies to use social media, according to OMBWatch, a site that follows the doings of the Office of Management and Budget. This is another major step forward for officialdom in the social media arena -- and more proof that even the biggest, most risk-averse organizations can find value in social media strategies.

Basically, the OMB has issued a memo which waives cumbersome paperwork requirements for government communications that solicit or enable responses or feedback from private citizens. This paperwork was, ironically enough, required under the terms of the Paperwork Reduction Act of 1980.

Like all good Congressional legislation, the PRA seemed to accomplish the opposite of its name -- at least in this case -- by demanding copious documentation for any government publication that seeks input from regular folks (that's us). Specifically, the PRA requires Federal agencies to obtain a control number from the OMB for every form requesting public feedback, which is enough of a disincentive to deter many Federal agencies from undertaking any truly interactive projects to, say, solicit suggestions about improving their Web pages.

Happily, the OMB has waived these requirements in accord with the Open Government Directive issued by the White House on December 8, 2009, according to OMBWatch. The waiver specifically exempts Web-based interactive technologies that enable "unstructured" feedback from the public -- a category that includes visitor comments on Web pages, as well as online initiatives from Federal agencies using sites like Facebook, Twitter, and so on. Anything soliciting "structured" feedback -- online forms, questionnaires, etc. -- is still subject to the terms of the PRA.

Thus the Feds have made a fairly important distinction between two types of communication, which didn't even really exist in 1980 when the PRA was passed. Back then, you had your questionnaires and forms, which were mailed to individuals, who for their part could write a letter and mail it to a Federal agency if they were motivated (read, enraged) enough. The "unstructured" feedback which might appear on agency Web pages clearly resembles the latter more than the former, and so should be exempt from any bureaucratic oversight which applies to communications originating with the government.

Tuesday, April 6, 2010

FTC Reviewing COPPA Rules

The FTC is seeking comment on whether changes should be made to rules imposing certain requirements on Web sites directed at children, including a mandate that they obtain parental consent before collecting personal information from children under the age of 13.

In a Federal Register notice Monday, the FTC said the Children's Online Privacy Protection Act, which went into effect in 2000, requires the agency to review the rules implementing the law every five years. While the agency declined to make changes in 2005, when it first reviewed the rules for Web sites aimed at children under 13, the FTC said it now "believes that changes to the online environment over the past five years, including but not limited to children's increasing use of mobile technology to access the Internet, warrant reexamining the rule at this time."

In addition to parental consent, the current FTC rules imposed under COPPA also require Web sites aimed at children under 13 to secure the information they collect from children, and bar them from requiring children to provide more information than is "reasonably necessary to participate" in activities provided on the site.

In its request for comments, which are due by June 30, the FTC is asking for input on issues such as whether the definition of "Internet" should be expanded to include mobile communications, interactive television, gaming and other activities, and whether the definition of "personal information" should also be expanded to include persistent IP addresses, mobile geolocation data or information used to help target ads at specific Internet users. Other issues the FTC is seeking comment on include whether changes should be made to the requirements that information be kept secure and private; the requirement that allows parents to review or delete personal information about their children; and the provision barring the linking of participation in activities on a children's Web site to the collection of personal information.