Thursday, September 30, 2010

Click Fraud to Play Fraud

Interesting discussions lately in ReelSEO and IndustryPace on an issue we can call ‘Play Fraud’. Think of click fraud, but with video plays instead of clicks.

It shouldn’t surprise any of us that this issue has come up. Many video platforms are paid by the play, regardless of whether the viewer watches for any period of time and despite declining CDN costs to deliver content. Ad networks delivering video are paid by the play, so they have a vested interest in defining ‘play’ as liberally as possible to increase plays and revenue. It’s increasingly common to see video ads that auto-play below the fold on a page, communicating nothing to the audience and creating no value for the advertiser…while counting as a video view.

My unscientific, hard-to-quantify definition of a video view is:

1) A user engages with the content. Someone clicks ‘Play’ on purpose, not accidentally, and normally does so in order to see a given piece of content. Do pre-roll ads count? Maybe. But they have to be relevant and targeted to meet the ‘engages’ criterion. Students sit through lectures; they’re engaged by great instructors. Pre-roll can be either. Do I simply tolerate the Marriott pre-roll ad as if it were a lecture, do I cut class by clicking away, or do I take away the desired message?

2) A user stays engaged with the content long enough to have a reasonable probability of affecting their post-viewing behavior. With eCommerce video, that means watching a video on a product or category page long enough for it to assist in the buying decision. Other advertising may work higher up in the funnel and hence may have different goals, but the user still has to watch long enough to affect those goals. Do I watch the Marriott ad long enough to consider staying there on my next business trip?

Yeah…I know you can’t quantify that as easily as initiation of a play. But that’s what a view is from a publisher’s perspective. And the further a billing definition of ‘view’ is from a publisher’s definition, the more friction there will be in the ad-supported and view-driven areas of the industry. Which is why non-view-based monetization models may have a very bright future.

Friday, September 24, 2010

A Browser Cookie That Won't Go Away?

"Evercookie" is the browser cookie that just won't go away. If you're concerned about having your Web browsing history tracked, you, like most people, will probably delete your cookies and clear your browser's cache. However, evercookie, written in JavaScript, produces "extremely persistent cookies" that can identify a client even after you've removed standard or Flash cookies.
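The mechanism behind evercookie can be sketched in a few lines. This is a simplified illustration of the idea, not the real library's API: the same identifier is written to several independent storage mechanisms, and on read-back any surviving copy is used to repopulate the ones that were cleared. Plain objects stand in here for the real stores (HTTP cookies, Flash LSOs, localStorage, window.name, cached images, and so on) so the logic is self-contained.

```javascript
// Write the same identifier into every available store.
function writeEverywhere(stores, key, value) {
  for (const store of stores) store[key] = value;
}

// Find any store that still holds the value, then copy it back into
// every store, undoing partial deletions.
function readAndRestore(stores, key) {
  const survivor = stores.find((s) => key in s);
  if (!survivor) return null;
  writeEverywhere(stores, key, survivor[key]);
  return survivor[key];
}

// Simulate: three stores, user clears two of them.
const stores = [{}, {}, {}];
writeEverywhere(stores, "uid", "abc123");
delete stores[0].uid;       // user clears standard cookies
delete stores[1].uid;       // user clears Flash cookies
const uid = readAndRestore(stores, "uid");
console.log(uid);           // "abc123": the id came back
console.log(stores[0].uid); // "abc123": the cleared store was repopulated
```

Because deletion only defeats the identifier if every store is wiped at the same time, it keeps coming back, which is what makes evercookie so hard to remove.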

By Alessondra Springmann

Thursday, September 23, 2010

Legal update: What email marketers need to know

By now, every email marketer should be very familiar with the CAN-SPAM Act, using it to create guidelines and best practices for marketing emails. In the coming 12 to 36 months, however, there will be several more regulations coming into effect that could necessitate some tweaks to your program, said Dennis Dayman, chief privacy and deliverability officer at Eloqua, a provider of marketing automation solutions.

“The U.S. is looking at some draft bills that will change the use of personal data and opt-in mechanisms,” Dayman said. “There are also other things going on in the European Union and Canada that will potentially create fundamental changes.” Because of the global nature of business today, these changes should have just as much impact on U.S. marketers as on marketers abroad, he said. “Transactions move across the world. There’s a very good chance that at some point during your day you’re touching data from someone that lives in the E.U.,” Dayman said.

The draft bill in the U.S.—H.R. 5777, also known as the Best Practices Act, sponsored by Rick Boucher (D-Va.) and Cliff Stearns (R-Fla.)—could change the way email marketers use behavioral data because the bill calls for opt-out consent for “collection and use of covered information by a covered entity.” The bill would require websites to tell visitors how their information is being collected, how it will be used, how long it will be kept and whether or not it will be shared with others.

Additionally, the bill would allow prospects to opt out of having their personal data and behavior tracked and stored by websites and ad networks. That would make it impossible for marketers, for example, to send people who have opted in to an email program additional materials based on where they browsed on a site after leaving the original email.

By next summer, it will be illegal in the E.U. to put a cookie on someone’s computer without explicit permission, Dayman said. That will affect email marketers that send someone to a landing page that sets a cookie. “It doesn’t ban cookies outright,” he said. “Marketers don’t realize that, if they are doing cross-border marketing, now they will need a check box for email and Web pages that says, ‘Can I send you something? Can I put a cookie on your PC?’ Those are the types of changes we’re going to have to make to stay compliant.”
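The opt-in pattern Dayman describes reduces to a simple gate. This is a hypothetical sketch, not any real API: the function and field names are invented, and a plain object stands in for the browser's cookie jar. A tracking identifier is stored only after the visitor has explicitly granted permission, for example via a checked consent box.

```javascript
// Store a tracking id only if the visitor explicitly consented.
function handleLandingPageVisit(cookieJar, consent, visitorId) {
  if (!consent.cookiesAllowed) {
    return { tracked: false }; // no explicit permission: record nothing
  }
  cookieJar.visitorId = visitorId;
  return { tracked: true };
}

// Without consent nothing is stored; with consent the id is set.
const jar = {};
const before = handleLandingPageVisit(jar, { cookiesAllowed: false }, "v-42");
const after = handleLandingPageVisit(jar, { cookiesAllowed: true }, "v-42");
console.log(before.tracked, after.tracked, jar.visitorId);
```

The design point is that consent is checked before any write, so a page that never receives permission never places the cookie at all.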

The bottom line, Dayman said, is that marketers will need to start doing what they should have been doing all along. “Marketers are going to have to get hypertransparent about telling people how you are collecting their data and what you are going to do with their information once you have it,” he said.

To that end, companies should update their privacy policies, using plain language and bulleted lists to make them easy to read and understand. “There’s no more room for small print or legalese,” he said. “People are more likely to give you their data and keep receiving emails and other marketing messages when you are upfront with them from the start and don’t change things once you’ve gotten them to sign up.”

By Karen J. Bannan

Thursday, September 9, 2010

Watchdog Cautions FCC Against Sliding Down Slippery 'Paid Prioritization' Slope

The influential watchdog group Center for Democracy & Technology has told the Federal Communications Commission that allowing companies to pay Internet service providers for prioritized delivery of content would hurt the Web's openness.

"Ensuring that operators do not engage in paid prioritization is fundamental to ensuring that the Internet continues to operate as an open, interconnected platform for commerce, speech, and innovation," the group said Tuesday in a letter filed with the Federal Communications Commission.

In recent weeks, net neutrality advocates, ISPs and policymakers have increasingly focused on "paid prioritization" -- largely because the concept is central to Google and Verizon's joint proposal for neutrality rules. The two companies proposed that ISPs should be prohibited from degrading or prioritizing traffic that currently travels over the so-called public Internet, but allowed to create fast lanes for managed services like telemedicine, distance learning, and new entertainment offerings.

Neutrality advocates generally oppose that plan, arguing that allowing ISPs to create fast lanes would disadvantage innovators and start-ups that can't afford to pay for prioritized delivery.

Last week, AT&T submitted a report to the FCC arguing that paid prioritization already exists online and that the system is "expressly contemplated" by the Internet Engineering Task Force. "AT&T alone has hundreds of third-party customers for such services," the company said, adding that these customers include "healthcare providers, community service organizations, restaurant chains, car dealers, electric utilities, banks, municipalities, security/alarm companies, hotels, labor unions, charities, and video-relay service providers."

The CDT's filing on Tuesday was in response to that claim by AT&T. CDT argues that AT&T "mischaracterized" the intent of the Internet Engineering Task Force, or IETF, and its part in shaping policy. "The IETF and other technical standards bodies play a crucial role in designing the protocols that allow networks and devices to interoperate seamlessly, but it is a mistake to project business and policy decisions, about paid prioritization or any other matter, onto technical standards that make no such claims," the CDT says.

By Wendy Davis

Friday, September 3, 2010

Ten Fallacies About Web Privacy

Privacy on the Web is a constant issue for public discussion—and Congress is always considering more regulations on the use of information about people's habits, interests or preferences on the Internet. Unfortunately, these discussions lead to many misconceptions. Here are 10 of the most important:

1) Privacy is free. Many privacy advocates believe it is a free lunch—that is, consumers can obtain more privacy without giving up anything. Not so. There is a strong trade-off between privacy and information: The more privacy consumers have, the less information is available for use in the economy. Since information helps markets work better, the cost of privacy is less efficient markets.

2) If there are costs of privacy, they are borne by companies. Many who do admit that privacy regulations restricting the use of information about consumers have costs believe those costs are borne entirely by firms. Yet consumers get tremendous benefits from the use of information.

Think of all the free stuff on the Web: newspapers, search engines, stock prices, sports scores, maps and much more. Google alone lists more than 50 free services—all ultimately funded by targeted advertising based on the use of information. If revenues from advertising are reduced or if costs increase, then fewer such services will be provided.

3) If consumers have less control over information, then firms must gain and consumers must lose. When firms have better information, they can target advertising better to consumers—who thereby get better and more useful information more quickly. Likewise, when information is used for other purposes—for example, in credit rating—then the cost of credit for all consumers will decrease.

4) Information use is "all or nothing." Many say that firms such as Google will continue to provide services even if their use of information is curtailed. This is sometimes true, but the services will be lower-quality and less valuable to consumers as information use is more restricted.

For example, search engines can better target searches if they know what searchers are looking for. (Google's "Did you mean . . ." to correct typos is a familiar example.) Keeping a past history of searches provides exactly this information. Shorter retained search histories mean less effective targeting.
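To make that point concrete, here is a toy sketch, in no way Google's actual algorithm, of how a retained query history supports "Did you mean" style corrections: suggest the past query with the smallest edit distance to a newly typed, possibly misspelled, query. The shorter the retained history, the less the matcher has to work with.

```javascript
// Classic Levenshtein edit distance via dynamic programming.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Suggest the past query closest to the (possibly mistyped) new one.
function didYouMean(history, query) {
  let best = null;
  let bestDist = Infinity;
  for (const past of history) {
    const d = editDistance(past, query);
    if (d < bestDist) {
      bestDist = d;
      best = past;
    }
  }
  return best;
}

const history = ["net neutrality", "browser cookies", "email marketing"];
console.log(didYouMean(history, "browsr cookeis")); // "browser cookies"
```

Trim the history and the candidate pool shrinks with it, which is the mechanical sense in which shorter retained histories mean less effective targeting.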

5) If consumers have less privacy, then someone will know things about them that they may want to keep secret. Most information is used anonymously. To the extent that things are "known" about consumers, they are known by computers. This notion is counterintuitive; we are not used to the concept that something can be known and at the same time no person knows it. But this is true of much online information.

6) Information can be used for price discrimination (differential pricing), which will harm consumers. For example, it might be possible to use a history of past purchases to tell which consumers might place a higher value on a particular good. The welfare implications of discriminatory pricing in general are ambiguous. But if price discrimination makes it possible for firms to provide goods and services that would otherwise not be available (which is common for virtual goods and services such as software, including cell phone apps), then consumers unambiguously benefit. Suppose two buyers value an app at $10 and $3 and it costs $11 to develop: no single price covers the cost, but charging each buyer his own valuation yields $13, and the app gets built.

7) If consumers knew how information about them was being used, they would be irate. When something (such as tainted food) actually harms consumers, they learn about the sources of the harm. But in spite of warnings by privacy advocates, consumers don't bother to learn about information use on the Web precisely because there is no harm from the way it is used.

8) Increasing privacy leads to greater safety and less risk. The opposite is true. Firms can use information to verify identity and reduce Internet crime and identity theft. Think of being called by a credit-card provider and asked a series of questions when using your card in an unfamiliar location, such as on a vacation. If this information is not available, then less verification can occur and risk may actually increase.

9) Restricting the use of information (such as by mandating consumer "opt-in") will benefit consumers. In fact, since the use of information is generally benign and valuable, policies that lead to less information being used are generally harmful.

10) Targeted advertising leads people to buy stuff they don't want or need. This belief is inconsistent with the basis of a market economy. A market economy exists because buyers and sellers both benefit from voluntary transactions. If this were not true, then a planned economy would be more efficient—and we have all seen how that works.

By Paul H. Rubin

Mr. Rubin teaches economics at Emory University.