AdWords click fraud thoughts
The bloggers at Inside AdWords recently posted significant information about their invalid click detection efforts. Among the most important points are that the number of invalid ("impact") clicks is less than 10% of all clicks, and that this proportion has remained fairly constant over time. They also state that the clicks caught by their investigative team amount to less than 0.02% of all clicks.
These numbers sound encouraging, and by and large they are. In particular, the 0.02% makes it sound as if the automated filters catch almost all fraudulent and otherwise invalid clicks. While I don't doubt that they are extremely effective, the 0.02% is misleading taken by itself. It doesn't tell the whole story, for two primary reasons:
Only a Few Advertisers Request Click Fraud Investigations
The 0.02% isn't the right number for judging the effectiveness of the automated filters, because it only reflects invalid clicks for the small percentage of advertisers who request — and get — a review. The entire 0.02% comes from advertisers who collectively account for some unknown share of all AdWords clicks. If this group accounts for 0.2% of all clicks (still a huge number of clicks), then one out of every ten of their clicks slipped past the automated system and was later determined to be invalid. That's a 10% miss rate, which is not effective at all when the overall estimate of invalid clicks is itself less than 10%. If this group accounts for 1% of all clicks, then only one in fifty of their clicks was missed, or 2%, a vast improvement. Note the way the math works: the smaller the share of clicks coming from advertisers who contest them, the worse the filters look (see the sketch below). We don't know what percentage of total clicks is actually reviewed, but it would surprise me if it were anywhere near 1%, so the automated filters likely still leave room for at least 2% click fraud. Until we know what percentage of clicks is investigated, we still don't know the real percentage of fraud caught by the filters that most of us rely upon.
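Here is the back-of-envelope arithmetic from the paragraph above as a small sketch. The 0.02% figure is Google's; the 0.2% and 1% "share of clicks from advertisers who request reviews" values are purely hypothetical, since Google does not publish that number.

```python
# Implied miss rate of the automated filters, given an assumed share of all
# clicks that comes from advertisers who request (and get) investigations.
# REIMBURSED_SHARE is Google's published figure; the reviewed shares are guesses.

REIMBURSED_SHARE = 0.0002  # clicks found invalid via investigations, as a share of ALL clicks

def implied_miss_rate(reviewed_share: float) -> float:
    """Fraction of the reviewing advertisers' clicks that the filters missed,
    given the share of all clicks those advertisers represent."""
    return REIMBURSED_SHARE / reviewed_share

for reviewed_share in (0.002, 0.01):  # hypothetical: 0.2% and 1% of all clicks
    print(f"reviewed share {reviewed_share:.1%} -> implied miss rate {implied_miss_rate(reviewed_share):.1%}")

# reviewed share 0.2% -> implied miss rate 10.0%
# reviewed share 1.0% -> implied miss rate 2.0%
```

The smaller the reviewed share, the larger the implied miss rate, which is exactly why the 0.02% figure says so little on its own.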
Click Fraud Can Be Hard To Identify
For most clicks there is simply too little information to determine validity. A single click from a home user might be someone shopping for a product or service, or it might be a competitor racking up a one-click charge against a rival after hours. Just as there is no way to tell whether that single click is fraudulent, there is no way to evaluate a system for catching such clicks: an effective system catches the easy cases, and what remains are precisely the clicks that are difficult or impossible to evaluate. In other words, the better the system for catching fraud, the harder it becomes to judge which of the remaining clicks are fraudulent; we can never know the real amount.
I believe that Google is working very hard against click fraud, and their point is well taken:
“At Google’s current revenue rate, every percentage point of invalid clicks we throw out represents over $100 million/year in potential revenue foregone.”
After all, it would scare off a lot of advertising revenue if fraud were thought to be rampant in AdWords. Yet we don’t like the use of the 0.02% as a success metric without elaboration, or any inference (intended or not) that the reviews catch all fraud that the automated systems don’t.
What To Do?
We are not alarmists, and we think that fraud concerns are often overstated. The fact is that some click fraud will always be present and undetectable, so it is largely a cost of doing business for advertisers. Do what you can to avoid it, but reduce your stress by treating it as a percentage of your bids.
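A minimal sketch of that "percentage of your bids" mindset: assume some fraction of paid clicks is undetected fraud and budget for it. The 2% rate used here is only the illustrative figure from the discussion above, not a known value, and the function names are hypothetical.

```python
# Price an assumed undetected-fraud rate into your bidding, rather than
# worrying about each individual click.

ASSUMED_FRAUD_RATE = 0.02  # hypothetical share of paid clicks that are invalid but undetected

def effective_cpc(nominal_cpc: float, fraud_rate: float = ASSUMED_FRAUD_RATE) -> float:
    """Cost per *legitimate* click once undetected invalid clicks are priced in."""
    return nominal_cpc / (1.0 - fraud_rate)

def adjusted_max_bid(target_cpc: float, fraud_rate: float = ASSUMED_FRAUD_RATE) -> float:
    """Bid to enter so the effective cost per legitimate click stays at target_cpc."""
    return target_cpc * (1.0 - fraud_rate)

print(f"effective CPC on a $1.00 bid: ${effective_cpc(1.00):.3f}")
print(f"max bid to keep a $1.00 effective CPC: ${adjusted_max_bid(1.00):.3f}")
```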