We are all about marketing, data, analysis, innovation and technology

Saturday, July 31, 2010

Be careful when looking at those numbers from ComScore, Nielsen and others.

Last semester I began having my students at NYU do month-to-month comparisons of search share for the big three (Google, Yahoo and Bing). I became particularly interested in this exercise when I noticed that some sources were reporting Google losing share to Bing while others were not.

I thought, how can this be? The major players in the online audience measurement space (ComScore, Nielsen and Experian) all have gigantic panels. So I decided to dig a bit further.

Until my students conducted this exercise, I did not even realize how far off Experian was from the others. I thought, once again, how can this be?

Take a look at the charts below, which one of my students created for my summer Web Analytics class.

Upon examination of these charts we note two things.

First, we note that one of the three sources (Experian) differs from the others on Google's share of search by almost 7 percentage points. Wow! That is definitely a significant difference, even with panels in the millions. I wonder who is right.

Secondly, we note that two of the sources (Nielsen and Experian) show Google's search share declining from May to June, while the third (ComScore) shows it rising. Again, I wonder who is right.
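The exercise my students ran boils down to a simple month-over-month delta per source. Here is a minimal sketch of that comparison, using made-up share figures purely for illustration (these are not the actual panel numbers from the charts):

```python
# Hypothetical Google share-of-search figures by source (illustrative only,
# NOT the real ComScore/Nielsen/Experian data).
shares = {
    "ComScore": {"May": 63.7, "June": 63.9},
    "Nielsen":  {"May": 65.1, "June": 64.8},
    "Experian": {"May": 71.6, "June": 71.2},
}

for source, s in shares.items():
    delta = s["June"] - s["May"]  # change in percentage points
    direction = "up" if delta > 0 else "down"
    print(f"{source}: {delta:+.1f} pts ({direction})")
```

With numbers like these, one source would headline "Google gains share" while the other two would headline a loss, even though every estimate moved by less than half a percentage point.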

While I was thinking about this more and contemplating a blog entry on the topic, one of my summer students flipped me an article dealing with this exact issue. It was a story she had just read in the Los Angeles Times titled:

Hulu's sharp decline in viewership underscores inconsistency in measuring size of online audiences.

The article reveals that when ComScore changed its measurement methodology, Hulu's viewership numbers plunged from 43.5 million in May to 24 million in June. It goes on to discuss these wildly divergent audience numbers and how all the players have been working hard to improve their methodologies.

Let me ask you a question.

Do you think it might be a bit harder to create a representative panel to measure the content we consume on the web than to measure the content consumed on cable or in print?


I have always told my students that measurements on the web are most likely subject to sources of variation not typically seen in more stable "environments." After all, the web has very long, dark tails that we may or may not wander down on any given day, depending on what comes up in organic or paid search, or on the links our friends share via a Facebook post. It is pretty vast and dynamic out there on the Internet. So, in a way, we should not be surprised to see this variation.

So until measurement becomes more stable and we can account for this hard-to-quantify variation, what can you do?

Plan accordingly and use results from various sources, not just one.
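One simple way to act on that advice is to treat each source as a single noisy estimate and report a range rather than one number. A quick sketch, again with hypothetical figures rather than the real panel data:

```python
# Hypothetical Google share estimates from three panels (illustrative only).
estimates = {"ComScore": 63.9, "Nielsen": 64.8, "Experian": 71.2}

values = list(estimates.values())
low, high = min(values), max(values)
mean = sum(values) / len(values)

print(f"Google share of search: roughly {mean:.1f}% "
      f"(sources range from {low:.1f}% to {high:.1f}%)")
```

Reporting the spread across sources makes the measurement uncertainty visible to stakeholders instead of hiding it behind a single confident-looking number.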

Additionally, remember that these variations will in all likelihood be even more severe for the smaller, free web traffic tools offered by Compete, Quantcast, and Alexa. They are all great, and I use them regularly for research, but definitely with caution.

Never jump to conclusions based on research data from one source alone.