Darn those Confounding Variables!

Every few years, a billboard company in my area runs a little market study. They run a phone survey asking a bunch of people whether they know some random fact, to get a baseline for how many people in our area know it. Then they put billboards all around town with that fact on them for a month or so. Afterward, they repeat the telephone survey to see how many more people know the fact. If the number is higher, they can use that data to prove to potential customers that their billboards really do reach people. It’s a great way to get data supporting the effectiveness of billboard ads.
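The study design here is a simple pre/post comparison of proportions. A minimal sketch of the calculation, using made-up survey counts (none of these numbers come from the actual billboard company):

```python
# Hypothetical survey counts -- not real data from the billboard study.
def knowledge_rate(num_know, num_surveyed):
    """Fraction of respondents who knew the fact."""
    return num_know / num_surveyed

# Baseline phone survey, before the billboards go up.
baseline = knowledge_rate(num_know=120, num_surveyed=1000)

# Follow-up survey, after a month of billboards.
followup = knowledge_rate(num_know=310, num_surveyed=1000)

# The "lift" attributed to the billboards -- valid only if
# nothing ELSE taught people the fact in the meantime.
lift = followup - baseline
print(f"Knowledge went from {baseline:.0%} to {followup:.0%} (+{lift:.0%})")
```

The entire sales pitch rests on that last comment: the lift only measures the billboards if the billboards were the only thing that changed between the two surveys.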

They’ve been doing it for years, and when the most recent set of billboards went up, my first thought was “market survey time again!” This time, the random fact was about the first woman to cast a vote in the history of the United States. The billboards looked like this:

[billboard image]

And it probably would have worked perfectly this time around, just like it has the other dozen or so times I remember them running this study. Except that the local news station decided that, in this election year, and because that first vote by a woman happened here in Utah, they would run a news story about it!

Unfortunately for the billboard company, this means the follow-up survey will pick up some people who truly did learn this fact from their billboards, but it will ALSO pick up some people who learned it from the news story. So this unanticipated event (the news story) will confound the results of the billboard experiment and make it unclear how much of the effect actually comes from the billboards.

Now, I am not affiliated with the billboard company at all, and I honestly don’t know how they will manage this hiccup in their plan. There are a few options, none of which are great:

  1. Ignore it, and add the effect of the news story to the effect of the billboards. Sell more ads. This would, of course, be unethical, and if future billboard study results are much lower than this, it could hurt them long term.
  2. Add another question to the follow-up survey, asking respondents how they learned this fact. This has several downsides: it might be impossible or expensive to change the contract for the pre- and post-surveys, and some people might respond that they learned it from both places, or that they knew it from school, or that it was a “lucky guess,” etc. How do you categorize those?
  3. Scrap this version and try again. Also expensive, since it would involve paying for another study to be designed and set up. Not to mention the billboards would not be available to earn income for another advertising cycle.
  4. Acknowledge it in reporting their results. They could tell consumers “We expected to see a 43% increase in how many people knew this, based on past studies. This time we saw a 61% increase, which we believe to be somewhat high due to a news article on the same topic that ran concurrently with the billboards.” This would be both ethical and inexpensive, but might not have the power they hoped for to sell ads.
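Option 4 amounts to comparing the observed lift against what past studies would predict and attributing the excess to the confound. Using the hypothetical percentages from that example quote:

```python
# Sketch of option 4's reasoning. Both numbers are the hypothetical
# figures from the example quote above, not real study results.
expected_increase = 0.43   # typical increase seen in past billboard studies
observed_increase = 0.61   # increase measured this time around

# The excess over the historical pattern is the part we suspect
# the confounding news story is responsible for.
suspected_confound_effect = observed_increase - expected_increase
print(f"Observed: {observed_increase:.0%}, expected: {expected_increase:.0%}")
print(f"Excess possibly due to the news story: {suspected_confound_effect:.0%}")
```

Note that this is only a rough attribution, not a measurement: without asking respondents where they learned the fact, the split between billboard effect and news effect is an educated guess.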

For all I know they have a plan in place for this possibility already!

In medical studies, there are all kinds of possible confounding variables. Maybe you’re doing a study about the effects of childbirth education on fear in pregnant women, and in the middle of your study a popular TV show like Grey’s Anatomy runs an episode about a birth gone horrifically wrong. It might reduce or cancel out the effect of the childbirth classes! Or maybe you’re studying the effects of a new medication for labor induction, and the company that makes it gets splashed all over the news for scandalously unclean factories, and now study participants don’t want to use it. There are so many things that could happen!