Research Trouble Spot 10: Wrong Outcomes

Picking a small thing that happened to be significant – Sometimes a study sets out to see whether intervention A has an effect on outcomes 1, 2, or 3. When none of those outcomes change, the researchers comb the data looking for SOMETHING, and come up with some small difference in some small subgroup. The trouble is that if you test enough subgroups, chance alone will hand you a "significant" result somewhere, as the little sketch below shows.
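Here's a minimal simulation of that fishing expedition, with made-up data and arbitrary subgroup labels (none of this comes from any real study): the intervention does nothing at all, yet testing 20 subgroups still tends to turn up at least one p-value under 0.05.

```python
# Sketch: a null intervention, tested across many arbitrary subgroups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, n)       # random assignment
outcome = rng.normal(size=n)          # outcome is NOT affected by treatment
subgroups = rng.integers(0, 20, n)    # 20 arbitrary subgroups

false_hits = []
for g in range(20):
    mask = subgroups == g
    result = stats.ttest_ind(outcome[mask & (treated == 1)],
                             outcome[mask & (treated == 0)])
    if result.pvalue < 0.05:
        false_hits.append((g, round(result.pvalue, 3)))

# With 20 tests at the 0.05 level, roughly one "significant" subgroup
# is expected purely by chance.
print(false_hits)
```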

Bad proxy measures – a proxy measure is something easier to measure that stands in for the thing you actually care about, which is harder to measure. Some proxy measures work pretty well. Screening tests are a good example: checking the thickness of the nuchal fold isn’t enough for a *diagnosis*, but it’s a pretty good proxy to screen with, and it limits the more invasive and expensive testing. But some studies lean on proxy measures that don’t actually track the outcome that matters, so a change in the proxy tells you very little – see the sketch below.
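Here is a toy simulation with fabricated numbers (not from any study the post describes) of the classic failure mode: the intervention moves the proxy convincingly while leaving the outcome people actually care about untouched.

```python
# Sketch: intervention shifts a proxy (e.g. a lab value) but not the real outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1000
treated = rng.integers(0, 2, n)

proxy = -0.4 * treated + rng.normal(size=n)   # proxy "improves" with treatment
real_outcome = rng.normal(size=n)             # the outcome that matters: unchanged

for name, y in (("proxy", proxy), ("real outcome", real_outcome)):
    result = stats.ttest_ind(y[treated == 1], y[treated == 0])
    print(f"{name}: p = {result.pvalue:.3g}")

# The proxy shows a highly "significant" difference; the real outcome does not.
```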

Unvalidated instruments – When you’re doing research, you can create a new survey from scratch and test it to make sure it’s a good, reliable instrument, or you can use an existing survey that has already been through that testing and validation. As a reader, you need to check that one of the two has happened, because a poor survey can give badly misleading results. One common reliability check is sketched below.
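As one small, hedged illustration of what "testing a survey" can involve: Cronbach's alpha is a widely used internal-consistency statistic for a set of related survey items. The data here are fabricated, and alpha is only one piece of validation, not the whole story.

```python
# Sketch: Cronbach's alpha for a respondents x items matrix of numeric answers.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Fabricated example: 5 respondents answering 4 related Likert-style questions.
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
# Values around 0.7 or higher are often treated as acceptable consistency.
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```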

This is a very complex topic and it’s hard to do it justice in this short post. But thinking about the researcher’s choice of outcomes and how they got there is an important part of reading critically.