NEVER LET THE EVIDENCE GET IN THE WAY OF WHAT YOU KNOW

There is a kind of certainty that comes from having thoroughly investigated a problem, prodding and poking at it from every angle, taking into careful consideration all possible explanations. And there’s the kind that comes from not knowing what the hell you’re talking about.

This story falls into the latter category.

Using a network of wildlife cameras at 15 locations across Stillwater, Oklahoma, graduate student Seraiah Coe and her colleagues identified individual cats over a five-year period. At the end of the study period, they reported that the number of cats they could identify had dropped from 47 to 35 cats (a 25.5 percent reduction) or, “after correcting for detectability,” from 62 cats to 48 cats (a 22.6 percent reduction). Nevertheless, their statistical analysis found the decrease to be statistically insignificant, leading the authors to conclude that “TNR conducted at its current intensity is unlikely to reduce Stillwater’s cat population… add[ing] further evidence to the growing body of scientific literature indicating that TNR is ineffective in reducing cat populations” [1].

In fact, their results are pretty impressive. To produce such results and then insist that they show TNR to be ineffective betrays a rather profound ignorance about the practice of TNR.

Sure, Coe and her co-authors found the reduction of cats to be statistically insignificant—but that probably has more to do with their choice of statistical tools than anything else. As it happens, their results compare quite favorably with what some very sophisticated computer modeling has predicted. That modeling showed that sterilizing 40 percent of the intact population every six months reduced the population from 200 to 169 cats (15.5 percent) in five years [2].
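As a quick sanity check on the comparison above, the percentage reductions can be computed directly from the counts reported in the two papers (this is just arithmetic on the published numbers, nothing more):

```python
def pct_reduction(before, after):
    """Percentage reduction from a starting count to an ending count."""
    return 100 * (before - after) / before

# Coe et al.'s raw identification counts over five years
print(round(pct_reduction(47, 35), 1))   # 25.5
# Their detectability-corrected estimates
print(round(pct_reduction(62, 48), 1))   # 22.6
# Miller et al.'s simulated five-year outcome (40% semiannual sterilization)
print(round(pct_reduction(200, 169), 1)) # 15.5
```

The observed decline, by either measure, exceeds what the simulation predicted for a sustained 40 percent sterilization effort.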

And, as with an earlier study the authors used as a baseline for their five-year follow-up [3], no kittens were observed at any of the camera locations. This is Oklahoma in April—and no kittens.

Contrary to what Coe and her co-authors suggest, then, their results provide considerable support for TNR’s effectiveness across a large geographical area. And yet, they go out of their way to suggest just the opposite. How does this happen?

BLISSFUL IGNORANCE I

Coe and her co-authors observed that 6 of 35 cats they identified were ear-tipped, leading them to calculate a sterilization rate of 17.1 percent. This is based on the dubious assumptions that (1) all cats lacking collars and ear-tips are unsterilized community cats, and (2) only cats wearing collars (8 of 35 in this case) are owned. But a 2009 survey of Oklahoma City spay-neuter facilities and veterinary hospitals found that only 12 percent of cats brought to such clinics were wearing collars, and owners reported that only 11 percent of their cats always wore collars [4].

All of which will come as no surprise to anybody familiar with TNR. Or cat ownership, for that matter. After all, how many indoor-outdoor cats wear collars?

If the results from that 2009 survey are comparable to collar-wearing rates in Stillwater 10 years later, then it’s entirely possible that every one of the 21 cats without a collar or ear-tip identified by Coe and her co-authors was in fact owned—in which case, the true sterilization rate for community cats was 100 percent.* It’s likely, too, that a significant portion of the owned cats were also sterilized. Indeed, this would help explain the fact that no kittens were spotted at the 15 camera stations—despite there being no ear-tipped cats five years earlier [3].

BLISSFUL IGNORANCE II

There are a lot of holes in this study—the result being conclusions that simply aren’t supported by its findings. Again—how does this happen?

Well, three of the four authors were students at the time of the research—but I don’t think that explains it. The most likely explanation has to do with the fourth co-author, Scott Loss, in whose lab all three students have worked. This is the same Scott Loss whose flawed mortality estimates received so much media attention in 2013 (and have been cited frequently ever since).

The same Scott Loss who—along with frequent collaborator and Cat Wars author Peter Marra—has dismissed any research finding TNR to be effective (including my own) as “organized misinformation [that] has clouded consensus and misled policies affecting human health and biodiversity conservation” [5].

The same Scott Loss who’s complained repeatedly of the “population impacts” caused by free-roaming cats—while providing very little in the way of evidence [see, for example, 6,7].

So, how does a study demonstrating some pretty impressive results of community-wide TNR efforts become “further evidence to the growing body of scientific literature indicating that TNR is ineffective in reducing cat populations” [1]? Well, I can’t be certain—but I’ve got a pretty good idea.


*Admittedly, there is a complicating factor: Stillwater’s municipal code requires cats to wear collars and rabies tags. I don’t know that this explains the authors’ assumption, though, as there is no mention of it in either their paper or the one describing the baseline study. And I have no idea what such a provision would do for compliance—I suspect many cat owners have no idea such a law even exists. In any case, I can’t imagine the law drives compliance to the 100 percent level apparently assumed by Coe and her co-authors.

References

  1. Coe, S.T.; Elmore, J.A.; Elizondo, E.C.; Loss, S.R. Free-Ranging Domestic Cat Abundance and Sterilization Percentage Following Five Years of a Trap–Neuter–Return Program. Wildlife Biology 2021, 2021, doi:10.2981/wlb.00799.
  2. Miller, P.S.; Boone, J.D.; Briggs, J.R.; Lawler, D.F.; Levy, J.K.; Nutter, F.B.; Slater, M.; Zawistowski, S. Simulating Free-Roaming Cat Population Management Options in Open Demographic Environments. PLOS ONE 2014, 9, e113553, doi:10.1371/journal.pone.0113553.
  3. Elizondo, E.C.; Loss, S.R. Using Trail Cameras to Estimate Free-Ranging Domestic Cat Abundance in Urban Areas. Wildlife Biology 2016, 22, 246–252, doi:10.2981/wlb.00237.
  4. Slater, M.; Weiss, E.; Lord, L. Current Use of and Attitudes towards Identification in Cats and Dogs in Veterinary Clinics in Oklahoma City, USA. Animal Welfare 2012, 21, 51–57.
  5. Loss, S.R.; Marra, P.P. Merchants of Doubt in the Free‐ranging Cat Conflict. Conservation Biology 2018, 32, 265–266, doi:10.1111/cobi.13085.
  6. Loss, S.R.; Marra, P.P. Population Impacts of Free‐ranging Domestic Cats on Mainland Vertebrates. Frontiers in Ecology and the Environment 2017, 15, 502–509, doi:10.1002/fee.1633.
  7. Loss, S.R.; Will, T.; Marra, P.P. The Impact of Free-Ranging Domestic Cats on Wildlife of the United States. Nature Communications 2013, 4, 1396, doi:10.1038/ncomms2380.

2 Comments

It’s interesting research and I’m not qualified to comment on all parts of it. We see a reduction in measured population between the two time points. It’s unsurprising, given the low number of cats observed and the fact that statistical methods are used to arrive at the population estimates, that the change is not “statistically significant.” But the authors appear to make a basic error in statistical reasoning. When you hypothesize that there will be a reduction (although in this case it seems to be merely a statistical hypothesis rather than one they actually believe would come to pass), and you observe a reduction but the statistical test yields a “large” p-value, it does not follow that you should then conclude that there was no reduction. Rather, you simply have ambiguous evidence with regard to the question, “did the population of cats decrease?”

It seems this may not be the ideal question anyway — my understanding of the issue is that some of these folks believe cat populations are destined to perpetually increase, perhaps even at a very high rate. A different kind of statistical test could have been done to test against that kind of claim (although I’m not an ecologist so I can’t say in detail exactly how it should be implemented when they are already using some kind of technique to correct for unobserved cats).

My back-of-the-napkin estimate here is that the population probably would have needed to be reduced by half or more for them to have a high probability of detecting a “statistically significant” decrease. In my neck of the woods, we would probably not embark on a study that had such a high probability of providing an ambiguous result like this one.
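That back-of-the-napkin estimate can be roughed out with a small simulation. To be clear, this is an illustrative sketch, not the paper’s actual method: it assumes the two abundance estimates behave like independent Poisson counts and applies a simple normal-approximation test of equal rates, ignoring the study’s detectability correction.

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's multiplication method; adequate for modest lambda
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def detects_change(n1, n2, z_crit=1.96):
    # Normal-approximation test of equal Poisson rates:
    # under H0, (n1 - n2) / sqrt(n1 + n2) is roughly standard normal
    if n1 + n2 == 0:
        return False
    return abs(n1 - n2) / math.sqrt(n1 + n2) > z_crit

def power(baseline, reduction, trials=20000):
    """Fraction of simulated surveys that flag a 'significant' decline."""
    hits = sum(
        detects_change(poisson(baseline), poisson(baseline * (1 - reduction)))
        for _ in range(trials)
    )
    return hits / trials

# Baseline estimate of 62 cats; the ~23% observed reduction vs. a 50% one
print(power(62, 0.226))  # well under 0.5 — the study is badly underpowered
print(power(62, 0.50))   # close to 0.9 — a halving would likely be detected
```

Under these (assumed) conditions, a 23 percent decline is detected only about a quarter of the time, while a 50 percent decline is detected in the large majority of simulated surveys — consistent with the half-or-more intuition above.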

For a technical explanation of the statistical reasoning error, see points 4-6 and 8 in this document about p values by some distinguished biostatisticians: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4877414/

Thanks for your comment, Jacob—and for the link to the paper. I find such resources very useful!
