I’ve written a few times about how I’m both utterly impressed and completely horrified by the state of scientific publishing around COVID-19. We’re getting some exceptional research in astoundingly short periods of time. We’re also seeing poor quality research, lax peer and editorial review, and sometimes more of a rush to be first than a desire to be good.
Research should aim to answer a question and there should be a reason to answer that question. Yes, there are some serendipitous discoveries made based on whims, and I’m a big fan of my students doing little side projects to follow up on curious thoughts. However, successful curiosity-driven projects should lead to proper studies, and not be the end result.
A letter in the journal Microbes and Infection (Gao et al. 2020) modelled “whether setting free domestic cats protects people from infection” with COVID-19. The premise seems to be the concern that people might get freaked out about cats and coronavirus, and abandon their cats. Modelling is a critical component of COVID-19 control planning. But models are only as good as their input data. You have to create the model with numerous assumptions, and the less you know about those assumptions, the less useful the model is. When you have no idea about the parameters but create a model based on one set of guesses, it’s hard to say whether it’s useful or counterproductive. The model also has to make biological sense.
Anyway, this modelling study was based on releasing different numbers of cats and then seeing how many people got COVID-19 from them. Essentially (a rough code sketch of this kind of setup follows the list):
- The population consisted of people, indoor cats (that never go outside) and “wild” cats that were outside.
- Each individual could be susceptible, infected or “removed” (i.e. recovered, and assumed to be immune).
- When a person or cat was placed outside of the house, they moved in random directions. (People could go straight, turn 90 degrees left or 90 degrees right. Cats could turn left, right or turn around.)
- During these random wanderings, if a cat bumped into a person or another cat, there was a chance of infecting that person/cat. They assumed a 2% chance of cat-to-cat transmission if the infected cat appeared healthy and 5% if it was sick, and a 1% chance of cat-to-person transmission if the infected cat appeared healthy and 2% if it was sick.
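To make that setup concrete, here’s a minimal Python sketch of this kind of random-walk model, using only the transmission probabilities stated above. Everything else is my own assumption for illustration, not from the paper: the grid size, number of agents, number of steps, the chance an infected cat looks sick, and the simplifications of leaving out recovery (the “removed” state), indoor cats (who never contact anyone outside) and person-to-person spread.

```python
import random

# Illustrative guesses, not values from the paper.
GRID = 50            # assumed size of the square area the agents wander on
STEPS = 500          # assumed number of time steps to simulate
P_CAT_SICK = 0.3     # assumed share of infected cats that look sick

# Headings: 0 = north, 1 = east, 2 = south, 3 = west
MOVES = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}

class Agent:
    def __init__(self, kind, infected=False):
        self.kind = kind                       # "person" or "cat"
        self.state = "I" if infected else "S"  # susceptible / infected
        self.sick = infected and random.random() < P_CAT_SICK
        self.x = random.randrange(GRID)
        self.y = random.randrange(GRID)
        self.heading = random.randrange(4)

    def step(self):
        # People: keep going straight, or turn 90 degrees left/right.
        # Cats: turn left, turn right, or turn around.
        if self.kind == "person":
            self.heading = (self.heading + random.choice((0, 1, 3))) % 4
        else:
            self.heading = (self.heading + random.choice((1, 2, 3))) % 4
        dx, dy = MOVES[self.heading]
        self.x = (self.x + dx) % GRID
        self.y = (self.y + dy) % GRID

def maybe_transmit(cat, other, p_cat_cat, p_cat_person):
    """An infected outdoor cat has bumped into another agent."""
    if cat.state != "I" or other.state != "S":
        return
    probs = p_cat_cat if other.kind == "cat" else p_cat_person
    p = probs[1] if cat.sick else probs[0]   # (healthy-looking, sick)
    if random.random() < p:
        other.state = "I"
        if other.kind == "cat":
            other.sick = random.random() < P_CAT_SICK

def run(n_people=200, n_outdoor_cats=50, n_infected_cats=5,
        p_cat_cat=(0.02, 0.05), p_cat_person=(0.01, 0.02)):
    people = [Agent("person") for _ in range(n_people)]
    cats = ([Agent("cat", infected=True) for _ in range(n_infected_cats)] +
            [Agent("cat") for _ in range(n_outdoor_cats - n_infected_cats)])
    agents = people + cats
    for _ in range(STEPS):
        for a in agents:
            a.step()
        # "Bumping into" each other = landing on the same cell this step.
        cells = {}
        for a in agents:
            cells.setdefault((a.x, a.y), []).append(a)
        for group in cells.values():
            for cat in [a for a in group if a.kind == "cat"]:
                for other in group:
                    if other is not cat:
                        maybe_transmit(cat, other, p_cat_cat, p_cat_person)
    return sum(1 for person in people if person.state != "S")

if __name__ == "__main__":
    print("people infected:", run())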
I’m not a modeller so I can’t comment too much on the overall approach, but it’s pretty superficial. Assumptions have to be made when modelling, and the farther they get from the real situation, the more likely the model will be inaccurate. They had to pick numbers for transmission based on absolutely no evidence. Are 1%, 2% and 5% reasonable guesses? Maybe, although I don’t think my crossing paths with a cat outside poses any risk whatsoever of SARS-CoV-2 transmission. Since there’s no evidence, I guess you could say those numbers are as good as any, but that doesn’t mean they’re good. Those are the key numbers for everything in the model. If they are off, the data mean absolutely nothing.
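To see how much hangs on those guessed numbers, you could rerun the sketch above with different cat-to-person probabilities and watch the answer move around. The loop below reuses the hypothetical run() function from that sketch; the probability values swept here are arbitrary, chosen only to show the sensitivity.

```python
# Same toy model, different guesses for the cat-to-person probability
# (healthy-looking cat, sick cat). The spread in results is the point:
# the "finding" tracks whatever number was guessed.
for p in (0.001, 0.01, 0.05):
    cases = [run(p_cat_person=(p, 2 * p)) for _ in range(5)]
    print(f"cat-to-person p={p}: infected people per run = {cases}")
```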
It’s an interesting exercise, and something I can see having a student play around with out of curiosity, to see what they get, to see what happens when you change the numbers, to raise some questions and identify the key gaps that need to be addressed for a proper model. You can see a video of their very basic model below.
They concluded that “fear over domestic cats may be unnecessary.” That’s true, but their study didn’t actually look at that. What they showed was that if cats are allowed outside, more people get sick. I’m not sure that’s a valid conclusion either, but it doesn’t answer their “does abandoning your cat protect you?” question, because the study couldn’t do that. Their next paragraph has a more accurate statement: “The better strategy for controlling the spread of the virus is to quarantine pets at home.” If anything, that’s what their study showed (but I don’t think it really tells us anything new or with any substance). I don’t need a model to tell me that if there are no outdoor cats, there can be no transmission from outdoor cats.
However, it’s possible to get pretty much anything published these days, even little side projects that don’t have much foundation in a logical question or methodology. Should the publication threshold be “I did this” or “I did this, it makes sense and it told us something useful”? I think my opinion differs from a lot of journals on that.