The COVID-19 pandemic has brought up tons of data questions — what information should we be collecting, what can it tell us (and what does it fail to tell us), and how should the data we use to make decisions be communicated to the public? We’ve started a new series, “COVID Convos,” that brings these questions to the forefront through interviews with the scientists and practitioners who produce and use data on COVID-19.

Each conversation will be a chance for a different scientist to highlight a dataset or data question they think is particularly important and tell us why. This edition features a conversation, conducted on February 2, with Jennifer Nuzzo, an epidemiologist at the Johns Hopkins Coronavirus Resource Center. Our discussion, which has been condensed and lightly edited, focused on the “test positivity” metric that seems to be everywhere during a COVID surge, and what it can and can’t tell us.

Maggie Koerth: Today, we’re talking about test positivity calculations and how they’re used and misused. I want to start things off by reminding all of our readers — and maybe myself — of what exactly test positivity rates even are. 

Jennifer Nuzzo: When we started tracking test positivity, we did so exclusively to answer the question, “Are we testing enough?” If you remember back to the beginning of the pandemic, there were all these stories about how many thousands of tests we were doing per day. There were all these proposals out there for how many tests the United States needed to be doing. And they literally ranged from the tens of thousands to the tens of millions. And we were like, “Well, which one is it?”

The more my colleagues and I thought about it, the more we realized that the amount of testing we would have to do would change based on how many current infections we think there are. The closest we could come to getting at that (with the data that were available at the time) was positivity. Initially, the only thing we could do was look at the number of positive cases versus the number of tests. And we just looked at that ratio.
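For readers who want the arithmetic spelled out: the calculation Nuzzo is describing is simply the share of reported tests that came back positive. A minimal illustrative sketch follows — the function name and the numbers are ours, not from the interview or from any real surveillance data:

```python
def test_positivity(positive_results: int, total_tests: int) -> float:
    """Share of administered tests that came back positive."""
    return positive_results / total_tests

# Purely illustrative figures, not real surveillance data:
# 500 positive results out of 10,000 tests is a 5% positivity rate.
print(f"{test_positivity(500, 10_000):.1%}")  # -> 5.0%
```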

It became clear that the positivity rate was better than counting the number of tests, because there are places like Taiwan where the number of tests they were doing was very low, but they also, for a long period of time, had under 500 total cases. At some point, you run out of people to reasonably test if your prevalence of infection is so low.

Really, this was to answer the question, “Are we testing enough?” — not to do a backdoor calculation of prevalence, because you can get a different positivity rate depending on who you test. And at various points in the pandemic, the people who were getting tested were different. They still are. Positivity, unfortunately, became misinterpreted as a proxy for prevalence, but it never was.
