Scott Adams was discussing the need for fast at-home testing for Covid-19 yesterday. As we discussed before, having this capability would be really good if we could call it into existence — which we can’t. Adams was suggesting that the FDA’s concern about insufficient accuracy was why no one was working on the product.
As we’ve also discussed, “accuracy” is not the best term to use to describe the limitations of fast testing technology. A better term is “sensitivity” since what we’re concerned about is whether a test is sensitive enough to pick up small amounts of virus.
OK. We’re all up to speed.
Adams used an analogy to drive home the need for fast testing capability. Suppose there were two armies, and one army consisted of sharpshooters who hit their targets 100% of the time. The other army was good as well but hit their targets only 80% of the time. Who wins?
The answer, according to Adams, is the one with the larger number of soldiers. If the 80% effective army was ten times larger than the sharpshooter army, clearly it would overwhelm the sharpshooters and win. And, thus, highly accurate lab testing loses as a solution to fast but less sensitive testing. Well, maybe, but that depends on a methodology for using the fast testing that results in fewer infected people being in society.
Once upon a time, I worked for a startup company whose product measured how effective a manufacturing test for an integrated circuit actually was. There are a number of ways to do this and all of them are slow and use a lot of computer power. Our product used probability and statistics and delivered a result such as “you’re testing 90% of the chip, plus or minus 4%, with a confidence level of 95%.”
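The product’s actual algorithm isn’t described here, but the general idea is easy to sketch: instead of simulating every possible manufacturing fault, simulate a random sample and report the observed coverage with a confidence interval. A minimal illustration, assuming a normal-approximation interval (the `estimate_coverage` helper, the fault count, and the 90% toy test are all hypothetical, not the product’s method):

```python
import math
import random

def estimate_coverage(fault_detected, sample_size, confidence_z=1.96):
    """Estimate test coverage by random fault sampling.

    fault_detected: function mapping a fault id to True if the test
    catches it (in a real flow this is the slow fault simulation).
    Only a sample of faults is simulated; we report the observed
    coverage fraction plus a normal-approximation confidence margin.
    """
    total_faults = 100_000  # hypothetical size of the full fault list
    sample = random.sample(range(total_faults), sample_size)
    caught = sum(1 for f in sample if fault_detected(f))
    p = caught / sample_size
    # 95% interval (z = 1.96): p +/- z * sqrt(p * (1 - p) / n)
    margin = confidence_z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Toy stand-in for fault simulation: pretend the test catches ~90% of faults.
random.seed(1)
p, margin = estimate_coverage(lambda f: random.random() < 0.9, sample_size=600)
print(f"estimated coverage {p:.1%} +/- {margin:.1%} (95% confidence)")
```

The speedup comes from simulating 600 faults instead of 100,000; the price is exactly that “plus or minus 4%, with a confidence level of 95%” qualifier.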
If you’ve had a college-level probability and statistics course, you’re going “OK, I get that.” The people we were selling to all had a college-level probability and statistics course and said “well, that’s really interesting since your product gave us this information ten times faster!”
And then we went out of business because nobody would buy the product.
Because probability and statistics can be very counterintuitive.
One of the issues is the risk the integrated circuit manufacturer takes with that “95% confidence” number. This is because there’s a 5% chance that maybe, just maybe, the actual test coverage is way, way lower than 90%. Maybe using that test actually produces zero good chips because it fails to catch a significant manufacturing defect. Are you feeling good that you got your test number really fast, or thinking that maybe you’re about to be fired?
Another oddity was that a customer could add MORE testing and the result would actually go DOWN. This was largely the death knell for the product: it was so counterintuitive that it made customers really, really uncomfortable with the product. See the Note at the end for why this happens.
Which brings us to the real fallacy with fast testing: just like with my semiconductor testing product, unless you can describe exactly what to do with the result, people won’t use the product or will ignore the results.
Adams would say ‘this isn’t a problem because you can just test a lot and it won’t cost much.’ This is true if and only if there’s a methodology to do so — which there might be — but that depends on the test being highly sensitive as the viral load increases. For instance:
- If you receive a positive test, self-isolate for 6 hours and retest. If you receive a negative test, then self-isolate for 2 more hours and test again. If you receive another negative test, go forth into the world. If not, self-isolate for 6 hours again and repeat the process until you get two negative tests in a row.
- If you’ve been sick but are now feeling well, continue to self-isolate until you obtain three negative tests in a row.
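The point of a methodology is that it’s mechanical: anyone can follow it without judgment calls. The first bullet above, for instance, can be written down as a tiny decision rule (the `next_action` helper and the hour counts are just the made-up rules from the bullet, not any official guidance):

```python
def next_action(test_result, consecutive_negatives):
    """Decide the next step under the hypothetical retest protocol.

    test_result: "positive" or "negative" from the latest fast test.
    consecutive_negatives: negatives in a row so far, counting this one.
    Returns (action, hours_until_next_test).
    """
    if test_result == "positive":
        return ("self-isolate and retest", 6)  # positive resets the streak
    if consecutive_negatives >= 2:
        return ("go forth into the world", None)  # two negatives in a row
    return ("self-isolate, then retest", 2)  # one negative so far

# Example: a positive, then two negatives in a row.
print(next_action("positive", 0))   # ('self-isolate and retest', 6)
print(next_action("negative", 1))   # ('self-isolate, then retest', 2)
print(next_action("negative", 2))   # ('go forth into the world', None)
```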
If there were a methodology similar to this, then fast testing could significantly reduce virus spread.
The problem arises if too many people receive positives followed by two negatives. Then you have the same problem we had with our product: no one believes the first result, and people will continue to live their normal lives until they get two or three positives in a row.
Another issue would be the “flossing your teeth problem”: we sometimes don’t floss our teeth even though it’s important that we do so. The same would be true with fast testing, particularly for people who really, really, really need to go to work to make money and for whom a false positive would be really bad. One could argue, and I think correctly, that this isn’t an issue since a number of people would test religiously and, thus, there would be a significant reduction in the overall spread.
To sum up,
- In the army analogy, it’s easy to measure the effectiveness of doing more of something that’s less accurate because you can count dead bodies. This analogy fails when we’re discussing the virus because the test in and of itself doesn’t kill the virus!
- Unless there’s a believable methodology for applying fast tests, they will not be adopted.
- The tests still need to be sensitive enough that you get people to self-quarantine fast enough to make a difference.
- If someone has been sick, the sensitivity must be good enough that people won’t return to the world too soon. This seems to be the least of the problems since the viral load seems to decline rather rapidly as the disease reaches its end, but even here the data is poor.
Note: Let’s say you flip a coin 10 times and you get 6 heads and 4 tails. You know that the actual probability should be 50% so you flip the coin 10 more times to get a better result. Of course, there is some probability that you might actually get all heads so the result would be 16 heads and 4 tails for 20 flips! Yikes! We know that as we continue to flip the coin, we’ll eventually get very close to 50/50 heads/tails but when the number of flips is low, adding more flips may actually make the results look worse in the short run.
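The note’s arithmetic, plus the long-run convergence, in a few lines of Python (the flip counts are the note’s example; the 100,000-flip simulation is just an illustration of the law of large numbers):

```python
import random

# The note's example: 6 heads in 10 flips.
heads, flips = 6, 10
print(f"{heads}/{flips} -> observed rate {heads/flips:.0%}")  # 60%

# Worst case for the next 10 flips: all heads.
heads, flips = heads + 10, flips + 10
print(f"{heads}/{flips} -> observed rate {heads/flips:.0%}")  # 80%, farther from 50%

# But keep flipping a fair coin and the observed rate drifts back toward 50%.
random.seed(0)
for _ in range(100_000):
    heads += random.random() < 0.5  # True counts as 1
    flips += 1
print(f"{heads}/{flips} -> observed rate {heads/flips:.1%}")  # very close to 50%
```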