Innovation could improve detection of COVID-19 infections: Technique could increase sensitivity of nasal swab tests up to tenfold through simple software updates
The team’s results, published in the scientific journal Analytical and Bioanalytical Chemistry, describe a mathematical technique for detecting comparatively faint signals in diagnostic test data that indicate the presence of the virus. These signals can escape detection when the number of viral particles in a patient’s nasal swab sample is low. The team’s method helps such a modest signal stand out more clearly.
“Applying our technique could make the swab test up to 10 times more sensitive,” said Paul Patrone, a NIST physicist and a co-author on the team’s paper. “It could potentially spot more people who are carrying the virus but whose viral count is too low for the current test to give a positive result.”
The researchers’ findings show that the data from a positive test, when expressed in graphical form, takes on a recognizable shape that is always the same. Just as a fingerprint identifies a person, the shape is unique to this type of test. Only the shape’s position and, importantly, its size differ from one graph to the next, varying with the quantity of viral particles in the sample.
While it was known previously that the shape’s position could vary, the team learned that its size can vary as well. Reprogramming test equipment to recognize this shape, regardless of size or location, is the key to improving test sensitivity.
The swab test employs a lab technique called quantitative polymerase chain reaction, or qPCR, to detect the genetic material carried by the SARS-CoV-2 virus. The qPCR technique takes any strands of viral RNA that exist in a patient’s swab sample and then multiplies them into a far larger quantity of genetic material. Each time a new fragment of this material is made, the reaction releases a fluorescent marker that glows when exposed to light. It is this fluorescence that indicates the presence of the virus.
While the test method usually works well in practice, it can lack sensitivity to low viral particle counts. The test starts with the genetic material that is present and doubles it, then doubles it again, up to 40 times over, so that the fluorescent markers generate enough light to trigger a detector. Doubling, as anyone familiar with compound interest knows, is a powerful amplifier, growing slowly at first and then spiking to high numbers. The doublings produce a graph that is initially flat apart from the small bumps of background noise, until a telltale spike of fluorescence eventually rises out of it.
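To make that arithmetic concrete, the short Python sketch below simulates the doubling process. It is purely illustrative rather than the team’s code, and every number in it, from the per-cycle efficiency to the detection threshold, is an assumed value; still, it shows how a sample with very few starting copies of viral RNA can fail to cross a fixed fluorescence threshold within 40 cycles.

```python
# Illustrative simulation only (not NIST code): each qPCR cycle multiplies the
# genetic material by (1 + efficiency), and the instrument reads the result as
# noisy fluorescence. All numbers below are assumed values for illustration;
# saturation of the reaction at late cycles is ignored for simplicity.
import numpy as np

rng = np.random.default_rng(0)

def simulate_qpcr(initial_copies, cycles=40, efficiency=0.75,
                  noise_sd=0.01, copies_per_fluorescence_unit=1e10):
    """Return a simulated fluorescence reading for each cycle."""
    copies = float(initial_copies)
    curve = np.empty(cycles)
    for c in range(cycles):
        copies *= 1.0 + efficiency                      # near-doubling each cycle
        curve[c] = copies / copies_per_fluorescence_unit + rng.normal(0.0, noise_sd)
    return curve

threshold = 1.0  # fixed fluorescence threshold used by a standard positive/negative call
for n0 in (1, 100, 10_000):
    curve = simulate_qpcr(n0)
    above = np.flatnonzero(curve > threshold)
    call = f"positive at cycle {above[0] + 1}" if above.size else "no call within 40 cycles"
    print(f"{n0:>6} starting copies: {call}")
```

With these assumed numbers, a single starting copy still builds a visible flat-then-spike curve by the final cycles but never reaches the threshold, which is exactly the kind of borderline result a shape-based analysis is meant to rescue.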
However, when the initial viral count is low, there may be false starts in the first few cycles. In these cases, even 40 doublings may not build a spike tall enough — or a fluorescence bright enough — to rise above the detection threshold. This issue can cause problems like inconclusive tests or “false negatives,” meaning a person carries the virus but the test does not reveal it.
Preliminary studies indicate that the rate of false negatives may be as high as 30% in qPCR testing for COVID-19, including one study in which chest CT scans identified infections that swab tests had missed. Another study shows that asymptomatic and early-disease states are associated with up to 60 times fewer virus particles in patient samples. A JAMA study published in August supports the idea that asymptomatic carriers can spread the virus.
The NIST researchers found that the shape of a positive test graph — a flat, noisy beginning followed by a spike — is present even in data that currently does not trigger a positive test result. Their paper offers a formal proof that the shapes are mathematically “similar,” akin to triangles that have the same angles and proportions despite being larger or smaller than one another. They build on this theoretical result with a routine that a computer can use to recognize the reference shape in the data.
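As a rough illustration of what such a routine might look like, the sketch below scans candidate shifts of a reference curve and uses linear least squares to find the amplitude and baseline that best match the measured data. This is a hedged sketch of the general idea rather than the NIST team’s published algorithm; the function name match_reference, the grid of shifts, and the toy curves are all invented for this example.

```python
# A sketch of the general idea, not the published NIST algorithm: ask whether
# the measured curve is a shifted, scaled copy of a known reference shape.
# The shift tau is scanned over a grid; for each tau, the best amplitude a and
# baseline b follow from linear least squares.
import numpy as np

def match_reference(data, reference, shifts):
    """Fit data ~ a * reference(cycle - tau) + b and return the best
    (tau, a, b, residual_norm) over the candidate shifts."""
    cycles = np.arange(len(data))
    ref_cycles = np.arange(len(reference))
    best = None
    for tau in shifts:
        shape = np.interp(cycles - tau, ref_cycles, reference,
                          left=reference[0], right=reference[-1])
        design = np.column_stack([shape, np.ones_like(shape)])
        coeffs, *_ = np.linalg.lstsq(design, data, rcond=None)
        resid = np.linalg.norm(data - design @ coeffs)
        if best is None or resid < best[3]:
            best = (tau, coeffs[0], coeffs[1], resid)
    return best

# Toy check: a faint, late-spiking copy of the reference shape is still recognized.
cycles = np.arange(40)
reference = 1.0 / (1.0 + np.exp(-(cycles - 25) / 2.0))        # idealized flat-then-spike
faint = 0.05 / (1.0 + np.exp(-(cycles - 33) / 2.0)) + 0.01    # small, late, offset copy
tau, a, b, resid = match_reference(faint, reference, np.arange(-10, 11))
print(f"recovered shift = {tau}, amplitude = {a:.3f}, baseline = {b:.3f}")
```

Solving for the amplitude and baseline in closed form at each candidate shift keeps the search cheap, which is consistent with the article’s point that such an analysis could run as a small software update on existing instruments.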
“We’re no longer constrained by having to pass a high detection threshold,” Patrone said. “The spikes don’t need to be large. They need to have the right shape.”
Incorporating their findings into tests would immediately help the pandemic response, Patrone said, because it would allow the number of asymptomatic and presymptomatic cases to be determined more accurately.
“In essence, lowering false negatives should help doctors and scientists get a better handle on the actual spread of the virus,” he said. “There is a good chance that we’re missing asymptomatic cases with the testing. The reduction we project in the minimum amount of viral RNA that can be detected could pick up a significant number of asymptomatic cases.”
The new test would also be unlikely to generate false positives because it would check that the curve was consistent with the reference shape, not merely that it crossed a detection threshold.
“In standard testing protocols, it is possible to get false positives — for example, if background effects rise to the detection threshold and no one manually checks the result,” Patrone said. “The likelihood of that happening in our analysis is very small because the math automatically rules out such signals.”
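The toy comparison below illustrates that distinction. It is again an assumed example rather than the published method (the shift search from the earlier sketch is omitted for brevity): a slow background drift that happens to cross a fixed threshold fits the reference shape poorly, while a faint but correctly shaped curve that never reaches the threshold fits it almost perfectly.

```python
# Illustrative comparison only (assumed curves and thresholds, not NIST data):
# a threshold rule and a shape-consistency rule disagree on these two cases.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

cycles = np.arange(40)
reference = sigmoid((cycles - 30) / 2.0)                    # idealized flat-then-spike shape
drift = 0.03 * cycles                                       # background creeping upward
weak_positive = 0.6 * sigmoid((cycles - 30) / 2.0) + 0.02   # faint but correctly shaped

threshold = 1.0
design = np.column_stack([reference, np.ones_like(reference)])  # fit a * reference + b
for name, curve in [("background drift", drift), ("weak positive", weak_positive)]:
    crosses = bool(np.any(curve > threshold))
    coeffs, *_ = np.linalg.lstsq(design, curve, rcond=None)
    misfit = np.linalg.norm(curve - design @ coeffs) / np.linalg.norm(curve - curve.mean())
    print(f"{name:>16}: crosses threshold = {crosses}, relative misfit = {misfit:.2f}")
```

Under the threshold rule alone, the drift would read as positive and the faint curve as negative; in this toy example, requiring consistency with the reference shape reverses both calls.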
Pandemic response workers would not need to do anything differently when collecting samples. Because the team’s approach is a mathematical algorithm applied after data collection, programmers could add it by updating the lab equipment software with a few lines of computer code.
“Our work is a potentially easy fix because it’s an advance in the data analysis,” Patrone said. “It can easily be incorporated into the protocol of any lab or testing instrument, so it could have an immediate impact on the trajectory of the health crisis.”