Genetic testing plays a crucial role in the diagnosis, prevention, and
treatment of many cancers, but its power as a clinical tool is blunted
by poor data sharing practices and a lack of standardization in how data
are stored. What do we mean by this?
From a research standpoint, we need an enormous amount of data from
diverse populations to improve the interpretation of genetic tests. We
cannot do this under current practice:
- Labs are discarding genetic testing data, and when they do keep it,
  they are not sharing it.
- Genetic tests are not standardized: different labs test different
  genes.
From a patient standpoint, we need improved data sharing to improve
diagnosis and treatment, which requires a more robust database.
- To make personalized medicine a reality, we must share data,
  standardize tests, and increase the racial, gender, and ethnic
  diversity of the data pool.
Without sufficient data, many DNA variants cannot be interpreted
confidently, making it difficult to determine whether a variant is
harmless or cause for concern. If a benign variant is interpreted as
high risk, it can lead to needless surgery (e.g., mastectomy; removal of
the ovaries and Fallopian tubes; removal of the colon, or of the colon
and rectum). Conversely, if a high-risk variant is missed, an undetected
cancer can become metastatic before it is discovered, with dire
consequences.
Variants of uncertain significance (VUS) are a common finding in genetic
testing. VUSs cause patient anxiety, can lead to unnecessary surgical
interventions, and increase health care costs by failing to prevent
cancer or detect it early.
Data collected by laboratories have the power to broaden our
understanding of the genes associated with different cancers, but the
data exist in silos that hinder interpretation. Each lab has its own
testing and reporting criteria, making cross-lab analysis even more
challenging.