Discussing the Normative Neuroimaging Library with Dr. James Stone
James R. Stone, MD, PhD is the scientific principal investigator for the Normative Neuroimaging Library, and Associate Professor and Vice Chair of Research, UVA Department of Radiology and Medical Imaging. Dr. Stone is a clinical Interventional Radiologist with a neuroscience background. His laboratory currently explores questions related to improving the clinical diagnosis of traumatic brain injury (TBI) in both preclinical models and human subjects.
Dr. Stone recently discussed the latest developments in advanced neuroimaging and potential applications for clinical care for traumatic brain injury (TBI).
What led to your involvement with Cohen Veterans Bioscience (CVB)?
One of the things CVB has been able to do exceptionally well over the years is bring together stakeholders to help address shared goals. I became involved with CVB as a founding member of the American College of Radiology Head Injury Institute (ACR-HII). I had been working with the ACR-HII to strategize how to move advanced neuroimaging approaches into the patient-care setting. One of the gaps was the lack of high-quality normal neuroimaging using advanced imaging sequences to establish a quantitative baseline for the population. CVB understood the important, foundational nature of the work, thus leading to the current productive collaboration.
Why is the Normative Neuroimaging Library (NNL) program so important?
Brain imaging has a history dating back more than 100 years, to when Dr. Walter Dandy performed the first visualization of the ventricles using injected air as a contrast medium. Since then, there has been enormous progress in the ability to image the structural and physiological aspects of the brain with the advent of computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine-based techniques such as positron emission tomography (PET). Despite these advances, the paradigm for how medical imaging approaches are used in a clinical setting has remained somewhat static. Specifically, following the acquisition of an imaging examination, the imaging is viewed by a trained radiologist and a narrative report is generated that is filed in the patient’s medical record. This clinical image interpretation has continued largely unchanged, even though advanced neuroimaging techniques have emerged. These techniques are capable of describing white matter microstructure, functional connectivity between brain regions, regional brain volumes, cortical thickness, brain perfusion, and other key insights into brain structure and function. Additionally, emerging advanced analytical techniques are leveraging neural networks and dimensionality reduction approaches to mine information from medical imaging as never before.
Unfortunately, many advanced neuroimaging sequences remain confined to research. The diffuse, quantitative nature of these imaging approaches largely evades effective interpretation by a trained neuroradiologist. This makes advanced neuroimaging difficult to use in the prevailing paradigm of narrative interpretations of imaging acquired for clinical care. Additionally, the analytical tools that are presently used to analyze advanced imaging are primarily designed to evaluate how groups of individuals differ from one another, rather than determine whether a single patient demonstrates quantitative features that are consistent with a specific disease process. As such, to move advanced neuroimaging approaches into clinical care, the field must develop quantitative clinical tools capable of identifying neuroimaging “fingerprints” consistent with a specific disease process. To develop these tools, we must have a library of high-quality imaging of normal individuals. This library would serve as a baseline and can be compared to disease-based databases to identify imaging features that are most predictive of a specific disease process.
What challenges have you faced in building the library and standardizing the sequences?
For the NNL to be useful, it must be relevant to current large-scale data acquisitions studying disease processes of interest to the community. Understanding this, the team spent a considerable amount of time surveying existing large-scale studies to develop the NNL protocol. We’ve been fortunate to work with very talented physicists to support NNL. In the end, the NNL approach integrated acquisition frameworks from: the Human Connectome Project, the Adolescent Brain Cognitive Development (ABCD) study, the third iteration of the Alzheimer’s Disease Neuroimaging Initiative (ADNI), the Chronic Effects of Neurotrauma Consortium (CENC), and the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) study. Of note, the NNL imaging protocol balances the importance of having a comprehensive imaging approach while maintaining a total image acquisition length that is tolerable to the vast majority of study participants. To help minimize site-specific differences, scans of in vitro phantoms are routinely acquired on all scanners used in the study and image quality software is employed to ensure scanners are operating within expected parameters. In addition, at least one volunteer has been assessed on all scanners to ensure similar performance with an in vivo “phantom.”
How do you see the NNL expanding its tools into clinical care?
One of the priorities for the Normative Neuroimaging Library is to establish reference values for advanced neuroimaging across the normal population. Medical laboratory tests used in clinical care have normal reference ranges to help determine when a value acquired from a patient test is abnormal. Similarly, we must establish reference values for neuroimaging. In practice, this will be a much more complex endeavor than simply establishing values, above or below which would be considered abnormal. Neuroimaging studies are typically composed of hundreds of thousands of individual voxels (3D pixels), each of which may vary across the normal population. Establishing normal variation on an individual voxel level will not yield the practical information needed in a clinical setting. Additional steps must be taken to compare databases acquired to study specific disease processes with the normal library, to understand patterns of differences, also known as imaging “features,” between normal and disease. By assembling a collection of imaging features highly predictive of a specific disease process and developing an overall repertoire of these disease feature collections, we will have created a data warehouse that could be incorporated into a clinical quantitative neuroimaging platform. A number of large medical imaging companies are currently working on such platforms, and there will likely be opportunities for partnership. Successful utilization of advanced neuroimaging in a clinical setting would transform how medical imaging is used to help guide clinical care.
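To make the lab-test analogy concrete, the comparison against a normative library can be thought of as a voxel-wise reference range: given per-voxel mean and standard-deviation maps from normal scans, a patient scan is converted to a z-score map, and candidate "features" are summarized over regions rather than read voxel by voxel. The following is a minimal illustrative sketch in Python with NumPy; the array sizes, the simulated abnormality, and the ±2 threshold are hypothetical choices for illustration, not the NNL's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normative library: 200 normal subjects x 1000 voxels
# (flattened volumes), with some per-voxel baseline signal.
normative = rng.normal(loc=1.0, scale=0.1, size=(200, 1000))

# Reference maps: per-voxel mean and standard deviation across normals.
mu = normative.mean(axis=0)
sigma = normative.std(axis=0, ddof=1)

# A single patient scan; simulate a focal abnormality in voxels 100-199.
patient = rng.normal(loc=1.0, scale=0.1, size=1000)
patient[100:200] -= 0.3

# Voxel-wise z-score map: how far each voxel sits from the normal range.
z = (patient - mu) / sigma

# Voxel-level thresholds alone are impractical clinically; aggregating
# over regions (here, crude blocks of 100 voxels) is one step toward
# the kind of summary "features" described above.
region_mean_z = z.reshape(10, 100).mean(axis=1)
abnormal = np.abs(region_mean_z) > 2.0
print(abnormal)
```

In this toy example only the region containing the simulated abnormality exceeds the threshold; in practice regions would be anatomically defined and features selected by comparing disease databases against the normative library.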
What impact do you think the NNL will have on clinicians who treat TBI?
At present, neuroimaging has little to no role in patients with mild TBI or concussion. Routine imaging used in the setting of TBI typically describes relatively large injuries such as intracranial blood, compression of vital brain structures, global collection of fluid within the brain, or microhemorrhage. These types of observations are generally associated with moderate-to-severe TBI, yet most brain injuries are mild. Further, we know some of these mild injuries will be associated with full recovery, while others may not fully recover or could have a more prolonged recovery course taking months to years. Currently, we do not have neuroimaging approaches that allow us to predict who may recover from their mild TBI and who may have a more prolonged course. Having this knowledge may guide early rehabilitative interventions as well as help manage recovery expectations. Additionally, predictive imaging for mild TBI could help inform clinical trials of therapeutics that may hasten recovery. Equipped with the NNL, we can explore whether quantitative advanced neuroimaging features can be identified that are diagnostic of mild TBI and predictive of recovery. If such features could be identified and deployed onto a clinical platform, this would be enormously beneficial to clinicians managing this condition.
How has the NNL team adapted during this period of dramatic research disruption due to COVID-19?
The current global health pandemic has had far-reaching impacts on research across the world. Many research institutions have paused much of their research operations to help encourage social distancing and to decrease the overall rate of new infections in their respective communities. Each of the institutions participating in the NNL content acquisition activity has had research placed on hold to varying degrees, including recruitment for the NNL study. The team is taking this time to perform an interim analysis of the NNL content. This is a time for us to examine the data collected to date to understand how neuroimaging varies across the normal population. We are examining which demographic and cognitive variables have the greatest impact on neuroimaging. With this information, we will revisit the initial analyses that informed the overall size of the library to determine how best to recruit going forward. While this interim analysis had been planned prior to the COVID-19-related alterations in operations, we are still able to focus on this activity. We are making ample use of the existing array of tools so the team remains virtually connected, ensuring the high-quality collaborative framework continues despite decreased in-person interactions and overall uncertainty. We know the COVID-19 restrictions will ultimately pass and look forward to the effort emerging even stronger following the current period of reflection on the data and the overall approach.
What do you hope CVB and the NNL can achieve over the next five years?
Over the next five years, the content acquisition for the library should be completed. We should be well into the next phase of using the library to help transform how advanced neuroimaging is used for clinical care. We will have compared large-scale disease datasets to the library to develop a repertoire of predictive features specific for a disease process. Additionally, we will have established partnerships with major medical imaging entities to help deploy this knowledge onto clinical platforms and improve how neuroimaging is being used to diagnose, treat and predict the overall course of a number of neurological conditions. The library will also have been made available to the larger research community to aid with ongoing studies. This may take the form of shared common control data sets, or of better informing how controls are used in research studies to account for the natural variability of imaging across the population. Additionally, we would like to use the NNL as well as the study framework to improve how multi-site research studies are performed. We will help characterize site-specific differences that may be related to scanner hardware, software, or temporal fluctuations of a given scanner, determine the overall effect sizes of these differences, and help develop tools to minimize those differences while preserving ground truth within the data.
What new technology used in TBI research excites you the most?
For a few years, CVB and the University of Virginia have been collaborating on an effort, supported by the Office of Naval Research (ONR), to develop powerful dimensionality reduction techniques to aid identification of latent information in neuroimaging data. As neuroimaging is comprised of hundreds of thousands of 3D pixels, known as voxels, the data dimensionality is extraordinarily high when performing statistical testing on acquired imaging data. Many neuroimaging analytical techniques involve some level of dimensionality reduction, often into anatomical regions, in order to decrease the number of comparisons for statistical testing. However, this approach does not consider the data components contributing to the variability in a given dataset, and may not be the best use of statistical power. The ONR-funded effort has supported development of a dimensionality reduction approach known as Similarity-driven Multi-view Linear Reconstruction (SiMLR). This approach is best suited for multi-modal datasets and is capable of considering imaging as well as non-imaging data. It fundamentally relies upon the assumption that underlying neurobiological changes or injuries will manifest across several systems. It is a method that can link measurements across scales and systems to help increase the likelihood of finding a signal. It also reduces the potential impact of corrupted data, as it is likely that spurious signals will not be shared across all modalities. Additionally, there is a natural filtering of noise, given noise typically does not covary across measurements. The approach must be used with datasets where there is co-variation across measurements. The method is a generalization of classical methods such as principal component analysis (PCA) and canonical correlation analysis (CCA). However, unlike CCA, the analysis approach is not limited in the number of overall modalities or “views” that may be assessed.
While the above may seem rather technical, this approach is one of the first of its kind allowing for assessment of imaging in a truly multi-modal fashion. Most multi-modal imaging studies to date involve analysis of imaging sequences separately, with some attempt at reconciling findings on each individual modality, after the fact, to try to determine relatedness. The SiMLR approach is a data-driven method to determine the principal axis of variation across the entirety of the dataset and to align all modalities to that axis. This determination of relatedness and variability is performed prior to any actual statistical testing, thus allowing for testing to be performed across components that are responsible for the greatest degree of variability across the entirety of the dataset. This is a powerful tool. It has already been used to identify key observations in military service members exposed to repetitive low-level blast and in service members with chronic TBI, and to construct a “brain age” metric that may serve as a summary descriptive tool of brain imaging. There has been considerable interest from other groups in incorporating the tool into their work, and the approach will likely have a significant impact on how neuroimaging studies, including those exploring TBI, are analyzed going forward. Of note, SiMLR is available open source as part of Advanced Normalization Tools (ANTs), designed by Drs. Brian Avants and Nick Tustison.
Why is funding the NNL and building the library important?
The Normative Neuroimaging Library is a foundational, building-block effort that will enable many research studies, help advanced neuroimaging gain a role in clinical care, and facilitate multi-site research studies employing magnetic resonance imaging (MRI). The construct of the library is essentially akin to building a bridge to knowledge discovery. While research funding is often allocated toward the implementation of studies to answer specific research questions, the NNL is a knowledge infrastructure development activity. Funding the NNL enables the activity necessary to establish infrastructure, and this infrastructure will catalyze progress in both research and the clinical care of patients with a variety of neurological conditions.