INCYT   25562
INSTITUTO DE NEUROCIENCIA COGNITIVA Y TRASLACIONAL
Executing Unit (UE)
Congresses and scientific meetings
Title:
Robust data-driven computational approach for classifying frontotemporal neurodegeneration: A multimodal and multicenter neuroimaging study
Author(s):
GARCÍA, ADOLFO M.; DONELLY-KEHOE, PATRICIO; PASCARIELLO, GUIDO; TU, SICONG; MANES, FACUNDO; PIGUET, OLIVIER; HERRERA, EDUAR; LANDIN-ROMERO, RAMÓN; MATALLANA, DIANA; KUMFOR, FIONA; REYES, PABLO; SANTAMARÍA-GARCÍA, HERNANDO; SEDEÑO, LUCAS; HODGES, JOHN; IBÁÑEZ, AGUSTÍN
Location:
Sydney
Meeting:
Conference; 11th International Conference on Frontotemporal Dementias; 2018
Abstract:
Accurate early diagnosis of behavioral-variant frontotemporal dementia (bvFTD) remains challenging, as it depends on clinical expertise and broad diagnostic guidelines, which may be interpreted differently across sites with socio-geographical variability. Recent recommendations highlight the role of multimodal neuroimaging processing and machine learning methods in diagnosis. We developed and validated an automatic, multi-center, multimodal computational approach for robust classification of bvFTD patients and healthy controls (HCs). We analyzed structural MRI and resting-state functional connectivity (rsFC) data from 44 bvFTD patients and 60 HCs (across three centers with different acquisition protocols) using a fully automated processing pipeline, including site normalization, single- and multimodal support vector machine learning, a progressive feature elimination procedure, and a random forest classifier. Single-modality classifiers (structural MRI and rsFC) were fine-tuned, and potential neurocognitive biomarkers were identified for each modality. A two-dimensional space was then generated by combining both modalities to capture the relevant information from each. For both modalities, the features that generalized across centers involved frontal and temporal regions. Multimodal classification was highly accurate (91%), sensitive (83.7%), and specific (96.6%), surpassing the highest classification results reported in previous studies. Our results underscore the potential of combining multimodal imaging and machine learning as a gold-standard complementary diagnostic tool that is robust to socio-demographic and acquisition-parameter differences across sites. Partially supported by grants from CONICET, CONICYT/FONDECYT Regular (1170010), FONDAP 15150012, INECO Foundation, and the Inter-American Development Bank (IDB).
FK is supported by a National Health and Medical Research Council (NHMRC)-Australian Research Council (ARC) Dementia Research Development Fellowship (APP1097026).
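The pipeline described in the abstract (per-modality support vector machines with feature elimination, whose decision scores span a two-dimensional space classified by a random forest) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the feature dimensions, hyperparameters, and the use of scikit-learn's RFE as the feature elimination step are all assumptions.

```python
# Hypothetical sketch of the multimodal classification pipeline:
# one linear SVM per modality after recursive feature elimination,
# then a random forest over the 2D space of per-modality SVM scores.
# All data below are synthetic; parameters are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_controls = 44, 60  # group sizes as reported in the study
y = np.array([1] * n_patients + [0] * n_controls)

# Two synthetic "modalities" standing in for structural MRI and rsFC features
X_struct = rng.normal(size=(len(y), 50)) + y[:, None] * 0.8
X_rsfc = rng.normal(size=(len(y), 80)) + y[:, None] * 0.5

Xs_tr, Xs_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_struct, X_rsfc, y, test_size=0.3, stratify=y, random_state=0)

def modality_scores(X_tr, X_te, y_tr, n_features=10):
    """Fit a linear SVM after recursive feature elimination and
    return its decision-function scores for train and test sets."""
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=n_features)
    rfe.fit(X_tr, y_tr)
    return rfe.decision_function(X_tr), rfe.decision_function(X_te)

s_tr, s_te = modality_scores(Xs_tr, Xs_te, y_tr)  # structural MRI axis
f_tr, f_te = modality_scores(Xf_tr, Xf_te, y_tr)  # rsFC axis

# 2D space: one axis per modality's SVM score, classified by a random forest
Z_tr = np.column_stack([s_tr, f_tr])
Z_te = np.column_stack([s_te, f_te])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z_tr, y_tr)
print(f"multimodal test accuracy: {rf.score(Z_te, y_te):.2f}")
```

Collapsing each modality to a single SVM score before the final classifier keeps the combined space low-dimensional, which matters with only 104 participants.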