
Jaeckle, F., Bryant, R., Denholm, J., Romero Diaz, J., Schreiber, B., Shenoy, V., Ekundayomi, D., Evans, S., Arends, M., Soilleux, E.

medRxiv preprint, Jun 4 2025
Background: Coeliac disease, an autoimmune disorder affecting approximately 1% of the global population, is typically diagnosed on a duodenal biopsy. However, inter-pathologist agreement on coeliac disease diagnosis is only around 80%. Existing machine learning solutions designed to improve coeliac disease diagnosis often lack interpretability, which is essential for building trust and enabling widespread clinical adoption.
Objective: To develop an interpretable AI model capable of segmenting key histological structures in duodenal biopsies, generating explainable segmentation masks, estimating intraepithelial lymphocyte (IEL)-to-enterocyte and villus-to-crypt ratios, and diagnosing coeliac disease.
Design: Semantic segmentation models were trained to identify villi, crypts, IELs, and enterocytes using 49 annotated 2048x2048 patches at 40x magnification. IEL-to-enterocyte and villus-to-crypt ratios were calculated from segmentation masks, and a logistic regression model was trained on 172 images to diagnose coeliac disease based on these ratios. Evaluation was performed on an independent test set of 613 duodenal biopsy scans from a separate NHS Trust.
Results: The villus-crypt segmentation model achieved a mean PR AUC of 80.5%, while the IEL-enterocyte model reached a PR AUC of 82%. The diagnostic model classified WSIs with 96% accuracy, 86% positive predictive value, and 98% negative predictive value on the independent test set.
Conclusions: Our interpretable AI models accurately segmented key histological structures and diagnosed coeliac disease in unseen WSIs, demonstrating strong generalization performance. These models provide pathologists with reliable IEL-to-enterocyte and villus-to-crypt ratio estimates, enhancing diagnostic accuracy. Interpretable AI solutions like ours are essential for fostering trust among healthcare professionals and patients, complementing existing black-box methodologies.
What is already known on this topic: Pathologist concordance in diagnosing coeliac disease from duodenal biopsies is consistently reported to be below 80%, highlighting diagnostic variability and the need for improved methods. Several recent studies have leveraged artificial intelligence (AI) to enhance coeliac disease diagnosis. However, most of these models operate as "black boxes," offering limited interpretability and transparency. The lack of explainability in AI-driven diagnostic tools prevents widespread adoption by healthcare professionals and reduces patient trust.
What this study adds: This study presents an interpretable semantic segmentation algorithm capable of detecting the four key histological structures essential for diagnosing coeliac disease: crypts, villi, intraepithelial lymphocytes (IELs), and enterocytes. The model accurately estimates the IEL-to-enterocyte ratio and the villus-to-crypt ratio, the latter being an indicator of villous atrophy and crypt hyperplasia, thereby providing objective, reproducible metrics for diagnosis. The segmentation outputs allow for transparent, explainable decision-making, supporting pathologists in coeliac disease diagnosis with improved accuracy and confidence. The model also automates the estimation of the IEL-to-enterocyte ratio, a labour-intensive task currently performed manually by pathologists in limited biopsy regions. By minimising diagnostic variability and alleviating time constraints for pathologists, the model provides an efficient and practical solution to streamline the diagnostic workflow. Tested on an independent dataset from a previously unseen source, the model demonstrates explainability and generalizability, enhancing trust and encouraging adoption in routine clinical practice.
Furthermore, this approach could set a new standard for AI-assisted duodenal biopsy evaluation, paving the way for the development of interpretable AI tools in pathology to address the critical challenges of limited pathologist availability and diagnostic inconsistencies.
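The diagnostic step described above, a logistic regression on two ratios computed from segmentation masks, can be sketched as follows. The label encoding, training values, and mask are hypothetical illustrations, not the authors' actual pipeline or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ratios_from_mask(mask, labels):
    """Pixel-count ratios from a labelled segmentation mask.
    `labels` maps structure names to integer ids (hypothetical encoding)."""
    counts = {name: int(np.sum(mask == idx)) for name, idx in labels.items()}
    iel_to_enterocyte = counts["iel"] / max(counts["enterocyte"], 1)
    villus_to_crypt = counts["villus"] / max(counts["crypt"], 1)
    return iel_to_enterocyte, villus_to_crypt

labels = {"villus": 1, "crypt": 2, "iel": 3, "enterocyte": 4}

# Toy training data: [IEL:enterocyte, villus:crypt] per biopsy; 1 = coeliac
# (high IEL ratio, low villus:crypt suggests villous atrophy).
X = np.array([[0.1, 3.0], [0.15, 2.5], [0.6, 0.8], [0.9, 0.5]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)

# Score a synthetic mask through the same two-ratio pipeline.
rng = np.random.default_rng(0)
mask = rng.integers(0, 5, size=(64, 64))
features = np.array([ratios_from_mask(mask, labels)])
prediction = clf.predict(features)
```

The appeal of this design is that the two features feeding the classifier are exactly the quantities pathologists already reason about, which is what makes the pipeline interpretable.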

Verde, A. S. C., de Almeida, J. G., Mendes, F., Pereira, M., Lopes, R., Brito, M. J., Urbano, M., Correia, P. S., Gaivao, A. M., Firpo-Betancourt, A., Fonseca, J., Matos, C., Regge, D., Marias, K., Tsiknakis, M., ProCAncer-I Consortium,, Conceicao, R. C., Papanikolaou, N.

medRxiv preprint, Jun 4 2025
While Deep Learning (DL) models trained on Magnetic Resonance Imaging (MRI) have shown promise for prostate cancer detection, their lack of direct biological validation often undermines radiologists' trust and hinders clinical adoption. Radiologic-histopathologic (rad-path) correlation has the potential to validate MRI-based lesion detection using digital histopathology. This study uses automated and manually annotated digital histopathology slides as a standard of reference to evaluate the spatial extent of lesion annotations derived from both radiologist interpretations and DL models previously trained on prostate bi-parametric MRI (bp-MRI). 117 histopathology slides were used as reference. Prospective patients with clinically significant prostate cancer underwent a bp-MRI examination before robotic radical prostatectomy, and each prostate specimen was sliced using a 3D-printed patient-specific mold to enable a direct comparison between pre-operative imaging and histopathology slides. The histopathology slides and their corresponding T2-weighted MRI images were co-registered. We trained DL models for cancer detection on large retrospective datasets of T2-weighted MRI only, bp-MRI, and histopathology images, and performed inference in a prospective patient cohort. We evaluated the spatial overlap between lesions detected by the different models, and between detected lesions and the histopathological and radiological ground truth, using the Dice similarity coefficient (DSC). The DL models trained on digital histopathology tiles and MRI images demonstrated promising capabilities in lesion detection. A low overlap was observed between the lesion detection masks generated by the histopathology and bp-MRI models, with a DSC of 0.10. However, the overlap was equivalent (DSC = 0.08) between radiologist annotations and the histopathology ground truth. A rad-path correlation pipeline was established in a prospective cohort of patients with prostate cancer undergoing surgery.
The overlap between the rad-path DL models was low but comparable to that between radiologist annotations and histopathology. While DL models show promise in prostate cancer detection, challenges remain in integrating MRI-based predictions with histopathological findings.
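The Dice similarity coefficient (DSC) used throughout the evaluation is a simple overlap measure between two binary masks. A minimal sketch with toy masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two toy 4x4 lesion masks with partial overlap.
pred = np.zeros((4, 4), dtype=int)
pred[:2, :2] = 1            # 4 positive pixels
truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1         # 4 positive pixels, 1 shared with pred
dsc = dice(pred, truth)     # 2*1 / (4+4) = 0.25
```

A DSC of 0.10, as reported between the histopathology and bp-MRI models, therefore corresponds to only a small fraction of shared positive pixels relative to the combined lesion area.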

Singh, D., Brima, Y., Levin, F., Becker, M., Hiller, B., Hermann, A., Villar-Munoz, I., Beichert, L., Bernhardt, A., Buerger, K., Butryn, M., Dechent, P., Duezel, E., Ewers, M., Fliessbach, K., D. Freiesleben, S., Glanz, W., Hetzer, S., Janowitz, D., Goerss, D., Kilimann, I., Kimmich, O., Laske, C., Levin, J., Lohse, A., Luesebrink, F., Munk, M., Perneczky, R., Peters, O., Preis, L., Priller, J., Prudlo, J., Prychynenko, D., Rauchmann, B.-S., Rostamzadeh, A., Roy-Kluth, N., Scheffler, K., Schneider, A., Droste zu Senden, L., H. Schott, B., Spottke, A., Synofzik, M., Wiltfang, J., Jessen, F., W

medRxiv preprint, Jun 4 2025
Introduction: Explainable Artificial Intelligence (XAI) methods enhance the diagnostic efficiency of clinical decision support systems by making the predictions of a convolutional neural network (CNN) on brain imaging more transparent and trustworthy. However, their clinical adoption is limited by sparse validation of explanation quality. Our study introduces a framework that evaluates XAI methods by integrating neuroanatomical morphological features with CNN-generated relevance maps for disease classification.
Methods: We trained a CNN using brain MRI scans from six cohorts: ADNI, AIBL, DELCODE, DESCRIBE, EDSD, and NIFD (N=3253), including participants who were cognitively normal or had amnestic mild cognitive impairment, dementia due to Alzheimer's disease, or frontotemporal dementia. Clustering analysis benchmarked different explanation space configurations using morphological features as a proxy ground truth. We implemented three post-hoc explanation methods: (i) explanation by model simplification, (ii) explanation-by-example, and (iii) textual explanations. A qualitative evaluation by clinicians (N=6) was performed to assess their clinical validity.
Results: Clustering performance improved in morphology-enriched explanation spaces, with gains in both homogeneity and completeness of the clusters. Post-hoc explanations by model simplification largely delineated converters and stable participants, while explanation-by-example presented possible cognition trajectories. Textual explanations gave rule-based summarization of pathological findings. The clinicians' qualitative evaluation highlighted challenges and opportunities of XAI for different clinical applications.
Conclusion: Our study refines XAI explanation spaces and applies various approaches for generating explanations. In the context of AI-based decision support systems in dementia research, we found the explanation methods promising for enhancing diagnostic efficiency, supported by the clinical assessments.
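Homogeneity and completeness, the clustering metrics used to benchmark explanation spaces, can be illustrated with scikit-learn on toy labels (hypothetical diagnostic groups, not the study's cohorts):

```python
from sklearn.metrics import completeness_score, homogeneity_score

# Ground-truth diagnostic groups vs. cluster assignments in an explanation space.
diagnoses = [0, 0, 1, 1, 2, 2]
perfect = [1, 1, 0, 0, 2, 2]   # relabelled but otherwise perfect clustering
split = [0, 1, 2, 3, 4, 5]     # every participant in its own cluster

h_perfect = homogeneity_score(diagnoses, perfect)   # clusters contain one class
c_perfect = completeness_score(diagnoses, perfect)  # classes stay together
h_split = homogeneity_score(diagnoses, split)       # singletons are trivially pure
c_split = completeness_score(diagnoses, split)      # but classes are fragmented
```

Reporting both metrics matters because they fail in opposite directions: over-fragmented clusterings score perfectly on homogeneity but poorly on completeness, as the `split` case shows.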

Ha SK, Lin LY, Shi M, Wang M, Han JY, Lee NG

PubMed, Jun 3 2025
To develop a deep learning model using orbital computed tomography (CT) imaging to accurately distinguish thyroid eye disease (TED) and orbital myositis, two conditions with overlapping clinical presentations. This retrospective, single-center cohort study spanning 12 years included normal controls and patients with TED or orbital myositis who had orbital imaging and an examination by an oculoplastic surgeon. A deep learning model employing a Visual Geometry Group-16 (VGG-16) network was trained on various binary combinations of TED, orbital myositis, and controls using single slices of coronal orbital CT images. A total of 1628 images from 192 patients (110 TED, 51 orbital myositis, 31 controls) were included. The primary model comparing orbital myositis and TED had an accuracy of 98.4% and an area under the receiver operating characteristic curve (AUC) of 0.999. In detecting orbital myositis, it had a sensitivity, specificity, and F1 score of 0.964, 0.994, and 0.984, respectively. Deep learning models can differentiate TED and orbital myositis based on a single coronal orbital CT image with high accuracy. Their ability to distinguish these conditions based not only on extraocular muscle enlargement but also on other salient features suggests potential applications in diagnostics and treatment beyond these conditions.

Cheng Y, Malekar M, He Y, Bommareddy A, Magdamo C, Singh A, Westover B, Mukerji SS, Dickson J, Das S

PubMed, Jun 3 2025
Alzheimer disease and related dementias (ADRD) are complex disorders with overlapping symptoms and pathologies. Comprehensive records of symptoms in electronic health records (EHRs) are critical for not only reaching an accurate diagnosis but also supporting ongoing research studies and clinical trials. However, these symptoms are frequently obscured within unstructured clinical notes in EHRs, making manual extraction both time-consuming and labor-intensive. We aimed to automate symptom extraction from the clinical notes of patients with ADRD using fine-tuned large language models (LLMs), compare its performance to regular expression-based symptom recognition, and validate the results using brain magnetic resonance imaging (MRI) data. We fine-tuned LLMs to extract ADRD symptoms across the following 7 domains: memory, executive function, motor, language, visuospatial, neuropsychiatric, and sleep. We assessed the algorithm's performance by calculating the area under the receiver operating characteristic curve (AUROC) for each domain. The extracted symptoms were then validated in two analyses: (1) predicting ADRD diagnosis using the counts of extracted symptoms and (2) examining the association between ADRD symptoms and MRI-derived brain volumes. Symptom extraction across the 7 domains achieved high accuracy with AUROCs ranging from 0.97 to 0.99. Using the counts of extracted symptoms to predict ADRD diagnosis yielded an AUROC of 0.83 (95% CI 0.77-0.89). Symptom associations with brain volumes revealed that a smaller hippocampal volume was linked to memory impairments (odds ratio 0.62, 95% CI 0.46-0.84; P=.006), and reduced pallidum size was associated with motor impairments (odds ratio 0.73, 95% CI 0.58-0.90; P=.04). These results highlight the accuracy and reliability of our high-throughput ADRD phenotyping algorithm. 
By enabling automated symptom extraction, our approach has the potential to assist with differential diagnosis, as well as facilitate clinical trials and research studies of dementia.
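The validation step above, predicting ADRD status from counts of extracted symptoms, reduces to scoring patients and computing an AUROC. A toy sketch with hypothetical per-patient counts (the study's actual counts and model are not reproduced here):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient counts of LLM-extracted symptoms in the 7 domains.
domains = ["memory", "executive", "motor", "language",
           "visuospatial", "neuropsychiatric", "sleep"]
counts = np.array([
    [4, 2, 1, 3, 2, 3, 1],   # ADRD patient
    [3, 3, 2, 2, 1, 2, 2],   # ADRD patient
    [0, 1, 0, 0, 0, 1, 0],   # control
    [1, 0, 0, 1, 0, 0, 1],   # control
])
diagnosis = np.array([1, 1, 0, 0])

# A crude score: the total number of extracted symptoms per patient.
score = counts.sum(axis=1)
auroc = roc_auc_score(diagnosis, score)
```

On these toy rows the positives all outscore the negatives, so the AUROC is 1.0; the paper's reported 0.83 reflects the realistic overlap between patient and control symptom burdens.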

Kanhere A, Navarathna N, Yi PH, Parekh VS, Pickle J, Cloak CC, Ernst T, Chang L, Li D, Redline S, Isaiah A

PubMed, Jun 3 2025
One in ten children experiences sleep-disordered breathing (SDB). Untreated SDB is associated with poor cognition, but the underlying mechanisms are less understood. We assessed the relationship between magnetic resonance imaging (MRI)-derived upper airway volume and children's cognition and regional cortical gray matter volumes. We used five-year data from the Adolescent Brain Cognitive Development study (n=11,875 children, 9-10 years at baseline). Upper airway volumes were derived using a deep learning model applied to 5,552,640 brain MRI slices. The primary outcome was the Total Cognition Composite score from the National Institutes of Health Toolbox (NIH-TB). Secondary outcomes included other NIH-TB measures and cortical gray matter volumes. The habitual snoring group had significantly smaller airway volumes than non-snorers (mean difference=1.2 cm³; 95% CI, 1.0-1.4 cm³; P<0.001). Deep learning-derived airway volume predicted the Total Cognition Composite score (estimated mean difference=3.68 points; 95% CI, 2.41-4.96; P<0.001) per one-unit increase in the natural log of airway volume (~2.7-fold raw volume increase). This airway volume increase was also associated with an average 0.02 cm³ increase in right temporal pole volume (95% CI, 0.01-0.02 cm³; P<0.001). Airway volume similarly predicted most NIH-TB domain scores and multiple frontal and temporal gray matter volumes. These brain volumes mediated the relationship between airway volume and cognition. We demonstrate a novel application of deep learning-based airway segmentation in a large pediatric cohort. Upper airway volume is a potential biomarker for cognitive outcomes in pediatric SDB, offers insights into neurobiological mechanisms, and informs future studies on risk stratification.
This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/).
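The reported effect size is per one-unit increase in ln(airway volume); the "~2.7-fold" figure is simply the definition of e. A quick check (the starting volume here is an arbitrary example, not a study value):

```python
import numpy as np

v0 = 3.0                       # hypothetical airway volume, cm^3
v1 = np.exp(np.log(v0) + 1.0)  # volume after a one-unit increase in ln(volume)
fold_change = v1 / v0          # e ≈ 2.718, hence "~2.7-fold"
```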

Park JH, Hamimi M, Choi JJE, Figueredo CMS, Cameron MA

PubMed, Jun 3 2025
Accurate segmentation of the maxillary sinus from medical images is essential for diagnostic purposes and surgical planning. Manual segmentation of the maxillary sinus, while the gold standard, is time-consuming and requires adequate training. To overcome this problem, AI-enabled automatic segmentation software has been developed. The purpose of this review is to systematically analyse the current literature and compare the accuracy and efficiency of automatic segmentation techniques for the maxillary sinus with manual segmentation. A systematic approach was used to perform a thorough analysis of the existing literature following PRISMA guidelines. Data for this study were obtained from the PubMed, Medline, Embase, and Google Scholar databases. Inclusion and exclusion eligibility criteria were used to shortlist relevant studies. The sample size, anatomical structures segmented, experience of operators, type of manual segmentation software used, type of automatic segmentation software used, statistical comparative method used, and length of time of segmentation were analysed. This systematic review presents 10 studies that compared the accuracy and efficiency of automatic segmentation of the maxillary sinus with manual segmentation. All the included studies were found to have a low risk of bias. Sample sizes ranged from 3 to 144, a variety of operators were used to manually segment the CBCT scans, and manual segmentation was performed primarily in 3D Slicer and Mimics software. The comparison was primarily made against U-Net-based architectures, with the Dice coefficient being the primary means of comparison. This systematic review showed that automatic segmentation was consistently faster than manual segmentation and over 90% accurate when compared with the gold standard of manual segmentation.

Tavarozzi R, Lombardi A, Scarano F, Staiano L, Trattelli G, Farro M, Castellino A, Coppola C

PubMed, Jun 3 2025
Superficial lymph node (LN) enlargement is a common ultrasonographic finding and can be associated with a broad spectrum of conditions, from benign reactive hyperplasia to malignant lymphoproliferative disorders (LPDs). LPDs, which include various hematologic malignancies affecting lymphoid tissue, present with diverse immune-morphological and clinical features, making differentiation from other malignant causes of lymphadenopathy challenging. Radiologic assessment is crucial in characterizing lymphadenopathy, with ultrasonography serving as a noninvasive and widely available imaging modality. High-resolution ultrasound allows the evaluation of key features such as LN size, shape, border definition, echogenicity, and the presence of abnormal cortical thickening, loss of the fatty hilum, or altered vascular patterns, which aid in distinguishing benign from malignant processes. This review aims to describe the ultrasonographic characteristics of lymphadenopathy, offering essential diagnostic insights to differentiate malignant disorders, particularly LPDs. We will discuss standard ultrasound techniques, including grayscale imaging and Doppler ultrasound, and explore more advanced methods such as contrast-enhanced ultrasound (CEUS), elastography, and artificial intelligence-assisted imaging, which are gaining prominence in LN evaluation. By highlighting these imaging modalities, we aim to enhance the diagnostic accuracy of ultrasonography in lymphadenopathy assessment and improve early detection of LPDs and other malignant conditions.

Liang J, Tan W, Xie S, Zheng L, Li C, Zhong Y, Li J, Zhou C, Zhang Z, Zhou Z, Gong P, Chen X, Zhang L, Cheng X, Zhang Q, Lu G

PubMed, Jun 3 2025
The location of the hemorrhage in spontaneous intracerebral hemorrhage (sICH) is clinically pivotal for identifying both its etiology and prognosis, but a comprehensive, quantitative modeling approach has yet to be thoroughly explored. We employed lesion-symptom mapping to extract the location features of sICH. We registered patients' non-contrast computed tomography images and hematoma masks to standard human brain templates to identify the specific brain regions affected. We then generated hemorrhage probability maps for different etiologies and prognoses. By integrating radiomics and clinical features into multiple logistic regression models, we developed and validated optimal etiological and prognostic models across three centers, comprising 1162 sICH patients. Hematomas of different etiologies have unique spatial distributions. The location-based features demonstrated robust classification of the etiology of sICH, with a mean area under the curve (AUC) of 0.825 across diverse datasets. These features provided significant incremental value when integrated into predictive models (fusion model mean AUC = 0.915), outperforming models relying solely on clinical features (mean AUC = 0.828). In prognostic assessments, both hematoma location (mean AUC = 0.762) and radiomic features (mean AUC = 0.837) contributed substantial incremental predictive value, as evidenced by the fusion model's mean AUC of 0.873, compared to models utilizing clinical features alone (mean AUC = 0.771). Our results show that location features were intrinsically more robust, generalizable, and interpretable than complex radiomic modeling; our approach demonstrates a novel, interpretable, streamlined, and comprehensive framework for etiological classification and prognostic prediction in sICH.
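The hemorrhage probability maps described above amount to a voxel-wise frequency across co-registered binary hematoma masks within one etiology group. A minimal sketch on synthetic masks (the shapes and values are illustrative only):

```python
import numpy as np

def probability_map(masks):
    """Voxel-wise hemorrhage frequency across co-registered binary masks."""
    return np.mean(np.stack(masks).astype(float), axis=0)

rng = np.random.default_rng(1)
# Ten synthetic 8x8 binary hematoma masks standing in for one etiology group;
# real masks would be 3D volumes registered to a standard brain template.
group_masks = [rng.integers(0, 2, size=(8, 8)) for _ in range(10)]
pmap = probability_map(group_masks)
```

Each voxel of `pmap` is the fraction of patients in the group whose hematoma covered that location, which is what makes the resulting maps directly comparable across etiologies.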

Grootjans W, Krainska U, Rezazade Mehrizi MH

PubMed, Jun 3 2025
As many radiology departments embark on adopting artificial intelligence (AI) solutions in their clinical practice, they face the challenge that commercial applications often do not fit their needs. As a result, they engage in a co-creation process with technology companies to collaboratively develop and implement AI solutions. Despite its importance, the process of co-creating AI solutions is under-researched, particularly regarding the range of challenges that may occur and how medical and technological parties can monitor, assess, and guide their co-creation process through an effective collaboration framework. Drawing on a multi-case study of three co-creation projects at an academic medical center in the Netherlands, we examine how co-creation processes unfold through different scenarios, depending on the extent to which the two parties engage in "resourcing," "adaptation," and "reconfiguration." We offer a relational framework that helps the involved parties monitor, assess, and guide their collaborations in co-creating AI solutions. The framework allows them to discover novel use-cases, reconsider their established assumptions and practices for developing AI solutions, and redesign their technological systems, clinical workflow, and legal and organizational arrangements. Using the proposed framework, we identified distinct co-creation journeys with varying outcomes, which could be mapped onto the framework to diagnose, monitor, and guide collaborations toward desired results. The outcomes of co-creation can vary widely. The proposed framework enables medical institutions and technology companies to assess challenges, make adjustments, and steer their collaboration toward desired goals.
Question: How can medical institutions and AI startups effectively co-create AI solutions for radiology, ensuring alignment with clinical needs while steering the collaboration?
Findings: This study provides a co-creation framework for assessing project progress and stakeholder engagement, together with guidelines for radiology departments to steer the co-creation of AI.
Clinical relevance: By actively involving radiology professionals in AI co-creation, this study demonstrates how co-creation helps bridge the gap between clinical needs and AI development, leading to clinically relevant, user-friendly solutions that enhance the radiology workflow.