
Brain tau PET-based identification and characterization of subpopulations in patients with Alzheimer's disease using deep learning-derived saliency maps.

Li Y, Wang X, Ge Q, Graeber MB, Yan S, Li J, Li S, Gu W, Hu S, Benzinger TLS, Lu J, Zhou Y

PubMed · Jun 9, 2025
Alzheimer's disease (AD) is a heterogeneous neurodegenerative disorder in which tau neurofibrillary tangles are a pathological hallmark closely associated with cognitive dysfunction and neurodegeneration. In this study, we used brain tau data to investigate AD heterogeneity by identifying and characterizing subpopulations among patients. We included 615 cognitively normal and 159 AD brain ¹⁸F-flortaucipir PET scans, along with T1-weighted MRI, from the Alzheimer's Disease Neuroimaging Initiative database. A three-dimensional convolutional neural network model was employed for AD detection using standardized uptake value ratio (SUVR) images. The model-derived saliency maps were generated and employed as informative image features for clustering AD participants. Demographics, neuropsychological measures, and SUVRs were statistically compared between the identified subpopulations. Correlations between neuropsychological measures and regional SUVRs were assessed. A generalized linear model was used to investigate the interaction effect of sex and APOE ε4 on regional SUVRs. Two distinct subpopulations of AD patients were revealed, denoted S_Hi and S_Lo. Compared with the S_Lo group, the S_Hi group exhibited a significantly higher global tau burden in the brain, but both groups showed similar distributions of cognitive performance. In the S_Hi group, the associations between neuropsychological measurements and regional tau deposition were weaker. Moreover, a significant interaction effect of sex and APOE ε4 on tau deposition was observed in the S_Lo group, but no such effect was found in the S_Hi group. Our results suggest that tau tangles, as measured by SUVR, continue to accumulate even after cognitive function plateaus in AD patients, highlighting the advantages of PET in later disease stages. The differing relationships between cognition and tau deposition, and among sex, APOE ε4, and tau deposition, point toward subtype-specific treatments. Targeting sex-specific and genetic factors influencing tau deposition, as well as interventions aimed at tau's impact on cognition, may be effective.
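For readers unfamiliar with the approach, the sketch below illustrates the general pattern of gradient-based saliency from a 3D CNN followed by clustering of the resulting maps. The tiny network, volume size, and k-means step are illustrative assumptions, not the authors' published pipeline.

```python
# Minimal sketch: gradient-based saliency from a 3D CNN, then k-means
# clustering of flattened saliency maps. All shapes/architectures are
# hypothetical stand-ins for the paper's actual pipeline.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class Tiny3DCNN(nn.Module):  # stand-in for the paper's 3D-CNN classifier
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(4))
        self.classifier = nn.Linear(8 * 4 ** 3, 2)  # CN vs AD logits

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def saliency_map(model, suvr_volume):
    """Gradient of the AD logit w.r.t. the input SUVR volume."""
    model.eval()
    x = suvr_volume.clone().requires_grad_(True)
    model(x.unsqueeze(0))[0, 1].backward()
    return x.grad.abs()  # voxel-wise importance

model = Tiny3DCNN()
maps = torch.stack([saliency_map(model, torch.rand(1, 32, 32, 32))
                    for _ in range(20)])          # one map per AD patient
labels = KMeans(n_clusters=2, n_init=10).fit_predict(maps.flatten(1).numpy())
```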

Comparative accuracy of two commercial AI algorithms for musculoskeletal trauma detection in emergency radiographs.

Huhtanen JT, Nyman M, Blanco Sequeiros R, Koskinen SK, Pudas TK, Kajander S, Niemi P, Aronen HJ, Hirvonen J

PubMed · Jun 9, 2025
Missed fractures are the primary cause of interpretation errors in emergency radiology, and artificial intelligence has recently shown great promise in radiograph interpretation. This study compared the diagnostic performance of two AI algorithms, BoneView and RBfracture, in detecting traumatic abnormalities (fractures and dislocations) in MSK radiographs. The AI algorithms analyzed 998 radiographs (585 normal, 413 abnormal) against the consensus of two MSK specialists. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and interobserver agreement (Cohen's kappa) were calculated. Robustness was assessed with 95% confidence intervals (CIs), and McNemar's tests compared sensitivity and specificity between the AI algorithms. BoneView demonstrated a sensitivity of 0.893 (95% CI: 0.860-0.920), specificity of 0.885 (95% CI: 0.857-0.909), PPV of 0.846, NPV of 0.922, and accuracy of 0.889. RBfracture demonstrated a sensitivity of 0.872 (95% CI: 0.836-0.901), specificity of 0.892 (95% CI: 0.865-0.915), PPV of 0.851, NPV of 0.908, and accuracy of 0.884. No statistically significant differences were found in sensitivity (p = 0.151) or specificity (p = 0.708). Cohen's kappa was 0.81 (95% CI: 0.77-0.84), indicating almost perfect agreement between the two AI algorithms. Performance was similar in adults and children. Both AI algorithms struggled more with subtle abnormalities, which constituted 66% and 70% of false negatives but only 20% and 18% of true positives for the two algorithms, respectively (p < 0.001). BoneView and RBfracture exhibited high diagnostic performance and almost perfect agreement, with consistent results across adults and children, highlighting the potential of AI in emergency radiograph interpretation.
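As a refresher on the statistics reported above, the following sketch computes the same metrics from 2x2 counts using standard scikit-learn and statsmodels functions. The counts are approximations back-calculated from the reported sensitivity and specificity; the paired McNemar table and kappa labels are entirely made up for illustration.

```python
# Illustrative computation of the reported metrics from 2x2 counts.
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

tp, fn = 369, 44   # abnormal radiographs (approx., from sensitivity 0.893)
tn, fp = 518, 67   # normal radiographs (approx., from specificity 0.885)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + fp + fn + tn)

# McNemar's test compares paired predictions of the two algorithms:
# table[i][j] = cases where algorithm A was correct (i=0) / wrong (i=1)
# and algorithm B was correct (j=0) / wrong (j=1). Counts are hypothetical.
table = [[880, 45], [38, 35]]
print(mcnemar(table, exact=False).pvalue)

# Cohen's kappa on the two algorithms' per-radiograph labels (toy example):
a_pred = [1, 0, 1, 1, 0]
b_pred = [1, 0, 1, 0, 0]
print(cohen_kappa_score(a_pred, b_pred))
```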

Addressing Limited Generalizability in Artificial Intelligence-Based Brain Aneurysm Detection for Computed Tomography Angiography: Development of an Externally Validated Artificial Intelligence Screening Platform.

Pettersson SD, Filo J, Liaw P, Skrzypkowska P, Klepinowski T, Szmuda T, Fodor TB, Ramirez-Velandia F, Zieliński P, Chang YM, Taussky P, Ogilvy CS

PubMed · Jun 9, 2025
Brain aneurysm detection models, both in the literature and in industry, continue to lack generalizability during external validation, limiting clinical adoption. This challenge is largely due to extensive exclusion criteria during training data selection. The authors developed the first model to achieve generalizability using novel methodological approaches. Computed tomography angiography (CTA) scans from 2004 to 2023 at the study institution were used for model training, including untreated unruptured intracranial aneurysms without extensive cerebrovascular disease. External validation used digital subtraction angiography-verified CTAs from an international center, while prospective validation occurred at the internal institution over 9 months. A public web platform was created for further model validation. A total of 2194 CTA scans were used for this study. One thousand five hundred eighty-seven patients and 1920 aneurysms with a mean size of 5.3 ± 3.7 mm were included in the training cohort. The mean age of the patients was 69.7 ± 14.9 years, and 1203 (75.8%) were female. The model achieved a training Dice score of 0.88 and a validation Dice score of 0.76. Prospective internal validation on 304 scans yielded a lesion-level (LL) sensitivity of 82.5% (95% CI: 75.5-87.9) and specificity of 89.6% (95% CI: 84.5-93.2). External validation on 303 scans demonstrated a comparable LL sensitivity and specificity of 83.5% (95% CI: 75.1-89.4) and 92.9% (95% CI: 88.8-95.6), respectively. Radiologist LL sensitivity at the external center was 84.5% (95% CI: 76.2-90.2), and 87.5% of the aneurysms missed by radiologists were detected by the model. The authors developed the first publicly testable artificial intelligence model for aneurysm detection on CTA scans, demonstrating generalizability and state-of-the-art performance in external validation. The model addresses key limitations of previous efforts and enables broader validation through a web-based platform.
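The Dice score used to train and validate the model above can be computed as in this minimal sketch; the random masks and volume shape are placeholders, not the study's segmentation outputs.

```python
# Minimal Dice-score sketch for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) on boolean voxel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Illustrative random masks standing in for model output and ground truth:
pred = np.random.rand(64, 64, 64) > 0.5
truth = np.random.rand(64, 64, 64) > 0.5
print(f"Dice: {dice(pred, truth):.3f}")
```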

Developing a Deep Learning Radiomics Model Combining Lumbar CT, Multi-Sequence MRI, and Clinical Data to Predict High-Risk Adjacent Segment Degeneration Following Lumbar Fusion: A Retrospective Multicenter Study.

Zou C, Wang T, Wang B, Fei Q, Song H, Zang L

PubMed · Jun 9, 2025
Study design: Retrospective cohort study. Objectives: To develop and validate a model combining clinical data, deep learning radiomics (DLR), and radiomic features from lumbar CT and multi-sequence MRI to predict which patients are at high risk of adjacent segment degeneration (ASDeg) after lumbar fusion. Methods: This study included 305 patients undergoing preoperative CT and MRI for lumbar fusion surgery, divided into training (n = 192), internal validation (n = 83), and external test (n = 30) cohorts. A Vision Transformer 3D-based deep learning model was developed. LASSO regression was used for feature selection to establish a logistic regression model. ASDeg was defined as adjacent segment degeneration on radiological follow-up 6 months post-surgery. Fourteen machine learning algorithms were evaluated using ROC curves, and a combined model integrating clinical variables was developed. Results: After feature selection, 21 radiomic, 12 DLR, and 3 clinical features were retained. The linear support vector machine algorithm performed best for the radiomic model, and AdaBoost was optimal for the DLR model. A combined model using these and the clinical features was developed, with the multi-layer perceptron as the most effective algorithm. The areas under the curve for the training, internal validation, and external test cohorts were 0.993, 0.936, and 0.835, respectively. The combined model outperformed the combined predictions of two surgeons. Conclusions: This study developed and validated a combined model integrating clinical, DLR, and radiomic features, demonstrating high predictive performance for identifying patients at high risk of ASDeg after lumbar fusion based on clinical data, CT, and MRI. The model could potentially reduce ASDeg-related revision surgeries, thereby reducing the burden on public healthcare.
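The LASSO-select-then-classify pattern described in the Methods can be sketched with scikit-learn as below; the random feature matrix, penalty strength, and final classifier are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch of LASSO-style feature selection feeding a logistic regression,
# on a random stand-in for the concatenated radiomics/DLR/clinical features.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(192, 200))   # 192 training patients, 200 features
y = rng.integers(0, 2, size=192)  # ASDeg vs no ASDeg (synthetic labels)

model = make_pipeline(
    StandardScaler(),
    # L1-penalized logistic regression acts as the LASSO selector
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=1.0)),
    LogisticRegression(max_iter=1000),  # final classifier on selected features
)
model.fit(X, y)
print("selected features:",
      model.named_steps["selectfrommodel"].get_support().sum())
```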

Automated Vessel Occlusion Software in Acute Ischemic Stroke: Pearls and Pitfalls.

Aziz YN, Sriwastwa A, Nael K, Harker P, Mistry EA, Khatri P, Chatterjee AR, Heit JJ, Jadhav A, Yedavalli V, Vagal AS

PubMed · Jun 9, 2025
Software programs leveraging artificial intelligence to detect vessel occlusions are now widely available to aid in stroke triage. Because these programs are proprietary, there is surprisingly little information about how the software works, who is using it, and how it performs in unbiased, real-world settings. In this educational review of automated vessel occlusion software, we discuss emerging evidence of its utility, the underlying algorithms, real-world diagnostic performance, and limitations. The intended audience includes specialists in stroke care in neurology, emergency medicine, radiology, and neurosurgery. Practical tips for onboarding and utilization of this technology are provided based on the multidisciplinary experience of the authorship team.

A Dynamic Contrast-Enhanced MRI-Based Vision Transformer Model for Distinguishing HER2-Zero, -Low, and -Positive Expression in Breast Cancer and Exploring Model Interpretability.

Zhang X, Shen YY, Su GH, Guo Y, Zheng RC, Du SY, Chen SY, Xiao Y, Shao ZM, Zhang LN, Wang H, Jiang YZ, Gu YJ, You C

PubMed · Jun 9, 2025
Novel antibody-drug conjugates have highlighted the benefits of treatment for breast cancer patients with low human epidermal growth factor receptor 2 (HER2) expression. This study aims to develop and validate a Vision Transformer (ViT) model based on dynamic contrast-enhanced MRI (DCE-MRI) to classify HER2-zero, -low, and -positive breast cancer patients and to explore its interpretability. The model is trained and validated on early-enhancement MRI images from 708 patients in the FUSCC cohort and tested on 80 and 101 patients in the GFPH and FHCMU cohorts, respectively. The ViT model achieves AUCs of 0.80, 0.73, and 0.71 in distinguishing HER2-zero from HER2-low/positive tumors across the validation set of the FUSCC cohort and the two external cohorts. Furthermore, the model effectively classifies HER2-low and HER2-positive cases, with AUCs of 0.86, 0.80, and 0.79. Transcriptomic analysis identifies significant biological differences between HER2-low and HER2-positive patients, particularly in immune-related pathways, suggesting potential therapeutic targets. Additionally, Cox regression analysis demonstrates that the prediction score is an independent prognostic factor for overall survival (HR, 2.52; p = 0.007). These findings provide a non-invasive approach to accurately predicting HER2 expression, enabling more precise patient stratification to guide personalized treatment strategies. Further prospective studies are warranted to validate its clinical utility.
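The Cox regression step mentioned above follows a standard survival-analysis pattern. The hedged sketch below uses the lifelines library with entirely synthetic columns (prediction_score, age, os_months, event) standing in for the study's data.

```python
# Sketch of a Cox proportional-hazards analysis with lifelines; all data
# below are synthetic stand-ins, not the study's cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "prediction_score": rng.normal(size=200),    # ViT-derived score
    "age": rng.normal(55, 10, size=200),
    "os_months": rng.exponential(60, size=200),  # overall survival time
    "event": rng.integers(0, 2, size=200),       # 1 = death observed
})

cph = CoxPHFitter()
# All columns other than duration/event are treated as covariates.
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()  # reports hazard ratios (exp(coef)) and p-values
```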

Diagnostic and Technological Advances in Magnetic Resonance (Focusing on Imaging Technique and the Gadolinium-Based Contrast Media), Computed Tomography (Focusing on Photon Counting CT), and Ultrasound: State of the Art.

Runge VM, Heverhagen JT

PubMed · Jun 9, 2025
Magnetic resonance continues to evolve and advance as a critical imaging modality for disease diagnosis and monitoring. Hardware and software advances continue to propel this modality to the forefront of diagnostic imaging. Next-generation MR contrast media, specifically gadolinium chelates with improved relaxivity and stability (relative to the contrast effect provided), have emerged, providing a further boost to the field. Concern regarding gadolinium deposition in the body, primarily with the weaker gadolinium chelates (which have now been removed from the market, at least in Europe), remains at the forefront of clinicians' minds. This has driven renewed interest in the possible development of manganese-based contrast media. The development and clinical introduction of photon counting CT have made possible a further major advance in CT image quality, along with the potential for decreased radiation dose. The possibility of major clinical advances in thoracic, cardiac, and musculoskeletal imaging was recognized first, and its broader impact across all organ systems is now also recognized. The utility of routinely acquiring full spectral multi-energy data, without penalty in time or radiation dose, is now recognized as an additional major advance made possible by photon counting CT. Artificial intelligence is now being used in the background across most imaging platforms and modalities, enabling further advances in imaging technique and image quality, although this field is nowhere near realizing its full potential. Last but not least, the field of ultrasound is on the cusp of further major advances in availability (with the development of very low-cost systems) and a possible new generation of microbubble contrast media.

APTOS-2024 challenge report: Generation of synthetic 3D OCT images from fundus photographs

Bowen Liu, Weiyi Zhang, Peranut Chotcomwongse, Xiaolan Chen, Ruoyu Chen, Pawin Pakaymaskul, Niracha Arjkongharn, Nattaporn Vongsa, Xuelian Cheng, Zongyuan Ge, Kun Huang, Xiaohui Li, Yiru Duan, Zhenbang Wang, BaoYe Xie, Qiang Chen, Huazhu Fu, Michael A. Mahr, Jiaqi Qu, Wangyiyang Chen, Shiye Wang, Yubo Tan, Yongjie Li, Mingguang He, Danli Shi, Paisan Ruamviboonsuk

arXiv preprint · Jun 9, 2025
Optical Coherence Tomography (OCT) provides high-resolution, 3D, non-invasive visualization of retinal layers in vivo, serving as a critical tool for lesion localization and disease diagnosis. However, its widespread adoption is limited by equipment costs and the need for specialized operators. In comparison, 2D color fundus photography offers faster acquisition and greater accessibility, with less dependence on expensive devices. Although generative artificial intelligence has demonstrated promising results in medical image synthesis, translating 2D fundus images into 3D OCT images presents unique challenges due to inherent differences in data dimensionality and biological information between modalities. To advance generative models in the fundus-to-3D-OCT setting, the Asia Pacific Tele-Ophthalmology Society (APTOS-2024) organized a challenge titled Artificial Intelligence-based OCT Generation from Fundus Images. This paper details the challenge framework (referred to as the APTOS-2024 Challenge), including the benchmark dataset; the evaluation methodology, which features two fidelity metrics: image-based distance (pixel-level OCT B-scan similarity) and video-based distance (semantic-level volumetric consistency); and an analysis of top-performing solutions. The challenge attracted 342 participating teams, with 42 preliminary submissions and 9 finalists. Leading methodologies incorporated innovations in hybrid data preprocessing or augmentation (cross-modality collaborative paradigms), pre-training on external ophthalmic imaging datasets, integration of vision foundation models, and model architecture improvements. The APTOS-2024 Challenge is the first benchmark demonstrating the feasibility of fundus-to-3D-OCT synthesis as a potential solution for improving access to ophthalmic care in under-resourced healthcare settings, while helping to expedite medical research and clinical applications.
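The challenge's exact metric formulas are not reproduced here, but the sketch below conveys the two ideas: a pixel-level per-B-scan distance and a semantic, volume-level distance over pooled embeddings. Both function bodies are simplified assumptions, not the official evaluation code.

```python
# Hedged sketch of the two fidelity-metric ideas described above.
import numpy as np

def image_based_distance(gen_vol: np.ndarray, ref_vol: np.ndarray) -> float:
    """Mean per-B-scan pixel distance between generated and reference OCT."""
    # gen_vol, ref_vol: (n_bscans, H, W), matched slice for slice
    return float(np.mean([np.mean(np.abs(g - r))
                          for g, r in zip(gen_vol, ref_vol)]))

def video_based_distance(gen_feats: np.ndarray, ref_feats: np.ndarray) -> float:
    """Distance between volume-level embeddings, e.g. from a video encoder."""
    # gen_feats, ref_feats: (D,) pooled features of the full B-scan stack
    return float(np.linalg.norm(gen_feats - ref_feats))

gen = np.random.rand(128, 224, 224)  # synthetic generated volume
ref = np.random.rand(128, 224, 224)  # synthetic reference volume
print(image_based_distance(gen, ref))
```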

Transformer-based robotic ultrasound 3D tracking for capsule robot in GI tract.

Liu X, He C, Wu M, Ping A, Zavodni A, Matsuura N, Diller E

PubMed · Jun 9, 2025
Ultrasound (US) imaging is a promising modality for real-time monitoring of robotic capsule endoscopes navigating the gastrointestinal (GI) tract. It offers high temporal resolution and safety but is limited by a narrow field of view, low visibility in gas-filled regions, and difficulty detecting out-of-plane motion. This work addresses these issues by proposing a novel robotic ultrasound tracking system capable of long-distance 3D tracking and active re-localization when the capsule is lost due to motion or artifacts. We developed a hybrid deep learning-based tracking framework combining convolutional neural networks (CNNs) and a transformer backbone. The CNN component efficiently encodes spatial features, while the transformer captures long-range contextual dependencies in B-mode US images. The model is integrated with a robotic arm that adaptively scans and tracks the capsule. The system's performance was evaluated using ex vivo colon phantoms under varying imaging conditions, with physical perturbations introduced to simulate realistic clinical scenarios. The proposed system achieved continuous 3D tracking over distances exceeding 90 cm, with a mean centroid localization error of 1.5 mm and over 90% detection accuracy. We also demonstrated 3D tracking in a more complex workspace featuring two curved sections that simulate anatomical challenges, suggesting strong resilience of the tracking system to motion-induced artifacts and geometric variability. The system maintained real-time tracking at 9-12 FPS and successfully re-localized the capsule within seconds after tracking loss, even under gas artifacts and acoustic shadowing. This study presents a hybrid CNN-transformer system for automatic, real-time 3D ultrasound tracking of capsule robots over long distances. The method reliably handles occlusions, view loss, and image artifacts, offering millimeter-level tracking accuracy, and significantly reduces clinical workload through autonomous detection and re-localization. Future work includes improving probe-tissue interaction handling and validating performance in live animal and human trials to assess physiological impacts.
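A hybrid of the kind described, a CNN encoder feeding spatial tokens into a transformer, can be sketched in PyTorch as follows; the layer sizes, mean-pooling, and 3-coordinate regression head are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a CNN-encoder + transformer backbone for estimating a
# capsule's 3D centroid from a B-mode US frame. Sizes are illustrative.
import torch
import torch.nn as nn

class CNNTransformerTracker(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        # CNN encodes each frame into a grid of spatial tokens
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU())
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, 3)     # capsule centroid (x, y, z)

    def forward(self, frames):                # frames: (B, 1, H, W)
        tokens = self.cnn(frames).flatten(2).transpose(1, 2)  # (B, N, D)
        ctx = self.transformer(tokens)        # long-range context over tokens
        return self.head(ctx.mean(dim=1))     # pooled tokens -> 3D position

pos = CNNTransformerTracker()(torch.rand(2, 1, 128, 128))
print(pos.shape)  # torch.Size([2, 3])
```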

Transfer learning for accurate brain tumor classification in MRI: a step forward in medical diagnostics.

Khan MA, Hussain MZ, Mehmood S, Khan MF, Ahmad M, Mazhar T, Shahzad T, Saeed MM

PubMed · Jun 9, 2025
Brain tumor classification is critical for therapeutic applications that benefit from computer-aided diagnostics. Misdiagnosing a brain tumor can significantly reduce a patient's chances of survival, as it may lead to ineffective treatments. This study proposes a novel approach for classifying brain tumors in MRI images using transfer learning (TL) with state-of-the-art deep learning models: AlexNet, MobileNetV2, and GoogleNet. Unlike previous studies that often focus on a single model, our work comprehensively compares these architectures, fine-tuned specifically for brain tumor classification. We utilize a publicly available dataset of 4,517 MRI scans comprising three prevalent types of brain tumor, glioma (1,129 images), meningioma (1,134 images), and pituitary tumors (1,138 images), as well as 1,116 images of normal brains (no tumor). Our approach addresses key research gaps, including class imbalance (through data augmentation) and model efficiency (by leveraging lightweight architectures such as MobileNetV2). The GoogleNet model achieves the highest classification accuracy of 99.2%, outperforming previous studies using the same dataset. This demonstrates the potential of our approach to assist physicians in making rapid and precise decisions, thereby improving patient outcomes. The results highlight the effectiveness of TL in medical diagnostics and its potential for real-world clinical deployment. This study advances the field of brain tumor classification and provides a robust framework for future research in medical image analysis.
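The transfer-learning recipe described above, freezing a pretrained backbone and retraining a small classification head, might look like the following PyTorch sketch using torchvision's MobileNetV2. The frozen-backbone choice, learning rate, and random tensors are assumptions for illustration, not the paper's training setup.

```python
# Hedged sketch: fine-tuning a pretrained MobileNetV2 for 4-class brain
# tumor classification (glioma / meningioma / pituitary / no tumor).
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                  # freeze the ImageNet backbone
model.classifier[1] = nn.Linear(model.last_channel, 4)  # new 4-class head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.rand(8, 3, 224, 224)   # stand-in batch of preprocessed MRI slices
y = torch.randint(0, 4, (8,))    # stand-in labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```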