Page 90 of 159 · 1582 results

Developing a Deep Learning Radiomics Model Combining Lumbar CT, Multi-Sequence MRI, and Clinical Data to Predict High-Risk Adjacent Segment Degeneration Following Lumbar Fusion: A Retrospective Multicenter Study.

Zou C, Wang T, Wang B, Fei Q, Song H, Zang L

pubmed logopapers · Jun 9, 2025
Study design: Retrospective cohort study. Objectives: To develop and validate a model combining clinical data, deep learning radiomics (DLR), and radiomic features from lumbar CT and multi-sequence MRI to predict patients at high risk of adjacent segment degeneration (ASDeg) after lumbar fusion. Methods: This study included 305 patients undergoing preoperative CT and MRI for lumbar fusion surgery, divided into training (n = 192), internal validation (n = 83), and external test (n = 30) cohorts. A 3D Vision Transformer-based deep learning model was developed. LASSO regression was used for feature selection to establish a logistic regression model. ASDeg was defined as adjacent segment degeneration at radiological follow-up six months post-surgery. Fourteen machine learning algorithms were evaluated using ROC curves, and a combined model integrating clinical variables was developed. Results: After feature selection, 21 radiomic, 12 DLR, and 3 clinical features were retained. The linear support vector machine algorithm performed best for the radiomic model, and AdaBoost was optimal for the DLR model. A combined model using these features together with the clinical variables was developed, with the multi-layer perceptron as the most effective algorithm. The areas under the curve for the training, internal validation, and external test cohorts were 0.993, 0.936, and 0.835, respectively. The combined model outperformed the pooled predictions of two surgeons. Conclusions: This study developed and validated a combined model integrating clinical, DLR, and radiomic features, demonstrating high predictive performance for identifying patients at high risk of ASDeg after lumbar fusion from clinical data, CT, and MRI. The model could potentially reduce ASDeg-related revision surgeries and thereby the burden on public healthcare.
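
As a rough illustration of the feature-selection-plus-classifier pipeline this abstract describes, the sketch below uses a LASSO-style L1-penalized selector followed by logistic regression in scikit-learn, scored with ROC AUC on a held-out cohort. The feature matrices, column counts, and labels are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the pooled radiomic, DLR, and clinical features.
X_train = rng.normal(size=(192, 120))   # training cohort
y_train = rng.integers(0, 2, size=192)  # 1 = developed ASDeg
X_val = rng.normal(size=(83, 120))      # internal validation cohort
y_val = rng.integers(0, 2, size=83)

# L1-penalized logistic regression as a LASSO-style feature selector.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
)

# Final combined model: scale -> select features -> logistic regression.
model = make_pipeline(StandardScaler(), selector, LogisticRegression())
model.fit(X_train, y_train)

val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"internal validation AUC: {val_auc:.3f}")
```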

Automated Vessel Occlusion Software in Acute Ischemic Stroke: Pearls and Pitfalls.

Aziz YN, Sriwastwa A, Nael K, Harker P, Mistry EA, Khatri P, Chatterjee AR, Heit JJ, Jadhav A, Yedavalli V, Vagal AS

pubmed logopapers · Jun 9, 2025
Software programs leveraging artificial intelligence to detect vessel occlusions are now widely available to aid in stroke triage. Given their proprietary nature, there is surprisingly little information about how these programs work, who is using them, and how they perform in unbiased real-world settings. In this educational review of automated vessel occlusion software, we discuss emerging evidence of its utility, underlying algorithms, real-world diagnostic performance, and limitations. The intended audience includes stroke care specialists in neurology, emergency medicine, radiology, and neurosurgery. Practical tips for onboarding and using this technology are provided, based on the multidisciplinary experience of the authorship team.

A Dynamic Contrast-Enhanced MRI-Based Vision Transformer Model for Distinguishing HER2-Zero, -Low, and -Positive Expression in Breast Cancer and Exploring Model Interpretability.

Zhang X, Shen YY, Su GH, Guo Y, Zheng RC, Du SY, Chen SY, Xiao Y, Shao ZM, Zhang LN, Wang H, Jiang YZ, Gu YJ, You C

pubmed logopapers · Jun 9, 2025
Novel antibody-drug conjugates have highlighted the therapeutic benefit for breast cancer patients with low human epidermal growth factor receptor 2 (HER2) expression. This study aims to develop and validate a Vision Transformer (ViT) model based on dynamic contrast-enhanced MRI (DCE-MRI) to classify HER2-zero, -low, and -positive breast cancer patients and to explore its interpretability. The model is trained and validated on early-enhancement MRI images from 708 patients in the FUSCC cohort and tested on 80 and 101 patients in the GFPH cohort and FHCMU cohort, respectively. The ViT model achieves AUCs of 0.80, 0.73, and 0.71 in distinguishing HER2-zero from HER2-low/positive tumors across the validation set of the FUSCC cohort and the two external cohorts. Furthermore, the model effectively classifies HER2-low and HER2-positive cases, with AUCs of 0.86, 0.80, and 0.79. Transcriptomics analysis identifies significant biological differences between HER2-low and HER2-positive patients, particularly in immune-related pathways, suggesting potential therapeutic targets. Additionally, Cox regression analysis demonstrates that the prediction score is an independent prognostic factor for overall survival (HR, 2.52; p = 0.007). These findings provide a non-invasive approach for accurately predicting HER2 expression, enabling more precise patient stratification to guide personalized treatment strategies. Further prospective studies are warranted to validate its clinical utility.
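
The headline numbers here are pairwise AUCs for two binary tasks (HER2-zero vs. HER2-low/positive, then HER2-low vs. HER2-positive among the remainder). Below is a minimal sketch of how such a two-stage evaluation might be computed from model scores; the label and score arrays are synthetic placeholders, not the study's outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical per-patient HER2 labels: 0 = zero, 1 = low, 2 = positive.
labels = rng.integers(0, 3, size=101)
# Hypothetical model scores for the two stages of the cascade.
score_zero_vs_rest = rng.random(101)  # P(HER2-low or HER2-positive)
score_low_vs_pos = rng.random(101)    # P(HER2-positive), among non-zero

# Stage 1: HER2-zero vs. HER2-low/positive over all patients.
auc_stage1 = roc_auc_score((labels > 0).astype(int), score_zero_vs_rest)

# Stage 2: HER2-low vs. HER2-positive, restricted to non-zero patients.
mask = labels > 0
auc_stage2 = roc_auc_score((labels[mask] == 2).astype(int),
                           score_low_vs_pos[mask])

print(f"zero vs low/positive AUC: {auc_stage1:.2f}")
print(f"low vs positive AUC:     {auc_stage2:.2f}")
```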

Diagnostic and Technological Advances in Magnetic Resonance (Focusing on Imaging Technique and the Gadolinium-Based Contrast Media), Computed Tomography (Focusing on Photon Counting CT), and Ultrasound: State of the Art.

Runge VM, Heverhagen JT

pubmed logopapers · Jun 9, 2025
Magnetic resonance continues to evolve and advance as a critical imaging modality for disease diagnosis and monitoring. Hardware and software advances continue to propel this modality to the forefront of diagnostic imaging. Next-generation MR contrast media, specifically gadolinium chelates with improved relaxivity and stability (relative to the contrast effect provided), have emerged, providing a further boost to the field. Concern regarding gadolinium deposition in the body, primarily with the weaker gadolinium chelates (which have now been removed from the market, at least in Europe), remains at the forefront of clinicians' minds. This has driven renewed interest in the possible development of manganese-based contrast media. The development of photon counting CT and its clinical introduction have made possible a further major advance in CT image quality, along with the potential for decreased radiation dose. The possibility of major clinical advances in thoracic, cardiac, and musculoskeletal imaging was recognized first, and the technology's broader impact across all organ systems is now recognized as well. The utility of routinely acquiring full spectral multi-energy data, without penalty in time or radiation dose, is also now recognized as an additional major advance made possible by photon counting CT. Artificial intelligence is now used in the background across most imaging platforms and modalities, enabling further advances in imaging technique and image quality, although this field is nowhere near realizing its full potential. And last, but not least, the field of ultrasound is on the cusp of further major advances in availability (with the development of very low-cost systems) and a possible new generation of microbubble contrast media.

APTOS-2024 challenge report: Generation of synthetic 3D OCT images from fundus photographs

Bowen Liu, Weiyi Zhang, Peranut Chotcomwongse, Xiaolan Chen, Ruoyu Chen, Pawin Pakaymaskul, Niracha Arjkongharn, Nattaporn Vongsa, Xuelian Cheng, Zongyuan Ge, Kun Huang, Xiaohui Li, Yiru Duan, Zhenbang Wang, BaoYe Xie, Qiang Chen, Huazhu Fu, Michael A. Mahr, Jiaqi Qu, Wangyiyang Chen, Shiye Wang, Yubo Tan, Yongjie Li, Mingguang He, Danli Shi, Paisan Ruamviboonsuk

arxiv logopreprint · Jun 9, 2025
Optical Coherence Tomography (OCT) provides high-resolution, 3D, and non-invasive visualization of retinal layers in vivo, serving as a critical tool for lesion localization and disease diagnosis. However, its widespread adoption is limited by equipment costs and the need for specialized operators. In comparison, 2D color fundus photography offers faster acquisition and greater accessibility with less dependence on expensive devices. Although generative artificial intelligence has demonstrated promising results in medical image synthesis, translating 2D fundus images into 3D OCT images presents unique challenges due to inherent differences in data dimensionality and biological information between modalities. To advance generative models in the fundus-to-3D-OCT setting, the Asia Pacific Tele-Ophthalmology Society (APTOS-2024) organized a challenge titled Artificial Intelligence-based OCT Generation from Fundus Images. This paper details the challenge framework (referred to as the APTOS-2024 Challenge), including the benchmark dataset; the evaluation methodology, which features two fidelity metrics, an image-based distance (pixel-level OCT B-scan similarity) and a video-based distance (semantic-level volumetric consistency); and an analysis of top-performing solutions. The challenge attracted 342 participating teams, with 42 preliminary submissions and 9 finalists. Leading methodologies incorporated innovations in hybrid data preprocessing or augmentation (cross-modality collaborative paradigms), pre-training on external ophthalmic imaging datasets, integration of vision foundation models, and model architecture improvements. The APTOS-2024 Challenge is the first benchmark demonstrating the feasibility of fundus-to-3D-OCT synthesis as a potential solution for improving ophthalmic care accessibility in under-resourced healthcare settings, while helping to expedite medical research and clinical applications.
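
The challenge's two fidelity metrics are described only at a high level in this abstract, so the snippet below is a loose sketch of the pixel-level, per-B-scan flavor of comparison (a mean absolute distance averaged over slices), not the organizers' actual scoring code; the function name and volume shapes are illustrative assumptions.

```python
import numpy as np

def bscan_pixel_distance(gen_vol: np.ndarray, ref_vol: np.ndarray) -> float:
    """Mean per-B-scan pixel distance between two OCT volumes.

    Both volumes are (num_bscans, height, width) arrays scaled to [0, 1].
    This is a stand-in for the challenge's image-based metric, which the
    abstract specifies only loosely.
    """
    assert gen_vol.shape == ref_vol.shape
    per_slice = np.abs(gen_vol - ref_vol).mean(axis=(1, 2))  # one value per B-scan
    return float(per_slice.mean())

# Toy volumes: 64 B-scans of 256x256 pixels each.
rng = np.random.default_rng(2)
generated = rng.random((64, 256, 256))
reference = rng.random((64, 256, 256))
print(f"image-based distance: {bscan_pixel_distance(generated, reference):.4f}")
```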

Transformer-based robotic ultrasound 3D tracking for capsule robot in GI tract.

Liu X, He C, Wu M, Ping A, Zavodni A, Matsuura N, Diller E

pubmed logopapers · Jun 9, 2025
Ultrasound (US) imaging is a promising modality for real-time monitoring of robotic capsule endoscopes navigating through the gastrointestinal (GI) tract. It offers high temporal resolution and safety but is limited by a narrow field of view, low visibility in gas-filled regions and challenges in detecting out-of-plane motions. This work addresses these issues by proposing a novel robotic ultrasound tracking system capable of long-distance 3D tracking and active re-localization when the capsule is lost due to motion or artifacts. We develop a hybrid deep learning-based tracking framework combining convolutional neural networks (CNNs) and a transformer backbone. The CNN component efficiently encodes spatial features, while the transformer captures long-range contextual dependencies in B-mode US images. This model is integrated with a robotic arm that adaptively scans and tracks the capsule. The system's performance is evaluated using ex vivo colon phantoms under varying imaging conditions, with physical perturbations introduced to simulate realistic clinical scenarios. The proposed system achieved continuous 3D tracking over distances exceeding 90 cm, with a mean centroid localization error of 1.5 mm and over 90% detection accuracy. We demonstrated 3D tracking in a more complex workspace featuring two curved sections to simulate anatomical challenges. This suggests the strong resilience of the tracking system to motion-induced artifacts and geometric variability. The system maintained real-time tracking at 9-12 FPS and successfully re-localized the capsule within seconds after tracking loss, even under gas artifacts and acoustic shadowing. This study presents a hybrid CNN-transformer system for automatic, real-time 3D ultrasound tracking of capsule robots over long distances. The method reliably handles occlusions, view loss and image artifacts, offering millimeter-level tracking accuracy. It significantly reduces clinical workload through autonomous detection and re-localization. Future work includes improving probe-tissue interaction handling and validating performance in live animal and human trials to assess physiological impacts.
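
To make the hybrid architecture concrete, here is a toy PyTorch sketch of the general CNN-plus-transformer pattern the abstract describes: a CNN encodes local spatial features of a B-mode frame, a transformer encoder mixes long-range context, and a head regresses a 3D centroid. All layer sizes and the pooling scheme are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class CNNTransformerTracker(nn.Module):
    """Toy CNN + transformer regressor for a capsule centroid."""

    def __init__(self, d_model: int = 128):
        super().__init__()
        # CNN stem: encode local spatial features of a grayscale US frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: long-range context across feature-map tokens.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 3)  # (x, y, z) centroid

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(frames)                   # (B, C, H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C)
        mixed = self.transformer(tokens)
        return self.head(mixed.mean(dim=1))        # pooled -> 3D centroid

model = CNNTransformerTracker()
batch = torch.randn(2, 1, 128, 128)  # two grayscale B-mode frames
print(model(batch).shape)  # torch.Size([2, 3])
```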

Transfer learning for accurate brain tumor classification in MRI: a step forward in medical diagnostics.

Khan MA, Hussain MZ, Mehmood S, Khan MF, Ahmad M, Mazhar T, Shahzad T, Saeed MM

pubmed logopapers · Jun 9, 2025
Brain tumor classification is critical for therapeutic applications that benefit from computer-aided diagnostics. Misdiagnosing a brain tumor can significantly reduce a patient's chances of survival, as it may lead to ineffective treatment. This study proposes a novel approach for classifying brain tumors in MRI images using Transfer Learning (TL) with state-of-the-art deep learning models: AlexNet, MobileNetV2, and GoogleNet. Unlike previous studies that often focus on a single model, our work comprehensively compares these architectures, fine-tuned specifically for brain tumor classification. We utilize a publicly available dataset of 4,517 MRI scans comprising three prevalent types of brain tumor, glioma (1,129 images), meningioma (1,134 images), and pituitary tumors (1,138 images), as well as 1,116 images of normal brains (no tumor). Our approach addresses key research gaps, handling class imbalance through data augmentation and improving model efficiency by leveraging lightweight architectures such as MobileNetV2. The GoogleNet model achieves the highest classification accuracy of 99.2%, outperforming previous studies using the same dataset. This demonstrates the potential of our approach to assist physicians in making rapid and precise decisions, thereby improving patient outcomes. The results highlight the effectiveness of TL in medical diagnostics and its potential for real-world clinical deployment. This study advances the field of brain tumor classification and provides a robust framework for future research in medical image analysis.
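
A minimal transfer-learning sketch in the spirit of this study appears below: load an ImageNet-pretrained MobileNetV2 from torchvision, freeze the feature extractor, and swap in a four-class head (glioma, meningioma, pituitary, no tumor). The hyperparameters, freezing strategy, and dummy batch are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained MobileNetV2 backbone.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Optionally freeze the pretrained feature extractor for early epochs.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the ImageNet classifier head with a 4-class head.
model.classifier[1] = nn.Linear(model.last_channel, 4)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of MRI slices
# (grayscale replicated to 3 channels to match the pretrained stem).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.3f}")
```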

optiGAN: A Deep Learning-Based Alternative to Optical Photon Tracking in Python-Based GATE (10+).

Mummaneni G, Trigila C, Krah N, Sarrut D, Roncali E

pubmed logopapers · Jun 9, 2025
Objective: To accelerate optical photon transport simulations in the GATE medical physics framework using a generative adversarial network (GAN) while ensuring high modeling accuracy. Traditionally, detailed optical Monte Carlo methods have been the gold standard for modeling photon interactions in detectors, but their high computational cost remains a challenge. This study explores the integration of optiGAN, a GAN-based model, into GATE 10, the new Python-based version of the GATE medical physics simulation framework released in November 2024.
Approach: The optiGAN model was integrated into GATE 10 as a computationally efficient alternative to traditional optical Monte Carlo simulations. To ensure consistency, optical photon transport modules were implemented in GATE 10 and validated against GATE v9.3 under identical simulation conditions. Subsequently, simulations using full Monte Carlo tracking in GATE 10 were compared to those using GATE 10-optiGAN.
Main results: Validation studies confirmed that GATE 10 produces results consistent with GATE v9.3. Simulations using GATE 10-optiGAN showed over 92% similarity to Monte Carlo-based GATE 10 results, based on the Jensen-Shannon distance across multiple photon transport parameters. optiGAN successfully captured multimodal distributions of photon position, direction, and energy at the photodetector face. Simulation time analysis revealed a reduction of approximately 50% in execution time with GATE 10-optiGAN compared to full Monte Carlo simulations.
Significance: The study confirms both the fidelity of optical photon transport modeling in GATE 10 and the effective integration of deep learning-based acceleration through optiGAN. This advancement enables large-scale, high-fidelity optical simulations with significantly reduced computational cost, supporting broader applications in medical imaging and detector design.
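
The abstract's similarity figure rests on the Jensen-Shannon distance between photon-transport parameter distributions. As a loose sketch of that kind of comparison, the snippet below histograms a single parameter from two sources and computes the distance with SciPy; the synthetic samples and bin choices are placeholders, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Synthetic stand-ins for one photon-transport parameter (e.g., arrival
# position at the photodetector face) from the two simulation paths.
rng = np.random.default_rng(3)
mc_samples = rng.normal(loc=0.0, scale=1.0, size=100_000)     # "full Monte Carlo"
gan_samples = rng.normal(loc=0.05, scale=1.02, size=100_000)  # "optiGAN surrogate"

# Histogram both sample sets on a shared binning.
bins = np.linspace(-5, 5, 101)
p, _ = np.histogram(mc_samples, bins=bins, density=True)
q, _ = np.histogram(gan_samples, bins=bins, density=True)

# Jensen-Shannon distance: 0 means identical distributions.
jsd = jensenshannon(p, q)
print(f"Jensen-Shannon distance: {jsd:.4f}")
```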

Snap-and-tune: combining deep learning and test-time optimization for high-fidelity cardiovascular volumetric meshing

Daniel H. Pak, Shubh Thaker, Kyle Baylous, Xiaoran Zhang, Danny Bluestein, James S. Duncan

arxiv logopreprint · Jun 9, 2025
High-quality volumetric meshing from medical images is a key bottleneck for physics-based simulations in personalized medicine. For volumetric meshing of complex medical structures, recent studies have often utilized deep learning (DL)-based template deformation approaches to enable fast test-time generation with high spatial accuracy. However, these approaches still exhibit limitations, such as limited flexibility at high-curvature areas and unrealistic inter-part distances. In this study, we introduce a simple yet effective snap-and-tune strategy that sequentially applies DL and test-time optimization, which combines fast initial shape fitting with more detailed sample-specific mesh corrections. Our method provides significant improvements in both spatial accuracy and mesh quality, while being fully automated and requiring no additional training labels. Finally, we demonstrate the versatility and usefulness of our newly generated meshes via solid mechanics simulations in two different software platforms. Our code is available at https://github.com/danpak94/Deep-Cardiac-Volumetric-Mesh.
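
The "tune" half of snap-and-tune is a per-sample, test-time optimization of an already-fitted mesh. The toy sketch below shows the general pattern, gradient descent on mesh vertices against a data-fit term plus a smoothness regularizer; the loss terms, weights, and random placeholders are assumptions for illustration, and the authors' actual objective lives in the linked repository.

```python
import torch

# "Snap": assume a DL model has produced initial vertex positions for a
# template mesh (random placeholders here) plus a target point set
# sampled from the image segmentation.
verts = torch.randn(500, 3, requires_grad=True)   # initial mesh vertices
target = torch.randn(500, 3)                      # hypothetical target points

optimizer = torch.optim.Adam([verts], lr=1e-2)

# "Tune": a few test-time optimization steps on a sample-specific loss.
for step in range(200):
    optimizer.zero_grad()
    data_fit = ((verts - target) ** 2).sum(dim=1).mean()  # crude fit term
    # Crude smoothness term: consecutive vertices should move coherently.
    smoothness = ((verts[1:] - verts[:-1]) ** 2).sum(dim=1).mean()
    loss = data_fit + 0.1 * smoothness
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```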

Advancing respiratory disease diagnosis: A deep learning and vision transformer-based approach with a novel X-ray dataset.

Alghadhban A, Ramadan RA, Alazmi M

pubmed logopapers · Jun 9, 2025
With the increasing prevalence of respiratory diseases such as pneumonia and COVID-19, timely and accurate diagnosis is critical. This paper makes significant contributions to the field of respiratory disease classification by utilizing X-ray images and advanced machine learning techniques such as deep learning (DL) and Vision Transformers (ViT). First, the paper systematically reviews current diagnostic methodologies, analyzing recent advancements in DL and ViT techniques through a comprehensive analysis of review articles published between 2017 and 2024, excluding short reviews and overviews. The review not only synthesizes existing knowledge but also identifies critical gaps in the field, notably the lack of comprehensive and diverse datasets for training machine learning models. To address these limitations, the paper extensively evaluates DL-based models on publicly available datasets, analyzing key performance metrics such as accuracy, precision, recall, and F1-score. Our evaluations reveal that current datasets are mostly limited to narrow subsets of pulmonary diseases, which can lead to overfitting, poor generalization, and reduced applicability of advanced machine learning techniques in real-world settings; DL and ViT models in particular require extensive data for effective learning. The primary contribution of this paper is therefore not only a review of the most recent articles and surveys on respiratory diseases and DL models, including ViT, but also a novel, diverse dataset comprising 7867 X-ray images from 5263 patients across three local hospitals, covering 49 distinct pulmonary diseases. The dataset is expected to enhance DL and ViT model training and improve the generalization of those models across varied real-world medical imaging scenarios. By addressing the data scarcity issue, this paper paves the way for more reliable and robust disease classification, improving clinical decision-making. The article also highlights critical challenges that still need to be addressed, such as dataset bias and variations in X-ray image quality, as well as the need for further clinical validation. Furthermore, the study underscores the critical role of DL in medical diagnosis and the necessity of comprehensive, well-annotated datasets for improving model robustness and clinical reliability. Through these contributions, the paper provides a foundation for future research on respiratory disease diagnosis using AI-driven methodologies. Although it aims to cover all work published between 2017 and 2024, this research has limitations: foundational work may predate the review period, and the rapid development of AI may make earlier methods less relevant.
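
The review's evaluation protocol centers on four headline metrics over multi-class predictions. A small sketch of computing them with scikit-learn follows; the label arrays are toy values, whereas in the paper each class would be one of the 49 pulmonary diseases.

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Toy ground-truth and predicted class labels for a 4-class subset.
y_true = [0, 1, 2, 2, 1, 0, 3, 3, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 3, 2, 2, 1]

# Macro averaging weights every class equally, which matters for the
# imbalanced disease distributions the review highlights.
print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"precision: {precision_score(y_true, y_pred, average='macro'):.2f}")
print(f"recall:    {recall_score(y_true, y_pred, average='macro'):.2f}")
print(f"f1-score:  {f1_score(y_true, y_pred, average='macro'):.2f}")
```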