
Enhancing Lesion Detection in Inflammatory Myelopathies: A Deep Learning-Reconstructed Double Inversion Recovery MRI Approach.

Fang Q, Yang Q, Wang B, Wen B, Xu G, He J

PubMed · Jun 3, 2025
The imaging of inflammatory myelopathies has advanced significantly over time, with MRI techniques playing a pivotal role in enhancing lesion detection. However, the impact of deep learning (DL)-based reconstruction on 3D double inversion recovery (DIR) imaging for inflammatory myelopathies remains unassessed. This study aimed to compare the acquisition time, image quality, diagnostic confidence, and lesion detection rates among sagittal T2WI, standard DIR, and DL-reconstructed DIR in patients with inflammatory myelopathies. In this observational study, patients diagnosed with inflammatory myelopathies were recruited between June 2023 and March 2024. Each patient underwent sagittal conventional TSE sequences and standard 3D DIR (T2WI and standard 3D DIR served as references for comparison), followed by an undersampled, accelerated deep learning double inversion recovery (DIR-DL) examination. Three neuroradiologists evaluated the images using a 4-point Likert scale (from 1 to 4) for overall image quality, perceived SNR, sharpness, artifacts, and diagnostic confidence. Acquisition times and lesion detection rates were also compared among the acquisition protocols. A total of 149 participants were evaluated (mean age, 40.6 [SD, 16.8] years; 71 women). The median acquisition time for DIR-DL was significantly lower than for standard DIR (151 seconds [interquartile range, 148-155 seconds] versus 298 seconds [interquartile range, 288-301 seconds]; P < .001), a 49% time reduction. DIR-DL images scored higher in overall quality, perceived SNR, and artifact noise reduction (all P < .001). There were no significant differences in sharpness (P = .07) or diagnostic confidence (P = .06) between the standard DIR and DIR-DL protocols. Additionally, DIR-DL detected 37% more lesions than T2WI (300 versus 219; P < .001). DIR-DL significantly reduces acquisition time and improves image quality compared with standard DIR, without compromising diagnostic confidence. It also enhances lesion detection in patients with inflammatory myelopathies, making it a valuable tool in clinical practice. These findings underscore the potential for incorporating DIR-DL into future imaging guidelines.
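The headline percentages follow directly from the reported times and counts; a quick check (values copied from the abstract):

```python
# Verify the reported effect sizes from the abstract.
std_dir_time, dl_dir_time = 298, 151            # median acquisition times (s)
reduction = (std_dir_time - dl_dir_time) / std_dir_time
print(f"Time reduction: {reduction:.0%}")       # -> 49%

t2wi_lesions, dl_lesions = 219, 300             # lesions detected
gain = (dl_lesions - t2wi_lesions) / t2wi_lesions
print(f"Lesion detection gain: {gain:.0%}")     # -> 37%
```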

Open-PMC-18M: A High-Fidelity Large Scale Medical Dataset for Multimodal Representation Learning

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arXiv preprint · Jun 3, 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: How does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures, which achieves state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study into biomedical vision-language modeling and representation learning.
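As a concrete illustration of the detection step such a pipeline performs, the sketch below runs a generic pretrained DETR over a compound figure and crops the detected panels. The checkpoint and file names are stand-ins; the paper's own detector and weights are not specified here.

```python
# Illustrative subfigure detection on a compound figure with a generic
# pretrained DETR (a stand-in, not the paper's trained model).
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("compound_figure.png").convert("RGB")  # hypothetical input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold and crop each candidate panel.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.7
)[0]
for box in detections["boxes"].tolist():
    x0, y0, x1, y1 = map(int, box)
    subfigure = image.crop((x0, y0, x1, y1))  # one panel of the composite
```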

Deep Learning-Based Opportunistic CT Osteoporosis Screening and Establishment of Normative Values

Westerhoff, M., Gyftopoulos, S., Dane, B., Vega, E., Murdock, D., Lindow, N., Herter, F., Bousabarah, K., Recht, M. P., Bredella, M. A.

medRxiv preprint · Jun 3, 2025
Background: Osteoporosis is underdiagnosed and undertreated, prompting the exploration of opportunistic screening using CT and artificial intelligence (AI). Purpose: To develop a reproducible deep learning-based convolutional neural network to automatically place a 3D region of interest (ROI) in trabecular bone, develop a correction method to normalize attenuation across different CT protocols and scanner models, and establish thresholds for osteoporosis in a large, diverse population. Methods: A deep learning-based method was developed to automatically quantify trabecular attenuation using a 3D ROI of the thoracic and lumbar spine on chest, abdomen, or spine CTs, adjusted for different tube voltages and scanner models. Normative values and thresholds for osteoporosis based on trabecular attenuation of the spine were established across a diverse population, stratified by age, sex, race, and ethnicity, using the prevalence of osteoporosis reported by the WHO. Results: 538,946 CT examinations from 283,499 patients (mean age, 65 ± 15 years; 51.2% women; 55.5% White), performed on 50 scanner models using six different tube voltages, were analyzed. Hounsfield units at 80 kVp versus 120 kVp differed by 23%, whereas different scanner models produced differences of less than 10%. Automated ROI placement in 1496 vertebrae was validated by manual radiologist review, demonstrating greater than 99% agreement. Mean trabecular attenuation was higher in young women (<50 years) than in young men (p < .001) and decreased with age, with a steeper decline in postmenopausal women. In patients older than 50 years, trabecular attenuation was higher in males than in females (p < .001). Trabecular attenuation was highest in Black patients, followed by Asian patients, and lowest in White patients (p < .001). The threshold for L1 in diagnosing osteoporosis was 80 HU. Conclusion: Deep learning-based automated opportunistic osteoporosis screening can identify patients with low bone mineral density who undergo CT scans for clinical purposes on different scanners and protocols. Key Results: (1) In a study of 538,946 CT examinations from 283,499 patients using different scanner models and imaging protocols, an automated deep learning-based convolutional neural network was able to accurately place a three-dimensional region of interest within thoracic and lumbar vertebrae to measure trabecular attenuation. (2) Tube voltage had a larger influence on attenuation values (23%) than scanner model (<10%). (3) A threshold of 80 HU was identified for L1 to diagnose osteoporosis using an automated three-dimensional region of interest.
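A minimal sketch of the screening logic described above: normalize trabecular attenuation to a reference tube voltage, then apply the reported 80 HU threshold at L1. The correction factors are hypothetical placeholders; the study reports only that values at 80 kVp and 120 kVp differ by about 23%.

```python
# Hedged sketch of opportunistic screening: kVp normalization + L1 threshold.
REFERENCE_KVP = 120
KVP_CORRECTION = {80: 1 / 1.23, 100: 0.90, 120: 1.00, 140: 1.05}  # hypothetical
L1_OSTEOPOROSIS_THRESHOLD_HU = 80  # threshold reported in the study

def normalized_attenuation(mean_hu: float, tube_voltage_kvp: int) -> float:
    """Map a measured trabecular HU value to the 120 kVp reference scale."""
    return mean_hu * KVP_CORRECTION[tube_voltage_kvp]

def screen_l1(mean_hu: float, tube_voltage_kvp: int) -> bool:
    """Flag a scan as screen-positive for osteoporosis at L1."""
    return normalized_attenuation(mean_hu, tube_voltage_kvp) < L1_OSTEOPOROSIS_THRESHOLD_HU

print(screen_l1(mean_hu=95.0, tube_voltage_kvp=80))  # True: 95 / 1.23 ≈ 77 HU
```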

MRI super-resolution reconstruction using efficient diffusion probabilistic model with residual shifting.

Safari M, Wang S, Eidex Z, Li Q, Qiu RLJ, Middlebrooks EH, Yu DS, Yang X

PubMed · Jun 3, 2025
Magnetic resonance imaging (MRI) is essential in clinical and research contexts, providing exceptional soft-tissue contrast. However, prolonged acquisition times often lead to patient discomfort and motion artifacts. Diffusion-based deep learning super-resolution (SR) techniques reconstruct high-resolution (HR) images from low-resolution (LR) pairs, but they involve extensive sampling steps, limiting real-time application. To overcome these issues, this study introduces a residual error-shifting mechanism that markedly reduces sampling steps while maintaining vital anatomical details, thereby accelerating MRI reconstruction. We developed Res-SRDiff, a novel diffusion-based SR framework incorporating residual error shifting into the forward diffusion process. This integration aligns the degraded HR and LR distributions, enabling efficient HR image reconstruction. We evaluated Res-SRDiff using ultra-high-field brain T1 MP2RAGE maps and T2-weighted prostate images, benchmarking it against Bicubic, Pix2pix, CycleGAN, SPSR, I2SB, and TM-DDPM methods. Quantitative assessments employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), gradient magnitude similarity deviation (GMSD), and learned perceptual image patch similarity (LPIPS). Additionally, we qualitatively and quantitatively assessed the framework's individual components through an ablation study and conducted a Likert-based image quality evaluation. Res-SRDiff significantly surpassed most comparison methods in PSNR, SSIM, and GMSD for both datasets, with statistically significant improvements (p-values ≪ 0.05). The model achieved high-fidelity image reconstruction using only four sampling steps, drastically reducing computation time to under one second per slice. In contrast, traditional methods such as TM-DDPM and I2SB required approximately 20 and 38 seconds per slice, respectively. Qualitative analysis showed that Res-SRDiff effectively preserved fine anatomical details and lesion morphologies. The Likert study indicated that our method received the highest scores: 4.14 ± 0.77 (brain) and 4.80 ± 0.40 (prostate). Res-SRDiff demonstrates efficiency and accuracy, markedly improving computational speed and image quality. Incorporating residual error shifting into diffusion-based SR facilitates rapid, robust HR image reconstruction, enhancing clinical MRI workflow and advancing medical imaging research. Code available at https://github.com/mosaf/Res-SRDiff.
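For intuition, here is a minimal sketch of a residual-shifting forward step, assuming a ResShift-style formulation in which the diffusion state drifts from the HR image toward the LR input rather than toward pure noise; the schedule values and noise scale below are illustrative, not the paper's.

```python
# Sketch of a residual-shifting forward diffusion step (ResShift-style
# formulation assumed): x_t ~ N(x0 + eta_t * (y - x0), kappa^2 * eta_t * I).
import torch

def residual_shift_forward(x0, y, eta_t, kappa=2.0):
    """Sample x_t given HR image x0 and upsampled LR image y.

    eta_t is a monotone shifting schedule in [0, 1]: eta ~ 0 keeps x_t near
    the HR image; eta ~ 1 centers x_t on the LR input, so the reverse process
    starts from y plus noise instead of pure Gaussian noise -- this is what
    allows reconstruction in only a few sampling steps.
    """
    residual = y - x0                      # the LR-HR residual being shifted in
    mean = x0 + eta_t * residual
    noise = torch.randn_like(x0)
    return mean + kappa * (eta_t ** 0.5) * noise
```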

Deep learning-based automatic segmentation of arterial vessel walls and plaques in MR vessel wall images for quantitative assessment.

Yang L, Yang X, Gong Z, Mao Y, Lu SS, Zhu C, Wan L, Huang J, Mohd Noor MH, Wu K, Li C, Cheng G, Li Y, Liang D, Liu X, Zheng H, Hu Z, Zhang N

PubMed · Jun 3, 2025
To develop and validate a deep-learning-based automatic method for segmenting vessel walls and atherosclerotic plaques in MR vessel wall images for quantitative evaluation. A total of 193 patients with atherosclerotic plaque from five centers (107 for training and validation, 39 for internal testing, 47 for external testing) underwent T1-weighted MRI scans and were included in the dataset. The first step of the proposed method was to construct a purely learning-based convolutional neural network (CNN), named Vessel-SegNet, to segment the lumen and the vessel wall. The second step was to use vessel wall priors (including a manual prior and a Tversky-loss-based automatic prior) to improve plaque segmentation, exploiting the morphological similarity between the vessel wall and the plaque. The Dice similarity coefficient (DSC), intraclass correlation coefficient (ICC), and related metrics were used to evaluate similarity, agreement, and correlation. Most DSCs for lumen and vessel wall segmentation were above 90%. Introducing vessel wall priors increased the DSC for plaque segmentation by over 10%, reaching 88.45%. Compared with Dice-loss-based priors, Tversky-loss-based priors further improved the DSC by nearly 3%, reaching 82.84%. Most ICC values between Vessel-SegNet and the manual method across the six quantitative measurements were greater than 85% (p-value < 0.001). The proposed CNN-based segmentation model can quickly and accurately segment vessel walls and plaques for quantitative evaluation. Because testing on other equipment, populations, and anatomies is lacking, the reliability of the results requires further exploration. Question: How can the accuracy and efficiency of vessel component segmentation for quantification, including the lumen, vessel wall, and plaque, be improved? Findings: Improved CNN models, manual/automatic vessel wall priors, and the Tversky loss improve the performance of semi-automatic/automatic segmentation of vessel components for quantification. Clinical relevance: Manual segmentation of vessel components is a time-consuming yet important process. Rapid and accurate segmentation of the lumen, vessel walls, and plaques for quantitative assessment helps patients obtain more accurate, efficient, and timely stroke risk assessments and clinical recommendations.
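A minimal sketch of the Tversky loss underlying the automatic prior; alpha and beta trade off false positives against false negatives, and the values below are illustrative defaults rather than the paper's settings.

```python
# Tversky loss sketch: generalizes Dice by asymmetrically weighting errors.
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """pred: predicted probabilities in [0, 1]; target: binary mask.

    With alpha = beta = 0.5 this reduces to the Dice (F1) loss; beta > alpha
    penalizes false negatives more, favoring recall of thin structures such
    as vessel walls.
    """
    pred, target = pred.flatten(), target.flatten()
    tp = (pred * target).sum()            # soft true positives
    fp = (pred * (1 - target)).sum()      # soft false positives
    fn = ((1 - pred) * target).sum()      # soft false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index
```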

PARADIM: A Platform to Support Research at the Interface of Data Science and Medical Imaging.

Lemaréchal Y, Couture G, Pelletier F, Lefol R, Asselin PL, Ouellet S, Bernard J, Ebrahimpour L, Manem VSK, Topalis J, Schachtner B, Jodogne S, Joubert P, Jeblick K, Ingrisch M, Després P

PubMed · Jun 3, 2025
This paper describes PARADIM, a digital infrastructure designed to support research at the interface of data science and medical imaging, with a focus on Research Data Management best practices. The platform is built from open-source components and rooted in the FAIR principles through strict compliance with the DICOM standard. It addresses key needs in data curation, governance, privacy, and scalable resource management. Supporting every stage of the data science discovery cycle, the platform offers robust functionalities for user identity and access management, data de-identification, storage, annotation, as well as model training and evaluation. Rich metadata are generated throughout the research lifecycle to ensure the traceability and reproducibility of results. PARADIM hosts several medical image collections and allows the automation of large-scale, computationally intensive pipelines (e.g., automatic segmentation, dose calculations, AI model evaluation). The platform fills a gap at the interface of data science and medical imaging, where digital infrastructures are key to the development, evaluation, and deployment of innovative solutions in the real world.
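As an illustration of the de-identification step such a platform automates, the sketch below blanks a minimal, hypothetical subset of identifying DICOM attributes with pydicom; it is a sketch of the general technique, not PARADIM's actual implementation.

```python
# Hedged sketch of DICOM de-identification prior to research use. The tag
# list is a small hypothetical subset of the DICOM confidentiality profile.
import pydicom

IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                    "ReferringPhysicianName", "InstitutionName"]

def deidentify(path_in: str, path_out: str, pseudonym: str) -> None:
    ds = pydicom.dcmread(path_in)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank direct identifiers
    ds.PatientID = pseudonym                  # stable pseudonym for linkage
    ds.remove_private_tags()                  # private tags may carry PHI
    ds.save_as(path_out)

deidentify("study/ct_0001.dcm", "curated/ct_0001.dcm", pseudonym="SUBJ-0001")
```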

Evaluating the Diagnostic Accuracy of ChatGPT-4.0 for Classifying Multimodal Musculoskeletal Masses: A Comparative Study with Human Raters.

Bosbach WA, Schoeni L, Beisbart C, Senge JF, Mitrakovic M, Anderson SE, Achangwa NR, Divjak E, Ivanac G, Grieser T, Weber MA, Maurer MH, Sanal HT, Daneshvar K

PubMed · Jun 3, 2025
Novel artificial intelligence tools have the potential to significantly enhance productivity in medicine while maintaining or even improving treatment quality. In this study, we aimed to evaluate the current capability of ChatGPT-4.0 to accurately interpret multimodal musculoskeletal tumor cases. We created 25 cases, each containing images from X-ray, computed tomography, magnetic resonance imaging, or scintigraphy. ChatGPT-4.0 was tasked with classifying each case using a six-option, two-choice question, in which both a primary and a secondary diagnosis were allowed. For performance evaluation, human raters also assessed the same cases. When only the primary diagnosis was taken into account, the accuracy of human raters exceeded that of ChatGPT-4.0 by a factor of nearly 2 (87% vs. 44%). However, in a setting that also considered secondary diagnoses, the performance gap shrank substantially (accuracy: 94% vs. 71%). A power analysis relying on Cohen's w confirmed the adequacy of the sample size (n = 25). The tested artificial intelligence tool demonstrated lower performance than human raters. Considering factors such as speed, constant availability, and potential future improvements, it appears plausible that artificial intelligence tools could serve as valuable assistance systems for doctors in future clinical settings. · ChatGPT-4.0 classifies musculoskeletal cases using multimodal imaging inputs. · Human raters outperform AI in primary diagnosis accuracy by a factor of nearly two. · Including secondary diagnoses improves AI performance and narrows the gap. · AI demonstrates potential as an assistive tool in future radiological workflows. · Power analysis confirms the robustness of the study findings with the current sample size. · Bosbach WA, Schoeni L, Beisbart C et al. Evaluating the Diagnostic Accuracy of ChatGPT-4.0 for Classifying Multimodal Musculoskeletal Masses: A Comparative Study with Human Raters. Rofo 2025; DOI: 10.1055/a-2594-7085.
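For readers unfamiliar with Cohen's w, the sketch below shows how the effect size is computed from observed versus expected proportions; the chance-level expectation for a six-option question is an assumption for illustration, not a value taken from the paper.

```python
# Cohen's w sketch: w = sqrt(sum over categories of (p_obs - p_exp)^2 / p_exp).
import numpy as np

def cohens_w(observed: np.ndarray, expected: np.ndarray) -> float:
    observed = observed / observed.sum()   # normalize counts to proportions
    expected = expected / expected.sum()
    return float(np.sqrt(np.sum((observed - expected) ** 2 / expected)))

# Hypothetical example: 11 of 25 cases correct (44%) vs. a 1-in-6 chance
# level for the primary diagnosis on a six-option question.
observed = np.array([11, 14])                 # correct, incorrect counts
expected = np.array([25 / 6, 25 * 5 / 6])     # chance expectation
print(f"Cohen's w = {cohens_w(observed, expected):.2f}")  # large effect (~0.73)
```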

Patient-specific prostate segmentation in kilovoltage images for radiation therapy intrafraction monitoring via deep learning.

Mylonas A, Li Z, Mueller M, Booth JT, Brown R, Gardner M, Kneebone A, Eade T, Keall PJ, Nguyen DT

PubMed · Jun 3, 2025
During radiation therapy, the natural movement of organs can lead to underdosing the cancer and overdosing the healthy tissue, compromising treatment efficacy. Real-time image-guided adaptive radiation therapy can track the tumour and account for the motion. Typically, fiducial markers are implanted as a surrogate for the tumour position due to the low radiographic contrast of soft tissues in kilovoltage (kV) images. A segmentation approach that does not require markers would eliminate the costs, delays, and risks associated with marker implantation. We trained patient-specific conditional Generative Adversarial Networks for prostate segmentation in kV images. The networks were trained using synthetic kV images generated from each patient's own imaging and planning data, which are available prior to the commencement of treatment. We validated the networks on two treatment fractions from 30 patients using multi-centre data from two clinical trials. Here, we present a large-scale proof-of-principle study of x-ray-based markerless prostate segmentation for globally available cancer therapy systems. Our results demonstrate the feasibility of a deep learning approach using kV images to track prostate motion across the entire treatment arc for 30 patients with prostate cancer. The mean absolute deviation is 1.4 and 1.6 mm in the anterior-posterior/lateral and superior-inferior directions, respectively. Markerless segmentation via deep learning may enable real-time image guidance on conventional cancer therapy systems without requiring implanted markers or additional hardware, thereby expanding access to real-time adaptive radiation therapy.
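A minimal sketch of the accuracy metric used here: the per-axis mean absolute deviation between segmentation-derived and reference centroids across a treatment arc. The trajectories below are synthetic stand-ins; the study reports MADs of 1.4 mm (anterior-posterior/lateral) and 1.6 mm (superior-inferior).

```python
# Per-axis mean absolute deviation (MAD) for centroid tracking, on synthetic data.
import numpy as np

def mean_absolute_deviation(predicted: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-axis MAD for (n_frames, 3) centroid trajectories, in mm."""
    return np.mean(np.abs(predicted - reference), axis=0)

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 2.0, size=(360, 3))              # one centroid per kV frame
predicted = reference + rng.normal(0.0, 1.5, size=(360, 3))  # segmentation output

mad_ap, mad_lat, mad_si = mean_absolute_deviation(predicted, reference)
print(f"MAD: AP={mad_ap:.1f} mm, lateral={mad_lat:.1f} mm, SI={mad_si:.1f} mm")
```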

Computer-Aided Decision Support Systems of Alzheimer's Disease Diagnosis - A Systematic Review.

Günaydın T, Varlı S

PubMed · Jun 3, 2025
The incidence of Alzheimer's disease is rising with the increasing elderly population worldwide. While no cure exists, early diagnosis can significantly slow disease progression. Computer-aided diagnostic systems are becoming critical tools for assisting in the early detection of Alzheimer's disease. In this systematic review, we aim to evaluate recent advancements in computer-aided decision support systems for Alzheimer's disease diagnosis, focusing on data modalities, machine learning methods, and performance metrics. We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Studies published between 2021 and 2024 were retrieved from PubMed, IEEE Xplore, and Web of Science using search terms related to Alzheimer's disease classification, neuroimaging, machine learning, and diagnostic performance. A total of 39 studies met the inclusion criteria, focusing on the use of Magnetic Resonance Imaging, Positron Emission Tomography, and biomarkers for Alzheimer's disease classification using machine learning models. Multimodal approaches, combining Magnetic Resonance Imaging with Positron Emission Tomography and cognitive assessments, outperformed single-modality studies in diagnostic accuracy and reliability. Convolutional Neural Networks were the most commonly used machine learning models, followed by hybrid models and Random Forest. The highest accuracy reported for binary classification was 100%, while multi-class classification achieved up to 99.98%. Techniques such as the Synthetic Minority Over-sampling Technique (SMOTE) and data augmentation were frequently employed to address data imbalance, improving model generalizability. Our review highlights the advantages of using multimodal data in computer-aided decision support systems for more accurate Alzheimer's disease diagnosis. However, we also identified several limitations, including data imbalance, small sample sizes, and the lack of external validation in most studies. Future research should utilize larger, more diverse datasets, incorporate longitudinal data, and validate models in real-world clinical trials. Additionally, there is a growing need for explainability in machine learning models to ensure they are interpretable and trusted in clinical settings. While computer-aided decision support systems show great promise in improving the early diagnosis of Alzheimer's disease, further work is needed to enhance their robustness, generalizability, and clinical applicability. By addressing these challenges, computer-aided decision support systems could play a pivotal role in the early detection and management of Alzheimer's disease, potentially improving patient outcomes and reducing healthcare costs.
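As an illustration of the imbalance handling the review highlights, the sketch below applies SMOTE from imbalanced-learn to a synthetic two-class feature matrix standing in for imaging-derived features.

```python
# SMOTE rebalancing sketch on synthetic stand-in features (CN vs. AD labels).
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))          # e.g., regional MRI features per subject
y = np.array([0] * 250 + [1] * 50)      # imbalanced classes: 250 CN vs. 50 AD

# SMOTE synthesizes minority-class samples by interpolating between neighbors.
X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y))            # Counter({0: 250, 1: 50})
print(Counter(y_resampled))  # Counter({0: 250, 1: 250})
```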

