Page 1 of 19181 results
You are viewing papers added to our database from 2025-09-22 to 2025-09-28.

Ultra-low-field MRI: a David versus Goliath challenge in modern imaging.

Gagliardo C, Feraco P, Contrino E, D'Angelo C, Geraci L, Salvaggio G, Gagliardo A, La Grutta L, Midiri M, Marrale M

pubmed · logopapers · Sep 26 2025
Ultra-low-field magnetic resonance imaging (ULF-MRI), operating below 0.2 Tesla, is gaining renewed interest as a re-emerging diagnostic modality in a field dominated by high- and ultra-high-field systems. Recent advances in magnet design, RF coils, pulse sequences, and AI-based reconstruction have significantly enhanced image quality, mitigating traditional limitations such as low signal- and contrast-to-noise ratio and reduced spatial resolution. ULF-MRI offers distinct advantages: reduced susceptibility artifacts, safer imaging in patients with metallic implants, low power consumption, and true portability for point-of-care use. This narrative review synthesizes the physical foundations, technological advances, and emerging clinical applications of ULF-MRI. A focused literature search across PubMed, Scopus, IEEE Xplore, and Google Scholar was conducted up to August 11, 2025, using combined keywords targeting hardware, software, and clinical domains. Inclusion emphasized scientific rigor and thematic relevance. A comparative analysis with other imaging modalities highlights the specific niche ULF-MRI occupies within the broader diagnostic landscape. Future directions and challenges for clinical translation are explored. In a world increasingly polarized between the push for ultra-high-field excellence and the need for accessible imaging, ULF-MRI embodies a modern "David versus Goliath" theme, offering a sustainable, democratizing force capable of expanding MRI access to anyone, anywhere.

Ultra-fast whole-brain T2-weighted imaging in 7 seconds using dual-type deep learning reconstruction with single-shot acquisition: clinical feasibility and comparison with conventional methods.

Ikebe Y, Fujima N, Kameda H, Harada T, Shimizu Y, Kwon J, Yoneyama M, Kudo K

pubmed · logopapers · Sep 26 2025
To evaluate the image quality and clinical utility of ultra-fast T2-weighted imaging (UF-T2WI), which acquires all slice data in 7 s using a single-shot turbo spin-echo (SSTSE) technique combined with dual-type deep learning (DL) reconstruction, incorporating DL-based image denoising and super-resolution processing, by comparing UF-T2WI with conventional T2WI. We analyzed data from 38 patients who underwent both conventional T2WI and UF-T2WI with the dual-type DL-based image reconstruction. Two board-certified radiologists independently performed blinded qualitative assessments of the patients' images obtained with UF-T2WI with DL and conventional T2WI, evaluating the overall image quality, anatomical structure visibility, and levels of noise and artifacts. In cases that included central nervous system diseases, the lesions' delineation was also assessed. The quantitative analysis included measurements of signal-to-noise ratios (SNRs) in white and gray matter and the contrast-to-noise ratio (CNR) between gray and white matter. Compared to conventional T2WI, UF-T2WI with DL received significantly higher ratings for overall image quality and lower noise and artifact levels (p < 0.001 for both readers). The anatomical visibility was significantly better in UF-T2WI for one reader, with no significant difference for the other reader. The lesion visibility in UF-T2WI was comparable to that in conventional T2WI. Quantitatively, the SNRs and CNRs were all significantly higher in UF-T2WI than in conventional T2WI (p < 0.001). The combination of SSTSE with dual-type DL reconstruction allows for the acquisition of clinically acceptable T2WI images in just 7 s. This technique shows strong potential to reduce MRI scan times and improve clinical workflow efficiency.
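The SNR and CNR figures reported above follow standard ROI-based definitions. A minimal sketch with toy intensity values (not the study's measurements; function and variable names are illustrative):

```python
# Hedged sketch: ROI-based SNR and CNR definitions commonly used in MRI
# quality assessment. The intensity values below are toy numbers.
def snr(roi_mean, noise_sd):
    """Signal-to-noise ratio: mean ROI intensity over noise standard deviation."""
    return roi_mean / noise_sd

def cnr(gm_mean, wm_mean, noise_sd):
    """Contrast-to-noise ratio between gray matter (GM) and white matter (WM)."""
    return abs(gm_mean - wm_mean) / noise_sd

gm, wm, sigma = 820.0, 610.0, 14.0  # arbitrary units
wm_snr = snr(wm, sigma)
gm_wm_cnr = cnr(gm, wm, sigma)      # (820 - 610) / 14 = 15.0
```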

COVID-19 Pneumonia Diagnosis Using Medical Images: Deep Learning-Based Transfer Learning Approach.

Dharmik A

pubmed · logopapers · Sep 26 2025
SARS-CoV-2, the causative agent of COVID-19, remains a global health concern due to its high transmissibility and evolving variants. Although vaccination efforts and therapeutic advancements have mitigated disease severity, emerging mutations continue to challenge diagnostics and containment strategies. As of mid-February 2025, global test positivity has risen to 11%, marking the highest level in over 6 months, despite widespread immunization efforts. Newer variants demonstrate enhanced host cell binding, increasing both infectivity and diagnostic complexity. This study aimed to evaluate the effectiveness of deep transfer learning in delivering a rapid, accurate, and mutation-resilient COVID-19 diagnosis from medical imaging, with a focus on scalability and accessibility. An automated detection system was developed using state-of-the-art convolutional neural networks, including VGG16 (Visual Geometry Group network-16 layers), ResNet50 (residual network-50 layers), ConvNeXtTiny (convolutional next-tiny), MobileNet (mobile network), NASNetMobile (neural architecture search network-mobile version), and DenseNet121 (densely connected convolutional network-121 layers), to detect COVID-19 from chest X-ray and computed tomography (CT) images. Among all the models evaluated, DenseNet121 emerged as the best-performing architecture for COVID-19 diagnosis using X-ray and CT images. It achieved an impressive accuracy of 98%, with a precision of 96.9%, a recall of 98.9%, an F1-score of 97.9%, and an area under the curve score of 99.8%, indicating a high degree of consistency and reliability in detecting both positive and negative cases. The confusion matrix showed minimal false positives and false negatives, underscoring the model's robustness in real-world diagnostic scenarios. Given its performance, DenseNet121 is a strong candidate for deployment in clinical settings and serves as a benchmark for future improvements in artificial intelligence-assisted diagnostic tools. 
The study results underscore the potential of artificial intelligence-powered diagnostics in supporting early detection and global pandemic response. With careful optimization, deep learning models can address critical gaps in testing, particularly in settings constrained by limited resources or emerging variants.
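As a rough illustration of how the reported precision, recall, F1-score, and AUC are obtained for a binary classifier, here is a scikit-learn sketch on toy labels and scores (not the study's data):

```python
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# Toy ground-truth labels and classifier scores (illustrative only).
y_true  = [1, 1, 1, 0, 0, 0, 1, 0]
y_score = [0.92, 0.85, 0.40, 0.10, 0.05, 0.30, 0.77, 0.60]
y_pred  = [int(s >= 0.5) for s in y_score]   # threshold at 0.5

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall    = recall_score(y_true, y_pred)     # TP / (TP + FN)
f1        = f1_score(y_true, y_pred)         # harmonic mean of the two
auc       = roc_auc_score(y_true, y_score)   # threshold-free ranking metric
```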

A Deep Learning-Based EffConvNeXt Model for Automatic Classification of Cystic Bronchiectasis: An Explainable AI Approach.

Tekin V, Tekinhatun M, Özçelik STA, Fırat H, Üzen H

pubmed · logopapers · Sep 25 2025
Cystic bronchiectasis and pneumonia are respiratory conditions that significantly impact morbidity and mortality worldwide. Diagnosing these diseases accurately is crucial, as early detection can greatly improve patient outcomes. The two conditions present with overlapping features on chest X-rays (CXR), making accurate diagnosis challenging. Recent advancements in deep learning (DL) have improved diagnostic accuracy in medical imaging. This study proposes the EffConvNeXt model, a hybrid approach combining EfficientNetB1 and ConvNeXtTiny, designed to enhance classification accuracy for cystic bronchiectasis, pneumonia, and normal cases in CXRs. The model effectively balances EfficientNetB1's efficiency with ConvNeXtTiny's advanced feature extraction, allowing for better identification of complex patterns in CXR images. Additionally, the EffConvNeXt model addresses limitations of each model individually: EfficientNetB1's SE blocks improve focus on critical image areas while keeping the model lightweight and fast, and ConvNeXtTiny enhances detection of subtle abnormalities, making the combined model highly effective for rapid and accurate CXR image analysis in clinical settings. For the performance analysis of the EffConvNeXt model, experimental studies were conducted using 5899 CXR images collected from Dicle University Medical Faculty. When used individually, ConvNeXtTiny achieved an accuracy rate of 97.12%, while EfficientNetB1 reached 97.79%. Combining the two, EffConvNeXt raised the accuracy to 98.25%, a 0.46% improvement, outperforming all other DL models tested. These findings indicate that EffConvNeXt provides a reliable, automated solution for distinguishing cystic bronchiectasis and pneumonia, supporting clinical decision-making with enhanced diagnostic accuracy.
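The hybrid idea of combining two backbones reduces, in its simplest form, to late fusion: concatenate the feature vectors from each encoder and classify the fused representation. A minimal sketch with random stand-in features (names and shapes are illustrative, not the actual EffConvNeXt architecture):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Random vectors stand in for the two backbones' embeddings (toy data).
rng = np.random.default_rng(0)
n = 100
feat_effnet   = rng.normal(size=(n, 32))   # stand-in for EfficientNetB1 features
feat_convnext = rng.normal(size=(n, 48))   # stand-in for ConvNeXtTiny features
labels = (feat_effnet[:, 0] + feat_convnext[:, 0] > 0).astype(int)  # toy target

# Late fusion: concatenate the two feature vectors, then classify.
fused = np.concatenate([feat_effnet, feat_convnext], axis=1)  # shape (100, 80)
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
train_acc = clf.score(fused, labels)
```

In practice the two encoders and the classification head would be trained jointly end to end; the concatenation step is the same.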

Single-centre, prospective cohort to predict optimal individualised treatment response in multiple sclerosis (POINT-MS): a cohort profile.

Christensen R, Cruciani A, Al-Araji S, Bianchi A, Chard D, Fourali S, Hamed W, Hammam A, He A, Kanber B, Maccarrone D, Moccia M, Mohamud S, Nistri R, Passalis A, Pozzilli V, Prados Carrasco F, Samdanidou E, Song J, Wingrove J, Yam C, Yiannakas M, Thompson AJ, Toosy A, Hacohen Y, Barkhof F, Ciccarelli O

pubmed · logopapers · Sep 25 2025
Multiple sclerosis (MS) is a chronic neurological condition that affects approximately 150 000 people in the UK and presents a significant healthcare burden, including the high costs of disease-modifying treatments (DMTs). DMTs have substantially reduced the risk of relapse and moderately reduced disability progression. Patients exhibit a wide range of responses to available DMTs. The Predicting Optimal INdividualised Treatment response in MS (POINT-MS) cohort was established to predict the individual treatment response by integrating comprehensive clinical phenotyping with imaging, serum and genetic biomarkers of disease activity and progression. Here, we present the baseline characteristics of the cohort and provide an overview of the study design, laying the groundwork for future analyses. POINT-MS is a prospective, observational research cohort and biobank of 781 adult participants with a diagnosis of MS who consented to study enrolment on initiation of a DMT at the Queen Square MS Centre (National Hospital for Neurology and Neurosurgery, University College London Hospital NHS Trust, London) between 01/07/2019 and 31/07/2024. All patients were invited for clinical assessments, including the expanded disability status scale (EDSS) score, brief international cognitive assessment for MS and various patient-reported outcome measures (PROMs). They additionally underwent MRI at 3T, optical coherence tomography and blood tests (for genotyping and serum biomarker quantification), at baseline (i.e., within 3 months from commencing a DMT), and between 6-12 (re-baseline), 18-24, 30-36, 42-48 and 54-60 months after DMT initiation. 748 participants provided baseline data. They were mostly female (68%) and White (75%) participants, with relapsing-remitting MS (94.3%), and with an average age of 40.8 (±10.9) years and a mean disease duration of 7.9 (±7.4) years since symptom onset.
Despite low disability (median EDSS 2.0), cognitive impairment was observed in 40% of participants. Most patients (98.4%) had at least one comorbidity. At study entry, 59.2% were treatment naïve, and 83.2% initiated a high-efficacy DMT. Most patients (76.4%) were in either full- or part-time employment. PROMs indicated heterogeneous impairments in physical and mental health, with a greater psychological than physical impact and with low levels of fatigue. When baseline MRI scans were compared with previous scans (available in 668 (89%) patients; mean time since last scan 9±8 months), 26% and 8.5% of patients had at least one new brain or spinal cord lesion at study entry, respectively. Patients showed a median volume of brain lesions of 6.14 cm<sup>3</sup>, with significant variability among patients (CI 1.1 to 34.1). When brain tissue volume z-scores were obtained using healthy subjects (N=113; mean age 42.3 (±11.8) years; 61.9% female) from a local MRI database, patients showed a slight reduction in the volumes of the whole grey matter (-0.16 (-0.22 to -0.09)), driven by the deep grey matter (-0.47 (-0.55 to -0.40)), and of the whole white matter (-0.18 (-0.28 to -0.09)), but normal cortical grey matter volumes (0.10 (0.05 to 0.15)). The mean upper cervical spinal cord cross-sectional area (CSA), as measured from volumetric brain scans, was 62.3 (SD 7.5) mm<sup>2</sup>. When CSA z-scores were obtained from the same healthy subjects used for brain measures, patients showed a slight reduction in CSA (-0.15 (-0.24 to -0.10)). Modelling with both standard statistics and machine learning approaches is currently planned to predict individualised treatment response by integrating demographic, socioeconomic and clinical data with imaging, genetic and serum biomarkers. The long-term output of this research is a stratification tool that will guide the selection of DMTs in clinical practice on the basis of the individual prognostic profile.
We will complete long-term follow-up data in 4 years (January 2029). The biobank and MRI repository will be used for collaborative research on the mechanisms of disability in MS.
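The brain-volume and CSA z-scores above normalize each patient measurement against a healthy-control reference distribution. A minimal sketch with toy control values (not the cohort's local MRI database):

```python
import numpy as np

def z_score(patient_value, control_values):
    """z-score of one patient measurement against a control distribution."""
    mu = np.mean(control_values)
    sd = np.std(control_values, ddof=1)   # sample SD of the controls
    return (patient_value - mu) / sd

# Toy healthy-control CSA values in mm^2 (illustrative only).
controls_csa = np.array([70.0, 68.5, 72.3, 66.8, 71.1, 69.4])
z = z_score(62.3, controls_csa)   # negative => smaller than the control mean
```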

End-to-end CNN-based deep learning enhances breast lesion characterization using quantitative ultrasound (QUS) spectral parametric images.

Osapoetra LO, Moslemi A, Moore-Palhares D, Halstead S, Alberico D, Hwang A, Sannachi L, Curpen B, Czarnota GJ

pubmed · logopapers · Sep 25 2025
QUS spectral parametric imaging offers a fast and accurate method for breast lesion characterization. This study explored using deep CNNs to classify breast lesions from QUS spectral parametric images, aiming to improve on radiomics and conventional machine learning approaches. Predictive models were developed using transfer learning with pre-trained CNNs to distinguish malignant from benign lesions. The dataset included 276 participants: 184 malignant (median age, 51 years [IQR: 27-81 years]) and 92 benign cases (median age, 46 years [IQR: 18-75 years]). QUS spectral parametric imaging was applied to the US RF data and resulted in 1764 images of QUS spectral parameters (MBF, SS, and SI) and QUS scattering parameters (ASD and AAC). The data were randomly split into 60% training, 20% validation, and 20% test sets, stratified by lesion subtype, and repeated five times. The number of convolutional blocks was optimized, and the final convolutional layer was fine-tuned. Models tested included ResNet, Inception-v3, Xception, and EfficientNet. Xception-41 achieved a recall of 86 ± 3%, specificity of 87 ± 5%, balanced accuracy of 87 ± 3%, and an AUC of 0.93 ± 0.02 on test sets. EfficientNetV2-M showed similar performance with a recall of 91 ± 1%, specificity of 81 ± 7%, balanced accuracy of 86 ± 3%, and an AUC of 0.92 ± 0.02. CNN models outperformed radiomics and conventional machine learning (p-values < 0.05). This study demonstrated the capability of end-to-end CNN-based models for the accurate characterization of breast masses from QUS spectral parametric images.
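The repeated, stratified 60/20/20 split described above can be sketched with scikit-learn (toy labels stand in for the lesion subtypes; the real training pipeline is not shown):

```python
from sklearn.model_selection import train_test_split

# Toy sample IDs and binary "subtype" labels (illustrative only).
samples = list(range(100))
labels  = [i % 2 for i in samples]

splits = []
for seed in range(5):  # five repetitions, as in the study design
    # First carve off 60% for training, stratified by label...
    train, rest, y_train, y_rest = train_test_split(
        samples, labels, test_size=0.4, stratify=labels, random_state=seed)
    # ...then split the remaining 40% evenly into validation and test.
    val, test, y_val, y_test = train_test_split(
        rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    splits.append((train, val, test))
```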

MRI grading of lumbar disc herniation based on AFFM-YOLOv8 system.

Wang Y, Yang Z, Cai S, Wu W, Wu W

pubmed · logopapers · Sep 25 2025
Magnetic resonance imaging (MRI) serves as the clinical gold standard for diagnosing lumbar disc herniation (LDH). This multicenter study aimed to develop and clinically validate a deep learning (DL) model utilizing axial T2-weighted lumbar MRI sequences to automate LDH detection, following the Michigan State University (MSU) morphological classification criteria. A total of 8428 patients (100,000 axial lumbar MRI images) were analyzed, with spinal surgeons annotating the datasets per the MSU criteria, which classify LDH into 11 subtypes based on morphology and neural compression severity. A DL architecture integrating adaptive multi-scale feature fusion, termed AFFM-YOLOv8, was developed. Model performance was validated against radiologists' annotations using accuracy, precision, recall, F1-score, and Cohen's κ (95% confidence intervals). The proposed model demonstrated superior diagnostic performance with a 91.01% F1-score (a 3.05% improvement over baseline) and a 3% recall enhancement across all evaluation metrics. For surgical indication prediction, strong inter-rater agreement was achieved with senior surgeons (κ = 0.91, 95% CI 90.6-91.4) and residents (κ = 0.89, 95% CI 88.5-89.4), reaching consensus levels comparable to expert-to-expert agreement (senior surgeons: κ = 0.89; residents: κ = 0.87). This study establishes a DL framework for automated LDH diagnosis using large-scale axial MRI datasets. The model achieves clinician-level accuracy in MSU-compliant classification, addressing key limitations of prior binary classification systems. By providing granular spatial and morphological insights, this tool holds promise for standardizing LDH assessment and reducing diagnostic delays in resource-constrained settings.
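Cohen's κ, used above to quantify model-versus-surgeon agreement, can be computed with scikit-learn. A sketch on toy three-level ratings (not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Toy ordinal ratings from the model and a surgeon (illustrative only);
# they agree on 9 of 10 cases.
model_ratings   = [0, 1, 1, 2, 2, 0, 1, 2, 0, 1]
surgeon_ratings = [0, 1, 1, 2, 1, 0, 1, 2, 0, 1]

kappa = cohen_kappa_score(model_ratings, surgeon_ratings)  # chance-corrected agreement
```

Unlike raw percent agreement (0.9 here), κ discounts agreement expected by chance from the raters' marginal distributions, which is why it is preferred for inter-rater studies.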

Automated segmentation of brain metastases in magnetic resonance imaging using deep learning in radiotherapy.

Zhang R, Liu Y, Li M, Jin A, Chen C, Dai Z, Zhang W, Jia L, Peng P

pubmed · logopapers · Sep 25 2025
Brain metastases (BMs) are the most common intracranial tumors, and stereotactic radiotherapy has improved the quality of life of patients with BMs, but precise delineation of BMs demands considerable time and experience from oncologists. Deep learning techniques have shown promising applications in radiation oncology. We therefore propose a deep learning-based automatic segmentation of primary tumor volumes for BMs in this work. Magnetic resonance imaging (MRI) data of 158 eligible patients with BMs were retrospectively collected in the study. An automatic segmentation model called BUC-Net, based on U-Net with a cascade strategy and a bottleneck module, was proposed for auto-segmentation of BMs. The proposed model was evaluated using geometric metrics (Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD)) for segmentation performance, precision-recall (PR) and receiver operating characteristic (ROC) curves for detection performance, and relative volume difference (RVD) for clinical evaluation. Compared with U-Net and U-Net Cascade, BUC-Net achieved average DSCs of 0.912 and 0.797, HD95s of 0.901 mm and 0.922 mm, and ASDs of 0.332 mm and 0.210 mm for automatic segmentation in binary and multiple classification, respectively. For tumor detection, the average area under the curve (AUC) was 0.934 for the PR curve and 0.835 for the ROC curve. BUC-Net also achieved the smallest RVD across various diameter ranges in the clinical evaluation. BUC-Net can complete segmentation and modification of BMs for one patient within 10 min, instead of the 3-6 h required by conventional manual delineation, markedly improving the efficiency and accuracy of radiation therapy.
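Two of the evaluation metrics above, DSC and RVD, have simple closed forms on binary masks. A minimal NumPy sketch with toy 8×8 masks (not the study's data):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def rvd(pred, gt):
    """Relative volume difference: signed volume error vs. ground truth."""
    return (pred.sum() - gt.sum()) / gt.sum()

# Toy 8x8 binary masks: the prediction over-segments by one column.
gt   = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6]   = True  # 16 voxels
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:7] = True  # 20 voxels

d = dice(pred, gt)   # 2*16 / (20 + 16)
r = rvd(pred, gt)    # (20 - 16) / 16 = 0.25
```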

Multimodal text guided network for chest CT pneumonia classification.

Feng Y, Huang G, Ju F, Cui H

pubmed · logopapers · Sep 25 2025
Pneumonia is a prevalent and serious respiratory disease, responsible for substantial morbidity and mortality globally. With advancements in deep learning, the automatic diagnosis of pneumonia has attracted significant research attention in medical image classification. However, current methods still face several challenges. First, since lesions are often visible in only a few slices, slice-based classification algorithms may overlook critical spatial contextual information in CT sequences, and slice-level annotations are labor-intensive. Moreover, chest CT sequence-based pneumonia classification algorithms that rely solely on sequence-level coarse-grained labels remain limited, especially in integrating multi-modal information. To address these challenges, we propose a Multi-modal Text-Guided Network (MTGNet) for pneumonia classification using chest CT sequences. In this model, we design a sequential graph pooling network to encode the CT sequences by gradually selecting important slice features to obtain a sequence-level representation. Additionally, a CT description encoder is developed to learn representations from textual reports. To simulate the clinical diagnostic process, we employ multi-modal training and single-modal testing. A modal transfer module is proposed to generate simulated textual features from CT sequences. Cross-modal attention is then employed to fuse the sequence-level and simulated textual representations, thereby enhancing feature learning within the CT sequences by incorporating semantic information from textual descriptions. Furthermore, contrastive learning is applied to learn discriminative features by maximizing the similarity of positive sample pairs and minimizing the similarity of negative sample pairs. Extensive experiments on a self-constructed pneumonia CT sequences dataset demonstrate that the proposed model significantly improves classification performance.
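The contrastive objective described above (maximize positive-pair similarity, minimize negative-pair similarity) is commonly implemented as an InfoNCE-style softmax over cosine similarities. A minimal NumPy sketch with toy embeddings (an assumption for illustration, not MTGNet's actual loss):

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x)

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: the positive pair should out-score all negatives."""
    a = l2_normalize(anchor)
    sims = np.array([a @ l2_normalize(positive)] +
                    [a @ l2_normalize(n) for n in negatives]) / tau
    sims -= sims.max()                       # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])                 # index 0 is the positive pair

anchor    = np.array([1.0, 0.0])
positive  = np.array([0.9, 0.1])             # nearly aligned with the anchor
negatives = [np.array([-1.0, 0.2]), np.array([0.0, 1.0])]
loss = contrastive_loss(anchor, positive, negatives)  # small: the positive wins
```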

Machine and Deep Learning applied to Medical Microwave Imaging: a Scoping Review from Reconstruction to Classification.

Silva T, Conceicao RC, Godinho DM

pubmed · logopapers · Sep 25 2025
Microwave Imaging (MWI) is a promising modality due to its noninvasive nature and lower cost compared to other medical imaging techniques. These characteristics make it a potential alternative to traditional imaging techniques. It has various medical applications and has been particularly exploited in breast and brain imaging. Machine Learning (ML) has also been increasingly used for medical applications. This paper provides a scoping review of the role of ML in MWI, focusing on two key areas: image reconstruction and classification. The reconstruction section discusses various ML algorithms used to enhance image quality and computational efficiency, highlighting methods such as Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs). The classification section delves into the application of ML for distinguishing between different tissue types, including applications in breast cancer detection and neurological disorder classification. By analyzing the latest studies and methodologies, this review aims to map the current state of ML-enhanced MWI and shed light on its potential for clinical applications.
