
Enhancing Spinal Cord and Canal Segmentation in Degenerative Cervical Myelopathy: The Role of Interactive Learning Models with Manual Click.

Han S, Oh JK, Cho W, Kim TJ, Hong N, Park SB

PubMed | Sep 29, 2025
We aim to develop an interactive segmentation model that can offer accuracy and reliability for segmentation of the irregularly shaped spinal cord and canal in degenerative cervical myelopathy (DCM) through manual clicks and model refinement. A dataset of 1444 frames from 294 magnetic resonance imaging records of DCM patients was used, and we developed two segmentation models for comparison: auto-segmentation and interactive segmentation. The former was based on U-Net and utilized a pretrained ConvNeXt-Tiny as its encoder. For the latter, we employed an interactive segmentation model structured on SimpleClick, a large model that uses a vision transformer as its backbone, together with simple fine-tuning. The segmentation performance of the two models was compared in terms of Dice score, mean intersection over union (mIoU), average precision, and Hausdorff distance. The efficiency of the interactive segmentation model was evaluated by the number of clicks required to achieve a target mIoU. Our model achieved better scores across all four evaluation metrics for segmentation accuracy, showing improvements of +6.4%, +1.8%, +3.7%, and -53.0% for canal segmentation, and +11.7%, +6.0%, +18.2%, and -70.9% for cord segmentation with 15 clicks, respectively. The numbers of clicks required for the interactive segmentation model to achieve 90% mIoU for spinal canal-with-cord cases and 80% mIoU for spinal cord cases were 11.71 and 11.99, respectively. We found that the interactive segmentation model significantly outperformed the auto-segmentation model. By incorporating simple manual inputs, the interactive model effectively identified regions of interest, particularly for the complex and irregular shapes of the spinal cord, demonstrating both enhanced accuracy and adaptability.
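For readers reproducing this kind of evaluation, a minimal sketch of the reported overlap metrics (Dice and IoU, plus a symmetric Hausdorff distance via SciPy) on binary masks follows; it assumes NumPy arrays of equal shape and is not the authors' code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def iou_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union (the per-case quantity behind mIoU)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground voxel sets, in voxel units."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```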

Beyond tractography in brain connectivity mapping with dMRI morphometry and functional networks.

Wang JT, Lin CP, Liu HM, Pierpaoli C, Lo CZ

PubMed | Sep 27, 2025
Traditional brain connectivity studies have focused mainly on structural connectivity, often relying on tractography with diffusion MRI (dMRI) to reconstruct white matter pathways. In parallel, studies of functional connectivity have examined correlations in brain activity using fMRI. However, emerging methodologies are advancing our understanding of brain networks. Here we explore advanced connectivity approaches beyond conventional tractography, focusing on dMRI morphometry and the integration of structural and functional connectivity analysis. dMRI morphometry enables quantitative assessment of white matter pathway volumes through statistical comparison with normative populations, while functional connectivity reveals network organization that is not restricted to direct anatomical connections. More recently, approaches that combine diffusion tensor imaging (DTI) with functional correlation tensor (FCT) analysis have been introduced, and these complementary methods provide new perspectives on brain structure-function relationships. Together, such approaches have important implications for neurodevelopmental and neurological disorders as well as brain plasticity. The integration of these methods with artificial intelligence techniques has the potential to support both basic neuroscience research and clinical applications.

Ultra-low-field MRI: a David versus Goliath challenge in modern imaging.

Gagliardo C, Feraco P, Contrino E, D'Angelo C, Geraci L, Salvaggio G, Gagliardo A, La Grutta L, Midiri M, Marrale M

PubMed | Sep 26, 2025
Ultra-low-field magnetic resonance imaging (ULF-MRI), operating below 0.2 Tesla, is gaining renewed interest as a re-emerging diagnostic modality in a field dominated by high- and ultra-high-field systems. Recent advances in magnet design, RF coils, pulse sequences, and AI-based reconstruction have significantly enhanced image quality, mitigating traditional limitations such as low signal- and contrast-to-noise ratios and reduced spatial resolution. ULF-MRI offers distinct advantages: reduced susceptibility artifacts, safer imaging in patients with metallic implants, low power consumption, and true portability for point-of-care use. This narrative review synthesizes the physical foundations, technological advances, and emerging clinical applications of ULF-MRI. A focused literature search across PubMed, Scopus, IEEE Xplore, and Google Scholar was conducted up to August 11, 2025, using combined keywords targeting hardware, software, and clinical domains. Inclusion emphasized scientific rigor and thematic relevance. A comparative analysis with other imaging modalities highlights the specific niche ULF-MRI occupies within the broader diagnostic landscape. Future directions and challenges for clinical translation are explored. In a world increasingly polarized between the push for ultra-high-field excellence and the need for accessible imaging, ULF-MRI embodies a modern "David versus Goliath" theme, offering a sustainable, democratizing force capable of expanding MRI access to anyone, anywhere.

Ultra-fast whole-brain T2-weighted imaging in 7 seconds using dual-type deep learning reconstruction with single-shot acquisition: clinical feasibility and comparison with conventional methods.

Ikebe Y, Fujima N, Kameda H, Harada T, Shimizu Y, Kwon J, Yoneyama M, Kudo K

PubMed | Sep 26, 2025
To evaluate the image quality and clinical utility of ultra-fast T2-weighted imaging (UF-T2WI), which acquires all slice data in 7 s using a single-shot turbo spin-echo (SSTSE) technique combined with dual-type deep learning (DL) reconstruction incorporating DL-based image denoising and super-resolution processing, by comparing UF-T2WI with conventional T2WI. We analyzed data from 38 patients who underwent both conventional T2WI and UF-T2WI with dual-type DL-based image reconstruction. Two board-certified radiologists independently performed blinded qualitative assessments of the images obtained with UF-T2WI with DL and with conventional T2WI, evaluating overall image quality, anatomical structure visibility, and levels of noise and artifacts. In cases with central nervous system disease, lesion delineation was also assessed. The quantitative analysis included measurements of the signal-to-noise ratio (SNR) in white and gray matter and the contrast-to-noise ratio (CNR) between gray and white matter. Compared to conventional T2WI, UF-T2WI with DL received significantly higher ratings for overall image quality and lower noise and artifact levels (p < 0.001 for both readers). Anatomical visibility was significantly better in UF-T2WI for one reader, with no significant difference for the other. Lesion visibility in UF-T2WI was comparable to that in conventional T2WI. Quantitatively, the SNRs and CNRs were all significantly higher in UF-T2WI than in conventional T2WI (p < 0.001). The combination of SSTSE with dual-type DL reconstruction allows acquisition of clinically acceptable T2WI images in just 7 s. This technique shows strong potential to reduce MRI scan times and improve clinical workflow efficiency.
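The abstract does not spell out how SNR and CNR were measured; a minimal sketch of one common ROI-based definition (mean tissue signal over the standard deviation of background air, and the gray-white difference over the same noise estimate) is given below, with all masks assumed to be pre-drawn boolean ROIs rather than the authors' actual protocol.

```python
import numpy as np

def roi_snr(image: np.ndarray, tissue_mask: np.ndarray, air_mask: np.ndarray) -> float:
    """SNR: mean signal in a tissue ROI divided by the standard deviation of background air."""
    return float(image[tissue_mask].mean() / image[air_mask].std())

def roi_cnr(image: np.ndarray, gm_mask: np.ndarray, wm_mask: np.ndarray, air_mask: np.ndarray) -> float:
    """CNR: absolute gray-white signal difference divided by the background noise estimate."""
    return float(abs(image[gm_mask].mean() - image[wm_mask].mean()) / image[air_mask].std())
```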

Automated deep learning method for whole-breast segmentation in contrast-free quantitative MRI.

Gao W, Zhang Y, Gao B, Xia Y, Liang W, Yang Q, Shi F, He T, Han G, Li X, Su X, Zhang Y

PubMed | Sep 26, 2025
To develop a deep learning segmentation method utilizing the nnU-Net architecture for fully automated whole-breast segmentation based on diffusion-weighted imaging (DWI) and synthetic MRI (SyMRI) images. A total of 98 patients with 196 breasts were evaluated. All patients underwent 3.0T magnetic resonance (MR) examinations, which incorporated DWI and SyMRI techniques. The ground truth for breast segmentation was established through a manual, slice-by-slice approach performed by two experienced radiologists. The U-Net and nnU-Net deep learning algorithms were employed to segment the whole breast. Performance was evaluated using several metrics, including the Dice similarity coefficient (DSC), accuracy, and Pearson's correlation coefficient. For DWI and the proton density (PD) images of SyMRI, nnU-Net outperformed U-Net, achieving higher DSCs in both the testing set (DWI, 0.930 ± 0.029 vs. 0.785 ± 0.161; PD, 0.969 ± 0.010 vs. 0.936 ± 0.018) and the independent testing set (DWI, 0.953 ± 0.019 vs. 0.789 ± 0.148; PD, 0.976 ± 0.008 vs. 0.939 ± 0.018). The PD images of SyMRI exhibited better performance than DWI, attaining the highest DSC and accuracy. The correlation coefficients (R²) for nnU-Net were 0.99-1.00 for DWI and PD, significantly surpassing U-Net. nnU-Net exhibited exceptional segmentation performance for fully automated breast segmentation of contrast-free quantitative images. This method serves as an effective tool for processing large-scale clinical datasets and represents a significant advancement toward computer-aided quantitative analysis of breast DWI and SyMRI images.
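The Pearson correlation reported here plausibly relates automatically and manually segmented breast volumes across cases; a minimal sketch under that assumption follows, with the per-case volumes purely illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def mask_volume_ml(mask: np.ndarray, voxel_size_mm: tuple) -> float:
    """Convert a binary segmentation mask to a volume in millilitres."""
    return float(mask.sum() * np.prod(voxel_size_mm) / 1000.0)

# Hypothetical per-case breast volumes (mL) from automatic and manual segmentations.
auto_vol = np.array([612.4, 540.1, 701.8, 488.9, 655.0])
manual_vol = np.array([605.0, 552.3, 695.2, 480.7, 660.3])

r, p = pearsonr(auto_vol, manual_vol)
print(f"Pearson r = {r:.3f}, R^2 = {r**2:.3f}, p = {p:.4f}")
```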

Leveraging multi-modal foundation model image encoders to enhance brain MRI-based headache classification.

Rafsani F, Sheth D, Che Y, Shah J, Siddiquee MMR, Chong CD, Nikolova S, Ross K, Dumkrieger G, Li B, Wu T, Schwedt TJ

PubMed | Sep 26, 2025
Headaches are a nearly universal human experience, traditionally diagnosed based solely on symptoms. Recent advances in imaging techniques and artificial intelligence (AI) have enabled the development of automated headache detection systems, which can enhance clinical diagnosis, especially when symptom-based evaluations are insufficient. Current AI models often require extensive data, limiting their clinical applicability where data availability is low. However, deep learning models, particularly pre-trained ones fine-tuned with smaller, targeted datasets, can potentially overcome this limitation. Leveraging BioMedCLIP, a pre-trained foundation model combining a vision transformer (ViT) image encoder with a PubMedBERT text encoder, we fine-tuned the pre-trained ViT for the specific purpose of classifying headaches and detecting biomarkers from brain MRI data. The dataset consisted of 721 individuals: 424 healthy controls (HC) from the IXI dataset and 297 local participants, including migraine sufferers (n = 96), individuals with acute post-traumatic headache (APTH, n = 48), individuals with persistent post-traumatic headache (PPTH, n = 49), and additional HC (n = 104). The model achieved high accuracy across multiple balanced test sets, including 89.96% for migraine versus HC, 88.13% for APTH versus HC, and 83.13% for PPTH versus HC, all validated through five-fold cross-validation for robustness. Brain regions identified by Gradient-weighted Class Activation Mapping (Grad-CAM) analysis as responsible for migraine classification included the postcentral cortex, supramarginal gyrus, superior temporal cortex, and precuneus cortex; for APTH, the rostral middle frontal and precentral cortices; and for PPTH, the cerebellar and precentral cortices. To our knowledge, this is the first study to leverage a multimodal biomedical foundation model for headache classification and biomarker detection using structural MRI, offering complementary insights into the causes and brain changes associated with headache disorders.
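The fine-tuning recipe is not detailed in the abstract; the sketch below shows the general pattern of attaching a classification head to a pretrained ViT and updating it on a small labeled set. It uses a generic ImageNet-pretrained ViT from timm as a stand-in for BioMedCLIP's image encoder, and all names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import timm

# Stand-in for BioMedCLIP's ViT image encoder; the paper's actual encoder is loaded differently.
backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
head = nn.Linear(backbone.num_features, 2)        # e.g. migraine vs. healthy control
model = nn.Sequential(backbone, head)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=1e-2)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of 3-channel 224x224 MRI slices."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```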

A novel deep neural architecture for efficient and scalable multidomain image classification.

Nobel SMN, Tasir MAM, Noor H, Monowar MM, Hamid MA, Sayeed MS, Islam MR, Mridha MF, Dey N

PubMed | Sep 26, 2025
Deep learning has significantly advanced the field of computer vision; however, developing models that generalize effectively across diverse image domains remains a major research challenge. In this study, we introduce DeepFreqNet, a novel deep neural architecture specifically designed for high-performance multi-domain image classification. The innovative aspect of DeepFreqNet lies in its combination of three powerful components: multi-scale feature extraction for capturing patterns at different resolutions, depthwise separable convolutions for enhanced computational efficiency, and residual connections to maintain gradient flow and accelerate convergence. This hybrid design improves the architecture's ability to learn discriminative features and ensures scalability across domains with varying data complexities. Unlike traditional transfer learning models, DeepFreqNet adapts seamlessly to diverse datasets without requiring extensive reconfiguration. Experimental results from nine benchmark datasets, including MRI tumor classification, blood cell classification, and sign language recognition, demonstrate superior performance, achieving classification accuracies between 98.96% and 99.97%. These results highlight the effectiveness and versatility of DeepFreqNet, showcasing a significant improvement over existing state-of-the-art methods and establishing it as a robust solution for real-world image classification challenges.
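DeepFreqNet's exact layer configuration is not given in the abstract; the PyTorch sketch below illustrates two of the named building blocks, a depthwise separable convolution wrapped in a residual connection, with channel sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableResidualBlock(nn.Module):
    """Depthwise 3x3 conv followed by a pointwise 1x1 conv, with a residual skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return x + out  # residual connection preserves gradient flow

x = torch.randn(1, 64, 56, 56)
print(DepthwiseSeparableResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```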

Deep learning-driven contactless ECG in MRI via beat pilot tone for motion-resolved image reconstruction and heart rate monitoring.

Sun H, Ding Q, Zhong S, Zhang Z

PubMed | Sep 26, 2025
The electrocardiogram (ECG) is crucial for synchronizing cardiovascular magnetic resonance imaging (CMRI) acquisition with the cardiac cycle and for continuous heart rate monitoring during prolonged scans. However, conventional electrode-based ECG systems in clinical MRI environments suffer from tedious setup, magnetohydrodynamic (MHD) waveform distortion, skin burn risks, and patient discomfort. This study proposes a contactless ECG measurement method in MRI to address these challenges. We integrated Beat Pilot Tone (BPT), a contactless, highly motion-sensitive, and easily integrable RF motion-sensing modality, into CMRI to capture cardiac motion without direct patient contact. A deep neural network was trained to map the BPT-derived cardiac mechanical motion signals to corresponding ECG waveforms. The reconstructed ECG was evaluated against simultaneously acquired ground-truth ECG using multiple metrics: Pearson correlation coefficient, relative root mean square error (RRMSE), cardiac trigger timing accuracy, and heart rate estimation error. Additionally, we performed retrospective binning reconstruction of the MRI data using the reconstructed ECG as the reference and evaluated image quality under both standard clinical conditions and challenging scenarios involving arrhythmias and subject motion. To examine the scalability of our approach across field strengths, the model pretrained on 1.5T data was applied to 3T BPT cardiac acquisitions. In optimal acquisition scenarios, the reconstructed ECG achieved a median Pearson correlation of 89% relative to the ground truth, cardiac triggering accuracy reached 94%, and heart rate estimation error remained below 1 bpm. The quality of the reconstructed images was comparable to that obtained with ground-truth ECG synchronization. The method exhibited a degree of adaptability to irregular heart rhythms and subject motion and scaled effectively across MRI systems operating at different field strengths. The proposed contactless ECG measurement method has the potential to streamline CMRI workflows, improve patient safety and comfort, mitigate MHD distortion challenges, and support robust clinical application.
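The architecture of the BPT-to-ECG network is not described in the abstract; the sketch below is a toy 1D convolutional signal-to-signal regressor illustrating the general idea, with hypothetical channel counts and segment length.

```python
import torch
import torch.nn as nn

class BPTtoECG(nn.Module):
    """Toy 1D conv network mapping a BPT motion signal to an ECG waveform of equal length."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=15, padding=7),
        )

    def forward(self, bpt: torch.Tensor) -> torch.Tensor:
        return self.net(bpt)

bpt = torch.randn(4, 1, 2048)                    # hypothetical batch of BPT segments
ecg_pred = BPTtoECG()(bpt)                       # predicted ECG, same shape as input
loss = nn.functional.mse_loss(ecg_pred, torch.randn_like(ecg_pred))  # placeholder target
```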

Deep learning reconstruction for temporomandibular joint MRI: diagnostic interchangeability, image quality, and scan time reduction.

Jo GD, Jeon KJ, Choi YJ, Lee C, Han SS

PubMed | Sep 25, 2025
To evaluate the diagnostic interchangeability, image quality, and scan time of deep learning (DL)-reconstructed magnetic resonance imaging (MRI) compared with conventional MRI for the temporomandibular joint (TMJ). Patients with suspected TMJ disorder underwent sagittal proton density-weighted (PDW) and T2-weighted fat-suppressed (T2W FS) MRI using both conventional and DL reconstruction protocols in a single session. Three oral radiologists independently assessed disc shape, disc position, and joint effusion. Diagnostic interchangeability for these findings was evaluated by comparing interobserver agreement, with equivalence defined as a 95% confidence interval (CI) within ±5%. Qualitative image quality (sharpness, noise, artifacts, overall) was rated on a 5-point scale. Quantitative image quality was assessed by measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in the condyle, disc, and background air. Image quality scores were compared using the Wilcoxon signed-rank test, and SNR/CNR using paired t-tests. Scan times were directly compared. A total of 176 TMJs from 88 patients (mean age, 37 ± 16 years; 43 men) were analyzed. DL-reconstructed MRI demonstrated diagnostic equivalence to conventional MRI for disc shape, position, and effusion (equivalence indices < 3%; 95% CIs within ±5%). DL reconstruction significantly reduced noise in PDW and T2W FS sequences (p < 0.05) while maintaining sharpness and artifact levels. SNR and CNR were significantly improved (p < 0.05), except for disc SNR in PDW (p = 0.189). Scan time was reduced by 49.2%. DL-reconstructed TMJ MRI is diagnostically interchangeable with conventional MRI, offering improved image quality with a shorter scan time.
Question: Long MRI scan times in patients with temporomandibular disorders can increase pain and motion-related artifacts, often compromising image quality in diagnostic settings.
Findings: DL reconstruction is diagnostically interchangeable with conventional MRI for assessing disc shape, disc position, and effusion, while improving image quality and reducing scan time.
Clinical relevance: DL reconstruction enables faster and more tolerable TMJ MRI workflows without compromising diagnostic accuracy, facilitating broader adoption in clinical settings where long scan times and motion artifacts often limit diagnostic efficiency.
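As a reproducibility note, the paired comparisons described above (ordinal quality scores via the Wilcoxon signed-rank test, SNR/CNR via paired t-tests) map directly onto SciPy; the sketch below uses invented paired values purely for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon, ttest_rel

# Hypothetical paired measurements for the same joints under both protocols.
quality_conv = np.array([3, 4, 3, 4, 3, 4, 3, 4])   # 5-point scores, conventional reconstruction
quality_dl   = np.array([4, 4, 4, 5, 4, 4, 4, 5])   # 5-point scores, DL reconstruction
snr_conv = np.array([21.3, 19.8, 23.1, 20.5, 22.0])
snr_dl   = np.array([27.6, 24.9, 29.2, 26.1, 28.3])

print(wilcoxon(quality_dl, quality_conv))  # ordinal scores: Wilcoxon signed-rank test
print(ttest_rel(snr_dl, snr_conv))         # continuous SNR: paired t-test
```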

SAFNet: a spatial adaptive fusion network for dual-domain undersampled MRI reconstruction.

Huo Y, Zhang H, Ge D, Ren Z

PubMed | Sep 25, 2025
Undersampled magnetic resonance imaging (MRI) reconstruction reduces scanning time while preserving image quality, improving patient comfort and clinical efficiency. Current parallel reconstruction strategies leverage information from both the k-space and image domains to improve feature extraction and accuracy. However, most existing dual-domain reconstruction methods rely on simplistic fusion strategies that ignore spatial feature variations, suffer from constrained receptive fields that limit the modeling of complex anatomical structures, and employ static frameworks lacking adaptability to the heterogeneous artifact profiles induced by diverse undersampling patterns. This paper introduces a Spatial Adaptive Fusion Network (SAFNet) for dual-domain undersampled MRI reconstruction. SAFNet comprises two parallel reconstruction branches. A Dynamic Perception Initialization Module (DPIM) in each encoder enriches receptive fields for multi-scale information capture. Spatial Adaptive Fusion Modules (SAFM) within each branch's decoder achieve pixel-wise adaptive fusion of dual-domain features and incorporate original magnitude information, ensuring faithful preservation of intensity details. The Weighted Shortcut Module (WSM) enables dynamic strategy adaptation by scaling shortcut connections to adaptively balance residual learning and direct reconstruction. Experiments demonstrate SAFNet's superior accuracy and adaptability over state-of-the-art methods, offering valuable insights for image reconstruction and multimodal information fusion.
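The SAFM internals are not given in the abstract; the sketch below shows one common way to realize pixel-wise adaptive fusion of two feature maps, a learned sigmoid gate that blends the image-domain and k-space-branch features per pixel, and should be read as an illustration rather than the authors' module.

```python
import torch
import torch.nn as nn

class PixelwiseAdaptiveFusion(nn.Module):
    """Blend two feature maps with a per-pixel gate learned from both branches."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_image: torch.Tensor, feat_kspace: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat_image, feat_kspace], dim=1))
        return g * feat_image + (1.0 - g) * feat_kspace

a, b = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
print(PixelwiseAdaptiveFusion(32)(a, b).shape)  # torch.Size([1, 32, 64, 64])
```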