Page 18 of 72720 results

Patch2Loc: Learning to Localize Patches for Unsupervised Brain Lesion Detection

Hassan Baker, Austin J. Brockmeier

arXiv preprint · Jun 25, 2025
Detecting brain lesions as abnormalities observed in magnetic resonance imaging (MRI) is essential for diagnosis and treatment. In the search for abnormalities such as tumors and malformations, radiologists may benefit from computer-aided diagnostics that use computer vision systems trained with machine learning to segment normal tissue from abnormal brain tissue. While supervised learning methods require annotated lesions, we propose a new unsupervised approach (Patch2Loc) that learns from normal patches taken from structural MRI. We train a neural network model to map a patch back to its spatial location within a slice of the brain volume. During inference, abnormal patches are detected by the relatively higher error and/or variance of the location prediction. This generates a heatmap that can be integrated into pixel-wise methods to achieve finer-grained segmentation. We demonstrate the ability of our model to segment abnormal brain tissues by applying our approach to the detection of tumor tissues in MRI on T2-weighted images from the BraTS2021 and MSLUB datasets and T1-weighted images from the ATLAS and WMH datasets. We show that it outperforms the state of the art in unsupervised segmentation. The codebase for this work can be found on our GitHub page: https://github.com/bakerhassan/Patch2Loc
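The abstract's scoring rule (flag patches whose predicted location has high error and/or high variance) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: `patch_anomaly_score` and the toy predictions are made up here, assuming K stochastic location predictions per patch (e.g., from test-time dropout).

```python
import numpy as np

def patch_anomaly_score(pred_locs, true_loc):
    """Score a patch by how poorly its slice location is predicted.

    pred_locs: (K, 2) array of K stochastic (row, col) predictions
               for the same patch (e.g., from dropout at test time).
    true_loc:  (2,) the patch's actual position in the slice.
    Returns error + variance, so patches that are mislocalized or
    predicted inconsistently both score high.
    """
    pred_locs = np.asarray(pred_locs, dtype=float)
    error = np.linalg.norm(pred_locs.mean(axis=0) - np.asarray(true_loc))
    variance = pred_locs.var(axis=0).sum()
    return error + variance

# Toy check: a well-localized patch vs. a mislocalized, uncertain one.
normal = patch_anomaly_score([[10.1, 20.0], [9.9, 20.2]], [10, 20])
lesion = patch_anomaly_score([[40.0, 5.0], [12.0, 55.0]], [10, 20])
```

Accumulating these scores over sliding patches yields the heatmap the abstract describes.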

Preoperative Assessment of Lymph Node Metastasis in Rectal Cancer Using Deep Learning: Investigating the Utility of Various MRI Sequences.

Zhao J, Zheng P, Xu T, Feng Q, Liu S, Hao Y, Wang M, Zhang C, Xu J

PubMed · Jun 24, 2025
This study aimed to develop a deep learning (DL) model based on three-dimensional multi-parametric magnetic resonance imaging (mpMRI) for preoperative assessment of lymph node metastasis (LNM) in rectal cancer (RC) and to investigate the contribution of different MRI sequences. A total of 613 eligible patients with RC from four medical centres who underwent preoperative mpMRI were retrospectively enrolled and randomly assigned to training (n = 372), validation (n = 106), internal test (n = 88) and external test (n = 47) cohorts. A multi-parametric multi-scale EfficientNet (MMENet) was designed to effectively extract LNM-related features from mpMRI for preoperative LNM assessment. Its performance was compared with other DL models and radiologists using metrics of area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity, specificity and average precision with 95% confidence interval (CI). To investigate the utility of various MRI sequences, the performances of the mono-parametric model and the MMENet with different sequence combinations as input were compared. The MMENet using a combination of T2WI, DWI and DCE sequences achieved an AUC of 0.808 (95% CI 0.720-0.897) with an ACC of 71.6% (95% CI 62.3-81.0) in the internal test cohort and an AUC of 0.782 (95% CI 0.636-0.925) with an ACC of 76.6% (95% CI 64.6-88.6) in the external test cohort, outperforming the mono-parametric model, the MMENet with other sequence combinations and the radiologists. The MMENet, leveraging a combination of T2WI, DWI and DCE sequences, can accurately assess LNM in RC preoperatively and holds great promise for automated evaluation of LNM in clinical practice.

Advances and Integrations of Computer-Assisted Planning, Artificial Intelligence, and Predictive Modeling Tools for Laser Interstitial Thermal Therapy in Neurosurgical Oncology.

Warman A, Moorthy D, Gensler R, Horowtiz MA, Ellis J, Tomasovic L, Srinivasan E, Ahmed K, Azad TD, Anderson WS, Rincon-Torroella J, Bettegowda C

PubMed · Jun 24, 2025
Laser interstitial thermal therapy (LiTT) has emerged as a minimally invasive, MRI-guided treatment of brain tumors that are otherwise considered inoperable because of their location or the patient's poor surgical candidacy. By directing thermal energy at neoplastic lesions while minimizing damage to surrounding healthy tissue, LiTT offers promising therapeutic outcomes for both newly diagnosed and recurrent tumors. However, challenges such as postprocedural edema and unpredictable real-time heat diffusion near blood vessels and ventricles underscore the need for improved planning and monitoring. Incorporating artificial intelligence (AI) presents a viable solution to many of these obstacles. AI has already demonstrated effectiveness in optimizing surgical trajectories, predicting seizure-free outcomes in epilepsy cases, and generating heat distribution maps to guide real-time ablation. This technology could be similarly deployed in neurosurgical oncology to identify patients most likely to benefit from LiTT, refine trajectory planning, and predict tissue-specific heat responses. Despite promising initial studies, further research is needed to establish the robust data sets and clinical trials necessary to develop and validate AI-driven LiTT protocols. Such advancements have the potential to bolster LiTT's efficacy, minimize complications, and ultimately transform the neurosurgical management of primary and metastatic brain tumors.

Systematic Review of Pituitary Gland and Pituitary Adenoma Automatic Segmentation Techniques in Magnetic Resonance Imaging

Mubaraq Yakubu, Navodini Wijethilake, Jonathan Shapey, Andrew King, Alexander Hammers

arXiv preprint · Jun 24, 2025
Purpose: Accurate segmentation of both the pituitary gland and adenomas from magnetic resonance imaging (MRI) is essential for diagnosis and treatment of pituitary adenomas. This systematic review evaluates automatic segmentation methods for improving the accuracy and efficiency of MRI-based segmentation of pituitary adenomas and the gland itself. Methods: We reviewed 34 studies that employed automatic and semi-automatic segmentation methods. We extracted and synthesized data on segmentation techniques and performance metrics (such as Dice overlap scores). Results: The majority of reviewed studies utilized deep learning approaches, with U-Net-based models being the most prevalent. Automatic methods yielded Dice scores of 0.19-89.00% for pituitary gland and 4.60-96.41% for adenoma segmentation. Semi-automatic methods reported 80.00-92.10% for pituitary gland and 75.90-88.36% for adenoma segmentation. Conclusion: Most studies did not report important metrics such as MR field strength, age and adenoma size. Automated segmentation techniques such as U-Net-based models show promise, especially for adenoma segmentation, but further improvements are needed to achieve consistently good performance in small structures like the normal pituitary gland. Continued innovation and larger, diverse datasets are likely critical to enhancing clinical applicability.
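The Dice overlap score used throughout this review is the standard set-overlap metric, 2|A∩B| / (|A| + |B|). A minimal illustrative implementation (not taken from any of the reviewed studies):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).

    A small eps keeps the score defined when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

For example, masks `[1,1,0,0]` and `[1,0,1,0]` share one voxel out of two each, giving a Dice of 0.5; identical masks give 1.0.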

ReCoGNet: Recurrent Context-Guided Network for 3D MRI Prostate Segmentation

Ahmad Mustafa, Reza Rastegar, Ghassan AlRegib

arXiv preprint · Jun 24, 2025
Prostate gland segmentation from T2-weighted MRI is a critical yet challenging task in clinical prostate cancer assessment. While deep learning-based methods have significantly advanced automated segmentation, most conventional approaches, particularly 2D convolutional neural networks (CNNs), fail to leverage inter-slice anatomical continuity, limiting their accuracy and robustness. Fully 3D models offer improved spatial coherence but require large amounts of annotated data, which is often impractical in clinical settings. To address these limitations, we propose a hybrid architecture that models MRI sequences as spatiotemporal data. Our method uses a deep, pretrained DeepLabV3 backbone to extract high-level semantic features from each MRI slice and a recurrent convolutional head, built with ConvLSTM layers, to integrate information across slices while preserving spatial structure. This combination enables context-aware segmentation with improved consistency, particularly in data-limited and noisy imaging conditions. We evaluate our method on the PROMISE12 benchmark under both clean and contrast-degraded test settings. Compared to state-of-the-art 2D and 3D segmentation models, our approach demonstrates superior performance in terms of precision, recall, Intersection over Union (IoU), and Dice Similarity Coefficient (DSC), highlighting its potential for robust clinical deployment.
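The core idea, treating the slice axis as a time axis and carrying context through a gated convolutional recurrence, can be illustrated with a toy numpy sketch. This is not ReCoGNet itself: the weights `Wz`, `Uz`, `Wh`, `Uh` are made-up 1x1-conv (channel-mixing) matrices, and the per-slice feature maps stand in for DeepLabV3 outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8                  # toy channel count and slice size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 1x1-conv weights for a minimal gated recurrence over slices.
Wz, Uz = rng.normal(0, 0.1, (C, C)), rng.normal(0, 0.1, (C, C))
Wh, Uh = rng.normal(0, 0.1, (C, C)), rng.normal(0, 0.1, (C, C))

def mix(weights, fmap):
    """Apply a 1x1 convolution (channel mixing) to a (C, H, W) map."""
    return np.einsum('oc,chw->ohw', weights, fmap)

def integrate_slices(slice_feats):
    """Fuse per-slice feature maps with a gated recurrence over slices,
    preserving the spatial layout (H, W) of each map."""
    h = np.zeros((C, H, W))
    for f in slice_feats:                      # f: (C, H, W) per slice
        z = sigmoid(mix(Wz, f) + mix(Uz, h))   # update gate
        h_new = np.tanh(mix(Wh, f) + mix(Uh, h))
        h = (1 - z) * h + z * h_new            # carry inter-slice context
    return h

feats = [rng.normal(size=(C, H, W)) for _ in range(5)]  # 5 MRI slices
fused = integrate_slices(feats)
```

A real ConvLSTM uses spatial (e.g., 3x3) convolutions and a separate cell state; the sketch keeps only the gating and recurrence that give the head its inter-slice memory.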

A Multicentre Comparative Analysis of Radiomics, Deep-learning, and Fusion Models for Predicting Postpartum Hemorrhage.

Zhang W, Zhao X, Meng L, Lu L, Guo J, Cheng M, Tian H, Ren N, Yin J, Zhang X

PubMed · Jun 24, 2025
This study compared the capabilities of two-dimensional (2D) and three-dimensional (3D) deep learning (DL), radiomics, and fusion models to predict postpartum hemorrhage (PPH), using sagittal T2-weighted MRI images. This retrospective study successively included 581 pregnant women suspected of placenta accreta spectrum (PAS) disorders who underwent placental MRI assessment between May 2018 and June 2024 in two hospitals. Clinical information was collected, and MRI images were analyzed by two experienced radiologists. The study cohort was divided into training (hospital 1, n=470) and validation (hospital 2, n=160) sets. Radiomics features were extracted after image segmentation to develop the radiomics model, 2D and 3D DL models were developed, and two fusion strategies (early and late fusion) were used to construct the fusion models. ROC curves, AUC, sensitivity, specificity, calibration curves, and decision curve analysis were used to evaluate the models' performance. The late-fusion model (DLRad_LF) yielded the highest performance, with AUCs of 0.955 (95% CI: 0.935-0.974) and 0.898 (95% CI: 0.848-0.949) in the training and validation sets, respectively. In the validation set, the AUC of the 3D DL model was significantly larger than those of the radiomics (AUC=0.676, P<0.001) and 2D DL (AUC=0.752, P<0.001) models. Subgroup analysis found that placenta previa and PAS did not impact the models' performance significantly. The DLRad_LF model could predict PPH reasonably accurately based on sagittal T2-weighted MRI images.
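Late fusion, the strategy behind the best-performing DLRad_LF model, combines each model's output after it has made its own prediction. The abstract does not specify the exact combination rule, so the sketch below assumes a simple weighted average of probabilities; `late_fusion` and the patient probabilities are illustrative, not from the paper.

```python
import numpy as np

def late_fusion(prob_a, prob_b, w=0.5):
    """Decision-level (late) fusion: combine per-model probabilities
    with a weighted average after each model has predicted on its own."""
    return w * np.asarray(prob_a) + (1 - w) * np.asarray(prob_b)

# Toy PPH probabilities from a radiomics model and a 3D DL model
# for four hypothetical patients.
p_radiomics = np.array([0.30, 0.80, 0.55, 0.10])
p_deep      = np.array([0.40, 0.90, 0.55, 0.20])
p_fused = late_fusion(p_radiomics, p_deep)
```

Early fusion would instead concatenate the radiomics features and DL features before a single classifier; late fusion keeps the models independent and merges only their decisions.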

From Faster Frames to Flawless Focus: Deep Learning HASTE in Postoperative Single Sequence MRI.

Hosse C, Fehrenbach U, Pivetta F, Malinka T, Wagner M, Walter-Rittel T, Gebauer B, Kolck J, Geisel D

PubMed · Jun 24, 2025
This study evaluates the feasibility of a novel deep learning-accelerated half-Fourier single-shot turbo spin-echo sequence (HASTE-DL) compared to the conventional HASTE sequence (HASTE-S) in postoperative single-sequence MRI for the detection of fluid collections following abdominal surgery. As small fluid collections are difficult to visualize using other techniques, HASTE-DL may offer particular advantages in this clinical context. A retrospective analysis was conducted on 76 patients (mean age 65±11.69 years) who underwent abdominal MRI for suspected septic foci following abdominal surgery. Imaging was performed using 3-T MRI scanners, and both sequences were analyzed in terms of image quality, contrast, sharpness, and artifact presence. Quantitative assessments focused on fluid collection detectability, while qualitative assessments evaluated visualization of critical structures. Inter-reader agreement was measured using Cohen's kappa coefficient, and statistical significance was determined with the Mann-Whitney U test. HASTE-DL achieved a 46% reduction in scan time compared to HASTE-S, while significantly improving overall image quality (p<0.001), contrast (p<0.001), and sharpness (p<0.001). The inter-reader agreement for HASTE-DL was excellent (κ=0.960), with perfect agreement on overall image quality and fluid collection detection (κ=1.0). Fluid detectability and characterization scores were higher for HASTE-DL, and visualization of critical structures was significantly enhanced (p<0.001). No relevant artifacts were observed in either sequence. HASTE-DL offers superior image quality, improved visualization of critical structures, such as drainages, vessels, bile and pancreatic ducts, and reduced acquisition time, making it an effective alternative to the standard HASTE sequence, and a promising complementary tool in the postoperative imaging workflow.
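Cohen's kappa, used here for inter-reader agreement, corrects observed agreement for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A minimal illustrative implementation (in practice a library routine such as scikit-learn's `cohen_kappa_score` would be used):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores.

    p_o is the observed agreement rate; p_e is the agreement expected
    by chance from each rater's marginal category frequencies.
    """
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (po - pe) / (1 - pe)
```

Perfect agreement gives κ = 1; raters who agree no more often than chance give κ = 0, which is why κ = 0.960 counts as excellent agreement.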

Adaptive Mask-guided K-space Diffusion for Accelerated MRI Reconstruction

Qinrong Cai, Yu Guan, Zhibo Chen, Dong Liang, Qiuyun Fan, Qiegen Liu

arXiv preprint · Jun 23, 2025
As the deep learning revolution marches on, masked modeling has emerged as a distinctive approach that involves predicting parts of the original data that are proportionally masked during training, and has demonstrated exceptional performance in multiple fields. Magnetic Resonance Imaging (MRI) reconstruction is a critical task in medical imaging that seeks to recover high-quality images from under-sampled k-space data. However, previous MRI reconstruction strategies usually optimized the entire image domain or k-space, without considering the importance of different frequency regions in k-space. This work introduces a diffusion model based on adaptive masks (AMDM), which utilizes adaptive adjustment of the frequency distribution of k-space data to develop a hybrid mask mechanism that adapts to different k-space inputs. This enables the effective separation of high-frequency and low-frequency components, producing diverse frequency-specific representations. Additionally, the k-space frequency distribution informs the generation of adaptive masks, which, in turn, guide a closed-loop diffusion process. Experimental results verified the ability of this method to learn specific frequency information and thereby improve the quality of MRI reconstruction, providing a flexible framework for optimizing k-space data using masks in the future.
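The frequency separation that AMDM adapts per input can be illustrated in its simplest fixed form: mask the centered k-space with a radial cutoff and reconstruct each band separately. This sketch shows only that basic step, not the adaptive or diffusion machinery; `split_kspace` and the cutoff radius are illustrative.

```python
import numpy as np

def split_kspace(image, radius):
    """Split an image's k-space into low- and high-frequency parts
    with a centered radial mask, returning both reconstructions."""
    k = np.fft.fftshift(np.fft.fft2(image))    # center the DC component
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)
    low_mask = dist <= radius                  # keep center: low frequencies
    low = np.fft.ifft2(np.fft.ifftshift(k * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(k * ~low_mask)).real
    return low, high

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))                # stand-in for an MRI slice
low, high = split_kspace(img, radius=6)
```

Because the two masks partition k-space, the low- and high-frequency reconstructions sum back to the original image; an adaptive scheme replaces the fixed radius with a mask derived from each input's frequency distribution.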

Multimodal deep learning for predicting neoadjuvant treatment outcomes in breast cancer: a systematic review.

Krasniqi E, Filomeno L, Arcuri T, Ferretti G, Gasparro S, Fulvi A, Roselli A, D'Onofrio L, Pizzuti L, Barba M, Maugeri-Saccà M, Botti C, Graziano F, Puccica I, Cappelli S, Pelle F, Cavicchi F, Villanucci A, Paris I, Calabrò F, Rea S, Costantini M, Perracchio L, Sanguineti G, Takanen S, Marucci L, Greco L, Kayal R, Moscetti L, Marchesini E, Calonaci N, Blandino G, Caravagna G, Vici P

PubMed · Jun 23, 2025
Pathological complete response (pCR) to neoadjuvant systemic therapy (NAST) is an established prognostic marker in breast cancer (BC). Multimodal deep learning (DL), integrating diverse data sources (radiology, pathology, omics, clinical), holds promise for improving pCR prediction accuracy. This systematic review synthesizes evidence on multimodal DL for pCR prediction and compares its performance against unimodal DL. Following PRISMA, we searched PubMed, Embase, and Web of Science (January 2015-April 2025) for studies applying DL to predict pCR in BC patients receiving NAST, using data from radiology, digital pathology (DP), multi-omics, and/or clinical records, and reporting AUC. Data on study design, DL architectures, and performance (AUC) were extracted. A narrative synthesis was conducted due to heterogeneity. Fifty-one studies, mostly retrospective (90.2%, median cohort 281), were included. Magnetic resonance imaging and DP were common primary modalities. Multimodal approaches were used in 52.9% of studies, often combining imaging with clinical data. Convolutional neural networks were the dominant architecture (88.2%). Longitudinal imaging improved prediction over baseline-only (median AUC 0.91 vs. 0.82). Overall, the median AUC across studies was 0.88, with 35.3% achieving AUC ≥ 0.90. Multimodal models showed a modest but consistent improvement over unimodal approaches (median AUC 0.88 vs. 0.83). Omics and clinical text were rarely primary DL inputs. DL models demonstrate promising accuracy for pCR prediction, especially when integrating multiple modalities and longitudinal imaging. However, significant methodological heterogeneity, reliance on retrospective data, and limited external validation hinder clinical translation. Future research should prioritize prospective validation, integration of underutilized data (multi-omics, clinical), and explainable AI to advance DL predictors to the clinical setting.

DCLNet: Double Collaborative Learning Network on Stationary-Dynamic Functional Brain Network for Brain Disease Classification.

Zhou J, Jie B, Wang Z, Zhang Z, Bian W, Yang Y, Li H, Sun F, Liu M

PubMed · Jun 23, 2025
Stationary functional brain networks (sFBNs) and dynamic functional brain networks (dFBNs) derived from resting-state functional MRI characterize the complex interactions of the human brain from different aspects and could offer complementary information for brain disease analysis. Most current studies focus on sFBN or dFBN analysis alone, thus limiting the performance of brain network analysis. A few works have explored integrating sFBN and dFBN to identify brain diseases, and achieved better performance than conventional methods. However, these studies still ignore some valuable discriminative information, such as the distribution information of subjects between and within categories. This paper presents a Double Collaborative Learning Network (DCLNet), which takes advantage of both a collaborative encoder and collaborative contrastive learning, to learn complementary information of sFBN and dFBN and distribution information of subjects between and within categories for brain disease classification. Specifically, we first construct sFBN and dFBN using traditional correlation-based methods with rs-fMRI data. Then, we build a collaborative encoder to extract brain network features at different levels (i.e., connectivity-based, brain-region-based, and brain-network-based features), and design a prune-graft transformer module to embed the complementary information of the features at each level between the two kinds of FBNs. We also develop a collaborative contrastive learning module to capture the distribution information of subjects between and within different categories, thereby learning more discriminative features of brain networks. We evaluate the DCLNet on two real brain disease datasets with rs-fMRI data, with experimental results demonstrating the superiority of the proposed method.
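The contrastive objective described here (pull same-category subjects together in embedding space, push different categories apart) can be illustrated with a generic supervised contrastive loss. This is not DCLNet's exact module; `supervised_contrastive_loss`, the temperature `tau`, and the toy embeddings are assumptions for illustration.

```python
import numpy as np

def supervised_contrastive_loss(feats, labels, tau=0.5):
    """Generic supervised contrastive loss.

    feats:  (N, D) subject embeddings (normalized internally).
    labels: (N,) category ids.
    For each anchor, same-category subjects are positives; the loss is
    low when positives are close and other subjects are far.
    """
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau
    np.fill_diagonal(sim, -np.inf)             # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss, n_anchors = 0.0, 0
    for i, y in enumerate(labels):
        pos = (labels == y) & (np.arange(len(labels)) != i)
        if pos.any():
            loss += -log_prob[i, pos].mean()
            n_anchors += 1
    return loss / n_anchors

# Toy check: well-clustered categories should incur a lower loss
# than the same embeddings with scrambled category labels.
feats = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
loss_clustered = supervised_contrastive_loss(feats, np.array([0, 0, 1, 1]))
loss_mixed = supervised_contrastive_loss(feats, np.array([0, 1, 0, 1]))
```

Minimizing such a loss is one way to encode the between- and within-category distribution information the abstract highlights.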
