
Advancing Early Detection of Major Depressive Disorder Using Multisite Functional Magnetic Resonance Imaging Data: Comparative Analysis of AI Models.

Mansoor M, Ansari K

PubMed · Jul 15, 2025
Major depressive disorder (MDD) is a highly prevalent mental health condition with significant public health implications. Early detection is crucial for timely intervention, but current diagnostic methods often rely on subjective clinical assessments, leading to delayed or inaccurate diagnoses. Advances in neuroimaging and machine learning (ML) offer the potential for objective and accurate early detection. This study aimed to develop and validate ML models using multisite functional magnetic resonance imaging (fMRI) data for the early detection of MDD, compare their performance, and evaluate their clinical applicability. We used fMRI data from 1200 participants (600 with early-stage MDD and 600 healthy controls) across 3 public datasets. In total, 4 ML models-support vector machine, random forest, gradient boosting machine, and deep neural network-were trained and evaluated using a 5-fold cross-validation framework. Models were assessed for accuracy, sensitivity, specificity, F1-score, and area under the receiver operating characteristic curve (AUROC). Shapley additive explanation (SHAP) values and activation maximization techniques were applied to interpret model predictions. The deep neural network demonstrated superior performance, with an accuracy of 89% (95% CI 86%-92%) and an AUROC of 0.95 (95% CI 0.93-0.97), outperforming traditional diagnostic methods by 15% (P<.001). Key predictive features included altered functional connectivity between the dorsolateral prefrontal cortex, anterior cingulate cortex, and limbic regions. The model achieved 78% sensitivity (95% CI 71%-85%) in identifying individuals who developed MDD within a 2-year follow-up period, demonstrating good generalizability across datasets. Our findings highlight the potential of artificial intelligence-driven approaches for the early detection of MDD, with implications for improving early intervention strategies. While promising, these tools should complement rather than replace clinical expertise, with careful consideration of ethical implications such as patient privacy and model bias.
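As a hedged illustration of the evaluation protocol described above (not the authors' code), the sketch below benchmarks three of the four model families with 5-fold cross-validation in scikit-learn; the synthetic `X` stands in for per-participant functional-connectivity features, and the deep neural network is omitted since it would live in a separate DL framework.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 200))   # stand-in for fMRI connectivity features
y = np.repeat([0, 1], 600)         # 600 healthy controls, 600 early-stage MDD

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv, scoring=["accuracy", "roc_auc", "f1"])
    summary = {k: round(v.mean(), 3) for k, v in scores.items() if k.startswith("test_")}
    print(name, summary)
```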

OMT and tensor SVD-based deep learning model for segmentation and predicting genetic markers of glioma: A multicenter study.

Zhu Z, Wang H, Li T, Huang TM, Yang H, Tao Z, Tan ZH, Zhou J, Chen S, Ye M, Zhang Z, Li F, Liu D, Wang M, Lu J, Zhang W, Li X, Chen Q, Jiang Z, Chen F, Zhang X, Lin WW, Yau ST, Zhang B

PubMed · Jul 15, 2025
Glioma is the most common primary malignant brain tumor, and preoperative genetic profiling is essential for the management of glioma patients. Our study focused on tumor region segmentation and on predicting World Health Organization (WHO) grade, isocitrate dehydrogenase (IDH) mutation, and 1p/19q codeletion status using deep learning models on preoperative MRI. To achieve accurate tumor segmentation, we developed an optimal mass transport (OMT) approach that transforms irregular MRI brain images into tensors. In addition, we proposed an algebraic preclassification (APC) model utilizing multimode OMT tensor singular value decomposition (SVD) to estimate preclassification probabilities. The fully automated deep learning model, named OMT-APC, was used for multitask classification. Our study incorporated preoperative brain MRI data from 3,565 glioma patients across 16 datasets spanning Asia, Europe, and America. Of these, 2,551 patients from 5 datasets were used for training and internal validation, while 1,014 patients from 11 datasets, including 242 patients from The Cancer Genome Atlas (TCGA), served as an independent external test set. The OMT segmentation model achieved a mean lesion-wise Dice score of 0.880. The OMT-APC model was evaluated on the TCGA dataset, achieving accuracies of 0.855, 0.917, and 0.809, with AUC scores of 0.845, 0.908, and 0.769 for WHO grade, IDH mutation, and 1p/19q codeletion, respectively, outperforming four radiologists in all tasks. These results highlight the effectiveness of our OMT and tensor SVD-based methods in brain tumor genetic profiling, suggesting promising applications for algebraic and geometric methods in medical image analysis.
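For intuition about the tensor SVD component, the sketch below implements the standard FFT-based t-SVD low-rank approximation in NumPy; the paper's multimode OMT tensor SVD is a more general construction, so treat this only as a minimal illustration of slice-wise SVD in the Fourier domain.

```python
import numpy as np

def tsvd_lowrank(tensor: np.ndarray, rank: int) -> np.ndarray:
    """Rank-r approximation via the FFT-based t-SVD: FFT along mode 3,
    truncated SVD of every frontal slice, then inverse FFT."""
    T = np.fft.fft(tensor, axis=2)
    out = np.empty_like(T)
    for k in range(tensor.shape[2]):
        u, s, vh = np.linalg.svd(T[:, :, k], full_matrices=False)
        out[:, :, k] = (u[:, :rank] * s[:rank]) @ vh[:rank]
    return np.fft.ifft(out, axis=2).real

A = np.random.default_rng(1).normal(size=(32, 32, 8))  # e.g., a small MRI patch stack
err_full = np.abs(A - tsvd_lowrank(A, rank=32)).max()  # full rank reconstructs exactly
err_low = np.abs(A - tsvd_lowrank(A, rank=8)).max()    # truncation trades error for compression
print(f"full-rank error {err_full:.2e}, rank-8 error {err_low:.2e}")
```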

Fully Automated Online Adaptive Radiation Therapy Decision-Making for Cervical Cancer Using Artificial Intelligence.

Sun S, Gong X, Cheng S, Cao R, He S, Liang Y, Yang B, Qiu J, Zhang F, Hu K

PubMed · Jul 15, 2025
Interfraction variations during radiation therapy pose a challenge for patients with cervical cancer, highlighting the benefits of online adaptive radiation therapy (oART). However, adaptation decisions rely on subjective image reviews by physicians, leading to high interobserver variability and inefficiency. This study explores the feasibility of using artificial intelligence for decision-making in oART. A total of 24 patients with cervical cancer who underwent 671 fractions of daily fan-beam computed tomography (FBCT)-guided oART were included in this study, with each fraction consisting of a daily FBCT image series and a pair of scheduled and adaptive plans. Dose deviations of scheduled plans exceeding predefined criteria were labeled as "trigger," otherwise as "nontrigger." A data set comprising 588 fractions from 21 patients was used for model development. For the machine learning (ML) model, 101 morphologic, gray-level, and dosimetric features were extracted, with feature selection by the least absolute shrinkage and selection operator (LASSO) and classification by a support vector machine (SVM). For deep learning, a Siamese network approach was used: the deep learning model of contour (DL_C) used only imaging data and contours, whereas the deep learning model of contour and dose (DL_D) also incorporated dosimetric data. A 5-fold cross-validation strategy was employed for model training and testing, and model performance was evaluated using the area under the curve (AUC), accuracy, precision, and recall. An independent data set comprising 83 fractions from 3 patients was used for model evaluation, with predictions compared against trigger labels assigned by 3 experienced radiation oncologists. Based on dosimetric labels, the 671 fractions were classified into 492 trigger and 179 nontrigger cases. The ML model selected 39 key features, primarily reflecting morphologic and gray-level changes in the clinical target volume (CTV) of the uterus (CTV_U); the CTV of the cervix, vagina, and parametrial tissues (CTV_C); and the small intestine. It achieved an AUC of 0.884, with accuracy, precision, and recall of 0.825, 0.824, and 0.827, respectively. The DL_C model demonstrated superior performance, with an AUC of 0.917, accuracy of 0.869, precision of 0.860, and recall of 0.881. The DL_D model, which incorporated additional dosimetric data, exhibited a slight decline in performance compared with DL_C. Heatmap analyses indicated that for trigger fractions, the deep learning models focused on regions where the reference CT's CTV_U did not fully encompass the daily FBCT's CTV_U. Evaluation on the independent data set confirmed the robustness of all models. The weighted model's prediction accuracy significantly outperformed the physician consensus (0.855 vs 0.795), with comparable precision (0.917 vs 0.925) but substantially higher recall (0.887 vs 0.790). This study proposes machine learning and deep learning models to identify treatment fractions that may benefit from adaptive replanning in radical radiation therapy for cervical cancer, providing a promising decision-support tool to assist clinicians in determining when to trigger the oART workflow during treatment.
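A minimal sketch of the ML branch described above (LASSO-based feature selection followed by an SVM), assuming scikit-learn and placeholder data; the 101 features and the 39-feature budget mirror the abstract, but the feature values and labels here are synthetic.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(588, 101))        # 101 morphologic/gray-level/dosimetric features
y = rng.integers(0, 2, size=588)       # 1 = "trigger", 0 = "nontrigger" (synthetic)

pipeline = make_pipeline(
    StandardScaler(),
    # LASSO coefficients rank the features; keep the 39 largest, as in the abstract.
    SelectFromModel(LassoCV(cv=5), threshold=-np.inf, max_features=39),
    SVC(kernel="rbf", class_weight="balanced"),  # balanced: 492 vs 179 labels
)
auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {auc.mean():.3f}")
```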

Direct-to-Treatment Adaptive Radiation Therapy: Live Planning of Spine Metastases Using Novel Cone Beam Computed Tomography.

McGrath KM, MacDonald RL, Robar JL, Cherpak A

PubMed · Jul 15, 2025
Cone beam computed tomography (CBCT)-based online adaptive radiation therapy is typically carried out using a synthetic CT (sCT) created through deformable registration between the patient-specific fan-beam computed tomography (FBCT) and the daily CBCT. Ethos 2.0 allows plan calculation directly on HyperSight CBCT and uses artificial intelligence-informed tools for daily contouring without the use of a priori information. This breaks an important link between daily adaptive sessions and initial reference plan preparation. This study explores adaptive radiation therapy for spine metastases without prior patient-specific imaging or treatment planning. We hypothesize that adaptive plans can be created when patient-specific positioning and anatomy are incorporated only once the patient has arrived at the treatment unit. An Ethos 2.0 emulator was used to create initial reference plans on 10 patient-specific FBCTs. Reference plans were also created using FBCTs of (1) a library patient with clinically acceptable contours and (2) a water-equivalent phantom with placeholder contours. Adaptive sessions were simulated for each patient using the 3 different starting points, and the resulting adaptive plans were compared to determine the significance of patient-specific information acquired prior to the start of treatment. The library patient and phantom reference plans did not generate adaptive plans that differed significantly from the standard workflow on any clinical constraint for target coverage or organ-at-risk sparing (P > .2). Gamma comparison between the 3 adaptive plans for each patient (3%/3 mm) demonstrated overall similarity of dose distributions (pass rate > 95%) for all but 2 cases. Failures occurred mainly in low-dose regions, highlighting differences in the fluence used to achieve the same clinical goals. This study confirmed the feasibility of a procedure for treatment of spine metastases that does not rely on previously acquired patient-specific imaging, contours, or plans. Reference-free direct-to-treatment workflows are possible and can condense a multistep process to a single location with dedicated resources.
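The 3%/3 mm gamma comparison mentioned above can be made concrete with a simple 1D global gamma index, sketched below in NumPy; clinical evaluations use full 3D implementations, so this is illustrative only.

```python
import numpy as np

def gamma_1d(x, ref, evl, dose_pct=3.0, dta_mm=3.0):
    """Global 1D gamma index of an evaluated dose profile against a reference."""
    dd = dose_pct / 100.0 * ref.max()        # global dose-difference criterion
    gamma = np.empty_like(ref)
    for i in range(len(x)):
        dist = (x - x[i]) / dta_mm           # distance-to-agreement term
        diff = (evl - ref[i]) / dd           # dose-difference term
        gamma[i] = np.sqrt(dist**2 + diff**2).min()
    return gamma

x = np.linspace(0, 100, 501)                 # positions (mm)
ref = np.exp(-(((x - 50) / 20) ** 2))        # toy reference dose profile
evl = 1.01 * np.exp(-(((x - 51) / 20) ** 2)) # slightly shifted/rescaled plan
g = gamma_1d(x, ref, evl)
print(f"gamma pass rate: {100 * (g <= 1).mean():.1f}%")  # >95% counts as similar
```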

Artificial Intelligence-Empowered Multistep Integrated Radiation Therapy Workflow for Nasopharyngeal Carcinoma.

Yang YX, Yang X, Jiang XB, Lin L, Wang GY, Sun WZ, Zhang K, Li BH, Li H, Jia LC, Wei ZQ, Liu YF, Fu DN, Tang JX, Zhang W, Zhou JJ, Diao WC, Wang YJ, Chen XM, Xu CD, Lin LW, Wu JY, Wu JW, Peng LX, Pan JF, Liu BZ, Feng C, Huang XY, Zhou GQ, Sun Y

PubMed · Jul 15, 2025
To establish an artificial intelligence (AI)-empowered multistep integrated (MSI) radiation therapy (RT) workflow for patients with nasopharyngeal carcinoma (NPC) and evaluate its feasibility and clinical performance. Patients with NPC scheduled for the MSI RT workflow were prospectively enrolled. This workflow integrates RT procedures from computed tomography (CT) scan to beam delivery, all performed with the patient on the treatment couch. Workflow performance, tumor response, patient-reported acute toxicities, and quality of life were evaluated. From March 2022 to October 2023, 120 newly diagnosed, nonmetastatic patients with NPC were enrolled. Of these, 117 completed the workflow, with a median duration of 23.2 minutes (range, 16.3-45.8). Median translation errors were 0.2 mm (from CT scan to planning approval) and 0.1 mm (during beam delivery). AI-generated contours required minimal revision for the high-risk clinical target volume and organs at risk; minor revision for the involved cervical lymph nodes and low-risk clinical target volume (median Dice similarity coefficient [DSC], 0.98 and 0.94); and more extensive revision for the gross tumor at the primary site and the involved retropharyngeal lymph nodes (median DSC, 0.84). Of 117 AI-generated plans, 108 (92.3%) passed after the first optimization, with ≥97.8% of target volumes receiving ≥100% of the prescribed dose. Dosimetric constraints were met for most organs at risk, except the thyroid and submandibular glands. One hundred fifteen patients achieved a complete response at week 12 post-RT, while 14 patients rated at least one acute toxicity as "very severe" from the start of RT to week 12 post-RT. The AI-empowered MSI RT workflow for patients with NPC is clinically feasible in a single-institution setting when compared with the standard, human-based RT workflow.
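The contour agreement metric used above, the Dice similarity coefficient, is straightforward to compute on binary masks; the sketch below uses toy cubic "contours" as stand-ins for AI-generated and physician-revised structures.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64, 64), dtype=bool)    # AI-generated contour (toy cube)
auto[20:40, 20:40, 20:40] = True
edit = np.zeros((64, 64, 64), dtype=bool)    # physician-revised contour
edit[22:40, 20:40, 20:40] = True
print(f"DSC = {dice(auto, edit):.2f}")
```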

Patient-Specific Deep Learning Tracking Framework for Real-Time 2D Target Localization in Magnetic Resonance Imaging-Guided Radiation Therapy.

Lombardo E, Velezmoro L, Marschner SN, Rabe M, Tejero C, Papadopoulou CI, Sui Z, Reiner M, Corradini S, Belka C, Kurz C, Riboldi M, Landry G

PubMed · Jul 15, 2025
We propose a tumor tracking framework for 2D cine magnetic resonance imaging (MRI) based on a pair of deep learning (DL) models relying on patient-specific (PS) training. The chosen DL models are: (1) an image registration transformer and (2) an auto-segmentation convolutional neural network (CNN). We collected over 1,400,000 cine MRI frames from 219 patients treated on a 0.35 T MRI-linac, plus 7,500 frames from an additional 35 patients that were manually labeled and subdivided into fine-tuning, validation, and testing sets. The transformer was first trained on the unlabeled data (without segmentations). We then continued training (with segmentations) either on the fine-tuning set or, for PS models, on 8 randomly selected frames from the first 5 seconds of each patient's cine MRI. The PS auto-segmentation CNN was trained from scratch with the same 8 frames for each patient, without pre-training. Furthermore, we implemented B-spline image registration as a conventional model, as well as different baselines. Output segmentations of all models were compared on the testing set using the Dice similarity coefficient, the 50% and 95% Hausdorff distance (HD<sub>50%</sub>/HD<sub>95%</sub>), and the root-mean-square error of the target centroid in the superior-inferior direction. The PS transformer and CNN significantly outperformed all other models, achieving a median (interquartile range) Dice similarity coefficient of 0.92 (0.03)/0.90 (0.04) (transformer/CNN), HD<sub>50%</sub> of 1.0 (0.1)/1.0 (0.4) mm, HD<sub>95%</sub> of 3.1 (1.9)/3.8 (2.0) mm, and root-mean-square error of the target centroid in the superior-inferior direction of 0.7 (0.4)/0.9 (1.0) mm on the testing set. Their inference time was about 36/8 ms per frame, and PS fine-tuning required 3 min for labeling and 8/4 min for training. The transformer was better than the CNN in 9/12 patients, the CNN better in 1/12 patients, and the 2 PS models achieved the same performance on the remaining 2/12 testing patients. For targets in the thorax, abdomen, and pelvis, we found the 2 PS DL models to provide accurate real-time target localization during MRI-guided radiotherapy.
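The HD<sub>95%</sub> metric reported above can be sketched as the symmetric 95th percentile of boundary-to-boundary distances; the point sets below are synthetic stand-ins for predicted and ground-truth target boundaries.

```python
import numpy as np
from scipy.spatial import cKDTree

def hd_percentile(pts_a: np.ndarray, pts_b: np.ndarray, q: float = 95.0) -> float:
    """Symmetric q-th percentile of nearest-boundary distances (units of pts)."""
    d_ab = cKDTree(pts_b).query(pts_a)[0]  # each point of A to its nearest in B
    d_ba = cKDTree(pts_a).query(pts_b)[0]  # and vice versa
    return max(np.percentile(d_ab, q), np.percentile(d_ba, q))

rng = np.random.default_rng(0)
pred = rng.normal(size=(500, 2)) * 10                  # predicted boundary (mm)
truth = pred + rng.normal(scale=0.5, size=pred.shape)  # ground-truth boundary
print(f"HD95 = {hd_percentile(pred, truth):.2f} mm, "
      f"HD50 = {hd_percentile(pred, truth, q=50):.2f} mm")
```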

3D isotropic high-resolution fetal brain MRI reconstruction from motion corrupted thick data based on physical-informed unsupervised learning.

Wu J, Chen L, Li Z, Li X, Sun T, Wang L, Wang R, Wei H, Zhang Y

PubMed · Jul 15, 2025
High-quality 3D fetal brain MRI reconstruction from motion-corrupted 2D slices is crucial for precise clinical diagnosis and for advancing our understanding of fetal brain development. This necessitates reliable slice-to-volume registration (SVR) for motion correction and super-resolution reconstruction (SRR) techniques. Traditional approaches have their limitations, but deep learning (DL) offers potential for enhancing SVR and SRR. However, most DL methods require large-scale external 3D high-resolution (HR) training datasets, which are difficult to obtain in clinical fetal MRI. To address this issue, we propose an unsupervised iterative joint SVR and SRR DL framework for 3D isotropic HR volume reconstruction. Specifically, our method conceptualizes SVR as a function that maps a 2D slice and a 3D target volume to a rigid transformation matrix, aligning the slice to its underlying location within the target volume. This function is parameterized by a convolutional neural network, which is trained by minimizing the difference between the volume sliced at the predicted position and the actual input slice. For SRR, a decoding network embedded within a deep image prior framework, coupled with a comprehensive image degradation model, is used to produce the HR volume. The deep image prior framework provides a local consistency prior to guide the reconstruction of HR volumes. The HR volume is then optimized by applying the forward degradation model and minimizing the loss between the predicted slices and the acquired slices. Experiments on both large-magnitude motion-corrupted simulation data and clinical data have shown that our proposed method outperforms current state-of-the-art fetal brain reconstruction methods. The source code is available at https://github.com/DeepBMI/SUFFICIENT.
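A conceptual PyTorch sketch of the forward degradation model described above: the current HR volume estimate is sliced, averaged through-plane, and downsampled in-plane before comparison with an acquired slice. The shapes, the box-average slice profile, and the fixed slice pose are illustrative assumptions, not the paper's exact operators.

```python
import torch
import torch.nn.functional as F

def degrade(hr_volume: torch.Tensor, slice_idx: int, thickness: int = 4) -> torch.Tensor:
    """Simulate one thick, low-resolution slice from an isotropic HR volume."""
    z0 = slice_idx * thickness
    thick = hr_volume[:, :, z0:z0 + thickness].mean(dim=2)       # through-plane profile
    return F.avg_pool2d(thick[None, None], kernel_size=2)[0, 0]  # in-plane downsample

hr = torch.randn(128, 128, 64, requires_grad=True)  # HR volume being optimized
acquired = torch.randn(64, 64)                      # one (motion-corrected) input slice
loss = F.mse_loss(degrade(hr, slice_idx=3), acquired)
loss.backward()                                     # gradients drive the SRR update
```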

Restore-RWKV: Efficient and Effective Medical Image Restoration with RWKV.

Yang Z, Li J, Zhang H, Zhao D, Wei B, Xu Y

PubMed · Jul 15, 2025
Transformers have revolutionized medical image restoration, but their quadratic complexity still limits their application to high-resolution medical images. The recent advent of the Receptance Weighted Key Value (RWKV) model in the natural language processing field has attracted much attention due to its ability to process long sequences efficiently. To leverage its advanced design, we propose Restore-RWKV, the first RWKV-based model for medical image restoration. Since the original RWKV model is designed for 1D sequences, we make two necessary modifications to model spatial relations in 2D medical images. First, we present a recurrent WKV (Re-WKV) attention mechanism that captures global dependencies with linear computational complexity. Re-WKV incorporates bidirectional attention as the basis for a global receptive field and recurrent attention to effectively model 2D dependencies from various scan directions. Second, we develop an omnidirectional token shift (Omni-Shift) layer that enhances local dependencies by shifting tokens from all directions and across a wide context range. These adaptations make the proposed Restore-RWKV an efficient and effective model for medical image restoration. Even a lightweight variant of Restore-RWKV, with only 1.16 million parameters, achieves comparable or even superior results compared to existing state-of-the-art (SOTA) methods. Extensive experiments demonstrate that the resulting Restore-RWKV achieves SOTA performance across a range of medical image restoration tasks, including PET image synthesis, CT image denoising, MRI image super-resolution, and all-in-one medical image restoration. Code is available at: https://github.com/Yaziwel/Restore-RWKV.
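A hedged sketch of the token-shift idea behind Omni-Shift: channel groups of a 2D feature map are shifted one pixel in each of four directions and re-mixed, so every token aggregates its 2D neighbors. The actual Omni-Shift layer shifts across a wider context range; `OmniShift4` below is a hypothetical minimal variant.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OmniShift4(nn.Module):
    """Hypothetical minimal four-direction token shift for (B, C, H, W) features."""
    def __init__(self, channels: int):
        super().__init__()
        assert channels % 4 == 0
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)  # re-mix groups

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c = x.shape[1] // 4
        x = F.pad(x, (1, 1, 1, 1))          # pad W and H by one pixel
        out = torch.cat([
            x[:, 0 * c:1 * c, 1:-1, 2:],    # group 1 sees its left neighbor
            x[:, 1 * c:2 * c, 1:-1, :-2],   # group 2 sees its right neighbor
            x[:, 2 * c:3 * c, 2:, 1:-1],    # group 3 sees the pixel below
            x[:, 3 * c:4 * c, :-2, 1:-1],   # group 4 sees the pixel above
        ], dim=1)
        return self.proj(out)

feat = torch.randn(1, 64, 32, 32)
print(OmniShift4(64)(feat).shape)           # torch.Size([1, 64, 32, 32])
```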

LADDA: Latent Diffusion-based Domain-adaptive Feature Disentangling for Unsupervised Multi-modal Medical Image Registration.

Yuan P, Dong J, Zhao W, Lyu F, Xue C, Zhang Y, Yang C, Wu Z, Gao Z, Lyu T, Coatrieux JL, Chen Y

PubMed · Jul 15, 2025
Deformable image registration (DIR) is critical for accurate clinical diagnosis and effective treatment planning. However, patient movement, significant intensity differences, and large breathing deformations hinder accurate anatomical alignment in multi-modal image registration. These factors exacerbate the entanglement of anatomical and modality-specific style information, severely limiting the performance of multi-modal registration. To address this, we propose a novel LAtent Diffusion-based Domain-Adaptive feature disentangling (LADDA) framework for unsupervised multi-modal medical image registration, which explicitly addresses representation disentanglement. First, LADDA extracts reliable anatomical priors from a Latent Diffusion Model (LDM), facilitating downstream content-style disentangled learning. A Domain-Adaptive Feature Disentangling (DAFD) module is proposed to further promote anatomical structure alignment. This module disentangles image features into content and style information, encouraging the network to focus on cross-modal content information. Next, a Neighborhood-Preserving Hashing (NPH) module is constructed to perceive and integrate hierarchical content information through local neighborhood encoding, thereby maintaining cross-modal structural consistency. Furthermore, a Unilateral-Query-Frozen Attention (UQFA) module is proposed to enhance the coupling between upstream priors and downstream content information. Feature interaction within intra-domain consistent structures improves the recovery of fine textural detail. The proposed framework is extensively evaluated on large-scale multi-center datasets, demonstrating superior performance across diverse clinical scenarios and strong generalization on out-of-distribution (OOD) data.
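For intuition about the content-style split that DAFD performs, the sketch below uses one common, non-learned disentangling mechanism: instance-normalization statistics act as per-channel "style" while the normalized features carry "content". LADDA's learned module is more elaborate, so this is illustration only.

```python
import torch

def disentangle(feat: torch.Tensor, eps: float = 1e-5):
    """Split (B, C, H, W) features into style-free content and per-channel style."""
    mu = feat.mean(dim=(2, 3), keepdim=True)          # style: channel means
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps  # style: channel stds
    return (feat - mu) / sigma, (mu, sigma)           # content, style

def restylize(content: torch.Tensor, style) -> torch.Tensor:
    """Recompose features under another modality's style statistics."""
    mu, sigma = style
    return content * sigma + mu

mr_feat = torch.randn(2, 32, 48, 48)        # e.g., MR-branch features
ct_feat = torch.randn(2, 32, 48, 48)        # e.g., CT-branch features
mr_content, _ = disentangle(mr_feat)
_, ct_style = disentangle(ct_feat)
aligned = restylize(mr_content, ct_style)   # MR anatomy rendered in CT "style"
```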

Placenta segmentation redefined: review of deep learning integration of magnetic resonance imaging and ultrasound imaging.

Jittou A, Fazazy KE, Riffi J

PubMed · Jul 15, 2025
Placental segmentation is critical for the quantitative analysis of prenatal imaging applications. However, segmenting the placenta in magnetic resonance imaging (MRI) and ultrasound is challenging because of variations in fetal position, dynamic placental development, and image quality. Most segmentation methods define regions of interest with different shapes and intensities, encompassing the entire placenta or specific structures. Recently, deep learning has emerged as a key approach that offers high segmentation performance across diverse datasets. This review focuses on recent advances in deep learning techniques for placental segmentation in medical imaging, specifically the MRI and ultrasound modalities, and covers studies from 2019 to 2024. It synthesizes recent research, expands knowledge in this innovative area, and highlights the potential of deep learning approaches to significantly enhance prenatal diagnostics. These findings emphasize the importance of selecting appropriate imaging modalities and model architectures tailored to specific clinical scenarios. In addition, integrating both MRI and ultrasound can enhance segmentation performance by leveraging complementary information. The review also discusses the challenges associated with the high costs and limited availability of advanced imaging technologies, and provides insights into the current state of placental segmentation techniques and their implications for improving maternal and fetal health outcomes, underscoring the transformative impact of deep learning on prenatal diagnostics.