
CUAMT: An MRI semi-supervised medical image segmentation framework based on contextual information and mixed uncertainty.

Xiao H, Wang Y, Xiong S, Ren Y, Zhang H

PubMed · Jul 1 2025
Semi-supervised medical image segmentation is a class of machine learning paradigms that trains and runs segmentation models on both labeled and unlabeled medical images, which can effectively reduce the data labeling workload. However, existing consistency-based semi-supervised segmentation models mainly investigate ever more complex consistency strategies and make poor use of volumetric contextual information, leaving the model with a vague or uncertain understanding of the boundary between object and background and producing ambiguous or even erroneous boundary segmentations. For this reason, this study proposes CUAMT, a hybrid uncertainty network based on contextual information. In this model, a contextual information extraction (CIE) module learns the relationships between image contexts by extracting semantic features at different scales, guiding the model toward stronger use of contextual information. In addition, a hybrid uncertainty module (HUM) guides the model to focus on segmentation boundary information by combining the global and local uncertainty estimates of two different networks, improving segmentation performance at boundaries. Validation experiments were conducted on the left atrial segmentation and brain tumor segmentation datasets. They show that our model achieves 89.84%, 79.89%, and 8.73 on the Dice, Jaccard, and 95HD metrics, respectively, significantly outperforming several current SOTA semi-supervised methods, and confirm that the CIE and HUM strategies are effective. A semi-supervised segmentation framework is thus proposed for medical image segmentation.
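
A rough sketch of the hybrid-uncertainty idea follows (assuming PyTorch; the entropy-based uncertainty estimate and the 0.5 threshold are illustrative assumptions, not the published HUM): a consistency loss between two networks is gated so that only voxels where both are confident contribute.

```python
import math
import torch
import torch.nn.functional as F

def uncertainty_gated_consistency(logits_a, logits_b, thresh=0.5):
    """Consistency loss between two networks, masked to low-uncertainty voxels.

    logits_a, logits_b: raw predictions of shape (B, C, D, H, W).
    Uncertainty is approximated by normalized predictive entropy.
    """
    num_classes = logits_a.size(1)
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    # Per-voxel entropy of each network, normalized to [0, 1].
    ent_a = -(p_a * (p_a + 1e-8).log()).sum(1) / math.log(num_classes)
    ent_b = -(p_b * (p_b + 1e-8).log()).sum(1) / math.log(num_classes)
    mixed = 0.5 * (ent_a + ent_b)        # combined uncertainty map
    mask = (mixed < thresh).float()      # keep only confident voxels
    mse = ((p_a - p_b) ** 2).mean(1)     # voxelwise disagreement
    return (mask * mse).sum() / mask.sum().clamp(min=1.0)
```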

Cascade learning in multi-task encoder-decoder networks for concurrent bone segmentation and glenohumeral joint clinical assessment in shoulder CT scans.

Marsilio L, Marzorati D, Rossi M, Moglia A, Mainardi L, Manzotti A, Cerveri P

PubMed · Jul 1 2025
Osteoarthritis is a degenerative condition that affects bones and cartilage, often leading to structural changes, including osteophyte formation, bone density loss, and narrowing of joint spaces. Over time, this process may disrupt the glenohumeral (GH) joint functionality, requiring a targeted treatment. Various options are available to restore joint functions, ranging from conservative management to surgical interventions, depending on the severity of the condition. This work introduces an innovative deep learning framework to process shoulder CT scans. It features the semantic segmentation of the proximal humerus and scapula, the 3D reconstruction of bone surfaces, the identification of the GH joint region, and the staging of three common osteoarthritic-related conditions: osteophyte formation (OS), GH space reduction (JS), and humeroscapular alignment (HSA). Each condition was stratified into multiple severity stages, offering a comprehensive analysis of shoulder bone structure pathology. The pipeline comprised two cascaded CNN architectures: 3D CEL-UNet for segmentation and 3D Arthro-Net for threefold classification. A retrospective dataset of 571 CT scans featuring patients with various degrees of GH osteoarthritic-related pathologies was used to train, validate, and test the pipeline. Root mean squared error and Hausdorff distance median values for 3D reconstruction were 0.22 mm and 1.48 mm for the humerus and 0.24 mm and 1.48 mm for the scapula, outperforming state-of-the-art architectures and making the pipeline potentially suitable for preoperative planning of PSI-based shoulder arthroplasty. The classification accuracy for OS, JS, and HSA consistently reached around 90% across all three categories. The computational time for the entire inference pipeline was less than 15 s, showcasing the framework's efficiency and compatibility with orthopedic radiology practice. The achieved reconstruction and classification accuracy, combined with the rapid processing time, represent a promising advancement towards the medical translation of artificial intelligence tools. This progress aims to streamline the preoperative planning pipeline, delivering high-quality bone surfaces and supporting surgeons in selecting the most suitable surgical approach according to the unique patient joint conditions.
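
The cascade pattern described above can be sketched as follows; `seg_model` and `cls_model` stand in for 3D CEL-UNet and 3D Arthro-Net (whose internals are not shown), and the channel layout and three-way head are illustrative assumptions.

```python
import torch

@torch.no_grad()
def grade_shoulder(ct_volume, seg_model, cls_model):
    """ct_volume: (1, 1, D, H, W) CT tensor; returns a severity stage per condition."""
    # Stage 1: semantic segmentation of proximal humerus and scapula.
    seg_logits = seg_model(ct_volume)                 # (1, 3, D, H, W): bg/humerus/scapula
    labels = seg_logits.argmax(dim=1, keepdim=True)   # hard label map
    # Stage 2: stage OS / JS / HSA from the image plus its segmentation.
    cls_input = torch.cat([ct_volume, labels.float()], dim=1)
    os_logits, js_logits, hsa_logits = cls_model(cls_input)
    return {
        "osteophytes": os_logits.argmax(1).item(),
        "joint_space": js_logits.argmax(1).item(),
        "alignment": hsa_logits.argmax(1).item(),
    }
```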

One for multiple: Physics-informed synthetic data boosts generalizable deep learning for fast MRI reconstruction.

Wang Z, Yu X, Wang C, Chen W, Wang J, Chu YH, Sun H, Li R, Li P, Yang F, Han H, Kang T, Lin J, Yang C, Chang S, Shi Z, Hua S, Li Y, Hu J, Zhu L, Zhou J, Lin M, Guo J, Cai C, Chen Z, Guo D, Yang G, Qu X

PubMed · Jul 1 2025
Magnetic resonance imaging (MRI) is a widely used radiological modality renowned for its radiation-free, comprehensive insights into the human body, facilitating medical diagnoses. However, the drawback of prolonged scan times hinders its accessibility. K-space undersampling offers a solution, yet the resulting artifacts must be carefully removed during image reconstruction. Although deep learning (DL) has proven effective for fast MRI image reconstruction, its broader applicability across various imaging scenarios has been constrained. Challenges include the high cost and privacy restrictions associated with acquiring large-scale, diverse training data, coupled with the inherent difficulty of addressing mismatches between training and target data in existing DL methodologies. Here, we present a novel Physics-Informed Synthetic data learning Framework for fast MRI, called PISF. PISF marks a breakthrough by enabling generalizable DL for multi-scenario MRI reconstruction through a single trained model. Our approach separates the reconstruction of a 2D image into many 1D basic problems, commencing with 1D data synthesis to facilitate generalization. We demonstrate that training DL models on synthetic data, coupled with enhanced learning techniques, yields in vivo MRI reconstructions comparable to or surpassing those of models trained on matched realistic datasets, reducing the reliance on real-world MRI data by up to 96%. With a single trained model, our PISF supports high-quality reconstruction under 4 sampling patterns, 5 anatomies, 6 contrasts, 5 vendors, and 7 centers, exhibiting remarkable generalizability. Its adaptability to 2 neuro and 2 cardiovascular patient populations has been validated through evaluations by 10 experienced medical professionals. In summary, PISF presents a feasible and cost-effective way to significantly boost the widespread adoption of DL in various fast MRI applications.
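
A toy version of the 1D synthesis-and-undersampling step might look like this; the damped-exponential signal model is a generic physics-inspired choice, not necessarily the paper's exact synthesis procedure.

```python
import numpy as np

def synth_1d_example(n=256, n_peaks=5, accel=4, seed=0):
    """Build one synthetic 1D training pair: zero-filled input, fully sampled target."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    signal = np.zeros(n, dtype=complex)
    for _ in range(n_peaks):
        amp = rng.uniform(0.2, 1.0)
        freq = rng.uniform(-0.5, 0.5)       # normalized frequency
        decay = rng.uniform(0.005, 0.05)    # damping constant
        signal += amp * np.exp((2j * np.pi * freq - decay) * t)
    kspace = np.fft.fft(signal)
    mask = np.zeros(n)
    mask[rng.choice(n, size=n // accel, replace=False)] = 1   # random undersampling
    zero_filled = np.fft.ifft(kspace * mask)                  # artifact-laden input
    return zero_filled, signal
```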

Deep Learning Model for Real-Time Nuchal Translucency Assessment at Prenatal US.

Zhang Y, Yang X, Ji C, Hu X, Cao Y, Chen C, Sui H, Li B, Zhen C, Huang W, Deng X, Yin L, Ni D

PubMed · Jul 1 2025
Purpose To develop and evaluate an artificial intelligence-based model for real-time nuchal translucency (NT) plane identification and measurement in prenatal US assessments. Materials and Methods In this retrospective multicenter study conducted from January 2022 to October 2023, the Automated Identification and Measurement of NT (AIM-NT) model was developed and evaluated using internal and external datasets. NT plane assessment, including identification of the NT plane and measurement of NT thickness, was independently conducted by AIM-NT and experienced radiologists, with the results subsequently audited by radiology specialists and accuracy compared between groups. To assess alignment of artificial intelligence with radiologist workflow, discrepancies between the AIM-NT model and radiologists in NT plane identification time and thickness measurements were evaluated. Results The internal dataset included a total of 3959 NT images from 3153 fetuses, and the external dataset included 267 US videos from 267 fetuses. On the internal testing dataset, AIM-NT achieved an area under the receiver operating characteristic curve of 0.92 for NT plane identification. On the external testing dataset, there was no evidence of differences between AIM-NT and radiologists in NT plane identification accuracy (88.8% vs 87.6%, P = .69) or NT thickness measurements on standard and nonstandard NT planes (P = .29 and .59). AIM-NT demonstrated high consistency with radiologists in NT plane identification time, with 1-minute discrepancies observed in 77.9% of cases, and NT thickness measurements, with a mean difference of 0.03 mm and mean absolute error of 0.22 mm (95% CI: 0.19, 0.25). Conclusion AIM-NT demonstrated high accuracy in identifying the NT plane and measuring NT thickness on prenatal US images, showing minimal discrepancies with radiologist workflow. Keywords: Ultrasound, Fetus, Segmentation, Feature Detection, Diagnosis, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2025 See also commentary by Horii in this issue.
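
One plausible shape for such a real-time loop is sketched below; the model callables, the 0.5 plane cutoff, and the column-wise thickness measurement are all hypothetical, not details taken from the paper.

```python
import numpy as np

def assess_nt(frames, plane_classifier, nt_segmenter, pixel_mm, cutoff=0.5):
    """Score each US frame as an NT plane; measure NT thickness on accepted frames."""
    best = None
    for frame in frames:                        # frames: iterable of 2D arrays
        score = plane_classifier(frame)         # probability this is the NT plane
        if score < cutoff:
            continue                            # skip non-NT planes
        nt_mask = nt_segmenter(frame)           # binary mask of the NT region
        cols = np.where(nt_mask.any(axis=0))[0]
        # Thickness = largest vertical extent of the mask, converted to mm.
        thickness = max((int(nt_mask[:, c].sum()) for c in cols), default=0) * pixel_mm
        if best is None or score > best[0]:
            best = (score, thickness)
    return best  # (plane score, NT thickness in mm), or None if no plane found
```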

"Recon-all-clinical": Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI.

Gopinath K, Greve DN, Magdamo C, Arnold S, Das S, Puonti O, Iglesias JE

PubMed · Jul 1 2025
Surface-based analysis of the cerebral cortex is ubiquitous in human neuroimaging with MRI. It is crucial for tasks like cortical registration, parcellation, and thickness estimation. Traditionally, such analyses require high-resolution, isotropic scans with good gray-white matter contrast, typically a T1-weighted scan with 1 mm resolution. This requirement precludes application of these techniques to most MRI scans acquired for clinical purposes, since they are often anisotropic and lack the required T1-weighted contrast. To overcome this limitation and enable large-scale neuroimaging studies using vast amounts of existing clinical data, we introduce recon-all-clinical, a novel methodology for cortical reconstruction, registration, parcellation, and thickness estimation for clinical brain MRI scans of any resolution and contrast. Our approach employs a hybrid analysis method that combines a convolutional neural network (CNN) trained with domain randomization to predict signed distance functions (SDFs), and classical geometry processing for accurate surface placement while maintaining topological and geometric constraints. The method does not require retraining for different acquisitions, thus simplifying the analysis of heterogeneous clinical datasets. We evaluated recon-all-clinical on multiple public datasets (ADNI, HCP, AIBL, OASIS), as well as on a large clinical dataset of over 9,500 scans. The results indicate that our method produces geometrically precise cortical reconstructions across different MRI contrasts and resolutions, consistently achieving high accuracy in parcellation. Cortical thickness estimates are precise enough to capture aging effects, independently of MRI contrast, even though accuracy varies with slice thickness. Our method is publicly available at https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all-clinical, enabling researchers to perform detailed cortical analysis on the vast amounts of existing clinical MRI scans. This advancement may be particularly valuable for studying rare diseases and underrepresented populations where research-grade MRI data is scarce.
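
The hybrid CNN-plus-geometry step can be sketched minimally as follows: a placeholder `sdf_cnn` predicts a signed distance function and scikit-image's marching cubes extracts its zero level set. The published pipeline additionally enforces topological and geometric constraints that this sketch omits.

```python
import numpy as np
from skimage.measure import marching_cubes

def surface_from_sdf(volume, sdf_cnn, spacing=(1.0, 1.0, 1.0)):
    """Extract a cortical surface mesh from a CNN-predicted signed distance function."""
    sdf = np.asarray(sdf_cnn(volume))    # (D, H, W) signed distances, in mm
    # The zero level set of the SDF is the surface of interest.
    verts, faces, normals, _ = marching_cubes(sdf, level=0.0, spacing=spacing)
    return verts, faces, normals
```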

ConnectomeAE: Multimodal brain connectome-based dual-branch autoencoder and its application in the diagnosis of brain diseases.

Zheng Q, Nan P, Cui Y, Li L

PubMed · Jul 1 2025
Exploring the dependencies between multimodal brain networks and integrating node features to enhance brain disease diagnosis remains a significant challenge. Some work has examined only brain connectivity changes in patients, ignoring important information carried by radiomics features such as the shape and texture of individual brain regions in structural images. To this end, this study proposed a novel deep learning approach to integrate multimodal brain connectome information and regional radiomics features for brain disease diagnosis. A dual-branch autoencoder (ConnectomeAE) based on multimodal brain connectomes was proposed for brain disease diagnosis. Specifically, a matrix of radiomics features extracted from structural magnetic resonance images (MRI) was used as the input to the Rad_AE branch for learning important brain region features. Functional brain networks built from functional MRI were used as inputs to Cycle_AE for capturing brain disease-related connections. By separately learning node features and connection features from multimodal brain networks, the method demonstrates strong adaptability in diagnosing different brain diseases. ConnectomeAE was validated on two publicly available datasets. The experimental results show that ConnectomeAE achieved excellent diagnostic performance with an accuracy of 70.7% for autism spectrum disorder and 90.5% for Alzheimer's disease. A comparison of training time with other methods indicated that ConnectomeAE exhibits simplicity and efficiency suitable for clinical applications. Furthermore, the interpretability analysis of the model aligned with previous studies, further supporting the biological basis of ConnectomeAE. ConnectomeAE could effectively leverage the complementary information between multimodal brain connectomes for brain disease diagnosis. By separately learning radiomic node features and connectivity features, ConnectomeAE demonstrated good adaptability to different brain disease classification tasks.
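
A schematic of the dual-branch design, assuming PyTorch, might look like the following; ROI counts, layer widths, and the fused classification head are illustrative choices rather than the published configuration.

```python
import torch
import torch.nn as nn

class DualBranchAE(nn.Module):
    def __init__(self, n_rois=90, n_radiomics=100, latent=64, n_classes=2):
        super().__init__()
        rad_dim, fc_dim = n_rois * n_radiomics, n_rois * n_rois
        # Rad_AE-style branch: radiomics feature matrix (node features).
        self.enc_rad = nn.Sequential(nn.Linear(rad_dim, 512), nn.ReLU(), nn.Linear(512, latent))
        self.dec_rad = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, rad_dim))
        # Cycle_AE-style branch: functional connectivity matrix (connection features).
        self.enc_fc = nn.Sequential(nn.Linear(fc_dim, 512), nn.ReLU(), nn.Linear(512, latent))
        self.dec_fc = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, fc_dim))
        self.head = nn.Linear(2 * latent, n_classes)   # diagnosis from fused latents

    def forward(self, rad, fc):   # rad: (B, n_rois, n_radiomics), fc: (B, n_rois, n_rois)
        z_rad, z_fc = self.enc_rad(rad.flatten(1)), self.enc_fc(fc.flatten(1))
        logits = self.head(torch.cat([z_rad, z_fc], dim=1))
        return logits, self.dec_rad(z_rad), self.dec_fc(z_fc)
```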

Deep learning-assisted detection of meniscus and anterior cruciate ligament combined tears in adult knee magnetic resonance imaging: a crossover study with arthroscopy correlation.

Behr J, Nich C, D'Assignies G, Zavastin C, Zille P, Herpe G, Triki R, Grob C, Pujol N

PubMed · Jul 1 2025
We aimed to compare the diagnostic performance of physicians in the detection of arthroscopically confirmed meniscus and anterior cruciate ligament (ACL) tears on knee magnetic resonance imaging (MRI), with and without assistance from a deep learning (DL) model. We obtained preoperative MR images from 88 knees of patients who underwent arthroscopic meniscal repair, with or without ACL reconstruction. Ninety-eight MR images of knees without signs of meniscus or ACL tears were obtained from a publicly available database after matching on age and ACL status (normal or torn), resulting in a global dataset of 186 MRI examinations. The Keros® (Incepto, Paris) DL algorithm, previously trained for the detection and characterization of meniscus and ACL tears, was used for MRI assessment. Magnetic resonance images were individually and blindly annotated by three physicians and the DL algorithm. After three weeks, the three human raters repeated image assessment with model assistance, performed in a different order. The Keros® algorithm achieved an area under the curve (AUC) of 0.96 (95% CI 0.93, 0.99), 0.91 (95% CI 0.85, 0.96), and 0.99 (95% CI 0.98, 0.997) in the detection of medial meniscus, lateral meniscus and ACL tears, respectively. With model assistance, physicians achieved higher sensitivity (91% vs. 83%, p = 0.04) and similar specificity (91% vs. 87%, p = 0.09) in the detection of medial meniscus tears. Regarding lateral meniscus tears, sensitivity and specificity were similar with/without model assistance. Regarding ACL tears, physicians achieved higher specificity when assisted by the algorithm (70% vs. 51%, p = 0.01) but similar sensitivity with/without model assistance (93% vs. 96%, p = 0.13). The current model consistently helped physicians in the detection of medial meniscus and ACL tears, notably when they were combined. Diagnostic study, Level III.
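
AUCs with 95% CIs of the kind reported can be computed from per-case labels and scores along these lines; the paper does not state its CI method, so the bootstrap percentile interval here is one standard assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    """Point-estimate AUC plus a bootstrap 95% percentile CI over cases."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))   # resample cases
        if len(np.unique(y_true[idx])) < 2:
            continue                                      # AUC needs both classes
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)
```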

Cycle-conditional diffusion model for noise correction of diffusion-weighted images using unpaired data.

Zhu P, Liu C, Fu Y, Chen N, Qiu A

PubMed · Jul 1 2025
Diffusion-weighted imaging (DWI) is a key modality for studying brain microstructure, but its signals are highly susceptible to noise due to the thermal motion of water molecules and interactions with tissue microarchitecture, leading to significant signal attenuation and a low signal-to-noise ratio (SNR). In this paper, we propose a novel approach, a Cycle-Conditional Diffusion Model (Cycle-CDM) using unpaired data learning, aimed at improving DWI quality and reliability through noise correction. Cycle-CDM leverages a cycle-consistent translation architecture to bridge the domain gap between noise-contaminated and noise-free DWIs, enabling the restoration of high-quality images without requiring paired datasets. By utilizing two conditional diffusion models, Cycle-CDM establishes data interrelationships between the two types of DWIs, while incorporating synthesized anatomical priors from the cycle translation process to guide noise removal. In addition, we introduce specific constraints to preserve anatomical fidelity, allowing Cycle-CDM to effectively learn the underlying noise distribution and achieve accurate denoising. Our experiments were conducted on simulated datasets, as well as on children's and adolescents' datasets with strong clinical relevance. Our results demonstrate that Cycle-CDM outperforms comparative methods, such as U-Net, CycleGAN, Pix2Pix, MUNIT and MPPCA, in terms of noise correction performance. We demonstrated that Cycle-CDM can be generalized to DWIs with head motion when they were acquired using different MRI scanners. Importantly, the denoised DWI data produced by Cycle-CDM exhibit accurate preservation of underlying tissue microstructure, thus substantially improving their medical applicability.
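
A stripped-down training step for a conditional diffusion denoiser of this kind might read as follows; conditioning by channel concatenation and the simplified schedule handling are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def diffusion_train_step(model, clean_dwi, prior, alphas_cumprod, optimizer):
    """One noise-prediction step, conditioned on a cycle-translated anatomical prior."""
    b = clean_dwi.size(0)
    t = torch.randint(0, len(alphas_cumprod), (b,), device=clean_dwi.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(clean_dwi)
    noisy = a_bar.sqrt() * clean_dwi + (1 - a_bar).sqrt() * noise   # forward process
    pred = model(torch.cat([noisy, prior], dim=1), t)               # condition on prior
    loss = F.mse_loss(pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```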

Multi-label pathology editing of chest X-rays with a Controlled Diffusion Model.

Chu H, Qi X, Wang H, Liang Y

PubMed · Jul 1 2025
Large-scale generative models have garnered significant attention in the field of medical imaging, particularly for image editing utilizing diffusion models. However, current research has predominantly concentrated on pathological editing involving single or a limited number of labels, making it challenging to achieve precise modifications. Inaccurate alterations may lead to substantial discrepancies between the generated and original images, thereby impacting the clinical applicability of these models. This paper presents a diffusion model with untangling capabilities applied to chest X-ray image editing, incorporating a mask-based mechanism for bone and organ information. We successfully perform multi-label pathological editing of chest X-ray images without compromising the integrity of the original thoracic structure. The proposed technology comprises a chest X-ray image classifier and an intricate organ mask; the classifier supplies essential feature labels that require untangling for the stabilized diffusion model, while the complex organ mask facilitates directed and controllable edits to chest X-rays. We assessed the outcomes of our proposed algorithm, named Chest X-rays_Mpe, using MS-SSIM and CLIP scores alongside qualitative evaluations conducted by radiology experts. The results indicate that our approach surpasses existing algorithms across both quantitative and qualitative metrics.
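
The mask-constrained editing mechanism can be sketched generically as inpainting-style blending during reverse diffusion, as below; the sampler, classifier guidance, and pathology-label handling of Chest X-rays_Mpe itself are not shown.

```python
import torch

@torch.no_grad()
def masked_edit_step(x_t, x_orig, mask, denoise_fn, t, alphas_cumprod):
    """One reverse-diffusion step that confines edits to the organ mask (mask == 1)."""
    x_prev = denoise_fn(x_t, t)                 # model's reverse step toward the edit
    a_bar = alphas_cumprod[t]
    # Re-noise the original image to the same step so statistics match.
    orig_t = a_bar.sqrt() * x_orig + (1 - a_bar).sqrt() * torch.randn_like(x_orig)
    return mask * x_prev + (1 - mask) * orig_t  # outside the mask, keep the original
```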

Lightweight Multi-Stage Aggregation Transformer for robust medical image segmentation.

Wang X, Zhu Y, Cui Y, Huang X, Guo D, Mu P, Xia M, Bai C, Teng Z, Chen S

PubMed · Jul 1 2025
Capturing rich multi-scale features is essential to address complex variations in medical image segmentation. Multiple hybrid networks have been developed to integrate the complementary benefits of convolutional neural networks (CNN) and Transformers. However, existing methods suffer from either the huge computational cost of complicated networks or the unsatisfactory performance of lighter ones. How to fully exploit the advantages of both convolution and self-attention while designing networks that are both effective and efficient remains an unsolved problem. In this work, we propose a robust lightweight multi-stage hybrid architecture, named Multi-stage Aggregation Transformer version 2 (MA-TransformerV2), to extract multi-scale features with progressive aggregations for accurate segmentation of highly variable medical images at a low computational cost. Specifically, lightweight Trans blocks and lightweight CNN blocks are introduced in parallel into the dual-branch encoder module at each stage, and a vector quantization block is incorporated at the bottleneck to discretize the features and discard redundancy. This design not only enhances the representation capability and computational efficiency of the model, but also makes the model interpretable. Extensive experimental results on public datasets show that our method outperforms state-of-the-art methods, including CNN-based, Transformer-based, advanced hybrid CNN-Transformer-based models, and several lightweight models, in terms of both segmentation accuracy and model capacity. Code will be made publicly available at https://github.com/zjmiaprojects/MATransformerV2.
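
The vector-quantization bottleneck mentioned above follows a standard pattern, sketched here in PyTorch; the codebook size and feature dimension are illustrative, and this is not the MA-TransformerV2 implementation.

```python
import torch
import torch.nn as nn

class VQBottleneck(nn.Module):
    """Snap each token to its nearest codebook entry (straight-through gradients)."""
    def __init__(self, n_codes=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)

    def forward(self, z):                                      # z: (B, N, dim) tokens
        d = torch.cdist(z, self.codebook.weight.unsqueeze(0))  # distances to all codes
        idx = d.argmin(dim=-1)                                 # nearest code per token
        z_q = self.codebook(idx)                               # quantized features
        commit = ((z_q.detach() - z) ** 2).mean()              # commitment loss term
        z_q = z + (z_q - z).detach()                           # straight-through estimator
        return z_q, idx, commit
```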