Page 22 of 2072068 results

Structural uncertainty estimation for medical image segmentation.

Yang B, Zhang X, Zhang H, Li S, Higashita R, Liu J

PubMed · Jul 1, 2025
Precise segmentation and uncertainty estimation are crucial for error identification and correction in medical diagnostic assistance. Existing methods mainly rely on pixel-wise uncertainty estimation. They (1) neglect the global context, leading to erroneous uncertainty indications, and (2) introduce attention interference, wasting extensive detail and potentially confusing interpretation. In this paper, we propose a novel structural uncertainty estimation method based on Convolutional Neural Networks (CNNs) and Active Shape Models (ASMs), named SU-ASM, which incorporates global shape information to provide precise segmentation and uncertainty estimation. SU-ASM consists of three components. First, multi-task generation provides multiple outcomes to assist ASM initialization and shape optimization via a multi-task learning module. Second, information fusion creates a Combined Boundary Probability (CBP), along with a rapid shape initialization algorithm, Key Landmark Template Matching (KLTM), to enhance boundary reliability and select appropriate shape templates. Finally, shape model fitting matches multiple shape templates to the CBP while maintaining their intrinsic shape characteristics. The fitted shapes generate segmentation results and structural uncertainty estimates. SU-ASM has been validated on a cardiac ultrasound dataset, a ciliary muscle dataset of the anterior eye segment, and a chest X-ray dataset. It outperforms state-of-the-art methods in terms of segmentation and uncertainty estimation.
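For contrast with the structural approach described above, a pixel-wise uncertainty map is typically just the per-pixel entropy of the predicted probabilities, computed independently at each location with no shape context. A minimal stdlib-only sketch (illustrative; `pixelwise_entropy` is a hypothetical helper, not the authors' code):

```python
import math

def pixelwise_entropy(prob_map):
    """Per-pixel binary entropy (in bits) of predicted foreground
    probabilities; high values flag uncertain pixels."""
    def h(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return [[h(p) for p in row] for row in prob_map]

probs = [[0.95, 0.60],
         [0.50, 0.05]]
ent = pixelwise_entropy(probs)
# The ambiguous pixel (p = 0.50) scores the maximum 1 bit; the
# confident pixels (0.95, 0.05) score near zero.
```

Note that each pixel is scored in isolation, regardless of whether its neighbors form a coherent boundary; that absence of global context is exactly what the paper's shape-based estimate addresses.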

Machine learning approaches for fine-grained symptom estimation in schizophrenia: A comprehensive review.

Foteinopoulou NM, Patras I

PubMed · Jul 1, 2025
Schizophrenia is a severe yet treatable mental disorder, and it is diagnosed using a multitude of primary and secondary symptoms. Diagnosis and treatment for each individual depend on the severity of the symptoms. Therefore, there is a need for accurate, personalised assessments. However, the process can be both time-consuming and subjective; hence, there is a motivation to explore automated methods that can offer consistent diagnosis and precise symptom assessments, thereby complementing the work of healthcare practitioners. Machine Learning has demonstrated impressive capabilities across numerous domains, including medicine; the use of Machine Learning in patient assessment holds great promise for healthcare professionals and patients alike, as it can lead to more consistent and accurate symptom estimation. This survey reviews methodologies utilising Machine Learning for diagnosing and assessing schizophrenia. Contrary to previous reviews that primarily focused on binary classification, this work recognises the complexity of the condition and, instead, offers an overview of Machine Learning methods designed for fine-grained symptom estimation. We cover multiple modalities, namely Medical Imaging, Electroencephalograms and Audio-Visual, as the illness symptoms can manifest in a patient's pathology and behaviour. Finally, we analyse the datasets and methodologies used in the studies and identify trends and gaps, as well as opportunities for future research.

Reconstruction-based approach for chest X-ray image segmentation and enhanced multi-label chest disease classification.

Hage Chehade A, Abdallah N, Marion JM, Hatt M, Oueidat M, Chauvet P

PubMed · Jul 1, 2025
U-Net is a commonly used model for medical image segmentation. However, when applied to chest X-ray images that show pathologies, it often fails to include these critical pathological areas in the generated masks. To address this limitation, in our study, we tackled the challenge of precise segmentation and mask generation by developing a novel CycleGAN-based approach that encompasses the areas affected by pathologies within the region of interest, allowing the extraction of relevant radiomic features linked to pathologies. Furthermore, we adopted a feature selection approach to focus the analysis on the most significant features. The results of our proposed pipeline are promising, with an average accuracy of 92.05% and an average AUC of 89.48% for the multi-label classification of effusion and infiltration acquired from the ChestX-ray14 dataset, using the XGBoost model. Furthermore, applying our methodology to the classification of the 14 diseases in the ChestX-ray14 dataset resulted in an average AUC of 83.12%, outperforming previous studies. This research highlights the importance of effective pathological mask generation and feature selection for accurate classification of chest diseases. The promising results of our approach underscore its potential for broader applications in the classification of chest diseases.
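The results above are reported as AUC, which measures the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A stdlib-only sketch of that rank-based definition (illustrative; not the study's evaluation code, and the toy labels/scores are made up):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0, 1, 0]
s = [0.9, 0.5, 0.6, 0.2, 0.8, 0.4]
auc = roc_auc(y, s)  # 8 of 9 positive/negative pairs ranked correctly
```

For a multi-label setting such as ChestX-ray14, this is computed per disease label and then averaged.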

Challenges, optimization strategies, and future horizons of advanced deep learning approaches for brain lesion segmentation.

Zaman A, Yassin MM, Mehmud I, Cao A, Lu J, Hassan H, Kang Y

PubMed · Jul 1, 2025
Brain lesion segmentation, which aims to delineate lesion regions precisely, is a challenging task in medical image analysis. Deep learning (DL) techniques have recently demonstrated promising results across various computer vision tasks, including semantic segmentation, object detection, and image classification. This paper offers an overview of recent DL algorithms for brain tumor and stroke segmentation, drawing on literature from 2021 to 2024. It highlights the strengths, limitations, current research challenges, and unexplored areas in imaging-based brain lesion classification based on insights from over 250 recent review papers. Techniques addressing difficulties like class imbalance and multi-modal data are presented. Optimization methods for improving performance regarding computational and structural complexity and processing speed are discussed. These include lightweight neural networks, multilayer architectures, and computationally efficient, highly accurate network designs. The paper also reviews generic and latest frameworks of different brain lesion detection techniques and highlights publicly available benchmark datasets and their issues. Furthermore, open research areas, application prospects, and future directions for DL-based brain lesion classification are discussed. Future directions include integrating neural architecture search methods with domain knowledge, predicting patient survival levels, and learning to separate brain lesions using patient statistics. To ensure patient privacy, future research is anticipated to explore privacy-preserving learning frameworks. Overall, the presented suggestions serve as a guideline for researchers and system designers involved in brain lesion detection and stroke segmentation tasks.

CUAMT: An MRI semi-supervised medical image segmentation framework based on contextual information and mixed uncertainty.

Xiao H, Wang Y, Xiong S, Ren Y, Zhang H

PubMed · Jul 1, 2025
Semi-supervised medical image segmentation is a class of machine learning paradigms for segmentation model training and inference using both labeled and unlabeled medical images, which can effectively reduce the data labeling workload. However, existing consistency-based semi-supervised segmentation models mainly focus on investigating more complex consistency strategies and lack efficient utilization of volumetric contextual information, which leads to a vague or uncertain understanding of the boundary between the object and the background, resulting in ambiguous or even erroneous boundary segmentation. For this reason, this study proposes CUAMT, a hybrid uncertainty network based on contextual information. In this model, a contextual information extraction module (CIE) is proposed, which learns the connection between image contexts by extracting semantic features at different scales and guides the model to enhance learning of contextual information. In addition, a hybrid uncertainty module (HUM) is proposed, which guides the model to focus on segmentation boundary information by combining the global and local uncertainty information of two different networks to improve segmentation performance at the boundary. Validation experiments were conducted on the left atrial segmentation and brain tumor segmentation datasets. The experiments show that our model achieves 89.84% Dice, 79.89% Jaccard, and 8.73 95HD, significantly outperforming several current SOTA semi-supervised methods. This study confirms that the CIE and HUM strategies are effective, and proposes a semi-supervised segmentation framework for medical image segmentation.
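The Dice and Jaccard figures reported above are standard overlap metrics between a predicted and a reference binary mask. A minimal stdlib sketch (illustrative, not from the paper) makes their definitions and their fixed relationship explicit:

```python
def dice_and_jaccard(pred, truth):
    """Overlap metrics for flat binary masks (lists of 0/1).
    Dice = 2|A∩B| / (|A|+|B|); Jaccard = |A∩B| / |A∪B|."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)
    jaccard = inter / (p_sum + t_sum - inter)
    return dice, jaccard

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
d, j = dice_and_jaccard(pred, truth)
# d = 0.75, j = 0.6; Dice is always 2J/(1+J), so it is the more
# forgiving of the two scores.
```

(The paper's third metric, 95HD, is the 95th-percentile Hausdorff distance, a boundary-distance measure rather than an overlap ratio, which is why it is reported in distance units rather than percent.)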

Cascade learning in multi-task encoder-decoder networks for concurrent bone segmentation and glenohumeral joint clinical assessment in shoulder CT scans.

Marsilio L, Marzorati D, Rossi M, Moglia A, Mainardi L, Manzotti A, Cerveri P

PubMed · Jul 1, 2025
Osteoarthritis is a degenerative condition that affects bones and cartilage, often leading to structural changes, including osteophyte formation, bone density loss, and the narrowing of joint spaces. Over time, this process may disrupt glenohumeral (GH) joint functionality, requiring targeted treatment. Various options are available to restore joint function, ranging from conservative management to surgical interventions, depending on the severity of the condition. This work introduces an innovative deep learning framework to process shoulder CT scans. It features the semantic segmentation of the proximal humerus and scapula, the 3D reconstruction of bone surfaces, the identification of the GH joint region, and the staging of three common osteoarthritis-related conditions: osteophyte formation (OS), GH space reduction (JS), and humeroscapular alignment (HSA). Each condition was stratified into multiple severity stages, offering a comprehensive analysis of shoulder bone structure pathology. The pipeline comprised two cascaded CNN architectures: 3D CEL-UNet for segmentation and 3D Arthro-Net for threefold classification. A retrospective dataset of 571 CT scans featuring patients with various degrees of GH osteoarthritis-related pathologies was used to train, validate, and test the pipeline. Median root mean squared error and Hausdorff distance values for 3D reconstruction were 0.22 mm and 1.48 mm for the humerus and 0.24 mm and 1.48 mm for the scapula, outperforming state-of-the-art architectures and making the pipeline potentially suitable for PSI-based preoperative planning of shoulder arthroplasty. The classification accuracy for OS, JS, and HSA consistently reached around 90% across all three categories. The computational time for the entire inference pipeline was less than 15 s, showcasing the framework's efficiency and compatibility with orthopedic radiology practice.
The achieved reconstruction and classification accuracy, combined with the rapid processing time, represent a promising advancement towards the medical translation of artificial intelligence tools. This progress aims to streamline the preoperative planning pipeline, delivering high-quality bone surfaces and supporting surgeons in selecting the most suitable surgical approach according to the unique patient joint conditions.

One for multiple: Physics-informed synthetic data boosts generalizable deep learning for fast MRI reconstruction.

Wang Z, Yu X, Wang C, Chen W, Wang J, Chu YH, Sun H, Li R, Li P, Yang F, Han H, Kang T, Lin J, Yang C, Chang S, Shi Z, Hua S, Li Y, Hu J, Zhu L, Zhou J, Lin M, Guo J, Cai C, Chen Z, Guo D, Yang G, Qu X

PubMed · Jul 1, 2025
Magnetic resonance imaging (MRI) is a widely used radiological modality renowned for its radiation-free, comprehensive insights into the human body, facilitating medical diagnoses. However, the drawback of prolonged scan times hinders its accessibility. K-space undersampling offers a solution, yet the resultant artifacts necessitate meticulous removal during image reconstruction. Although deep learning (DL) has proven effective for fast MRI image reconstruction, its broader applicability across various imaging scenarios has been constrained. Challenges include the high cost and privacy restrictions associated with acquiring large-scale, diverse training data, coupled with the inherent difficulty of addressing mismatches between training and target data in existing DL methodologies. Here, we present a novel Physics-Informed Synthetic data learning Framework for fast MRI, called PISF. PISF marks a breakthrough by enabling generalizable DL for multi-scenario MRI reconstruction through a single trained model. Our approach separates the reconstruction of a 2D image into many 1D basic problems, commencing with 1D data synthesis to facilitate generalization. We demonstrate that training DL models on synthetic data, coupled with enhanced learning techniques, yields in vivo MRI reconstructions comparable to or surpassing those of models trained on matched realistic datasets, reducing the reliance on real-world MRI data by up to 96%. With a single trained model, PISF supports high-quality reconstruction under 4 sampling patterns, 5 anatomies, 6 contrasts, 5 vendors, and 7 centers, exhibiting remarkable generalizability. Its adaptability to 2 neuro and 2 cardiovascular patient populations has been validated through evaluations by 10 experienced medical professionals. In summary, PISF presents a feasible and cost-effective way to significantly boost the widespread adoption of DL in various fast MRI applications.

Deep Learning Model for Real-Time Nuchal Translucency Assessment at Prenatal US.

Zhang Y, Yang X, Ji C, Hu X, Cao Y, Chen C, Sui H, Li B, Zhen C, Huang W, Deng X, Yin L, Ni D

PubMed · Jul 1, 2025
Purpose To develop and evaluate an artificial intelligence-based model for real-time nuchal translucency (NT) plane identification and measurement in prenatal US assessments. Materials and Methods In this retrospective multicenter study conducted from January 2022 to October 2023, the Automated Identification and Measurement of NT (AIM-NT) model was developed and evaluated using internal and external datasets. NT plane assessment, including identification of the NT plane and measurement of NT thickness, was independently conducted by AIM-NT and experienced radiologists, with the results subsequently audited by radiology specialists and accuracy compared between groups. To assess alignment of artificial intelligence with radiologist workflow, discrepancies between the AIM-NT model and radiologists in NT plane identification time and thickness measurements were evaluated. Results The internal dataset included a total of 3959 NT images from 3153 fetuses, and the external dataset included 267 US videos from 267 fetuses. On the internal testing dataset, AIM-NT achieved an area under the receiver operating characteristic curve of 0.92 for NT plane identification. On the external testing dataset, there was no evidence of differences between AIM-NT and radiologists in NT plane identification accuracy (88.8% vs 87.6%, P = .69) or NT thickness measurements on standard and nonstandard NT planes (P = .29 and .59). AIM-NT demonstrated high consistency with radiologists in NT plane identification time, with 1-minute discrepancies observed in 77.9% of cases, and NT thickness measurements, with a mean difference of 0.03 mm and mean absolute error of 0.22 mm (95% CI: 0.19, 0.25). Conclusion AIM-NT demonstrated high accuracy in identifying the NT plane and measuring NT thickness on prenatal US images, showing minimal discrepancies with radiologist workflow.
Keywords: Ultrasound, Fetus, Segmentation, Feature Detection, Diagnosis, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2025. See also commentary by Horii in this issue.

"Recon-all-clinical": Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI.

Gopinath K, Greve DN, Magdamo C, Arnold S, Das S, Puonti O, Iglesias JE

PubMed · Jul 1, 2025
Surface-based analysis of the cerebral cortex is ubiquitous in human neuroimaging with MRI. It is crucial for tasks like cortical registration, parcellation, and thickness estimation. Traditionally, such analyses require high-resolution, isotropic scans with good gray-white matter contrast, typically a T1-weighted scan with 1 mm resolution. This requirement precludes application of these techniques to most MRI scans acquired for clinical purposes, since they are often anisotropic and lack the required T1-weighted contrast. To overcome this limitation and enable large-scale neuroimaging studies using vast amounts of existing clinical data, we introduce recon-all-clinical, a novel methodology for cortical reconstruction, registration, parcellation, and thickness estimation for clinical brain MRI scans of any resolution and contrast. Our approach employs a hybrid analysis method that combines a convolutional neural network (CNN) trained with domain randomization to predict signed distance functions (SDFs), and classical geometry processing for accurate surface placement while maintaining topological and geometric constraints. The method does not require retraining for different acquisitions, thus simplifying the analysis of heterogeneous clinical datasets. We evaluated recon-all-clinical on multiple public datasets, including ADNI, HCP, AIBL, and OASIS, as well as a large clinical dataset of over 9,500 scans. The results indicate that our method produces geometrically precise cortical reconstructions across different MRI contrasts and resolutions, consistently achieving high accuracy in parcellation. Cortical thickness estimates are precise enough to capture aging effects, independently of MRI contrast, even though accuracy varies with slice thickness. Our method is publicly available at https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all-clinical, enabling researchers to perform detailed cortical analysis on the vast amounts of existing clinical MRI scans.
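The signed distance functions (SDFs) that the CNN predicts encode, at each location, the distance to a surface, with opposite signs on the two sides, so the surface itself is the zero level set. A brute-force 2D toy of that representation (illustrative only; the actual method predicts SDFs for 3D cortical surfaces with a network, not by exhaustive search):

```python
import math

def signed_distance(mask):
    """Brute-force SDF on a 2D binary grid: distance to the nearest
    pixel of the opposite label, negated inside the shape."""
    h, w = len(mask), len(mask[0])
    cells = [(i, j) for i in range(h) for j in range(w)]
    out = [[0.0] * w for _ in range(h)]
    for i, j in cells:
        d = min(math.hypot(i - a, j - b)
                for a, b in cells if mask[a][b] != mask[i][j])
        out[i][j] = -d if mask[i][j] else d
    return out

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
sdf = signed_distance(mask)
# sdf[1][1] == -1.0 (inside, one step from background);
# sdf[0][0] == sqrt(2) (outside, diagonal to the nearest shape pixel).
```

Predicting this field instead of a hard label map is what lets the subsequent classical geometry processing place a smooth surface at the zero crossing with sub-voxel precision.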
This advancement may be particularly valuable for studying rare diseases and underrepresented populations where research-grade MRI data is scarce.

ConnectomeAE: Multimodal brain connectome-based dual-branch autoencoder and its application in the diagnosis of brain diseases.

Zheng Q, Nan P, Cui Y, Li L

PubMed · Jul 1, 2025
Exploring the dependencies between multimodal brain networks and integrating node features to enhance brain disease diagnosis remains a significant challenge. Some work has examined only brain connectivity changes in patients, ignoring important information such as the radiomics features (shape and texture) of individual brain regions in structural images. To this end, this study proposed a novel deep learning approach to integrate multimodal brain connectome information and regional radiomics features for brain disease diagnosis. A dual-branch autoencoder (ConnectomeAE) based on multimodal brain connectomes was proposed for brain disease diagnosis. Specifically, a matrix of radiomics features extracted from structural magnetic resonance imaging (MRI) was used as input to the Rad_AE branch for learning important brain region features. The functional brain network built from functional MRI was used as input to Cycle_AE for capturing brain disease-related connections. By separately learning node features and connection features from multimodal brain networks, the method demonstrates strong adaptability in diagnosing different brain diseases. ConnectomeAE was validated on two publicly available datasets. The experimental results show that ConnectomeAE achieved excellent diagnostic performance with an accuracy of 70.7% for autism spectrum disorder and 90.5% for Alzheimer's disease. A comparison of training time with other methods indicated that ConnectomeAE exhibits simplicity and efficiency suitable for clinical applications. Furthermore, the interpretability analysis of the model aligned with previous studies, further supporting the biological basis of ConnectomeAE. ConnectomeAE could effectively leverage the complementary information between multimodal brain connectomes for brain disease diagnosis. By separately learning radiomic node features and connectivity features, ConnectomeAE demonstrated good adaptability to different brain disease classification tasks.
