Page 1 of 89890 results

Enhancing Spinal Cord and Canal Segmentation in Degenerative Cervical Myelopathy: The Role of Interactive Learning Models with Manual Clicks.

Han S, Oh JK, Cho W, Kim TJ, Hong N, Park SB

PubMed | Sep 29, 2025
We aimed to develop an interactive segmentation model that offers accuracy and reliability for segmenting the irregularly shaped spinal cord and canal in degenerative cervical myelopathy (DCM) through manual clicks and model refinement. A dataset of 1444 frames from 294 magnetic resonance imaging records of DCM patients was used, and we developed two different segmentation models for comparison: auto-segmentation and interactive segmentation. The former was based on U-Net and utilized a pretrained ConvNeXt-Tiny as its encoder. For the latter, we employed an interactive segmentation model structured on SimpleClick, a large model that utilizes a vision transformer as its backbone, together with simple fine-tuning. The segmentation performance of the two models was compared in terms of Dice score, mean intersection over union (mIoU), average precision, and Hausdorff distance. The efficiency of the interactive segmentation model was evaluated by the number of clicks required to achieve a target mIoU. Our model achieved better scores across all four evaluation metrics, showing improvements of +6.4%, +1.8%, +3.7%, and -53.0% for canal segmentation, and +11.7%, +6.0%, +18.2%, and -70.9% for cord segmentation with 15 clicks, respectively. The interactive segmentation model required an average of 11.71 clicks to reach 90% mIoU for spinal canal with cord cases and 11.99 clicks to reach 80% mIoU for spinal cord cases. We found that the interactive segmentation model significantly outperformed the auto-segmentation model. By incorporating simple manual inputs, the interactive model effectively identified regions of interest, particularly in the complex and irregular shapes of the spinal cord, demonstrating both enhanced accuracy and adaptability.
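The study's evaluation rests on overlap metrics plus a clicks-to-target budget. As a rough illustration (not the authors' code), the Dice, mIoU, and click-count measurements can be sketched in numpy; the `refine` callback below is a hypothetical stand-in for one click-driven refinement step of a SimpleClick-style model:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou_score(pred, gt):
    """Intersection over union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def clicks_to_target(refine, gt, target_iou=0.9, max_clicks=20):
    """Count refinement clicks until the prediction reaches a target IoU.

    `refine(pred, gt)` stands in for one interactive correction step and
    returns an updated mask; the real model consumes user click positions.
    """
    pred = np.zeros_like(gt)
    for click in range(1, max_clicks + 1):
        pred = refine(pred, gt)
        if iou_score(pred, gt) >= target_iou:
            return click
    return max_clicks
```

For identical masks both overlap metrics equal 1.0, and a refinement step that recovers the ground truth in a single pass yields a click count of 1.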

Deep learning-based cardiac computed tomography angiography left atrial segmentation and quantification in atrial fibrillation patients: a multi-model comparative study.

Feng L, Lu W, Liu J, Chen Z, Jin J, Qian N, Pan J, Wang L, Xiang J, Jiang J, Wang Y

PubMed | Sep 26, 2025
Quantitative assessment of left atrial volume (LAV) is an important factor in the study of the pathogenesis of atrial fibrillation. However, automated left atrial segmentation with quantitative assessment usually faces many challenges. The main objective of this study was to find the optimal left atrial segmentation model based on cardiac computed tomography angiography (CTA) and to perform quantitative LAV measurement. A multi-center left atrial study cohort containing 182 cardiac CTAs with atrial fibrillation was created, each case accompanied by expert image annotation by a cardiologist. Based on this left atrium dataset, five recent state-of-the-art (SOTA) models in the field of medical image segmentation were used to train and validate the left atrium segmentation model: DAResUNet, nnFormer, xLSTM-UNet, UNETR, and VNet. The optimal segmentation model was then used to assess the consistency of the LAV measurements. DAResUNet achieved the best performance in DSC (0.924 ± 0.023) and JI (0.859 ± 0.065) among all models, while VNet was the best performer in HD (12.457 ± 6.831) and ASD (1.034 ± 0.178). The Bland-Altman plot demonstrated extremely strong agreement (mean bias -5.69 mL, 95% LoA -19 to 7.6 mL) between the model's automatic predictions and manual measurements. Deep learning models based on a study cohort of 182 CTA left atrial images were capable of achieving competitive results in left atrium segmentation. LAV assessment based on deep learning models may be useful as a biomarker of the onset of atrial fibrillation.
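The reported agreement statistics follow the standard Bland-Altman recipe: bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD. A minimal numpy sketch, assuming paired automatic and manual LAV measurements in millilitres:

```python
import numpy as np

def bland_altman(auto_ml, manual_ml):
    """Mean bias and 95% limits of agreement between two volume series.

    Returns (bias, lower_loa, upper_loa); LoA = bias +/- 1.96 * SD of the
    paired differences, as in a standard Bland-Altman analysis.
    """
    auto_ml = np.asarray(auto_ml, dtype=float)
    manual_ml = np.asarray(manual_ml, dtype=float)
    diff = auto_ml - manual_ml
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

On toy data with differences of -1, 0, and +1 mL this returns a bias of 0 mL with limits of agreement at roughly ±1.96 mL.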

Automated deep learning method for whole-breast segmentation in contrast-free quantitative MRI.

Gao W, Zhang Y, Gao B, Xia Y, Liang W, Yang Q, Shi F, He T, Han G, Li X, Su X, Zhang Y

PubMed | Sep 26, 2025
To develop a deep learning segmentation method utilizing the nnU-Net architecture for fully automated whole-breast segmentation based on diffusion-weighted imaging (DWI) and synthetic MRI (SyMRI) images. A total of 98 patients with 196 breasts were evaluated. All patients underwent 3.0T magnetic resonance (MR) examinations, which incorporated DWI and SyMRI techniques. The ground truth for breast segmentation was established through a manual, slice-by-slice approach performed by two experienced radiologists. The U-Net and nnU-Net deep learning algorithms were employed to segment the whole breast. Performance was evaluated using various metrics, including the Dice Similarity Coefficient (DSC), accuracy, and Pearson's correlation coefficient. For DWI and proton density (PD) images from SyMRI, the nnU-Net outperformed the U-Net, achieving higher DSC in both the testing set (DWI, 0.930 ± 0.029 vs. 0.785 ± 0.161; PD, 0.969 ± 0.010 vs. 0.936 ± 0.018) and the independent testing set (DWI, 0.953 ± 0.019 vs. 0.789 ± 0.148; PD, 0.976 ± 0.008 vs. 0.939 ± 0.018). The PD images of SyMRI exhibited better performance than DWI, attaining the highest DSC and accuracy. The correlation coefficients (R²) for nnU-Net ranged from 0.99 to 1.00 for DWI and PD, significantly surpassing the performance of U-Net. The nnU-Net exhibited exceptional segmentation performance for fully automated breast segmentation of contrast-free quantitative images. This method serves as an effective tool for processing large-scale clinical datasets and represents a significant advancement toward computer-aided quantitative analysis of breast DWI and SyMRI images.
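Volume agreement of this kind is typically computed by counting mask voxels, scaling by voxel size, and then correlating predicted against reference volumes. A small illustrative sketch (not the study's pipeline), assuming voxel spacing given in millimetres:

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary mask in millilitres given voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))   # volume of one voxel in mm^3
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def pearson_r2(pred_vols, gt_vols):
    """Squared Pearson correlation between predicted and reference volumes."""
    r = np.corrcoef(pred_vols, gt_vols)[0, 1]
    return r * r
```

A 1000-voxel mask at 1 mm isotropic spacing corresponds to 1 mL, and perfectly proportional volume series give R² of 1.0.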

Theranostics in nuclear medicine: the era of precision oncology.

Gandhi N, Alaseem AM, Deshmukh R, Patel A, Alsaidan OA, Fareed M, Alasiri G, Patel S, Prajapati B

PubMed | Sep 26, 2025
Theranostics represents a transformative advancement in nuclear medicine by integrating molecular imaging and targeted radionuclide therapy within the paradigm of personalized oncology. This review elucidates the historical evolution and contemporary clinical applications of theranostics, emphasizing its pivotal role in precision cancer management. The theranostic approach involves the coupling of diagnostic and therapeutic radionuclides that target identical molecular biomarkers, enabling simultaneous visualization and treatment of malignancies such as neuroendocrine tumors (NETs), prostate cancer, and differentiated thyroid carcinoma. Key theranostic radiopharmaceutical pairs, including Gallium-68-labeled DOTA-Tyr3-octreotate (Ga-68-DOTATATE) with Lutetium-177-labeled DOTA-Tyr3-octreotate (Lu-177-DOTATATE), and Gallium-68-labeled Prostate-Specific Membrane Antigen (Ga-68-PSMA) with Lutetium-177-labeled Prostate-Specific Membrane Antigen (Lu-177-PSMA), exemplify the "see-and-treat" principle central to this modality. This article further explores critical molecular targets such as somatostatin receptor subtype 2, prostate-specific membrane antigen, human epidermal growth factor receptor 2, CD20, and C-X-C chemokine receptor type 4, along with design principles for radiopharmaceuticals that optimize target specificity while minimizing off-target toxicity. Advances in imaging platforms, including positron emission tomography/computed tomography (PET/CT), single-photon emission computed tomography/CT (SPECT/CT), and hybrid positron emission tomography/magnetic resonance imaging (PET/MRI), have been instrumental in accurate dosimetry, therapeutic response assessment, and adaptive treatment planning. Integration of artificial intelligence (AI) and radiomics holds promise for enhanced image segmentation, predictive modeling, and individualized dosimetric planning. The review also addresses regulatory, manufacturing, and economic considerations, including guidelines from the United States Food and Drug Administration (USFDA) and European Medicines Agency (EMA), Good Manufacturing Practice (GMP) standards, and reimbursement frameworks, which collectively influence global adoption of theranostics. In summary, theranostics is poised to become a cornerstone of next-generation oncology, catalyzing a paradigm shift toward biologically driven, real-time personalized cancer care that seamlessly links diagnosis and therapy.

MultiD4CAD: Multimodal Dataset composed of CT and Clinical Features for Coronary Artery Disease Analysis.

Prinzi F, Militello C, Sollami G, Toia P, La Grutta L, Vitabile S

PubMed | Sep 26, 2025
Multimodal datasets offer valuable support for developing Clinical Decision Support Systems (CDSS), which leverage predictive models to enhance clinicians' decision-making. In this observational study, we present a dataset of suspected Coronary Artery Disease (CAD) patients, called MultiD4CAD, comprising imaging and clinical data. The imaging data, obtained from Coronary Computed Tomography Angiography (CCTA), include epicardial (EAT) and pericoronary (PAT) adipose tissue segmentations. These metabolically active fat tissues play a key role in cardiovascular diseases. In addition, the clinical data include a set of biomarkers recognized as CAD risk factors. The validated EAT and PAT segmentations make the dataset suitable for training predictive models based on radiomics and deep learning architectures. The inclusion of CAD disease labels allows for its application in supervised learning algorithms to predict CAD outcomes. MultiD4CAD has revealed important correlations between imaging features, clinical biomarkers, and CAD status. The article concludes by discussing some challenges, such as classification, segmentation, radiomics, and deep learning tasks, that can be investigated and validated using the proposed dataset.

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

PubMed | Sep 26, 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark the performance of several state-of-the-art object detection algorithms for localizing the site of injury and semantic segmentation models for labeling the anatomy, to enable comparison and the creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human spinal cord ultrasound images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean average precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 model achieves the highest accuracy on unseen porcine anatomy, with a mean Dice score of 0.587, while SAMed achieves the highest mean Dice score when generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures for assessing anatomical markers in the spinal cord for methodology development and clinical applications.
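The mAP50-95 metric averages detection quality over IoU thresholds from 0.50 to 0.95 in steps of 0.05. The sketch below shows the core idea for the simplified case of one predicted and one ground-truth box per image; the real COCO-style metric additionally handles confidence ranking, multiple objects per image, and precision-recall integration:

```python
import numpy as np

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def map50_95_single(pred_boxes, gt_boxes):
    """Toy mAP50-95: one detection and one ground truth per image.

    Averages the hit rate over IoU thresholds 0.50, 0.55, ..., 0.95.
    Only a sketch of the idea, not the full COCO evaluation protocol.
    """
    thresholds = np.arange(0.50, 1.00, 0.05)
    ious = np.array([box_iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([(ious >= t).mean() for t in thresholds]))
```

A perfect detection scores 1.0 at every threshold, while a box that overlaps only marginally fails even the 0.50 cutoff.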

Exploring learning transferability in deep segmentation of colorectal cancer liver metastases.

Abbas M, Badic B, Andrade-Miranda G, Bourbonne V, Jaouen V, Visvikis D, Conze PH

PubMed | Sep 26, 2025
Ensuring the seamless transfer of knowledge and models across various datasets and clinical contexts is of paramount importance in medical image segmentation. This is especially true for liver lesion segmentation, which plays a key role in pre-operative planning and treatment follow-up. Despite the progress of deep learning algorithms using Transformers, automatically segmenting small hepatic metastases remains a persistent challenge. This can be attributed to the degradation of small structures caused by the feature down-sampling inherent to many deep architectures, coupled with the imbalance between foreground metastasis voxels and background. While similar challenges have been observed for liver tumors originating from hepatocellular carcinoma, their manifestation in the context of liver metastasis delineation remains under-explored and requires well-defined guidelines. Through comprehensive experiments, this paper aims to bridge this gap and to demonstrate the impact of various transfer learning schemes from off-the-shelf datasets to a dataset containing liver metastases only. Our scale-specific evaluation reveals that models trained from scratch or with domain-specific pre-training demonstrate greater proficiency.
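A scale-specific evaluation of the kind described can be sketched as per-lesion Dice scores grouped by lesion size; the 100-voxel cutoff for "small" below is an arbitrary illustrative choice, not the paper's definition:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def scale_specific_dice(pairs, small_max_voxels=100):
    """Mean Dice stratified by lesion size.

    `pairs` is a list of (pred_mask, gt_mask) per lesion; lesions are
    binned by ground-truth voxel count. Returns None for empty strata.
    """
    strata = {"small": [], "large": []}
    for pred, gt in pairs:
        key = "small" if gt.sum() <= small_max_voxels else "large"
        strata[key].append(dice(pred, gt))
    return {k: (float(np.mean(v)) if v else None) for k, v in strata.items()}
```

Reporting the strata separately exposes exactly the failure mode the abstract describes: a model can post a high overall Dice while still missing most small metastases.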

Segmental airway volume as a predictive indicator of postoperative extubation timing in patients with oral and maxillofacial space infections: a retrospective analysis.

Liu S, Shen H, Zhu B, Zhang X, Zhang X, Li W

PubMed | Sep 26, 2025
The objective of this study was to investigate the significance of segmental airway volume in developing a predictive model to guide the timing of postoperative extubation in patients with oral and maxillofacial space infections (OMSIs). A retrospective cohort study was performed to analyse clinical data from 177 medical records, with a focus on key variables related to disease severity and treatment outcomes. The inclusion criteria of this study were as follows: adherence to the OMSI diagnostic criteria (local tissue inflammation characterized by erythema, oedema, hyperthermia and tenderness); compromised functions such as difficulties opening the mouth, swallowing, or breathing; the presence of purulent material confirmed by puncture or computed tomography (CT); and laboratory examinations indicating an underlying infection process. The data included age, sex, body mass index (BMI), blood test results, smoking history, history of alcohol abuse, the extent of mouth opening, the number of infected spaces, and the source of infection. DICOM files were imported into 3D Slicer for manual segmentation, followed by volume measurement of each segment. We observed statistically significant differences in age, neutrophil count, lymphocyte count, and C4 segment volume among patient subgroups stratified by extubation time. Regression analysis revealed that age and C4 segment volume were significantly correlated with extubation time. Additionally, the machine learning models yielded good evaluation metrics. Segmental airway volume shows promise as an indicator for predicting extubation time. Predictive models constructed using machine learning algorithms yield good predictive performance and may facilitate clinical decision-making.
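Predictive modelling on tabular predictors such as age and C4 segment volume can be illustrated with a plain-numpy logistic regression; this is a generic sketch of the technique, not the study's actual models, features, or preprocessing:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain-numpy logistic regression fitted by batch gradient descent.

    Columns of X would hold standardized predictors (e.g. age, C4 segment
    volume); y holds binary labels such as early vs. delayed extubation.
    """
    X = np.c_[np.ones(len(X)), X]            # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # sigmoid probabilities
        w -= lr * X.T @ (p - y) / len(y)     # mean log-loss gradient step
    return w

def predict(w, X):
    """Binary predictions at a 0.5 probability threshold."""
    X = np.c_[np.ones(len(X)), X]
    return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)
```

On a linearly separable toy feature, the fitted weights recover the training labels exactly.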

An open deep learning-based framework and model for tooth instance segmentation in dental CBCT.

Zhou Y, Xu Y, Khalil B, Nalley A, Tarce M

PubMed | Sep 25, 2025
Current dental CBCT segmentation tools often lack accuracy, accessibility, or comprehensive anatomical coverage. To address this, we constructed a densely annotated dental CBCT dataset and developed a deep learning model, OralSeg, for tooth-level instance segmentation, deployed as a one-click tool that is freely accessible for non-commercial use. We established a standardized annotated dataset covering 35 key oral anatomical structures and employed UNETR as the backbone network, combining the Swin Transformer and the spatial Mamba module for multi-scale residual feature fusion. The OralSeg model was designed and optimized for precise instance segmentation of dental CBCT images and integrated into the 3D Slicer platform, providing a graphical user interface for one-click segmentation. OralSeg achieved a Dice similarity coefficient of 0.8316 ± 0.0305 on CBCT instance segmentation, compared against SwinUNETR and 3D U-Net. The model significantly improves segmentation performance, especially in complex oral anatomical structures such as apical areas, alveolar bone margins, and mandibular nerve canals. The OralSeg model presented in this study provides an effective solution for instance segmentation of dental CBCT images. The tool allows clinical dentists and researchers with no AI background to perform one-click segmentation and may be applicable in various clinical and research contexts. OralSeg offers researchers and clinicians a user-friendly tool for tooth-level instance segmentation, which may assist in clinical diagnosis, educational training, and research, and contribute to the broader adoption of digital dentistry in precision medicine.

Active-Supervised Model for Intestinal Ulcers Segmentation Using Fuzzy Labeling.

Chen J, Lin Y, Saeed F, Ding Z, Diyan M, Li J, Wang Z

PubMed | Sep 25, 2025
Inflammatory bowel disease (IBD) is a chronic inflammatory condition of the intestines with a rising global incidence. Colonoscopy remains the gold standard for IBD diagnosis, but traditional image-scoring methods are subjective and complex, impacting diagnostic accuracy and efficiency. To address these limitations, this paper investigates machine learning techniques for intestinal ulcer segmentation, focusing on multi-category ulcer segmentation to enhance IBD diagnosis. We identified two primary challenges in intestinal ulcer segmentation: 1) labeling noise, where inaccuracies in medical image annotation introduce ambiguity, hindering model training, and 2) performance variability across datasets, where models struggle to maintain high accuracy due to medical image diversity. To address these challenges, we propose an active ulcer segmentation algorithm based on fuzzy labeling. A collaborative training segmentation model is designed to utilize pixel-wise confidence extracted from fuzzy labels, distinguishing high- and low-confidence regions, and enhancing robustness to noisy labels through network cooperation. To mitigate performance disparities, we introduce a data adaptation strategy leveraging active learning. By selecting high-information samples based on uncertainty and diversity, the strategy enables incremental model training, improving adaptability. Extensive experiments on public and hospital datasets validate the proposed methods. Our collaborative training model and active learning strategy show significant advantages in handling noisy labels and enhancing model performance across datasets, paving the way for more precise and efficient IBD diagnosis.
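One common way to exploit pixel-wise confidence from fuzzy labels is to weight the segmentation loss so that ambiguous pixels contribute less during training. The sketch below illustrates that idea with a confidence-weighted cross-entropy in numpy; it is a simplified stand-in, not the paper's exact formulation:

```python
import numpy as np

def confidence_weighted_ce(prob, fuzzy_label, conf_threshold=0.8):
    """Pixel-wise cross-entropy weighted by label confidence.

    `fuzzy_label` holds soft annotations in [0, 1]; pixels whose label sits
    near 0 or 1 (confidence above the threshold) get full weight, while
    ambiguous pixels near 0.5 are down-weighted toward zero.
    """
    eps = 1e-7
    prob = np.clip(prob, eps, 1 - eps)
    hard = (fuzzy_label >= 0.5).astype(float)       # hardened target
    confidence = 2.0 * np.abs(fuzzy_label - 0.5)    # 0 = ambiguous, 1 = certain
    weight = np.where(confidence >= conf_threshold, 1.0, confidence)
    ce = -(hard * np.log(prob) + (1 - hard) * np.log(1 - prob))
    return float((weight * ce).sum() / (weight.sum() + eps))
```

A prediction that matches the confident pixels incurs almost no loss even when it disagrees on fully ambiguous pixels, which is the robustness-to-noisy-labels behaviour the abstract describes.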