
The Role of Digital Technologies in Personalized Craniomaxillofacial Surgical Procedures.

Daoud S, Shhadeh A, Zoabi A, Redenski I, Srouji S

PubMed, May 17, 2025
Craniomaxillofacial (CMF) surgery addresses complex challenges, balancing aesthetic and functional restoration. Digital technologies, including advanced imaging, virtual surgical planning, computer-aided design, and 3D printing, have revolutionized this field. These tools improve accuracy and optimize processes across all surgical phases, from diagnosis to postoperative evaluation. CMF's unique demands are met through patient-specific solutions that optimize outcomes. Emerging technologies like artificial intelligence, extended reality, robotics, and bioprinting promise to overcome limitations, driving the future of personalized, technology-driven CMF care.

Exploring interpretable echo analysis using self-supervised parcels.

Majchrowska S, Hildeman A, Mokhtari R, Diethe T, Teare P

PubMed, May 17, 2025
The application of AI for predicting critical heart failure endpoints using echocardiography is a promising avenue to improve patient care and treatment planning. However, fully supervised training of deep learning models in medical imaging requires a substantial amount of labelled data, posing significant challenges due to the need for skilled medical professionals to annotate image sequences. Our study addresses this limitation by exploring the potential of self-supervised learning, emphasising interpretability, robustness, and safety as crucial factors in cardiac imaging analysis. We leverage self-supervised learning on a large unlabelled dataset, facilitating the discovery of features applicable to various downstream tasks. The backbone model not only generates informative features for training smaller models using simple techniques but also produces features that are interpretable by humans. The study employs a modified Self-supervised Transformer with Energy-based Graph Optimisation (STEGO) network on top of self-DIstillation with NO labels (DINO) as a backbone model, pre-trained on diverse medical and non-medical data. This approach facilitates the generation of self-segmented outputs, termed "parcels", which identify distinct anatomical sub-regions of the heart. Our findings highlight the robustness of these self-learned parcels across diverse patient profiles and phases of the cardiac cycle. Moreover, these parcels offer high interpretability and effectively encapsulate clinically relevant cardiac substructures. We conduct a comprehensive evaluation of the proposed self-supervised approach on publicly available datasets, demonstrating its adaptability to a wide range of requirements.
Our results underscore the potential of self-supervised learning to address labelled data scarcity in medical imaging, offering a path to improve cardiac imaging analysis and enhance the efficiency and interpretability of diagnostic procedures, thus positively impacting patient care and clinical decision-making.
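The "parcels" above come from the paper's STEGO-on-DINO pipeline, which is not reproduced here; as a loose, minimal illustration of the underlying idea (clustering per-patch backbone embeddings into self-segmented regions), the following sketch runs plain k-means over a toy grid of synthetic patch features. The grid size, feature dimension, and k-means initialisation are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def kmeans(features, k, iters=50):
    """Plain k-means over per-patch feature vectors of shape (n_patches, dim).
    Centers are initialised from evenly spaced samples for determinism."""
    centers = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each patch to its nearest cluster center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned patches.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return labels

# Toy stand-in for backbone patch embeddings: a 14x14 grid of 32-dim features
# where the top half is shifted, so two "parcels" should emerge.
rng = np.random.default_rng(1)
grid = rng.normal(size=(14, 14, 32))
grid[:7] += 3.0
parcels = kmeans(grid.reshape(-1, 32), k=2).reshape(14, 14)
```

In the actual approach, the features would come from a pre-trained transformer backbone and the grouping is learned rather than fixed-k clustering; the sketch only shows why well-separated patch features yield anatomically coherent label maps.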

AI in motion: the impact of data augmentation strategies on mitigating MRI motion artifacts.

Westfechtel SD, Kußmann K, Aßmann C, Huppertz MS, Siepmann RM, Lemainque T, Winter VR, Barabasch A, Kuhl CK, Truhn D, Nebelung S

PubMed, May 17, 2025
Artifacts in clinical MRI can compromise the performance of AI models. This study evaluates how different data augmentation strategies affect an AI model's segmentation performance under variable artifact severity. We used an AI model based on the nnU-Net architecture to automatically quantify lower limb alignment using axial T2-weighted MR images. Three versions of the AI model were trained with different augmentation strategies: (1) no augmentation ("baseline"), (2) standard nnU-Net augmentations ("default"), and (3) "default" plus augmentations that emulate MR artifacts ("MRI-specific"). Model performance was tested on 600 MR image stacks (right and left; hip, knee, and ankle) from 20 healthy participants (mean age, 23 ± 3 years; 17 men), each imaged five times under standardized motion to induce artifacts. Two radiologists graded each stack's artifact severity as none, mild, moderate, or severe, and manually measured torsional angles. Segmentation quality was assessed using the Dice similarity coefficient (DSC), while torsional angles were compared between manual and automatic measurements using mean absolute deviation (MAD), intraclass correlation coefficient (ICC), and Pearson's correlation coefficient (r). Statistical analysis included parametric tests and a linear mixed-effects model. MRI-specific augmentation resulted in slightly (yet not significantly) better performance than the default strategy. Segmentation quality decreased with increasing artifact severity, which was partially mitigated by default and MRI-specific augmentations (e.g., severe artifacts, proximal femur: DSC<sub>baseline</sub> = 0.58 ± 0.22; DSC<sub>default</sub> = 0.72 ± 0.22; DSC<sub>MRI-specific</sub> = 0.79 ± 0.14 [p < 0.001]).
These augmentations also maintained precise torsional angle measurements (e.g., severe artifacts, femoral torsion: MAD<sub>baseline</sub> = 20.6 ± 23.5°; MAD<sub>default</sub> = 7.0 ± 13.0°; MAD<sub>MRI-specific</sub> = 5.7 ± 9.5° [p < 0.001]; ICC<sub>baseline</sub> = -0.10 [p = 0.63; 95% CI: -0.61 to 0.47]; ICC<sub>default</sub> = 0.38 [p = 0.08; -0.17 to 0.76]; ICC<sub>MRI-specific</sub> = 0.86 [p < 0.001; 0.62 to 0.95]; r<sub>baseline</sub> = 0.58 [p < 0.001; 0.44 to 0.69]; r<sub>default</sub> = 0.68 [p < 0.001; 0.56 to 0.77]; r<sub>MRI-specific</sub> = 0.86 [p < 0.001; 0.81 to 0.90]). Motion artifacts negatively impact AI models, but general-purpose augmentations enhance robustness effectively. MRI-specific augmentations offer minimal additional benefit.
Question: Motion artifacts negatively impact the performance of diagnostic AI models for MRI, but mitigation methods remain largely unexplored.
Findings: Domain-specific augmentation during training can improve the robustness and performance of a model for quantifying lower limb alignment in the presence of severe artifacts.
Clinical relevance: Excellent robustness and accuracy are crucial for deploying diagnostic AI models in clinical practice. Including domain knowledge in model training can benefit clinical adoption.
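The abstract does not specify which artifact emulations the "MRI-specific" strategy used. One common way to emulate rigid motion during acquisition, sketched below under that assumption, is to corrupt the phase of random phase-encode lines in k-space, since a shift mid-scan appears as a phase error on the lines acquired after it. The fraction of corrupted lines and the shift range are illustrative parameters.

```python
import numpy as np

def add_motion_artifact(image, line_fraction=0.3, max_shift=4.0, seed=0):
    """Emulate in-plane motion by corrupting the phase of randomly chosen
    phase-encode lines in k-space (a common MRI-specific augmentation)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n_lines = kspace.shape[0]
    corrupt = rng.random(n_lines) < line_fraction
    # A rigid shift along the phase-encode direction multiplies each
    # affected line (constant ky) by a phase factor exp(-2i*pi*ky*shift).
    ky = np.fft.fftshift(np.fft.fftfreq(n_lines))
    shifts = rng.uniform(-max_shift, max_shift, size=n_lines)
    phase = np.exp(-2j * np.pi * ky * shifts)
    kspace[corrupt] *= phase[corrupt, None]
    return np.fft.ifft2(np.fft.ifftshift(kspace)).real.astype(image.dtype)

# Toy "anatomy": a bright square on a 64x64 background.
img = np.zeros((64, 64), dtype=np.float32)
img[24:40, 24:40] = 1.0
aug = add_motion_artifact(img)
```

Applied during training with randomized parameters, such a transform exposes the segmentation model to ghosting-like corruption without requiring any artifact-laden acquisitions in the training set.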

An integrated deep learning model for early and multi-class diagnosis of Alzheimer's disease from MRI scans.

Vinukonda ER, Jagadesh BN

PubMed, May 17, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely affects memory, behavior, and cognitive function. Early and accurate diagnosis is crucial for effective intervention, yet detecting subtle changes in the early stages remains a challenge. In this study, we propose a hybrid deep learning-based multi-class classification system for AD using magnetic resonance imaging (MRI). The proposed approach integrates an improved DeepLabV3+ (IDeepLabV3+) model for lesion segmentation, followed by feature extraction using the LeNet-5 model. A novel feature selection method based on average correlation and error probability is employed to enhance classification efficiency. Finally, an Enhanced ResNext (EResNext) model is used to classify AD into four stages: non-dementia (ND), very mild dementia (VMD), mild dementia (MD), and moderate dementia (MOD). The proposed model achieves an accuracy of 98.12%, demonstrating its superior performance over existing methods. The area under the ROC curve (AUC) further validates its effectiveness, with the highest score of 0.97 for moderate dementia. This study highlights the potential of hybrid deep learning models in improving early AD detection and staging, contributing to more accurate clinical diagnosis and better patient care.

Intracranial hemorrhage segmentation and classification framework in computer tomography images using deep learning techniques.

Ahmed SN, Prakasam P

PubMed, May 17, 2025
Automated diagnosis and hemorrhage segmentation in computed tomography (CT) images could benefit neurosurgeons by helping them create treatment strategies that increase the survival rate. Owing to the significance of medical image segmentation and the difficulty of performing it manually, a wide variety of automated techniques have been developed, with a primary focus on particular image modalities. In this paper, a MUNet (Multiclass-UNet) based Intracranial Hemorrhage Segmentation and Classification Framework (IHSNet) is proposed to successfully segment multiple kinds of hemorrhages, while fully connected layers classify the hemorrhage type. The segmentation accuracy rate for hemorrhages is 98.53%, while classification accuracy stands at 98.71%. Intraventricular hemorrhage (IVH), epidural hemorrhage (EDH), intraparenchymal hemorrhage (IPH), subdural hemorrhage (SDH), and subarachnoid hemorrhage (SAH) are the subtypes of intracranial hemorrhage (ICH), with Dice coefficients of 0.77, 0.84, 0.64, 0.80, and 0.92, respectively. The proposed method has a great deal of clinical application potential for computer-aided diagnostics and can be extended in the future to handle further medical image segmentation problems and their associated challenges.
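The per-subtype Dice coefficients reported above follow the standard definition Dice = 2|P ∩ T| / (|P| + |T|) evaluated one label at a time. A minimal sketch of that per-class computation on toy label maps (the labels and maps are illustrative, not the paper's data):

```python
import numpy as np

def dice_per_class(pred, truth, classes):
    """Dice = 2|P ∩ T| / (|P| + |T|), computed separately for each label."""
    scores = {}
    for c in classes:
        p, t = (pred == c), (truth == c)
        denom = p.sum() + t.sum()
        # Convention: if a class is absent from both maps, count it as perfect.
        scores[c] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores

# Toy 8x8 label maps with two hemorrhage "subtypes" (1 and 2); 0 = background.
truth = np.zeros((8, 8), dtype=int)
truth[:4, :4] = 1
truth[4:, 4:] = 2
pred = truth.copy()
pred[0, :4] = 0  # the prediction misses the top row of class 1

scores = dice_per_class(pred, truth, classes=[1, 2])
```

Here class 2 is matched exactly (Dice 1.0), while the missed row of class 1 gives 2·12 / (12 + 16) ≈ 0.857, showing how the metric penalizes under-segmentation per subtype.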

Fair ultrasound diagnosis via adversarial protected attribute aware perturbations on latent embeddings.

Xu Z, Tang F, Quan Q, Yao Q, Kong Q, Ding J, Ning C, Zhou SK

PubMed, May 17, 2025
Deep learning techniques have significantly enhanced the convenience and precision of ultrasound image diagnosis, particularly in the crucial step of lesion segmentation. However, recent studies reveal that both train-from-scratch models and pre-trained models often exhibit performance disparities across sex and age attributes, leading to biased diagnoses for different subgroups. In this paper, we propose APPLE, a novel approach designed to mitigate unfairness without altering the parameters of the base model. APPLE achieves this by learning fair perturbations in the latent space through a generative adversarial network. Extensive experiments on both a publicly available dataset and an in-house ultrasound image dataset demonstrate that our method improves segmentation and diagnostic fairness across all sensitive attributes and various backbone architectures compared to the base models. Through this study, we aim to highlight the critical importance of fairness in medical segmentation and contribute to the development of a more equitable healthcare system.

Artificial intelligence generated 3D body composition predicts dose modifications in patients undergoing neoadjuvant chemotherapy for rectal cancer.

Besson A, Cao K, Mardinli A, Wirth L, Yeung J, Kokelaar R, Gibbs P, Reid F, Yeung JM

PubMed, May 16, 2025
Chemotherapy administration is a balancing act between giving enough to achieve the desired tumour response while limiting adverse effects. Chemotherapy dosing is based on body surface area (BSA). Emerging evidence suggests body composition plays a crucial role in the pharmacokinetic and pharmacodynamic profile of cytotoxic agents and could inform optimal dosing. This study aims to assess how lumbosacral body composition influences adverse events in patients receiving neoadjuvant chemotherapy for rectal cancer. A retrospective study (February 2013 to March 2023) examined the impact of body composition on neoadjuvant treatment outcomes for rectal cancer patients. Staging CT scans were analysed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and subcutaneous adipose tissue volume and density. Multivariate analyses explored the relationship between body composition and chemotherapy outcomes. 242 patients were included (164 males, 78 females); median age was 63.4 years. Chemotherapy dose reductions occurred more frequently in females (26.9% vs. 15.9%, p = 0.042) and in females with greater VAT density (-82.7 vs. -89.1, p = 0.007) and SM: IMAT + VAT volume ratio (1.99 vs. 1.36, p = 0.042). BSA was a poor predictor of dose reduction (AUC 0.397, sensitivity 38%, specificity 60%) for female patients, whereas the SM: IMAT + VAT volume ratio (AUC 0.651, sensitivity 76%, specificity 61%) and VAT density (AUC 0.699, sensitivity 57%, specificity 74%) showed greater predictive ability. Body composition did not influence dose adjustment in male patients. Lumbosacral body composition outperformed BSA in predicting adverse events in female patients with rectal cancer undergoing neoadjuvant chemotherapy.
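The AUC values above compare how well each marker ranks dose-reduced against non-dose-reduced patients. AUC is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch of that computation on toy marker values (illustrative numbers, not the study's data):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney formulation: the probability that a random
    positive case outscores a random negative case (ties count half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive/negative pair of scores.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: a marker that separates dose-reduced cases (label 1) imperfectly.
labels = [1, 1, 1, 0, 0, 0]
marker = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(marker, labels)  # 8 of 9 pairs correctly ordered
```

An AUC below 0.5, like the 0.397 reported for BSA, means the marker ranks cases worse than chance in that direction, which is why the body-composition ratios were the stronger predictors.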

Enhancing Craniomaxillofacial Surgeries with Artificial Intelligence Technologies.

Do W, van Nistelrooij N, Bergé S, Vinayahalingam S

PubMed, May 16, 2025
Artificial intelligence (AI) can be applied in multiple subspecialties in craniomaxillofacial (CMF) surgeries. This article reviews AI fundamentals, focusing on classification, object detection, and segmentation, the core tasks used in CMF applications. The article then explores the development and integration of AI in dentoalveolar surgery, implantology, traumatology, oncology, craniofacial surgery, and orthognathic and feminization surgery. It highlights AI-driven advancements in diagnosis, pre-operative planning, intra-operative assistance, post-operative management, and outcome prediction. Finally, the challenges in AI adoption are discussed, including data limitations, algorithm validation, and clinical integration.

Pretrained hybrid transformer for generalizable cardiac substructures segmentation from contrast and non-contrast CTs in lung and breast cancers

Aneesh Rangnekar, Nikhil Mankuzhy, Jonas Willmann, Chloe Choi, Abraham Wu, Maria Thor, Andreas Rimner, Harini Veeraraghavan

arXiv preprint, May 16, 2025
AI automated segmentations for radiation treatment planning (RTP) can deteriorate when applied to clinical cases whose characteristics differ from the training dataset. Hence, we refined a pretrained transformer into a hybrid transformer convolutional network (HTN) to segment cardiac substructures in lung and breast cancer patients imaged with varying contrasts and scan positions. Cohort I, consisting of 56 contrast-enhanced (CECT) and 124 non-contrast CT (NCCT) scans from patients with non-small cell lung cancers acquired in supine position, was used to create an oracle model with all 180 training cases and a balanced (CECT: 32, NCCT: 32 training) HTN model. Models were evaluated on a held-out validation set of 60 cohort I patients and 66 patients with breast cancer from cohort II acquired in supine (n = 45) and prone (n = 21) positions. Accuracy was measured using DSC, HD95, and dose metrics. The publicly available TotalSegmentator served as the benchmark. The oracle and balanced models were similarly accurate (DSC cohort I: 0.80 ± 0.10 versus 0.81 ± 0.10; cohort II: 0.77 ± 0.13 versus 0.80 ± 0.12), and both outperformed TotalSegmentator. The balanced model, using half as many training cases as the oracle, produced dose metrics similar to manual delineations for all cardiac substructures. This model was robust to CT contrast in 6 of 8 substructures and to patient scan position in 5 of 8 substructures, and its accuracy showed low correlation with patient size and age. An HTN demonstrated robustly accurate (geometric and dose metric) segmentation of cardiac substructures from CTs with varying imaging and patient characteristics, a key requirement for clinical use. Moreover, the model combining pretraining with a balanced distribution of NCCT and CECT scans provided reliably accurate segmentations under varied conditions with far fewer labeled cases than the oracle model.
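The HD95 metric used above is the 95th percentile of symmetric surface distances, a robust alternative to the maximum Hausdorff distance. A minimal sketch on contour point sets (real evaluations operate on segmentation surfaces with voxel spacing; the unit-spaced toy contours here are an illustrative simplification):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets,
    each given as an (n, d) array of boundary coordinates."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # Pairwise distances, then nearest-neighbour distance in each direction.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)
    b_to_a = d.min(axis=0)
    return np.percentile(np.concatenate([a_to_b, b_to_a]), 95)

# Toy contours: the boundary of a 10x10 square, and a copy shifted 1 unit in x.
square = np.array([[x, y] for x in range(10) for y in range(10)
                   if x in (0, 9) or y in (0, 9)], float)
shifted = square + [1.0, 0.0]
d95 = hd95(square, shifted)
```

Because the percentile discards the worst 5% of surface distances, HD95 is far less sensitive to a single stray voxel than the plain Hausdorff distance, which is why it is the standard contour metric alongside DSC.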

Pancreas segmentation using AI developed on the largest CT dataset with multi-institutional validation and implications for early cancer detection.

Mukherjee S, Antony A, Patnam NG, Trivedi KH, Karbhari A, Nagaraj M, Murlidhar M, Goenka AH

PubMed, May 16, 2025
Accurate and fully automated pancreas segmentation is critical for advancing imaging biomarkers in early pancreatic cancer detection and for biomarker discovery in endocrine and exocrine pancreatic diseases. We developed and evaluated a deep learning (DL)-based convolutional neural network (CNN) for automated pancreas segmentation using the largest single-institution dataset to date (n = 3031 CTs). Ground truth segmentations were performed by radiologists, which were used to train a 3D nnU-Net model through five-fold cross-validation, generating an ensemble of top-performing models. To assess generalizability, the model was externally validated on the multi-institutional AbdomenCT-1K dataset (n = 585), for which volumetric segmentations were newly generated by expert radiologists and will be made publicly available. In the test subset (n = 452), the CNN achieved a mean Dice Similarity Coefficient (DSC) of 0.94 (SD 0.05), demonstrating high spatial concordance with radiologist-annotated volumes (Concordance Correlation Coefficient [CCC]: 0.95). On the AbdomenCT-1K dataset, the model achieved a DSC of 0.96 (SD 0.04) and a CCC of 0.98, confirming its robustness across diverse imaging conditions. The proposed DL model establishes new performance benchmarks for fully automated pancreas segmentation, offering a scalable and generalizable solution for large-scale imaging biomarker research and clinical translation.
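The Concordance Correlation Coefficient reported above is Lin's CCC, which, unlike Pearson's r, penalizes systematic offsets between automated and manual measurements as well as scatter. A minimal sketch with illustrative toy volumes (not the study's data):

```python
import numpy as np

def ccc(x, y):
    """Lin's Concordance Correlation Coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Toy pancreas volumes (mL): automated values perfectly correlated with the
# manual ones but carrying a constant +2 mL offset.
manual = np.array([70.0, 82.0, 95.0, 88.0, 76.0])
auto = manual + 2.0
```

Here Pearson's r would be exactly 1.0 despite the offset, while the CCC dips below 1.0 because the (mean difference)² term in the denominator charges for the bias; this is why CCC is the preferred agreement statistic for validating automated volumetry against radiologist annotations.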