
Fuster-Matanzo A, Picó-Peris A, Bellvís-Bataller F, Jimenez-Pastor A, Weiss GJ, Martí-Bonmatí L, Lázaro Sánchez A, Bazaga D, Banna GL, Addeo A, Camps C, Seijo LM, Alberich-Bayarri Á

PubMed | Sep 8 2025
In non-small cell lung cancer (NSCLC), non-invasive alternatives to biopsy-dependent driver mutation analysis are needed. We reviewed the effectiveness of radiomics alone or combined with clinical data and assessed the performance of artificial intelligence (AI) models in predicting oncogene mutation status. A PRISMA-compliant literature review of studies predicting oncogene mutation status in NSCLC patients using radiomics was conducted by a multidisciplinary team. Meta-analyses evaluating the performance of AI-based models developed with CT-derived radiomics features, alone or combined with clinical data, were performed. A meta-regression analyzing the influence of different predictors was also conducted. Of 890 studies identified, 124 evaluating models for the prediction of epidermal growth factor receptor (EGFR), anaplastic lymphoma kinase (ALK), and Kirsten rat sarcoma virus (KRAS) mutations were included in the systematic review, of which 51 were meta-analyzed. The AI algorithms' sensitivity/false positive rate (FPR) in predicting mutation status using radiomics-based models was 0.754 (95% CI 0.727-0.780)/0.344 (95% CI 0.308-0.381) for EGFR, 0.754 (95% CI 0.638-0.841)/0.225 (95% CI 0.163-0.302) for ALK, and 0.475 (95% CI 0.153-0.820)/0.181 (95% CI 0.054-0.461) for KRAS. A meta-analysis of combined models was possible for EGFR mutation, revealing a sensitivity of 0.806 (95% CI 0.777-0.833) and an FPR of 0.315 (95% CI 0.270-0.364). No statistically significant results were obtained in the meta-regression. Radiomics-based models may offer a non-invasive alternative for determining oncogene mutation status in NSCLC. Further research is required to analyze whether clinical data might boost their performance.
Question: Can imaging-based radiomics and artificial intelligence non-invasively predict oncogene mutation status to improve diagnosis in non-small cell lung cancer (NSCLC)?
Findings: Radiomics-based models achieved high performance in predicting mutation status in NSCLC; adding clinical data showed limited improvement in predictive performance.
Clinical relevance: Radiomics and AI tools offer a non-invasive strategy to support molecular profiling in NSCLC. Validation studies addressing clinical and methodological aspects are essential to ensure their reliability and integration into routine clinical practice.
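As a rough illustration of the pooling step behind sensitivity figures like those above, the sketch below applies fixed-effect inverse-variance weighting to logit-transformed per-study sensitivities. The study counts are hypothetical, and the review's actual bivariate random-effects meta-analysis is more elaborate than this.

```python
# A minimal sketch of pooling per-study sensitivities on the logit scale.
# Study counts are hypothetical; a real diagnostic-accuracy meta-analysis
# would use a bivariate random-effects model over sensitivity and FPR.
import math

def pool_sensitivity(tp_fn_pairs):
    """Inverse-variance pooling of logit-transformed sensitivities."""
    num = den = 0.0
    for tp, fn in tp_fn_pairs:
        tp, fn = tp + 0.5, fn + 0.5          # continuity correction
        sens = tp / (tp + fn)
        logit = math.log(sens / (1 - sens))
        weight = 1 / (1 / tp + 1 / fn)       # inverse of the logit variance
        num += weight * logit
        den += weight
    pooled = num / den
    return 1 / (1 + math.exp(-pooled))       # back-transform to a proportion

print(pool_sensitivity([(80, 20), (45, 15), (120, 40)]))  # pooled sensitivity
```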

Diego Fajardo-Rojas, Levente Baljer, Jordina Aviles Verdera, Megan Hall, Daniel Cromb, Mary A. Rutherford, Lisa Story, Emma C. Robinson, Jana Hutter

arXiv preprint | Sep 8 2025
Preterm birth is a major cause of mortality and lifelong morbidity in childhood. Its complex and multifactorial origins limit the effectiveness of current clinical predictors and impede optimal care. In this study, a dual-branch deep learning architecture (PUUMA) was developed to predict gestational age (GA) at birth using T2* fetal MRI data from 295 pregnancies, encompassing a heterogeneous and imbalanced population. The model integrates both global whole-uterus and local placental features. Its performance was benchmarked against linear regression on cervical length measurements obtained by experienced clinicians from anatomical MRI, and against other deep learning architectures. Predictions of GA at birth were assessed using mean absolute error; accuracy, sensitivity, and specificity were used to assess preterm classification. Both the fully automated MRI-based pipeline and the cervical length regression achieved comparable mean absolute errors (3 weeks) and good sensitivity (0.67) for detecting preterm birth, despite pronounced class imbalance in the dataset. These results provide a proof of concept for automated prediction of GA at birth from functional MRI and underscore the value of whole-uterus functional imaging in identifying at-risk pregnancies. Additionally, we demonstrate that manual, high-definition cervical length measurements derived from MRI, not currently routine in clinical practice, offer valuable predictive information. Future work will focus on expanding the cohort size and incorporating additional organ-specific imaging to improve generalisability and predictive performance.
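The abstract does not describe PUUMA's internals, but the dual-branch idea can be sketched as follows: two encoders, one for the whole-uterus volume and one for a placental crop, with fused features feeding a gestational-age regression head. Every layer choice here (a small 3D CNN encoder, the feature sizes) is an illustrative assumption, not the paper's architecture.

```python
# A minimal PyTorch sketch of a dual-branch (global + local) GA regressor.
# Encoder design and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

def encoder(out_dim=64):
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(16, out_dim),
    )

class DualBranchGA(nn.Module):
    def __init__(self):
        super().__init__()
        self.global_branch = encoder()   # whole-uterus T2* volume
        self.local_branch = encoder()    # cropped placental volume
        self.head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, uterus, placenta):
        fused = torch.cat([self.global_branch(uterus),
                           self.local_branch(placenta)], dim=1)
        return self.head(fused)          # predicted GA at birth (weeks)

model = DualBranchGA()
ga = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 16, 16, 16))
```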

Sepehr Salem, M. Moein Esfahani, Jingyu Liu, Vince Calhoun

arXiv preprint | Sep 8 2025
Data scarcity hinders deep learning for medical imaging. We propose a framework for breast cancer classification in thermograms that addresses this using a Diffusion Probabilistic Model (DPM) for data augmentation. Our DPM-based augmentation is shown to be superior to both traditional methods and a ProGAN baseline. The framework fuses deep features from a pre-trained ResNet-50 with handcrafted nonlinear features (e.g., fractal dimension) derived from U-Net-segmented tumors. An XGBoost classifier trained on these fused features achieves 98.0% accuracy and 98.1% sensitivity. Ablation studies and statistical tests confirm that both the DPM augmentation and the nonlinear feature fusion are critical, statistically significant components of this success. This work validates the synergy between advanced generative models and interpretable features for creating highly accurate medical diagnostic tools.
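The fusion-and-classification step can be sketched as below: penultimate-layer ResNet-50 features concatenated with handcrafted features, then an XGBoost classifier. The handcrafted features are stand-in random values here, and the DPM augmentation and U-Net segmentation are assumed to happen upstream; none of this reproduces the paper's exact pipeline.

```python
# A minimal sketch of deep/handcrafted feature fusion with XGBoost.
# Handcrafted features (e.g., fractal dimension) are placeholders here.
import numpy as np
import torch
from torchvision import models
from xgboost import XGBClassifier

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
resnet.fc = torch.nn.Identity()            # expose 2048-d penultimate features
resnet.eval()

def deep_features(batch):                  # batch: (N, 3, 224, 224)
    with torch.no_grad():
        return resnet(batch).numpy()

images = torch.randn(8, 3, 224, 224)       # placeholder thermogram crops
handcrafted = np.random.rand(8, 5)         # placeholder nonlinear features
labels = np.random.randint(0, 2, 8)

fused = np.concatenate([deep_features(images), handcrafted], axis=1)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(fused, labels)
```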

Evgeny Alves Limarenko, Anastasiia Alexandrovna Studenikina

arXiv preprint | Sep 8 2025
In multi-task learning (MTL), gradient conflict poses a significant challenge. Effective methods for addressing this problem, including PCGrad, CAGrad, and GradNorm, are computationally demanding in their original implementations, which significantly limits their application to modern large models and transformers. We propose Gradient Conductor (GCond), a method that builds on PCGrad principles, combining them with gradient accumulation and an adaptive arbitration mechanism. We evaluated GCond on self-supervised learning tasks using MobileNetV3-Small and ConvNeXt architectures on the ImageNet-1K dataset and a combined head-and-neck CT scan dataset, comparing the proposed method against baseline linear combinations and state-of-the-art gradient conflict resolution methods. The stochastic mode of GCond achieved a two-fold computational speedup while maintaining optimization quality, and demonstrated superior performance across all evaluated metrics, achieving lower L1 and SSIM losses than other methods on both datasets. GCond exhibited high scalability, being successfully applied to both compact models (MobileNetV3-Small) and large architectures (ConvNeXt-Tiny and ConvNeXt-Base). It also showed compatibility with modern optimizers such as AdamW and Lion/LARS. Therefore, GCond offers a scalable and efficient solution to the problem of gradient conflicts in multi-task learning.
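Since GCond builds on PCGrad, the underlying projection rule is worth showing: when two task gradients conflict (negative dot product), the component of one along the other is removed before the gradients are combined. The sketch below is plain PCGrad; GCond's gradient accumulation and adaptive arbitration are not reproduced.

```python
# A minimal sketch of the PCGrad projection rule for conflicting gradients.
import torch

def pcgrad(grads):
    """grads: list of flattened per-task gradient tensors."""
    projected = [g.clone() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:                       # gradients conflict
                g_i -= (dot / g_j.norm() ** 2) * g_j
    return torch.stack(projected).sum(dim=0)  # combined update direction

update = pcgrad([torch.tensor([1.0, 0.0]), torch.tensor([-0.5, 1.0])])
```

In this worked example, the first gradient [1, 0] becomes [0.8, 0.4], which is orthogonal to the conflicting [-0.5, 1], so the combined update no longer works against either task.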

Raja Mallina, Bryar Shareef

arXiv preprint | Sep 8 2025
Background: Precise breast ultrasound (BUS) segmentation supports reliable measurement, quantitative analysis, and downstream classification, yet remains difficult for small or low-contrast lesions with fuzzy margins and speckle noise. Text prompts can add clinical context, but directly applying weakly localized text-image cues (e.g., CAM/CLIP-derived signals) tends to produce coarse, blob-like responses that smear boundaries unless additional mechanisms recover fine edges.
Methods: We propose XBusNet, a novel dual-prompt, dual-branch multimodal model that combines image features with clinically grounded text. A global pathway based on a CLIP Vision Transformer encodes whole-image semantics conditioned on lesion size and location, while a local U-Net pathway emphasizes precise boundaries and is modulated by prompts that describe shape, margin, and Breast Imaging Reporting and Data System (BI-RADS) terms. Prompts are assembled automatically from structured metadata, requiring no manual clicks. We evaluate on the Breast Lesions USG (BLU) dataset using five-fold cross-validation. Primary metrics are Dice and Intersection over Union (IoU); we also conduct size-stratified analyses and ablations to assess the roles of the global and local paths and the text-driven modulation.
Results: XBusNet achieves state-of-the-art performance on BLU, with a mean Dice of 0.8765 and IoU of 0.8149, outperforming six strong baselines. Small lesions show the largest gains, with fewer missed regions and fewer spurious activations. Ablation studies show complementary contributions of global context, local boundary modeling, and prompt-based modulation.
Conclusions: A dual-prompt, dual-branch multimodal design that merges global semantics with local precision yields accurate BUS segmentation masks and improves robustness for small, low-contrast lesions.
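For reference, the two headline metrics are straightforward to compute on binary masks; the sketch below uses random masks purely as placeholders.

```python
# A minimal sketch of Dice and IoU on binary segmentation masks.
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

pred = np.random.rand(256, 256) > 0.5     # placeholder prediction mask
target = np.random.rand(256, 256) > 0.5   # placeholder ground-truth mask
print(dice_iou(pred, target))
```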

Mansour S, Anter E, Mohamed AK, Dahaba MM, Mousa A

PubMed | Sep 8 2025
The purpose of this study was to assess the accuracy of a customized deep learning model based on a CNN and U-Net for detecting and segmenting the second mesiobuccal canal (MB2) of maxillary first molar teeth on cone beam computed tomography (CBCT) scans. CBCT scans of 37 patients were imported into 3D Slicer software to crop and segment the canals of the mesiobuccal (MB) root of the maxillary first molar. The annotated data were divided into two groups: 80% for training and validation and 20% for testing. The data were used to train the AI model in two separate steps: a classification model based on a customized CNN, and a segmentation model based on U-Net. A confusion matrix and receiver-operating characteristic (ROC) analysis were used in the statistical evaluation of the classification results, whereas the Dice coefficient (DCE) was used to express segmentation accuracy. The F1 score, testing accuracy, recall, and precision were 0.93, 0.87, 1.0, and 0.87, respectively, for the cropped images of the MB root of maxillary first molar teeth in the testing group. The testing loss was 0.4, and the area under the curve (AUC) value was 0.57. The segmentation accuracy results were satisfactory, with a training DCE of 0.85 and a testing DCE of 0.79. MB2 in the maxillary first molar can be precisely detected and segmented in CBCT images with the developed AI algorithm. Current Controlled Trial Number NCT05340140. April 22, 2022.
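The reported classification figures (F1, accuracy, recall, precision, AUC) follow directly from the confusion matrix and ROC analysis; a minimal sketch with placeholder labels, not the study's data:

```python
# A minimal sketch of the reported classification metrics; labels and
# probabilities below are placeholders, not the study's data.
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             roc_auc_score)

y_true = [1, 1, 1, 0, 0, 1, 1, 0]
y_pred = [1, 1, 1, 1, 0, 1, 1, 0]
y_prob = [0.9, 0.8, 0.7, 0.6, 0.3, 0.95, 0.85, 0.4]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
print(f1, accuracy_score(y_true, y_pred), recall, precision)
print(roc_auc_score(y_true, y_prob))      # AUC from the ROC analysis
```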

Sukhdeep Bal, Emma Colbourne, Jasmine Gan, Ludovica Griffanti, Taylor Hanayik, Nele Demeyere, Jim Davies, Sarah T Pendlebury, Mark Jenkinson

arXiv preprint | Sep 8 2025
Quantification of brain atrophy currently relies on visual rating scales, which are time-consuming, so automated brain image analysis is warranted. We validated our automated deep learning (DL) tool measuring the Global Cerebral Atrophy (GCA) score against trained human raters, and examined associations with age and cognitive impairment, in representative older (>65 years) patients. CT brain scans were obtained from patients in acute medicine (ORCHARD-EPR), acute stroke (OCS studies), and a legacy sample. Scans were divided in a 60/20/20 ratio for training, optimisation, and testing. CT images were assessed by two trained raters (rater-1: 864 scans, rater-2: 20 scans). Agreement between DL-tool-predicted GCA scores (range 0-39) and the visual ratings was evaluated using mean absolute error (MAE) and Cohen's weighted kappa. Among 864 scans (ORCHARD-EPR=578, OCS=200, legacy scans=86), the MAE between the DL tool and rater-1 GCA scores was 3.2 overall, 3.1 for ORCHARD-EPR, 3.3 for OCS, and 2.6 for the legacy scans, and half of the scans had a DL-predicted GCA error between -2 and 2. Inter-rater agreement was kappa=0.45 between the DL tool and rater-1 and 0.41 between the tool and rater-2, whereas it was lower, at 0.28, between rater-1 and rater-2. There was no difference in GCA scores between the DL tool and the two raters (one-way ANOVA, p=0.35), or in mean GCA scores between the DL tool and rater-1 (paired t-test, t=-0.43, p=0.66), the tool and rater-2 (t=1.35, p=0.18), or rater-1 and rater-2 (t=0.99, p=0.32). DL-tool GCA scores correlated with age and cognitive scores (both p<0.001). Our DL CT-brain analysis tool measured the GCA score accurately and without user input on real-world scans acquired from older patients. Our tool will enable extraction of standardised quantitative measures of atrophy at scale for use in health data research and acts as a proof of concept towards a point-of-care, clinically approved tool.
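The two agreement measures are standard; a minimal sketch, assuming linear kappa weights (the abstract does not state which weighting was used) and hypothetical GCA scores:

```python
# A minimal sketch of MAE and Cohen's weighted kappa between tool and rater.
# Scores are hypothetical integers on the 0-39 GCA scale; linear weights
# are an assumption, as the weighting scheme is not stated in the abstract.
import numpy as np
from sklearn.metrics import cohen_kappa_score

tool = np.array([12, 5, 20, 8, 15])
rater = np.array([10, 6, 22, 8, 13])

mae = np.abs(tool - rater).mean()
kappa = cohen_kappa_score(tool, rater, weights="linear")
print(mae, kappa)
```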

Bansal S, Peterson BS, Gupte C, Sawardekar S, Gonzalez Anaya MJ, Ordonez M, Bhojwani D, Santoro JD, Bansal R

PubMed | Sep 8 2025
We propose a Biophysically Restrained Analog Integrated Neural Network (BRAINN), an analog electrical network that models the dynamics of brain function. The network interconnects analog electrical circuits that simulate two tightly coupled brain processes: (1) propagation of an action potential, and (2) regional cerebral blood flow in response to the metabolic demands of signal propagation. These two processes are modeled by two branches of an electrical circuit comprising a resistor, a capacitor, and an inductor. We estimated the electrical components from in vivo multimodal MRI together with the biophysical properties of the brain applied to state-space equations, reducing arbitrary parameters such that the dynamic behavior is determined by neuronal integrity. Electrical circuits were interconnected at Brodmann areas to form a network using neural pathways traced with diffusion tensor imaging data. We built BRAINN in MATLAB Simulink using longitudinal multimodal MRI data from 20 healthy controls and 19 children with leukemia. BRAINN, stimulated by an impulse applied to the lateral temporal region, generated sustained activity. Stimulated BRAINN functional connectivity was comparable (within ±1.3 standard deviations) to measured resting-state functional connectivity in 40 of the 55 pairs of brain regions. Control-system analyses showed that BRAINN was stable for all participants. BRAINN controllability in patients relative to healthy participants was disrupted prior to treatment but improved during treatment. BRAINN is scalable as more detailed regions and fiber tracts are traced in the MRI data. A scalable BRAINN will facilitate the study of brain behavior in health and illness, and help identify targets and design transcranial stimulation for optimally modulating brain activity.
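The abstract does not give the circuit topology or parameter values, but as a hedged illustration, a single series RLC branch driven by an input voltage u(t) has the state-space form below, with capacitor voltage v_C and inductor current i_L as the states:

```latex
% A generic series RLC branch in state-space form; the actual BRAINN
% topology and MRI-derived parameter estimates are not in the abstract.
\[
\frac{d}{dt}
\begin{bmatrix} v_C \\ i_L \end{bmatrix}
=
\begin{bmatrix} 0 & \frac{1}{C} \\ -\frac{1}{L} & -\frac{R}{L} \end{bmatrix}
\begin{bmatrix} v_C \\ i_L \end{bmatrix}
+
\begin{bmatrix} 0 \\ \frac{1}{L} \end{bmatrix} u(t)
\]
```

Estimating R, L, and C per region from imaging fixes the eigenvalues of the system matrix, which is what makes the stability and controllability analyses described above possible.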

Pulik Ł, Czech P, Kaliszewska J, Mulewicz B, Pykosz M, Wiszniewska J, Łęgosz P

PubMed | Sep 8 2025
Background: Developmental dysplasia of the hip (DDH), if not treated, can lead to osteoarthritis and disability. Ultrasound (US) is a primary screening method for the detection of DDH, but its interpretation remains highly operator-dependent. We propose a supervised machine learning (ML) image segmentation model for the automated recognition of anatomical structures in hip US images.
Methods: We conducted a retrospective observational analysis of a dataset of 10,767 hip US images from 311 patients. All images were annotated for eight key structures according to the Graf method and split into training (75.0%), validation (9.5%), and test (15.5%) sets. Model performance was assessed using the Intersection over Union (IoU) and Dice Similarity Coefficient (DSC).
Results: The best-performing model was based on the SegNeXt architecture with an MSCAN_L backbone. The model achieved high segmentation accuracy (IoU; DSC) for the chondro-osseous border (0.632; 0.774), femoral head (0.916; 0.956), labrum (0.625; 0.769), cartilaginous roof (0.672; 0.804), and bony roof (0.725; 0.841). The average Euclidean distances for the point-based landmarks (bony rim and lower limb) were 4.8 and 4.5 pixels, respectively, and the baseline deflection angle was 1.7 degrees.
Conclusions: This ML-based approach demonstrates promising accuracy and may enhance the reliability and accessibility of US-based DDH screening. Future applications could integrate real-time angle measurement and automated classification to support clinical decision-making.

Weihsbach C, Kruse CN, Bigalke A, Heinrich MP

PubMed | Sep 8 2025
Applying pre-trained medical deep learning segmentation models to out-of-domain images often yields predictions of insufficient quality. In this study, we propose using a robust generalizing descriptor, along with augmentation, to enable domain-generalized pre-training and test-time adaptation, thereby achieving high-quality segmentation in unseen domains. Five different publicly available datasets, including 3D CT and MRI images, are used to evaluate segmentation performance in out-of-domain scenarios; the settings include abdominal, spine, and cardiac imaging. Domain-generalized pre-training on source data is used to obtain the best initial performance in the target domain. We introduce a combination of the generalizing SSC descriptor and GIN intensity augmentation for optimal generalization. Segmentation results are subsequently optimized at test time, where we propose adapting the pre-trained models for every unseen scan using a consistency scheme with the augmentation-descriptor combination. The proposed generalized pre-training and subsequent test-time adaptation improve model performance significantly in CT-to-MRI cross-domain prediction for abdominal (+46.2 and +28.2 Dice), spine (+72.9), and cardiac (+14.2 and +55.7 Dice) scenarios (p < 0.001). Our method enables the optimal, independent use of source and target data, successfully bridging domain gaps with a compact and efficient methodology.
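The test-time adaptation step can be sketched as an augmentation-consistency loop: predictions for two randomly augmented views of the same unseen scan are pushed to agree. The generic intensity augmentation below is a stand-in for the paper's GIN/SSC-descriptor combination.

```python
# A minimal sketch of consistency-based test-time adaptation. The simple
# intensity augmentation stands in for the GIN/SSC combination.
import torch
import torch.nn.functional as F

def intensity_aug(x):
    # random global gain plus mild noise; illustrative only
    return x * (0.8 + 0.4 * torch.rand(1)) + 0.05 * torch.randn_like(x)

def adapt(model, scan, steps=10, lr=1e-5):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        p1 = torch.softmax(model(intensity_aug(scan)), dim=1)
        p2 = torch.softmax(model(intensity_aug(scan)), dim=1)
        loss = F.mse_loss(p1, p2)         # make the two views agree
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```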