Comparison of DLIR and ASIR-V algorithms for virtual monoenergetic imaging in carotid CTA under a triple-low protocol.

Long J, Wang C, Yu M, Liu X, Xu W, Liu Z, Wang C, Wu Y, Sun A, Zhang S, Hu C, Xu K, Meng Y

pubmed · Sep 9, 2025
Stroke, frequently associated with carotid artery disease, is evaluated using carotid computed tomography angiography (CTA). Dual-energy CTA (DE-CTA) enhances imaging quality but presents challenges in maintaining high image clarity with low-dose scans. To compare the image quality of 50 keV virtual monoenergetic images (VMI) generated using Deep Learning Image Reconstruction (DLIR) and Adaptive Statistical Iterative Reconstruction-V (ASIR-V) algorithms under a triple-low scanning protocol in carotid CTA. A prospective study was conducted with 120 patients undergoing DE-CTA. The control group (Group 1), with a noise index (NI) of 4.0 and a contrast agent dose of 0.5 mL/kg, used the ASIR-V algorithm. The experimental group was divided into four subgroups: Group 2 (ASIR-V 50%), Group 3 (DLIR-L), Group 4 (DLIR-M), and Group 5 (DLIR-H), with a higher NI of 13.0 and a reduced contrast agent dose of 0.4 mL/kg. Objective image quality was assessed through signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and standard deviation (SD), while subjective quality was evaluated using a 5-point Likert scale. Radiation dose and contrast agent volume were also measured. The triple-low scanning protocol reduced radiation exposure by 53.2%, contrast agent volume by 19.7%, and injection rate by 19.8%. The DLIR-H setting outperformed ASIR-V, demonstrating superior image quality, better noise suppression, and improved contrast in small vessels. VMI at 50 keV showed enhanced diagnostic clarity with minimal radiation and contrast agent usage. The DLIR algorithm, particularly at high settings, significantly enhances image quality in DE-CTA VMI under a triple-low scanning protocol, offering a better balance between radiation dose reduction and image clarity.
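The objective metrics named in this abstract (SNR, CNR, SD) are standard ROI-based quantities; below is a minimal sketch of how they are typically computed from ROI measurements (the HU values are illustrative, not taken from the study):

```python
import numpy as np

def snr(roi_mean: float, roi_sd: float) -> float:
    """Signal-to-noise ratio of a vessel ROI: mean attenuation / noise (SD)."""
    return roi_mean / roi_sd

def cnr(vessel_mean: float, background_mean: float, background_sd: float) -> float:
    """Contrast-to-noise ratio between a vessel ROI and an adjacent background ROI."""
    return (vessel_mean - background_mean) / background_sd

# Illustrative (non-study) ROI statistics in HU
vessel = dict(mean=450.0, sd=12.0)   # carotid lumen ROI
muscle = dict(mean=60.0, sd=15.0)    # adjacent soft-tissue ROI
print(f"SNR = {snr(vessel['mean'], vessel['sd']):.1f}")
print(f"CNR = {cnr(vessel['mean'], muscle['mean'], muscle['sd']):.1f}")
```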

Artificial intelligence in medical imaging empowers precision neoadjuvant immunochemotherapy in esophageal squamous cell carcinoma.

Fu J, Huang X, Fang M, Feng X, Zhang XY, Xie X, Zheng Z, Dong D

pubmed · Sep 9, 2025
Neoadjuvant immunochemotherapy (nICT) has demonstrated significant potential in improving pathological response rates and survival outcomes for patients with locally advanced esophageal squamous cell carcinoma (ESCC). However, substantial interindividual variability in therapeutic outcomes highlights the urgent need for more precise predictive tools to guide clinical decision-making. Traditional biomarkers remain limited in both predictive performance and clinical feasibility. In recent years, the application of artificial intelligence (AI) in medical imaging has expanded rapidly. By incorporating voxel-level feature maps, the combination of radiomics and deep learning enables the extraction of rich textural, morphological, and microstructural features, while autonomously learning high-level abstract representations from clinical CT images, thereby revealing biological heterogeneity that is often imperceptible to conventional assessments. Leveraging these high-dimensional representations, AI models can provide more accurate predictions of nICT response. Future advancements in foundation models, multimodal integration, and dynamic temporal modeling are expected to further enhance the generalizability and clinical applicability of AI. AI-powered medical imaging is poised to support all stages of perioperative management in ESCC, playing a pivotal role in high-risk patient identification, dynamic monitoring of therapeutic response, and individualized treatment adjustment, thereby comprehensively advancing precision nICT.

Early Detection of Lung Metastases in Breast Cancer Using YOLOv10 and Transfer Learning: A Diagnostic Accuracy Study.

Taş HG, Taş MBH, Yildiz E, Aydin S

pubmed · Sep 9, 2025
BACKGROUND: This study used CT imaging analyzed with deep learning techniques to assess the diagnostic accuracy of lung metastasis detection in patients with breast cancer. The aim of the research was to create and verify a system for detecting malignant and metastatic lung lesions that uses YOLOv10 and transfer learning. MATERIAL AND METHODS: From January 2023 to 2024, CT scans of 16 patients with breast cancer who had confirmed lung metastases were gathered retrospectively from Erzincan Mengücek Gazi Training and Research Hospital. The YOLOv10 deep learning system was used to assess a labeled dataset of 1264 enhanced CT images. RESULTS: A total of 1264 labeled images from 16 patients were included. With an accuracy of 96.4%, sensitivity of 94.1%, specificity of 97.1%, and precision of 90.3%, the ResNet-50 model performed best. The robustness of the model was reflected in its area under the curve (AUC) of 0.96. After dataset tuning, the GoogLeNet model's accuracy was 97.3%. These results highlight the improved diagnostic capability of our approach over existing methods. CONCLUSIONS: This study shows how YOLOv10 and transfer learning can be used to improve the diagnostic precision of pulmonary metastasis detection in patients with breast cancer. The model's effectiveness is demonstrated by the strong performance metrics attained, opening the door for its application in clinical settings. The suggested approach supports prompt and efficient treatment decisions by lowering radiologists' workload and improving the early diagnosis of metastatic lesions.
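The transfer-learning classification component described here (an ImageNet-pretrained ResNet-50 with a new head fine-tuned on labeled CT slices) can be sketched roughly as follows; this is a generic illustration, not the authors' exact configuration, and the random tensors stand in for real CT data (grayscale slices would typically be replicated to three channels):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-50 (weights downloaded from torchvision hub)
# and replace the classification head for a binary task: lesion vs. no lesion.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False                    # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 "CT slices"
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```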

Brain CT for Diagnosis of Intracranial Disease in Ambulatory Cancer Patients: Assessment of the Diagnostic Value of Scanning Without Contrast Prior to With Contrast.

Wang E, Darbandi A, Tu L, Ballester LY, Morales CJ, Chen M, Gule-Monroe MK, Johnson JM

pubmed · Sep 9, 2025
Brain imaging with MRI or CT is standard in screening for intracranial disease among ambulatory cancer patients. Although MRI offers greater sensitivity, CT is frequently employed due to its accessibility, affordability, and faster acquisition time. However, the necessity of routinely performing a non-contrast CT with the contrast-enhanced study is unknown. This study evaluates the clinical and economic utility of the non-contrast portion of the brain CT examination. A board-certified neuroradiologist reviewed 737 brain CT reports from outpatients at MD Anderson Cancer Center who underwent contrast and non-contrast CT for cancer staging (October 2014 to March 2016) to assess whether significant findings were identified only on non-contrast CT. A GPT-3 model was then fine-tuned to extract reports with a high likelihood of unique and significant non-contrast findings from 1,980 additional brain CT reports (January 2017 to April 2022). These reports were manually reviewed by two neuroradiologists, with adjudication by a third reviewer if needed. The incremental cost-effectiveness ratio of non-contrast CT inclusion was then calculated based on Medicare reimbursement and the 95% confidence interval of the proportion of all reports in which non-contrast CT was necessary for identifying significant findings. RESULTS: Seven of 737 reports in the initial dataset revealed significant findings unique to the non-contrast CT, all of which were hemorrhage. The GPT-3 model identified 145 additional reports from the second dataset of 1,980 reports with a high likelihood of unique non-contrast CT findings for manual review. Nineteen of these reports were found to have unique and significant non-contrast CT findings. In total, 0.96% (95% CI: 0.63%-1.40%) of reports had significant findings identified only on non-contrast CT. The incremental cost-effectiveness ratio for identification of a single significant finding on non-contrast CT missed on the contrast-enhanced study was $1,855 to $4,122. In brain CT for ambulatory screening for intracranial disease in cancer patients, non-contrast CT offers limited additional diagnostic value compared to contrast-enhanced CT alone. Considering the financial cost, workload, and patient radiation exposure associated with performing a non-contrast CT, contrast-enhanced brain CT alone is sufficient for cancer staging in asymptomatic cancer patients. GPT-3 = Generative Pre-trained Transformer 3.
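The incremental cost-effectiveness ratio reported above amounts to the added reimbursement of the non-contrast acquisition divided by the proportion of studies in which it uniquely changed the findings; a back-of-the-envelope sketch follows (the per-study cost is an assumed illustrative value, chosen only because it roughly reproduces the reported range, not a figure quoted in the paper):

```python
# Hedged sketch of the ICER arithmetic: cost of adding the non-contrast CT
# per significant finding identified only on the non-contrast series.
incremental_cost_per_study = 26.0        # assumed USD per non-contrast acquisition (illustrative)
proportion_ci = (0.0063, 0.0140)         # reported 95% CI: 0.63%-1.40% of reports

icer_high = incremental_cost_per_study / proportion_ci[0]   # fewest unique findings -> costliest
icer_low = incremental_cost_per_study / proportion_ci[1]
print(f"ICER per unique finding: ${icer_low:,.0f} - ${icer_high:,.0f}")
# With these inputs the range is roughly $1,857 - $4,127, close to the reported $1,855 - $4,122.
```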

YOLOv12 Algorithm-Aided Detection and Classification of Lateral Malleolar Avulsion Fracture and Subfibular Ossicle Based on CT Images: A Multicenter Study.

Liu J, Sun P, Yuan Y, Chen Z, Tian K, Gao Q, Li X, Xia L, Zhang J, Xu N

pubmed · Sep 9, 2025
Lateral malleolar avulsion fracture (LMAF) and subfibular ossicle (SFO) are distinct entities that both present as small bone fragments near the lateral malleolus on imaging, yet require different treatment strategies. Clinical and radiological differentiation is challenging, which can impede timely and precise management. On imaging, magnetic resonance imaging (MRI) is the diagnostic gold standard for differentiating LMAF from SFO, whereas radiological differentiation on computed tomography (CT) alone is challenging in routine practice. Deep convolutional neural networks (DCNNs) have shown promise in musculoskeletal imaging diagnostics, but robust, multicenter evidence in this specific context is lacking. To evaluate several state-of-the-art DCNNs, including the latest YOLOv12 algorithm, for detecting and classifying LMAF and SFO on CT images, using MRI-based diagnoses as the gold standard, and to compare model performance with radiologists reading CT alone. In this retrospective study, 1,918 patients (LMAF: 1,253; SFO: 665) were enrolled from two hospitals in China between 2014 and 2024. MRI served as the gold standard and was independently interpreted by two senior musculoskeletal radiologists. Only CT images were used for model training, validation, and testing. CT images were manually annotated with bounding boxes. The cohort was randomly split into a training set (n=1,092), internal validation set (n=476), and external test set (n=350). Four deep learning models - Faster R-CNN, SSD, RetinaNet, and YOLOv12 - were trained and evaluated using identical procedures. Model performance was assessed using mean average precision at IoU=0.5 (mAP50), area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. The external test set was also independently interpreted by two musculoskeletal radiologists with 7 and 15 years of experience, with results compared to the best-performing model. Saliency maps were generated using Shapley values to enhance interpretability. Among the evaluated models, YOLOv12 achieved the highest detection and classification performance, with a mAP50 of 92.1% and an AUC of 0.983 on the external test set - significantly outperforming Faster R-CNN (mAP50: 63.7%, AUC: 0.79), SSD (mAP50: 63.0%, AUC: 0.63), and RetinaNet (mAP50: 67.0%, AUC: 0.73) (all P < .05). When using CT alone, radiologists performed at a moderate level (accuracy: 75.6%/69.1%; sensitivity: 75.0%/65.2%; specificity: 76.0%/71.1%), whereas YOLOv12 approached MRI-based reference performance (accuracy: 92.0%; sensitivity: 86.7%; specificity: 82.2%). Saliency maps corresponded well with expert-identified regions. While MRI (read by senior radiologists) is the gold standard for distinguishing LMAF from SFO, CT-based differentiation is challenging for radiologists. A CT-only DCNN (YOLOv12) achieved substantially higher performance than radiologists reading CT alone and approached the MRI-based reference standard, highlighting its potential to augment CT-based decision-making where MRI is limited or unavailable.
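The mAP50 criterion used above counts a predicted box as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5; a minimal sketch of that matching rule (box coordinates are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts toward mAP50 only if IoU with a ground-truth box >= 0.5
pred = (32, 40, 96, 110)   # predicted fragment bounding box (pixels, illustrative)
gt = (30, 38, 100, 112)    # annotated fragment bounding box
print(f"IoU = {iou(pred, gt):.2f}, counted as match at 0.5: {iou(pred, gt) >= 0.5}")
```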

Transposing intensive care innovation from modern warfare to other resource-limited settings.

Jarrassier A, de Rocquigny G, Delagarde C, Ezanno AC, Josse F, Dubost C, Duranteau O, Boussen S, Pasquier P

pubmed · Sep 9, 2025
Delivering intensive care in conflict zones and other resource-limited settings presents unique clinical, logistical, and ethical challenges. These contexts, characterized by disrupted infrastructure, limited personnel, and prolonged field care, require adapted strategies to ensure critical care delivery under such constraints. This scoping review aims to identify and characterize medical innovations developed or implemented in recent conflicts that may be relevant and transposable to intensive care units operating in other resource-limited settings. A scoping review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Five major databases were searched for English-language publications from 2014 to 2025. Studies describing innovations applicable to intensive care in modern warfare or resource-limited settings were included. While many studies relied on experimental or simulated models, a subset described real-world applications in resource-limited environments, including ultrasound-guided regional analgesia, resuscitative endovascular balloon occlusion of the aorta, portable blood transfusion platforms, and artificial intelligence-supported monitoring of traumatic brain injury. Training strategies such as teleconsultation/telementoring and low-cost simulation were also emphasized. Few of these intensive care innovations were validated in real-life wartime conditions. Innovations from modern warfare offer pragmatic and potentially transposable solutions for intensive care in resource-limited settings. Successfully adapting them requires validation and contextual adaptation, as well as the implementation of concrete collaborative strategies, including tailored training programs, joint simulation exercises, and structured knowledge translation initiatives, to ensure effective and sustainable integration.

Prediction of oncogene mutation status in non-small cell lung cancer: a systematic review and meta-analysis with a special focus on artificial intelligence-based methods.

Fuster-Matanzo A, Picó-Peris A, Bellvís-Bataller F, Jimenez-Pastor A, Weiss GJ, Martí-Bonmatí L, Lázaro Sánchez A, Bazaga D, Banna GL, Addeo A, Camps C, Seijo LM, Alberich-Bayarri Á

pubmed · Sep 8, 2025
In non-small cell lung cancer (NSCLC), non-invasive alternatives to biopsy-dependent driver mutation analysis are needed. We reviewed the effectiveness of radiomics alone or with clinical data and assessed the performance of artificial intelligence (AI) models in predicting oncogene mutation status. A PRISMA-compliant literature review for studies predicting oncogene mutation status in NSCLC patients using radiomics was conducted by a multidisciplinary team. Meta-analyses evaluating the performance of AI-based models developed with CT-derived radiomics features alone or combined with clinical data were performed. A meta-regression to analyze the influence of different predictors was also conducted. Of 890 studies identified, 124 evaluating models for the prediction of epidermal growth factor receptor (EGFR), anaplastic lymphoma kinase (ALK), and Kirsten rat sarcoma virus (KRAS) mutations were included in the systematic review, of which 51 were meta-analyzed. The AI algorithms' sensitivity/false positive rate (FPR) in predicting mutation status using radiomics-based models was 0.754 (95% CI 0.727-0.780)/0.344 (95% CI 0.308-0.381) for EGFR, 0.754 (95% CI 0.638-0.841)/0.225 (95% CI 0.163-0.302) for ALK, and 0.475 (95% CI 0.153-0.820)/0.181 (95% CI 0.054-0.461) for KRAS. A meta-analysis of combined models was possible for EGFR mutation, revealing a sensitivity of 0.806 (95% CI 0.777-0.833) and an FPR of 0.315 (95% CI 0.270-0.364). No statistically significant results were obtained in the meta-regression. Radiomics-based models may offer a non-invasive alternative for determining oncogene mutation status in NSCLC. Further research is required to analyze whether clinical data might boost their performance. Question: Can imaging-based radiomics and artificial intelligence non-invasively predict oncogene mutation status to improve diagnosis in non-small cell lung cancer (NSCLC)? Findings: Radiomics-based models achieved high performance in predicting mutation status in NSCLC; adding clinical data showed limited improvement in predictive performance. Clinical relevance: Radiomics and AI tools offer a non-invasive strategy to support molecular profiling in NSCLC. Validation studies addressing clinical and methodological aspects are essential to ensure their reliability and integration into routine clinical practice.
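Pooled sensitivities of this kind are often obtained by inverse-variance pooling of logit-transformed per-study proportions; a simplified fixed-effect sketch is shown below (real diagnostic meta-analyses typically use bivariate random-effects models, and the per-study counts here are hypothetical, not the review's extracted data):

```python
import numpy as np

def pool_logit(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    events = true positives per study; totals = mutation-positive cases per study."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = (events + 0.5) / (totals + 1.0)                       # continuity correction
    logit = np.log(p / (1.0 - p))
    var = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
    w = 1.0 / var
    pooled = np.sum(w * logit) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))

    def to_prob(x):
        return 1.0 / (1.0 + np.exp(-x))

    return to_prob(pooled), to_prob(pooled - 1.96 * se), to_prob(pooled + 1.96 * se)

# Hypothetical per-study counts for illustration only
sens, lo, hi = pool_logit(events=[45, 60, 38, 70], totals=[60, 80, 50, 90])
print(f"Pooled sensitivity {sens:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```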

GCond: Gradient Conflict Resolution via Accumulation-based Stabilization for Large-Scale Multi-Task Learning

Evgeny Alves Limarenko, Anastasiia Alexandrovna Studenikina

arxiv preprint · Sep 8, 2025
In multi-task learning (MTL), gradient conflict poses a significant challenge. Effective methods for addressing this problem, including PCGrad, CAGrad, and GradNorm, in their original implementations are computationally demanding, which significantly limits their application in modern large models and transformers. We propose Gradient Conductor (GCond), a method that builds upon PCGrad principles by combining them with gradient accumulation and an adaptive arbitration mechanism. We evaluated GCond on self-supervised learning tasks using MobileNetV3-Small and ConvNeXt architectures on the ImageNet 1K dataset and a combined head and neck CT scan dataset, comparing the proposed method against baseline linear combinations and state-of-the-art gradient conflict resolution methods. The stochastic mode of GCond achieved a two-fold computational speedup while maintaining optimization quality, and demonstrated superior performance across all evaluated metrics, achieving lower L1 and SSIM losses compared to other methods on both datasets. GCond exhibited high scalability, being successfully applied to both compact models (MobileNetV3-Small) and large architectures (ConvNeXt-tiny and ConvNeXt-Base). It also showed compatibility with modern optimizers such as AdamW and Lion/LARS. Therefore, GCond offers a scalable and efficient solution to the problem of gradient conflicts in multi-task learning.
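GCond builds on PCGrad-style projection: when two task gradients conflict (negative dot product), one is projected off the other's direction before the shared update. A minimal two-task sketch of that underlying projection step (not the authors' full accumulation-and-arbitration scheme):

```python
import torch

def project_conflicting(g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
    """PCGrad projection: if g_i conflicts with g_j (negative dot product),
    remove from g_i its component along g_j."""
    dot = torch.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - dot / (g_j.norm() ** 2 + 1e-12) * g_j
    return g_i

# Two flattened task gradients for the shared parameters (toy values)
g1 = torch.tensor([1.0, -2.0, 0.5])
g2 = torch.tensor([-1.0, 1.0, 0.2])

g1_proj = project_conflicting(g1.clone(), g2)
g2_proj = project_conflicting(g2.clone(), g1)
combined = g1_proj + g2_proj    # update direction applied to the shared weights
print(combined)
```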

Explainable Machine Learning for Estimating the Contrast Material Arrival Time in Computed Tomography Pulmonary Angiography.

Meng XP, Yu H, Pan C, Chen FM, Li X, Wang J, Hu C, Fang X

pubmed · Sep 8, 2025
To establish an explainable machine learning (ML) approach using patient-related and noncontrast chest CT-derived features to predict the contrast material arrival time (TARR) in CT pulmonary angiography (CTPA). This retrospective study included consecutive patients referred for CTPA between September 2023 and October 2024. Sixteen clinical and 17 chest CT-derived parameters were used as inputs for the ML approach, which employed recursive feature elimination for feature selection and XGBoost with SHapley Additive exPlanations (SHAP) for explainable modeling. The prediction target was abnormal TARR of the pulmonary artery (ie, TARR <7 s or >10 s), determined by the time to peak enhancement in the test bolus, with two models distinguishing these cases. External validation was conducted. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 666 patients (mean age, 70 [IQR, 59.3 to 78.0]; 46.8% female participants) were split into training (n = 353), testing (n = 151), and external validation (n = 162) sets. Eighty-six cases (12.9%) had TARR <7 s, and 138 cases (20.7%) had TARR >10 s. The ML models exhibited good performance in their respective testing and external validation sets (AUC: 0.911 and 0.878 for TARR <7 s; 0.834 and 0.897 for TARR >10 s). SHAP analysis identified the measurements of the vena cava and pulmonary artery as key features for distinguishing abnormal TARR. The explainable ML algorithm accurately identified normal and abnormal TARR of the pulmonary artery, facilitating personalized CTPA scans.
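The pipeline described here (recursive feature elimination feeding an XGBoost classifier, explained with SHAP) can be sketched roughly as follows; the synthetic features, labels, and hyperparameters are placeholders rather than the study's configuration:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 33))      # 16 clinical + 17 CT-derived features (synthetic)
y = rng.integers(0, 2, size=400)    # abnormal-TARR label (synthetic)

# Recursive feature elimination to keep a subset of predictors
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10).fit(X, y)
X_sel = selector.transform(X)

# Gradient-boosted classifier on the selected features
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss").fit(X_sel, y)

# SHAP values to explain which features drive the abnormal-TARR prediction
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_sel)
print(np.abs(shap_values).mean(axis=0))   # mean |SHAP| per selected feature
```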

Radiologist-AI Collaboration for Ischemia Diagnosis in Small Bowel Obstruction: Multicentric Development and External Validation of a Multimodal Deep Learning Model

Vanderbecq, Q., Xia, W. F., Chouzenoux, E., Pesquet, J.-c., Zins, M., Wagner, M.

medrxiv preprint · Sep 8, 2025
Purpose: To develop and externally validate a multimodal AI model for detecting ischemia complicating small-bowel obstruction (SBO). Methods: We combined 3D CT data with routine laboratory markers (C-reactive protein, neutrophil count) and, optionally, radiology report text. From two centers, 1,350 CT examinations were curated; 771 confirmed SBO scans were used for model development with patient-level splits. Ischemia labels were defined by surgical confirmation within 24 hours of imaging. Models (MViT, ResNet-101, DaViT) were trained as unimodal and multimodal variants. External testing used 66 independent cases from a third center. Two radiologists (attending, resident) read the test set with and without AI assistance. Performance was assessed using AUC, sensitivity, specificity, and 95% bootstrap confidence intervals; predictions included a confidence score. Results: The image-plus-laboratory model performed best on external testing (AUC 0.69 [0.59-0.79], sensitivity 0.89 [0.76-1.00], and specificity 0.44 [0.35-0.54]). Adding report text improved internal validation but did not generalize externally; image+text and full multimodal variants did not exceed image+laboratory performance. Without AI, the attending outperformed the resident (AUC 0.745 [0.617-0.845] vs 0.706 [0.581-0.818]); with AI, both improved, attending 0.752 [0.637-0.853] and resident 0.752 [0.629-0.867], rising to 0.750 [0.631-0.839] and 0.773 [0.657-0.867] with confidence display; differences were not statistically significant. Conclusion: A multimodal AI that combines CT images with routine laboratory markers outperforms single-modality approaches and boosts radiologist readers' performance, notably for the junior reader, supporting earlier, more consistent decisions within the first 24 hours. Key Points: A multimodal artificial intelligence (AI) model that combines CT images with laboratory markers detected ischemia in small-bowel obstruction with AUC 0.69 (95% CI 0.59-0.79) and sensitivity 0.89 (0.76-1.00) on external testing, outperforming single-modality models. Adding report text did not generalize across sites: the image+text model fell from AUC 0.82 (internal) to 0.53 (external), and adding text to image+biology left external AUC unchanged (0.69) with similar specificity (0.43-0.44). With AI assistance both junior and senior readers improved; the junior's AUC rose from 0.71 to 0.77, reaching senior-level performance. Summary Statement: A multicentric AI model combining CT and routine laboratory data (CRP and neutrophilia) improved radiologists' detection of ischemia in small-bowel obstruction. This tool supports earlier decision-making within the first 24 hours.
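The image-plus-laboratory fusion described here is commonly implemented as late fusion: a 3D image encoder's feature vector is concatenated with normalized laboratory values before a small classification head. A hedged PyTorch sketch (the tiny encoder and dimensions are illustrative, not the authors' MViT/ResNet-101/DaViT models):

```python
import torch
import torch.nn as nn

class CTLabFusion(nn.Module):
    """Late fusion of a 3D CT feature vector with routine lab markers
    (e.g., CRP, neutrophil count) for binary ischemia prediction."""
    def __init__(self, img_feat_dim: int = 256, n_labs: int = 2):
        super().__init__()
        # Tiny stand-in 3D encoder; the study evaluated larger backbones
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, img_feat_dim), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(img_feat_dim + n_labs, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, volume: torch.Tensor, labs: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(volume)                # (B, img_feat_dim)
        fused = torch.cat([feats, labs], dim=1)     # concatenate lab markers
        return self.head(fused)                     # ischemia logit

model = CTLabFusion()
ct = torch.randn(2, 1, 32, 64, 64)                 # toy CT volumes
labs = torch.tensor([[120.0, 11.5], [8.0, 4.2]])   # CRP (mg/L), neutrophils (10^9/L), illustrative
print(torch.sigmoid(model(ct, labs)))
```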