Page 140 of 6476462 results

Mineo E, Assuncao-Jr AN, Grego da Silva CF, Liberato G, Dantas-Jr RN, Graves CV, Gutierrez MA, Nomura CH

pubmed · Sep 24 2025
Coronary artery calcium (CAC) scoring refines atherosclerotic cardiovascular disease (ASCVD) risk but is infrequently reported on routine non-gated chest CT (NCCT), whose use expanded in the COVID-19 era. We sought to develop and validate a workflow-ready deep learning (DL) model for fully automated, protocol-agnostic CAC quantification. In this retrospective study, a DL model was trained and validated using 2132 chest CT scans (routine, CT-CAC, and CT-COVID) from patients without established ASCVD, collected (2013-2023) at a single university hospital. The index test was a DL-based CAC segmentation model; the reference standard was manual annotation by experienced observers. Agreement was evaluated using intra-class correlation coefficients (ICC) for Agatston scores and Cohen's kappa for CAC risk categories. Sensitivity, specificity, positive and negative predictive values, and F1 scores were calculated to measure diagnostic performance. The DL model demonstrated high reliability for Agatston scores (ICC=0.987) and strong agreement in CAC categories (Cohen's κ=0.86-0.95). Diagnostic performance for CAC >100 (F1=0.956) and CAC >300 (F1=0.967) was very high. External validation in the Mashhad COVID Study showed good agreement (κ=0.8); in the SBU COVID study, the F1 score for detecting moderate-to-severe CAC was 0.928. The proposed DL model delivers accurate, workflow-ready CAC quantification across routine, dedicated, and pandemic-era chest CT scans, supporting opportunistic, cost-effective cardiovascular risk stratification in contemporary clinical practice.
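As a rough illustration of the agreement metrics reported above, the sketch below bins paired Agatston scores using the abstract's >100 and >300 cutoffs and computes Cohen's kappa and an F1 score. The paired scores and the exact category boundaries are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

def cac_category(score):
    # Risk bins mirroring the abstract's >100 / >300 cutoffs (illustrative only)
    if score == 0:
        return 0
    if score <= 100:
        return 1
    if score <= 300:
        return 2
    return 3

# Hypothetical paired Agatston scores: manual reference vs. DL model output
manual = np.array([0, 12, 150, 420, 0, 95, 310, 5])
dl = np.array([0, 10, 160, 400, 0, 110, 305, 4])

# Category-level agreement and binary performance at the >100 threshold
kappa = cohen_kappa_score([cac_category(s) for s in manual],
                          [cac_category(s) for s in dl])
f1_gt100 = f1_score((manual > 100).astype(int), (dl > 100).astype(int))
```

With these toy scores, a single category disagreement (95 vs. 110) lowers kappa well below perfect agreement, which is why category cutpoints matter as much as raw score correlation.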

Baba T, Goto T, Kitamura Y, Iwasawa T, Okudela K, Takemura T, Osawa A, Ogura T

pubmed · Sep 24 2025
Multidisciplinary discussion (MDD) is the gold standard for diagnosing interstitial lung disease (ILD). However, its inter-rater agreement is unsatisfactory, and access to MDD is limited by a shortage of ILD experts; artificial intelligence could therefore be helpful for diagnosing ILD. We retrospectively analyzed data from 630 patients with ILD, including clinical information, CT images, and pathological results. The ILD classification into four clinicopathologic entities (idiopathic pulmonary fibrosis, non-specific interstitial pneumonia, hypersensitivity pneumonitis, and connective tissue disease) consists of two stages: first, pneumonia-pattern classification of CT images using a convolutional neural network (CNN) model; second, multimodal (clinical, radiological, and pathological) classification using a support vector machine (SVM). The performance of the classification algorithm was evaluated using 5-fold cross-validation. The mean accuracies of the CNN model and SVM were 62.4 % and 85.4 %, respectively. For multimodal classification using the SVM, the overall accuracy was very high, with sensitivities for idiopathic pulmonary fibrosis and hypersensitivity pneumonitis exceeding 90 %. When pneumonia patterns from CT images, pathological results, or clinical information were excluded, the SVM accuracy was 84.3 %, 70.3 %, and 79.8 %, respectively, suggesting that pathological results contributed most to the MDD diagnosis. When an unclassifiable interstitial pneumonia was input, the SVM output tended to align with the most likely diagnosis of the expert MDD team. The algorithm based on multimodal information can assist in diagnosing interstitial lung disease and is suitable for ontology-based diagnosis.
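The two-stage design described above can be sketched as follows: CNN-derived CT pattern scores are concatenated with clinical and pathological features, and the fused vector is classified by an SVM evaluated with 5-fold cross-validation. All feature matrices below are random stand-ins for the three modalities, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical stand-ins for the three modalities: CNN-derived CT pattern
# probabilities, clinical variables, and pathological findings.
ct_pattern = rng.random((n, 4))   # stage 1 output: softmax-like pattern scores
clinical = rng.random((n, 5))
pathology = rng.random((n, 3))
y = rng.integers(0, 4, size=n)    # four clinicopathologic entities

# Stage 2: multimodal fusion by concatenation, classified with an SVM
X = np.hstack([ct_pattern, clinical, pathology])
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
```

Dropping one of the three blocks from `X` before re-running `cross_val_score` reproduces the paper's ablation logic (accuracy without CT patterns, pathology, or clinical data).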

Yu W, Liu M, Qin W, Liu J, Chen S, Chen Y, Hu B, Chen Y, Liu E, Jin X, Liu S, Li C, Zhu Z

pubmed · Sep 24 2025
To explore the clinical characteristics of tuberculosis-destroyed lung (TDL) with pulmonary hypertension (PH) and to use artificial intelligence (AI)-based CT imaging for the diagnosis of PH in TDL patients. Fifty-one TDL patients were included. Based on the results of right heart catheterization, the patients were divided into two groups: TDL with PH (n=31) and TDL without PH (n=20). The original chest CT data were reconstructed, segmented, and rendered using AI, and lung volume-related data were calculated. Clinical data, hemodynamic data, and lung volume-related data were compared between the two groups. The proportion of PH in TDL patients was significantly higher than in those without TDL (61.82% vs. 22.64%, P<0.01). There were significant differences between the two groups in pulmonary function, PCWP/PVR, PASP/TRV, and total volume of destroyed lung tissue (V<sub>TDLT</sub>) (P<0.05), and V<sub>TDLT</sub> was positively correlated with mean pulmonary arterial pressure (mPAP). For the combined diagnosis (V<sub>TDLT</sub> + PASP), the area under the ROC curve (AUC) was 0.917 (95%CI: 0.802-1), with a predicted probability of 0.51 and a Youden index of 0.789; sensitivity was 90% and specificity was 88.9%. TDL accompanied by pulmonary hypertension is associated with restrictive ventilatory disorders, and V<sub>TDLT</sub> is positively correlated with mPAP. Calculating V<sub>TDLT</sub> and combining it with the PASP estimated from echocardiography assists in the diagnosis of PH in these patients.
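A minimal sketch of how an AUC and a Youden index like those reported here are derived from a continuous predictor (for example, a score built from V<sub>TDLT</sub> and PASP). The labels and scores below are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical combined predictor: 1 = PH present, 0 = absent
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.7, 0.8, 0.55, 0.9, 0.65, 0.4])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
youden = tpr - fpr            # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(youden)) # threshold index maximizing J
auc = roc_auc_score(y_true, y_score)
```

The threshold at `thresholds[best]` plays the same role as the study's "predicted probability of 0.51": it is the operating point that maximizes sensitivity plus specificity.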

Dayu Tan, Zhenpeng Xu, Yansen Su, Xin Peng, Chunhou Zheng, Weimin Zhong

arxiv preprint · Sep 24 2025
Both local details and global context are crucial in medical image segmentation, and effectively integrating them is essential for achieving high accuracy. However, existing mainstream methods based on CNN-Transformer hybrid architectures typically employ simple feature fusion techniques such as serial stacking, endpoint concatenation, or pointwise addition, which struggle to address the inconsistencies between features and are prone to information conflict and loss. To address these challenges, we propose HiPerformer. The encoder of HiPerformer employs a novel modular hierarchical architecture that dynamically fuses multi-source features in parallel, enabling layer-wise deep integration of heterogeneous information. The modular hierarchical design not only retains the independent modeling capability of each branch in the encoder, but also ensures sufficient information transfer between layers, effectively avoiding the feature degradation and information loss that come with traditional stacking methods. Furthermore, we design a Local-Global Feature Fusion (LGFF) module to achieve precise and efficient integration of local details and global semantic information, effectively alleviating the feature inconsistency problem and yielding a more comprehensive feature representation. To further enhance multi-scale feature representation and suppress noise interference, we also propose a Progressive Pyramid Aggregation (PPA) module to replace traditional skip connections. Experiments on eleven public datasets show that the proposed method outperforms existing segmentation techniques, demonstrating higher segmentation accuracy and robustness. The code is available at https://github.com/xzphappy/HiPerformer.
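The fusion strategies the abstract contrasts can be sketched in a few lines of numpy. The gated variant at the end is only an illustrative stand-in for the spirit of a learned fusion module like LGFF, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
local_feat = rng.standard_normal((8, 16))    # e.g., CNN-branch features
global_feat = rng.standard_normal((8, 16))   # e.g., Transformer-branch features

# The simple fusions the paper argues against:
added = local_feat + global_feat                              # pointwise addition
concat = np.concatenate([local_feat, global_feat], axis=-1)   # endpoint concatenation

# A gated fusion (hypothetical illustration, not the LGFF module): a per-feature
# sigmoid gate decides how much of each branch survives, instead of blind summing.
gate = 1.0 / (1.0 + np.exp(-(local_feat * global_feat)))
fused = gate * local_feat + (1.0 - gate) * global_feat
```

Pointwise addition forces both branches into one slot per feature (conflict), while concatenation doubles the width without resolving inconsistency; a gate mediates between the two sources per feature.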

Bande, J. K., Johnson, E. T., Banderudrappagari, R.

medrxiv preprint · Sep 24 2025
Purpose: In this study, we aimed to use advanced machine learning (ML) techniques, specifically transfer learning and Vision Transformers (ViTs), to accurately classify meningioma in brain MRI scans. ViTs process images similarly to how humans visually perceive details and are useful for analyzing complex medical images. Transfer learning is a technique that adapts models previously trained on large datasets to specific use cases. Using transfer learning, this study aimed to enhance the diagnostic accuracy of meningioma location classification and demonstrate the capabilities of the new technology. Approach: We used a Google ViT model pre-trained on ImageNet-21k (a dataset with 14 million images and 21,843 classes) and fine-tuned on ImageNet 2012 (a dataset with 1 million images and 1,000 classes). Using this model allowed us to leverage the predictive capabilities of a model trained on those large datasets without needing to train an entirely new model specific to meningioma MRI scans. Transfer learning was used to adapt the pre-trained ViT to our specific use case, meningioma location classification, using a dataset of 1,094 T1, contrast-enhanced, and T2-weighted MRI scans of meningiomas sorted by location in the brain into 11 classes. Results: The final model, trained and adapted on the meningioma MRI dataset, achieved an average validation accuracy of 98.17% and a test accuracy of 89.95%. Conclusions: This study demonstrates the potential of ViTs in meningioma location classification, leveraging their ability to analyze spatial relationships in medical images. While transfer learning enabled effective adaptation with limited data, class imbalance affected classification performance. Future work should focus on expanding datasets and incorporating ensemble learning to improve diagnostic reliability.
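The transfer-learning recipe described here (keep a pre-trained backbone frozen, train only a new classification head on the small target dataset) can be sketched without a real ViT. The fixed random projection below is a deliberately crude stand-in for frozen pre-trained features; everything in this example is an assumption for self-containedness.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d_img, d_feat = 300, 64, 16
X_img = rng.standard_normal((n, d_img))   # stand-in for MRI slices
y = rng.integers(0, 11, size=n)           # 11 hypothetical location classes

# "Frozen backbone": a fixed projection standing in for pre-trained ViT features.
W_frozen = rng.standard_normal((d_img, d_feat))
features = np.tanh(X_img @ W_frozen)      # feature extraction, no backbone updates

# Transfer learning: only the new classification head is trained on the target task.
X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
head = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
test_acc = head.score(X_te, y_te)
```

The large gap the study reports between validation (98.17%) and test (89.95%) accuracy is the kind of generalization gap this split-based evaluation is designed to expose.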

Kotelevets SM

pubmed · Sep 24 2025
Serological screening, endoscopic imaging, and morphological (visual) verification of precancerous gastric diseases and changes in the gastric mucosa are the main stages of early detection, accurate diagnosis, and preventive treatment of gastric precancer. Serological, endoscopic, and histological diagnostics are carried out by medical laboratory technicians, endoscopists, and histologists, and these human factors introduce a large degree of subjectivity. Endoscopists and histologists follow a descriptive principle when formulating imaging conclusions, and diagnostic reports from different doctors often yield contradictory and mutually exclusive conclusions. Erroneous results from diagnosticians and clinicians have fatal consequences, such as late diagnosis of gastric cancer and high patient mortality. Effective population-level serological screening is only possible with machine processing of laboratory test results. Currently, the subjective, imprecise description of endoscopic and histological images by a diagnostician can be replaced with objective, highly sensitive, and highly specific visual recognition using convolutional neural networks with deep machine learning. Many machine learning models are available, and all have predictive capabilities. Based on such predictive models, patients at high risk of gastric cancer should be identified.

Cortese R, Sforazzini F, Gentile G, de Mauro A, Luchetti L, Amato MP, Apóstolos-Pereira SL, Arrambide G, Bellenberg B, Bianchi A, Bisecco A, Bodini B, Calabrese M, Camera V, Celius EG, de Medeiros Rimkus C, Duan Y, Durand-Dubief F, Filippi M, Gallo A, Gasperini C, Granziera C, Groppa S, Grothe M, Gueye M, Inglese M, Jacob A, Lapucci C, Lazzarotto A, Liu Y, Llufriu S, Lukas C, Marignier R, Messina S, Müller J, Palace J, Pastó L, Paul F, Prados F, Pröbstel AK, Rovira À, Rocca MA, Ruggieri S, Sastre-Garriga J, Sato DK, Schneider R, Sepulveda M, Sowa P, Stankoff B, Tortorella C, Barkhof F, Ciccarelli O, Battaglini M, De Stefano N

pubmed · Sep 23 2025
Multiple sclerosis (MS) is common in adults while myelin oligodendrocyte glycoprotein antibody-associated disease (MOGAD) is rare. Our previous machine-learning algorithm, using clinical variables, ≤6 brain lesions, and no Dawson fingers, achieved 79% accuracy, 78% sensitivity, and 80% specificity in distinguishing MOGAD from MS but lacked validation. The aim of this study was to (1) evaluate the clinical/MRI algorithm for distinguishing MS from MOGAD, (2) develop a deep learning (DL) model, (3) assess the benefit of combining both, and (4) identify key differentiators using probability attention maps (PAMs). This multicenter, retrospective, cross-sectional MAGNIMS study included scans from 19 centers. Inclusion criteria were as follows: adults with non-acute MS and MOGAD, with high-quality T2-fluid-attenuated inversion recovery and T1-weighted scans. Brain scans were scored by 2 readers to assess the performance of the clinical/MRI algorithm on the validation data set. A DL-based classifier using a ResNet-10 convolutional neural network was developed and tested on an independent validation data set. PAMs were generated by averaging correctly classified attention maps from both groups, identifying key differentiating regions. We included 406 MRI scans (218 with relapsing-remitting MS [RRMS], mean age: 39 years ±11, 69% F; 188 with MOGAD, mean age: 41 years ±14, 61% F), split into 2 data sets: a training/testing set (n = 265: 150 with RRMS, age: 39 years ±10, 72% F; 115 with MOGAD, age: 42 years ±13, 61% F) and an independent validation set (n = 141: 68 with RRMS, age: 40 years ±14, 65% F; 73 with MOGAD, age: 40 years ±15, 63% F). The clinical/MRI algorithm predicted RRMS over MOGAD with 75% accuracy (95% CI 67-82), 96% sensitivity (95% CI 88-99), and 56% specificity (95% CI 44-68) in the validation cohort.
The DL model achieved 77% accuracy (95% CI 64-89), 73% sensitivity (95% CI 57-89), and 83% specificity (95% CI 65-96) in the training/testing cohort, and 70% accuracy (95% CI 63-77), 67% sensitivity (95% CI 55-79), and 73% specificity (95% CI 61-83) in the validation cohort without retraining. When combined, the classifiers reached 86% accuracy (95% CI 81-92), 84% sensitivity (95% CI 75-92), and 89% specificity (95% CI 81-96). PAMs identified key region volumes: corpus callosum (1872 mm<sup>3</sup>), left precentral gyrus (341 mm<sup>3</sup>), right thalamus (193 mm<sup>3</sup>), and right cingulate cortex (186 mm<sup>3</sup>) for identifying RRMS and brainstem (629 mm<sup>3</sup>), hippocampus (234 mm<sup>3</sup>), and parahippocampal gyrus (147 mm<sup>3</sup>) for identifying MOGAD. Both classifiers effectively distinguished RRMS from MOGAD. The clinical/MRI model showed higher sensitivity while the DL model offered higher specificity, suggesting complementary roles. Their combination improved diagnostic accuracy, and PAMs revealed distinct damage patterns. Future prospective studies should validate these models in diverse, real-world settings. This study provides Class III evidence that both a clinical/MRI algorithm and an MRI-based DL model accurately distinguish RRMS from MOGAD.
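The complementarity described above (a high-sensitivity clinical/MRI classifier and a high-specificity DL classifier) can be illustrated with a simple AND combination rule. The abstract does not specify how the two classifiers were actually combined, and the predictions below are hypothetical.

```python
import numpy as np

def sens_spec(y_true, y_pred):
    # Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predictions (1 = RRMS, 0 = MOGAD) from the two classifiers
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
clin_mri = np.array([1, 1, 1, 1, 1, 0, 0, 1])  # high sensitivity, lower specificity
deep = np.array([1, 0, 1, 1, 0, 0, 0, 0])      # higher specificity

# Require agreement before calling RRMS: one simple combination rule
combined = clin_mri & deep
sens_c, spec_c = sens_spec(y_true, combined)
```

In this toy setup the AND rule inherits the DL model's specificity at some cost in sensitivity; an OR rule would do the reverse, which is why combining complementary classifiers can shift the operating point favorably.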

Lai M, Mascalchi M, Tessa C, Diciotti S

pubmed · Sep 23 2025
The potential of deep learning for medical imaging is often constrained by limited data availability. Generative models can unlock this potential by generating synthetic data that reproduces the statistical properties of real data while being more accessible for sharing. In this study, we investigated the influence of training set size on the performance of a state-of-the-art generative adversarial network, the StyleGAN2-ADA, trained on a cohort of 3,227 subjects from the OpenBHB dataset to generate 2D slices of brain MR images from healthy subjects. The quality of the synthetic images was assessed through qualitative evaluations and state-of-the-art quantitative metrics, which are provided in a publicly accessible repository. Our results demonstrate that StyleGAN2-ADA generates realistic and high-quality images, deceiving even expert radiologists while preserving privacy, as it did not memorize training images. Notably, increasing the training set size led to slight improvements in fidelity metrics. However, training set size had no noticeable impact on diversity metrics, highlighting the persistent limitation of mode collapse. Furthermore, we observed that diversity metrics, such as coverage and β-recall, are highly sensitive to the number of synthetic images used in their computation, leading to inflated values when synthetic data significantly outnumber real ones. These findings underscore the need to carefully interpret diversity metrics and the importance of employing complementary evaluation strategies for robust assessment. Overall, while StyleGAN2-ADA shows promise as a tool for generating privacy-preserving synthetic medical images, overcoming diversity limitations will require exploring alternative generative architectures or incorporating additional regularization techniques.
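The reported sensitivity of diversity metrics to the synthetic-to-real sample ratio can be reproduced in a few lines. The implementation below follows the common k-nearest-neighbor definition of coverage (the fraction of real points whose k-NN ball contains at least one synthetic point), applied to random Gaussian stand-ins for image embeddings rather than real MRI features.

```python
import numpy as np

def coverage(real, fake, k=5):
    # Fraction of real points whose k-th-NN radius (within the real set)
    # contains at least one synthetic point.
    d_rr = np.linalg.norm(real[:, None] - real[None], axis=-1)
    radii = np.sort(d_rr, axis=1)[:, k]   # column 0 is the zero self-distance
    d_rf = np.linalg.norm(real[:, None] - fake[None], axis=-1)
    return float(np.mean(d_rf.min(axis=1) <= radii))

rng = np.random.default_rng(0)
real = rng.standard_normal((100, 8))        # stand-in embeddings of real images
few_fake = rng.standard_normal((50, 8))     # small synthetic sample
many_fake = rng.standard_normal((2000, 8))  # large synthetic sample, same distribution

cov_few = coverage(real, few_fake)
cov_many = coverage(real, many_fake)  # more synthetic samples inflate coverage
```

Even though both synthetic sets come from the same distribution, the larger one scores higher simply because more samples fall inside the k-NN balls, which is the inflation effect the study warns about.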

Bereska JI, Palic S, Bereska LF, Gavves E, Nio CY, Kop MPM, Struik F, Daams F, van Dam MA, Dijkhuis T, Besselink MG, Marquering HA, Stoker J, Verpalen IM

pubmed · Sep 23 2025
Pancreatic ductal adenocarcinoma (PDAC) is a leading cause of cancer-related deaths, with accurate staging being critical for treatment planning. Automated 3D segmentation models can aid in staging, but segmenting PDAC, especially in cases of locally advanced pancreatic cancer (LAPC), is challenging due to the tumor's heterogeneous appearance, irregular shapes, and extensive infiltration. This study developed and evaluated a tripartite self-supervised learning architecture for improved 3D segmentation of LAPC, addressing the challenges of heterogeneous appearance, irregular shapes, and extensive infiltration in PDAC. We implemented a tripartite architecture consisting of a teacher model, a professor model, and a student model. The teacher model, trained on manually segmented CT scans, generated initial pseudo-segmentations. The professor model refined these segmentations, which were then used to train the student model. We utilized 1115 CT scans from 903 patients for training. Three expert abdominal radiologists manually segmented 30 CT scans from 27 patients with LAPC, serving as reference standards. We evaluated the performance using DICE, Hausdorff distance (HD95), and mean surface distance (MSD). The teacher, professor, and student models achieved average DICE scores of 0.60, 0.73, and 0.75, respectively, with significant boundary accuracy improvements (teacher HD95/MSD, 25.71/5.96 mm; professor, 9.68/1.96 mm; student, 4.79/1.34 mm). Our findings demonstrate that the professor model significantly enhances segmentation accuracy for LAPC (p < 0.01). Both the professor and student models offer substantial improvements over previous work. The introduced tripartite self-supervised learning architecture shows promise for improving automated 3D segmentation of LAPC, potentially aiding in more accurate staging and treatment planning.
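The DICE overlap score used to compare the teacher, professor, and student models has a one-line definition: twice the intersection of two binary masks divided by the sum of their volumes. The toy 3D masks below are hypothetical, chosen so the arithmetic is easy to follow.

```python
import numpy as np

def dice(a, b):
    # DICE = 2|A ∩ B| / (|A| + |B|) for binary masks
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical reference and predicted tumor masks on a 4x4x4 voxel grid
ref = np.zeros((4, 4, 4), dtype=bool)
ref[1:3, 1:3, 1:3] = True    # 8 reference voxels
pred = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:4] = True   # 12 predicted voxels, fully covering the reference

score = dice(ref, pred)      # 2*8 / (8 + 12) = 0.8
```

Unlike DICE, the boundary metrics also reported (HD95, MSD) measure surface distance in millimeters, which is why the student model's jump from 25.71 mm to 4.79 mm HD95 matters even when DICE gains look modest.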

Stogiannos N, Skelton E, van Leeuwen KG, Edgington S, Shelmerdine SC, Malamateniou C

pubmed · Sep 23 2025
To explore the perspectives of AI vendors on the integration of AI in medical imaging and oncology clinical practice. An online survey was created on Qualtrics, comprising 23 closed and 5 open-ended questions. This was administered through social media, personalised emails, and the channels of the European Society of Medical Imaging Informatics and Health AI Register, to all those working at a company developing or selling accredited AI solutions for medical imaging and oncology. Quantitative data were analysed using SPSS software, version 28.0. Qualitative data were summarised using content analysis on NVivo, version 14. In total, 83 valid responses were received, with participants having a global distribution and diverse roles and professional backgrounds (business/management/clinical practitioners/engineers/IT, etc). The respondents mentioned the top enablers (practitioner acceptance, business case of AI applications, explainability) and challenges (new regulations, practitioner acceptance, business case) of AI implementation. Co-production with end-users was confirmed as a key practice by most (52.9%). The respondents recognised infrastructure issues within clinical settings (64.1%), lack of clinician engagement (54.7%), and lack of financial resources (42.2%) as key challenges in meeting customer expectations. They called for appropriate reimbursement, robust IT support, clinician acceptance, rigorous regulation, and adequate user training to ensure the successful integration of AI into clinical practice. This study highlights that people, infrastructure, and funding are fundamentals of AI implementation. AI vendors wish to work closely with regulators, patients, clinical practitioners, and other key stakeholders, to ensure a smooth transition of AI into daily practice. Question AI vendors' perspectives on unmet needs, challenges, and opportunities for AI adoption in medical imaging are largely underrepresented in recent research. 
Findings Provision of consistent funding, optimised infrastructure, and user acceptance were highlighted by vendors as key enablers of AI implementation. Clinical relevance Vendors' input and collaboration with clinical practitioners are necessary to clinically implement AI. This study highlights real-world challenges that AI vendors face and opportunities they value during AI implementation. Keeping the dialogue channels open is key to these collaborations.