
Ayivi W, Zhang X, Ativi WX, Sam F, Kouassi FAP

PubMed · Aug 21, 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide. Its diagnosis remains challenging because early-stage symptoms and imaging findings are subtle and ambiguous. Deep learning approaches, specifically convolutional neural networks (CNNs), have significantly advanced medical image analysis; however, conventional architectures such as ResNet50 that rely on first-order pooling often fall short. This study aims to overcome these limitations in lung cancer classification by proposing a novel, dynamic model named LungSE-SOP. The model combines Second-Order Pooling (SOP) and Squeeze-and-Excitation Networks (SENet) within a ResNet50 backbone to improve feature representation and class separation. A novel Dynamic Feature Enhancement (DFE) module is also introduced, which dynamically adjusts the flow of information through the SOP and SENet blocks based on learned importance scores. The model was trained on the publicly available IQ-OTH/NCCD lung cancer dataset, and its performance was assessed using accuracy, precision, recall, F1-score, ROC curves, and confidence intervals. For multiclass tumor classification, the model achieved 98.6% accuracy for benign, 98.7% for malignant, and 99.9% for normal cases. The corresponding F1-scores were 99.2%, 99.8%, and 99.9%, respectively, reflecting high precision and recall across all tumor types and strong potential for clinical deployment.
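As a rough illustration of the two building blocks named in this abstract, the sketch below implements a squeeze-and-excitation gate and second-order (covariance) pooling in PyTorch. It is not the authors' LungSE-SOP code; the shapes, reduction ratio, and the way the blocks are chained are assumptions for demonstration only.

```python
# Minimal sketch, assuming a ResNet50-style feature map of shape (B, C, H, W).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel re-weighting via global average pooling plus a two-layer gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze -> (B, C)
        return x * w[:, :, None, None]         # excite: scale each channel

def second_order_pooling(x):
    """Covariance pooling: (B, C, H, W) -> (B, C, C) channel covariance."""
    b, c, h, w = x.shape
    feats = x.reshape(b, c, h * w)
    feats = feats - feats.mean(dim=2, keepdim=True)
    return feats @ feats.transpose(1, 2) / (h * w - 1)

if __name__ == "__main__":
    fmap = torch.randn(2, 64, 7, 7)            # stand-in for a backbone stage output
    gated = SEBlock(64)(fmap)
    cov = second_order_pooling(gated)          # (2, 64, 64) second-order descriptor
    print(gated.shape, cov.shape)
```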

Alexandra Bernadotte, Elfimov Nikita, Mikhail Shutov, Ivan Menshikov

arXiv preprint · Aug 21, 2025
Accurate segmentation of blood vessels in brain magnetic resonance angiography (MRA) is essential for successful surgical procedures, such as aneurysm repair or bypass surgery. Currently, annotation is primarily performed through manual segmentation or classical methods such as the Frangi filter, which often lack sufficient accuracy. Neural networks have emerged as powerful tools for medical image segmentation, but their development depends on well-annotated training datasets, and there is a notable lack of publicly available MRA datasets with detailed brain vessel annotations. To address this gap, we propose HessNet, a lightweight semi-supervised neural network that incorporates Hessian matrices for 3D segmentation of complex tubular structures. HessNet has only 6,000 parameters, can run on a CPU, and significantly reduces the resources required for training. Its vessel segmentation accuracy on a minimal training dataset reaches state-of-the-art results. Using HessNet, we created a large, semi-manually annotated brain vessel dataset of 200 annotated brain MRA images based on the IXI dataset. Annotation was performed by three experts under the supervision of three neurovascular surgeons after applying HessNet, which provides high segmentation accuracy and allows the experts to focus only on the most complex and important cases. The dataset is available at https://git.scinalytics.com/terilat/VesselDatasetPartly.
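For context on the Hessian features the abstract alludes to, the sketch below computes per-voxel Hessian eigenvalues of a 3D volume with Gaussian second derivatives, the classical Frangi-style signal that highlights tubular structures. This is an illustrative computation under assumed shapes and scales, not the HessNet architecture itself.

```python
# Minimal sketch, assuming a single-channel 3D volume as a NumPy array.
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(volume: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Per-voxel Hessian eigenvalues, shape (*volume.shape, 3), sorted by |lambda|."""
    pairs = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    d = {}
    for i, j in pairs:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        d[(i, j)] = ndimage.gaussian_filter(volume, sigma=sigma, order=order)
    H = np.stack([
        np.stack([d[(0, 0)], d[(0, 1)], d[(0, 2)]], axis=-1),
        np.stack([d[(0, 1)], d[(1, 1)], d[(1, 2)]], axis=-1),
        np.stack([d[(0, 2)], d[(1, 2)], d[(2, 2)]], axis=-1),
    ], axis=-2)                                   # (*vol, 3, 3), symmetric
    eig = np.linalg.eigvalsh(H)
    idx = np.argsort(np.abs(eig), axis=-1)
    return np.take_along_axis(eig, idx, axis=-1)

if __name__ == "__main__":
    vol = np.random.rand(32, 32, 32).astype(np.float32)
    lam = hessian_eigenvalues(vol, sigma=1.5)
    print(lam.shape)                              # (32, 32, 32, 3)
```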

Darya Taratynova, Alya Almsouti, Beknur Kalmakhanbet, Numan Saeed, Mohammad Yaqub

arXiv preprint · Aug 21, 2025
Congenital heart defect (CHD) detection in ultrasound videos is hindered by image noise and probe-positioning variability. While automated methods can reduce operator dependence, current machine learning approaches often neglect temporal information, limit themselves to binary classification, and do not account for prediction calibration. We propose Temporal Prompt Alignment (TPA), a method that leverages a foundation image-text model and prompt-aware contrastive learning to classify fetal CHD in cardiac ultrasound videos. TPA extracts features from each frame of a video subclip with an image encoder, aggregates them with a trainable temporal extractor to capture heart motion, and aligns the video representation with class-specific text prompts via a margin-hinge contrastive loss. To enhance calibration for clinical reliability, we introduce a Conditional Variational Autoencoder Style Modulation (CVAESM) module, which learns a latent style vector to modulate embeddings and quantifies classification uncertainty. Evaluated on a private dataset for CHD detection and on the large public EchoNet-Dynamic dataset for systolic dysfunction, TPA achieves a state-of-the-art macro F1 score of 85.40% for CHD diagnosis while reducing expected calibration error by 5.38% and adaptive ECE by 6.8%. On EchoNet-Dynamic's three-class task, it boosts macro F1 by 4.73% (from 53.89% to 58.62%). In sum, TPA integrates temporal modeling, prompt-aware contrastive learning, and uncertainty quantification into a single framework for fetal CHD classification in ultrasound videos.
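To make the margin-hinge contrastive alignment concrete, the sketch below shows one plausible form of such a loss: the video embedding is pulled toward its class text-prompt embedding and pushed away from the other classes' prompts by at least a margin. The exact formulation, margin value, and embedding dimensions used by TPA are assumptions here.

```python
# Minimal sketch, assuming precomputed video and text-prompt embeddings.
import torch
import torch.nn.functional as F

def margin_hinge_contrastive(video_emb, prompt_embs, labels, margin: float = 0.2):
    """video_emb: (B, D); prompt_embs: (C, D), one per class prompt; labels: (B,)."""
    v = F.normalize(video_emb, dim=-1)
    p = F.normalize(prompt_embs, dim=-1)
    sims = v @ p.t()                                   # (B, C) cosine similarities
    pos = sims.gather(1, labels.view(-1, 1))           # similarity to the true class prompt
    hinge = torch.clamp(margin + sims - pos, min=0.0)  # penalize negatives within the margin
    mask = F.one_hot(labels, num_classes=sims.size(1)).bool()
    hinge = hinge.masked_fill(mask, 0.0)               # ignore the positive column
    return hinge.sum(dim=1).mean()

if __name__ == "__main__":
    v = torch.randn(4, 512)       # aggregated video-subclip embeddings
    t = torch.randn(3, 512)       # three class-specific text-prompt embeddings
    y = torch.tensor([0, 2, 1, 1])
    print(margin_hinge_contrastive(v, t, y).item())
```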

Deborup Sanyal

arXiv preprint · Aug 21, 2025
COVID-19 took the world by storm beginning in December 2019. A highly infectious communicable disease, COVID-19 is caused by the SARS-CoV-2 virus. By March 2020, the World Health Organization (WHO) had declared COVID-19 a global pandemic. A pandemic in the 21st century, after almost 100 years, was something the world was not prepared for, and it resulted in the deaths of around 1.6 million people worldwide. The most common symptoms of COVID-19 were associated with the respiratory system and resembled a cold, flu, or pneumonia. After extensive research, doctors and scientists concluded that the main reason lives were lost to COVID-19 was failure of the respiratory system; patients were dying gasping for breath. Top healthcare systems of the world were failing badly as there was an acute shortage of hospital beds, oxygen cylinders, and ventilators, and many died without receiving any treatment at all. The aim of this project is to help doctors assess the severity of COVID-19 by reading the patient's computed tomography (CT) scans of the lungs. Computer models are less prone to human error, and machine learning and neural network models tend to give better accuracy as training improves over time. We have decided to use a convolutional neural network model. Given that a patient tests positive, our model will analyze the severity of the COVID-19 infection within one month of the positive test result. The severity of the infection may be favorable or unfavorable (if it leads to intubation or death), based entirely on the CT scans in the dataset.

Tsai CL, Chu TC, Wang CH, Chang WT, Tsai MS, Ku SC, Lin YH, Tai HC, Kuo SW, Wang KC, Chao A, Tang SC, Liu WL, Tsai MH, Wang TA, Chuang SL, Lee YC, Kuo LC, Chen CJ, Kao JH, Wang W, Huang CH

PubMed · Aug 20, 2025
Advancements in artificial intelligence (AI) have driven substantial breakthroughs in computer-aided detection (CAD) for chest x-ray (CXR) imaging. The National Taiwan University Hospital research team previously developed an AI-based emergency CXR system (Capstone project), which led to the creation of a CXR module. This CXR module has an established model supported by extensive research and is ready for application in clinical trials without requiring additional model training. This study will use three submodules of the system: detection of misplaced endotracheal tubes, detection of misplaced nasogastric tubes, and identification of pneumothorax. The study aims to apply a real-time CXR CAD system in emergency and critical care settings to evaluate its clinical and economic benefits without requiring additional CXR examinations or altering standard care and procedures. It will evaluate the impact of the CAD system on mortality reduction, postintubation complications, hospital stay duration, workload, and interpretation time, as well as conduct a cost-effectiveness comparison with standard care. The study adopts a pilot trial and cluster randomized controlled trial design, with random assignment conducted at the ward level. In the intervention group, units are granted access to AI diagnostic results, while the control group continues standard care practices. Consent will be obtained from attending physicians, residents, and advanced practice nurses in each participating ward. Once consent is secured, these health care providers in the intervention group will be authorized to use the CAD system. Intervention units will have access to AI-generated interpretations, whereas control units will maintain routine medical procedures without access to the AI diagnostic outputs. The study was funded in September 2024, and data collection is expected to last from January 2026 to December 2027. This study anticipates that the real-time CXR CAD system will automate the identification and detection of misplaced endotracheal and nasogastric tubes on CXRs, as well as assist clinicians in diagnosing pneumothorax. By reducing the workload of physicians, the system is expected to shorten the time required to detect tube misplacement and pneumothorax, decrease patient mortality and hospital stays, and ultimately lower health care costs. PRR1-10.2196/72928.

Cai L, Pfob A, Barr RG, Duda V, Alwafai Z, Balleyguier C, Clevert DA, Fastner S, Gomez C, Goncalo M, Gruber I, Hahn M, Kapetas P, Nees J, Ohlinger R, Riedel F, Rutten M, Stieber A, Togawa R, Sidey-Gibbons C, Tozaki M, Wojcinski S, Heil J, Golatta M

PubMed · Aug 20, 2025
Shear wave elastography (SWE) has been investigated as a complement to B-mode ultrasound for breast cancer diagnosis. Although multicenter trials suggest benefits for patients with Breast Imaging Reporting and Data System (BI-RADS) 4(a) breast masses, widespread adoption remains limited because of the absence of validated velocity thresholds. This study aims to develop and validate a deep learning (DL) model using SWE images (artificial intelligence [AI]-SWE) for BI-RADS 3 and 4 breast masses and compare its performance with human experts using B-mode ultrasound. We used data from an international, multicenter trial (ClinicalTrials.gov identifier: NCT02638935) evaluating SWE in women with BI-RADS 3 or 4 breast masses across 12 institutions in seven countries. Images from 11 sites were used to develop an EfficientNetB1-based DL model. An external validation was conducted using data from the 12th site, and another validation was performed using the latest SWE software on a separate institutional cohort. Performance metrics included sensitivity, specificity, false-positive reduction, and area under the receiver operating characteristic curve (AUROC). The development set included 924 patients (4,026 images); the external validation sets included 194 patients (562 images) and 176 patients (188 images, latest SWE software). AI-SWE achieved an AUROC of 0.94 (95% CI, 0.91 to 0.96) and 0.93 (95% CI, 0.88 to 0.98) in the two external validation sets. Compared with B-mode ultrasound, AI-SWE significantly reduced false-positive rates by 62.1% (20.4% [30/147] vs 53.8% [431/801]; P < .001) and 38.1% (33.3% [14/42] vs 53.8% [431/801]; P < .001), with comparable sensitivity (97.9% [46/47] and 97.8% [131/134] vs 98.1% [311/317]; P = .912 and P = .810). AI-SWE demonstrated accuracy comparable with human experts in malignancy detection while significantly reducing false-positive imaging findings (ie, unnecessary biopsies). Future studies should explore its integration into multimodal breast cancer diagnostics.
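For readers who want to reproduce the headline metrics reported here (AUROC, sensitivity, and false-positive rate) on their own data, the sketch below computes them for a binary classifier. The labels, scores, and 0.5 operating threshold are placeholders, not values from the study.

```python
# Minimal sketch, assuming binary labels (1 = malignant) and per-case model scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def sensitivity_and_fpr(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / (tp + fn), fp / (fp + tn)        # sensitivity, false-positive rate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)                                   # toy ground truth
    scores = np.clip(y * 0.6 + rng.normal(0.3, 0.2, 200), 0, 1)   # toy model scores
    preds = (scores >= 0.5).astype(int)                           # operating threshold
    sens, fpr = sensitivity_and_fpr(y, preds)
    print(f"AUROC={roc_auc_score(y, scores):.3f}  sensitivity={sens:.3f}  FPR={fpr:.3f}")
```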

Tahosin MS, Sheakh MA, Alam MJ, Hassan MM, Bairagi AK, Abdulla S, Alshathri S, El-Shafai W

PubMed · Aug 20, 2025
Advances in deep learning have transformed medical imaging, yet progress is hindered by data privacy regulations and fragmented datasets across institutions. To address these challenges, we propose FedVGM, a privacy-preserving federated learning framework for multi-modal medical image analysis. FedVGM integrates four imaging modalities (brain MRI, breast ultrasound, chest X-ray, and lung CT) across 14 diagnostic classes without centralizing patient data. Using transfer learning and an ensemble of VGG16 and MobileNetV2, FedVGM achieves 97.7% ± 0.01 accuracy on the combined dataset and 91.9-99.1% across individual modalities. We evaluated three aggregation strategies and found median aggregation to be the most effective. To ensure clinical interpretability, we apply explainable AI techniques and validate results through performance metrics, statistical analysis, and k-fold cross-validation. FedVGM offers a robust, scalable solution for collaborative medical diagnostics, supporting clinical deployment while preserving data privacy.
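The aggregation strategy highlighted in this abstract, median aggregation, can be sketched as a server-side step that takes an element-wise median of the clients' model weights instead of a weighted mean. The client count, parameter names, and shapes below are illustrative assumptions, not FedVGM's implementation.

```python
# Minimal sketch of element-wise median aggregation across federated clients.
import numpy as np

def median_aggregate(client_weights):
    """client_weights: list of dicts {param_name: np.ndarray}; returns the element-wise median."""
    keys = client_weights[0].keys()
    return {k: np.median(np.stack([w[k] for w in client_weights], axis=0), axis=0)
            for k in keys}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # three hypothetical clients holding the same two-parameter model
    clients = [{"conv.w": rng.normal(size=(3, 3)), "fc.w": rng.normal(size=(4,))}
               for _ in range(3)]
    global_weights = median_aggregate(clients)
    print({k: v.shape for k, v in global_weights.items()})
```

Compared with simple averaging, the element-wise median is less sensitive to a single client whose locally trained weights are outliers, which is one common motivation for this choice in federated settings.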

Trägårdh E, Ulén J, Enqvist O, Larsson M, Valind K, Minarik D, Edenbrandt L

PubMed · Aug 20, 2025
In this study, we further developed an artificial intelligence (AI)-based method for the detection and quantification of tumours in the prostate, lymph nodes and bone in prostate-specific membrane antigen (PSMA)-targeting positron emission tomography with computed tomography (PET-CT) images. A total of 1064 [¹⁸F]PSMA-1007 PET-CT scans were used (approximately twice as many as for our previous AI model), of which 120 served as the test set. Suspected lesions were manually annotated and used as ground truth, and a convolutional neural network was developed and trained. Sensitivity and positive predictive value (PPV) were calculated using two sets of manual segmentations as reference, and results were also compared with our previously developed AI method. The correlations between manual and AI-based calculations of total lesion volume (TLV) and total lesion uptake (TLU) were calculated. The sensitivities of the AI method were 85% for prostate tumour/recurrence, 91% for lymph node metastases and 61% for bone metastases (82%, 86% and 70% for manual readings, and 66%, 88% and 71% for the old AI method). The PPVs of the AI method were 85%, 83% and 58%, respectively (63%, 86% and 39% for manual readings, and 69%, 70% and 39% for the old AI method). The correlations between manual and AI-based calculations of TLV and TLU ranged from r = 0.62 to r = 0.96. The performance of the newly developed, fully automated AI-based method for detecting and quantifying prostate tumour and suspected lymph node and bone metastases increased significantly, especially the PPV. The AI method is freely available to other researchers ( www.recomia.org ).
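As a quick reference for the per-lesion metrics used above, the sketch below computes sensitivity and PPV from matched detection counts and the Pearson correlation between manual and AI-derived total lesion volumes. All counts and volumes are toy numbers, not values from the study.

```python
# Minimal sketch of sensitivity, PPV, and Pearson r on toy detection data.
import numpy as np

def sensitivity_ppv(true_positives: int, false_negatives: int, false_positives: int):
    sensitivity = true_positives / (true_positives + false_negatives)
    ppv = true_positives / (true_positives + false_positives)
    return sensitivity, ppv

if __name__ == "__main__":
    sens, ppv = sensitivity_ppv(true_positives=91, false_negatives=9, false_positives=19)
    print(f"sensitivity={sens:.2f}  PPV={ppv:.2f}")

    rng = np.random.default_rng(2)
    tlv_manual = rng.gamma(2.0, 5.0, 50)                     # manual lesion volumes (toy, mL)
    tlv_ai = tlv_manual * rng.normal(1.0, 0.15, 50)          # AI volumes with multiplicative noise
    r = np.corrcoef(tlv_manual, tlv_ai)[0, 1]                # Pearson correlation coefficient
    print(f"r={r:.2f}")
```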

Wu Y, Liu X, Shi Y, Chen X, Wang Z, Xu Y, Wang S

PubMed · Aug 20, 2025
Accurate segmentation of lung adenocarcinoma nodules in computed tomography (CT) images is critical for clinical staging and diagnosis. However, irregular nodule shapes and ambiguous boundaries pose significant challenges for existing methods. This study introduces S³TU-Net, a hybrid CNN-Transformer architecture designed to enhance feature extraction, fusion, and global context modeling. The model integrates three key innovations: (1) structured convolution blocks (DWF-Conv/D²BR-Conv) for multi-scale feature extraction and overfitting mitigation; (2) S²-MLP Link, a spatial-shift-enhanced skip-connection module to improve multi-level feature fusion; and (3) a residual-based superpixel vision transformer (RM-SViT) to capture long-range dependencies efficiently. Evaluated on the LIDC-IDRI dataset, S³TU-Net achieves a Dice score of 89.04%, precision of 90.73%, and IoU of 90.70%, outperforming recent methods by 4.52% in Dice. Validation on the EPDB dataset further confirms its generalizability (Dice, 86.40%). This work contributes to bridging the gap between local feature sensitivity and global context awareness by integrating structured convolutions and superpixel-based transformers, offering a robust tool for clinical decision support.
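The Dice and IoU scores quoted above are overlap metrics between a predicted mask and the ground truth; the sketch below shows one standard way to compute them for binary masks. The array shapes and smoothing term are illustrative assumptions.

```python
# Minimal sketch of Dice and IoU for binary segmentation masks.
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    gt = rng.random((128, 128)) > 0.7                          # toy ground-truth nodule mask
    pred = np.logical_and(gt, rng.random((128, 128)) > 0.1)    # slightly eroded prediction
    d, i = dice_and_iou(pred, gt)
    print(f"Dice={d:.3f}  IoU={i:.3f}")
```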

Desai M, Desai B

PubMed · Aug 20, 2025
Artificial intelligence (AI) is revolutionizing the prevention and control of breast cancer by improving risk assessment, prevention, and early diagnosis. With an emphasis on AI applications across the breast cancer spectrum in women, this review summarizes developments, existing applications, and future prospects. We conducted an in-depth review of the literature on AI applications in breast cancer risk prediction, prevention, and early detection from 2000 to 2025, with particular emphasis on explainable AI (XAI), deep learning (DL), and machine learning (ML). We examined algorithmic fairness, model transparency, dataset representation, and clinical performance indicators. Compared with traditional methods, AI-based models consistently enhanced risk categorization, screening sensitivity, and early detection (AUCs ranging from 0.65 to 0.975). However, challenges remain in algorithmic bias, underrepresentation of minority populations, and limited external validation. Notably, 58% of public datasets focused on mammography, leaving gaps in modalities such as tomosynthesis and histopathology. AI technologies offer enormous opportunities for enhancing the diagnosis and treatment of breast cancer, but transparent models, inclusive datasets, and standardized frameworks for explainability and external validation should receive the greatest attention in subsequent studies to ensure equitable and effective implementation.