Page 218 of 6546537 results

Bianconi F, Khan MU, Du H, Jassim S

pubmed · Aug 28 2025
Breast ultrasound images play a pivotal role in assessing the nature of suspicious breast lesions, particularly in patients with dense tissue. Computerized analysis of breast ultrasound images has the potential to assist physicians in clinical decision-making and reduce the subjectivity of interpretation. We assess the performance of conventional features, deep learning features, and ensemble schemes for discriminating benign versus malignant breast lesions on ultrasound images. A total of 19 individual feature sets (1 morphological, 2 first-order, 10 texture-based, and 6 CNN-based) were included in the analysis. Furthermore, four combined feature sets (Best by class; Top 3, 5, and 7) and four fusion schemes (feature concatenation, majority voting, sum rule, and product rule) were considered to generate ensemble models. The experiments were carried out on three independent open-access datasets containing 252 (154 benign, 98 malignant), 232 (109 benign, 123 malignant), and 281 (187 benign, 94 malignant) lesions, respectively. CNN-based features outperformed the other individual descriptors, achieving accuracies between 77.4% and 83.6%, followed by morphological features (71.6%-80.8%) and histograms of oriented gradients (71.4%-77.6%). Ensemble models further improved accuracy to between 80.2% and 87.5%. Fusion schemes based on the product and sum rules were generally superior to feature concatenation and majority voting. Combining individual feature sets through ensemble schemes demonstrates clear advantages for discriminating benign versus malignant breast lesions on ultrasound images.
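The sum, product, and majority-voting fusion rules compared in this study operate on the per-model class-probability outputs; a minimal numpy sketch (the probability values below are made-up stand-ins, not data from the study):

```python
import numpy as np

def fuse_probabilities(prob_list, rule="sum"):
    """Fuse per-model class-probability arrays (each shape [n_samples, n_classes])."""
    probs = np.stack(prob_list)          # [n_models, n_samples, n_classes]
    if rule == "sum":
        fused = probs.sum(axis=0)
    elif rule == "product":
        fused = probs.prod(axis=0)
    elif rule == "majority":
        votes = probs.argmax(axis=2)     # each model's hard prediction
        n_classes = probs.shape[2]
        fused = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)], axis=1)
    else:
        raise ValueError(rule)
    return fused.argmax(axis=1)          # predicted class per sample

# Two toy "models" scoring three lesions (class 0 = benign, class 1 = malignant)
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.3, 0.7]])
p2 = np.array([[0.8, 0.2], [0.7, 0.3], [0.2, 0.8]])
print(fuse_probabilities([p1, p2], rule="sum"))      # -> [0 0 1]
print(fuse_probabilities([p1, p2], rule="product"))  # -> [0 0 1]
```

The product rule tends to penalize any single low-confidence model more harshly than the sum rule, which is one reason the two can rank differently across datasets.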

Gennaro Percannella, Mattia Sarno, Francesco Tortorella, Mario Vento

arxiv preprint · Aug 28 2025
Mitosis detection in histopathology images plays a key role in tumor assessment. Although machine learning algorithms could be exploited to aid physicians in accurately performing this task, these algorithms suffer from significant performance drops when evaluated on images coming from domains different from the training ones. In this work, we propose a Mamba-based approach for mitosis detection under domain shift, inspired by the promising performance demonstrated by Mamba in medical imaging segmentation tasks. Specifically, our approach exploits a VM-UNet architecture for the addressed task, together with stain augmentation operations to further improve model robustness against domain shift. Our approach has been submitted to track 1 of the MItosis DOmain Generalization (MIDOG) challenge. Preliminary experiments, conducted on the MIDOG++ dataset, show that the proposed method still leaves large room for improvement.
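Stain augmentation for robustness to scanner and staining shifts is often implemented as a channel-wise jitter in optical-density space; the sketch below illustrates that generic idea only. The OD transform and the sigma/bias ranges are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def stain_augment(rgb, sigma=0.05, bias=0.02):
    """Perturb an H&E patch in optical-density (OD) space.
    rgb: uint8 array [H, W, 3]; returns an augmented uint8 array."""
    od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)  # Beer-Lambert-style OD
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)     # per-channel scale
    beta = rng.uniform(-bias, bias, size=3)               # per-channel shift
    od = od * alpha + beta
    out = 256.0 * np.exp(-od) - 1.0                       # back to intensity space
    return np.clip(out, 0, 255).astype(np.uint8)

patch = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)  # toy patch
aug = stain_augment(patch)
```

Jittering in OD space rather than raw RGB keeps the perturbation roughly multiplicative in stain concentration, which is why it is a common choice for histopathology robustness.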

Biondi M, Bortoli E, Marini L, Avitabile R, Bartoli A, Busatti E, Tozzi A, Cimmino MC, Piccini L, Giusti EB, Guasti A

pubmed · Aug 28 2025
Medical imaging faces critical challenges in radiation dose management and protocol standardisation. This study introduces a machine learning approach using a random forest algorithm to classify Computed Tomography (CT) scan protocols. By leveraging dose monitoring system data, we provide a data-driven solution for establishing Diagnostic Reference Levels (DRLs) while minimising computational resources. We developed a classification workflow using a Random Forest Classifier to categorise CT scans into anatomical regions: head, thorax, abdomen, spine, and complex multi-region scans (thorax + abdomen and total body). The methodology featured an iterative "human-in-the-loop" refinement process involving data preprocessing, machine learning algorithm training, expert validation, and protocol classification. After training the initial model, we applied the methodology to a new, independent dataset. Analysing 52,982 CT scan records from 11 imaging devices across five hospitals, we trained the classifier to distinguish multiple anatomical regions, categorising scans into head, thorax, abdomen, and spine. The final validation on the new dataset confirmed the model's robustness, achieving 97% accuracy. This research introduces a novel medical imaging protocol classification approach by shifting from manual, time-consuming processes to a data-driven approach built on a random forest algorithm. Our study presents a transformative approach to CT scan protocol classification, demonstrating the potential of data-driven methodologies in medical imaging. We have created a framework for managing protocol classification and establishing DRLs by integrating computational intelligence with clinical expertise. Future research will explore applying this methodology to other radiological procedures.
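A minimal sketch of random-forest protocol classification in the spirit of this workflow, assuming hypothetical dose-monitoring features (scan length, CTDIvol, kVp) and synthetic clusters in place of the real records:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical per-scan features: [scan length (cm), CTDIvol (mGy), kVp]
# Three synthetic clusters stand in for head, thorax, and abdomen acquisitions.
X = np.vstack([
    rng.normal([18, 55, 120], [2, 5, 1], size=(50, 3)),   # head-like scans
    rng.normal([35, 10, 120], [3, 2, 1], size=(50, 3)),   # thorax-like scans
    rng.normal([45, 14, 120], [3, 2, 1], size=(50, 3)),   # abdomen-like scans
])
y = np.repeat(["head", "thorax", "abdomen"], 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[19, 54, 120]]))  # a short, high-CTDIvol scan -> "head"
```

In the "human-in-the-loop" setting described above, expert validation of such predictions would feed back into relabelling and retraining rather than ending at a single fit.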

Lei W, Han L, Cao Z, Duan T, Wang B, Li C, Pei X

pubmed · Aug 28 2025
To evaluate the precision of automated segmentation facilitated by deep learning (DL) and dose calculation in adaptive radiotherapy (ART) for nasopharyngeal cancer (NPC), leveraging synthetic CT (sCT) images derived from cone-beam CT (CBCT) scans on a conventional C-arm linac. Sixteen NPC patients undergoing two-phase offline ART were analyzed retrospectively. The initial (pCT<sub>1</sub>) and adaptive (pCT<sub>2</sub>) CT scans served as the gold standard alongside weekly acquired CBCT scans. Patient data, including manually delineated contours and dose information, were imported into ArcherQA. Using a cycle-consistent generative adversarial network (cycle-GAN) trained on an independent dataset, sCT images (sCT<sub>1</sub>, sCT<sub>4</sub>, sCT<sub>4</sub><sup>*</sup>) were generated from weekly CBCT scans (CBCT<sub>1</sub>, CBCT<sub>4</sub>, CBCT<sub>4</sub>) paired with the corresponding planning CTs (pCT<sub>1</sub>, pCT<sub>1</sub>, pCT<sub>2</sub>). Auto-segmentation was performed on the sCTs, followed by GPU-accelerated Monte Carlo dose recalculation. Auto-segmentation accuracy was assessed via the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD<sub>95</sub>). Dose calculation fidelity on sCTs was evaluated using dose-volume parameters. Dosimetric consistency between recalculated sCT and pCT plans was analyzed via Spearman's correlation, while volumetric changes were concurrently evaluated to quantify anatomical variations. Most anatomical structures demonstrated high pCT-sCT agreement, with mean values of DSC > 0.85 and HD<sub>95</sub> < 5.10 mm. Notable exceptions included the primary Gross Tumor Volume (GTVp) in the pCT<sub>2</sub>-sCT<sub>4</sub> comparison (DSC: 0.75, HD<sub>95</sub>: 6.03 mm), involved lymph nodes (GTVn) showing lower agreement (DSC: 0.43, HD<sub>95</sub>: 16.42 mm), and the submandibular glands with moderate agreement (DSC: 0.64-0.73, HD<sub>95</sub>: 4.45-5.66 mm).
Dosimetric analysis revealed the largest mean differences in GTVn D<sub>99</sub>: -1.44 Gy (95% CI: [-3.01, 0.13] Gy) and right parotid mean dose: -1.94 Gy (95% CI: [-3.33, -0.55] Gy, p < 0.05). Anatomical variations, quantified via sCT measurements, correlated significantly with offline adaptive plan adjustments in ART. This correlation was strong for the parotid glands (ρ > 0.72, p < 0.001), a result that aligned with the sCT-derived dose discrepancy analysis (ρ > 0.57, p < 0.05). The proposed method exhibited minor variations in volumetric and dosimetric parameters compared to prior treatment data, suggesting potential efficiency improvements for ART in NPC through reduced human dependency.
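The two agreement metrics used above, DSC and HD<sub>95</sub>, can be computed directly from binary masks; a brute-force numpy sketch on toy 2D masks (the real evaluation runs on 3D contours):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between two boolean masks
    (brute force over foreground pixel coordinates; fine for small arrays)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[6:16, 5:15] = True   # same square, shifted one row
print(round(dice(a, b), 3), hd95(a, b))              # -> 0.9 1.0
```

HD<sub>95</sub> is preferred over the plain Hausdorff distance because the 95th percentile suppresses the influence of a single outlier voxel on the boundary.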

Hrzic F, Movahhedi M, Lavoie-Gagne O, Kiapour A

pubmed · Aug 28 2025
It is well known that machine learning models require large amounts of annotated data to obtain optimal performance. Labelling Computed Tomography (CT) data can be a particularly challenging task due to its volumetric nature and frequently missing or incomplete associated metadata. Even inspecting one CT scan requires additional computer software or, when working programmatically, additional libraries. This study proposes a simple yet effective approach based on 2D X-ray-like estimation of 3D CT scans for body region identification. Although a body-region label is commonly associated with a CT scan, it often describes only the primary region of interest, neglecting other anatomical regions present in the scan. In the proposed approach, estimated 2D images were utilized to identify 14 distinct body regions, providing valuable information for constructing a high-quality medical dataset. To evaluate the effectiveness of the proposed method, it was compared against 2.5D, 3D, and foundation model (MI2) based approaches. Our approach outperformed the others with statistical significance, achieving an F1-score of 0.980 ± 0.016 for the best-performing model (EffNet-B0), compared with 0.840 ± 0.114 (2.5D DenseNet-161), 0.854 ± 0.096 (3D VoxCNN), and 0.852 ± 0.104 (MI2 foundation model). The dataset comprised 15,622 CT scans (44,135 labels) from three clinical centers.
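A 2D X-ray-like estimate of a 3D CT volume can be approximated with a simple intensity projection along one axis; this is a crude parallel-projection stand-in for the paper's estimation step, whose exact formulation is not given in the abstract:

```python
import numpy as np

def xray_like(volume, axis=1):
    """Collapse a CT volume to a 2D X-ray-like image by averaging attenuation
    along one axis, then normalising the result to [0, 1]."""
    proj = volume.astype(np.float64).mean(axis=axis)
    lo, hi = proj.min(), proj.max()
    return (proj - lo) / (hi - lo + 1e-8)

# Toy HU-like volume in place of a real CT scan
ct = np.random.default_rng(1).normal(0, 300, size=(64, 64, 64))
img = xray_like(ct)
print(img.shape)  # -> (64, 64)
```

The appeal of this reduction is that a lightweight 2D classifier (such as the EffNet-B0 above) can then label body regions without loading full volumes into memory.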

Xu B, Chen Z, Liu D, Zhu Z, Zhang F, Lin L

pubmed · Aug 28 2025
Image-guided thermal ablation (IGTA) has been increasingly used in patients with stage IA non-small cell lung cancer (NSCLC) without surgical contraindications, but its long-term outcomes compared to lobectomy remain unknown. This study aims to evaluate the long-term outcomes of IGTA versus lobectomy and explore which patients may benefit most from ablation. After propensity score matching, a total of 290 patients with stage IA NSCLC between 2015 and 2023 were included. Progression-free survival (PFS) and overall survival (OS) were estimated using the Kaplan-Meier method. A Markov model was constructed to evaluate cost-effectiveness. Finally, a radiomics model based on preoperative computed tomography (CT) was developed to perform risk stratification. After matching, the median follow-up intervals were 34.8 months for the lobectomy group and 47.2 months for the ablation group. There were no significant differences between the groups in terms of 5-year PFS (hazard ratio [HR], 1.83; 95% CI, 0.86-3.92; p = 0.118) or OS (HR, 2.44; 95% CI, 0.87-6.63; p = 0.092). In low-income regions, lobectomy was not cost-effective in 99% of simulations. The CT-based radiomics model outperformed the traditional TNM model (AUC, 0.759 vs. 0.650; p < 0.01). Moreover, disease-free survival was significantly lower in the high-risk group than in the low-risk group (p = 0.009). This study comprehensively evaluated IGTA versus lobectomy in terms of survival outcomes, cost-effectiveness, and prognostic prediction. The findings suggest that IGTA may be a safe and feasible alternative to conventional surgery for carefully selected patients.
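PFS and OS curves of the kind reported above come from the Kaplan-Meier estimator; a self-contained sketch on a hypothetical toy cohort (events ordered before censorings at tied times; no confidence intervals):

```python
def kaplan_meier(times, events):
    """Return (time, survival) pairs for the Kaplan-Meier estimator.
    times: follow-up in months; events: 1 = event observed, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])  # stable sort
    at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i] == 1:
            s *= (at_risk - 1) / at_risk  # step down only at observed events
            curve.append((times[i], s))
        at_risk -= 1                      # censored patients leave the risk set
    return curve

# Hypothetical cohort: months to event (1) or censoring (0)
times  = [6, 12, 12, 18, 24, 30]
events = [1,  1,  0,  1,  0,  1]
print(kaplan_meier(times, events))
```

Censored patients reduce the risk set without forcing a step down, which is exactly how differing follow-up intervals (34.8 vs. 47.2 months here) are handled without bias.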

Ahmad K, Rehman HU, Shah B, Ali F, Hussain I

pubmed · Aug 28 2025
The precise detection and localization of abnormalities in radiological images are crucial for clinical diagnosis and treatment planning. Building reliable models requires large annotated datasets containing both disease labels and abnormality locations. Radiologists often face challenges in identifying and segmenting thoracic diseases such as COVID-19, pneumonia, tuberculosis, and lung cancer due to overlapping visual patterns in X-ray images. This study proposes a dual-model approach: Gated Vision Transformers (GViT) for classification and Swin Transformer V2 for segmentation and localization. GViT successfully identifies thoracic diseases that exhibit similar radiographic features, while Swin Transformer V2 maps lung areas and pinpoints affected regions. Classification metrics, including precision, recall, and F1-scores, surpassed 0.95, while the Intersection over Union (IoU) score reached 90.98%. Performance assessment via Dice coefficient, Boundary F1-score, and Hausdorff distance demonstrated the system's effectiveness. This artificial intelligence solution can help radiologists reduce their cognitive workload while improving diagnostic precision in healthcare systems facing resource constraints. Transformer-based architectures show strong promise for enhancing medical imaging procedures, according to the study results. Future AI tools should build on this foundation, focusing on comprehensive and precise detection of chest diseases to support effective clinical decision-making.
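The IoU score reported for the segmentation branch is computed from binary masks as intersection over union; a toy example with shifted square masks (values illustrative, not the study's):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union between two boolean segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0   # empty-vs-empty counts as perfect

pred = np.zeros((10, 10), bool);  pred[2:8, 2:8]  = True   # 36-pixel square
truth = np.zeros((10, 10), bool); truth[3:9, 2:8] = True   # same square, shifted
print(iou(pred, truth))  # 30 shared pixels over a 42-pixel union
```

Note that IoU is stricter than the Dice coefficient for the same overlap (IoU = D / (2 - D)), which is worth remembering when comparing the 90.98% IoU here against Dice-based results elsewhere on this page.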

Ruhwedel T, Rogasch J, Galler M, Schatka I, Wetz C, Furth C, Biernath N, De Santis M, Shnayien S, Kolck J, Geisel D, Amthauer H, Beetz NL

pubmed · Aug 28 2025
Body composition (BC) analysis is performed to quantify the relative amounts of different body tissues as a measure of physical fitness and tumor cachexia. We hypothesized that relative changes in BC parameters, assessed by artificial intelligence-based, PACS-integrated software, between baseline imaging before the start of radioligand therapy (RLT) and interim staging after two RLT cycles could predict overall survival (OS) in patients with metastatic castration-resistant prostate cancer (mCRPC). We conducted a single-center, retrospective analysis of 92 patients with mCRPC undergoing [<sup>177</sup>Lu]Lu-PSMA RLT between September 2015 and December 2023. All patients had [<sup>68</sup>Ga]Ga-PSMA-11 PET/CT at baseline (≤ 6 weeks before the first RLT cycle) and at interim staging (6-8 weeks after the second RLT cycle), allowing for longitudinal BC assessment. During follow-up, 78 patients (85%) died. Median OS was 16.3 months. Median follow-up time in survivors was 25.6 months. The 1-year mortality rate was 32.6% (95% CI 23.0-42.2%) and the 5-year mortality rate was 92.9% (95% CI 85.8-100.0%). In multivariable regression, relative change in visceral adipose tissue (VAT) (HR: 0.26; p = 0.006), previous chemotherapy of any type (HR: 2.4; p = 0.003), the presence of liver metastases (HR: 2.4; p = 0.018), and a higher baseline De Ritis ratio (HR: 1.4; p < 0.001) remained independent predictors of OS. Patients with a greater decrease in VAT (< -20%) had a median OS of 10.2 months versus 18.5 months in patients with a smaller VAT decrease or a VAT increase (≥ -20%) (log-rank test: p = 0.008). In a separate Cox model, the change in VAT predicted OS (p = 0.005) independent of the best PSA response after 1-2 RLT cycles (p = 0.09), and there was no interaction between the two (p = 0.09). PACS-integrated, AI-based BC monitoring detected relative changes in VAT, which were an independent predictor of shorter OS in our population of patients undergoing RLT.
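The -20% VAT-change cutoff used for the survival split above reduces to a simple percent-change threshold; the tissue values below are hypothetical:

```python
def relative_change(baseline, interim):
    """Percent change of an interim measurement relative to baseline."""
    return 100.0 * (interim - baseline) / baseline

def vat_risk_group(baseline_vat, interim_vat, threshold=-20.0):
    """Dichotomise patients at the -20% VAT-change cutoff from the abstract."""
    change = relative_change(baseline_vat, interim_vat)
    return "high_decrease" if change < threshold else "stable_or_increase"

# Hypothetical VAT volumes (arbitrary units) at baseline and interim staging
print(vat_risk_group(2.0, 1.5))  # -25% change -> high_decrease
print(vat_risk_group(2.0, 1.9))  # -5% change  -> stable_or_increase
```

In the study, the "high_decrease" side of this split carried the shorter median OS (10.2 vs. 18.5 months).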

Long C, Huang M, Ye X, Futamura Y, Sakurai T

pubmed · Aug 28 2025
Deep learning has achieved significant success in pattern recognition, with convolutional neural networks (CNNs) serving as a foundational architecture for extracting spatial features from images. Quantum computing provides an alternative computational framework; hybrid quantum-classical convolutional neural networks (QCCNNs) leverage high-dimensional Hilbert spaces and entanglement to surpass classical CNNs in image classification accuracy under comparable architectures. Despite these performance improvements, QCCNNs typically use fixed quantum layers without trainable quantum parameters. This limits their ability to capture non-linear quantum representations and prevents the model from exploiting the potential advantages of expressive quantum learning. In this work, we present a hybrid quantum-classical-quantum convolutional neural network (QCQ-CNN) that incorporates a quantum convolutional filter, a shallow classical CNN, and a trainable variational quantum classifier. This architecture aims to enhance the expressivity of decision boundaries in image classification tasks by introducing tunable quantum parameters into the end-to-end learning process. Through a series of small-sample experiments on MNIST, F-MNIST, and MRI tumor datasets, QCQ-CNN demonstrates competitive accuracy and convergence behavior compared to classical and hybrid baselines. We further analyze the effect of ansatz depth and find that moderate-depth quantum circuits can improve learning stability without introducing excessive complexity. Additionally, simulations incorporating depolarizing noise and finite sampling shots suggest that QCQ-CNN maintains a certain degree of robustness under realistic quantum noise conditions. While our results are currently limited to simulations with small-scale quantum circuits, the proposed approach offers a promising direction for hybrid quantum learning in near-term applications.
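The trainable-variational-quantum-classifier idea can be illustrated with a one-qubit statevector toy: angle-encode an input, apply a trainable RY rotation, and read out the Pauli-Z expectation as the score. This is a deliberately minimal stand-in, not the paper's QCQ-CNN:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    """Angle-encode scalar x, apply trainable RY(theta), return <Z> as the score.
    Since RY(theta) RY(x) = RY(x + theta), this evaluates to cos(x + theta)."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2

# Finite-difference gradient descent on the single variational parameter theta,
# separating two toy inputs with labels +1 and -1.
data = [(0.2, 1.0), (2.8, -1.0)]
theta, lr, eps = 0.0, 0.3, 1e-4
for _ in range(200):
    grad = 0.0
    for x, y in data:
        loss_p = (predict(x, theta + eps) - y) ** 2
        loss_m = (predict(x, theta - eps) - y) ** 2
        grad += (loss_p - loss_m) / (2 * eps)
    theta -= lr * grad
print(predict(0.2, theta) > 0, predict(2.8, theta) < 0)
```

A fixed quantum layer would freeze theta; making it trainable is exactly the expressivity gain the QCQ-CNN argument rests on, here shrunk to one parameter.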

Boubaker F, Lane JI, Puel U, Drouot G, Witte RJ, Ambarki K, Teixeira PAG, Blum A, Parietti-Winkler C, Vallee JN, Gillet R, Eliezer M

pubmed · Aug 28 2025
The labyrinth is a complex anatomical structure in the temporal bone. However, high-resolution imaging of its membranous portion is challenging due to its small size and the limitations of current MRI techniques. Deep Learning Reconstruction (DLR) represents a promising approach to advancing MRI image quality, enabling higher spatial resolution and reduced noise. This study aims to evaluate DLR-High-Resolution 3D-T2 MRI sequences for visualizing the labyrinthine structures, comparing them to conventional 3D-T2 sequences. The goal is to improve spatial resolution without prolonging acquisition times, allowing a more detailed view of the labyrinthine microanatomy. High-resolution heavy T2-weighted TSE SPACE images were acquired in patients using 3D-T2 and DLR-3D-T2. Two radiologists rated structure visibility on a four-point qualitative scale for the spiral lamina, scala tympani, scala vestibuli, scala media, utricle, saccule, utricular and saccular maculae, membranous semicircular ducts, and ampullary nerves. Ex vivo 9.4T MRI served as an anatomical reference. DLR-3D-T2 significantly improved the visibility of several inner ear structures. The utricle and utricular macula were systematically visualized, achieving grades ≥3 in 95% of cases (p < 0.001), while the saccule remained challenging to assess, with grades ≥3 in only 10% of cases. The cochlear spiral lamina and scala tympani were better delineated in the first two turns but remained poorly visible in the apical turn. Semicircular ducts were only partially visualized, with grades ≥3 in 12.5-20% of cases, likely due to resolution limitations relative to their diameter. Ampullary nerves were moderately improved, with grades ≥3 in 52.5-55% of cases, depending on the nerve. While DLR does not yet provide a complete anatomical assessment, it represents a significant step forward in the non-invasive evaluation of inner ear structures. 
Pending further technical refinements, this approach may help reduce reliance on delayed gadolinium-enhanced techniques for imaging membranous structures. Abbreviations: 3D-T2 = three-dimensional T2-weighted turbo spin-echo; DLR-3D-T2 = T2-weighted turbo spin-echo sequence incorporating Deep Learning Reconstruction; DLR = Deep Learning Reconstruction.
