
An integrated strategy based on radiomics and quantum machine learning: diagnosis and clinical interpretation of pulmonary ground-glass nodules.

Huang X, Xu F, Zhu W, Yao L, He J, Su J, Zhao W, Hu H

PubMed · Jul 11, 2025
Accurate classification of pulmonary pure ground-glass nodules (pGGNs) is essential for distinguishing invasive adenocarcinoma (IVA) from adenocarcinoma in situ (AIS) and minimally invasive adenocarcinoma (MIA), which significantly influences treatment decisions. This study aims to develop a high-precision integrated strategy by combining radiomics-based feature extraction, Quantum Machine Learning (QML) models, and SHapley Additive exPlanations (SHAP) analysis to improve diagnostic accuracy and interpretability in pGGN classification. A total of 322 pGGNs from 275 patients were retrospectively analyzed. The CT images were randomly divided into training and testing cohorts (80:20), with radiomic features extracted from the training cohort. Three QML models-Quantum Support Vector Classifier (QSVC), Pegasos QSVC, and Quantum Neural Network (QNN)-were developed and compared with a classical Support Vector Machine (SVM). SHAP analysis was applied to interpret the contribution of radiomic features to the models' predictions. All three QML models outperformed the classical SVM, with the QNN model achieving the highest improvements ([Formula: see text]) in classification metrics, including accuracy (89.23%, 95% CI: 81.54%-95.38%), sensitivity (96.55%, 95% CI: 89.66%-100.00%), specificity (83.33%, 95% CI: 69.44%-94.44%), and area under the curve (AUC) (0.937, 95% CI: 0.871-0.983). SHAP analysis identified Low Gray Level Run Emphasis (LGLRE), Gray Level Non-uniformity (GLN), and Size Zone Non-uniformity (SZN) as the most critical features influencing classification. This study demonstrates that the proposed integrated strategy, combining radiomics, QML models, and SHAP analysis, significantly enhances the accuracy and interpretability of pGGN classification, particularly in small-sample datasets. It offers a promising tool for early, non-invasive lung cancer diagnosis and helps clinicians make more informed treatment decisions. Trial registration: not applicable.
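A minimal sketch of the radiomics-to-classifier-to-SHAP pipeline described here, with a classical SVC standing in for the quantum models (a QSVC from a library such as qiskit-machine-learning could be swapped in with the same fit/predict interface); all data and feature counts below are placeholders, not the study's:

```python
# Hypothetical sketch: radiomic features -> classifier -> SHAP feature ranking.
# X (n_nodules x n_features) and y (0 = AIS/MIA, 1 = IVA) are placeholders;
# in practice features would come from an extractor such as pyradiomics.
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = np.random.rand(322, 10), np.random.randint(0, 2, 322)

# 80:20 split, mirroring the paper's training/testing cohorts
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", probability=True).fit(scaler.transform(X_tr), y_tr)

# Model-agnostic SHAP values to rank radiomic features (e.g. LGLRE, GLN, SZN)
explainer = shap.KernelExplainer(
    lambda x: clf.predict_proba(scaler.transform(x))[:, 1], X_tr[:50]
)
shap_values = explainer.shap_values(X_te[:20])
print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| per feature
```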

Tiny-objective segmentation for spot signs on multi-phase CT angiography via contrastive learning with dynamic-updated positive-negative memory banks.

Zhang J, Horn M, Tanaka K, Bala F, Singh N, Benali F, Ganesh A, Demchuk AM, Menon BK, Qiu W

PubMed · Jul 11, 2025
Presence of a spot sign on CT angiography (CTA) is associated with hematoma growth in patients with intracerebral hemorrhage, and measuring spot sign volume over time may help predict hematoma expansion. Spot signs are difficult to detect because they appear tiny on CTA images and their imaging characteristics resemble those of veins and calcifications; our aim was therefore to develop an automated method that identifies spot signs accurately. We propose a novel collaborative network architecture based on a student-teacher model that efficiently exploits additional negative samples through contrastive learning. In particular, a set of dynamically updated memory banks is proposed to learn more distinctive features from the extremely imbalanced positive and negative samples. In addition, a two-stream network with an additional contextual decoder is designed to learn contextual information at different scales in a collaborative way. Furthermore, to suppress false positive detections, a region restriction loss function is designed to confine the spot sign segmentation within the hemorrhage. Quantitative evaluations using Dice, volume correlation, sensitivity, specificity, and area under the curve show that the proposed method segments and detects spot signs accurately. Our contrastive learning framework obtained the best segmentation performance, with a mean Dice of 0.638 ± 0.211, a mean VC of 0.871, and a mean VDP of 0.348 ± 0.237, and the best detection performance, with a sensitivity of 0.956 (CI: 0.895-1.000), a specificity of 0.833 (CI: 0.766-0.900), and an AUC of 0.892 (CI: 0.888-0.896), outperforming nnU-Net, cascade nnU-Net, nnU-Net++, SegRegNet, UNETR, and SwinUNETR. This paper proposed a novel segmentation approach that leverages contrastive learning over additional negative samples for the automatic segmentation of spot signs on mCTA images. The experimental results demonstrate the effectiveness of our method and highlight its potential applicability in clinical settings for measuring spot sign volumes.
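A conceptual PyTorch sketch (assumed, not the authors' code) of the core idea: FIFO memory banks of positive and negative feature vectors, dynamically updated each step, driving a multi-positive InfoNCE-style contrastive loss over the imbalanced samples:

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """FIFO queue of L2-normalized feature vectors."""
    def __init__(self, size: int, dim: int):
        self.bank = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    def update(self, feats: torch.Tensor):
        feats = F.normalize(feats.detach(), dim=1)
        n = feats.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.bank.shape[0]
        self.bank[idx] = feats
        self.ptr = (self.ptr + n) % self.bank.shape[0]

def contrastive_loss(anchor, pos_bank, neg_bank, tau=0.1):
    """Pull anchors toward the positive bank, push them away from the
    (much larger) negative bank."""
    anchor = F.normalize(anchor, dim=1)
    l_pos = anchor @ pos_bank.bank.T / tau          # (B, P)
    l_neg = anchor @ neg_bank.bank.T / tau          # (B, N)
    logits = torch.cat([l_pos, l_neg], dim=1)
    log_prob = F.log_softmax(logits, dim=1)
    # average log-probability over positive-bank entries
    # (a multi-positive InfoNCE variant)
    return -log_prob[:, : l_pos.shape[1]].mean()

# negative bank much larger than positive bank, reflecting the imbalance
pos_bank, neg_bank = MemoryBank(256, 64), MemoryBank(4096, 64)
feats = torch.randn(8, 64)                          # spot-sign voxel embeddings
loss = contrastive_loss(feats, pos_bank, neg_bank)
pos_bank.update(feats)                              # dynamic update step
```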

Machine Learning-Assisted Multimodal Early Screening of Lung Cancer Based on a Multiplexed Laser-Induced Graphene Immunosensor.

Cai Y, Ke L, Du A, Dong J, Gai Z, Gao L, Yang X, Han H, Du M, Qiang G, Wang L, Wei B, Fan Y, Wang Y

PubMed · Jul 11, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, largely due to late-stage diagnosis. Early detection is critical for improving patient outcomes, yet current screening methods, such as low-dose computed tomography (CT), often lack the sensitivity and specificity required for early-stage detection. Here, we present a multimodal early screening platform that integrates a multiplexed laser-induced graphene (LIG) immunosensor with machine learning to enhance the accuracy of lung cancer diagnosis. Our platform enables the rapid, cost-effective, and simultaneous detection of four tumor markers, neuron-specific enolase (NSE), carcinoembryonic antigen (CEA), p53, and SOX2, with limits of detection (LOD) as low as 1.62 pg/mL. By combining proteomic data from the immunosensor with deep learning-based CT imaging features and clinical data, we developed a multimodal predictive model that achieves an area under the curve (AUC) of 0.936, significantly outperforming single-modality approaches. This platform offers a transformative solution for early lung cancer screening, particularly in resource-limited settings, and provides potential technical support for precision medicine in oncology.
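A hedged sketch of the multimodal fusion idea: marker concentrations from the immunosensor, deep CT-image features, and clinical variables are concatenated into one feature vector for a single classifier. All arrays are synthetic placeholders, and feature-level fusion is one plausible reading of the description, not the paper's confirmed architecture:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

n = 200
markers = np.random.rand(n, 4)     # NSE, CEA, p53, SOX2 concentrations
ct_feats = np.random.rand(n, 32)   # deep-learning features from CT
clinical = np.random.rand(n, 5)    # age, smoking history, etc. (placeholder)
y = np.random.randint(0, 2, n)

X = np.hstack([markers, ct_feats, clinical])   # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```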

Oriented tooth detection: a CBCT image processing method integrated with RoI transformer.

Zhao Z, Wu B, Su S, Liu D, Wu Z, Gao R, Zhang N

PubMed · Jul 11, 2025
Cone beam computed tomography (CBCT) has revolutionized dental imaging due to its high spatial resolution and ability to provide detailed three-dimensional reconstructions of dental structures. This study addresses the challenge of accurate tooth detection and classification in panoramic images (PAN) derived from CBCT, introducing an oriented object detection approach integrated with a Region of Interest (RoI) Transformer, which has not previously been applied in dental imaging. Oriented detection better aligns with the natural growth patterns of teeth, allowing more accurate detection and classification of molars, premolars, canines, and incisors. By integrating the RoI Transformer, the model demonstrates competitive performance compared with conventional horizontal detection methods, while also offering enhanced visualization capabilities. Furthermore, post-processing techniques, including distance and grayscale-value constraints, are employed to correct classification errors and reduce false positives, especially in areas with missing teeth. The experimental results indicate that the proposed method achieves an accuracy of 98.48%, a recall of 97.21%, an F1 score of 97.21%, and an mAP of 98.12% in tooth detection. The proposed method enhances the accuracy of tooth detection in CBCT-derived PAN by reducing background interference and improving the visualization of tooth orientation.
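An illustrative sketch of the distance- and grayscale-constrained post-processing step. The paper's exact thresholds and rules are not given, so the function names, constraint values, and box representation here are assumptions:

```python
import numpy as np

def filter_detections(boxes, centers, image, min_gray=80.0, max_gap=120.0):
    """Drop detections that violate simple plausibility constraints.
    boxes: list of (rows, cols) pixel-index arrays, one per detection;
    centers: (N, 2) array of detection centers in pixels."""
    keep = []
    for i, px in enumerate(boxes):
        # grayscale constraint: a real tooth should be bright in PAN
        if image[px].mean() < min_gray:
            continue
        # distance constraint: a tooth should have a neighbor nearby;
        # isolated boxes (e.g. over gaps from missing teeth) are rejected
        d = np.linalg.norm(centers - centers[i], axis=1)
        d[i] = np.inf
        if d.min() > max_gap:
            continue
        keep.append(i)
    return keep
```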

Interpretable Artificial Intelligence for Detecting Acute Heart Failure on Acute Chest CT Scans

Silas Nyboe Ørting, Kristina Miger, Anne Sophie Overgaard Olesen, Mikael Ploug Boesen, Michael Brun Andersen, Jens Petersen, Olav W. Nielsen, Marleen de Bruijne

arXiv preprint · Jul 11, 2025
Introduction: Chest CT scans are increasingly used in dyspneic patients where acute heart failure (AHF) is a key differential diagnosis. Interpretation remains challenging and radiology reports are frequently delayed due to a radiologist shortage, although flagging such information for emergency physicians would have therapeutic implications. Artificial intelligence (AI) can be a complementary tool to enhance diagnostic precision. We aim to develop an explainable AI model that detects radiological signs of AHF in chest CT with an accuracy comparable to thoracic radiologists. Methods: A single-center, retrospective study during 2016-2021 at Copenhagen University Hospital - Bispebjerg and Frederiksberg, Denmark. A Boosted Trees model was trained to predict AHF based on measurements of segmented cardiac and pulmonary structures from acute thoracic CT scans. Diagnostic labels for training and testing were extracted from radiology reports. Structures were segmented with TotalSegmentator. SHapley Additive exPlanations (SHAP) values were used to explain the impact of each measurement on the final prediction. Results: Of the 4,672 subjects, 49% were female. The final model incorporated twelve key features of AHF and achieved an area under the ROC curve of 0.87 on the independent test set. Expert radiologist review of model misclassifications found that 24 out of 64 (38%) false positives and 24 out of 61 (39%) false negatives were actually correct model predictions, with the errors originating from inaccuracies in the initial radiology reports. Conclusion: We developed an explainable AI model with strong discriminatory performance, comparable to thoracic radiologists. The AI model's stepwise, transparent predictions may support decision-making.
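A sketch of the tabular pipeline this abstract describes: measurements of segmented structures feed a boosted-trees classifier, and tree-exact SHAP values explain each prediction. The feature names and data are illustrative placeholders, not the model's actual twelve features:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["heart_volume_ml", "pleural_effusion_ml", "pa_diameter_mm",
                 "lung_density_hu", "cardiothoracic_ratio"]  # hypothetical
X = np.random.rand(500, len(feature_names))
y = np.random.randint(0, 2, 500)              # AHF label from radiology report

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)         # exact SHAP values for tree models
shap_values = explainer.shap_values(X[:10])   # per-patient feature attributions
print(dict(zip(feature_names,
               np.abs(shap_values).mean(axis=0).round(3))))
```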

Objective assessment of diagnostic image quality in CT scans: what radiologists and researchers need to know.

Hoeijmakers EJI, Martens B, Wildberger JE, Flohr TG, Jeukens CRLPN

PubMed · Jul 10, 2025
Quantifying diagnostic image quality (IQ) is not straightforward but is essential for optimizing the balance between IQ and radiation dose, and for ensuring consistently high-quality images in CT imaging. This review provides a comprehensive overview of advanced objective reference-free IQ assessment methods for CT scans, beyond standard approaches. A literature search was performed in PubMed and Web of Science up to June 2024 to identify studies using advanced objective image quality methods on clinical CT scans. Only reference-free methods, which do not require a predefined reference image, were included. Traditional methods relying on the standard deviation of the Hounsfield units, the signal-to-noise ratio, or the contrast-to-noise ratio, all within a manually selected region-of-interest, were excluded. Eligible results were categorized by IQ metric (i.e., noise, contrast, spatial resolution, and other) and assessment method (manual, automated, and artificial intelligence (AI)-based). Thirty-five studies were included that proposed or employed reference-free IQ methods, identifying 12 noise assessment methods, 4 contrast assessment methods, 14 spatial resolution assessment methods, and 7 others, based on manual, automated, or AI-based approaches. This review emphasizes the transition from manual to fully automated approaches for IQ assessment, including the potential of AI-based methods, and provides a reference tool for researchers and radiologists who need to make a well-considered choice in how to evaluate IQ in CT imaging. Quantifying diagnostic CT image quality remains a key challenge; this review summarizes objective assessment techniques beyond standard metrics and provides a decision tree to help select the optimal image quality assessment technique.
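As an example of the automated, reference-free noise metrics such a review catalogs, here is a minimal sketch of a "global noise level" estimate: the mode of a local standard-deviation map restricted to soft tissue. The kernel size and HU thresholds are assumptions for illustration, not values from the review:

```python
import numpy as np
from scipy.ndimage import generic_filter

def global_noise_level(ct_slice_hu, soft_tissue=(-300, 300), kernel=5):
    """Estimate image noise (in HU) without a reference image."""
    local_sd = generic_filter(ct_slice_hu.astype(float), np.std, size=kernel)
    mask = (ct_slice_hu > soft_tissue[0]) & (ct_slice_hu < soft_tissue[1])
    hist, edges = np.histogram(local_sd[mask], bins=200)
    return edges[np.argmax(hist)]   # mode of local SD ~ global noise level

noise = global_noise_level(np.random.normal(40, 10, (128, 128)))
print(f"estimated noise: {noise:.1f} HU")
```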

FF Swin-Unet: a strategy for automated segmentation and severity scoring of NAFLD.

Fan L, Lei Y, Song F, Sun X, Zhang Z

PubMed · Jul 10, 2025
Non-alcoholic fatty liver disease (NAFLD) is a significant risk factor for liver cancer and cardiovascular diseases, imposing substantial social and economic burdens. Computed tomography (CT) scans are crucial for diagnosing NAFLD and assessing its severity. However, current manual measurement techniques require considerable human effort and resources from radiologists, and there is a lack of standardized methods for classifying the severity of NAFLD in existing research. To address these challenges, we propose a novel method for NAFLD segmentation and automated severity scoring. The method consists of three key modules: (1) The Semi-automatization nnU-Net Module (SNM) constructs a high-quality dataset by combining manual annotations with semi-automated refinement; (2) The Focal Feature Fusion Swin-Unet Module (FSM) enhances liver and spleen segmentation through multi-scale feature fusion and Swin Transformer-based architectures; (3) The Automated Severity Scoring Module (ASSM) integrates segmentation results with radiological features to classify NAFLD severity. These modules are embedded in a Flask-RESTful API-based system, enabling users to upload abdominal CT data for automated preprocessing, segmentation, and scoring. The Focal Feature Fusion Swin-Unet (FF Swin-Unet) method significantly improves segmentation accuracy, achieving a Dice similarity coefficient (DSC) of 95.64% and a 95th percentile Hausdorff distance (HD95) of 15.94. The accuracy of the automated severity scoring is 90%. With model compression and ONNX deployment, the evaluation speed for each case is approximately 5 seconds. Compared to manual diagnosis, the system can process a large volume of data simultaneously, rapidly, and efficiently while maintaining the same level of diagnostic accuracy, significantly reducing the workload of medical professionals. Our research demonstrates that the proposed system has high accuracy in processing large volumes of CT data and providing automated NAFLD severity scores quickly and efficiently. This method has the potential to significantly reduce the workload of medical professionals and holds immense clinical application potential.
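The abstract does not detail how the Automated Severity Scoring Module combines segmentation with radiological features; a standard radiological marker it could plausibly build on is the liver-to-spleen CT attenuation ratio. The sketch below computes that ratio from segmentation masks, with cut-offs that are common in the literature but assumed here, not the paper's ASSM thresholds:

```python
import numpy as np

def nafld_severity(ct_hu, liver_mask, spleen_mask):
    """Grade hepatic steatosis from mean HU of segmented liver and spleen."""
    ratio = ct_hu[liver_mask].mean() / ct_hu[spleen_mask].mean()
    if ratio >= 1.0:      # assumed cut-offs for illustration
        return "normal"
    elif ratio >= 0.8:
        return "mild"
    return "moderate-severe"

# toy volume with placeholder masks
ct = np.random.normal(50, 15, (64, 64, 64))
liver = np.zeros_like(ct, bool); liver[10:30, 10:30, 10:30] = True
spleen = np.zeros_like(ct, bool); spleen[40:50, 40:50, 40:50] = True
print(nafld_severity(ct, liver, spleen))
```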

Depth-Sequence Transformer (DST) for Segment-Specific ICA Calcification Mapping on Non-Contrast CT

Xiangjian Hou, Ebru Yaman Akcicek, Xin Wang, Kazem Hashemizadeh, Scott Mcnally, Chun Yuan, Xiaodong Ma

arXiv preprint · Jul 10, 2025
While total intracranial carotid artery calcification (ICAC) volume is an established stroke biomarker, growing evidence shows this aggregate metric ignores the critical influence of plaque location, since calcification in different segments carries distinct prognostic and procedural risks. However, a finer-grained, segment-specific quantification has remained technically infeasible. Conventional 3D models are forced to process downsampled volumes or isolated patches, sacrificing the global context required to resolve anatomical ambiguity and render reliable landmark localization. To overcome this, we reformulate the 3D challenge as a Parallel Probabilistic Landmark Localization task along the 1D axial dimension. We propose the Depth-Sequence Transformer (DST), a framework that processes full-resolution CT volumes as sequences of 2D slices, learning to predict N = 6 independent probability distributions that pinpoint key anatomical landmarks. Our DST framework demonstrates exceptional accuracy and robustness. Evaluated on a 100-patient clinical cohort with rigorous 5-fold cross-validation, it achieves a Mean Absolute Error (MAE) of 0.1 slices, with 96% of predictions falling within a ±1 slice tolerance. Furthermore, to validate its architectural power, the DST backbone establishes the best result on the public Clean-CC-CCII classification benchmark under an end-to-end evaluation protocol. Our work delivers the first practical tool for automated segment-specific ICAC analysis. The proposed framework provides a foundation for further studies on the role of location-specific biomarkers in diagnosis, prognosis, and procedural planning. Our code will be made publicly available.
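A conceptual PyTorch sketch (not the authors' code) of the reformulation: each 2D slice becomes a token, a transformer mixes information along the axial axis, and six heads each output a probability distribution over slice positions. The slice encoder and all dimensions are placeholders:

```python
import torch
import torch.nn as nn

class DepthSequenceSketch(nn.Module):
    def __init__(self, slice_dim=256, n_landmarks=6):
        super().__init__()
        # stand-in for a 2D slice encoder (a CNN in practice)
        self.slice_encoder = nn.Linear(64 * 64, slice_dim)
        layer = nn.TransformerEncoderLayer(d_model=slice_dim, nhead=8,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        # one logit per landmark per slice
        self.heads = nn.Linear(slice_dim, n_landmarks)

    def forward(self, volume):                 # volume: (B, S, 64*64)
        tokens = self.slice_encoder(volume)    # (B, S, slice_dim)
        tokens = self.transformer(tokens)      # mix context along axial axis
        logits = self.heads(tokens)            # (B, S, 6)
        return logits.softmax(dim=1)           # distribution over slices

model = DepthSequenceSketch()
probs = model(torch.randn(1, 300, 64 * 64))    # 300 axial slices
pred_slices = probs.argmax(dim=1)              # (1, 6) predicted landmark slices
print(pred_slices)
```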

Predicting Thoracolumbar Vertebral Osteoporotic Fractures: Value Assessment of Chest CT-Based Machine Learning.

Chen Y, Che M, Yang H, Yu M, Yang Z, Qin J

PubMed · Jul 10, 2025
To assess the value of a chest CT-based machine learning model in predicting osteoporotic vertebral fractures (OVFs) of the thoracolumbar vertebral bodies. We monitored 8910 patients aged ≥50 who underwent chest CT (2021-2024), identifying 54 incident OVF cases. Using propensity score matching, 108 controls were selected. The 162 patients were randomly assigned to training (n=113) and testing (n=49) cohorts. Clinical models were developed through logistic regression. Radiomics features were extracted from the thoracolumbar vertebral bodies (T11-L2), with the top 10 features selected via minimum redundancy maximum relevance and the least absolute shrinkage and selection operator (LASSO) to construct a Radscore model. A nomogram model was established by combining clinical and radiomics features and was evaluated using receiver operating characteristic curves, decision curve analysis (DCA), and calibration plots. Volumetric bone mineral density (vBMD) (OR=0.95, 95% CI=0.93-0.97) and hemoglobin (HGB) (OR=0.96, 95% CI=0.94-0.98) were selected as independent risk factors for the clinical model. From 2288 radiomics features, 10 were selected for Radscore calculation. The nomogram model (Radscore + vBMD + HGB) achieved an area under the curve (AUC) of 0.938/0.906 in the training/testing cohorts, outperforming both the Radscore (AUC=0.902/0.871) and clinical (AUC=0.802/0.820) models. DCA and calibration plots confirmed the nomogram model's superior prediction capability. The nomogram model combining radiomics and clinical features has high predictive performance, and its predictions for thoracolumbar OVFs can inform clinical decision making.
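A hedged sketch of the Radscore construction: an L1-penalized (LASSO-style) logistic regression selects a sparse subset of the radiomic features, whose weighted sum forms the score; vBMD and HGB are then added for the combined model. Data, regularization strength, and the two-stage setup are placeholders, not the study's fitted values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X_rad = np.random.rand(162, 2288)          # radiomic features (T11-L2 bodies)
y = np.random.randint(0, 2, 162)           # OVF vs matched control

Xz = StandardScaler().fit_transform(X_rad)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xz, y)
selected = np.flatnonzero(lasso.coef_[0])  # features surviving the L1 penalty
print(f"{selected.size} features selected")

# Radscore = weighted sum of the selected features
radscore = Xz[:, selected] @ lasso.coef_[0][selected]
clinical = np.random.rand(162, 2)          # vBMD, HGB (placeholder values)
X_nomo = np.column_stack([radscore, clinical])
nomogram = LogisticRegression(max_iter=1000).fit(X_nomo, y)
```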

Patient-specific vs Multi-Patient Vision Transformer for Markerless Tumor Motion Forecasting

Gauthier Rotsart de Hertaing, Dani Manjah, Benoit Macq

arXiv preprint · Jul 10, 2025
Background: Accurate forecasting of lung tumor motion is essential for precise dose delivery in proton therapy. While current markerless methods mostly rely on deep learning, transformer-based architectures remain unexplored in this domain, despite their proven performance in trajectory forecasting. Purpose: This work introduces a markerless forecasting approach for lung tumor motion using Vision Transformers (ViT). Two training strategies are evaluated under clinically realistic constraints: a patient-specific (PS) approach that learns individualized motion patterns, and a multi-patient (MP) model designed for generalization. The comparison explicitly accounts for the limited number of images that can be generated between planning and treatment sessions. Methods: Digitally reconstructed radiographs (DRRs) derived from planning 4DCT scans of 31 patients were used to train the MP model; a 32nd patient was held out for evaluation. PS models were trained using only the target patient's planning data. Both models used 16 DRRs per input and predicted tumor motion over a 1-second horizon. Performance was assessed using Average Displacement Error (ADE) and Final Displacement Error (FDE), on both planning (T1) and treatment (T2) data. Results: On T1 data, PS models outperformed MP models across all training set sizes, especially with larger datasets (up to 25,000 DRRs, p < 0.05). However, MP models demonstrated stronger robustness to inter-fractional anatomical variability and achieved comparable performance on T2 data without retraining. Conclusions: This is the first study to apply ViT architectures to markerless tumor motion forecasting. While PS models achieve higher precision, MP models offer robust out-of-the-box performance, well-suited for time-constrained clinical settings.
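The two metrics named in this abstract are standard in trajectory forecasting; a minimal sketch of their computation, with synthetic trajectories in place of real tumor-center tracks:

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (T, 2) tumor-center trajectories in image coordinates.
    ADE = mean L2 error over the whole horizon; FDE = L2 error at the
    final predicted step."""
    err = np.linalg.norm(pred - gt, axis=1)   # per-step Euclidean error
    return err.mean(), err[-1]

pred = np.cumsum(np.random.randn(10, 2), axis=0)   # predicted 1-second horizon
gt = pred + np.random.randn(10, 2) * 0.1           # synthetic ground truth
ade, fde = ade_fde(pred, gt)
print(f"ADE={ade:.3f}, FDE={fde:.3f}")
```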