Page 6 of 45448 results

Integrating multi-scale information and diverse prompts in large model SAM-Med2D for accurate left ventricular ejection fraction estimation.

Wu Y, Zhao T, Hu S, Wu Q, Chen Y, Huang X, Zheng Z

PubMed · Jul 1 2025
Left ventricular ejection fraction (LVEF) is a critical indicator of cardiac function, aiding in the assessment of heart conditions. Accurate segmentation of the left ventricle (LV) is essential for LVEF calculation. However, current methods are often limited by small datasets and exhibit poor generalization. While leveraging large models can address this issue, many fail to capture multi-scale information and introduce additional burdens on users to generate prompts. To overcome these challenges, we propose LV-SAM, a model based on the large model SAM-Med2D, for accurate LV segmentation. It comprises three key components: an image encoder with a multi-scale adapter (MSAd), a multimodal prompt encoder (MPE), and a multi-scale decoder (MSD). The MSAd extracts multi-scale information at the encoder level and fine-tunes the model, while the MSD employs skip connections to effectively utilize multi-scale information at the decoder level. Additionally, we introduce an automated pipeline for generating self-extracted dense prompts and use a large language model to generate text prompts, reducing the user burden. The MPE processes these prompts, further enhancing model performance. Evaluations on the CAMUS dataset show that LV-SAM outperforms existing state-of-the-art (SOTA) methods in LV segmentation, achieving the lowest MAE of 5.016 in LVEF estimation.
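The headline LVEF metric is a plain mean absolute error over paired estimates. A minimal Python sketch (the ejection-fraction values below are illustrative, not taken from the CAMUS evaluation):

```python
def mean_absolute_error(predicted, reference):
    """Mean absolute error between paired measurements."""
    if len(predicted) != len(reference):
        raise ValueError("paired sequences must have equal length")
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Illustrative ejection fractions (%), not values from the CAMUS evaluation.
pred_ef = [55.2, 48.0, 62.5, 35.1]
true_ef = [57.0, 45.5, 60.0, 40.0]
mae = mean_absolute_error(pred_ef, true_ef)
```

A lower MAE on held-out studies indicates closer agreement with the reference ejection fractions.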

A Workflow-Efficient Approach to Pre- and Post-Operative Assessment of Weight-Bearing Three-Dimensional Knee Kinematics.

Banks SA, Yildirim G, Jachode G, Cox J, Anderson O, Jensen A, Cole JD, Kessler O

PubMed · Jul 1 2025
Knee kinematics during daily activities reflect disease severity preoperatively and are associated with clinical outcomes after total knee arthroplasty (TKA). It is widely believed that measured kinematics would be useful for preoperative planning and postoperative assessment. Despite decades-long interest in measuring three-dimensional (3D) knee kinematics, no methods are available for routine, practical clinical examinations. We report a clinically practical method utilizing machine-learning-enhanced software and upgraded C-arm fluoroscopy for the accurate and time-efficient measurement of pre-TKA and post-TKA 3D dynamic knee kinematics. Using a common C-arm with an upgraded detector and software, we performed an 8-s horizontal sweeping pulsed fluoroscopic scan of the weight-bearing knee joint. The patient's knee was then imaged using pulsed C-arm fluoroscopy while performing standing, kneeling, squatting, stair, chair, and gait motion activities. We used limited-arc cone-beam reconstruction methods to create 3D models of the femur and tibia/fibula bones with implants, which can then be used to perform model-image registration to quantify the 3D knee kinematics. The proposed protocol can be accomplished by an individual radiology technician in ten minutes and does not require additional equipment beyond a step and stool. The image analysis can be performed by a computer onboard the upgraded C-arm or in the cloud, before loading the examination results into the Picture Archiving and Communication System and Electronic Medical Record systems. Weight-bearing kinematics affects knee function pre- and post-TKA. It has long been exclusively the domain of researchers to make such measurements. We present an approach that leverages common, but digitally upgraded, imaging hardware and software to implement an efficient examination protocol for accurately assessing 3D knee kinematics.
With these capabilities, it will be possible to include dynamic 3D knee kinematics as a component of the routine clinical workup for patients who have diseased or replaced knees.

A novel deep learning framework for retinal disease detection leveraging contextual and local features cues from retinal images.

Khan SD, Basalamah S, Lbath A

PubMed · Jul 1 2025
Retinal diseases are a serious global threat to human vision, and early identification is essential for effective prevention and treatment. However, current diagnostic methods rely on manual analysis of fundus images, which heavily depends on the expertise of ophthalmologists. This manual process is time-consuming and labor-intensive and can sometimes lead to missed diagnoses. With advancements in computer vision technology, several automated models have been proposed to improve diagnostic accuracy for retinal diseases and medical imaging in general. However, these methods face challenges in accurately detecting specific diseases within images due to inherent issues associated with fundus images, including inter-class similarities, intra-class variations, limited local information, insufficient contextual understanding, and class imbalances within datasets. To address these challenges, we propose a novel deep learning framework for accurate retinal disease classification. This framework is designed to achieve high accuracy in identifying various retinal diseases while overcoming inherent challenges associated with fundus images. The framework consists of three main modules. The first module is a Densely Connected Multidilated Convolution Neural Network (DCM-CNN) that extracts global contextual information by effectively integrating novel Causal Dilated Dense Convolutional Blocks (CDDCBs). The second module, the Local-Patch-based Convolution Neural Network (LP-CNN), utilizes the Class Activation Map (CAM) obtained from DCM-CNN to extract local and fine-grained information. To identify the correct class and minimize error, a synergic network takes the feature maps of both DCM-CNN and LP-CNN and connects them in a fully connected fashion.
The framework is evaluated through a comprehensive set of experiments, both quantitatively and qualitatively, using two publicly available benchmark datasets: RFMiD and ODIR-5K. Our experimental results demonstrate the effectiveness of the proposed framework, which achieves higher performance on the RFMiD and ODIR-5K datasets than reference methods.
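The LP-CNN's use of a Class Activation Map follows the standard CAM construction: the last convolutional feature maps are weighted by the classifier weights of the target class. A minimal NumPy sketch of that step, with toy shapes (the paper's actual feature dimensions are not given):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weighted sum of the last conv feature maps by the target class's
    fully connected weights (the classic CAM formulation)."""
    # feature_maps: (C, H, W); fc_weights: (num_classes, C)
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)          # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()           # normalise to [0, 1]
    return cam

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))           # toy feature maps
weights = rng.random((5, 8))            # toy classifier weights
cam = class_activation_map(fmaps, weights, class_idx=2)
```

The resulting heatmap highlights the image regions most responsible for the class score, which is what lets the second module crop local patches around them.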

Intraindividual Comparison of Image Quality Between Low-Dose and Ultra-Low-Dose Abdominal CT With Deep Learning Reconstruction and Standard-Dose Abdominal CT Using Dual-Split Scan.

Lee TY, Yoon JH, Park JY, Park SH, Kim H, Lee CM, Choi Y, Lee JM

PubMed · Jul 1 2025
The aim of this study was to intraindividually compare the conspicuity of focal liver lesions (FLLs) between low- and ultra-low-dose computed tomography (CT) with deep learning reconstruction (DLR) and standard-dose CT with model-based iterative reconstruction (MBIR) from a single CT using dual-split scan in patients with suspected liver metastasis via a noninferiority design. This prospective study enrolled participants who met the eligibility criteria at 2 tertiary hospitals in South Korea from June 2022 to January 2023. The criteria included (a) being aged between 20 and 85 years and (b) having suspected or known liver metastases. Dual-source CT scans were conducted, with the standard radiation dose divided in a 2:1 ratio between tubes A and B (67% and 33%, respectively). The voltage settings of 100/120 kVp were selected based on the participant's body mass index (<30 vs ≥30 kg/m²). For image reconstruction, MBIR was utilized for standard-dose (100%) images, whereas DLR was employed for both low-dose (67%) and ultra-low-dose (33%) images. Three radiologists independently evaluated FLL conspicuity, the probability of metastasis, and subjective image quality using a 5-point Likert scale, in addition to quantitative signal-to-noise and contrast-to-noise ratios. The noninferiority margins were set at -0.5 for conspicuity and -0.1 for detection. One hundred thirty-three participants (male = 58, mean body mass index = 23.0 ± 3.4 kg/m²) were included in the analysis. The low- and ultra-low-dose scans had a lower radiation dose than the standard-dose scan (median CT dose index volume: 3.75, 1.87 vs 5.62 mGy, respectively, in the arterial phase; 3.89, 1.95 vs 5.84 in the portal venous phase, P < 0.001 for all).
Median FLL conspicuity was lower in the low- and ultra-low-dose scans compared with the standard-dose (3.0 [interquartile range, IQR: 2.0, 4.0], 3.0 [IQR: 1.0, 4.0] vs 3.0 [IQR: 2.0, 4.0] in the arterial phase; 4.0 [IQR: 1.0, 5.0], 3.0 [IQR: 1.0, 4.0] vs 4.0 [IQR: 2.0, 5.0] in the portal venous phase), yet within the noninferiority margin (P < 0.001 for all). FLL detection was also lower but remained within the margin (lesion detection rate: 0.772 [95% confidence interval, CI: 0.727, 0.812], 0.754 [0.708, 0.795], respectively) compared with the standard-dose (0.810 [95% CI: 0.770, 0.844]). Sensitivity for liver metastasis differed between the standard- (80.6% [95% CI: 76.0, 84.5]), low-, and ultra-low-dose scans (75.7% [95% CI: 70.2, 80.5], 73.7% [95% CI: 68.3, 78.5], respectively, P < 0.001 for both), whereas specificity was similar (P > 0.05). Low- and ultra-low-dose CT with DLR showed noninferior FLL conspicuity and detection compared with standard-dose CT with MBIR. Caution is needed due to a potential decrease in sensitivity for metastasis (clinicaltrials.gov/NCT05324046).
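The noninferiority comparison of detection rates can be illustrated with a normal-approximation lower confidence bound on the difference of two proportions, checked against a negative margin. A simplified sketch with made-up counts (the study reports rates and CIs rather than raw counts, and its exact statistical procedure may differ):

```python
import math

def noninferiority_two_proportions(x_test, n_test, x_ref, n_ref, margin, z=1.96):
    """Lower bound of the normal-approximation 95% CI for p_test - p_ref,
    compared against a (negative) noninferiority margin."""
    p_t, p_r = x_test / n_test, x_ref / n_ref
    se = math.sqrt(p_t * (1 - p_t) / n_test + p_r * (1 - p_r) / n_ref)
    lower = (p_t - p_r) - z * se
    return lower, lower >= margin

# Made-up detection counts; the study's margin for detection was -0.1.
lower, noninferior = noninferiority_two_proportions(780, 1000, 800, 1000, margin=-0.1)
```

Noninferiority holds when the whole lower bound of the difference stays above the margin, even though the point estimate itself may be slightly negative.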

Machine-learning model based on ultrasomics for non-invasive evaluation of fibrosis in IgA nephropathy.

Huang Q, Huang F, Chen C, Xiao P, Liu J, Gao Y

PubMed · Jul 1 2025
To develop and validate an ultrasomics-based machine-learning (ML) model for non-invasive assessment of interstitial fibrosis and tubular atrophy (IF/TA) in patients with IgA nephropathy (IgAN). In this multi-center retrospective study, 471 patients with primary IgA nephropathy from four institutions were included (training, n = 275; internal testing, n = 69; external testing, n = 127). Least absolute shrinkage and selection operator (LASSO) logistic regression with tenfold cross-validation was used to identify the most relevant features. The ML models were constructed based on ultrasomics. The Shapley Additive Explanation (SHAP) was used to explore the interpretability of the models. Logistic regression analysis was employed to combine ultrasomics, clinical data, and ultrasound imaging characteristics, creating a comprehensive model. A receiver operating characteristic curve, calibration, decision curve, and clinical impact curve were used to evaluate prediction performance. To differentiate between mild and moderate-to-severe IF/TA, three prediction models were developed: the Rad_SVM_Model, Clinic_LR_Model, and Rad_Clinic_Model. The areas under the curve of these three models were 0.861, 0.884, and 0.913 in the training cohort, 0.760, 0.860, and 0.894 in the internal validation cohort, and 0.794, 0.865, and 0.904 in the external validation cohort. SHAP identified the contribution of radiomics features. Difference analysis showed that radiomics features differed significantly with the degree of fibrosis. The comprehensive model was superior to the individual indicators and performed well. We developed and validated a model that combined ultrasomics, clinical data, and clinical ultrasonic characteristics based on ML to assess the extent of fibrosis in IgAN.
Question Currently, there is a lack of a comprehensive ultrasomics-based machine-learning model for non-invasive assessment of the extent of Immunoglobulin A nephropathy (IgAN) fibrosis. Findings We have developed and validated a robust and interpretable machine-learning model based on ultrasomics for assessing the degree of fibrosis in IgAN. Clinical relevance The machine-learning model developed in this study is interpretable and clinically relevant. The ultrasomics-based comprehensive model has the potential for non-invasive assessment of fibrosis in IgAN, helping to evaluate disease progression.
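The tenfold cross-validation used during LASSO feature selection rests on a simple index partition. A minimal sketch of such a splitter (seed and fold logic are illustrative, not the authors' implementation):

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and partition them into k near-equal folds,
    yielding (train_idx, test_idx) pairs for cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(275, k=10))  # training-cohort size from the abstract
```

Each sample appears in exactly one test fold, so every fold's validation score is computed on data unseen by that fold's fit.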

Automated vs manual cardiac MRI planning: a single-center prospective evaluation of reliability and scan times.

Glessgen C, Crowe LA, Wetzl J, Schmidt M, Yoon SS, Vallée JP, Deux JF

PubMed · Jul 1 2025
Evaluating the impact of an AI-based automated cardiac MRI (CMR) planning software on procedure errors and scan times compared to manual planning alone. Consecutive patients undergoing non-stress CMR were prospectively enrolled at a single center (August 2023-February 2024) and randomized to manual or automated scan execution using prototype software. Patients with pacemakers, targeted indications, or inability to consent were excluded. All patients underwent the same CMR protocol with contrast, in breath-hold (BH) or free breathing (FB). Supervising radiologists recorded procedure errors (plane prescription, forgotten views, incorrect propagation of cardiac planes, and field-of-view mismanagement). Scan times and idle phase (non-acquisition portion) were computed from scanner logs. Most data were non-normally distributed and compared using non-parametric tests. Eighty-two patients (mean age, 51.6 ± 17.5 years; 56 men) were included. Forty-four patients underwent automated and 38 manual CMRs. The mean rate of procedure errors was significantly (p = 0.01) lower in the automated (0.45) than in the manual group (1.13). The rate of error-free examinations was higher (p = 0.03) in the automated (31/44; 70.5%) than in the manual group (17/38; 44.7%). Automated studies were shorter than manual studies in FB (30.3 vs 36.5 min, p < 0.001) but had similar durations in BH (42.0 vs 43.5 min, p = 0.42). The idle phase was lower in automated studies for FB and BH strategies (both p < 0.001). An AI-based automated software performed CMR at a clinical level with fewer planning errors and improved efficiency compared to manual planning. Question What is the impact of an AI-based automated CMR planning software on procedure errors and scan times compared to manual planning alone? Findings Software-driven examinations were more reliable (71% error-free) than human-planned ones (45% error-free) and showed improved efficiency with reduced idle time.
Clinical relevance CMR examinations require extensive technologist training and continuous attention, and involve many planning steps. A fully automated software reliably acquired non-stress CMR, potentially reducing the risk of mistakes and increasing data homogeneity.
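The error-free-examination comparison (31/44 automated vs 17/38 manual) is a 2x2 contingency test. A self-contained Fisher exact sketch (the abstract does not state which test produced its p = 0.03, so this is an illustration, not a reproduction):

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum hypergeometric probabilities of all tables no more likely
    than the observed one, with margins fixed."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def table_p(x):
        # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = table_p(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(p for p in (table_p(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-12))

# Error-free examinations from the abstract: automated 31/44, manual 17/38.
p_value = fisher_exact_p(31, 44 - 31, 17, 38 - 17)
```

With samples this small, an exact test is preferable to a chi-square approximation; both should agree that the difference is significant at the 5% level here.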

CT-based clinical-radiomics model to predict progression and drive clinical applicability in locally advanced head and neck cancer.

Bruixola G, Dualde-Beltrán D, Jimenez-Pastor A, Nogué A, Bellvís F, Fuster-Matanzo A, Alfaro-Cervelló C, Grimalt N, Salhab-Ibáñez N, Escorihuela V, Iglesias ME, Maroñas M, Alberich-Bayarri Á, Cervantes A, Tarazona N

PubMed · Jul 1 2025
Definitive chemoradiation is the primary treatment for locally advanced head and neck carcinoma (LAHNSCC). Optimising outcome predictions requires validated biomarkers, since TNM8 and HPV could have limitations. Radiomics may enhance risk stratification. This single-centre observational study collected clinical data and baseline CT scans from 171 LAHNSCC patients treated with chemoradiation. The dataset was divided into training (80%) and test (20%) sets, with 5-fold cross-validation on the training set. Researchers extracted 108 radiomics features from each primary tumour and applied survival analysis and classification models to predict progression-free survival (PFS) and 5-year progression, respectively. Performance was evaluated using inverse probability of censoring weights and the c-index for the PFS model, and AUC, sensitivity, specificity, and accuracy for the 5-year progression model. Feature importance was measured by the SHapley Additive exPlanations (SHAP) method, and patient stratification was assessed through Kaplan-Meier curves. The final dataset included 171 LAHNSCC patients, with 53% experiencing disease progression at 5 years. The random survival forest model best predicted PFS, with an AUC of 0.64 and a c-index of 0.66 on the test set, highlighting 4 radiomics features and TNM8 as significant contributors. It successfully stratified patients into low- and high-risk groups (log-rank p < 0.005). The extreme gradient boosting model most effectively predicted 5-year progression, incorporating 12 radiomics features and four clinical variables, achieving an AUC of 0.74, sensitivity of 0.53, specificity of 0.81, and accuracy of 0.66 on the test set. The combined clinical-radiomics model improved on the standard TNM8 and clinical variables in predicting 5-year progression, though further validation is necessary. Question There is an unmet need for non-invasive biomarkers to guide treatment in locally advanced head and neck cancer.
Findings Clinical data (TNM8 staging, primary tumour site, age, and smoking) plus radiomics improved 5-year progression prediction compared with the comprehensive clinical model or TNM staging alone. Clinical relevance SHAP simplifies complex machine-learning radiomics models for clinicians by using easy-to-understand graphical representations, promoting explainability.
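The c-index reported for the PFS model is Harrell's concordance: among comparable patient pairs, the fraction where the higher-risk patient progresses earlier. A minimal sketch (tied event times are ignored, which a full implementation would handle):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's c-index over comparable pairs: the earlier time must be
    an observed event (event flag 1); higher risk should fail earlier.
    Ties in risk score count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    if comparable == 0:
        raise ValueError("no comparable pairs")
    return concordant / comparable
```

A c-index of 0.5 means the risk scores rank patients no better than chance; 1.0 means a perfect ordering, so the reported 0.66 reflects modest but real discriminative ability.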

Preoperative discrimination of absence or presence of myometrial invasion in endometrial cancer with an MRI-based multimodal deep learning radiomics model.

Chen Y, Ruan X, Wang X, Li P, Chen Y, Feng B, Wen X, Sun J, Zheng C, Zou Y, Liang B, Li M, Long W, Shen Y

PubMed · Jul 1 2025
Accurate preoperative evaluation of myometrial invasion (MI) is essential for treatment decisions in endometrial cancer (EC). However, the diagnostic accuracy of commonly utilized magnetic resonance imaging (MRI) techniques for this assessment exhibits considerable variability. This study aims to enhance preoperative discrimination of the absence or presence of MI by developing and validating a multimodal deep learning radiomics (MDLR) model based on MRI. Between March 2010 and February 2023, 1139 EC patients (age 54.771 ± 8.465 years; range 24-89 years) from five independent centers were enrolled retrospectively. We utilized ResNet18 to extract multi-scale deep learning features from T2-weighted imaging, followed by feature selection via the Mann-Whitney U test. Subsequently, a Deep Learning Signature (DLS) was formulated using an Integrated Sparse Bayesian Extreme Learning Machine. Furthermore, we developed a Clinical Model (CM) based on clinical characteristics and an MDLR model by integrating clinical characteristics with the DLS. The area under the curve (AUC) was used for evaluating the diagnostic performance of the models. Decision curve analysis (DCA) and the integrated discrimination index (IDI) were used to assess the clinical benefit and compare the predictive performance of the models. The MDLR model, comprising age, histopathologic grade, subjective MR findings (TMD and Reading for MI status), and the DLS, demonstrated the best predictive performance. The AUC values for the MDLR in the training set, internal validation set, external validation set 1, and external validation set 2 were 0.899 (95% CI, 0.866-0.926), 0.874 (95% CI, 0.829-0.912), 0.862 (95% CI, 0.817-0.899), and 0.867 (95% CI, 0.806-0.914), respectively. The IDI and DCA showed higher diagnostic performance and clinical net benefits for the MDLR than for the CM or DLS, which revealed that the MDLR may enhance decision-making support.
The MDLR, which incorporated clinical characteristics and the DLS, could improve preoperative accuracy in discriminating the absence or presence of MI. This improvement may facilitate individualized treatment decision-making for EC.
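The AUC values above can be computed without plotting a curve at all, as the Mann-Whitney probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch (toy labels and scores, not study data):

```python
def roc_auc(labels, scores):
    """AUC as the Mann-Whitney U statistic: probability that a random
    positive is scored above a random negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two invasive (1) and two non-invasive (0) cases.
auc = roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])
```

This rank-based view also explains why AUC is insensitive to any monotone rescaling of the model's output scores.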

Malignancy risk stratification for pulmonary nodules: comparing a deep learning approach to multiparametric statistical models in different disease groups.

Piskorski L, Debic M, von Stackelberg O, Schlamp K, Welzel L, Weinheimer O, Peters AA, Wielpütz MO, Frauenfelder T, Kauczor HU, Heußel CP, Kroschke J

PubMed · Jul 1 2025
Incidentally detected pulmonary nodules present a challenge in clinical routine, with demand for reliable support systems for risk classification. We aimed to evaluate the performance of the Lung Cancer Prediction Convolutional Neural Network (LCP-CNN), a deep learning-based approach, in comparison to multiparametric statistical methods (Brock model and Lung-RADS®) for risk classification of nodules in cohorts with different risk profiles and underlying pulmonary diseases. Retrospective analysis was conducted on non-contrast and contrast-enhanced CT scans containing pulmonary nodules measuring 5-30 mm. Ground truth was defined by histology or follow-up stability. The final analysis was performed on 297 patients with 422 eligible nodules, of which 105 nodules were malignant. Classification performance of the LCP-CNN, Brock model, and Lung-RADS® was evaluated in terms of diagnostic accuracy measurements, including ROC analysis for different subcohorts (total, screening, emphysema, and interstitial lung disease). LCP-CNN demonstrated superior performance compared to the Brock model in the total and screening cohorts (AUC 0.92 (95% CI: 0.89-0.94) and 0.93 (95% CI: 0.89-0.96)). Superior sensitivity of LCP-CNN was demonstrated compared to the Brock model and Lung-RADS® in the total, screening, and emphysema cohorts for a risk threshold of 5%. Superior sensitivity of LCP-CNN was also shown across all disease groups compared to the Brock model at a threshold of 65%; compared to Lung-RADS®, sensitivity was better or equal. No significant differences in the performance of LCP-CNN were found between subcohorts. This study offers further evidence of the potential to integrate deep learning-based decision support systems into pulmonary nodule classification workflows, irrespective of the individual patient risk profile and underlying pulmonary disease.
Question Is a deep-learning approach (LCP-CNN) superior to multiparametric models (Brock model, Lung-RADS®) in classifying pulmonary nodule risk across varied patient profiles? Findings LCP-CNN shows superior performance in risk classification of pulmonary nodules compared to multiparametric models, with no significant performance differences across risk profiles and structural pulmonary diseases. Clinical relevance LCP-CNN offers efficiency and accuracy, addressing limitations of traditional models, such as variations in manual measurements or lack of patient data, while producing robust results. Such approaches may therefore impact clinical work by complementing or even replacing current approaches.
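Sensitivity and specificity at the 5% and 65% risk thresholds reduce to simple threshold counts over predicted risks. A minimal sketch (labels and risk scores below are invented for illustration):

```python
def sensitivity_specificity(labels, risk_scores, threshold):
    """Classify as malignant when predicted risk meets the threshold,
    then report sensitivity and specificity against ground truth."""
    tp = sum(1 for l, r in zip(labels, risk_scores) if l == 1 and r >= threshold)
    fn = sum(1 for l, r in zip(labels, risk_scores) if l == 1 and r < threshold)
    tn = sum(1 for l, r in zip(labels, risk_scores) if l == 0 and r < threshold)
    fp = sum(1 for l, r in zip(labels, risk_scores) if l == 0 and r >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Invented nodules: 1 = malignant, 0 = benign, with model risk scores.
sens, spec = sensitivity_specificity([1, 1, 0, 0], [0.9, 0.04, 0.6, 0.01], 0.05)
```

Raising the threshold (e.g., from 5% to 65%) trades sensitivity for specificity, which is why the abstract reports both operating points.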

Deep learning-based image domain reconstruction enhances image quality and pulmonary nodule detection in ultralow-dose CT with adaptive statistical iterative reconstruction-V.

Ye K, Xu L, Pan B, Li J, Li M, Yuan H, Gong NJ

PubMed · Jul 1 2025
To evaluate the image quality and lung nodule detectability of ultralow-dose CT (ULDCT) with adaptive statistical iterative reconstruction-V (ASiR-V) post-processed using a deep learning image reconstruction (DLIR)-based image-domain method, compared to low-dose CT (LDCT) and ULDCT without DLIR. A total of 210 patients undergoing lung cancer screening underwent LDCT (mean ± SD, 0.81 ± 0.28 mSv) and ULDCT (0.17 ± 0.03 mSv) scans. ULDCT images were reconstructed with ASiR-V (ULDCT-ASiR-V) and post-processed using DLIR (ULDCT-DLIR). The quality of the three CT images was analyzed. Three radiologists detected and measured pulmonary nodules on all CT images, with LDCT results serving as references. Nodule conspicuity was assessed using a five-point Likert scale, followed by further statistical analyses. A total of 463 nodules were detected using LDCT. The image noise of ULDCT-DLIR decreased by 60% compared to that of ULDCT-ASiR-V and was lower than that of LDCT (p < 0.001). The subjective image quality scores for ULDCT-DLIR (4.4 [4.1, 4.6]) were also higher than those for ULDCT-ASiR-V (3.6 [3.1, 3.9]) (p < 0.001). The overall nodule detection rates for ULDCT-ASiR-V and ULDCT-DLIR were 82.1% (380/463) and 87.0% (403/463), respectively (p < 0.001). The percentage of nodules with diameter differences > 1 mm was 2.9% (ULDCT-ASiR-V vs. LDCT) and 0.5% (ULDCT-DLIR vs. LDCT) (p = 0.009). Scores of nodule imaging sharpness on ULDCT-DLIR (4.0 ± 0.68) were significantly higher than those on ULDCT-ASiR-V (3.2 ± 0.50) (p < 0.001). DLIR-based image-domain post-processing improves the image quality, nodule detection rate, nodule imaging sharpness, and nodule measurement accuracy of ASiR-V on ULDCT. Question Deep learning post-processing is simple and cheap compared with raw-data processing, but its performance on ultralow-dose CT is unclear. Findings Deep learning post-processing enhanced image quality and improved the nodule detection rate and accuracy of nodule measurement on ultralow-dose CT.
Clinical relevance Deep learning post-processing improves the practicability of ultralow-dose CT and makes it possible for patients to receive less radiation exposure during lung cancer screening.
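Image noise in studies like this is typically measured as the standard deviation of attenuation values inside a homogeneous region of interest, with SNR as mean over SD. A minimal NumPy sketch (the ROI convention and values are illustrative, not the study's measurement protocol):

```python
import numpy as np

def roi_noise_and_snr(image, roi):
    """Noise as the SD of values inside a (row0, row1, col0, col1) ROI,
    SNR as the ROI mean divided by that SD."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1].astype(float)
    noise = float(patch.std())
    snr = float(patch.mean()) / noise if noise > 0 else float("inf")
    return noise, snr

# Toy 2x2 "phantom" patch with mean 2 and SD 1.
phantom = np.array([[1.0, 3.0], [3.0, 1.0]])
noise, snr = roi_noise_and_snr(phantom, (0, 2, 0, 2))
```

A 60% noise reduction, as reported for ULDCT-DLIR versus ULDCT-ASiR-V, would show up directly as a 60% smaller ROI standard deviation at unchanged mean attenuation.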
