Do We Need Pre-Processing for Deep Learning Based Ultrasound Shear Wave Elastography?

Sarah Grube, Sören Grünhagen, Sarah Latus, Michael Meyling, Alexander Schlaefer

arXiv preprint · Aug 1 2025
Estimating the elasticity of soft tissue can provide useful information for various diagnostic applications. Ultrasound shear wave elastography offers a non-invasive approach. However, its generalizability and standardization across different systems and processing pipelines remain limited. Considering the influence of image processing on ultrasound-based diagnostics, recent literature has discussed the impact of different image processing steps on reliable and reproducible elasticity analysis. In this work, we investigate the need for ultrasound pre-processing steps in deep learning-based ultrasound shear wave elastography. We evaluate the performance of a 3D convolutional neural network in predicting shear wave velocities from spatio-temporal ultrasound images, studying different degrees of pre-processing of the input images, ranging from fully beamformed and filtered ultrasound images to raw radiofrequency data. We compare the predictions from our deep learning approach to a conventional time-of-flight method across four gelatin phantoms with different elasticity levels. Our results demonstrate statistically significant differences in the predicted shear wave velocity among all elasticity groups, regardless of the degree of pre-processing. Although pre-processing slightly improves performance metrics, our results show that the deep learning approach can reliably differentiate between elasticity groups using raw, unprocessed radiofrequency data. These results suggest that deep learning-based approaches could reduce the need for, and the bias introduced by, traditional ultrasound pre-processing steps in ultrasound shear wave elastography, enabling faster and more reliable clinical elasticity assessments.
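The conventional time-of-flight baseline the authors compare against can be made concrete with a short sketch: track the shear wave's arrival at two lateral positions, estimate the delay between them by cross-correlation, and divide the known spacing by that delay. Everything below (frame rate, spacing, the synthetic pulse) is an illustrative assumption, not the authors' code.

```python
# Hedged sketch of a conventional time-of-flight shear wave speed estimate:
# track the displacement pulse at two lateral positions, find the time lag
# via cross-correlation, and compute velocity = distance / lag. The synthetic
# data and all parameter values here are illustrative assumptions.
import numpy as np

fs = 10_000.0          # tracking frame rate in Hz (assumed)
dx = 5e-3              # lateral distance between the two positions in m (assumed)
true_velocity = 2.5    # m/s, used only to synthesize the example signals

t = np.arange(0, 0.02, 1 / fs)

def pulse(t0):
    # Gaussian displacement pulse arriving at time t0 (0.5 ms width, assumed).
    return np.exp(-((t - t0) ** 2) / (2 * (0.5e-3) ** 2))

disp_a = pulse(5e-3)                       # displacement trace at position A
disp_b = pulse(5e-3 + dx / true_velocity)  # same pulse, delayed at position B

# Cross-correlate to find the lag (in samples) that best aligns the traces.
corr = np.correlate(disp_b - disp_b.mean(), disp_a - disp_a.mean(), mode="full")
lag_samples = corr.argmax() - (len(disp_a) - 1)
delay = lag_samples / fs

print(f"estimated shear wave velocity: {dx / delay:.2f} m/s")  # ~2.50 m/s
```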

Light Convolutional Neural Network to Detect Chronic Obstructive Pulmonary Disease (COPDxNet): A Multicenter Model Development and External Validation Study.

Rabby ASA, Chaudhary MFA, Saha P, Sthanam V, Nakhmani A, Zhang C, Barr RG, Bon J, Cooper CB, Curtis JL, Hoffman EA, Paine R, Puliyakote AK, Schroeder JD, Sieren JC, Smith BM, Woodruff PG, Reinhardt JM, Bhatt SP, Bodduluri S

PubMed · Aug 1 2025
Approximately 70% of adults with chronic obstructive pulmonary disease (COPD) remain undiagnosed. Opportunistic screening using chest computed tomography (CT) scans, commonly acquired in clinical practice, may be used to improve COPD detection through simple, clinically applicable deep-learning models. We developed a lightweight convolutional neural network (COPDxNet) that utilizes minimally processed chest CT scans to detect COPD. We analyzed 13,043 inspiratory chest CT scans from COPDGene participants (9,675 standard-dose and 3,368 low-dose scans), which we randomly split into training (70%) and test (30%) sets at the participant level so that no individual contributed to both sets. COPD was defined by post-bronchodilator FEV1/FVC < 0.70. We constructed a simple, four-block convolutional model that was trained on pooled data and validated on the held-out standard- and low-dose test sets. External validation was performed using standard-dose CT scans from 2,890 SPIROMICS participants and low-dose CT scans from 7,893 participants in the National Lung Screening Trial (NLST). We evaluated performance using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, Brier scores, and calibration curves. On COPDGene standard-dose CT scans, COPDxNet achieved an AUC of 0.92 (95% CI: 0.91 to 0.93), sensitivity of 80.2%, and specificity of 89.4%. On low-dose scans, AUC was 0.88 (95% CI: 0.86 to 0.90). When the COPDxNet model was applied to external validation datasets, it showed an AUC of 0.92 (95% CI: 0.91 to 0.93) in SPIROMICS and 0.82 (95% CI: 0.81 to 0.83) in NLST. The model was well calibrated, with Brier scores of 0.11 for standard-dose and 0.13 for low-dose CT scans in COPDGene, 0.12 in SPIROMICS, and 0.17 in NLST. COPDxNet demonstrates high discriminative accuracy and generalizability for detecting COPD on standard- and low-dose chest CT scans, supporting its potential for clinical and screening applications across diverse populations.
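The abstract describes a "simple, four-block convolutional model" but not its exact layers; the sketch below shows one plausible PyTorch rendering of such an architecture as a single-logit COPD classifier. The channel widths, kernel sizes, and 2D slice input are assumptions, and the paper's actual architecture may differ.

```python
# Hedged sketch of a "four-block" convolutional classifier in the spirit of
# COPDxNet. Layer sizes and the 2D formulation are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # One block: convolution -> batch norm -> ReLU -> 2x downsample.
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class COPDNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 16),
            conv_block(16, 32),
            conv_block(32, 64),
            conv_block(64, 128),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 1),  # single logit: P(COPD) after a sigmoid
        )

    def forward(self, x):
        return self.head(self.features(x))

model = COPDNetSketch()
logits = model(torch.randn(2, 1, 256, 256))  # batch of 2 synthetic CT slices
print(logits.shape)  # torch.Size([2, 1])
```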

Brain Age Prediction: Deep Models Need a Hand to Generalize.

Rajabli R, Soltaninejad M, Fonov VS, Bzdok D, Collins DL

PubMed · Aug 1 2025
Predicting brain age from T1-weighted MRI is a promising marker for understanding brain aging and its associated conditions. While deep learning models have shown success in reducing the mean absolute error (MAE) of predicted brain age, concerns about robust and accurate generalization to new data limit their clinical applicability. The large number of trainable parameters, combined with limited medical imaging training data, contributes to this challenge, often resulting in a generalization gap: a significant discrepancy between model performance on training data and on unseen data. In this study, we assess a deep model, SFCN-reg, based on the VGG-16 architecture, and address the generalization gap through comprehensive preprocessing, extensive data augmentation, and model regularization. Using training data from the UK Biobank, we demonstrate substantial improvements in model performance. Specifically, our approach reduces the generalization MAE by 47% (from 5.25 to 2.79 years) on the Alzheimer's Disease Neuroimaging Initiative dataset and by 12% (from 4.35 to 3.75 years) on the Australian Imaging, Biomarker and Lifestyle dataset. Furthermore, we achieve up to a 13% reduction in scan-rescan error (from 0.80 to 0.70 years) while enhancing the model's robustness to registration errors. Feature importance maps highlight the anatomical regions used to predict age. These results highlight the critical role of high-quality preprocessing and robust training techniques in improving accuracy and narrowing the generalization gap, both necessary steps toward the clinical use of brain age prediction models. Our study makes valuable contributions to neuroimaging research by offering a potential pathway to improve the clinical applicability of deep learning models.
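The recipe credited here, heavy augmentation plus regularization on top of careful preprocessing, might look like the following minimal PyTorch sketch. The specific transforms, their ranges, the tiny stand-in backbone, and the weight-decay value are all illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch of an augmentation-plus-regularization training step of the
# kind the abstract credits for narrowing the generalization gap. Transform
# ranges, the stand-in backbone, and hyperparameters are assumptions.
import torch
import torchvision.transforms as T

augment = T.Compose([
    T.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.95, 1.05)),
    T.RandomHorizontalFlip(p=0.5),
])

model = torch.nn.Sequential(       # tiny stand-in for the SFCN-reg backbone
    torch.nn.Conv2d(1, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Dropout(p=0.5),       # regularization
    torch.nn.Linear(8, 1),         # predicted age (years)
)
# Weight decay adds a second regularizer on top of dropout.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-3)

x = augment(torch.randn(4, 1, 128, 128))   # augmented T1 slices (synthetic)
ages = torch.tensor([70., 65., 80., 72.])  # synthetic chronological ages
loss = torch.nn.functional.l1_loss(model(x).squeeze(1), ages)  # MAE objective
loss.backward()
optimizer.step()
```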

Automated Assessment of Choroidal Mass Dimensions Using Static and Dynamic Ultrasonographic Imaging

Emmert, N., Wall, G., Nabavi, A., Rahdar, A., Wilson, M., King, B., Cernichiaro-Espinosa, L., Yousefi, S.

medRxiv preprint · Aug 1 2025
Purpose: To develop and validate an artificial intelligence (AI)-based model that automatically measures choroidal mass dimensions on B-scan ophthalmic ultrasound still images and cine loops.
Design: Retrospective diagnostic accuracy study with internal and external validation.
Participants: The dataset included 1,822 still images and 283 cine loops of choroidal masses for model development and testing. An additional 182 still images were used for external validation, and 302 control images with other diagnoses were included to assess specificity.
Methods: A deep convolutional neural network (CNN) based on the U-Net architecture was developed to automatically measure the apical height and basal diameter of choroidal masses on B-scan ultrasound. All still images were manually annotated by expert graders and reviewed by a senior ocular oncologist. Cine loops were analyzed frame by frame, and the frame with the largest detected mass dimensions was selected for evaluation.
Outcome Measures: The primary outcome was the model's measurement accuracy, defined by the mean absolute error (MAE) in millimeters compared to expert manual annotations, for both apical height and basal diameter. Secondary metrics included the Dice coefficient, coefficient of determination (R2), and mean pixel distance between predicted and reference measurements.
Results: On the internal test set of still images, the model successfully detected the tumor in 99.7% of cases. The MAE was 0.38 ± 0.55 mm for apical height (95.1% of measurements within 1 mm of the expert annotation) and 0.99 ± 1.15 mm for basal diameter (64.4% within 1 mm). Linear agreement between predicted and reference measurements was strong, with R2 values of 0.74 for apical height and 0.89 for basal diameter. On the control set of 302 images, the model demonstrated a moderate false positive rate. On the external validation set, the model maintained comparable accuracy. Among the cine loops, the model detected tumors in 89.4% of cases with comparable accuracy.
Conclusion: Deep learning can deliver fast, reproducible, millimeter-level measurements of choroidal mass dimensions with robust performance across different mass types and imaging sources. These findings support the potential clinical utility of AI-assisted measurement tools in ocular oncology workflows.
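Given a predicted binary mask and the scan's pixel spacing, the two reported dimensions can be read off the mask's extent; the sketch below illustrates that step. Treating image rows as the apical (depth) axis and columns as the basal axis is an assumption, since the paper's exact measurement geometry is not described here.

```python
# Hedged sketch: derive apical height and basal diameter (in mm) from a
# binary tumor mask given the B-scan pixel spacing. The row/column axis
# convention is an assumption for illustration.
import numpy as np

def mass_dimensions(mask: np.ndarray, mm_per_row: float, mm_per_col: float):
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None  # no tumor detected in this frame
    apical_height = (rows.max() - rows.min() + 1) * mm_per_row
    basal_diameter = (cols.max() - cols.min() + 1) * mm_per_col
    return apical_height, basal_diameter

# Synthetic elliptical "mass" purely for demonstration.
yy, xx = np.mgrid[0:200, 0:300]
mask = ((yy - 100) / 30) ** 2 + ((xx - 150) / 60) ** 2 <= 1.0

h, w = mass_dimensions(mask, mm_per_row=0.1, mm_per_col=0.1)
print(f"apical height ≈ {h:.1f} mm, basal diameter ≈ {w:.1f} mm")
```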

Anatomical Considerations for Achieving Optimized Outcomes in Individualized Cochlear Implantation.

Timm ME, Avallone E, Timm M, Salcher RB, Rudnik N, Lenarz T, Schurzig D

PubMed · Aug 1 2025
Machine learning models can assist with the selection of electrode arrays required for optimal insertion angles. Cochlear implantation is a successful therapy for patients with severe to profound hearing loss. The effectiveness of a cochlear implant depends on precise insertion and positioning of the electrode array within the cochlea, which is known for its variability in shape and size. Preoperative imaging such as CT or MRI plays a significant role in evaluating cochlear anatomy and planning the surgical approach to optimize outcomes. In this study, preoperative and postoperative CT and CBCT data of 558 cochlear implant patients were analyzed in terms of the influence of anatomical factors and insertion depth on the resulting insertion angle. Machine learning models can predict the insertion depths needed for optimal insertion angles, with performance improving when cochlear dimensions are included in the models. A simple linear regression using just the insertion depth explained 88% of the variability, whereas adding cochlear length, or diameter and width, further improved predictions to up to 94%.
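A minimal sketch of the regression comparison described above, on synthetic data: predict the insertion depth required for a target insertion angle from the angle alone, then again after adding cochlear dimensions. The variable ranges and coefficients are assumptions for illustration only, not values from the study.

```python
# Hedged sketch of the reported comparison: a regression on insertion angle
# alone versus one that also uses cochlear diameter and width. All synthetic
# distributions and coefficients below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 558
angle = rng.uniform(270, 720, n)       # target insertion angle (degrees)
diameter = rng.normal(9.2, 0.5, n)     # cochlear basal diameter (mm, assumed)
width = rng.normal(6.8, 0.4, n)        # cochlear width (mm, assumed)
depth = 0.03 * angle + 1.2 * diameter + 0.8 * width + rng.normal(0, 0.8, n)

simple = LinearRegression().fit(angle.reshape(-1, 1), depth)
full = LinearRegression().fit(np.column_stack([angle, diameter, width]), depth)

print(f"angle only:       R^2 = {simple.score(angle.reshape(-1, 1), depth):.2f}")
print(f"+ cochlear sizes: R^2 = {full.score(np.column_stack([angle, diameter, width]), depth):.2f}")
```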

Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences.

Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R

PubMed · Aug 1 2025
Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid and automated ways to measure body features like muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA using MR T2-weighted sequences. Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from the body and organ analysis (BOA) model to synthetic MR images created using an in-house trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial nnU-Net V2 in 3D, and this preliminary model was then applied to segment 120 real T2-weighted MRI sequences from 120 patients (46% female) with a median age of 56 (interquartile range, 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and nnU-Net V2 2D and 3D models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using Sørensen-Dice, Surface Dice, and Hausdorff Distance metrics, including 95% confidence intervals for cross-validation and ensemble models. The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). Furthermore, the body part 2D ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991). The overall average Dice of the body part (2D = 0.971, 3D = 0.969, P = ns) and body region (2D = 0.935, 3D = 0.955, P < 0.001) ensemble models indicates stable performance across all classes. The presented approach facilitates efficient and automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
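The headline metric throughout is the Sørensen-Dice coefficient with a 95% CI; a minimal sketch of computing it per case and bootstrapping the interval follows. Bootstrapping over cases is an assumption here, since the abstract does not state how its intervals were derived.

```python
# Minimal sketch of the Sørensen-Dice metric with a bootstrap 95% CI, in the
# style of the per-class scores reported above. Bootstrapping over cases is
# an assumption; the masks below are synthetic stand-ins.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

rng = np.random.default_rng(0)
truths = [rng.random((64, 64)) > 0.5 for _ in range(30)]
# Predictions: ground truth with ~2% of pixels flipped, to mimic model error.
preds = [np.logical_xor(t, rng.random(t.shape) > 0.98) for t in truths]
scores = np.array([dice(p, t) for p, t in zip(preds, truths)])

# Bootstrap the mean Dice over cases to get a 95% CI.
boot = [rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Dice {scores.mean():.3f} (95% CI, {lo:.3f}-{hi:.3f})")
```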

Establishing a Deep Learning Model That Integrates Pretreatment and Midtreatment Computed Tomography to Predict Treatment Response in Non-Small Cell Lung Cancer.

Chen X, Meng F, Zhang P, Wang L, Yao S, An C, Li H, Zhang D, Li H, Li J, Wang L, Liu Y

PubMed · Aug 1 2025
Patients with identical stages or similar tumor volumes can vary significantly in their responses to radiation therapy (RT) due to individual characteristics, making personalized RT for non-small cell lung cancer (NSCLC) challenging. This study aimed to develop a deep learning model that integrates pretreatment and midtreatment computed tomography (CT) to predict treatment response in NSCLC patients. We retrospectively collected data from 168 NSCLC patients across 3 hospitals. Data from Shanghai General Hospital (SGH, 35 patients) and Shanxi Cancer Hospital (SCH, 93 patients) were used for model training and internal validation, while data from Linfen Central Hospital (LCH, 40 patients) were used for external validation. Deep learning, radiomics, and clinical features were extracted to establish a varying-time-interval long short-term memory network for response prediction. Furthermore, we derived a model-deduced personalized dose escalation (DE) for patients predicted to have suboptimal gross tumor volume regression. The area under the receiver operating characteristic curve (AUC) and predicted absolute error were used to evaluate the predicted Response Evaluation Criteria in Solid Tumors classification and the predicted proportion of residual gross tumor volume. DE was calculated as the biologically equivalent dose using an α/β ratio of 10 Gy. The model using only pretreatment CT achieved AUCs of 0.762 and 0.687 in internal and external validation, respectively, whereas the model integrating both pretreatment and midtreatment CT achieved AUCs of 0.869 and 0.798, with predicted absolute errors of 0.137 and 0.185, respectively. We performed personalized DE for 29 patients. Their original biologically equivalent dose was approximately 72 Gy, within the range of 71.6 Gy to 75 Gy. DE ranged from 77.7 to 120 Gy for the 29 patients, with 17 patients exceeding 100 Gy and 8 patients reaching the model's preset upper limit of 120 Gy. Combining pretreatment and midtreatment CT enhances prediction performance for RT response and offers a promising approach to personalized DE in NSCLC.
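The dose calculation mentioned above follows the standard biologically effective dose relation, BED = n · d · (1 + d / (α/β)), with α/β = 10 Gy. A small sketch follows; the 30 × 2 Gy example schedule is an assumption that happens to reproduce the ~72 Gy baseline quoted in the abstract.

```python
# Sketch of the standard biologically effective dose relation referenced in
# the abstract: BED = n * d * (1 + d / (alpha/beta)). The 30 x 2 Gy example
# fractionation is an assumption; with alpha/beta = 10 Gy it reproduces the
# ~72 Gy baseline quoted above.
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 10.0) -> float:
    total = n_fractions * dose_per_fraction
    return total * (1.0 + dose_per_fraction / alpha_beta)

print(bed(30, 2.0))   # 72.0 Gy: conventional 60 Gy in 2 Gy fractions
print(bed(30, 2.5))   # 93.75 Gy: one possible escalated schedule (illustrative)
```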

Utility of an artificial intelligence-based lung CT airway model in the quantitative evaluation of large and small airway lesions in patients with chronic obstructive pulmonary disease.

Liu Z, Li J, Li B, Yi G, Pang S, Zhang R, Li P, Yin Z, Zhang J, Lv B, Yan J, Ma J

PubMed · Aug 1 2025
Accurate quantification of the extent of bronchial damage across various airway levels in chronic obstructive pulmonary disease (COPD) remains a challenge. In this study, artificial intelligence (AI) was employed to develop an airway segmentation model to investigate the morphological changes of the central and peripheral airways in COPD patients and the effects of these airway changes on pulmonary function classification and acute COPD exacerbations. Clinical data from a total of 340 patients with COPD and 73 healthy volunteers were collected and compiled. An AI-driven airway segmentation model was constructed using Convolutional Neural Regressor (CNR) and Airway Transfer Network (ATN) algorithms. The efficacy of the model was evaluated through support vector machine (SVM) and random forest regression approaches. The area under the receiver operating characteristic (ROC) curve (AUC) of the SVM in evaluating the COPD airway segmentation model was 0.96, with a sensitivity of 97% and a specificity of 92%; however, the AUC fell to 0.81 when the healthy group was replaced with non-COPD outpatients. Compared with the healthy group, the airway generation level and total number of segmented airways were decreased in patients with COPD, the diameters of the right main bronchus and bilateral lobar bronchi were smaller, and the airway walls were thinner (all P < 0.01). However, the diameters of the subsegmental and small-airway bronchi were increased, the airway walls were thickened, and the arc lengths were shorter (all P < 0.01), especially in patients with severe COPD (all P < 0.05). Correlation and regression analysis showed that FEV1%pred was positively correlated with the diameters and airway wall thickness of the main and lobar airways and with the arc lengths of small-airway bronchi (all P < 0.05). Airway wall thickness of the subsegmental and small airways had the greatest impact on the frequency of COPD exacerbations. An artificial intelligence lung CT airway segmentation model is a non-invasive quantitative tool for assessing chronic obstructive pulmonary disease. The main changes in COPD patients are that the central airway diameters become narrower and their walls thinner, while the peripheral airway arc lengths become shorter and their diameters and wall thickness larger; these changes are more pronounced in severe disease. Pulmonary function classification and small and medium airway dysfunction are also affected by the diameter, thickness, and arc length of the large and small airways. Small airway remodeling is more significant during acute exacerbations of COPD.
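The SVM evaluation step can be sketched as follows: fit a classifier on airway-derived features and report the ROC AUC on a held-out split. The two features and their distributions below are synthetic stand-ins chosen to mirror the reported direction of change (narrower central airways, thicker small-airway walls in COPD), not the study's data.

```python
# Hedged sketch of the SVM evaluation step: classify COPD vs. healthy from
# airway-derived features and report the ROC AUC. The two synthetic features
# and all distributions are illustrative stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_copd, n_healthy = 340, 73
X = np.vstack([
    np.column_stack([rng.normal(12.0, 1.5, n_copd),    # narrower main bronchus (mm)
                     rng.normal(1.9, 0.3, n_copd)]),   # thicker small-airway wall (mm)
    np.column_stack([rng.normal(14.5, 1.5, n_healthy),
                     rng.normal(1.4, 0.3, n_healthy)]),
])
y = np.concatenate([np.ones(n_copd), np.zeros(n_healthy)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)
print(f"AUC = {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.2f}")
```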

Lumbar and pelvic CT image segmentation based on cross-scale feature fusion and linear self-attention mechanism.

Li C, Chen L, Liu Q, Teng J

PubMed · Aug 1 2025
The lumbar spine and pelvis are critical stress-bearing structures of the human body, and their rapid and accurate segmentation plays a vital role in clinical diagnosis and intervention. However, conventional CT imaging poses significant challenges due to the low contrast of sacral and bilateral hip tissues and the complex, highly similar intervertebral space structures within the lumbar spine. To address these challenges, we propose a general-purpose segmentation network that integrates a cross-scale feature fusion strategy with a linear self-attention mechanism. The proposed network effectively extracts multi-scale features and fuses them along the channel dimension, enabling both structural and boundary information of lumbar and pelvic regions to be captured within the encoder-decoder architecture. Furthermore, we introduce a linear mapping strategy to approximate the traditional attention matrix with a low-rank representation, allowing the linear attention mechanism to significantly reduce computational complexity while maintaining segmentation accuracy for vertebrae and pelvic bones. Comparative and ablation experiments conducted on the CTSpine1K and CTPelvic1K datasets demonstrate that our method achieves improvements of 1.5% in Dice Similarity Coefficient (DSC) and 2.6% in Hausdorff Distance (HD) over state-of-the-art models, validating the effectiveness of our approach in enhancing boundary segmentation quality and segmentation accuracy in homogeneous anatomical regions.
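The linear self-attention idea can be sketched in a few lines: rather than materializing the n × n matrix softmax(QKᵀ) before multiplying by V, apply a positive feature map φ to Q and K and reassociate the product as φ(Q)(φ(K)ᵀV), which is linear in sequence length. Using elu(x) + 1 as φ is one common choice and an assumption here; the paper's exact low-rank mapping may differ.

```python
# Hedged sketch of a linear self-attention mechanism: replace
# softmax(Q K^T) V, which is O(n^2) in sequence length, with
# phi(Q) (phi(K)^T V), which is O(n). The feature map phi(x) = elu(x) + 1
# is an assumption; the paper's exact low-rank mapping may differ.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, dim)
    q, k = F.elu(q) + 1, F.elu(k) + 1           # positive feature maps
    kv = torch.einsum("bnd,bne->bde", k, v)     # (dim x dim) summary, built in O(n)
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)  # normalizer
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)

q = torch.randn(2, 4096, 64)  # 4096 "pixels" of a flattened feature map
out = linear_attention(q, torch.randn(2, 4096, 64), torch.randn(2, 4096, 64))
print(out.shape)  # torch.Size([2, 4096, 64])
```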