Page 3 of 7237224 results

Hussein SA, Farouk AA, Saeid MM

PubMed | Dec 8, 2025
The rising prevalence of retinal diseases is a significant concern, as certain untreated conditions can lead to severe vision impairment or even blindness. Deep learning algorithms have emerged as a powerful tool for the diagnosis and analysis of medical images. The automated detection of retinal diseases not only aids ophthalmologists in making accurate clinical decisions but also enhances efficiency by saving time. This study proposes a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images. For this research, a balanced dataset was compiled by integrating data from various sources. Artificial Neural Networks (ANN) and transfer learning techniques were utilized to differentiate between healthy eyes and those affected by diabetic retinopathy, cataracts, or glaucoma. Multiple feature extraction methods were employed in conjunction with ANN for the multi-class classification of retinal diseases. The results demonstrate that the model combining an ANN with MobileNetV2 and DenseNet121 architectures, along with Principal Component Analysis (PCA) for feature extraction and dimensionality reduction, as well as the Discrete Wavelet Transform (DWT) algorithm, achieves highly satisfactory performance, attaining a peak accuracy of 98.2%.
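The DWT-plus-PCA feature pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the wavelet (single-level Haar), image size, and component count are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def haar_dwt2_level1(img):
    """Single-level 2-D Haar DWT; returns the low-frequency (LL) subband.
    A stand-in for the paper's DWT step (the wavelet choice is an assumption)."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    # Sum each 2x2 block and scale by 1/2 -> Haar approximation coefficients
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))   # 40 toy stand-ins for fundus images

# DWT -> flatten -> PCA, producing compact feature vectors for an ANN classifier
feats = np.stack([haar_dwt2_level1(im).ravel() for im in images])   # (40, 256)
pca = PCA(n_components=10).fit(feats)
reduced = pca.transform(feats)                                      # (40, 10)
```

The `reduced` matrix is what would be fed to the downstream ANN; in the paper these hand-crafted features sit alongside MobileNetV2/DenseNet121 transfer-learning features.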

Wenwu L, Di Z, Wei W, Wenbo D, Xin W, Xiang X, Weibo C, Tao W, Song H, Boyuan L, Wang Z, Chaoxue Z

PubMed | Dec 8, 2025
Locally advanced thyroid cancer (LATC), characterized by lymph node metastasis or extrathyroidal extension, is associated with increased surgical complexity and worse prognosis compared to intrathyroidal cancer (ITC). Accurate preoperative identification of LATC is critical for optimizing surgical and postoperative management. This study aimed to develop a deep learning-based habitat radiomics (DLH) model using ultrasonographic features to predict LATC and assess recurrence risk. A retrospective cohort of 1881 thyroid cancer patients was assembled from nine medical centers: 1383 patients from eight centers were divided into a training cohort and an internal test cohort at a 7:3 ratio, while 498 patients from the remaining center served as the external test cohort. An additional prospective cohort of 130 patients served as a validation set. The inclusion criteria required preoperative ultrasound examination and postoperative pathological confirmation of thyroid cancer, with exclusion of cases lacking complete clinical data. Patients were classified as LATC or ITC based on postoperative pathology. Ultrasound tumor regions were manually segmented, and intratumoral subregions were delineated using K-means clustering to capture spatial heterogeneity. Peritumoral regions were generated by isotropic expansion of tumor regions of interest (ROIs). Radiomic features from intra- and peritumoral regions were used to train the DLH model. A clinical-radiomic nomogram incorporating DLH and clinical variables was constructed. Model performance was assessed using area under the curve (AUC), and recurrence-free survival (RFS) was evaluated via Kaplan-Meier and Cox regression analyses. DLH was an independent predictor of both RFS and LATC (P<0.05). The nomogram achieved robust performance in preoperative LATC identification (AUC: 0.852 in the internal test cohort and 0.897 in the external test cohort).
Cox regression identified DLH, tumor size, surgical approach, and lymph node metastasis count as significant RFS predictors. Our multicenter model effectively predicts LATC and RFS based on routine ultrasound, with potential to guide individualized treatment planning. Further validation in diverse clinical settings and longer follow-up are warranted.
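The habitat-delineation step (K-means over intratumoral pixels, plus peritumoral expansion of the ROI) can be illustrated on a toy image. The cluster count, expansion radius, and 4-neighbour dilation below are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Toy ultrasound patch: a square tumor mask containing two intensity "habitats"
img = rng.normal(0.2, 0.05, (64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True
img[20:44, 20:32] += 0.5          # brighter subregion inside the tumor

# Cluster intratumoral pixel intensities into k habitats (k=2 is a toy choice)
vals = img[mask].reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vals)
habitat_map = np.zeros(img.shape, dtype=int)
habitat_map[mask] = labels + 1    # 0 = background, 1..k = habitat labels

# Peritumoral ring via isotropic expansion (repeated 4-neighbour dilation)
dilated = mask.copy()
for _ in range(3):
    d = dilated.copy()
    d[1:, :] |= dilated[:-1, :]; d[:-1, :] |= dilated[1:, :]
    d[:, 1:] |= dilated[:, :-1]; d[:, :-1] |= dilated[:, 1:]
    dilated = d
peritumoral = dilated & ~mask     # shell around the tumor ROI
```

Radiomic features would then be extracted separately from each habitat and from `peritumoral`, as the abstract describes.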

Yao F, Zhu D, Lin H, Miao C, Lin Q, Li T, Zhuang Y, Bian S, Yang Y, Lin J, Pan K

PubMed | Dec 8, 2025
This study aimed to develop and validate deep learning (DL) models based on multiparametric MRI (mpMRI) and [<sup>18</sup>F]PSMA-1007 PET/CT to predict extracapsular extension (ECE) in prostate cancer (PCa), and to explore easy models integrating DL with clinical expertise. A total of 388 patients who underwent radical prostatectomy were enrolled from centers A, B and C. Three DL models based on mpMRI, PET/CT, and a combined MPC model were developed and compared with a manual model based on the ECE grading system. Additionally, three combined models (mpMRI-M, PET/CT-M, and MPC-M) were constructed by integrating the DL models with the Manual model. To enhance clinical applicability, an easy model (E-MPC-M) was developed. Model performance was evaluated using the area under the receiver-operating-characteristic curve (AUC) and metrics derived from the confusion matrix. Gradient-weighted class-activation-mapping (Grad-CAM) was employed to visualize model interpretability. In the internal cohort, the Manual, MPC, and MPC-M models achieved AUCs of 0.752, 0.897, and 0.907, respectively; corresponding sensitivities were 0.616, 0.896, and 0.915, and specificities were 0.791, 0.740, and 0.802. In the external validation cohort, these models achieved AUCs of 0.665, 0.824, and 0.849; sensitivities of 0.318, 0.955, and 0.955; and specificities of 0.960, 0.600, and 0.640, respectively. The E-MPC-M model also showed robust performance, with an AUC of 0.862 in the internal cohort and 0.775 in the external cohort. Grad-CAM visualizations highlighted the model's focus on tumor-relevant regions, confirming effective learning of tumor features. The MPC-M model demonstrated strong predictive performance for PCa ECE across internal and external cohorts, while the E-MPC-M model retained much of this performance with enhanced clinical practicality. 
However, these models should be considered preliminary, and larger prospective multicenter studies are required to confirm their robustness and generalizability.
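The metrics quoted above (AUC, sensitivity, specificity) can be computed from predictions as follows. This is a generic implementation, not the study's code; the rank-based AUC below also ignores ties.

```python
import numpy as np

def auc_score(y_true, y_score):
    """Rank-based AUC: Mann-Whitney U statistic divided by n_pos * n_neg.
    No tie handling, for brevity."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum(); n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from the binary confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = ((y_pred == 1) & (y_true == 1)).sum(); fn = ((y_pred == 0) & (y_true == 1)).sum()
    tn = ((y_pred == 0) & (y_true == 0)).sum(); fp = ((y_pred == 1) & (y_true == 0)).sum()
    return tp / (tp + fn), tn / (tn + fp)

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])   # toy model scores; AUC here is 0.75
```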

Zhou H, Liu Z, Jing J, Gu H, Ding L, Jiang Y, Liu H, Zhao J, Zhu W, Pan Y, Jiang Y, Meng X, Xie X, Zhang Z, Cheng J, Fan Y, Wang Y, Zhao X, Li H, Li Z, Liu T, Wang Y

PubMed | Dec 8, 2025
Acute ischemic cerebrovascular disease (AICVD) exhibits high recurrence rates, necessitating novel biomarkers for refined risk stratification. While MRI-derived brain age correlates with stroke incidence, its prognostic utility for recurrence is unestablished. We developed the Mask-based Brain Age estimation Network (MBA Net), a deep learning framework designed for AICVD patients. MBA Net predicts contextual brain age (CBA) in non-infarcted regions by masking acute infarcts on T2-FLAIR images, thereby mitigating the confounding effects of dynamic infarcts during acute-phase neuroimaging. The model was trained on data from 5353 healthy individuals and then applied to a multicenter cohort of 10,890 AICVD patients. Brain age gap (BAG), defined as the deviation between CBA and chronological age, independently predicted stroke recurrence at both 3 months and 5 years, outperforming chronological age. Incorporating BAG into established prediction models significantly improved discriminative performance. These findings support brain age's potential utility in AI-driven precision strategies for secondary stroke prevention.
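The two core ideas, masking acute infarcts before brain-age prediction and defining the brain age gap (BAG) as CBA minus chronological age, can be sketched as follows. The zero-fill masking strategy is an assumption; the abstract does not specify how masked voxels are filled.

```python
import numpy as np

def mask_infarct(flair, infarct_mask):
    """Suppress acute infarct voxels so the brain-age model sees only
    non-infarcted tissue (mirrors MBA Net's masking idea; zero-fill is an
    assumed strategy, not necessarily the paper's)."""
    out = flair.copy()
    out[infarct_mask.astype(bool)] = 0.0
    return out

def brain_age_gap(contextual_brain_age, chronological_age):
    """BAG = contextual brain age (CBA) minus chronological age."""
    return contextual_brain_age - chronological_age

flair = np.ones((4, 4))                  # toy T2-FLAIR slice
infarct = np.zeros((4, 4)); infarct[0, 0] = 1
masked = mask_infarct(flair, infarct)    # infarct voxel suppressed
bag = brain_age_gap(68.2, 65.0)          # positive gap: brain "older" than expected
```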

Wu X, Yang Z, Hou C, Huo D, Ge Y, Long Q, Yin P, Luo X

PubMed | Dec 8, 2025
The brachial plexus nerves at the axillary level are small in size and have a tortuous course, intertwining with blood vessels and other tissues. Ultrasound images are affected by speckle noise and structural blurring, leading to errors in manual segmentation and reducing the accuracy and success rate of axillary nerve block localization. Current lightweight networks primarily focus on semantic segmentation, making it difficult to precisely distinguish the boundaries between adjacent nerve bundles and surrounding tissues. Additionally, they lack optimisation for axillary nerve instance segmentation, making it challenging to balance accuracy and real-time performance. To address these issues, we propose the Dynamic Feature Fusion Network, which enhances segmentation accuracy while maintaining a lightweight architecture. This model combines a re-parameterised vision transformer with spatial pyramid pooling to compress parameter counts while maintaining feature expression capabilities. To achieve dynamic fusion of multi-scale features, this study proposes a SimAM-enhanced Bidirectional Fusion Network based on a similarity attention module, which improves segmentation accuracy in neurovascular intersection regions. Finally, a context-based mechanism is used to improve target localisation accuracy while completing pixel-level segmentation tasks, and a hybrid loss function is designed to enhance the model's localisation accuracy and optimise the training process. The method was evaluated on the Ultrasound Axillary Brachial Plexus dataset, and experimental results showed that the proposed method achieved 57 FPS and 0.604 mAP@0.5:0.95 on a single GPU. This significantly improves the segmentation accuracy and efficiency of the axillary brachial plexus, providing a reliable auxiliary tool for ultrasound-guided nerve block.
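The SimAM module the fusion network builds on has a published parameter-free closed form, sketched here in NumPy. How it is wired into the bidirectional fusion network is not reproduced; only the attention itself is shown.

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a (C, H, W) feature map.
    Each position is weighted by sigmoid of its inverse energy, which grows
    with the position's squared deviation from the channel mean."""
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2
    v = d.sum(axis=(1, 2), keepdims=True) / n      # per-channel variance estimate
    e_inv = d / (4 * (v + lam)) + 0.5              # inverse energy per position
    return x * (1.0 / (1.0 + np.exp(-e_inv)))      # x * sigmoid(e_inv)

feat = np.random.default_rng(2).normal(size=(8, 16, 16))
out = simam(feat)   # same shape, distinctive positions amplified
```

Because SimAM adds no learnable parameters, it fits the paper's goal of improving accuracy without growing the lightweight model.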

Luo HJ, Cheng J, Wang K, Ren JL, Guo LM, Niu J, Song XL

PubMed | Dec 8, 2025
To develop and validate a multimodal model that integrates radiomics features (RFs) and deep learning features (DFs) derived from preoperative multisequence magnetic resonance imaging (MRI) for the prediction of lymphovascular space invasion (LVSI) in patients with endometrial cancer (EC). This multicenter, retrospective study enrolled 892 patients with postoperative pathologically confirmed EC. Preoperative MRI, comprising T2-weighted imaging, contrast-enhanced T1-weighted imaging, and apparent diffusion coefficient maps, was analyzed. Regions of interest (ROIs) were manually delineated for 2D and 3D analyses. RFs were extracted using PyRadiomics, and DFs were obtained using pretrained VGG11, ResNet101, and DenseNet121 architectures. Five single-modality models (2D-RF, 3D-RF, VGG11-DF, ResNet101-DF, and DenseNet121-DF) were developed. In addition, the integration of RFs and DFs was explored to construct combined models. Models were trained in a training cohort (n = 378) and evaluated in both internal (n = 160) and external (n = 354) validation cohorts. Model performance was evaluated by the area under the receiver operating characteristic curve (AUC). In the training cohort, the 2D-RF and 3D-RF models showed comparable performance for LVSI prediction (AUC: 0.775 vs. 0.772, P = 0.89). Among the deep learning models, DenseNet121-DF achieved the highest AUC (0.757), which was significantly higher than ResNet101-DF (AUC: 0.671; P = 0.01) and not statistically different from VGG11-DF (AUC: 0.720, P = 0.20). The optimal combined model, integrating features from 2D-RF and DenseNet121-DF, yielded the highest performance in the training cohort (AUC: 0.796). These findings were confirmed in both the internal and external validation cohorts. A multimodal MRI-based model integrating both RFs and DFs achieved superior performance for noninvasive prediction of LVSI in patients with EC.
This approach holds potential to enhance preoperative risk stratification and guide personalized treatment planning.
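The basic pattern behind the combined models, early fusion of radiomics and deep features by concatenation followed by a classifier, might look like the sketch below on synthetic data. The classifier (logistic regression), feature dimensions, and synthetic labels are assumptions; the paper does not state its fusion classifier here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 120
rf = rng.normal(size=(n, 20))    # stand-in for radiomics features (e.g., 2D-RF)
df = rng.normal(size=(n, 64))    # stand-in for deep features (e.g., DenseNet121-DF)
# Synthetic LVSI labels correlated with one feature from each modality
y = (rf[:, 0] + df[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.hstack([rf, df])          # early fusion by concatenation
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
proba = clf.predict_proba(X)[:, 1]   # per-patient LVSI probability
```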

Zhu S, Zou M, Wu Q, Gong Z, Huang Z, Zou Y, Tan T, You Y, Dong X, Luo H

PubMed | Dec 8, 2025
Hepatocellular carcinoma (HCC) remains one of the leading causes of cancer-related mortality, where accurate imaging-based diagnosis plays a central role in guiding treatment. Multiphasic CT and MRI provide dynamic information about lesion enhancement, yet most existing deep learning methods treat different phases as simple channels and fail to capture their temporal evolution. In this work, we introduce STD-Net, a spatio-temporal decoupling network that explicitly separates spatial feature extraction from temporal dynamics modeling. A shared-weight 3D encoder learns robust anatomical representations, while a transformer-based temporal module captures sequential contrast patterns such as arterial hyperenhancement and venous washout. This design mirrors clinical reasoning and reduces the entanglement of spatial appearance, temporal change, and motion artifacts. Comprehensive experiments on TCGA-LIHC, LiTS, and MSD datasets show that STD-Net consistently outperforms state-of-the-art baselines in both segmentation and characterization, achieving higher Dice, lower HD95, and superior classification accuracy. Qualitative analyses and distributional evaluations further confirm that our approach offers more stable and generalizable performance, particularly for small or low-contrast lesions. These findings demonstrate the potential of spatio-temporal decoupling as a general paradigm for dynamic medical imaging.
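The core decoupling idea, a shared encoder producing one feature vector per contrast phase, followed by a temporal module attending across phases, can be reduced to single-head self-attention over phase embeddings. STD-Net's actual transformer has more machinery; all dimensions here are toy values.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def phase_self_attention(phase_feats, wq, wk, wv):
    """Single-head self-attention across contrast phases.
    phase_feats: (T, D) per-phase outputs of a shared-weight spatial encoder.
    Returns (T, D) features enriched with phase-to-phase context, e.g. letting
    the venous phase attend to arterial hyperenhancement."""
    q, k, v = phase_feats @ wq, phase_feats @ wk, phase_feats @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))   # (T, T) phase attention weights
    return attn @ v

rng = np.random.default_rng(4)
T, D = 4, 32                      # e.g., pre-contrast, arterial, venous, delayed
feats = rng.normal(size=(T, D))   # stand-in for shared-encoder outputs
w = [rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(3)]
fused = phase_self_attention(feats, *w)
```

Keeping the spatial encoder shared across phases while only this module sees phase order is what "decouples" spatial appearance from temporal dynamics.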

Wang Z, Zhang C, Zhu X, Niu K, Liu S, Wang S, Zhou X, Wang D, Xu N, He Z

PubMed | Dec 8, 2025
Accurate diagnosis of upper cervical spine abnormalities is crucial for effective treatment and prognosis. However, the diversity of anatomical structures and pathological abnormalities poses significant challenges to the diagnostic process. These challenges highlight the need for deep learning models capable of identifying multiple abnormalities to assist in diagnosis. Traditional deep learning approaches have exhibited limitations in extracting sequential features from multi-positional X-ray images and in performing joint diagnosis of multiple coexisting abnormalities. Therefore, a CerviHFENet-based framework is proposed to perform multi-label classification of six upper cervical spine abnormalities using three-view radiographs (extension, neutral, and flexion) obtained from individual patients. The system first incorporates an adaptive region of interest (ROI) detection module to minimize irrelevant information by precisely localizing the upper cervical spine. Following this, the CerviHFENet model integrates a hybrid feature extraction (HFE) mechanism that extracts both the anatomical features of the upper cervical vertebrae and the dynamic variations in bone structure across different neck positions, thereby fully representing the comprehensive characteristics of the upper cervical spine. Furthermore, a modified focal loss function is employed to enable the model to learn the mutually exclusive or conditionally dependent relationships among the six abnormalities. A total of 249 patients participated in the study, contributing 747 X-ray images. The model achieved a mean AUC of 96.22% and a mean mAP of 94.6% on the test set, indicating promising diagnostic performance and validating the feasibility and effectiveness of the proposed model.
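The focal loss this framework modifies starts from the standard per-label form. The paper's modification for encoding label dependencies is not reproduced here; this sketch shows only the vanilla multi-label focal loss it presumably builds on.

```python
import numpy as np

def multilabel_focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-7):
    """Standard focal loss applied independently per label, then averaged.
    probs: sigmoid outputs in (0, 1), shape (batch, n_labels).
    targets: binary labels, same shape. gamma down-weights easy examples;
    alpha balances positive vs. negative labels."""
    p = np.clip(probs, eps, 1 - eps)
    pt = np.where(targets == 1, p, 1 - p)          # prob assigned to the true label
    a = np.where(targets == 1, alpha, 1 - alpha)
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))

probs = np.array([[0.9, 0.2, 0.6], [0.1, 0.8, 0.4]])
targets = np.array([[1, 0, 1], [0, 1, 0]])
loss = multilabel_focal_loss(probs, targets)
```

The `(1 - pt) ** gamma` factor is what lets training focus on the hard, rare abnormalities among the six labels.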

Boonrod A, Kittipongphat N, Piyaprapaphan P, Theerakulpisut D, Boonrod A

PubMed | Dec 8, 2025
To diagnose osteoporosis, assess the risk of fragility fracture, and determine the necessity for treatment, bone mineral density (BMD) is most commonly measured with dual-energy X-ray absorptiometry (DXA), the gold standard. Because access to DXA is limited, we propose deep learning models to screen for osteoporosis from lumbar spine radiographs and evaluate their accuracy in detecting it. The models were developed from the training data set (2244 anteroposterior and 2368 lateral lumbar spine radiographs). We categorized patients into two groups based on DXA BMD T-score: non-osteoporosis (T > -2.5) and osteoporosis (T ≤ -2.5). Two-class models were trained to classify non-osteoporosis and osteoporosis. Model performance was evaluated on the test data set (963 AP and 1018 lateral images). For AP images, the ResNet-18 model diagnosing osteoporosis achieved an area under the curve (AUC) of 0.79 (95% confidence interval [CI] 0.76-0.82), with a sensitivity of 79.7% (95% CI 74.4-85.0%) and specificity of 66.5% (95% CI 63.1-69.9%). For lateral images, the DarkNet-19 model yielded the highest AUC at 0.82 (95% CI 0.80-0.85), with the highest sensitivity for the lateral data set at 87.5% (95% CI 83.1-91.9%) and specificity of 79.4% (95% CI 76.6-82.2%). Deep learning models may be effective for osteoporosis screening from lumbar spine radiographs and could serve as a readily available tool for assessing risk and guiding treatment decisions.
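The T-score dichotomy used for labeling, and the kind of proportion confidence intervals quoted above, can be sketched as follows. The normal-approximation (Wald) interval is an assumption, since the abstract does not state which CI method was used.

```python
import math

def bmd_group(t_score):
    """DXA T-score dichotomy used in the study: osteoporosis at T <= -2.5."""
    return "osteoporosis" if t_score <= -2.5 else "non-osteoporosis"

def wald_ci(successes, n, z=1.96):
    """Normal-approximation 95% CI for a proportion, e.g. sensitivity measured
    as (true positives, total positives). One common choice; the paper's exact
    method is not stated."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)
```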

Shiokawa R, Iwasawa J, Tanaka YO, Tokuoka Y, Sugawara Y, Hirano Y, Takaji R, Hayakawa Y, Oda K, Kudo Y, Li M, Mizuno K, Ozeki K, Nishimoto-Kakiuchi A, Terao K

PubMed | Dec 8, 2025
Diagnosis of endometriosis faces significant challenges, including diagnostic delay and reliance on invasive procedures. Deep endometriosis (DE) poses additional difficulties for non-invasive diagnosis due to its subtle and complex imaging features. To address these challenges, we developed an AI-based MRI reading support program (AMP) designed to improve diagnostic accuracy and efficiency, with the primary endpoint of demonstrating its potential to enhance radiologists' reading sensitivity. AMP comprises three models: (1) a nnU-Net model for endometriotic nodular lesion (plaque) segmentation, (2) a radiomics-based LightGBM model for adhesion detection, and (3) a nnU-Net model for detection/quantification of ovarian endometriotic cysts (OECs). In cross-validation, AMP achieves a mean Dice similarity coefficient of 0.293 for plaque segmentation and 0.580 for OEC segmentation. For adhesion detection, AMP shows high performance for uterine adhesions (F1 scores > 0.6). In a preliminary clinical utility study with three radiologists, AMP improved mean recall for plaque detection from 0.73 to 0.91, demonstrating AMP's ability to support radiologists in identifying subtle DE lesions and adhesions. Our findings suggest that AMP is a reliable non-invasive clinical diagnostic tool that has the potential to minimize diagnostic delays and improve patient outcomes.
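The Dice similarity coefficient reported for the plaque and OEC segmentation models is a standard overlap metric, computed on binary masks as follows:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between binary masks:
    2 * |pred ∩ gt| / (|pred| + |gt|), with eps to avoid division by zero."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Toy masks: two 4x4 squares overlapping in a 2x2 region (4 of 16+16 pixels)
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[4:8, 4:8] = True
score = dice(a, b)   # 2*4 / (16+16) = 0.25
```

A score of 0.293 for plaques, as reported, reflects how hard these small, subtle lesions are to delineate, which is why the reader-study recall gain is the more clinically telling result.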