
Darestani AA, Davarani MN, -Cañas VG, Hashemi H, Zarei A, Havadaragh SH, Harirchian MH

PubMed | Oct 16, 2025
This study presents an automated system using Convolutional Neural Networks (CNNs) for segmenting FLAIR Magnetic Resonance Imaging (MRI) images to aid in the diagnosis of Multiple Sclerosis (MS). The dataset included 103 patients from Imam Khomeini Hospital, Tehran, and an additional 10 patients from an external center. Key preprocessing steps included skull stripping, normalization, resizing, segmentation mask processing, entropy-based exclusion, and data augmentation. An nnU-Net architecture tailored to 2D slices was employed and trained using a fivefold cross-validation approach. In the slice-level classification approach, the model achieved 83% accuracy, 100% sensitivity, 75% positive predictive value (PPV), and 99% negative predictive value (NPV) on the internal test set. For the external test set, the accuracy was 76%, sensitivity 100%, PPV 68%, and NPV 100%. Voxel-level segmentation showed a Dice Similarity Coefficient (DSC) of 70% for the internal set and 75% for the external set. The CNN-based system with the nnU-Net architecture demonstrated high accuracy and reliability in segmenting MS lesions, highlighting its potential for enhancing clinical decision-making.
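For reference, a minimal sketch of the voxel-level metric reported above, the Dice Similarity Coefficient on binary lesion masks; the array shapes and example masks are illustrative, not study data:

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks (True = lesion voxel)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

# Illustrative masks only; a DSC of 0.70 would match the internal-set result above.
pred = np.zeros((256, 256), dtype=bool); pred[50:120, 50:120] = True
gt = np.zeros((256, 256), dtype=bool); gt[60:130, 60:130] = True
print(f"DSC = {dice_coefficient(pred, gt):.2f}")
```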

Candito A, Mun TS, Holbrey R, Doran S, Messiou C, Koh DM, Blackledge MD

PubMed | Oct 16, 2025
Generative Artificial Intelligence (GenAI) has the potential to transform radiology by reducing reporting burdens, enhancing diagnostic workflows, and facilitating communication of complex radiological information. However, research and adoption remain limited due to the lack of seamless integration with medical imaging viewers. This study introduces OsiriXgrpc, an open-source API plug-in that bridges this gap, enabling real-time communication between OsiriX, a CE-marked and FDA-approved DICOM viewer, and AI-driven tools deployed in any supported programming language (e.g., Python). OsiriXgrpc's design provides users with a unified platform to query, interact with, and visualise AI-generated outputs directly within OsiriX. To demonstrate its potential, we developed an AI Co-pilot for radiology that leverages OsiriXgrpc for iterative "request-to-answer" interactions between users and GenAI models, allowing real-time data queries and visualisation of AI-generated outputs within the same DICOM viewer. We have adapted OsiriXgrpc to allow users to: (i) interrogate Foundation Large Language Models (LLMs) to generate text from text-based prompts, (ii) employ Foundation Vision-Language Models (VLMs) to generate text by combining text and image prompts, and (iii) employ a one-click Foundation AI-driven segmentation model to generate Regions of Interest (ROIs) by combining points/bounding boxes with text prompts. For this proof-of-concept report, we applied OpenAI's LLMs and VLMs for text generation and the Segment Anything Model (SAM) for generating ROIs. We provide evidence for successful implementation of the plug-in, including visualisation of the AI-generated outputs for each model tested. We hypothesise that OsiriXgrpc can lower adoption barriers, facilitating the integration of GenAI models into clinical trials and routine healthcare, even in resource-limited settings, including low- and middle-income countries (LMICs).
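To illustrate the kind of one-click, prompt-based ROI generation described above, a minimal sketch using the publicly released Segment Anything Model (SAM). This is independent of the OsiriXgrpc API itself; the checkpoint path, image array, and click coordinates are placeholders:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (path is a placeholder; weights are distributed by Meta AI).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# In a viewer-integrated setting, `image` would be the currently displayed slice
# converted to an HxWx3 uint8 RGB array; here it is a placeholder.
image = np.zeros((512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# A single foreground click (label 1) at an illustrative location.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # binary mask that could be returned to the viewer as an ROI
```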

Wang X, Lv X, Wang J, Zou L, Chen Z, Zhao R, Zhao L, Zhao M, Zhang X, Zhang B, Zhang J, Zhu Y, Shi X, Gao Y, Liu M, Ai L, Wang L, Liu X, Yang H

PubMed | Oct 16, 2025
Accurate ovarian cancer screening and diagnosis are critical for patient survival. We present UMORSS, an AI-assisted diagnostic system integrating ultrasound (US) imaging and clinical data with uncertainty quantification for precise ovarian cancer risk assessment. Developed and evaluated using a multicentre dataset (7352 patients, 7594 lesions, 9281 US images), UMORSS employs a two-phase approach: Phase I rapidly triages low-risk lesions via initial US analysis, and Phase II provides uncertainty-aware multimodal analysis for complex cases. Phase I accurately identified 68.7% of physiological cysts and 13.8% of benign tumours as low-risk, with zero false negatives, and Phase II achieved an AUC of 0.955 (internal testing) and 0.926 (external validation). Furthermore, a prospective reader study (n = 284 cases, six radiologists) demonstrated that UMORSS as a human-AI collaborative tool increased radiologists' average AUC by 10.58% and sensitivity by 22.48%. UMORSS shows strong potential to streamline clinical workflow, optimize resource allocation, and standardize ovarian cancer diagnosis.
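The two-phase, uncertainty-aware triage described above could be sketched as follows; the thresholds, model interfaces, and Monte Carlo style uncertainty estimate are assumptions for illustration only, not the published UMORSS design:

```python
import numpy as np

def stochastic_risk(model, x, n_samples: int = 20):
    """Hypothetical uncertainty estimate: repeated stochastic forward passes
    (e.g. with dropout kept active); returns mean risk and predictive spread."""
    probs = np.array([float(model(x)) for _ in range(n_samples)])
    return probs.mean(), probs.std()

def triage(us_model, multimodal_model, us_image, clinical_features,
           low_risk_threshold=0.05, uncertainty_threshold=0.10):
    # Phase I: fast ultrasound-only screen intended to rule out clearly low-risk lesions.
    risk, unc = stochastic_risk(us_model, us_image)
    if risk < low_risk_threshold and unc < uncertainty_threshold:
        return {"phase": 1, "risk": risk, "decision": "low risk"}
    # Phase II: uncertainty-aware multimodal analysis for the remaining, more complex cases.
    risk2, unc2 = stochastic_risk(multimodal_model, (us_image, clinical_features))
    return {"phase": 2, "risk": risk2, "uncertainty": unc2}

# Toy stand-ins for the two models, only to make the control flow executable.
rng = np.random.default_rng(0)
us_stub = lambda x: rng.normal(0.03, 0.01)
mm_stub = lambda x: rng.normal(0.60, 0.05)
print(triage(us_stub, mm_stub, us_image=None, clinical_features=None))
```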

Nagawa K, Hara Y, Kakemoto S, Shiratori T, Kaizu A, Koyama M, Tsuchihashi S, Shimizu H, Inoue K, Sugita N, Kozawa E

PubMed | Oct 16, 2025
We evaluated the effectiveness of magnetic resonance imaging (MRI)-based subregional texture analysis (TA) models for classifying knee osteoarthritis (OA) severity grades by compartment. We identified 122 MR images of 121 patients with knee OA (mild-to-severe OA equivalent to Kellgren-Lawrence grades 2-4), comprising sagittal proton density-weighted imaging and axial fat-suppressed proton density-weighted imaging. The data were divided into OA severity groups by compartment: medial, lateral, and the articulation between the patella and femoral trochlea (P-FT), with three severity groups for the medial compartment and two for the lateral and P-FT compartments. After extracting 93 texture features and performing dimension reduction for each compartment and imaging sequence, models were created using linear discriminant analysis, support vector machines with linear, radial basis function, and sigmoid kernels, and random forest classifiers. Models underwent 100-repeat nested cross-validation. We also applied our classification approach to total knee OA severity. The models' performance was modest for the individual compartments and the total knee; the medial compartment showed better results than the lateral and P-FT compartments. Our MRI-based compartmental TA model can potentially differentiate between subregional OA severity grades. Further studies are needed to assess the feasibility of our subregional TA method and machine learning algorithms for classifying OA severity by compartment.
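A minimal scikit-learn sketch of the repeated nested cross-validation and classifier set described above; the feature matrix, labels, hyperparameter grids, and repeat counts are placeholders (reduced from the paper's 100 repeats for brevity):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(122, 93))     # placeholder: 93 texture features per knee
y = rng.integers(0, 3, size=122)   # placeholder: OA severity group labels

classifiers = {
    "LDA": (make_pipeline(StandardScaler(), LinearDiscriminantAnalysis()), {}),
    "SVM-linear": (make_pipeline(StandardScaler(), SVC(kernel="linear")), {"svc__C": [0.1, 1, 10]}),
    "SVM-RBF": (make_pipeline(StandardScaler(), SVC(kernel="rbf")), {"svc__C": [0.1, 1, 10]}),
    "SVM-sigmoid": (make_pipeline(StandardScaler(), SVC(kernel="sigmoid")), {"svc__C": [0.1, 1, 10]}),
    "RandomForest": (RandomForestClassifier(n_estimators=200, random_state=0), {}),
}

inner = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=0)   # hyperparameter tuning
outer = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=1)   # unbiased performance estimate

for name, (estimator, grid) in classifiers.items():
    model = GridSearchCV(estimator, grid, cv=inner) if grid else estimator
    scores = cross_val_score(model, X, y, cv=outer)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```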

Okila N, Katumba A, Nakatumba-Nabende J, Murindanyi S, Mwikirize C, Serugunda J, Bugeza S, Oriekot A, Bossa J, Nabawanuka E

PubMed | Oct 16, 2025
Lung ultrasound (LUS) vertical artifacts are critical sonographic markers commonly used in evaluating pulmonary conditions such as pulmonary edema, interstitial lung disease, pneumonia, and COVID-19. Accurate detection and localization of these artifacts are vital for informed clinical decision-making. However, interpreting LUS images remains highly operator-dependent, leading to variability in diagnosis. While deep learning (DL) models offer promising potential to automate LUS interpretation, their development is limited by the scarcity of annotated datasets specifically focused on vertical artifacts. This study introduces a curated dataset of 401 high-resolution LUS images, each annotated with polygonal bounding boxes to indicate vertical artifact locations. The images were collected from 152 patients with pulmonary conditions at Mulago and Kiruddu National Referral Hospitals in Uganda. This dataset serves as a valuable resource for training and evaluating DL models designed to accurately detect and localize LUS vertical artifacts, contributing to the advancement of AI-driven diagnostic tools for early detection and monitoring of respiratory diseases.
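As an example of how polygonal annotations like those in this dataset are typically consumed when training a detection or segmentation model, a minimal sketch rasterising one polygon into a binary mask; the record layout and field names are illustrative, not the dataset's actual schema:

```python
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(points, height, width):
    """Rasterise a polygon (list of (x, y) vertices) into a binary mask."""
    canvas = Image.new("L", (width, height), 0)
    ImageDraw.Draw(canvas).polygon([(float(x), float(y)) for x, y in points], outline=1, fill=1)
    return np.array(canvas, dtype=bool)

# Hypothetical annotation record for one LUS image.
record = {"image": "lus_0001.png", "height": 512, "width": 512,
          "artifacts": [{"polygon": [[120, 40], [160, 40], [170, 500], [110, 500]]}]}
masks = [polygon_to_mask(a["polygon"], record["height"], record["width"])
         for a in record["artifacts"]]
print(f"{len(masks)} artifact mask(s), first covers {masks[0].sum()} pixels")
```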

He K, Hohenberg J, Li Y, Xiao A, Cho H, Nagel E, Ramel S, Bell KA, Wei D, Park J, Ranger BJ

PubMed | Oct 16, 2025
This study investigates the feasibility of deep learning to predict body composition with ultrasound, specifically fat mass (FM) and fat-free mass (FFM), to improve newborn health assessments. We analyzed 721 ultrasound images of the biceps, quadriceps and abdomen from 65 pre-term infants. A deep learning model incorporating a modified U-Net architecture was developed to predict FM and FFM using air displacement plethysmography as ground truth labels for training. Model performance was assessed using mean absolute error (MAE), mean squared error (MSE), root mean square error (RMSE) and mean absolute percentage error (MAPE), along with Bland-Altman plots to evaluate mean bias and limits of agreement. We tested different image combinations to determine the contribution of anatomical regions. Grad-CAM was applied to identify image regions with the strongest influence on predictions. Combining biceps, quadriceps and abdominal ultrasound images to predict whole-body composition showed strong agreement with ground truth values, with low MAE (FM: 0.0145 kg, FFM: 0.0794 kg), MSE (FM: 0.0003 kg², FFM: 0.0073 kg²), RMSE (FM: 0.0183 kg, FFM: 0.0854 kg) and MAPE (FM: 2.65%, FFM: 8.40%). Using only abdominal images for prediction improved FFM performance (MAPE = 4.62%, MSE = 0.0041 kg², RMSE = 0.0486 kg, MAE = 0.0378 kg). Grad-CAM revealed muscle regions as key contributors to FM and FFM predictions. Deep learning provides a promising approach to predicting body composition with ultrasound and could be a valuable tool for assessing nutritional status in neonatal care.
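For reference, a minimal sketch of the agreement metrics reported above (MAE, MSE, RMSE, MAPE, and Bland-Altman bias with 95% limits of agreement); the predicted and reference fat-mass values are placeholders, not study data:

```python
import numpy as np

def agreement_metrics(pred, ref):
    """Regression-style agreement metrics between predicted and reference values."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    err = pred - ref
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / ref)) * 100.0
    bias = err.mean()                              # Bland-Altman mean bias
    half_width = 1.96 * err.std(ddof=1)            # 95% limits of agreement
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE_%": mape,
            "bias": bias, "LoA": (bias - half_width, bias + half_width)}

# Placeholder fat-mass values in kg.
print(agreement_metrics(pred=[0.51, 0.63, 0.47, 0.58], ref=[0.50, 0.65, 0.45, 0.60]))
```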

Koehler D, Shenas F, Sauer M, Apostolova I, Budäus L, Falkenbach F, Maurer T

PubMed | Oct 16, 2025
Standardized prostate-specific membrane antigen (PSMA) PET/CT evaluation and reporting was introduced to aid interpretation, reproducibility, and communication. Artificial intelligence may enhance these efforts. This study aimed to evaluate the performance of aPROMISE, a deep learning segmentation and reporting software for PSMA PET/CT, compared with a standard image viewer (IntelliSpace Portal [ISP]) in patients undergoing PSMA-radioguided surgery. This allowed the correlation of target lesions with histopathology as a standard of truth. Methods: [⁶⁸Ga]Ga-PSMA-I&T PET/CT of 96 patients with biochemical persistence or recurrence after prostatectomy (median prostate-specific antigen, 0.56 ng/mL; interquartile range, 0.31-1.24 ng/mL), who underwent PSMA-radioguided surgery, were retrospectively analyzed (twice with ISP and twice with aPROMISE) by 2 readers. Cohen κ with 95% CI was calculated to assess intra- and interrater agreement for miTNM stages. Differences between miTNM codelines were classified as no difference, minor difference (change of lymph node region without N/M change), and major difference (miTNM change). Results: Intrarater agreement rates were high for all categories, both readers, and systems (≥91.7%) with moderate to almost perfect κ values (reader 1, ISP, ≥0.51; range, 0.21-0.9; aPROMISE, ≥0.64; range, 0.41-0.99; reader 2, ISP, ≥0.83; range, 0.69-1; aPROMISE, ≥0.78; range, 0.63-1). Major differences occurred more frequently for reader 1 than for reader 2 (ISP, 26% vs. 13.5%; aPROMISE, 22.9% vs. 12.5%). Interrater agreement rates were high with both systems (≥92.2%), demonstrating substantial κ values (ISP, ≥0.73; range, 0.47-0.99; aPROMISE, ≥0.74; range, 0.54-1) with major miTNM staging differences in 21 (21.9%) cases. Readers identified 140 lesions by consensus, of which aPROMISE automatically segmented 129 (92.1%) lesions. Unsegmented lesions either were adjacent to high urine activity or demonstrated low PSMA expression. Agreement rates between imaging and histopathology were substantial (≥86.5%), corresponding to moderate to substantial κ values (≥0.6; range, 0.45-1) with major staging differences in 33 (34.4%) patients. This included 13 (13.5%) cases with metastases distant from targets identified on imaging. One of these lesions was automatically segmented by aPROMISE. Conclusion: Intra- and interreader agreement for PSMA PET/CT evaluation were similarly high with ISP and aPROMISE. The algorithm segmented 92.1% of all identified lesions. Software applications with artificial intelligence could be applied as support tools in PSMA PET/CT evaluation of early prostate cancer.
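A minimal sketch of the agreement statistic used above, Cohen's κ, here with a bootstrap 95% CI; the two rating arrays are placeholder miTNM assignments, not study data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def kappa_with_ci(ratings_a, ratings_b, n_boot=2000, seed=0):
    """Cohen's kappa between two categorical ratings, with a percentile bootstrap 95% CI."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    kappa = cohen_kappa_score(a, b)
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(a), len(a))
        boot.append(cohen_kappa_score(a[idx], b[idx]))
    lo, hi = np.nanpercentile(boot, [2.5, 97.5])  # nan-safe for degenerate resamples
    return kappa, (lo, hi)

# Placeholder stage assignments from two reads of the same 10 scans.
read1 = ["N1", "N1", "M1a", "N0", "N1", "N1", "M1b", "N0", "N1", "N1"]
read2 = ["N1", "N1", "M1a", "N0", "N1", "M1a", "M1b", "N0", "N1", "N1"]
print(kappa_with_ci(read1, read2))
```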

Guo W, Lin L, Wu Y, Lin X, Yang G, Song Y, Chen D

PubMed | Oct 16, 2025
Our aim was to investigate the potential of using MRI-based habitat features for predicting progression-free survival (PFS) in patients with lung cancer brain metastasis (LCBM) receiving radiotherapy. One hundred and forty-six lesions from 68 patients with LCBM receiving radiotherapy were retrospectively reviewed and divided into training, random test (R-test), and time-independent test (TI-test) cohorts. Conventional radiomics and habitat features were extracted from the whole-tumor area and tumor subregions, respectively. Different machine learning risk models for predicting PFS were developed on the basis of clinical, radiomics, and habitat features, and their combination (clinical + habitat), respectively. The performance of the risk models was evaluated using the concordance index (C-Index) and Brier scores. The Kaplan-Meier curve was used to assess the prognostic value of the models. The habitat risk model achieved the best prediction ability among 4 different risk models in the TI-test cohort (C-Index: 0.716; 95% CI, 0.548-0.890). Additionally, the habitat and radiomics risk models outperformed the clinical risk model in the training (C-Index: 0.721-0.762 versus 0.697) and TI-test cohorts (C-Index: 0.630-0.716 versus 0.377). A habitat risk model based on intratumoral heterogeneity could be a reliable biomarker for predicting PFS in patients with LCBM receiving radiotherapy.
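As a reference for the survival-model metric reported above, a minimal sketch computing Harrell's concordance index with the lifelines package; the PFS times, event flags, and model risk scores are placeholders, not study data:

```python
import numpy as np
from lifelines.utils import concordance_index

# Placeholder data: PFS in months, event indicator (1 = progression), and a model risk score.
pfs_months = np.array([4.0, 12.5, 7.2, 20.1, 3.3, 15.0])
progressed = np.array([1, 0, 1, 0, 1, 1])
risk_score = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.4])

# lifelines expects predicted scores where higher means longer survival,
# so a higher-risk-is-worse score is passed as its negative.
c_index = concordance_index(pfs_months, -risk_score, event_observed=progressed)
print(f"C-index = {c_index:.3f}")
```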

Kim JG, Ha SY, Kang YR, Hong H, Kim D, Lee M, Sunwoo L, Ryu WS, Kim JT

PubMed | Oct 16, 2025
To evaluate the stand-alone efficacy of artificial intelligence (AI) software for detecting large vessel occlusion (LVO) on CT angiography (CTA), and the improvement it provides in the diagnostic accuracy of early-career physicians. This multicenter study included 595 ischemic stroke patients from January 2021 to September 2023. Standard references and LVO locations were determined by consensus among three experts. The efficacy of the AI software was benchmarked against standard references, and its impact on the diagnostic accuracy of four residents involved in stroke care was assessed. The area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of the software and readers with versus without AI assistance were calculated. Among the 595 patients (mean age 68.5±13.4 years, 56% male), 275 (46.2%) had LVO. The median time interval from the last known well time to the CTA was 46.0 hours (IQR 11.8-64.4). For LVO detection, the software demonstrated a sensitivity of 0.858 (95% CI 0.811 to 0.897) and a specificity of 0.969 (95% CI 0.943 to 0.985). In subjects whose symptom onset to imaging was within 24 hours (n=195), the software exhibited an AUROC of 0.973 (95% CI 0.939 to 0.991), a sensitivity of 0.890 (95% CI 0.817 to 0.936), and a specificity of 0.965 (95% CI 0.902 to 0.991). Reading with AI assistance improved sensitivity by 4.0% (2.17 to 5.84%) and AUROC by 0.024 (0.015 to 0.033) (all P<0.001) compared with readings without AI assistance. The AI software demonstrated a high detection rate for LVO. In addition, the software improved the diagnostic accuracy of early-career physicians in detecting LVO, streamlining stroke workflow in the emergency room.
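A minimal scikit-learn sketch of the reader and software performance metrics above (AUROC, sensitivity, specificity); the label and score arrays and the operating threshold are placeholders, not study data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Placeholder ground truth (1 = LVO present) and AI probability outputs.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.81, 0.35, 0.20, 0.05, 0.88, 0.40, 0.76, 0.15])

auroc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)                 # example operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUROC={auroc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```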

Heo JU, Sun S, Jones RS, Gu Y, Jiang Y, Qian P, Baydoun A, Arsenault TH, Traughber M, Helo RA, Thompson C, Yao M, Dorth J, Nakayama J, Waggoner SE, Biswas T, Harris EE, Sandstrom KS, Traughber B, Muizc RJF

PubMed | Oct 16, 2025
Positron Emission Tomography/Magnetic Resonance (PET/MR) offers benefits over PET/CT including simultaneous PET and MR acquisition, intrinsic spatial registration accuracy, MR-based functional information, and superior soft tissue contrast. However, accurate attenuation correction (AC) for PET remains challenging as MR signals do not directly correspond to attenuation. Using deep learning algorithms that learn complex relationships, we generate synthetic CT (sCT) from MR for AC. Our novel method for AC merges deep learning with threshold-based segmentation to produce an AC map for the entire torso from Dixon MR images, which heretofore has not been demonstrated.

Twenty-nine prospectively collected, paired FDG-PET/CT and MR datasets were used for training and validation using the U-net Residual Network conditional Generative Adversarial Network integrated with tissue segmentation (URcGANmod) from Dixon MR data. Our application focused on torso (base of the skull to mid-thigh) AC, a common but challenging field of view (FOV). Performance was compared to that of 4 previously published methods.

Using 15 paired datasets for training and 14 independent datasets for testing, URcGANmod generates an accurate torso sCT with a mean absolute difference of 32±4 HU per voxel. When applied for AC of FDG images, and considering evaluable (SUV ≥ 0.1 g/mL) voxels across all regions of interest, absolute values of the differences were within 4.4% of those determined using the measured CT for AC. Reproducibility was excellent, with less than 3.5% standard deviation. The results demonstrate the accuracy and precision of the URcGANmod method for torso sCT generation for quantitatively accurate MR-based AC (MRAC), exceeding the comparison methods.

Combining deep learning and segmentation enhances MRAC accuracy in torso FDG-PET/MR, improves SUV accuracy throughout the torso, achieves less than 4.4% SUV error, and outperforms comparison methods. Given the excellent sCT and SUV accuracy and precision, our proposed method warrants further studies for quantitative longitudinal multicenter trials.
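For reference, a minimal sketch of the per-voxel attenuation-map comparison reported above (mean absolute HU difference between synthetic and measured CT within a body mask); the array contents and the air-exclusion rule are illustrative assumptions, not the study's actual evaluation pipeline:

```python
import numpy as np

def hu_mean_absolute_difference(sct: np.ndarray, ct: np.ndarray, body_mask: np.ndarray) -> float:
    """Mean absolute HU difference between synthetic CT and measured CT over body voxels."""
    diff = np.abs(sct.astype(float) - ct.astype(float))
    return float(diff[body_mask].mean())

# Placeholder volumes (HU) and a crude body mask excluding air (< -400 HU on measured CT).
ct = np.random.default_rng(0).normal(loc=0, scale=300, size=(32, 64, 64))
sct = ct + np.random.default_rng(1).normal(loc=0, scale=30, size=ct.shape)
mask = ct > -400
print(f"Mean absolute difference = {hu_mean_absolute_difference(sct, ct, mask):.1f} HU")
```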