
Manish Bhardwaj, Huizhi Liang, Ashwin Sivaharan, Sandip Nandhra, Vaclav Snasel, Tamer El-Sayed, Varun Ojha

arXiv preprint · Aug 24, 2025
Sarcopenia is a progressive loss of muscle mass and function linked to poor surgical outcomes such as prolonged hospital stays, impaired mobility, and increased mortality. Although it can be assessed through cross-sectional imaging by measuring skeletal muscle area (SMA), the process is time-consuming and adds to clinical workloads, limiting timely detection and management; however, this process could become more efficient and scalable with the assistance of artificial intelligence applications. This paper presents high-quality three-dimensional cross-sectional computed tomography (CT) images of patients with sarcopenia collected at the Freeman Hospital, Newcastle upon Tyne Hospitals NHS Foundation Trust. Expert clinicians manually annotated the SMA at the third lumbar vertebra, generating precise segmentation masks. We develop deep-learning models to measure SMA in CT images and automate this task. Our methodology employed transfer learning and self-supervised learning approaches using labelled and unlabelled CT scan datasets. While we developed qualitative assessment models for detecting sarcopenia, we observed that the quantitative assessment of SMA is more precise and informative. This approach also mitigates the issue of class imbalance and limited data availability. Our model predicted the SMA, on average, with an error of ±3 percentage points against the manually measured SMA. The average Dice similarity coefficient of the predicted masks was 93%. Our results, therefore, show a pathway to full automation of sarcopenia assessment and detection.
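The headline numbers here are the Dice similarity coefficient of the predicted masks and the percentage error against the manually measured SMA. The snippet below is a minimal sketch of how such metrics can be computed from binary L3 segmentation masks; the function names, the toy masks, and the pixel spacing are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def sma_cm2(mask: np.ndarray, pixel_spacing_mm=(0.78, 0.78)) -> float:
    """Skeletal muscle area in cm^2 from a binary L3 slice mask (spacing assumed)."""
    pixel_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
    return float(mask.astype(bool).sum()) * pixel_area_cm2

# Toy masks standing in for the manual and predicted L3 segmentations.
rng = np.random.default_rng(0)
truth = rng.random((512, 512)) > 0.8
pred = truth.copy()
pred[:20] = False  # simulate a small segmentation error

print(f"Dice: {dice_coefficient(pred, truth):.3f}")
error_pct = 100.0 * (sma_cm2(pred) - sma_cm2(truth)) / sma_cm2(truth)
print(f"SMA error vs manual measurement: {error_pct:+.2f}%")
```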

Asadullah Bin Rahman, Masud Ibn Afjal, Md. Abdulla Al Mamun

arXiv preprint · Aug 24, 2025
Medical imaging modalities are inherently susceptible to noise contamination that degrades diagnostic utility and clinical assessment accuracy. This paper presents a comprehensive comparative evaluation of three state-of-the-art deep learning architectures for MRI brain image denoising: CNN-DAE, CADTra, and DCMIEDNet. We systematically evaluate these models across multiple Gaussian noise intensities ($\sigma = 10, 15, 25$) using the Figshare MRI Brain Dataset. Our experimental results demonstrate that DCMIEDNet achieves superior performance at lower noise levels, with PSNR values of $32.921 \pm 2.350$ dB and $30.943 \pm 2.339$ dB for $\sigma = 10$ and $15$ respectively. However, CADTra exhibits greater robustness under severe noise conditions ($\sigma = 25$), achieving the highest PSNR of $27.671 \pm 2.091$ dB. All deep learning approaches significantly outperform traditional wavelet-based methods, with improvements ranging from 5-8 dB across tested conditions. This study establishes quantitative benchmarks for medical image denoising and provides insights into architecture-specific strengths for varying noise intensities.
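For context, the evaluation protocol described here boils down to corrupting clean images with zero-mean Gaussian noise at $\sigma = 10, 15, 25$ and scoring reconstructions with PSNR. The sketch below illustrates that measurement loop only (the CNN-DAE, CADTra, and DCMIEDNet architectures are not reproduced); the toy image and helper names are assumptions.

```python
import numpy as np

def add_gaussian_noise(image: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise (sigma in 8-bit intensity units)."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255)

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Toy "MRI slice"; a real evaluation would load the Figshare brain images.
clean = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(np.float64)
for sigma in (10, 15, 25):
    noisy = add_gaussian_noise(clean, sigma)
    print(f"sigma={sigma:2d}: PSNR of noisy input = {psnr(clean, noisy):.2f} dB")
```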

Han Y, Kim J, Park S, Moon JS, Lee JH

PubMed paper · Aug 24, 2025
Glomeruli are crucial for blood filtration, waste removal, and regulation of essential substances in the body. Traditional methods for detecting glomeruli rely on human interpretation, which can lead to variability. AI techniques have improved this process; however, most studies have used images with fixed magnification. This study proposes a novel magnification-integrated ensemble method to enhance glomerular segmentation accuracy. Whole-slide images (WSIs) from 12 patients were used for training, two for validation, and one for testing. Patch and mask images of size 256 × 256 were extracted at ×2, ×3, and ×4 magnification levels. Data augmentation techniques, such as RandomResize, RandomCrop, and RandomFlip, were used. The segmentation model underwent 80,000 iterations with stochastic gradient descent (SGD). Performance varied with changes in magnification: models trained on high-magnification images showed significant drops when tested at lower magnifications, and vice versa. The proposed method improved segmentation accuracy across different magnifications, achieving 87.72 mIoU and a 93.04 Dice score with the U-Net model. The magnification-integrated ensemble method significantly enhanced glomeruli segmentation accuracy across varying magnifications, thereby addressing the limitations of fixed-magnification models. This approach improves the robustness and reliability of AI-driven diagnostic tools, potentially benefiting various medical imaging applications by ensuring consistent performance despite changes in image magnification.
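A plausible reading of the magnification-integrated ensemble is that probability maps predicted at ×2, ×3, and ×4 are resampled to a common grid and averaged before thresholding. The sketch below illustrates that idea under those assumptions; it is not the paper's implementation, and the map sizes and function names are invented for the example.

```python
import numpy as np
from scipy.ndimage import zoom

def ensemble_probability(prob_maps: dict, target_shape=(256, 256)) -> np.ndarray:
    """Average glomerulus probability maps predicted at several magnifications.

    prob_maps maps a magnification factor (e.g. 2, 3, 4) to that model's
    per-pixel probability map; each map is resampled to a common grid first.
    """
    resampled = []
    for _, probs in sorted(prob_maps.items()):
        factors = (target_shape[0] / probs.shape[0], target_shape[1] / probs.shape[1])
        resampled.append(zoom(probs, factors, order=1))
    return np.mean(resampled, axis=0)

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Toy probability maps standing in for the x2/x3/x4 segmentation models.
rng = np.random.default_rng(0)
maps = {2: rng.random((128, 128)), 3: rng.random((192, 192)), 4: rng.random((256, 256))}
fused = ensemble_probability(maps)
mask = fused > 0.5
print("fused mask shape:", mask.shape, "| positive fraction:", round(mask.mean(), 3))
```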

Hernáiz Ferrer AI, Bortolotto C, Carone L, Preda EM, Fichera C, Lionetti A, Gambini G, Fresi E, Grassi FA, Preda L

PubMed paper · Aug 23, 2025
We evaluated the diagnostic performance of two AI software programs (BoneView and RBfracture) in assisting non-specialist radiologists (NSRs) in detecting scaphoid fractures on conventional wrist radiographs (X-rays). We retrospectively analyzed 724 radiographs from 264 patients with wrist trauma. Patients were classified into two groups: Group 1 included cases with a definitive diagnosis by a specialist radiologist (SR) based on X-rays (either scaphoid fracture or not), while Group 2 comprised cases that were indeterminate for the SRs and required a CT scan for a final diagnosis. Indeterminate cases were defined as negative or doubtful X-rays in patients with persistent clinical symptoms. The X-rays were evaluated by AI and two NSRs, independently and in combination. We compared their diagnostic performances using sensitivity, specificity, area under the curve (AUC), and Cohen's kappa for diagnostic agreement. Group 1 included 174 patients, with 80 cases (45.97%) of scaphoid fractures. Group 2 had 90 patients, of whom 44 had uncertain diagnoses and 46 were negative cases with persistent symptoms. Scaphoid fractures were identified in 51 patients (56.67%) in Group 2 after further CT imaging. In Group 1, AI performed similarly to NSRs (AUC: BoneView 0.83, RBfracture 0.84, NSR1 0.88, NSR2 0.90), without a significant contribution of AI to the performance of NSRs. In Group 2, performances were lower (AUC: BoneView 0.62, RBfracture 0.65, NSR1 0.46, NSR2 0.63), but AI assistance significantly improved NSR performance (NSR2 + BoneView AUC = 0.75, p = 0.003; NSR2 + RBfracture AUC = 0.72, p = 0.030). Diagnostic agreement between NSR1 with AI support and the SR was moderate (kappa = 0.576), and substantial for NSR2 (kappa = 0.712). AI tools may effectively assist NSRs, especially in complex scaphoid fracture cases.
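The study's endpoints, AUC and Cohen's kappa, are straightforward to reproduce once the reference standard and reader calls are tabulated. A minimal sketch with synthetic reader data follows; the simulated accuracy levels are arbitrary and only illustrate how the two metrics are obtained (here via scikit-learn).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Toy reference standard for 90 indeterminate cases (1 = scaphoid fracture).
truth = rng.integers(0, 2, 90)

# Hypothetical reader calls: a non-specialist radiologist alone vs with AI support.
nsr_alone = np.where(rng.random(90) < 0.75, truth, 1 - truth)
nsr_with_ai = np.where(rng.random(90) < 0.85, truth, 1 - truth)

print("AUC, NSR alone :", round(roc_auc_score(truth, nsr_alone), 3))
print("AUC, NSR + AI  :", round(roc_auc_score(truth, nsr_with_ai), 3))
# Agreement between the AI-assisted reader and the reference standard.
print("Cohen's kappa  :", round(cohen_kappa_score(truth, nsr_with_ai), 3))
```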

Huang JW, Zhang YL, Li KY, Li HL, Ye HB, Chen YH, Lin XX, Tian NF

PubMed paper · Aug 23, 2025
Magnetic resonance imaging (MRI) is essential for diagnosing lumbar foraminal stenosis (LFS). However, access remains limited in China due to uneven equipment distribution, high costs, and long waiting times. Therefore, this study developed a lightweight deep learning (DL) model using sagittal CT images to classify LFS severity as a potential clinical alternative where MRI is unavailable. A retrospective study included 868 sagittal CT images from 177 patients (2016-2025). Data were split at the patient level into training (n = 125), validation (n = 31), and test (n = 21) sets, with annotations based on the Lee grading system provided by two spine surgeons. Two DL models were developed: DL1 (EfficientNet-B0) and DL2 (MobileNetV3-Large-100), both of which incorporated a Faster R-CNN with a ResNet-50 backbone as the region-of-interest (ROI) detector. Diagnostic performance was benchmarked against spine surgeons with different levels of clinical experience. DL1 achieved 82.35% diagnostic accuracy (matching the senior spine surgeon's 83.33%), with DL2 at 80.39% (mean 81.37%), both exceeding the junior spine surgeon's 62.75%. DL1 demonstrated near-perfect diagnostic agreement with the senior spine surgeon, as validated by Cohen's kappa analysis (κ = 0.815; 95% CI: 0.723-0.907), whereas DL2 showed substantial consistency (κ = 0.799; 95% CI: 0.703-0.895). Inter-model agreement yielded κ = 0.782 (95% CI: 0.682-0.882). The DL models achieved a mean diagnostic accuracy of 81.37%, comparable to that of the senior spine surgeon (83.33%) in grading LFS severity on sagittal CT. However, given the limited sample size and absence of external validation, their applicability and generalisability to other populations and multi-centre, large-scale datasets remain uncertain.
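The described pipeline is two-stage: a Faster R-CNN with a ResNet-50 backbone localises the foraminal ROI, and a lightweight classifier (EfficientNet-B0 or MobileNetV3) grades the crop. The sketch below wires up such a pipeline with untrained torchvision models purely to show the data flow; the four-class output (assuming Lee grades 0-3), the 224 × 224 input size, and the cropping logic are assumptions rather than the authors' configuration.

```python
import torch
from torchvision.models import efficientnet_b0
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resized_crop

# Stage 1: a Faster R-CNN (ResNet-50 backbone) proposes the foraminal ROI.
detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=2).eval()
# Stage 2: an EfficientNet-B0 grades the cropped ROI (4 classes assumed here).
classifier = efficientnet_b0(weights=None, num_classes=4).eval()

ct_slice = torch.rand(3, 512, 512)  # toy sagittal CT slice, channel-replicated

with torch.no_grad():
    detections = detector([ct_slice])[0]
    if len(detections["boxes"]):
        x1, y1, x2, y2 = detections["boxes"][0].round().int().tolist()
        roi = resized_crop(ct_slice, top=y1, left=x1,
                           height=max(y2 - y1, 1), width=max(x2 - x1, 1),
                           size=[224, 224])
        grade = classifier(roi.unsqueeze(0)).argmax(dim=1).item()
        print("predicted grade index:", grade)
    else:
        print("no foraminal ROI detected on this slice")
```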

Grubert Van Iderstine M, Kim S, Karur GR, Granton J, de Perrot M, McIntosh C, McInnis M

PubMed paper · Aug 23, 2025
The aim of this study was to develop machine learning (ML) models to explore the relationship between chronic pulmonary embolism (PE) burden and severe pulmonary hypertension (PH) in surgical chronic thromboembolic pulmonary hypertension (CTEPH). CTEPH patients with a preoperative CT pulmonary angiogram and pulmonary endarterectomy between 01/2017 and 06/2022 were included. A mean pulmonary artery pressure of > 50 mmHg was classified as severe. CTs were scored by a blinded radiologist who recorded chronic pulmonary embolism extent in detail and measured the right ventricle (RV), left ventricle (LV), main pulmonary artery (PA) and ascending aorta (Ao) diameters. XGBoost models were developed to identify CTEPH feature importance and compared to a logistic regression model. There were 184 patients included; 54.9% were female, and 21.7% had severe PH. The average age was 57 ± 15 years. PE burden alone was not helpful in identifying severe PH. The RV/LV ratio logistic regression model performed well (AUC 0.76) with a cutoff of 1.4. A baseline ML model (Model 1) including only the RV, LV, PA and Ao measures and their ratios yielded an average AUC of 0.66 ± 0.10. The addition of demographics and statistics summarizing the CT findings raised the AUC to 0.75 ± 0.08 (F1 score 0.41). While measures of PE burden had little bearing on PH severity independently, the RV/LV ratio, extent of disease in various segments, total webs observed, and patient demographics improved the performance of machine learning models in identifying severe PH. Question: Can machine learning methods applied to CT-based cardiac measurements and detailed maps of chronic thromboembolism type and distribution predict pulmonary hypertension (PH) severity? Findings: The right-to-left ventricle (RV/LV) ratio was predictive of PH severity with an optimal cutoff of 1.4, and detailed accounts of chronic thromboembolic burden improved model performance. Clinical relevance: The identification of a CT-based RV/LV ratio cutoff of 1.4 gives radiologists, clinicians, and patients a point of reference for chronic thromboembolic PH severity. Detailed chronic thromboembolic burden data are useful but cannot be used alone to predict PH severity.
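The reported RV/LV cutoff of 1.4 is the kind of threshold a univariate logistic regression yields once its decision boundary (predicted probability 0.5) is solved for the ratio. A sketch on synthetic data follows to show that calculation; the cohort values are simulated and the XGBoost models are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy cohort: RV/LV diameter ratios and severe-PH labels (mPAP > 50 mmHg).
n = 184
rv_lv = rng.normal(1.2, 0.35, n).clip(0.6, 2.5)
severe = (rv_lv + rng.normal(0, 0.25, n) > 1.4).astype(int)

model = LogisticRegression().fit(rv_lv.reshape(-1, 1), severe)
auc = roc_auc_score(severe, model.predict_proba(rv_lv.reshape(-1, 1))[:, 1])

# The ratio at which predicted probability crosses 0.5 acts as the cutoff.
cutoff = -model.intercept_[0] / model.coef_[0, 0]
print(f"AUC = {auc:.2f}, probability-0.5 cutoff at RV/LV = {cutoff:.2f}")
```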

Klemenz AC, Watzke LM, Deyerberg KK, Böttcher B, Gorodezky M, Manzke M, Dalmer A, Lorbeer R, Weber MA, Meinel FG

PubMed paper · Aug 23, 2025
To evaluate deep-learning (DL) based real-time cardiac cine sequences acquired in free breathing (FB) vs breath hold (BH). In this prospective single-centre cohort study, 56 healthy adult volunteers were investigated on a 1.5-T MRI scanner. A set of real-time cine sequences, including a short-axis stack and 2-, 3-, and 4-chamber views, was acquired in FB and with BH. A validated DL-based cine sequence acquired over three cardiac cycles served as the reference standard for volumetric results. Subjective image quality (sIQ) was rated by two blinded readers. Volumetric analysis of both ventricles was performed. sIQ was rated as good to excellent for FB real-time cine images, slightly inferior to BH real-time cine images (p < 0.0001). Overall acquisition time for one set of cine sequences was 50% shorter with FB (median 90 vs 180 s, p < 0.0001). There were significant differences between the real-time sequences and the reference in left ventricular (LV) end-diastolic volume, LV end-systolic volume, LV stroke volume and LV mass. Nevertheless, BH cine imaging showed excellent correlation with the reference standard, with an intra-class correlation coefficient (ICC) > 0.90 for all parameters except right ventricular ejection fraction (RV EF, ICC = 0.887). With FB cine imaging, correlation with the reference standard was good for LV ejection fraction (LV EF, ICC = 0.825) and RV EF (ICC = 0.824) and excellent (ICC > 0.90) for all other parameters. DL-based real-time cine imaging is feasible even in FB, with good to excellent image quality and acceptable volumetric results in healthy volunteers. Question: Conventional cardiac MR (CMR) cine imaging is challenged by arrhythmias and patients unable to hold their breath, since data are acquired over several heartbeats. Findings: DL-based real-time cine imaging is feasible in FB with acceptable volumetric results and a 50% reduction in acquisition time compared to real-time breath-hold sequences. Clinical relevance: This study fits into the wider goal of increasing the availability of CMR by reducing the complexity and duration of the examination, improving patient comfort, and making CMR available even for patients who are unable to hold their breath.
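Agreement with the reference standard is summarised with intra-class correlation coefficients. A minimal sketch of that computation on simulated paired LV end-diastolic volumes is shown below, assuming the third-party pingouin package for the ICC table; the choice of ICC2 (two-way random, absolute agreement) is an assumption, as the abstract does not state which ICC form was used.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available for the ICC computation

rng = np.random.default_rng(0)

# Toy paired measurements: LV end-diastolic volume (mL) for 56 volunteers,
# once from the reference cine and once from the free-breathing real-time cine.
n = 56
reference = rng.normal(150, 25, n)
free_breathing = reference + rng.normal(0, 8, n)

long = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "method": ["reference"] * n + ["free_breathing"] * n,
    "lvedv_ml": np.concatenate([reference, free_breathing]),
})

icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="method", ratings="lvedv_ml")
# ICC2 (two-way random, absolute agreement) is one common choice here.
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```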

Revel MP, Biederer J, Nair A, Silva M, Jacobs C, Snoeckx A, Prokop M, Prosch H, Parkar AP, Frauenfelder T, Larici AR

PubMed paper · Aug 23, 2025
Low-dose CT screening for lung cancer reduces the risk of death from lung cancer by at least 21% in high-risk participants and should be offered to people aged between 50 and 75 with at least 20 pack-years of smoking. Iterative reconstruction or deep learning algorithms should be used to keep the effective dose below 1 mSv. Deep learning algorithms are required to facilitate the detection of nodules and the measurement of their volumetric growth. Only solid nodules larger than 500 mm³, those with spiculations, bubble-like lucencies, or pleural indentation, and complex cysts should be investigated further. Short-term follow-up at 3 or 6 months is required for solid nodules of 100 to 500 mm³. A watchful waiting approach is recommended for most subsolid nodules, to limit the risk of overtreatment. Finally, the description of additional findings must be limited if lung cancer screening (LCS) is to be cost-effective. KEY POINTS: Low-dose CT screening reduces the risk of death from lung cancer by at least 21% in high-risk individuals, with a greater benefit in women. Quality assurance of screening is essential to control radiation dose and the number of false positives. Screening with low-dose CT scans detects incidental findings of variable clinical relevance; only those of importance should be reported.
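To make the volume thresholds concrete, the sketch below encodes the stated triage rule for solid nodules (further investigation above 500 mm³ or with suspicious morphology, short-term follow-up for 100-500 mm³). It is an illustrative compression of the recommendations, not a clinical decision tool, and the example volumes are invented.

```python
def triage_solid_nodule(volume_mm3: float, suspicious_morphology: bool) -> str:
    """Rough triage of a solid screening-detected nodule per the thresholds above.

    suspicious_morphology covers spiculations, bubble-like lucencies or
    pleural indentation. This is an illustrative sketch, not clinical guidance.
    """
    if volume_mm3 > 500 or suspicious_morphology:
        return "further investigation"
    if 100 <= volume_mm3 <= 500:
        return "short-term follow-up CT at 3-6 months"
    return "continue routine screening"

for vol, morph in [(650, False), (250, False), (80, False), (120, True)]:
    print(f"{vol} mm^3, suspicious={morph}: {triage_solid_nodule(vol, morph)}")
```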

Zhang R, Wang X, Zhou Z, Ni L, Jiang M, Hu P

PubMed paper · Aug 23, 2025
Epicardial and paracardial adipose tissue (EAT and PAT) are two types of fat depots around the heart, and they have important roles in cardiac physiology. Manual quantification of EAT and PAT from cardiac MR (CMR) is time-consuming and prone to human bias. Leveraging cardiac motion, we aimed to develop deep learning neural networks for automated segmentation and quantification of EAT and PAT in short-axis cine CMR. A modified U-Net equipped with modules for multi-resolution convolution, motion information extraction, feature fusion, and dual attention mechanisms was developed. Multiple ablation studies were performed to verify the efficacy of each module. The performance of different networks was also compared. The final network incorporating all modules achieved segmentation Dice indices of 77.72% ± 2.53% and 77.18% ± 3.54% for EAT and PAT, respectively, which were significantly higher than the baseline U-Net. It also achieved the highest performance compared to other networks. With our model, the coefficients of determination of EAT and PAT volumes against the reference were 0.8550 and 0.8025, respectively. Our proposed network can provide accurate and quick quantification of EAT and PAT on routine short-axis cine CMR, which can potentially aid cardiologists in clinical settings.
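The coefficients of determination quoted above compare network-derived fat volumes with the reference. The sketch below shows that comparison on toy mask stacks; the voxel dimensions, subject count, and under-segmentation model are assumptions made only to produce example numbers.

```python
import numpy as np
from sklearn.metrics import r2_score

def fat_volume_ml(mask_stack: np.ndarray, pixel_mm=(1.8, 1.8), slice_mm=8.0) -> float:
    """Fat volume in mL from a stack of binary short-axis masks (slices, H, W)."""
    voxel_ml = pixel_mm[0] * pixel_mm[1] * slice_mm / 1000.0
    return float(mask_stack.astype(bool).sum()) * voxel_ml

rng = np.random.default_rng(0)
# Toy cohort: reference vs predicted EAT masks for 20 subjects, 10 slices each.
ref_volumes, pred_volumes = [], []
for _ in range(20):
    ref = rng.random((10, 128, 128)) > 0.95
    pred = np.logical_and(ref, rng.random(ref.shape) > 0.1)  # slight under-segmentation
    ref_volumes.append(fat_volume_ml(ref))
    pred_volumes.append(fat_volume_ml(pred))

print("R^2 (EAT volume):", round(r2_score(ref_volumes, pred_volumes), 3))
```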

Mollica F, Metz C, Anders MS, Wismayer KK, Schmid A, Niehues SM, Veldhoen S

PubMed paper · Aug 23, 2025
Fractures in children are common in emergency care, and accurate diagnosis is crucial to avoid complications affecting skeletal development. Limited access to pediatric radiology specialists emphasizes the potential of artificial intelligence (AI)-based diagnostic tools. This study evaluates the performance of the AI software BoneView® for detecting fractures of the upper extremity in children aged 2-18 years. A retrospective analysis was conducted using radiographic data from 826 pediatric patients presenting to the university's pediatric emergency department. Independent assessments by two experienced pediatric radiologists served as the reference standard. The diagnostic accuracy of the AI tool compared to the reference standard was evaluated, and performance parameters such as sensitivity, specificity, and positive and negative predictive values were calculated. The AI tool achieved an overall sensitivity of 89% and specificity of 91% for detecting fractures of the upper extremities. Significantly poorer performance compared to the reference standard was observed for the shoulder, elbow, hand, and fingers, while no significant difference was found for the wrist, clavicle, upper arm, and forearm. The software performed best for wrist fractures (sensitivity: 96%; specificity: 94%) and worst for elbow fractures (sensitivity: 87%; specificity: 65%). The assessed software provides diagnostic support in pediatric emergency radiology. While its overall performance is robust, limitations in specific anatomical regions underscore the need for further training of the underlying algorithms. The results suggest that AI can complement clinical expertise but should not replace radiological assessment. Question: There is no comprehensive analysis of an AI-based tool for the diagnosis of pediatric fractures focusing on the upper extremities. Findings: The AI-based software demonstrated solid overall diagnostic accuracy in the detection of upper limb fractures in children, with performance differing by anatomical region. Clinical relevance: AI-based fracture detection can support pediatric emergency radiology, especially where expert interpretation is limited. However, further algorithm training is needed for certain anatomical regions and for detecting associated findings such as joint effusions to maximize clinical benefit.
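Sensitivity, specificity, and the predictive values reported per anatomical region all derive from 2×2 confusion-matrix counts. The sketch below computes them from hypothetical counts; the numbers are not the study's data.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for one anatomical region (not the study's actual data).
metrics = diagnostic_metrics(tp=178, fp=45, fn=22, tn=455)
for name, value in metrics.items():
    print(f"{name}: {value:.2%}")
```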