
Ahn S, Kim M, Kim J, Park W

pubmed papers | Oct 18 2025
With advancements in deep learning-based dental imaging analysis, artificial intelligence (AI) models are increasingly being employed to assist in mandibular third molar surgery. However, a comprehensive overview of their clinical utility remains limited. This scoping review aimed to identify and compare deep learning models used in the radiographic evaluation of mandibular third molar surgery, with a focus on AI model types, key performance metrics, imaging modalities, and clinical applicability. Following the PRISMA-ScR guidelines, a comprehensive search was conducted in the PubMed and Scopus databases for original research articles published between 2015 and 2024. Systematic reviews, editorial articles, and studies with insufficient datasets were excluded. Studies utilising panoramic radiographs and cone-beam computed tomography (CBCT) images for AI-based mandibular third molar analyses were included. The extracted data were charted according to the AI model types, performance metrics (accuracy, sensitivity, and specificity), dataset size and distribution, validation processes, and clinical applicability. Comparative performance tables and heat maps were utilised for visualisation. Of the initial 948 articles, 16 met the inclusion criteria. Various convolutional neural network (CNN)-based models have been developed, with U-Net demonstrating the highest accuracy and clinical utility. Most studies employed panoramic and CBCT images, with U-Net outperforming other models in predicting nerve injury and evaluating extraction difficulty. However, substantial variations in dataset size, validation procedures, and performance metrics were noted, highlighting inconsistencies in model generalisability. Deep learning shows promising potential in the radiographic evaluation of mandibular third molars. To date, most studies have relied on two-dimensional images and focused on detection and segmentation, while predictive modeling and three-dimensional CBCT-based analysis are relatively limited. To enhance clinical utility, larger standardized datasets, transparent multi-expert annotation, task-specific benchmarking, and robust external/multicenter validation are needed. These measures will enable reliable pre-extraction risk prediction and support clinical decision-making.
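
To illustrate the kind of comparative heat map described above, here is a minimal, self-contained sketch in Python; the model names and scores are placeholders, not values extracted from the review.

```python
# Minimal sketch: heat map comparing model performance metrics across studies.
# Model names and scores are illustrative placeholders, not review data.
import numpy as np
import matplotlib.pyplot as plt

models = ["U-Net", "CNN model A", "CNN model B"]      # hypothetical labels
metrics = ["Accuracy", "Sensitivity", "Specificity"]
scores = np.array([[0.95, 0.93, 0.94],                # placeholder values
                   [0.90, 0.88, 0.91],
                   [0.87, 0.85, 0.89]])

fig, ax = plt.subplots()
im = ax.imshow(scores, vmin=0.8, vmax=1.0, cmap="viridis")
ax.set_xticks(range(len(metrics)))
ax.set_xticklabels(metrics)
ax.set_yticks(range(len(models)))
ax.set_yticklabels(models)
for i in range(len(models)):
    for j in range(len(metrics)):
        ax.text(j, i, f"{scores[i, j]:.2f}", ha="center", va="center", color="w")
fig.colorbar(im, ax=ax, label="Score")
plt.tight_layout()
plt.show()
```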

Ghoul A, Cassal Paulson P, Lingg A, Krumm P, Hammernik K, Rueckert D, Gatidis S, Küstner T

pubmed papers | Oct 18 2025
This study aims to develop an automated framework for operator-independent assessment of cardiac ventricular function from highly accelerated images. We introduce a deep learning framework that generates reliable ventricular volumetric parameters and strain measures from fully sampled and retrospectively accelerated MR images. This method integrates image registration, motion-compensated reconstruction, and segmentation in a synergetic loop for mutual refinement. The evaluation was performed on an in-house dataset of healthy and cardiovascular-diseased subjects. We examined the performance of the underlying tasks, including registration and segmentation, and their impact on derived parameters related to ventricular function. The proposed approach demonstrates robustness to undersampling artifacts and requires limited annotation, while still reducing variability and errors for segmentation and registration. This translates to a 9% to 22% increase in Dice similarity compared to existing deep learning methods for left endocardium, left epicardium, and right ventricular delineation. Analysis of the predicted left and right ventricular ejection fraction reveals a strong correlation (>0.9) with manual measurements. Moreover, the estimated motion and segmentation masks enable consistent radial and circumferential strain measurements across accelerations up to R = 24. A comprehensive ventricular function analysis can be performed using highly accelerated cine MR data with minimal annotation effort. This multitasking strategy has the potential to enable more accessible cardiac function analysis within a single breath-hold.
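
As a concrete reference for two of the derived quantities above, a minimal sketch (with toy inputs) of the Dice similarity coefficient and the ejection fraction computed from end-diastolic and end-systolic volumes:

```python
# Minimal sketch: Dice similarity between two binary masks and ejection fraction
# from end-diastolic/end-systolic blood-pool volumes. Inputs are toy examples.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
ref = np.zeros((64, 64), dtype=bool); ref[22:42, 22:42] = True
print(f"Dice: {dice(pred, ref):.3f}")
print(f"EF:   {ejection_fraction(120.0, 50.0):.1f}%")
```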

Uppal D, Prakash S

pubmed papers | Oct 18 2025
Brain tumors are caused by rapid and uncontrolled cell growth that may pose a potential threat to life if not treated at an early stage. The complex structure of the brain poses a significant challenge in accurately delineating both tumor and non-tumor regions. Detecting brain tumors is complicated by their irregular shapes and varied locations within the brain. Automated brain tumor segmentation from Magnetic Resonance Imaging (MRI) scans is crucial for timely clinical diagnosis and treatment planning. Recent advancements in deep learning methods have shown remarkable success in automated brain tumor segmentation. However, these methods are unable to effectively capture channel-wise feature interdependence, which is required for efficiently learning spatial and channel feature representations. Thus, we propose Dilated Convolutional Transformer V-Net, named DCTransVNet, for volumetric brain tumor segmentation in Multimodal MRI scans. DCTransVNet is structured as an encoder-decoder network, where the encoder integrates four 3D dilated residual blocks to effectively extract multi-scale contextual feature representation from MRI scans. Following each 3D dilated residual block, an Efficient Paired-Attention block is employed to emphasize spatial and channel-wise information, contributing to more discriminative feature representation. Further, the Global Context module captures long-range dependencies and achieves a comprehensive global feature representation. Additionally, we propose a novel Augmented Brain Tumor Segmentation Generative Adversarial Network (AuBTS-GAN) to generate realistic synthetic images for enhanced segmentation performance. We perform extensive experiments and ablation studies on benchmark datasets, including BraTS-2019, BraTS-2020, BraTS-2021 from the Multimodal Brain Tumor Segmentation Challenge (BraTS) and the Medical Segmentation Decathlon (MSD) BraTS dataset. The results show the efficacy of the proposed DCTransVNet compared to existing state-of-the-art approaches.
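
A minimal PyTorch sketch of a 3D dilated residual block of the general kind the encoder description refers to; the channel counts, normalization, and dilation rate are illustrative assumptions rather than the paper's exact configuration:

```python
# Minimal sketch of a 3D dilated residual block (illustrative configuration).
import torch
import torch.nn as nn

class DilatedResBlock3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=dilation, dilation=dilation, bias=False),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=dilation, dilation=dilation, bias=False),
            nn.InstanceNorm3d(out_ch),
        )
        # 1x1x1 projection so the skip connection matches the output channels
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# Toy forward pass on a 4-modality MRI patch (B, C, D, H, W)
x = torch.randn(1, 4, 32, 32, 32)
print(DilatedResBlock3D(4, 16)(x).shape)  # torch.Size([1, 16, 32, 32, 32])
```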

Shinozaki Y, Ishida H, Kaneno T, Takaya E, Kobayashi T, Taniyama Y, Sato C, Okamoto H, Ozawa Y, Ueda T, Kamei T

pubmed papers | Oct 18 2025
This study aimed to assess the performance of a deep learning model using multimodal imaging for detecting lymph node metastasis in esophageal cancer in comparison to expert assessments. A retrospective analysis was performed for 521 lymph nodes from 167 patients with esophageal cancer who underwent esophagectomy. Deep learning models were developed based on multimodal imaging, including non-contrast-enhanced computed tomography, contrast-enhanced computed tomography, and positron emission tomography imaging. The diagnostic performance was evaluated and compared with expert assessments using a receiver operating characteristic curve analysis. The area under the receiver operating characteristic curve values for the deep learning model were 0.81 with multimodal imaging, 0.73 with non-contrast-enhanced computed tomography, 0.72 with contrast-enhanced computed tomography, and 0.75 with positron emission tomography. The area under the curve of the deep learning model (0.81) demonstrated diagnostic performance comparable to that of experienced experts (area under the curve, 0.84; P = 0.62, DeLong's test). The multimodal deep learning model using computed tomography and positron emission tomography demonstrated performance comparable to that of experts in diagnosing the presence of lymph node metastasis, a key prognostic factor in esophageal cancer, suggesting its potential clinical utility.
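
For reference, a minimal sketch of per-modality ROC AUC computation with scikit-learn on synthetic labels and scores; DeLong's test for comparing correlated AUCs is not included in scikit-learn and would need a separate implementation:

```python
# Minimal sketch: per-modality ROC AUC on synthetic data (not the study's data).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                    # 1 = metastatic node (synthetic)
scores = {                                               # synthetic model outputs per modality
    "multimodal": y_true * 0.5 + rng.normal(0, 0.4, 200),
    "PET only":   y_true * 0.3 + rng.normal(0, 0.4, 200),
}

for name, s in scores.items():
    auc = roc_auc_score(y_true, s)
    fpr, tpr, _ = roc_curve(y_true, s)
    print(f"{name:>10}: AUC = {auc:.2f} ({len(fpr)} ROC points)")
# Note: DeLong's test is not part of scikit-learn; a dedicated implementation is needed.
```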

Al-Naser Y, Sharma S, Niure K, Ibach K, Khosa F, Yong-Hing CJ

pubmed papers | Oct 18 2025
As generative AI tools increasingly produce medical imagery and videos for education, marketing, and communication, concerns have arisen about the accuracy and equity of these representations. Existing research has identified demographic biases in AI-generated depictions of healthcare professionals, but little is known about their portrayal of Medical Radiation Technologists (MRTs), particularly in the Canadian context. This study evaluated 690 AI-generated outputs (600 images and 90 videos) created by eight leading text-to-image and text-to-video models using the prompt "Image [or video] of a Canadian Medical Radiation Technologist." Each image and video was assessed for demographic characteristics (gender, race/ethnicity, age, religious representation, visible disabilities), and the presence and accuracy of imaging equipment. These were compared to real-world demographic data on Canadian MRTs (n = 20,755). Significant demographic discrepancies were observed between AI-generated content and real-world data. AI depictions included a higher proportion of visible minorities (as defined by Statistics Canada) (39% vs. 20.8%, p < 0.001) and males (41.4% vs. 21.2%, p < 0.001), while underrepresenting women (58.5% vs. 78.8%, p < 0.001). Age representation skewed younger than actual workforce demographics (p < 0.001). Equipment representation was inconsistent, with 66% of outputs showing CT/MRI and only 4.3% showing X-rays; 26% included inaccurate or fictional equipment. Generative AI models frequently produce demographically and contextually inaccurate depictions of MRTs, misrepresenting workforce diversity and clinical tools. These inconsistencies pose risks for educational accuracy, public perception, and equity in professional representation. Improved model training and prompt sensitivity are needed to ensure reliable and inclusive AI-generated medical content.
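
A minimal sketch of the kind of proportion comparison reported above, framed as a chi-square goodness-of-fit test of AI-generated gender counts against workforce proportions; the counts are back-calculated from the percentages in the abstract and are approximate:

```python
# Minimal sketch: chi-square goodness of fit of AI-generated gender counts
# against real-world workforce proportions (approximate counts from the abstract).
from scipy.stats import chisquare

n_outputs = 690
observed = [int(round(0.585 * n_outputs)),   # women in AI outputs
            int(round(0.414 * n_outputs))]   # men in AI outputs
workforce_props = [0.788, 0.212]             # women, men among Canadian MRTs
expected = [p * sum(observed) for p in workforce_props]

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.2e}")
```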

Melika Filvantorkaman, Maral Filvan Torkaman

arxiv preprint | Oct 18 2025
Medical imaging plays a vital role in modern diagnostics; however, interpreting high-resolution radiological data remains time-consuming and susceptible to variability among clinicians. Traditional image processing techniques often lack the precision, robustness, and speed required for real-time clinical use. To overcome these limitations, this paper introduces a deep learning framework for real-time medical image analysis designed to enhance diagnostic accuracy and computational efficiency across multiple imaging modalities, including X-ray, CT, and MRI. The proposed system integrates advanced neural network architectures such as U-Net, EfficientNet, and Transformer-based models with real-time optimization strategies including model pruning, quantization, and GPU acceleration. The framework enables flexible deployment on edge devices, local servers, and cloud infrastructures, ensuring seamless interoperability with clinical systems such as PACS and EHR. Experimental evaluations on public benchmark datasets demonstrate state-of-the-art performance, achieving classification accuracies above 92%, segmentation Dice scores exceeding 91%, and inference times below 80 milliseconds. Furthermore, visual explanation tools such as Grad-CAM and segmentation overlays enhance transparency and clinical interpretability. These results indicate that the proposed framework can substantially accelerate diagnostic workflows, reduce clinician workload, and support trustworthy AI integration in time-critical healthcare environments.
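
To make one of the listed optimizations concrete, a minimal sketch of post-training dynamic quantization in PyTorch on a small stand-in classifier; the architecture is illustrative and not one of the framework's actual models:

```python
# Minimal sketch: post-training dynamic quantization of a toy classifier and a rough
# CPU latency comparison. The model here is a stand-in, not the paper's architecture.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 256), nn.ReLU(), nn.Linear(256, 3)).eval()
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 1, 224, 224)
with torch.no_grad():
    t0 = time.perf_counter(); model(x);     t_fp32 = time.perf_counter() - t0
    t0 = time.perf_counter(); quantized(x); t_int8 = time.perf_counter() - t0
print(f"fp32: {1e3 * t_fp32:.2f} ms, int8: {1e3 * t_int8:.2f} ms")
```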

Junno Yun, Yaşar Utku Alçalar, Mehmet Akçakaya

arxiv preprint | Oct 18 2025
Algorithm unrolling methods have proven powerful for solving the regularized least squares problem in computational magnetic resonance imaging (MRI). These approaches unfold an iterative algorithm with a fixed number of iterations, typically alternating between a neural network-based proximal operator for regularization, a data fidelity operation and auxiliary updates with learnable parameters. While the connection to optimization methods dictates that the proximal operator network should be shared across unrolls, this can introduce artifacts or blurring. Heuristically, practitioners have shown that using distinct networks may be beneficial, but this significantly increases the number of learnable parameters, making it challenging to prevent overfitting. To address these shortcomings, taking inspiration from proximal operators with varying thresholds in approximate message passing (AMP) and from the success of time-embedding in diffusion models, we propose a time-embedded algorithm unrolling scheme for inverse problems. Specifically, we introduce a novel perspective on the iteration-dependent proximal operation in vector AMP (VAMP) and the subsequent Onsager correction in the context of algorithm unrolling, framing them as a time-embedded neural network. Similarly, the scalar weights in the data fidelity operation and its associated Onsager correction are cast as time-dependent learnable parameters. Our extensive experiments on the fastMRI dataset, spanning various acceleration rates and datasets, demonstrate that our method effectively reduces aliasing artifacts and mitigates noise amplification, achieving state-of-the-art performance. Furthermore, we show that our time-embedding strategy extends to existing algorithm unrolling approaches, enhancing reconstruction quality without increasing the computational complexity significantly.
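
A minimal sketch of the time-embedding idea in isolation: one shared proximal network conditioned on the unroll index through a sinusoidal embedding. The layer sizes and conditioning scheme are illustrative assumptions, not the authors' exact design:

```python
# Minimal sketch: a shared proximal CNN conditioned on the unroll index via a
# sinusoidal time embedding (illustrative shapes and sizes).
import math
import torch
import torch.nn as nn

def timestep_embedding(t: torch.Tensor, dim: int) -> torch.Tensor:
    """Sinusoidal embedding of the unroll index t, as popularized by diffusion models."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = t.float()[:, None] * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

class TimeEmbeddedProx(nn.Module):
    def __init__(self, ch: int = 2, hidden: int = 32, t_dim: int = 32):
        super().__init__()
        self.t_dim = t_dim
        self.inp = nn.Conv2d(ch, hidden, 3, padding=1)
        self.t_proj = nn.Linear(t_dim, hidden)           # injects the unroll index as a bias
        self.out = nn.Conv2d(hidden, ch, 3, padding=1)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        emb = self.t_proj(timestep_embedding(t, self.t_dim))[:, :, None, None]
        h = torch.relu(self.inp(x) + emb)
        return x + self.out(h)                            # residual regularization update

# One shared network evaluated at different unroll indices
prox = TimeEmbeddedProx()
x = torch.randn(1, 2, 64, 64)                             # real/imag channels of an image estimate
for k in range(3):
    x = prox(x, torch.tensor([k]))
print(x.shape)
```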

Olajumoke O. Adekunle, Joseph D. Akinyemi, Khadijat T. Ladoja, Olufade F. W. Onifade

arxiv preprint | Oct 18 2025
Lung cancer, a malignancy originating in lung tissues, is commonly diagnosed and classified using medical imaging techniques, particularly computed tomography (CT). Despite the integration of machine learning and deep learning methods, the predictive efficacy of automated systems for lung cancer classification from CT images remains below the desired threshold for clinical adoption. Existing research predominantly focuses on binary classification, distinguishing between malignant and benign lung nodules. In this study, a novel deep learning-based approach is introduced, aimed at improved multi-class classification that discerns various subtypes of lung cancer from CT images. A pre-trained ResNet model was leveraged to classify lung tissue images into three distinct classes, two denoting malignancy and one benign. Using a dataset of 15,000 lung CT images sourced from the LC25000 histopathological image dataset, the ResNet50 model was trained on 10,200 images, validated on 2,550 images, and tested on the remaining 2,250 images. Through the incorporation of custom layers atop the ResNet architecture and meticulous hyperparameter fine-tuning, a test accuracy of 98.8% was recorded, a notable improvement over the performance of prior models on the same dataset.
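
A minimal sketch of the described transfer-learning setup, a pre-trained ResNet50 with a custom three-class head in PyTorch; the added layers and hyperparameters are assumptions, not the study's exact choices:

```python
# Minimal sketch: pre-trained ResNet50 with a custom 3-class head (illustrative head/hyperparameters).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():          # freeze the backbone, train only the new head
    p.requires_grad = False
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 256),
    nn.ReLU(inplace=True),
    nn.Dropout(0.3),
    nn.Linear(256, 3),                # two malignant classes and one benign, per the abstract
)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 3, (4,))   # toy batch
loss = criterion(model(x), y)
loss.backward(); optimizer.step()
print(f"toy loss: {loss.item():.3f}")
```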

Chate RC, Carvalho CRR, Sawamura MVY, Salge JM, Fonseca EKUN, Amaral PTMA, de Almeida Lamas C, de Luna LAV, Kay FU, Junior ANA, Nomura CH

pubmed papers | Oct 17 2025
To investigate imaging phenotypes in posthospitalized COVID-19 patients by integrating quantitative CT (QCT) and machine learning (ML), with a focus on small airway disease (SAD) and its correlation with plethysmography. In this single-center cross-sectional retrospective study, a subanalysis of a larger prospective cohort, 257 adult survivors from the initial COVID-19 peak (mean age, 56±13 y; 49% male) were evaluated. Patients were admitted to a quaternary hospital between March 30 and August 31, 2020 (median length of stay: 16 [8-26] d) and underwent plethysmography along with volumetric inspiratory and expiratory chest CT 6 to 12 months after hospitalization. QCT parameters were derived using AI-Rad Companion Chest CT (Siemens Healthineers). Hierarchical clustering of QCT parameters identified 4 phenotypes among survivors, named "SAD," "intermediate," "younger fibrotic," and "older fibrotic," based on clinical and imaging characteristics. The SAD cluster (n=37, 14%) showed higher residual volume (RV) and RV/total lung capacity (TLC) ratios as well as lower FEF25-75/forced vital capacity (FVC) on plethysmography. The older fibrotic cluster (n=42, 16%) had the lowest TLC and FVC values. The younger fibrotic cluster (n=79, 31%) demonstrated lower RV and RV/TLC ratios and higher FEF25-75 than the other phenotypes. The intermediate cluster (n=99, 39%) exhibited characteristics that were intermediate between those of SAD and fibrotic phenotypes. The integration of inspiratory and expiratory chest CT with quantitative analysis and ML enables the identification of distinct imaging phenotypes in long COVID patients, including a unique SAD cluster strongly associated with specific pulmonary function abnormalities.
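
A minimal sketch of hierarchical (Ward) clustering on standardized features, the general technique used to derive the phenotypes; the feature set and synthetic data are illustrative assumptions, not the study's QCT variables:

```python
# Minimal sketch: hierarchical clustering of standardized per-patient features into 4 phenotypes.
# Data and feature names are synthetic placeholders, not the study's QCT parameters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)
X = rng.normal(size=(257, 3))                     # rows = patients, columns = QCT-style parameters
Xz = zscore(X, axis=0)                            # standardize features before clustering

Z = linkage(Xz, method="ward")                    # agglomerative clustering, Ward linkage
labels = fcluster(Z, t=4, criterion="maxclust")   # cut the dendrogram into 4 clusters
sizes = np.bincount(labels)[1:]
print(dict(enumerate(sizes, start=1)))            # patients per cluster
```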

Yao Y, Cen X, Gan L, Jiang J, Wang M, Xu Y, Yuan J

pubmed papers | Oct 17 2025
Accurate staging of esophageal cancer is crucial for determining prognosis and guiding treatment strategies, but manual interpretation of radiology reports by clinicians is prone to variability and limited accuracy, resulting in reduced staging accuracy. Recent advances in large language models (LLMs) have shown promise in medical applications, but their utility in esophageal cancer staging remains underexplored. This study aims to compare the performance of 3 locally deployed LLMs (INF-72B, Qwen2.5-72B, and LLaMA3.1-70B) and clinicians in preoperative esophageal cancer staging using free-text radiology reports. This retrospective study included 200 patients from Shanghai Chest Hospital who underwent esophageal cancer surgery from May to December 2024. The dataset consisted of 1134 Chinese free-text radiology reports. The reference standard was derived from postoperative pathological staging. A total of 3 LLMs determined tumor classification (T1-T4), node classification (N0-N3), and overall staging (I-IV) using 3 prompting strategies (zero-shot, chain-of-thought, and a proposed interpretable reasoning [IR] method). The McNemar test and Pearson chi-square test were used for comparisons. INF-72B+IR achieved a superior overall staging accuracy of 61.5% and an F1-score of 0.60, substantially higher than the clinicians' accuracy of 39.5% and F1-score of 0.39 (all P<.001). Qwen2.5-72B+IR also demonstrated an advantage, achieving an overall staging accuracy of 46% and an F1-score of 0.51, which was better than the clinicians' performance (P<.001). LLaMA3.1-70B showed no statistically significant difference in overall staging performance compared to clinicians (all P>0.5). This study demonstrates that LLMs, particularly when guided by the proposed IR strategy, can accurately and reliably perform esophageal cancer staging from free-text radiology reports. This approach not only provides high-performance predictions but also offers a transparent and verifiable reasoning process, highlighting its potential as a valuable decision-support tool to augment human expertise in complex clinical diagnostic tasks.
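
A minimal sketch of the prompting step, contrasting a zero-shot prompt with a structured-reasoning prompt and parsing a JSON staging answer; the prompt wording, schema, and example output are assumptions for illustration, not the study's IR templates:

```python
# Minimal sketch: building staging prompts from a free-text report and parsing a JSON answer.
# Prompt text and schema are illustrative assumptions, not the study's templates.
import json
import re

def build_prompt(report: str, strategy: str = "reasoning") -> str:
    base = (
        "You are an oncology assistant. Read the radiology report and assign the clinical "
        "T category (T1-T4), N category (N0-N3), and overall stage (I-IV) for esophageal cancer.\n"
        f"Report:\n{report}\n"
    )
    if strategy == "zero-shot":
        return base + 'Answer as JSON: {"T": ..., "N": ..., "stage": ...}'
    return base + (
        "First list the report findings relevant to depth of invasion, then those relevant to "
        "nodal involvement, citing the report text for each. Then answer as JSON: "
        '{"T": ..., "N": ..., "stage": ...}'
    )

def parse_answer(model_output: str) -> dict:
    match = re.search(r"\{.*\}", model_output, flags=re.DOTALL)
    return json.loads(match.group(0)) if match else {}

# Toy round trip with a made-up model reply
print(parse_answer('... findings ... {"T": "T3", "N": "N1", "stage": "III"}'))
```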