
Establishing an AI-based diagnostic framework for pulmonary nodules in computed tomography.

Jia R, Liu B, Ali M

PubMed · Jul 12 2025
Pulmonary nodules seen on computed tomography (CT) can be benign or malignant, and early detection is important for optimal management. Existing manual methods of identifying nodules have limitations: they are time-consuming and error-prone. This study aims to develop an Artificial Intelligence (AI) diagnostic scheme that improves the performance of identifying and categorizing pulmonary nodules on CT scans. The proposed deep learning framework used convolutional neural networks, and the image database totaled 1,056 3D-DICOM CT images. The framework began with preprocessing, including lung segmentation, followed by nodule detection and classification. Nodule detection was done using the Retina-UNet model, while the extracted features were classified using a Support Vector Machine (SVM). Performance measures, including accuracy, sensitivity, specificity, and the AUROC, were used to evaluate the model during training and validation. Overall, the developed AI model achieved an AUROC of 0.9058. The diagnostic accuracy was 90.58%, with an overall positive predictive value of 89% and an overall negative predictive value of 86%. The algorithm effectively handled the CT images at the preprocessing stage, and the deep learning model performed well in detecting and classifying nodules. The new AI-based diagnostic framework increased diagnostic accuracy compared with the traditional approach. It also provides high reliability for detecting pulmonary nodules and classifying lesions, thus minimizing intra-observer differences and improving clinical outcomes. Future work may include enlarging the annotated dataset and fine-tuning the model to address detection issues with non-solitary nodules.
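A minimal sketch of the classification stage this abstract describes, assuming scikit-learn; the feature vectors, labels, and split are synthetic stand-ins for the Retina-UNet-derived nodule features, not the authors' pipeline:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1056, 64))        # stand-in for per-nodule feature vectors
y = rng.integers(0, 2, size=1056)      # 0 = benign, 1 = malignant (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# RBF-kernel SVM with standardized inputs, as the abstract's SVM stage suggests
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
print(f"AUROC: {roc_auc_score(y_te, scores):.4f}")
```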

Accurate and real-time brain tumour detection and classification using optimized YOLOv5 architecture.

Saranya M, Praveena R

PubMed · Jul 12 2025
Brain tumours originate in the brain or its surrounding structures, such as the pituitary and pineal glands, and can be benign or malignant. Benign tumours may grow into neighbouring tissues, while metastatic tumours occur when cancer from other organs spreads to the brain. Accurate identification and staging of such tumours are critical, because essentially every aspect of managing a patient's disease depends on correct diagnosis and staging. Image segmentation is highly valuable in medical imaging because it makes possible the simulation of surgical operations, disease diagnosis, and anatomical and pathological analysis. For the prediction and classification of brain tumours in MRI, this study proposes a combined classification and localization framework connecting a Fully Convolutional Neural Network (FCNN) and You Only Look Once version 5 (YOLOv5). The FCNN model is designed to classify images into four categories: benign, glial, pituitary adenoma-related, and meningeal. It uses a derivative of Root Mean Square Propagation (RMSProp) optimization to boost the classification rate, and performance was evaluated with the standard measures of precision, recall, F1 score, specificity, and accuracy. Subsequently, the YOLOv5 architecture is incorporated for more accurate detection of tumours, with the FCNN then used to create segmentation masks of the tumours. The analysis shows that the suggested approach is more accurate than existing systems, with 98.80% average accuracy in the identification and categorization of brain tumours. This integration of detection and segmentation models presents an effective technique for enhancing diagnostic performance in medical imaging. On the basis of these findings, advancements in deep learning architectures can be expected to improve tumour diagnosis and contribute to the fine-tuning of clinical management.
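A rough illustration of the classification component: a small four-class CNN trained with RMSProp in PyTorch. The tiny network, class ordering, and hyperparameters are assumptions for illustration, not the paper's FCNN:

```python
import torch
import torch.nn as nn

CLASSES = ["benign", "glial", "pituitary", "meningeal"]  # assumed ordering

# toy stand-in classifier; the paper's FCNN is not public
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, len(CLASSES)),
)

optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4, alpha=0.99)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a dummy batch of 8 grayscale MRI slices
x = torch.randn(8, 1, 224, 224)
y = torch.randint(0, len(CLASSES), (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```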

The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET): Rationale and Design.

Ramirez G, Lemley M, Shanbhag A, Kwiecinski J, Miller RJH, Kavanagh PB, Liang JX, Dey D, Slipczuk L, Travin MI, Alexanderson E, Carvajal-Juarez I, Packard RRS, Al-Mallah M, Einstein AJ, Feher A, Acampa W, Knight S, Le VT, Mason S, Sanghani R, Wopperer S, Chareonthaitawee P, Buechel RR, Rosamond TL, deKemp RA, Berman DS, Di Carli MF, Slomka PJ

PubMed · Jul 11 2025
The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET) was established to aggregate PET and associated computed tomography (CT) images with clinical data from hospitals around the world into one comprehensive research resource. REFINE PET is a multicenter, international registry that contains both clinical and imaging data. The PET scans were processed using QPET software (Cedars-Sinai Medical Center, Los Angeles, CA), while the CT scans were processed using deep learning (DL) to detect coronary artery calcium (CAC). Patients were followed up for the occurrence of major adverse cardiovascular events (MACE), which include death, myocardial infarction, unstable angina, and late revascularization (>90 days from PET). The REFINE PET registry currently contains data for 35,588 patients from 14 sites, with additional patient data and sites anticipated. Comprehensive clinical data (including demographics, medical history, and stress test results) were integrated with more than 2200 imaging variables across 42 categories. The registry is poised to address a broad range of clinical questions, supported by correlative invasive angiography (within 6 months of myocardial perfusion imaging [MPI]) in 5972 patients and a total of 9252 MACE during a median follow-up of 4.2 years. The REFINE PET registry leverages the integration of clinical and multimodality imaging data with novel quantitative and AI tools to advance the role of PET/CT MPI in diagnosis and risk stratification.
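As a hedged illustration of the registry's endpoint definition, the sketch below derives a MACE flag from a hypothetical event table in pandas; all column names and rows are invented:

```python
import pandas as pd

events = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "event_type": ["death", "revascularization", "unstable_angina"],
    "days_from_pet": [400, 45, 120],
})

def is_mace(row) -> bool:
    """Death, MI, and unstable angina always count; revascularization
    counts only when late (>90 days from PET), per the definition above."""
    if row.event_type == "revascularization":
        return row.days_from_pet > 90
    return row.event_type in {"death", "myocardial_infarction", "unstable_angina"}

events["mace"] = events.apply(is_mace, axis=1)
print(events)   # patient 2's early revascularization is not counted as MACE
```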

Oriented tooth detection: a CBCT image processing method integrated with RoI transformer.

Zhao Z, Wu B, Su S, Liu D, Wu Z, Gao R, Zhang N

PubMed · Jul 11 2025
Cone beam computed tomography (CBCT) has revolutionized dental imaging due to its high spatial resolution and ability to provide detailed three-dimensional reconstructions of dental structures. This study addresses the challenge of accurate tooth detection and classification in panoramic images (PAN) derived from CBCT, introducing an oriented object detection approach integrated with a Region of Interest (RoI) Transformer, which has not previously been applied in dental imaging. Oriented detection better aligns with the natural growth patterns of teeth, allowing for more accurate detection and classification of molars, premolars, canines, and incisors. By integrating the RoI Transformer, the model achieves favourable performance compared with conventional horizontal detection methods, while also offering enhanced visualization capabilities. Furthermore, post-processing techniques, including distance and grayscale value constraints, are employed to correct classification errors and reduce false positives, especially in areas with missing teeth. The experimental results indicate that the proposed method achieves an accuracy of 98.48%, a recall of 97.21%, an F1 score of 97.21%, and an mAP of 98.12% in tooth detection. The proposed method enhances the accuracy of tooth detection in CBCT-derived PAN by reducing background interference and improving the visualization of tooth orientation.
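The distance and grayscale post-processing idea might look roughly like the following sketch; the greedy filter and its thresholds are illustrative guesses, not the paper's values:

```python
import numpy as np

def filter_detections(boxes, scores, image, min_dist=20.0, min_gray=60.0):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Rejects candidates that are too dark (likely gaps from missing teeth)
    or too close to an already-accepted, higher-confidence detection."""
    centers = np.column_stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                               (boxes[:, 1] + boxes[:, 3]) / 2])
    keep = []
    for i in np.argsort(-scores):                 # greedy, highest score first
        x1, y1, x2, y2 = boxes[i].astype(int)
        patch = image[y1:y2, x1:x2]
        if patch.size == 0 or patch.mean() < min_gray:
            continue                               # grayscale constraint
        if any(np.linalg.norm(centers[i] - centers[j]) < min_dist for j in keep):
            continue                               # distance constraint
        keep.append(i)
    return boxes[keep], scores[keep]
```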

Enhanced Detection of Prostate Cancer Lesions on Biparametric MRI Using Artificial Intelligence: A Multicenter, Fully-crossed, Multi-reader Multi-case Trial.

Xing Z, Chen J, Pan L, Huang D, Qiu Y, Sheng C, Zhang Y, Wang Q, Cheng R, Xing W, Ding J

PubMed · Jul 11 2025
To assess the added value of artificial intelligence (AI) in detecting prostate cancer lesions on MRI by comparing radiologists' performance with and without AI assistance. A fully-crossed multi-reader multi-case clinical trial was conducted across three institutions with 10 non-expert radiologists. Biparametric MRI cases comprising T2WI, diffusion-weighted images, and apparent diffusion coefficient maps were retrospectively collected. Three reading modes were evaluated: AI alone, radiologists alone (unaided), and radiologists with AI (aided). Aided and unaided readings were compared using the Dorfman-Berbaum-Metz method. Reference standards were established by senior radiologists based on pathological reports. Performance was quantified via sensitivity, specificity, and area under the alternative free-response receiver operating characteristic curve (AFROC-AUC). Among 407 eligible male patients (69.5 ± 9.3 years), aided reading significantly improved lesion-level sensitivity from 67.3% (95% confidence interval [CI]: 58.8%, 75.8%) to 85.5% (95% CI: 81.3%, 89.7%), a substantial difference of 18.2% (95% CI: 10.7%, 25.7%, p<0.001). Case-level specificity increased from 75.9% (95% CI: 68.7%, 83.1%) to 79.5% (95% CI: 74.1%, 84.8%), demonstrating non-inferiority (p<0.001). AFROC-AUC was also higher for aided than unaided reading (86.9% vs 76.1%, p<0.001). AI alone achieved robust performance (AFROC-AUC = 83.1%, 95% CI: 79.7%, 86.6%), with lesion-level sensitivity of 88.4% (95% CI: 84.0%, 92.0%) and case-level specificity of 77.8% (95% CI: 71.5%, 83.3%). Subgroup analysis revealed improved detection for lesions of smaller size and lower Prostate Imaging Reporting and Data System scores. AI-aided reading significantly enhances lesion detection compared to unaided reading, while AI alone also demonstrates high diagnostic accuracy.
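For a sense of how a lesion-level sensitivity estimate with a 95% CI can be computed, here is a toy bootstrap sketch on synthetic detection flags (the trial itself used the Dorfman-Berbaum-Metz MRMC method, which this does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(0)
hits = rng.binomial(1, 0.855, size=300)   # synthetic per-lesion detection flags

def bootstrap_ci(x, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for the mean of binary detection flags."""
    means = [rng.choice(x, size=len(x), replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return x.mean(), lo, hi

sens, lo, hi = bootstrap_ci(hits)
print(f"sensitivity {sens:.1%} (95% CI {lo:.1%}, {hi:.1%})")
```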

Research on a deep learning-based model for measurement of X-ray imaging parameters of atlantoaxial joint.

Wu Y, Zheng Y, Zhu J, Chen X, Dong F, He L, Zhu J, Cheng G, Wang P, Zhou S

PubMed · Jul 10 2025
To construct a deep learning-based SCNet model that automatically measures X-ray imaging parameters related to atlantoaxial subluxation (AAS) in cervical open-mouth view radiographs, and to evaluate the accuracy and reliability of the model. A total of 1973 cervical open-mouth view radiographs were collected from the picture archiving and communication systems (PACS) of two hospitals (Hospitals A and B). Among them, 365 images from Hospital A were randomly selected as the internal test dataset for evaluating the model's performance, and the remaining 1364 images from Hospital A were used as the training and validation datasets for constructing the model and tuning its hyperparameters, respectively. The 244 images from Hospital B were used as an external test dataset to evaluate the robustness and generalizability of the model. The model identified and marked landmarks in the images for the parameters of the lateral atlanto-dental space (LADS), atlas lateral mass inclination (ALI), lateral mass width (LW), and axis spinous process deviation distance (ASDD). The landmark measurements on the internal and external test datasets were compared with the mean values of manual measurements by three radiologists as the reference standard. Percentage of correct key-points (PCK), intra-class correlation coefficient (ICC), mean absolute error (MAE), Pearson correlation coefficient (r), mean square error (MSE), root mean square error (RMSE), and Bland-Altman plots were used to evaluate the performance of the SCNet model. (1) Within a 2 mm distance threshold, the PCK of the SCNet model's predicted landmarks was 98.6-99.7% on the internal test dataset and 98-100% on the external test dataset. (2) In the internal test dataset, for the parameters LADS, ALI, LW, and ASDD, there was strong correlation and consistency between the SCNet model predictions and the manual measurements (ICC = 0.80-0.96, r = 0.86-0.96, MAE = 0.47-2.39 mm/°, MSE = 0.38-8.55 mm²/°², RMSE = 0.62-2.92 mm/°). (3) The same four parameters also showed strong correlation and consistency between SCNet and manual measurements in the external test dataset (ICC = 0.81-0.91, r = 0.82-0.91, MAE = 0.46-2.29 mm/°, MSE = 0.29-8.23 mm²/°², RMSE = 0.54-2.87 mm/°). The SCNet model constructed in this study can accurately identify atlantoaxial vertebral landmarks in cervical open-mouth view radiographs and automatically measure the AAS-related imaging parameters. Furthermore, the independent external test set demonstrates that the model exhibits a degree of robustness and generalization capability on radiographs that meet imaging standards.
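The PCK metric reported above is straightforward to compute; a minimal sketch on synthetic landmark coordinates, assuming a 2 mm threshold:

```python
import numpy as np

def pck(pred_mm, ref_mm, threshold_mm=2.0):
    """pred_mm, ref_mm: (N, 2) landmark coordinates in millimetres.
    Returns the fraction of predictions within threshold of the reference."""
    dists = np.linalg.norm(pred_mm - ref_mm, axis=1)
    return (dists <= threshold_mm).mean()

rng = np.random.default_rng(0)
ref = rng.uniform(0, 100, size=(365, 2))              # synthetic reference landmarks
pred = ref + rng.normal(scale=0.7, size=ref.shape)    # simulated model error
print(f"PCK@2mm: {pck(pred, ref):.1%}")
```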

Automated Detection of Lacunes in Brain MR Images Using SAM with Robust Prompts via Self-Distillation and Anatomy-Informed Priors

Deepika, P., Shanker, G., Narayanan, R., Sundaresan, V.

medRxiv preprint · Jul 10 2025
Lacunes, which are small fluid-filled cavities in the brain, are signs of cerebral small vessel disease and have been clinically associated with various neurodegenerative and cerebrovascular diseases. Hence, accurate detection of lacunes is crucial and is one of the initial steps for the precise diagnosis of these diseases. However, developing a robust and consistently reliable method for detecting lacunes is challenging because of the heterogeneity in their appearance, contrast, shape, and size. To address these challenges, in this study we propose a lacune detection method using the Segment Anything Model (SAM), guided by point prompts from a candidate prompt generator. The prompt generator initially detects potential lacunes with high sensitivity using a composite loss function. The SAM model then selects true lacunes by distinguishing their characteristics from mimics such as sulci and enlarged perivascular spaces, imitating clinicians' strategy of examining potential lacunes along all three axes. False positives were further reduced by adaptive thresholds based on the region-wise prevalence of lacunes. We evaluated our method on two diverse, multi-centric MRI datasets, VALDO and ISLES, comprising only FLAIR sequences. Despite diverse imaging conditions and significant variations in slice thickness (0.5-6 mm), our method achieved sensitivities of 84% and 92%, with average false positive rates of 0.05 and 0.06 per slice in the ISLES and VALDO datasets, respectively. The proposed method outperformed state-of-the-art methods, demonstrating its effectiveness in lacune detection and quantification.
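Prompting SAM with candidate points, as the method describes, might look like the sketch below using the public segment-anything package; the checkpoint path and the upstream prompt generator are placeholders, not the authors' components:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical path
predictor = SamPredictor(sam)

def segment_candidates(flair_slice_rgb: np.ndarray, candidate_points: np.ndarray):
    """flair_slice_rgb: HxWx3 uint8 slice; candidate_points: (K, 2) (x, y)
    prompts from a lacune candidate generator. Returns (mask, score) pairs."""
    predictor.set_image(flair_slice_rgb)
    masks_out = []
    for pt in candidate_points:
        masks, scores, _ = predictor.predict(
            point_coords=pt[None, :],
            point_labels=np.ones(1),       # 1 = foreground prompt
            multimask_output=False,
        )
        masks_out.append((masks[0], float(scores[0])))
    return masks_out  # downstream: reject mimics via shape/size/prevalence rules
```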

Depth-Sequence Transformer (DST) for Segment-Specific ICA Calcification Mapping on Non-Contrast CT

Xiangjian Hou, Ebru Yaman Akcicek, Xin Wang, Kazem Hashemizadeh, Scott Mcnally, Chun Yuan, Xiaodong Ma

arXiv preprint · Jul 10 2025
While total intracranial carotid artery calcification (ICAC) volume is an established stroke biomarker, growing evidence shows this aggregate metric ignores the critical influence of plaque location, since calcification in different segments carries distinct prognostic and procedural risks. However, a finer-grained, segment-specific quantification has remained technically infeasible. Conventional 3D models are forced to process downsampled volumes or isolated patches, sacrificing the global context required to resolve anatomical ambiguity and render reliable landmark localization. To overcome this, we reformulate the 3D challenge as a Parallel Probabilistic Landmark Localization task along the 1D axial dimension. We propose the Depth-Sequence Transformer (DST), a framework that processes full-resolution CT volumes as sequences of 2D slices, learning to predict N = 6 independent probability distributions that pinpoint key anatomical landmarks. Our DST framework demonstrates exceptional accuracy and robustness. Evaluated on a 100-patient clinical cohort with rigorous 5-fold cross-validation, it achieves a Mean Absolute Error (MAE) of 0.1 slices, with 96% of predictions falling within a ±1 slice tolerance. Furthermore, to validate its architectural power, the DST backbone establishes the best result on the public Clean-CC-CCII classification benchmark under an end-to-end evaluation protocol. Our work delivers the first practical tool for automated segment-specific ICAC analysis. The proposed framework provides a foundation for further studies on the role of location-specific biomarkers in diagnosis, prognosis, and procedural planning. Our code will be made publicly available.
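A rough PyTorch sketch of the idea in the abstract, with invented layer sizes: each axial slice is embedded, a transformer runs along the depth axis, and six per-landmark probability distributions over slice index are read out:

```python
import torch
import torch.nn as nn

class DepthSequenceSketch(nn.Module):
    def __init__(self, d_model=256, n_landmarks=6):
        super().__init__()
        self.slice_encoder = nn.Sequential(            # stand-in 2D slice encoder
            nn.Conv2d(1, 32, 7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d_model),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.depth_transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.heads = nn.Linear(d_model, n_landmarks)   # per-slice landmark logits

    def forward(self, volume):                         # volume: (B, D, 1, H, W)
        b, d = volume.shape[:2]
        feats = self.slice_encoder(volume.flatten(0, 1)).view(b, d, -1)
        feats = self.depth_transformer(feats)          # context along depth axis
        logits = self.heads(feats)                     # (B, D, 6)
        return logits.softmax(dim=1)                   # per-landmark distribution over depth

probs = DepthSequenceSketch()(torch.randn(1, 64, 1, 128, 128))
print(probs.shape, probs.sum(dim=1))                   # each landmark sums to 1 over slices
```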

Artificial Intelligence for Low-Dose CT Lung Cancer Screening: Comparison of Utilization Scenarios.

Lee M, Hwang EJ, Lee JH, Nam JG, Lim WH, Park H, Park CM, Choi H, Park J, Goo JM

PubMed · Jul 10 2025
BACKGROUND. Artificial intelligence (AI) tools for evaluating low-dose CT (LDCT) lung cancer screening examinations are used predominantly for assisting radiologists' interpretations. Alternate utilization scenarios (e.g., use of AI as a prescreener or backup) warrant consideration. OBJECTIVE. The purpose of this study was to evaluate the impact of different AI utilization scenarios on diagnostic outcomes and interpretation times for LDCT lung cancer screening. METHODS. This retrospective study included 366 individuals (358 men, 8 women; mean age, 64 years) who underwent LDCT from May 2017 to December 2017 as part of an earlier prospective lung cancer screening trial. Examinations were interpreted by one of five readers, who reviewed their assigned cases in two sessions (with and without a commercial AI computer-aided detection tool). These interpretations were used to reconstruct simulated AI utilization scenarios: as an assistant (i.e., radiologists interpret all examinations with AI assistance), as a prescreener (i.e., radiologists only interpret examinations with a positive AI result), or as backup (i.e., radiologists reinterpret examinations when AI suggests a missed finding). A group of thoracic radiologists determined the reference standard. Diagnostic outcomes and mean interpretation times were assessed. Decision-curve analysis was performed. RESULTS. Compared with interpretation without AI (recall rate, 22.1%; per-nodule sensitivity, 64.2%; per-examination specificity, 88.8%; mean interpretation time, 164 seconds), AI as an assistant showed a higher recall rate (30.3%; p < .001), lower per-examination specificity (81.1%), and no significant change in per-nodule sensitivity (64.8%; p = .86) or mean interpretation time (161 seconds; p = .48); AI as a prescreener showed a lower recall rate (20.8%; p = .02) and mean interpretation time (143 seconds; p = .001), higher per-examination specificity (90.3%; p = .04), and no significant difference in per-nodule sensitivity (62.9%; p = .16); and AI as a backup showed an increased recall rate (33.6%; p < .001), per-examination sensitivity (66.4%; p < .001), and mean interpretation time (225 seconds; p = .001), with lower per-examination specificity (79.9%; p < .001). Among scenarios, only AI as a prescreener demonstrated higher net benefit than interpretation without AI; AI as an assistant had the least net benefit. CONCLUSION. Different AI implementation approaches yield varying outcomes. The findings support use of AI as a prescreener as the preferred scenario. CLINICAL IMPACT. An approach whereby radiologists only interpret LDCT examinations with a positive AI result can reduce radiologists' workload while preserving sensitivity.
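To make the three scenarios concrete, the toy simulation below reconstructs recall decisions from per-examination reads; the decision rules are paraphrased from the abstract and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 366
ai_positive = rng.random(n) < 0.30     # AI flags the examination
rad_alone = rng.random(n) < 0.22       # radiologist recall without AI
# synthetic aided read: AI-assisted radiologists recall some extra AI-flagged cases
rad_with_ai = rad_alone | (ai_positive & (rng.random(n) < 0.4))

assistant = rad_with_ai                           # radiologist reads all exams with AI
prescreener = ai_positive & rad_with_ai           # radiologist reads only AI-positive exams
backup = rad_alone | (ai_positive & ~rad_alone    # AI prompts re-reads of negatives;
         & (rng.random(n) < 0.5))                 # assume half of re-reads convert to recalls

for name, recalls in [("assistant", assistant),
                      ("prescreener", prescreener),
                      ("backup", backup)]:
    print(f"{name:11s} recall rate: {recalls.mean():.1%}")
```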

Hierarchical deep learning system for orbital fracture detection and trap-door classification on CT images.

Oku H, Nakamura Y, Kanematsu Y, Akagi A, Kinoshita S, Sotozono C, Koizumi N, Watanabe A, Okumura N

PubMed · Jul 10 2025
To develop and evaluate a hierarchical deep learning system that detects orbital fractures on computed tomography (CT) images and classifies them as depressed or trap-door types. A retrospective diagnostic accuracy study analyzing CT images from patients with confirmed orbital fractures. We collected CT images from 686 patients with orbital fractures treated at a single institution (2010-2025), resulting in 46,013 orbital CT slices. After preprocessing, 7809 slices were selected as regions of interest and partitioned into training (6508 slices) and test (1301 slices) datasets. Our hierarchical approach consisted of a first-stage classifier (YOLOv8) for fracture detection and a second-stage classifier (Vision Transformer) for distinguishing depressed from trap-door fractures. Performance was evaluated at both slice and patient levels using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). For fracture detection, YOLOv8 achieved a slice-level sensitivity of 80.4% and specificity of 79.2%, with patient-level performance improving to 94.7% sensitivity and 90.0% specificity. For fracture classification, the Vision Transformer demonstrated a slice-level sensitivity of 91.5% and specificity of 83.5% for trap-door versus depressed fractures, with patient-level metrics of 100% sensitivity and 88.9% specificity. The complete system correctly identified 18/20 no-fracture cases, 35/40 depressed fracture cases, and 15/17 trap-door fracture cases. Our hierarchical deep learning system effectively detects orbital fractures and distinguishes between depressed and trap-door types with high accuracy. This approach could aid in the timely identification of trap-door fractures requiring urgent surgical intervention, particularly in settings lacking specialized expertise.
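A hedged sketch of the two-stage hierarchy using the ultralytics and timm packages; the weight files, preprocessing, and class indices are placeholders, not the paper's trained models:

```python
import timm
import torch
from ultralytics import YOLO

detector = YOLO("orbital_fracture_yolov8.pt")   # hypothetical fine-tuned weights
classifier = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=2)
classifier.eval()

def read_slice(ct_slice_rgb):
    """Returns None (no fracture) or 'depressed' / 'trap-door' for one CT slice."""
    det = detector(ct_slice_rgb, verbose=False)[0]   # stage 1: fracture detection
    if len(det.boxes) == 0:
        return None
    # stage 2: fracture-type classification on the flagged slice
    x = torch.as_tensor(ct_slice_rgb).permute(2, 0, 1)[None].float() / 255.0
    x = torch.nn.functional.interpolate(x, size=(224, 224))
    with torch.no_grad():
        cls = classifier(x).argmax(1).item()
    return ["depressed", "trap-door"][cls]           # assumed class ordering
```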
