Wang F, Deng W, Zhong Z

pubmed papers · Oct 16 2025
This systematic review and meta-analysis aimed to compare the diagnostic performance of MRI-based deep learning (DL) algorithms versus radiologists in detecting lymph node metastasis (LNM) in colorectal cancer (CRC). A comprehensive literature search was conducted in PubMed, Embase, and Web of Science up to June 30, 2025, for studies evaluating MRI-based DL algorithms for LNM diagnosis, using histopathology as the reference standard. Pooled sensitivity, specificity, and area under the curve (AUC) were calculated using a bivariate random-effects model. Risk of bias and applicability were assessed using the PROBAST+AI tool, and certainty of evidence was rated with the GRADE approach. A total of 10 studies met the inclusion criteria. Internal validation cohorts (9 studies, n=1850) showed pooled sensitivity of 0.89 (95% CI: 0.80-0.94), specificity of 0.85 (95% CI: 0.77-0.91), and AUC of 0.93 (95% CI: 0.91-0.95). Radiologists achieved lower pooled sensitivity of 0.65 (95% CI: 0.60-0.71) and specificity of 0.74 (95% CI: 0.71-0.77), with an AUC of 0.76 (95% CI: 0.73-0.80). DL algorithms in internal validation cohorts consistently outperformed junior radiologists on all metrics and demonstrated higher sensitivity and AUC than senior radiologists (all P<0.05). MRI-based DL algorithms show promising diagnostic performance in detecting LNM in CRC, with performance generally higher than that reported for radiologists in internal validation cohorts, particularly junior readers. However, most included studies were retrospective and originated from China, limiting generalizability. Prospective, multicenter studies are warranted to validate these findings across diverse populations.
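For readers unfamiliar with the pooling step, the sketch below shows a simplified univariate DerSimonian-Laird random-effects pooling of logit-transformed sensitivities; the study itself uses the full bivariate (Reitsma-type) model, and the study-level values here are illustrative, not taken from the paper.

```python
import numpy as np

def dersimonian_laird_pool(props, ns):
    """Pool proportions (e.g., per-study sensitivities) on the logit scale
    with a DerSimonian-Laird random-effects estimator.
    props: study-level proportions in (0, 1); ns: study sample sizes.
    Returns the pooled proportion and its 95% CI."""
    props, ns = np.asarray(props, float), np.asarray(ns, float)
    y = np.log(props / (1 - props))                 # logit transform
    v = 1 / (ns * props) + 1 / (ns * (1 - props))   # within-study variance
    w = 1 / v
    # Cochran's Q and the DL between-study variance tau^2
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    # Random-effects weights and pooled estimate on the logit scale
    w_re = 1 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    inv = lambda x: 1 / (1 + np.exp(-x))            # back-transform to (0, 1)
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se))

# Illustrative (not the paper's) study-level sensitivities and case counts
sens, n_pos = [0.91, 0.85, 0.93, 0.88], [120, 95, 210, 150]
print(dersimonian_laird_pool(sens, n_pos))
```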

Lam-Rachlin J, Punn R, Behera SK, Geiger M, Lachaud M, David N, Garmel S, Fox NS, Rebarber A, DeVore GR, Zelop CM, Janssen MK, Sylvester-Armstrong KR, Kennedy J, Spiegelman J, Heiligenstein M, Bessis R, Mobeen S, Kia F, Friedman C, Melka S, Stos B, De Boisredon M, Askinazi E, Thorey V, Gardella C, Levy M, Arunamata A

pubmed papers · Oct 16 2025
To evaluate whether artificial intelligence (AI)-based software was associated with enhanced identification of eight second-trimester fetal ultrasound findings suspicious for congenital heart defects (CHDs) among obstetrician-gynecologists (ob-gyns) and maternal-fetal medicine specialists. A dataset of 200 fetal ultrasound examinations from 11 centers, including 100 with at least one suspicious finding, was retrospectively constituted (singleton pregnancies, 18-24 weeks of gestation, patients aged 18 years or older). Only examinations containing two-dimensional grayscale cines with interpretable four-chamber, left ventricular outflow tract, and right ventricular outflow tract standard views were included. Seven ob-gyns and seven maternal-fetal medicine specialists reviewed each examination in randomized order, both with and without AI assistance, and assessed the presence or absence of each finding suspicious for CHD with confidence scores. Outcomes included readers' performance in identifying the presence of any finding, and of each finding, at the examination level, as measured by the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity; reading time and confidence were also evaluated. Detection of any suspicious finding improved significantly for AI-aided compared with unaided readers, with a higher AUROC (0.974 [95% CI, 0.957-0.990] vs 0.825 [95% CI, 0.741-0.908], P=.002), sensitivity (0.935 [95% CI, 0.892-0.978] vs 0.782 [95% CI, 0.686-0.878]), and specificity (0.970 [95% CI, 0.949-0.991] vs 0.759 [95% CI, 0.630-0.887]). AI assistance also resulted in a significant decrease in clinician interpretation time and an increase in clinician confidence score (226 seconds [95% CI, 218-234] vs 274 seconds [95% CI, 265-283], P<.001; 4.63 [95% CI, 4.60-4.66] vs 3.90 [95% CI, 3.85-3.95], P<.001, respectively). The use of AI-based software to assist clinicians was associated with enhanced identification of findings suspicious for CHD on prenatal ultrasonography.
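The abstract does not state the exact statistical procedure; one common way to compare paired reader AUROCs is an exam-level bootstrap, sketched below on synthetic data (all values illustrative).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def paired_auc_bootstrap(y, scores_aided, scores_unaided, n_boot=2000):
    """Bootstrap the AUROC difference between AI-aided and unaided reads
    of the same examinations (paired resampling at the exam level)."""
    y = np.asarray(y)
    a, u = np.asarray(scores_aided), np.asarray(scores_unaided)
    idx, diffs = np.arange(len(y)), []
    for _ in range(n_boot):
        b = rng.choice(idx, size=len(idx), replace=True)
        if len(np.unique(y[b])) < 2:   # need both classes to compute AUROC
            continue
        diffs.append(roc_auc_score(y[b], a[b]) - roc_auc_score(y[b], u[b]))
    diffs = np.array(diffs)
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])

# Toy reader confidence scores for 200 exams (synthetic, not study data)
y = rng.integers(0, 2, 200)
unaided = y * 0.6 + rng.normal(0, 0.45, 200)
aided = y * 1.2 + rng.normal(0, 0.35, 200)
print(paired_auc_bootstrap(y, aided, unaided))
```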

Foraker R, Sperling L, Bratzke L, Budoff M, Leppert M, Razavi AC, Rodriguez F, Shapiro MD, Whelton S, Wong ND, Yang E

pubmed papers · Oct 16 2025
Coronary artery calcium (CAC) is a marker of subclinical atherosclerosis that confers increased risk of atherosclerotic cardiovascular disease (ASCVD). Measured by noncontrast cardiac computed tomography, CAC improves risk stratification beyond traditional risk factors and can aid decision-making for allocation of preventive treatments. Although national guidelines recommend consideration of CAC measurement for more than 17 million individuals in the United States with borderline to intermediate 10-year ASCVD risk, adoption has been limited. A promising approach to bridging this gap is opportunistic detection of CAC on non-ECG-gated chest computed tomography scans performed for noncardiac indications. Approximately 19 million such scans are performed per year, and reporting opportunistically detected CAC from them can enhance ASCVD risk stratification without additional radiation exposure, cost, or burden. Because traditional risk factor scoring is underused, reporting of opportunistically detected CAC can alert physicians to risk independent of guideline-recommended risk calculator use. Advances in artificial intelligence allow integration of automated CAC quantification into clinical practice, and several algorithms are already in use to improve the likelihood that opportunistic CAC is reported and preventive therapies are appropriately allocated. Systematic approaches are needed to ensure appropriate reporting, interpretation, and action while avoiding unnecessary downstream testing. Implementation is essential and should include tailored preventive care and streamlined care pathways involving multidisciplinary teams spanning radiology, cardiology, and primary care.
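Automated CAC quantification typically reduces to the standard Agatston algorithm; a minimal per-slice sketch, assuming calibrated Hounsfield units and the classic 130-HU threshold with density weights (slice-thickness correction for non-gated scans is omitted).

```python
import numpy as np
from scipy import ndimage

def agatston_score(slice_hu, pixel_area_mm2, min_lesion_mm2=1.0):
    """Agatston score for one axial CT slice: lesions are connected
    components >= 130 HU; each contributes its area (mm^2) times a
    density weight from its peak HU (1: 130-199, 2: 200-299,
    3: 300-399, 4: >= 400)."""
    labels, n = ndimage.label(slice_hu >= 130)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < min_lesion_mm2:          # ignore sub-millimeter specks
            continue
        peak = slice_hu[lesion].max()
        weight = min(4, int(peak // 100))  # 130-199 -> 1, 200-299 -> 2, ...
        score += area * weight
    return score

# Toy slice: a 9-pixel lesion peaking at 310 HU (weight 3)
slice_hu = np.zeros((64, 64))
slice_hu[30:33, 30:33] = 310.0
print(agatston_score(slice_hu, pixel_area_mm2=0.25))  # -> 6.75
```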

Shi Y, Guo C, Xu Z, Tan L, Huang K

pubmed papers · Oct 16 2025
Deep learning techniques have achieved significant advances in medical image segmentation in recent years. However, model generalization remains severely constrained by domain shift, particularly in cross-modal medical image segmentation tasks. Traditional segmentation models struggle to generalize to unseen target domains because joint probability distributions differ across medical imaging modalities. Existing methods primarily rely on unsupervised domain adaptation (UDA) and domain generalization (DG). While UDA methods face practical limitations because target domain data are difficult to obtain, current DG approaches often overlook the anatomical priors inherent in medical images as well as the heterogeneity and sparsity of lesion regions. To address these challenges, this paper proposes a cross-modal medical image segmentation framework that integrates the Vision Mamba model with dynamic domain generalization. The framework achieves cross-domain feature alignment and multi-scale feature fusion by leveraging bidirectional state-space sequence modeling, Bézier curve-style intensity enhancement, and a dual-normalization strategy. Additionally, the VEBlock module is introduced, which combines the dynamic sequence modeling capability of the Mamba model with non-local attention mechanisms to better capture cross-modal global dependencies. Experimental results on the BraTS 2018 and cross-modal cardiac datasets demonstrate significant improvements in cross-modal segmentation. For example, on the T2 → T1 task, the framework achieves an average Dice score of 56.22%, outperforming baseline methods by 1.78% while reducing the Hausdorff distance for tumor boundaries to 13.26 mm. On the cardiac CT → MRI task, the Hausdorff distance improves to 27.34 mm, validating the framework's generalization to complex anatomical structures.
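The Bézier curve-style enhancement mentioned here is commonly implemented as a random monotonic intensity remapping of the source images; a minimal sketch under that assumption (not necessarily the authors' exact variant).

```python
import numpy as np

def bezier_intensity_transform(img, rng=np.random.default_rng()):
    """Remap normalized intensities through a random cubic Bezier curve,
    a common source-domain augmentation for single-source domain
    generalization. img is assumed normalized to [0, 1]."""
    p1, p2 = rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)  # control points
    t = np.linspace(0, 1, 1000)
    # Cubic Bezier with endpoints fixed at (0, 0) and (1, 1)
    bx = 3 * (1 - t) ** 2 * t * p1[0] + 3 * (1 - t) * t ** 2 * p2[0] + t ** 3
    by = 3 * (1 - t) ** 2 * t * p1[1] + 3 * (1 - t) * t ** 2 * p2[1] + t ** 3
    # np.interp needs increasing x values, so sort the curve along x
    order = np.argsort(bx)
    return np.interp(img, bx[order], by[order])

augmented = bezier_intensity_transform(np.random.rand(32, 32))
```

Some published variants also randomly invert the mapping (1 minus the output) to mimic unseen-modality appearance shifts.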

Wang J, Yang B, Liu S, Zheng X, Yao W, Chen J

pubmed papers · Oct 16 2025
Vision Graph Neural Networks (ViG) recognize objects through graph-level processing. However, ViG constructs graphs from appearance-level neighbors and neglects category semantics. This oversight unintentionally connects patches that belong to different objects, hurting the distinctiveness of categories in multi-label medical image learning. Since pixel-level annotations are not easily available, category-aware graphs cannot be built directly. To solve this problem, we localize category-specific regions using Class Activation Maps (CAMs), an effective way to highlight regions belonging to each category without manual annotations. Specifically, we propose a CAM-interacted Vision GNN (CiV-GNN), in which category-aware graphs are formed to perform intra-category graph processing. CiV-GNN includes a Class-activated Patch Division (CAPD) module, which uses CAMs as guidance for category-aware graph building. Furthermore, we develop a Multi-graph Interactive Processing (MIP) module to model the relations between category-aware graphs, promoting inter-category interaction learning. Experimental results show that CiV-GNN performs well in surgical tool localization and multi-label medical image classification. Specifically, on m2cai16-localization, CiV-GNN improves mAP50 and mAP50-95 by 1.43% and 7.02%, respectively, compared with YOLOv8.
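CAM-guided patch grouping can be sketched as follows; the threshold, grid size, and any refinement are illustrative assumptions, not the CAPD module itself.

```python
import torch
import torch.nn.functional as F

def cam_patch_assignment(feats, fc_weight, class_idx, grid=14, thresh=0.5):
    """Compute a Class Activation Map and mark which patches in a
    grid x grid token layout belong to the given class.

    feats:     (C, H, W) feature maps from the last conv stage
    fc_weight: (num_classes, C) weights of the final linear classifier
    """
    # CAM = class-specific weighted sum of feature maps
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], feats)
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Resize to the patch grid and threshold into a membership mask
    cam = F.interpolate(cam[None, None], size=(grid, grid),
                        mode='bilinear', align_corners=False)[0, 0]
    return cam > thresh   # boolean (grid, grid) mask of class-owned patches

mask = cam_patch_assignment(torch.randn(512, 7, 7),
                            torch.randn(10, 512), class_idx=3)
```

Patches sharing a True entry for the same class would then be wired into that class's category-aware graph; the paper's CAPD module presumably refines this simple thresholding.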

Yu L, Chen GH, Fletcher JG, Jiang L, Kachelrieß M, Zeng R, Zhou Z

pubmed papers · Oct 16 2025
This article provides an overview of deep-learning-based techniques for CT image reconstruction and processing (referred to as "DLR"), covering technical implementations, performance evaluation, radiation dose reduction, and future perspectives. DLR methods can be categorized into projection-space, projection-to-image-space, image-space, and various hybrid techniques, with applications such as noise reduction, artifact correction, and spatial resolution enhancement. Performance evaluations include phantom-based studies, patient-image-based studies, and virtual imaging trials. These studies have demonstrated that DLR can effectively reduce image noise while preserving image texture similar to that of traditional filtered-backprojection (FBP) images, although the extent of radiation dose reduction varies widely with the study and the specific diagnostic task. Challenges remain in low-contrast lesion detection and characterization, where achievable dose reduction may still be less than 50% relative to traditional reconstruction methods. Additionally, the potential for DLR methods to generate false structures, or "hallucinations," especially at low radiation doses, underscores the need for effective monitoring and mitigation strategies from both technical and clinical perspectives. Quantitative, accurate, and efficient evaluation techniques, such as virtual imaging trial-based methods, can be explored to help optimize these algorithms for reducing radiation dose and enhancing diagnostic performance.
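As a concrete instance of the image-space category, a generic residual denoising CNN (in the spirit of DnCNN, not any specific vendor's DLR product) can be sketched as follows.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Generic image-space denoiser: learns the noise component of a
    low-dose FBP reconstruction and subtracts it (residual learning)."""
    def __init__(self, channels=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, low_dose):
        return low_dose - self.net(low_dose)   # subtract predicted noise

# Training would pair low-dose FBP inputs with routine-dose targets;
# texture-preserving losses are one lever for keeping an FBP-like look.
denoised = ResidualDenoiser()(torch.randn(1, 1, 512, 512))
```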

Abbadi YA, Al-Ghraibah A, Altayeb M

pubmed papers · Oct 16 2025
Tooth cavities are primarily driven by sugar-induced bacterial activity that progressively erodes dental structures. Advances in medical image processing provide dentists with valuable tools to support accurate diagnosis and the selection of appropriate therapeutic interventions, thereby improving oral healthcare. This study presents an automated dental disease detection system designed to reduce clinician workload, minimise diagnostic time, and lower the risk of human error. Dental radiographs are first subjected to noise reduction, greyscale conversion, filtering, and resizing, followed by the extraction of discriminative features using wavelet analysis, the Gray-Level Co-Occurrence Matrix (GLCM), and texture analysis. These features were used to train and evaluate machine learning classifiers, specifically Support Vector Machine (SVM) and Neural Network (NN) models. The system achieved classification accuracies of 80% with the SVM and 77% with the NN when all features were combined. The primary objective is to classify dental X-ray images as normal or abnormal and to further identify abnormalities such as caries. Compared with conventional diagnostic methods, the proposed automated approach enables faster and more reliable detection of dental disease, with the potential to support dentists in clinical decision-making and enhance the quality of patient care.
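A minimal sketch of the described feature-extraction-plus-SVM pipeline with common library choices (scikit-image for GLCM, PyWavelets for wavelets, scikit-learn for the SVM); the paper's exact settings are not given, so parameters below are assumptions and the data are synthetic.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def extract_features(img_u8):
    """GLCM texture statistics plus wavelet sub-band energies for one
    8-bit grayscale radiograph."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean()
               for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
    # Single-level 2D wavelet decomposition; mean energy of each sub-band
    cA, (cH, cV, cD) = pywt.dwt2(img_u8.astype(float), 'db2')
    wavelet = [np.mean(np.abs(c)) for c in (cA, cH, cV, cD)]
    return np.array(texture + wavelet)

# X: stacked feature vectors, y: 0 = normal, 1 = abnormal (e.g., caries)
X = np.stack([extract_features(np.random.randint(0, 256, (128, 128),
                                                 dtype=np.uint8))
              for _ in range(20)])
y = np.random.randint(0, 2, 20)
clf = SVC(kernel='rbf').fit(X, y)
```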

Shokrollahi P, Zambrano Chavez JM, Lam JPH, Sharma AA, Pal D, Bahrami N, Gatidis S, Chaudhari AS, Loening AM

pubmed papers · Oct 16 2025
Selection of radiology imaging protocols is a vital step in the radiology workflow: incorrect protocol selection can lead to suboptimal imaging and thereby jeopardize patient health, delay treatment, and/or increase healthcare costs. However, the task is generally considered an inefficient use of radiologists' time. We developed a machine learning (ML) system that accurately predicts radiology protocols from patients' electronic medical record (EMR) data. The system is an ensemble of three decision tree (DT)-based techniques trained to assign protocols for body computed tomography (CT) examinations. The 15 most common CT abdomen protocols were used to tune the models, and the system was designed to provide the three most probable predictions for radiologist review. Our ensemble classifier, with an F1 score of approximately 83%, outperformed each individual model (mean F1 score of approximately 80%) in 5-fold cross-validation, and performed best on the top-three predictions with an F1 score of 95.5%, surpassing the individual models (F1 scores of 87.6% to 92.9%). In conclusion, the present study demonstrates that ML techniques can predict radiology protocols and identify key classification-dependent features. These models could be leveraged as a clinical decision support system to improve radiologists' efficiency.
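A hedged sketch of such an ensemble with scikit-learn: the abstract does not name the three DT-based techniques, so the model choices below (random forest, gradient boosting, extremely randomized trees) and all variable names are assumptions.

```python
import numpy as np
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier,
                              ExtraTreesClassifier, VotingClassifier)

# Three decision-tree-based models combined by averaged probabilities
ensemble = VotingClassifier(
    estimators=[('rf', RandomForestClassifier(n_estimators=200)),
                ('gb', GradientBoostingClassifier()),
                ('et', ExtraTreesClassifier(n_estimators=200))],
    voting='soft')

def top3_protocols(model, X, protocol_names):
    """Return the three most probable protocols per exam for
    radiologist review."""
    proba = model.predict_proba(X)
    top = np.argsort(proba, axis=1)[:, ::-1][:, :3]
    return [[protocol_names[j] for j in row] for row in top]

# X_train rows would be EMR-derived features (order indication, labs, ...);
# y_train the assigned protocol among the 15 most common CT abdomen protocols.
# ensemble.fit(X_train, y_train)
# print(top3_protocols(ensemble, X_test, protocol_names))
```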

Pedro R. A. S. Bassi, Xinze Zhou, Wenxuan Li, Szymon Płotka, Jieneng Chen, Qi Chen, Zheren Zhu, Jakub Prządo, Ibrahim E. Hamacı, Sezgin Er, Yuhan Wang, Ashwin Kumar, Bjoern Menze, Jarosław B. Ćwikła, Yuyin Zhou, Akshay S. Chaudhari, Curtis P. Langlotz, Sergio Decherchi, Andrea Cavalli, Kang Wang, Yang Yang, Alan L. Yuille, Zongwei Zhou

arxiv preprint · Oct 16 2025
Early tumor detection saves lives. Each year, more than 300 million computed tomography (CT) scans are performed worldwide, offering a vast opportunity for effective cancer screening. However, detecting small or early-stage tumors on these CT scans remains challenging, even for experts. Artificial intelligence (AI) models can assist by highlighting suspicious regions, but training such models typically requires extensive tumor masks: detailed, voxel-wise outlines of tumors manually drawn by radiologists. Drawing these masks is costly, requiring years of effort and millions of dollars. In contrast, nearly every CT scan in clinical practice is already accompanied by a medical report describing the tumor's size, number, appearance, and sometimes pathology results: information that is rich, abundant, and often underutilized for AI training. We introduce R-Super, which trains AI to segment tumors that match their descriptions in medical reports. This approach scales AI training with large collections of readily available medical reports, substantially reducing the need for manually drawn tumor masks. When trained on 101,654 reports, AI models achieved performance comparable to those trained on 723 masks. Combining reports and masks further improved sensitivity by +13% and specificity by +8%, surpassing radiologists in detecting five of the seven tumor types. Notably, R-Super enabled segmentation of tumors in the spleen, gallbladder, prostate, bladder, uterus, and esophagus, for which no public masks or AI models previously existed. This study challenges the long-held belief that large-scale, labor-intensive tumor mask creation is indispensable, establishing a scalable and accessible path toward early detection across diverse tumor types. We plan to release our trained models, code, and dataset at https://github.com/MrGiovanni/R-Super
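The abstract does not spell out the training objective, but the general idea of supervising segmentation with report-derived facts can be illustrated with a toy consistency loss; everything below (names, terms, weighting) is an illustrative assumption, not R-Super's published objective.

```python
import torch

def report_consistency_loss(pred_mask, reported_volume_ml, voxel_ml,
                            reported_present=True):
    """Penalize disagreement between a soft predicted tumor mask and
    coarse report facts (tumor presence and approximate volume).

    pred_mask: (D, H, W) sigmoid probabilities for one tumor class
    """
    pred_volume = pred_mask.sum() * voxel_ml
    if not reported_present:
        return pred_volume                 # report says no tumor: shrink mask
    # Relative volume error against the size stated in the report
    vol_term = torch.abs(pred_volume - reported_volume_ml) / reported_volume_ml
    # Encourage at least one confident voxel when a tumor is reported
    presence_term = 1.0 - pred_mask.max()
    return vol_term + presence_term

loss = report_consistency_loss(torch.rand(64, 64, 64) * 0.1,
                               reported_volume_ml=4.2, voxel_ml=0.001)
```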

Rui Luo, Peng Hu, Haikun Qi

arxiv preprint · Oct 16 2025
The Density Compensation Function (DCF) is widely used in non-Cartesian MRI reconstruction, both for direct Non-Uniform Fast Fourier Transform (NUFFT) reconstruction and for iterative undersampled reconstruction. Current state-of-the-art methods require tens of time-consuming iterations, one of the main hurdles to widespread application of highly efficient non-Cartesian MRI. In this paper, we propose an efficient, non-iterative method that calculates the DCF for arbitrary non-Cartesian $k$-space trajectories using Fast Fourier Deconvolution. Simulation experiments demonstrate that the proposed method yields DCFs for 3D non-Cartesian reconstruction in around 20 seconds, an orders-of-magnitude speedup over the state-of-the-art method with similar reconstruction quality.