Page 30 of 45442 results

Deep learning detects retropharyngeal edema on MRI in patients with acute neck infections.

Rainio O, Huhtanen H, Vierula JP, Nurminen J, Heikkinen J, Nyman M, Klén R, Hirvonen J

pubmed logopapers · Jun 19 2025
In acute neck infections, magnetic resonance imaging (MRI) shows retropharyngeal edema (RPE), a prognostic imaging biomarker for a severe course of illness. This study aimed to develop a deep learning-based algorithm for the automated detection of RPE. We developed a deep neural network consisting of two parts, using axial T2-weighted water-only Dixon MR images from 479 patients with acute neck infections, annotated by radiologists at both the slice and patient levels. First, a convolutional neural network (CNN) classified individual slices; second, an algorithm classified patients based on a stack of slices. Model performance was compared with the radiologists' assessment as the reference standard. Accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were calculated. The proposed CNN was compared with InceptionV3, and the patient-level classification algorithm was compared with traditional machine learning models. Of the 479 patients, 244 (51%) were positive and 235 (49%) negative for RPE. Our model achieved accuracy, sensitivity, specificity, and AUROC of 94.6%, 83.3%, 96.2%, and 94.1% at the slice level, and 87.4%, 86.5%, 88.2%, and 94.8% at the patient level, respectively. The proposed CNN was faster than InceptionV3 but equally accurate. Our patient-level classification algorithm outperformed traditional machine learning models. A deep learning model based on weakly annotated data and computationally manageable training achieved high accuracy for automatically detecting RPE on MRI in patients with acute neck infections. Our automated method for detecting relevant MRI findings was efficiently trained and might be easily deployed in practice to study clinical applicability. This approach might improve early detection of patients at high risk for a severe course of acute neck infections. Deep learning automatically detected retropharyngeal edema on MRI in acute neck infections.
Areas under the receiver operating characteristic curve were 94.1% at the slice level and 94.8% at the patient level. The proposed convolutional neural network was lightweight and required only weakly annotated data.
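The two-stage design (a slice-level CNN followed by patient-level classification over the slice stack) can be illustrated with a minimal sketch. The max-over-slices aggregation rule below is an assumption for illustration, a common weak-label baseline, not the authors' published algorithm:

```python
def classify_patient(slice_probs, threshold=0.5):
    """Aggregate per-slice RPE probabilities into a patient-level call.

    slice_probs: probabilities emitted by a slice-level CNN for one
    patient's stack of axial slices. Max-pooling over slices (used here)
    is an assumed stand-in for the paper's patient-level algorithm.
    """
    patient_score = max(slice_probs)
    return patient_score, patient_score >= threshold

# A stack with one strongly positive slice yields a positive patient call.
score, positive = classify_patient([0.05, 0.12, 0.91, 0.30])
```

Under this rule a single confidently abnormal slice is enough to flag the patient, which matches the intuition that RPE need only be visible on some slices of the stack.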

Innovative technologies and their clinical prospects for early lung cancer screening.

Deng Z, Ma X, Zou S, Tan L, Miao T

pubmed logopapers · Jun 18 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, largely because effective early-stage screening approaches are lacking. Imaging modalities such as low-dose CT carry radiation risk, and biopsies can cause complications. Additionally, traditional serum tumor markers lack diagnostic specificity. This highlights the urgent need for precise and non-invasive early detection techniques. This systematic review aims to evaluate the limitations of conventional screening methods (imaging, biopsy, and tumor markers), examine breakthroughs in liquid biopsy for early lung cancer detection, and assess the potential value of artificial intelligence (AI), thereby providing evidence-based insights for establishing an optimal screening framework. We systematically searched the PubMed database for literature published up to May 2025. Key words included "Artificial Intelligence", "Early Lung cancer screening", "Imaging examination", "Innovative technologies", "Liquid biopsy", and "Puncture biopsy". Our inclusion criteria focused on studies of traditional and innovative screening methods, with an emphasis on original research on diagnostic performance and on high-quality reviews. This approach helps identify critical studies in early lung cancer screening. Novel liquid biopsy techniques are non-invasive and have superior diagnostic efficacy. AI-assisted diagnostics further enhance accuracy. We propose three development directions: establishing risk-based liquid biopsy screening protocols, developing a stepwise "imaging-AI-liquid biopsy" diagnostic workflow, and creating standardized biomarker panel testing solutions. Integrating traditional methodologies, novel liquid biopsies, and AI into a comprehensive early lung cancer screening model is important. These innovative strategies aim to significantly increase early detection rates, substantially enhancing lung cancer control. This review provides theoretical guidance for both clinical practice and future research.

Imaging Epilepsy: Past, Passing, and to Come.

Theodore WH, Inati SK, Adler S, Pearl PL, Mcdonald CR

pubmed logopapers · Jun 18 2025
New imaging techniques appearing over the last few decades have replaced procedures that were uncomfortable, of low specificity, and prone to adverse events. While computed tomography remains useful for imaging patients with seizures in acute settings, structural magnetic resonance imaging (MRI) has become the most important imaging modality for epilepsy evaluation. Adjunctive functional imaging is also increasingly well established in presurgical evaluation, including positron emission tomography (PET), ictal-interictal subtraction single-photon emission computed tomography co-registered to MRI, and functional MRI for preoperative cognitive mapping. Neuroimaging in inherited metabolic epilepsies is integral to diagnosis, monitoring, and assessment of treatment response. Neurotransmitter receptor PET and magnetic resonance spectroscopy can help delineate the pathophysiology of these disorders. Machine learning and artificial intelligence analyses based on large MRI datasets of healthy volunteers and people with epilepsy have been initiated to detect lesions that are not found visually, particularly focal cortical dysplasia. These methods, not yet approved for patient care, depend on careful clinical correlation and on training sets that fully sample broad populations.

2nd trimester ultrasound (anomaly).

Carocha A, Vicente M, Bernardeco J, Rijo C, Cohen Á, Cruz J

pubmed logopapers · Jun 17 2025
The second-trimester ultrasound is a crucial tool in prenatal care, typically performed between 18 and 24 weeks of gestation to evaluate fetal anatomy and growth and to carry out mid-trimester screening. This article provides a comprehensive overview of best practices and guidelines for performing this examination, with a focus on detecting fetal anomalies. The ultrasound assesses key structures and evaluates fetal growth by measuring biometric parameters, which are essential for estimating fetal weight. Additionally, the article discusses the importance of placental evaluation, measurement of amniotic fluid levels, and assessment of the risk of preterm birth through cervical length measurement. Factors that can affect the accuracy of the scan, such as operator skill, equipment quality, and maternal conditions such as obesity, are discussed, as are the limitations of the procedure, including variability in detection rates. Despite these challenges, the second-trimester ultrasound remains a valuable screening and diagnostic tool, providing essential information for managing pregnancies, especially in high-risk cases. Future directions include improving imaging technology, integrating artificial intelligence for anomaly detection, and standardizing ultrasound protocols to enhance diagnostic accuracy and ensure consistent prenatal care.

Beyond the First Read: AI-Assisted Perceptual Error Detection in Chest Radiography Accounting for Interobserver Variability

Adhrith Vutukuri, Akash Awasthi, David Yang, Carol C. Wu, Hien Van Nguyen

arxiv logopreprint · Jun 16 2025
Chest radiography is widely used in diagnostic imaging. However, perceptual errors, especially overlooked but visible abnormalities, remain common and clinically significant. Current workflows and AI systems provide limited support for detecting such errors after interpretation and often lack meaningful human-AI collaboration. We introduce RADAR (Radiologist-AI Diagnostic Assistance and Review), a post-interpretation companion system. RADAR ingests finalized radiologist annotations and CXR images, then performs region-level analysis to detect and refer potentially missed abnormal regions. The system supports a "second-look" workflow and offers suggested regions of interest (ROIs) rather than fixed labels to accommodate inter-observer variation. We evaluated RADAR on a simulated perceptual-error dataset derived from de-identified CXR cases, using the F1 score and Intersection over Union (IoU) as primary metrics. RADAR achieved a recall of 0.78, a precision of 0.44, and an F1 score of 0.56 in detecting missed abnormalities. Although precision is moderate, the referral-based design reduces over-reliance on AI by keeping radiologist oversight central to the human-AI collaboration. The median IoU was 0.78, with more than 90% of referrals exceeding 0.5 IoU, indicating accurate regional localization. RADAR effectively complements radiologist judgment, providing valuable post-read support for perceptual-error detection in CXR interpretation. Its flexible ROI suggestions and non-intrusive integration position it as a promising tool for real-world radiology workflows. To facilitate reproducibility and further evaluation, we release a fully open-source web implementation alongside a simulated error dataset. All code, data, demonstration videos, and the application are publicly available at https://github.com/avutukuri01/RADAR.
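The reported F1 of 0.56 is consistent with the stated precision and recall (F1 = 2PR/(P+R)), and the localization metric is box-level IoU. A minimal check, with a box-IoU helper of the usual kind (the corner-coordinate box format is an assumption, not taken from the paper):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(round(f1_score(0.44, 0.78), 2))  # 0.56, matching the abstract
```

The 0.5-IoU referral cutoff mentioned in the abstract is the standard object-detection threshold for counting a suggested region as a correct localization.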

Radiologist-AI workflow can be modified to reduce the risk of medical malpractice claims

Bernstein, M., Sheppard, B., Bruno, M. A., Lay, P. S., Baird, G. L.

medrxiv logopreprint · Jun 16 2025
Background: Artificial Intelligence (AI) is rapidly changing the legal landscape of radiology. Results from a previous experiment suggested that providing AI error rates can reduce perceived radiologist culpability, as judged by mock jury members (4). The current study advances this work by examining whether the radiologist's behavior also affects perceptions of liability. Methods: Participants (n=282) read about a hypothetical malpractice case in which a 50-year-old who visited the Emergency Department with acute neurological symptoms received a brain CT scan to determine whether bleeding was present. The radiologist who interpreted the imaging used an AI system, which correctly flagged the case as abnormal. Nonetheless, the radiologist concluded there was no evidence of bleeding, and the thrombolytic t-PA was administered. Participants were randomly assigned to either (1) a single-read condition, in which the radiologist interpreted the CT once after seeing AI feedback, or (2) a double-read condition, in which the radiologist interpreted the CT twice, first without and then with AI feedback. Participants were then told the patient suffered irreversible brain damage due to the missed brain bleed, resulting in the patient (plaintiff) suing the radiologist (defendant). Participants indicated whether the radiologist met their duty of care to the patient (yes/no). Results: Hypothetical jurors were more likely to side with the plaintiff in the single-read condition (106/142, 74.6%) than in the double-read condition (74/140, 52.9%), p=0.0002. Conclusion: This suggests that the penalty for disagreeing with correct AI can be mitigated when images are interpreted twice, or at least when the radiologist gives an interpretation before AI is used.
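The reported difference between conditions (106/142 vs. 74/140) can be checked with a standard two-proportion z-test. The sketch below uses a pooled test without continuity correction, so its p-value differs slightly from the published p=0.0002, which likely reflects a corrected or exact test; this is an illustration, not the authors' analysis:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided via normal tail
    return z, p_value

z, p = two_proportion_z(106, 142, 74, 140)
# z is around 3.8; p is well below 0.001, consistent in magnitude with the
# reported p = 0.0002.
```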

Roadmap analysis for coronary artery stenosis detection and percutaneous coronary intervention prediction in cardiac CT for transcatheter aortic valve replacement.

Fujito H, Jilaihawi H, Han D, Gransar H, Hashimoto H, Cho SW, Lee S, Gheyath B, Park RH, Patel D, Guo Y, Kwan AC, Hayes SW, Thomson LEJ, Slomka PJ, Dey D, Makkar R, Friedman JD, Berman DS

pubmed logopapers · Jun 16 2025
The new artificial intelligence-based software Roadmap (HeartFlow) may assist in evaluating coronary artery stenosis during cardiac computed tomography (CT) for transcatheter aortic valve replacement (TAVR). Consecutive TAVR candidates who underwent both cardiac CT angiography (CTA) and invasive coronary angiography were enrolled. We evaluated the ability of three methods to predict obstructive coronary artery disease (CAD), defined as ≥50% stenosis on quantitative coronary angiography (QCA), and the need for percutaneous coronary intervention (PCI) within one year: Roadmap alone, CT specialists with Roadmap, and CT specialists alone. The area under the curve (AUC) for predicting ≥50% stenosis on QCA was similar for CT specialists with and without Roadmap (0.93 [0.85-0.97] vs. 0.94 [0.88-0.98], p = 0.82), and both were significantly higher than Roadmap alone (all p < 0.05). For PCI prediction, no significant differences were found between QCA and CT specialists with or without Roadmap, whereas Roadmap's AUC was lower (all p < 0.05). The negative predictive value (NPV) of CT specialists with Roadmap for ≥50% stenosis was 97%, and for PCI prediction the NPV was comparable to QCA (p = 1.00). In contrast, the positive predictive value (PPV) of Roadmap alone for ≥50% stenosis was 49%, the lowest among all approaches, with a similar trend for PCI prediction. While Roadmap alone is insufficient for clinical decision-making because of its low PPV, it may serve as a "second observer", providing a supportive tool that flags lesions for careful review by CT specialists, thereby enhancing workflow efficiency while maintaining high diagnostic accuracy with excellent NPV.
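The NPV and PPV figures quoted above follow from the standard confusion-matrix definitions. A minimal helper, with hypothetical counts chosen only to show how a tool can combine a low PPV with a high NPV (these are not the study's counts):

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from a confusion matrix.

    PPV = TP / (TP + FP): share of positive calls that are truly positive.
    NPV = TN / (TN + FN): share of negative calls that are truly negative.
    """
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical counts: many false positives drive PPV down to ~49% even
# while NPV stays near 97%, the pattern described for Roadmap alone.
ppv, npv = predictive_values(tp=25, fp=26, tn=97, fn=3)
```

This asymmetry is why a low-PPV screener can still be useful as a "second observer": its negative calls are trustworthy, while its positive calls merely flag lesions for expert review.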

Evaluating Explainability: A Framework for Systematic Assessment and Reporting of Explainable AI Features

Miguel A. Lago, Ghada Zamzmi, Brandon Eich, Jana G. Delfino

arxiv logopreprint · Jun 16 2025
Explainability features are intended to provide insight into the internal mechanisms of an AI device, but evaluation techniques for assessing the quality of the explanations they provide are lacking. We propose a framework to assess and report explainable AI features. Our evaluation framework for AI explainability is based on four criteria: 1) Consistency quantifies the variability of explanations across similar inputs, 2) Plausibility estimates how close the explanation is to the ground truth, 3) Fidelity assesses the alignment between the explanation and the model's internal mechanisms, and 4) Usefulness evaluates the explanation's impact on task performance. Finally, we developed a scorecard for AI explainability methods that serves as a complete description and evaluation to accompany this type of algorithm. We describe these four criteria and give examples of how they can be evaluated. As a case study, we use Ablation CAM and Eigen CAM to illustrate the evaluation of explanation heatmaps for the detection of breast lesions on synthetic mammograms. The first three criteria are evaluated for clinically relevant scenarios. Our proposed framework establishes criteria through which the quality of explanations provided by AI models can be evaluated. We intend for our framework to spark a dialogue regarding the value provided by explainability features and to help improve the development and evaluation of AI-based medical devices.
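The first criterion, consistency, can be operationalized as similarity of explanation heatmaps produced for slightly perturbed versions of the same input. One hedged formulation is mean pairwise cosine similarity; this specific metric is an assumption for illustration, since the paper does not prescribe a single formula:

```python
import numpy as np

def consistency(heatmaps):
    """Mean pairwise cosine similarity of explanation heatmaps generated
    for perturbed versions of one input; 1.0 means perfectly consistent.
    """
    flat = [h.ravel() / np.linalg.norm(h.ravel()) for h in heatmaps]
    sims = [float(a @ b) for i, a in enumerate(flat) for b in flat[i + 1:]]
    return sum(sims) / len(sims)

# Identical heatmaps are maximally consistent.
h = np.random.rand(8, 8)
print(round(consistency([h, h, h]), 6))  # 1.0
```

Plausibility could be scored analogously, for example as IoU between a thresholded heatmap and the ground-truth lesion mask, again as one possible instantiation rather than the paper's definition.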

A multimodal deep learning model for detecting endoscopic images of near-infrared fluorescence capsules.

Wang J, Zhou C, Wang W, Zhang H, Zhang A, Cui D

pubmed logopapers · Jun 15 2025
Early screening for gastrointestinal (GI) diseases is critical for preventing cancer development. With the rapid advancement of deep learning technology, artificial intelligence (AI) has become increasingly prominent in the early detection of GI diseases. Capsule endoscopy is a non-invasive medical imaging technique used to examine the gastrointestinal tract. In our previous work, we developed a near-infrared fluorescence capsule endoscope (NIRF-CE) capable of exciting and capturing near-infrared (NIR) fluorescence images to identify subtle mucosal microlesions and submucosal abnormalities while simultaneously capturing conventional white-light images to detect lesions with significant morphological changes. However, limitations such as low camera resolution and poor lighting within the gastrointestinal tract may lead to misdiagnosis and other medical errors, and manually reviewing and interpreting large volumes of capsule endoscopy images is time-consuming and error-prone. Deep learning models have shown potential for automatically detecting abnormalities in NIRF-CE images. This study focuses on an improved deep learning model, Retinex-Attention-YOLO (RAY), which is based on single-modality image data and built on the YOLO series of object detection models. RAY enhances the accuracy and efficiency of anomaly detection, especially under low-light conditions. To further improve detection performance, we also propose a multimodal deep learning model, Multimodal-Retinex-Attention-YOLO (MRAY), which combines white-light and fluorescence image data. The dataset used in this study consists of images of pig stomachs captured by our NIRF-CE system, simulating the human GI tract, together with a targeted fluorescent probe that accumulates at lesion sites and releases fluorescent signals for imaging; when an abnormality is present, it appears as a bright spot. The MRAY model achieved a precision of 96.3%, outperforming similar object detection models. To further validate the model's performance, ablation experiments were conducted and comparisons were made with publicly available datasets. MRAY shows great promise for the automated detection of GI cancers, ulcers, inflammation, and other conditions in clinical practice.
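The Retinex component of the RAY name refers to the classical illumination-normalization idea for low-light images: subtract the log of a smoothed illumination estimate from the log image, boosting detail in dark regions. A minimal single-scale sketch (the box-blur illumination estimate and kernel size are assumptions, not the paper's architecture):

```python
import numpy as np

def box_blur(img, k=15):
    """Separable box blur with edge padding; a simple stand-in for the
    Gaussian illumination estimate of classical Retinex."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return blurred

def single_scale_retinex(image, k=15, eps=1e-6):
    """Single-scale Retinex: log(image) minus log(estimated illumination)."""
    img = image.astype(np.float64) + eps
    return np.log(img) - np.log(box_blur(img, k) + eps)
```

On a uniformly lit frame the output is near zero everywhere; local deviations from the illumination estimate, such as a dim lesion in a dark field, are what survive, which is why Retinex-style preprocessing helps detectors under low-light endoscopic conditions.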
