Page 29 of 45442 results

Diagnostic Performance of Universal versus Stratified Computer-Aided Detection Thresholds for Chest X-Ray-Based Tuberculosis Screening

Sung, J., Kitonsa, P. J., Nalutaaya, A., Isooba, D., Birabwa, S., Ndyabayunga, K., Okura, R., Magezi, J., Nantale, D., Mugabi, I., Nakiiza, V., Dowdy, D. W., Katamba, A., Kendall, E. A.

medRxiv preprint, Jun 24 2025
Background: Computer-aided detection (CAD) software analyzes chest X-rays for features suggestive of tuberculosis (TB) and provides a numeric abnormality score. However, estimates of CAD accuracy for TB screening are hindered by the lack of confirmatory data among people with lower CAD scores, including those without symptoms. Additionally, the appropriate CAD score thresholds for obtaining further testing may vary according to population and client characteristics.

Methods: We screened for TB in Ugandan individuals aged ≥15 years using portable chest X-rays with CAD (qXR v3). Participants were offered screening regardless of their symptoms. Those with X-ray scores above a threshold of 0.1 (range, 0-1) were asked to provide sputum for Xpert Ultra testing. We estimated the diagnostic accuracy of CAD for detecting Xpert-positive TB when using the same threshold for all individuals (under different assumptions about TB prevalence among people with X-ray scores <0.1), and compared this estimate to age- and/or sex-stratified approaches.

Findings: Of 52,835 participants screened for TB using CAD, 8,949 (16.9%) had X-ray scores ≥0.1. Of 7,219 participants with valid Xpert Ultra results, 382 (5.3%) were Xpert-positive, including 81 with trace results. Assuming 0.1% of participants with X-ray scores <0.1 would have been Xpert-positive if tested, qXR had an estimated AUC of 0.920 (95% confidence interval 0.898-0.941) for Xpert-positive TB. Stratifying CAD thresholds according to age and sex improved accuracy; for example, at 96.1% specificity, estimated sensitivity was 75.0% for a universal threshold (≥0.65) versus 76.9% for thresholds stratified by age and sex (p=0.046).

Interpretation: The accuracy of CAD for TB screening among all screening participants, including those without symptoms or abnormal chest X-rays, is higher than previously estimated. Stratifying CAD thresholds based on client characteristics such as age and sex could further improve accuracy, enabling a more effective and personalized approach to TB screening.

Funding: National Institutes of Health

Research in context

Evidence before this study: The World Health Organization (WHO) has endorsed computer-aided detection (CAD) as a screening tool for tuberculosis (TB), but the appropriate CAD score that triggers further diagnostic evaluation varies by population. The WHO recommends determining the appropriate CAD threshold for specific settings and populations, and considering unique thresholds for specific groups, including older age groups, among whom CAD may perform poorly. We performed a PubMed literature search for articles published until September 9, 2024, using the search terms "tuberculosis" AND ("computer-aided detection" OR "computer aided detection" OR "CAD" OR "computer-aided reading" OR "computer aided reading" OR "artificial intelligence"), which returned 704 articles. Among them, we identified studies that evaluated the performance of CAD for tuberculosis screening and additionally reviewed relevant references. Most prior studies reported areas under the curve (AUC) ranging from 0.76 to 0.88 but limited their evaluations to individuals with symptoms or abnormal chest X-rays. Some prior studies identified subgroups (including older individuals and people with prior TB) among whom CAD had lower-than-average AUCs, and their authors discussed how the prevalence of such characteristics could affect the optimal value of a population-wide CAD threshold; however, none estimated the accuracy that could be gained by adjusting CAD thresholds between individuals based on personal characteristics.

Added value of this study: In this study, all consenting individuals in a high-prevalence setting were offered chest X-ray screening, regardless of symptoms, if they were ≥15 years old, not pregnant, and not on TB treatment. A very low CAD score cutoff (qXR v3 score of 0.1 on a 0-1 scale) was used to select individuals for confirmatory sputum molecular testing, enabling the detection of radiographically mild forms of TB and facilitating comparisons of diagnostic accuracy at different CAD thresholds. With this more expansive, symptom-neutral evaluation of CAD, we estimated an AUC of 0.920, and we found that the qXR v3 threshold needed to decrease to under 0.1 to meet the WHO target product profile goal of ≥90% sensitivity and ≥70% specificity. Compared to using the same thresholds for all participants, adjusting CAD thresholds by age and sex strata resulted in a 1-2% increase in sensitivity without affecting specificity.

Implications of all the available evidence: To obtain high sensitivity with CAD screening in high-prevalence settings, low score thresholds may be needed. However, countries with a high burden of TB often do not have sufficient resources to test all individuals above a low threshold. In such settings, adjusting CAD thresholds based on individual characteristics associated with TB prevalence (e.g., male sex) and those associated with false-positive X-ray results (e.g., older age) can potentially improve the efficiency of TB screening programs.
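The gain from stratified thresholds can be made concrete with a toy calculation. The sketch below uses entirely hypothetical qXR-style scores and stratum names (not study data): because score distributions and pretest probability differ by stratum, lowering the threshold in a higher-prevalence stratum can raise pooled sensitivity at a modest specificity cost.

```python
# Toy sketch (hypothetical scores, not study data) of universal vs.
# stratified CAD thresholds for flagging 'score >= threshold' as TB.
def sens_spec(pos_scores, neg_scores, threshold):
    """Sensitivity and specificity at a single threshold."""
    sens = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    spec = sum(s < threshold for s in neg_scores) / len(neg_scores)
    return sens, spec

# Hypothetical scores by stratum: {stratum: (TB-positive, TB-negative)}
strata = {
    "men_under_45":  ([0.70, 0.80, 0.90, 0.50], [0.10, 0.20, 0.30, 0.60]),
    "women_45_plus": ([0.50, 0.60, 0.90, 0.40], [0.20, 0.50, 0.30, 0.10]),
}

# Universal threshold (0.65, echoing the abstract's example threshold)
all_pos = [s for p, _ in strata.values() for s in p]
all_neg = [s for _, n in strata.values() for s in n]
uni_sens, uni_spec = sens_spec(all_pos, all_neg, 0.65)

# Stratum-specific thresholds: lower where pretest probability is higher
per_stratum = {"men_under_45": 0.50, "women_45_plus": 0.65}
tp = fn = tn = fp = 0
for name, (pos, neg) in strata.items():
    t = per_stratum[name]
    tp += sum(s >= t for s in pos)
    fn += sum(s < t for s in pos)
    tn += sum(s < t for s in neg)
    fp += sum(s >= t for s in neg)
strat_sens = tp / (tp + fn)  # pooled sensitivity across strata
strat_spec = tn / (tn + fp)  # pooled specificity across strata
```

With these toy numbers the stratified rule recovers an extra true positive that the universal threshold misses, mirroring the 75.0% vs. 76.9% sensitivity comparison reported above.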

AI-based large-scale screening of gastric cancer from noncontrast CT imaging.

Hu C, Xia Y, Zheng Z, Cao M, Zheng G, Chen S, Sun J, Chen W, Zheng Q, Pan S, Zhang Y, Chen J, Yu P, Xu J, Xu J, Qiu Z, Lin T, Yun B, Yao J, Guo W, Gao C, Kong X, Chen K, Wen Z, Zhu G, Qiao J, Pan Y, Li H, Gong X, Ye Z, Ao W, Zhang L, Yan X, Tong Y, Yang X, Zheng X, Fan S, Cao J, Yan C, Xie K, Zhang S, Wang Y, Zheng L, Wu Y, Ge Z, Tian X, Zhang X, Wang Y, Zhang R, Wei Y, Zhu W, Zhang J, Qiu H, Su M, Shi L, Xu Z, Zhang L, Cheng X

PubMed, Jun 24 2025
Early detection through screening is critical for reducing gastric cancer (GC) mortality. However, in most high-prevalence regions, large-scale screening remains challenging due to limited resources, low compliance and suboptimal detection rate of upper endoscopic screening. Therefore, there is an urgent need for more efficient screening protocols. Noncontrast computed tomography (CT), routinely performed for clinical purposes, presents a promising avenue for large-scale designed or opportunistic screening. Here we developed the Gastric Cancer Risk Assessment Procedure with Artificial Intelligence (GRAPE), leveraging noncontrast CT and deep learning to identify GC. Our study comprised three phases. First, we developed GRAPE using a cohort from 2 centers in China (3,470 GC and 3,250 non-GC cases) and validated its performance on an internal validation set (1,298 cases, area under curve = 0.970) and an independent external cohort from 16 centers (18,160 cases, area under curve = 0.927). Subgroup analysis showed that the detection rate of GRAPE increased with advancing T stage but was independent of tumor location. Next, we compared the interpretations of GRAPE with those of radiologists and assessed its potential in assisting diagnostic interpretation. Reader studies demonstrated that GRAPE significantly outperformed radiologists, improving sensitivity by 21.8% and specificity by 14.0%, particularly in early-stage GC. Finally, we evaluated GRAPE in real-world opportunistic screening using 78,593 consecutive noncontrast CT scans from a comprehensive cancer center and 2 independent regional hospitals. GRAPE identified persons at high risk with GC detection rates of 24.5% and 17.7% in 2 regional hospitals, with 23.2% and 26.8% of detected cases in T1/T2 stage. Additionally, GRAPE detected GC cases that radiologists had initially missed, enabling earlier diagnosis of GC during follow-up for other diseases. 
In conclusion, GRAPE demonstrates strong potential for large-scale GC screening, offering a feasible and effective approach for early detection. ClinicalTrials.gov registration: NCT06614179.
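The headline numbers above (AUC = 0.970 internal, 0.927 external) summarize discrimination between GC and non-GC cases. As an illustration of how such a figure is computed, the sketch below implements the Mann-Whitney form of the AUC on hypothetical model scores; it is not GRAPE's evaluation code, just the standard definition.

```python
# Mann-Whitney form of the AUC: the probability that a randomly chosen
# positive (GC) case scores higher than a randomly chosen negative case,
# counting ties as half. Scores below are illustrative, not study data.
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.90, 0.80, 0.75, 0.60]  # hypothetical scores for GC cases
neg = [0.30, 0.40, 0.35, 0.70]  # hypothetical scores for non-GC cases
score = auc(pos, neg)
```

Here one negative case (0.70) outscores one positive case (0.60), so the AUC falls just short of 1.0; a perfect separation of the two score lists would give exactly 1.0.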

Brain ultrasonography in neurosurgical patients.

Mahajan C, Kapoor I, Prabhakar H

PubMed, Jun 24 2025
Brain ultrasound is a popular point-of-care test that helps visualize brain structures. This review highlights recent developments in brain ultrasonography. There is a need to keep pace with ongoing technological advancements and to establish standardized quality criteria to improve its utility in clinical practice. Newer automated indices derived from transcranial Doppler help establish its role as a noninvasive monitor of intracranial pressure and in diagnosing vasospasm/delayed cerebral ischemia. A novel robotic transcranial Doppler system equipped with artificial intelligence allows real-time continuous neuromonitoring. Intraoperative ultrasound assists neurosurgeons in real-time localization of brain lesions and helps in assessing the extent of resection, thereby enhancing surgical precision and safety. Optic nerve sheath diameter point-of-care ultrasonography is an effective means of diagnosing raised intracranial pressure, triaging, and prognostication. The quality criteria checklist can help standardize this technique. Newer advancements such as focused ultrasound, contrast-enhanced ultrasound, and functional ultrasound are also discussed. Brain ultrasound continues to be a critical bedside tool in neurologically injured patients. With the advent of technological advancements, its utility has widened and its capabilities have expanded, making it more accurate and versatile in clinical practice.

SE-ATT-YOLO- A deep learning driven ultrasound based respiratory motion compensation system for precision radiotherapy.

Kuo CC, Pillai AG, Liao AH, Yu HW, Ramanathan S, Zhou H, Boominathan CM, Jeng SC, Chiou JF, Chuang HC

PubMed, Jun 21 2025
Therapeutic management of neoplasms employs high-energy beams to ablate malignant cells, which can cause collateral damage to adjacent normal tissue. Furthermore, respiration-induced organ motion during radiotherapy can lead to significant displacement of neoplasms. In this work, a non-invasive, ultrasound-based deep learning algorithm driving a respiratory motion compensation system (RMCS) was developed to mitigate respiration-induced neoplasm movement during radiotherapy. The algorithm is based on a modified YOLOv8n (You Only Look Once) that incorporates squeeze-and-excitation blocks for channel-wise recalibration and enhanced attention mechanisms for spatial-channel focus (SE-ATT-YOLO), enabling robust ultrasound image detection in real-time scenarios. The trained model was run on ultrasound recordings of human diaphragm motion, and the bounding-box coordinates were tracked using BoT-Sort, which drives the RMCS. The SE-ATT-YOLO model achieved a mean average precision (mAP) of 0.88, outperforming YOLOv8n at 0.85, and reached an inference speed of approximately 50 FPS. The root mean square error (RMSE) between prerecorded respiratory signals and the compensated RMCS signal was 4.342 for baseline shift, 3.105 for the sinusoidal signal, 1.778 for deep breathing, and 1.667 for the slow signal, outperforming all previous models. The loss-function instability observed in YOLOv8n was rectified in SE-ATT-YOLO, demonstrating the stability of the model. The model's stability, speed, and accuracy optimized the performance of the RMCS.
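The RMSE figures quoted above compare a prerecorded respiratory trace against the compensated signal. A minimal sketch of that metric, on a synthetic sinusoidal breathing trace with a hypothetical 10% residual amplitude error (not the study's recorded signals):

```python
import math

def rmse(reference, compensated):
    """Root mean square error between a reference respiratory trace and
    the motion-compensated signal (same length, same units)."""
    assert len(reference) == len(compensated)
    n = len(reference)
    return math.sqrt(sum((r - c) ** 2 for r, c in zip(reference, compensated)) / n)

# Synthetic sinusoidal breathing trace: 5 full cycles, 40 samples each
ref = [math.sin(2 * math.pi * t / 40) for t in range(200)]
# Imperfect compensation leaving a 10% residual amplitude error
comp = [r * 0.9 for r in ref]

err = rmse(ref, comp)
```

A perfectly compensated signal would give an RMSE of 0; the study's per-waveform RMSE values (4.342 down to 1.667) show compensation degrading as the breathing pattern becomes less regular.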

Automatic detection of hippocampal sclerosis in patients with epilepsy.

Belke M, Zahnert F, Steinbrenner M, Halimeh M, Miron G, Tsalouchidou PE, Linka L, Keil B, Jansen A, Möschl V, Kemmling A, Nimsky C, Rosenow F, Menzler K, Knake S

PubMed, Jun 21 2025
This study was undertaken to develop and validate an automatic, artificial intelligence-enhanced software tool for hippocampal sclerosis (HS) detection, using a variety of standard magnetic resonance imaging (MRI) protocols from different MRI scanners for routine clinical practice. First, MRI scans of 36 epilepsy patients with unilateral HS and 36 control patients with epilepsy of other etiologies were analyzed. MRI features, including hippocampal subfield volumes from three-dimensional (3D) magnetization-prepared rapid acquisition gradient echo (MPRAGE) scans and fluid-attenuated inversion recovery (FLAIR) intensities, were calculated. Hippocampal subfield volumes were corrected for total brain volume and z-scored using a dataset of 256 healthy controls. Hippocampal subfield FLAIR intensities were z-scored in relation to each subject's mean cortical FLAIR signal. Additionally, left-right ratios of FLAIR intensities and volume features were obtained. Support vector classifiers were trained on the above features to predict HS presence and laterality. In a second step, the algorithm was validated using two independent, external cohorts, including 118 patients and 116 controls in sum, scanned with different MRI scanners and acquisition protocols. Classifiers demonstrated high accuracy in HS detection and lateralization, with slight variations depending on the input image availability. The best cross-validation accuracy was achieved using both 3D MPRAGE and 3D FLAIR scans (mean accuracy = 1.0, confidence interval [CI] = .939-1.0). External validation of trained classifiers in two independent cohorts yielded accuracies of .951 (CI = .902-.980) and .889 (CI = .805-.945), respectively. In both validation cohorts, the additional use of FLAIR scans led to significantly better classification performance than the use of MPRAGE data alone (p = .016 and p = .031, respectively). 
A further model was trained on both validation cohorts and tested on the former training cohort, providing additional evidence for good validation performance. Comparison to a previously published algorithm showed no significant difference in performance (p = 1). The method presented achieves accurate automated HS detection using standard clinical MRI protocols. It is robust and flexible and requires no image processing expertise.
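The feature construction described above can be sketched in a few lines. The subfield names and all numbers below are illustrative stand-ins (not the study's control statistics): volumes corrected for total brain volume are z-scored against healthy-control means, and left/right ratios capture the asymmetry characteristic of unilateral HS.

```python
# Hedged sketch of the volumetric feature construction: z-score a
# hippocampal subfield volume against healthy-control statistics and
# form a left/right asymmetry ratio. All values are hypothetical.
def zscore(value, control_mean, control_sd):
    return (value - control_mean) / control_sd

# Hypothetical CA1 volumes (already corrected for total brain volume)
controls = {"mean": 620.0, "sd": 45.0}          # healthy-control stats
patient = {"left_CA1": 430.0, "right_CA1": 615.0}

z_left = zscore(patient["left_CA1"], controls["mean"], controls["sd"])
z_right = zscore(patient["right_CA1"], controls["mean"], controls["sd"])
lr_ratio = patient["left_CA1"] / patient["right_CA1"]

# A strongly negative z-score on one side with a left/right ratio far
# from 1.0 is the pattern a support vector classifier would use to
# detect and lateralize hippocampal sclerosis (here: left-sided).
```

In the study these volume features are combined with z-scored FLAIR intensities before classification, which is why adding FLAIR input improved accuracy in both validation cohorts.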

Current and future applications of artificial intelligence in lung cancer and mesothelioma.

Roche JJ, Seyedshahi F, Rakovic K, Thu AW, Le Quesne J, Blyth KG

PubMed, Jun 20 2025
Considerable challenges exist in managing lung cancer and mesothelioma, including diagnostic complexity, treatment stratification, early detection and imaging quantification. Variable incidence in mesothelioma also makes equitable provision of high-quality care difficult. In this context, artificial intelligence (AI) offers a range of assistive/automated functions that can potentially enhance clinical decision-making, while reducing inequality and pathway delay. In this state-of-the-art narrative review, we synthesise evidence on this topic, focusing particularly on tools that ingest routine pathology and radiology images. We summarise the strengths and weaknesses of AI applied to common multidisciplinary team (MDT) functions, including histological diagnosis, therapeutic response prediction, radiological detection and quantification, and survival estimation. We also review emerging methods capable of generating novel biological insights and current barriers to implementation, including access to high-quality training data and suitable regulatory and technical infrastructure. Neural networks trained on pathology images have proven utility in histological classification, prognostication, response prediction and survival. Self-supervised models can also generate new insights into biological features responsible for adverse outcomes. Radiology applications include lung nodule tools, which offer critical pathway support for imminent lung cancer screening and urgent referrals. Tumour segmentation AI offers particular advantages in mesothelioma, where response assessment and volumetric staging are difficult using human readers due to tumour size and morphological complexity. AI is also critical for radiogenomics, permitting effective integration of molecular and radiomic features for discovery of non-invasive markers for molecular subtyping and enhanced stratification. 
AI solutions offer considerable potential benefits across the MDT, particularly in repetitive or time-consuming tasks based on pathology and radiology images. Effective leveraging of this technology is critical for lung cancer screening and efficient delivery of increasingly complex diagnostic and predictive MDT functions. Future AI research should involve transparent and interpretable outputs that assist in explaining the basis of AI-supported decision making.

Emergency radiology: roadmap for radiology departments.

Aydin S, Ece B, Cakmak V, Kocak B, Onur MR

PubMed, Jun 20 2025
Emergency radiology has evolved into a significant subspecialty over the past 2 decades, facing unique challenges including escalating imaging volumes, increasing study complexity, and heightened expectations from clinicians and patients. This review provides a comprehensive overview of the key requirements for an effective emergency radiology unit. Emergency radiologists play a crucial role in real-time decision-making by providing continuous 24/7 support, requiring expertise across various organ systems and close collaboration with emergency physicians and specialists. Beyond image interpretation, emergency radiologists are responsible for organizing staff schedules, planning equipment, determining imaging protocols, and establishing standardized reporting systems. Operational considerations in emergency radiology departments include efficient scheduling models such as circadian-based scheduling, strategic equipment organization with primary imaging modalities positioned near emergency departments, and effective imaging management through structured ordering systems and standardized protocols. Preparedness for mass casualty incidents requires a well-organized workflow process map detailing steps from patient transfer to image acquisition and interpretation, with clear task allocation and imaging pathways. Collaboration between emergency radiologists and physicians is essential, with accurate communication facilitated through various channels and structured reporting templates. Artificial intelligence has emerged as a transformative tool in emergency radiology, offering potential benefits in both interpretative domains (detecting intracranial hemorrhage, pulmonary embolism, acute ischemic stroke) and non-interpretative applications (triage systems, protocol assistance, quality control). Despite implementation challenges including clinician skepticism, financial considerations, and ethical issues, AI can enhance diagnostic accuracy and workflow optimization. 
Teleradiology provides solutions for staff shortages, particularly during off-hours, with hybrid models allowing radiologists to work both on-site and remotely. This review aims to guide stakeholders in establishing and maintaining efficient emergency radiology services to improve patient outcomes.

Image-Based Search in Radiology: Identification of Brain Tumor Subtypes within Databases Using MRI-Based Radiomic Features.

von Reppert M, Chadha S, Willms K, Avesta A, Maleki N, Zeevi T, Lost J, Tillmanns N, Jekel L, Merkaj S, Lin M, Hoffmann KT, Aneja S, Aboian MS

PubMed, Jun 20 2025
Existing neuroradiology reference materials do not cover the full range of primary brain tumor presentations, and text-based medical image search engines are limited by the lack of consistent structure in radiology reports. To address this, an image-based search approach is introduced here, leveraging an institutional database to find reference MRIs visually similar to presented query cases. Two hundred ninety-five patients (mean age and standard deviation, 51 ± 20 years) with primary brain tumors who underwent surgical and/or radiotherapeutic treatment between 2000 and 2021 were included in this retrospective study. Semiautomated convolutional neural network-based tumor segmentation was performed, and radiomic features were extracted. The data set was split into reference and query subsets, and dimensionality reduction was applied to cluster reference cases. Radiomic features extracted from each query case were projected onto the clustered reference cases, and nearest neighbors were retrieved. Retrieval performance was evaluated by using mean average precision at k, and the best-performing dimensionality reduction technique was identified. Expert readers independently rated visual similarity by using a 5-point Likert scale. t-Distributed stochastic neighbor embedding with 6 components was the highest-performing dimensionality reduction technique, with mean average precision at 5 ranging from 78%-100% by tumor type. The top 5 retrieved reference cases showed high visual similarity Likert scores with corresponding query cases (76% 'similar' or 'very similar'). We introduce an image-based search method for exploring historical MR images of primary brain tumors and fetching reference cases closely resembling queried ones. Assessment involving comparison of tumor types and visual similarity Likert scoring by expert neuroradiologists validates the effectiveness of this method.
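The retrieval step above reduces to nearest-neighbor search in the reduced radiomic feature space, scored by precision at k. The sketch below uses hypothetical 2-D embeddings and tumor labels (the study used t-SNE projections of radiomic features, not these values) to show the mechanics:

```python
import math

# Hypothetical reference database: (reduced radiomic embedding, tumor type)
reference = [
    ((0.10, 0.20), "glioma"),
    ((0.20, 0.10), "glioma"),
    ((0.90, 0.80), "meningioma"),
    ((0.80, 0.90), "meningioma"),
    ((0.50, 0.90), "metastasis"),
]

def retrieve(query_vec, k):
    """Return the tumor types of the k reference cases nearest the query."""
    ranked = sorted(reference, key=lambda r: math.dist(r[0], query_vec))
    return [tumor_type for _, tumor_type in ranked[:k]]

def precision_at_k(query_vec, query_type, k):
    """Fraction of the top-k retrieved cases sharing the query's type."""
    return sum(t == query_type for t in retrieve(query_vec, k)) / k

p = precision_at_k((0.15, 0.15), "glioma", k=2)
```

Averaging this quantity over all query cases gives the mean average precision at k used to compare dimensionality-reduction techniques in the study.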

Automatic Detection of B-Lines in Lung Ultrasound Based on the Evaluation of Multiple Characteristic Parameters Using Raw RF Data.

Shen W, Zhang Y, Zhang H, Zhong H, Wan M

PubMed, Jun 20 2025
B-line artifacts in lung ultrasound, pivotal for diagnosing pulmonary conditions, warrant automated recognition to enhance diagnostic accuracy. In this paper, a lung ultrasound B-line vertical artifact identification method based on radio frequency (RF) signals is proposed. B-line regions were distinguished from non-B-line regions by inputting multiple characteristic parameters into a nonlinear support vector machine (SVM). Six characteristic parameters were evaluated: permutation entropy, information entropy, kurtosis, skewness, the Nakagami shape factor, and approximate entropy. After this evaluation revealed differences in the parameters' discriminative performance, principal component analysis (PCA) was used to reduce the feature set to four dimensions for input into the SVM. Four types of experiments were conducted: a sponge model with dripping water, gelatin phantoms containing either glass beads or gelatin droplets, and in vivo experiments. By employing precise feature selection and analyzing scan lines rather than full images, this approach significantly reduced the dependency on large image datasets without compromising discriminative accuracy. The method exhibited performance comparable to contemporary image-based deep learning approaches, which, while highly effective, typically require extensive training data and expert annotation of large datasets to establish ground truth. Owing to the optimized architecture of the model, efficient sample recognition was achieved, with the capability to process between 27,000 and 33,000 scan lines per second (a frame rate exceeding 100 FPS at 256 scan lines per frame), thus supporting real-time analysis.
The results demonstrate that the accuracy of the method to classify a scan line as belonging to a B-line region was up to 88%, with sensitivity reaching up to 90%, specificity up to 87%, and an F1-score up to 89%. This approach effectively reflects the performance of scan line classification pertinent to B-line identification. Our approach reduces the reliance on large annotated datasets, thereby streamlining the preprocessing phase.
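Two of the six per-scan-line features named above, skewness and kurtosis, are simple amplitude statistics. The sketch below computes them on a simulated RF scan line (a Gaussian speckle background plus a hypothetical coherent component; the simulation is illustrative, not the study's acquisition):

```python
import math
import random

def moments(samples):
    """Sample skewness and (non-excess) kurtosis of a scan line.
    Pure Gaussian speckle has skewness ~0 and kurtosis ~3; a strong
    coherent component pushes the kurtosis below 3 (sub-Gaussian)."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in samples) / n)
    skew = sum(((v - mean) / sd) ** 3 for v in samples) / n
    kurt = sum(((v - mean) / sd) ** 4 for v in samples) / n
    return skew, kurt

random.seed(0)
# Simulated RF scan line: Gaussian speckle plus a coherent sinusoid
scan_line = [random.gauss(0, 1) + 3.0 * math.sin(0.2 * i) for i in range(2000)]
skew, kurt = moments(scan_line)
```

In the study, such parameters (after PCA to four dimensions) feed the nonlinear SVM that classifies each scan line as B-line or non-B-line.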

Ensuring integrity in dental education: Developing a novel AI model for consistent and traceable image analysis in preclinical endodontic procedures.

Ibrahim M, Omidi M, Guentsch A, Gaffney J, Talley J

PubMed, Jun 19 2025
Academic integrity is crucial in dental education, especially during practical exams assessing competencies. Traditional oversight may not detect sophisticated academic dishonesty methods like radiograph substitution or tampering. This study aimed to develop and evaluate a novel artificial intelligence (AI) model utilizing a Siamese neural network to detect inconsistencies in radiographic images taken for root canal treatment (RCT) procedures in preclinical endodontic courses, thereby enhancing educational integrity. A Siamese neural network was designed to compare radiographs from different RCT procedures. The model was trained on 3390 radiographs, with data augmentation applied to improve generalizability. The dataset was split into training, validation, and testing subsets. Performance metrics included accuracy, precision, sensitivity (recall), and F1-score. Cross-validation and hyperparameter tuning optimized the model. Our AI model achieved an accuracy of 89.31%, a precision of 76.82%, a sensitivity of 84.82%, and an F1-score of 80.50%. The optimal similarity threshold was 0.48, where maximum accuracy was observed. The confusion matrix indicated a high rate of correct classifications, and cross-validation confirmed the model's robustness with a standard deviation of 1.95% across folds. The AI-driven Siamese neural network effectively detects radiographic inconsistencies in RCT preclinical procedures. Implementing this novel model will serve as an objective tool to uphold academic integrity in dental education, enhance the fairness and reliability of assessments, promote a culture of honesty amongst students, and reduce the administrative burden on educators.
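The decision rule described above, comparing two radiographs through a learned similarity score against a tuned threshold (0.48 in this study), can be sketched as follows. The `embed` function here is a stand-in for one branch of the trained Siamese network, and the pixel lists are invented toy data:

```python
import math

def embed(pixels):
    # Stand-in for the trained Siamese branch: mean-center the pixel
    # values and normalize to unit length. A real branch would be a
    # learned convolutional encoder.
    mean = sum(pixels) / len(pixels)
    centered = [p - mean for p in pixels]
    norm = math.sqrt(sum(c * c for c in centered)) or 1.0
    return [c / norm for c in centered]

def similarity(img_a, img_b):
    """Cosine similarity of the two embeddings, in [-1, 1]."""
    return sum(a * b for a, b in zip(embed(img_a), embed(img_b)))

THRESHOLD = 0.48  # operating point reported in the abstract

def same_source(img_a, img_b):
    """Flag the pair as consistent (same tooth/procedure) or not."""
    return similarity(img_a, img_b) >= THRESHOLD

original = [10, 12, 11, 13, 9, 10]  # hypothetical submitted radiograph
retake   = [11, 12, 10, 13, 9, 11]  # same view, slight exposure change
other    = [30, 3, 15, 2, 28, 25]   # a different radiograph
```

A retake of the same view scores well above the threshold, while an unrelated radiograph falls below it, which is how the system flags a possible substitution for instructor review.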
