A Deep Neural Network Framework for the Detection of Bacterial Diseases from Chest X-Ray Scans.

Jain S, Jindal H, Bharti M

PubMed · May 27, 2025
This research aims to develop an advanced deep-learning framework for detecting respiratory diseases, including COVID-19, pneumonia, and tuberculosis (TB), from chest X-ray scans. A Deep Neural Network (DNN)-based system was developed to analyze medical images and extract key features from chest X-rays. The system leverages several DNN learning algorithms to analyze color-, curve-, and edge-based features of the X-ray scans, and the Adam optimizer is employed to minimize error rates and enhance model training. A dataset of 1800 chest X-ray images comprising COVID-19, pneumonia, TB, and normal cases was evaluated across multiple DNN models, with the highest accuracy achieved by VGG19. The proposed system demonstrated an accuracy of 94.72%, a sensitivity of 92.73%, a specificity of 96.68%, and an F1-score of 94.66%, with an error rate of 5.28% when trained on 80% of the dataset and tested on the remaining 20%. The VGG19 model showed accuracy improvements of 32.69%, 36.65%, 42.16%, and 8.1% over AlexNet, GoogleNet, InceptionV3, and VGG16, respectively. Prediction time was also remarkably low, ranging between 3 and 5 seconds. The proposed deep learning model thus detects respiratory diseases, including COVID-19, pneumonia, and TB, within seconds. By optimizing feature extraction while keeping system complexity manageable, the method offers high reliability and efficiency, making it a valuable tool for clinicians in rapid disease diagnosis.
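As a rough illustration of the transfer-learning setup described above, the sketch below fine-tunes an ImageNet-pretrained VGG19 with the Adam optimizer for the four reported classes. All layer sizes, input resolution, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of a VGG19 transfer-learning classifier for four chest X-ray
# classes (COVID-19, pneumonia, TB, normal), trained with Adam as described.
# Head layers and hyperparameters are illustrative guesses.
import tensorflow as tf

NUM_CLASSES = 4  # COVID-19, pneumonia, TB, normal

base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # fine-tune only the new classification head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Adam minimizes the cross-entropy error rate, as in the paper.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # 80/20 split
```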

Segmentation of the Left Ventricle and Its Pathologies for Acute Myocardial Infarction After Reperfusion in LGE-CMR Images.

Li S, Wu C, Feng C, Bian Z, Dai Y, Wu LM

PubMed · May 26, 2025
Because they are associated with a higher incidence of left ventricular dysfunction and complications, segmentation of the left ventricle and its related pathological tissues (microvascular obstruction and myocardial infarction) from late gadolinium enhancement cardiac magnetic resonance (LGE-CMR) images is crucially important. The main challenges, however, are the lack of datasets, diverse shapes and locations, extreme class imbalance, and severe overlap between intensity distributions. We first release an LGE-CMR benchmark dataset, LGE-LVP, containing 140 patients with left ventricular myocardial infarction and concomitant microvascular obstruction. We then propose a progressive deep learning model, LVPSegNet, to segment the left ventricle and its pathologies, addressing these challenges through adaptive region-of-interest extraction, sample augmentation, curriculum learning, and multiple-receptive-field fusion. Comprehensive comparisons with state-of-the-art models on internal and external datasets demonstrate that the proposed model performs best on both geometric and clinical metrics and most closely matches clinician performance. Overall, the released LGE-LVP dataset and the proposed LVPSegNet offer a practical solution for automated segmentation of the left ventricle and its pathologies by providing data support and facilitating effective segmentation. The dataset and source code will be released via https://github.com/DFLAG-NEU/LVPSegNet.
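LVPSegNet's architecture is not detailed in the abstract, but a multiple-receptive-field fusion stage is typically built from parallel dilated convolutions; the hypothetical PyTorch block below sketches that idea under those assumptions.

```python
# Hedged sketch of a multiple-receptive-field fusion block of the kind
# LVPSegNet is described as using: parallel dilated convolutions capture
# pathologies at diverse scales, then a 1x1 convolution fuses them.
# The exact LVPSegNet design is not published in this abstract.
import torch
import torch.nn as nn

class MultiReceptiveFieldFusion(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Branches with growing dilation -> growing receptive field.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(b(x)) for b in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

# Usage: fuse features from an LGE-CMR encoder stage.
block = MultiReceptiveFieldFusion(in_ch=64, out_ch=64)
print(block(torch.randn(1, 64, 128, 128)).shape)  # torch.Size([1, 64, 128, 128])
```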

Training a deep learning model to predict the anatomy irradiated in fluoroscopic x-ray images.

Guo L, Trujillo D, Duncan JR, Thomas MA

PubMed · May 26, 2025
Accurate patient dosimetry estimates from fluoroscopically guided interventions (FGIs) are hindered by limited knowledge of the specific anatomy that was irradiated. Current methods use data reported by the equipment to estimate the patient anatomy exposed during each irradiation event. We propose a deep learning algorithm to automatically match 2D fluoroscopic images with corresponding anatomical regions in computational phantoms, enabling more precise patient dose estimates. Our method involves two main steps: (1) simulating 2D fluoroscopic images, and (2) developing a deep learning algorithm to predict anatomical coordinates from these images. For step (1), we used DeepDRR for fast and realistic simulation of 2D X-ray images from 3D computed tomography datasets, generating a diverse set of simulated fluoroscopic images of various regions with different field sizes. For step (2), we employed a Residual Neural Network (ResNet) architecture combined with metadata processing that integrates patient-specific information (age and gender) to learn the transformation between 2D images and specific anatomical coordinates in each representative phantom. For the modified ResNet model, we defined an allowable error range of ±10 mm. The proposed method achieved over 90% of predictions within ±10 mm, with strong alignment between predicted and true coordinates, as confirmed by Bland-Altman analysis. Most errors were within ±2%, with outliers beyond ±5% occurring primarily in the Z-coordinates of infant phantoms, owing to their limited representation in the training data. These findings highlight the model's accuracy and its potential for precise spatial localization, while emphasizing the need for improved performance in specific anatomical regions. In this work, a comprehensive simulated 2D fluoroscopy image dataset was developed, addressing the scarcity of real clinical datasets and enabling effective training of deep learning models. The modified ResNet achieved precise prediction of anatomical coordinates from the simulated fluoroscopic images, supporting the goal of more accurate patient-specific dosimetry.
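A minimal sketch of the kind of metadata-aware regressor described above, assuming a standard torchvision ResNet backbone whose image features are concatenated with age and gender before a small head regresses the phantom coordinates; the fusion details and dimensions are assumptions, not the authors' design.

```python
# Sketch of the described ResNet-plus-metadata idea: image features from a
# ResNet backbone are concatenated with patient metadata (age, gender) and
# passed to a small head that regresses 3D anatomical coordinates.
import torch
import torch.nn as nn
import torchvision

class CoordRegressor(nn.Module):
    def __init__(self, meta_dim: int = 2, out_dim: int = 3):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose 512-d image features
        self.backbone = backbone
        self.head = nn.Sequential(           # image features + metadata
            nn.Linear(512 + meta_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim),         # (x, y, z) in phantom space
        )

    def forward(self, img, meta):
        feats = self.backbone(img)
        return self.head(torch.cat([feats, meta], dim=1))

model = CoordRegressor()
img = torch.randn(4, 3, 224, 224)            # simulated fluoroscopy frames
meta = torch.tensor([[35.0, 0.0]] * 4)       # [age, gender] per frame
print(model(img, meta).shape)                # torch.Size([4, 3])
```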

Detecting microcephaly and macrocephaly from ultrasound images using artificial intelligence.

Mengistu AK, Assaye BT, Flatie AB, Mossie Z

PubMed · May 26, 2025
Microcephaly and macrocephaly, abnormal congenital markers, are associated with developmental and neurological deficits, so early ultrasound imaging is medically imperative. However, resource-limited countries such as Ethiopia face shortages of trained personnel and diagnostic machines that prevent accurate and continuous diagnosis. This study aims to develop a fetal head abnormality detection model from ultrasound images via deep learning. Data were collected from three Ethiopian healthcare facilities to increase model generalizability; the recruitment period ran from November 9 to November 30, 2024. Several preprocessing techniques were applied, including augmentation, noise reduction, and normalization. SegNet, UNet, FCN, MobileNetV2, and EfficientNet-B0 were applied to segment and measure fetal head structures in the ultrasound images. The measurements were classified as microcephaly, macrocephaly, or normal using WHO guidelines for gestational age, and model performance was then compared with that of industry experts. Evaluation metrics included accuracy, precision, recall, F1 score, and the Dice coefficient. The study demonstrated the feasibility of using SegNet for automatic segmentation, measurement of fetal head abnormalities, and classification of macrocephaly and microcephaly, with an accuracy of 98% and a Dice coefficient of 0.97. Compared with industry experts, the model achieved accuracies of 92.5% and 91.2% for the biparietal diameter (BPD) and head circumference (HC) measurements, respectively. Deep learning models can enhance prenatal diagnosis workflows, especially in resource-constrained settings. Future work should optimize model performance, explore more complex models, and expand datasets to improve generalizability. If adopted, these technologies can support prenatal care delivery. Trial registration: not applicable.
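The measurement-and-classification step described above can be pictured as fitting an ellipse to the segmented head, deriving HC and BPD, and comparing HC against gestational-age reference limits. The OpenCV sketch below is illustrative only; the reference limits are placeholder parameters, not WHO values.

```python
# Illustrative post-segmentation step: ellipse fit on the fetal-head mask,
# HC via Ramanujan's perimeter approximation, BPD from the minor axis, then
# classification against gestational-age-specific limits (placeholders).
import cv2
import numpy as np

def measure_head(mask: np.ndarray, mm_per_px: float):
    """mask: binary uint8 segmentation of the fetal head."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), (w, h), angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))
    a = max(w, h) * mm_per_px / 2  # semi-major axis in mm
    b = min(w, h) * mm_per_px / 2  # semi-minor axis in mm
    hc = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))  # Ramanujan
    bpd = 2 * b
    return hc, bpd

def classify(hc_mm: float, lower_mm: float, upper_mm: float) -> str:
    # lower/upper are gestational-age-specific limits from a reference chart.
    if hc_mm < lower_mm:
        return "microcephaly"
    if hc_mm > upper_mm:
        return "macrocephaly"
    return "normal"
```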

Beyond Accuracy: Evaluating certainty of AI models for brain tumour detection.

Nisa ZU, Bhatti SM, Jaffar A, Mazhar T, Shahzad T, Ghadi YY, Almogren A, Hamam H

PubMed · May 26, 2025
Brain tumors pose a severe health risk, often leading to fatal outcomes if not detected early. While most studies focus on improving classification accuracy, this research emphasizes prediction certainty, quantified through loss values. Traditional metrics such as accuracy and precision do not capture confidence in predictions, which is critical for medical applications. This study establishes a correlation between lower loss values and higher prediction certainty, supporting more reliable tumor classification. We evaluate a CNN, ResNet50, XceptionNet, and a proposed model (VGG19 with customized classification layers) using accuracy, precision, recall, and loss. Results show that while accuracy remains comparable across models, the proposed model achieves the best performance (96.95% accuracy, 0.087 loss), outperforming the others in both precision and recall. These findings demonstrate that certainty-aware AI models are essential for reliable clinical decision-making. This study highlights the potential of AI to bridge the shortage of medical professionals by integrating reliable diagnostic tools into healthcare. AI-powered systems can enhance early detection and improve patient outcomes, reinforcing the need for certainty-driven AI adoption in medical imaging.
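The certainty argument can be made concrete with per-sample loss: cross-entropy is low exactly when the model assigns high probability to the true class, so it doubles as a confidence signal. A minimal, self-contained illustration (the logits are made-up examples):

```python
# Per-sample cross-entropy as a certainty proxy: low loss = high certainty.
import torch
import torch.nn.functional as F

logits = torch.tensor([[4.0, 0.5, 0.2],    # confident prediction
                       [1.1, 1.0, 0.9]])   # uncertain prediction
labels = torch.tensor([0, 0])

per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
print(per_sample_loss)  # first entry small, second entry near log(3)
```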

Improving brain tumor diagnosis: A self-calibrated 1D residual network with random forest integration.

Sumithra A, Prathap PMJ, Karthikeyan A, Dhanasekaran S

PubMed · May 26, 2025
Medical specialists require precise MRI analysis for the accurate diagnosis of brain tumors. Research to date has developed multiple artificial intelligence (AI) techniques to automate brain tumor identification, but existing approaches often depend on a single dataset, limiting their generalization across diverse clinical scenarios. This work introduces SCR-1DResNet, a diagnostic tool for brain tumor detection that combines a self-calibrated Random Forest with a one-dimensional residual network. MRI images are first acquired from multiple Kaggle datasets, then passed through a stepwise pipeline of noise removal, image enhancement, resizing, normalization, and skull stripping. After preprocessing, the WaveSegNet model extracts important tumor attributes at multiple scales. The Random Forest classifier and the one-dimensional residual network are then combined via self-calibration optimization to form the SCR-1DResNet model and improve prediction reliability. Experiments show that the proposed system achieves a precision of 98.50%, an accuracy of 98.80%, and a recall of 97.80%. The SCR-1DResNet model demonstrates superior diagnostic capability and faster performance, showing strong prospects for clinical decision support systems and improved care of neurological and oncological patients.
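The abstract does not publish SCR-1DResNet's layer layout, but its one-dimensional residual component presumably resembles a standard 1D residual block; the hypothetical PyTorch sketch below shows one such block, with the self-calibration step omitted.

```python
# Hedged sketch of a one-dimensional residual block of the kind SCR-1DResNet
# is described as combining with a Random Forest; the real architecture and
# the self-calibration optimization are not specified in this abstract.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # identity shortcut

# Feature vectors (e.g., multi-scale WaveSegNet outputs) treated as 1D signals.
x = torch.randn(8, 32, 256)     # batch, channels, length
print(ResBlock1D(32)(x).shape)  # torch.Size([8, 32, 256])
```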

Diffusion based multi-domain neuroimaging harmonization method with preservation of anatomical details.

Lan H, Varghese BA, Sheikh-Bahaei N, Sepehrband F, Toga AW, Choupan J

PubMed · May 26, 2025
In multi-center neuroimaging studies, technical variability caused by batch differences can hinder the ability to aggregate data across sites and negatively impact the reliability of study-level results. Recent efforts in neuroimaging harmonization aim to minimize these technical gaps and reduce technical variability across batches. While Generative Adversarial Networks (GANs) have been a prominent method for harmonization tasks, GAN-harmonized images suffer from artifacts and anatomical distortions. Given the advances in denoising diffusion probabilistic models, which produce high-fidelity images, we assessed the efficacy of diffusion models for neuroimaging harmonization. Whereas GAN-based methods intrinsically transform imaging styles between two domains per model, we demonstrate the diffusion model's superior capability to harmonize images across multiple domains with a single model. Our experiments highlight that the learned domain-invariant anatomical condition reinforces the model to accurately preserve anatomical details while differentiating batch differences at each diffusion step. The proposed method was tested on T1-weighted MRI images from two public neuroimaging datasets, ADNI1 and ABIDE II, yielding harmonization results with consistent anatomy preservation and a superior FID score compared with GAN-based methods. We conducted multiple analyses, including extensive quantitative and qualitative evaluations against baseline models, an ablation study showcasing the benefits of the learned domain-invariant conditions, and demonstrations of improved consistency in perivascular space segmentation and volumetric analyses after harmonization.
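In a standard DDPM formulation, the training step alluded to above would condition the noise predictor on the timestep, the target domain, and the domain-invariant anatomical condition. The sketch below assumes a placeholder `denoiser` network and textbook DDPM notation; it is not the authors' implementation.

```python
# Conceptual sketch of a conditioned DDPM training step: at a random timestep
# the model predicts the injected noise given the noisy image, the timestep,
# the target batch/domain, and an anatomical condition.
import torch

def ddpm_training_step(denoiser, x0, anat_cond, domain_id, alphas_cumprod):
    """One DDPM training loss under domain + anatomy conditioning."""
    t = torch.randint(0, len(alphas_cumprod), (x0.shape[0],))
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise  # forward process
    pred = denoiser(x_t, t, anat_cond, domain_id)         # reverse model
    return torch.mean((pred - noise) ** 2)                # simple DDPM loss
```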

ScanAhead: Simplifying standard plane acquisition of fetal head ultrasound.

Men Q, Zhao H, Drukker L, Papageorghiou AT, Noble JA

PubMed · May 26, 2025
The fetal standard plane acquisition task aims to detect an ultrasound (US) image characterized by specified anatomical landmarks and appearance for assessing fetal growth. In practice, however, variability in operator skill and possible fetal motion can make it challenging to acquire a satisfactory standard plane. To support operators in this task, this paper first describes an approach that automatically predicts the fetal head standard plane from a video segment approaching the standard plane. A transformer-based image predictor is proposed to produce a high-quality standard plane by modeling head anatomy at diverse scales within the US video frames. Because of the visual gap between the video frames and the standard plane image, the predictor is equipped with an offset adaptor that performs domain adaptation, translating off-plane structures into the anatomy that would typically appear in a standard plane view. To enhance the anatomical detail of the predicted US image, the approach is extended with a second modality, US probe movement, which provides 3D location information. Quantitative and qualitative studies on two different head biometry planes demonstrate that the proposed US image predictor produces clinically plausible standard planes, outperforming comparable published methods. The dual-modality solution yields improved visualization with enhanced anatomical detail in the predicted US image. Clinical evaluations further demonstrate consistency between the predicted echo textures and the echo patterns expected in a typical real standard plane, indicating the method's clinical feasibility for improving the standard plane acquisition process.
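As a rough sketch of the single-modality predictor, a transformer encoder can summarize embeddings of the approach-video frames and regress an embedding for the upcoming standard plane; the offset adaptor and the probe-movement branch are omitted here, and all dimensions are assumptions.

```python
# Hypothetical transformer-based plane predictor over frame embeddings.
import torch
import torch.nn as nn

class PlanePredictor(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, dim)  # predicted standard-plane embedding

    def forward(self, frame_emb):        # (batch, frames, dim)
        z = self.encoder(frame_emb)      # attend across the approach video
        return self.head(z.mean(dim=1))  # pool over time, then predict

model = PlanePredictor()
print(model(torch.randn(2, 16, 256)).shape)  # torch.Size([2, 256])
```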

MobNas ensembled model for breast cancer prediction.

Shahzad T, Saqib SM, Mazhar T, Iqbal M, Almogren A, Ghadi YY, Saeed MM, Hamam H

PubMed · May 25, 2025
Breast cancer poses an immense threat to humankind, creating a need to diagnose this devastating disease early, accurately, and simply. While substantial progress has been made in developing machine learning, deep learning, and transfer learning models, issues with diagnostic accuracy and diagnostic errors persist. This paper introduces MobNAS, a model that uses MobileNetV2 and NASNetLarge to classify breast cancer images as benign, malignant, or normal. The study employs a multi-class classification design and a publicly available dataset of 1,578 ultrasound images, comprising 891 benign, 421 malignant, and 266 normal cases. MobileNetV2 runs well on devices with less computational capability than NASNetLarge requires, which enhances the model's applicability across settings. On the breast cancer image dataset, the proposed MobNAS model achieved an accuracy of 97%, a Mean Absolute Error (MAE) of 0.05, and a Matthews Correlation Coefficient (MCC) of 95%. These findings show that MobNAS can enhance diagnostic accuracy and reduce existing shortcomings in breast cancer detection.
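One plausible reading of the MobNAS ensemble is probability averaging across the two backbones; the Keras sketch below builds a MobileNetV2 and a NASNetLarge branch over the three reported classes and averages their softmax outputs. How MobNAS actually combines the backbones is an assumption here.

```python
# Hypothetical two-backbone ensemble over three classes
# (benign, malignant, normal), combined by softmax averaging.
import tensorflow as tf

def build_branch(base_fn, input_shape, num_classes=3):
    base = base_fn(weights="imagenet", include_top=False,
                   input_shape=input_shape, pooling="avg")
    inp = tf.keras.Input(shape=input_shape)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(base(inp))
    return tf.keras.Model(inp, out)

mobile = build_branch(tf.keras.applications.MobileNetV2, (224, 224, 3))
nas = build_branch(tf.keras.applications.NASNetLarge, (331, 331, 3))

def ensemble_predict(img_224, img_331):
    # Each backbone sees the image at its native input resolution.
    return (mobile(img_224) + nas(img_331)) / 2.0
```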

Evaluation of synthetic training data for 3D intraoral reconstruction of cleft patients from single images.

Lingens L, Lill Y, Nalabothu P, Benitez BK, Mueller AA, Gross M, Solenthaler B

PubMed · May 24, 2025
This study investigates the effectiveness of synthetic training data for predicting 2D landmarks used in 3D intraoral reconstruction of cleft lip and palate patients. We take inspiration from existing landmark prediction and 3D reconstruction techniques for faces and demonstrate their potential in medical applications. We generated both real and synthetic datasets from intraoral scans and videos. A convolutional neural network was trained with a negative Gaussian log-likelihood loss function to predict 2D landmarks and their corresponding confidence scores. The predicted landmarks were then used to fit a statistical shape model, generating 3D reconstructions from individual images. We analyzed the model's performance on real patient data and explored the dataset size required to overcome the domain gap between synthetic and real images. Our approach produces satisfactory results on synthetic data and shows promise when tested on real data. Because the method achieves rapid 3D reconstruction from single images, it can provide significant value in day-to-day medical work. Our results demonstrate that synthetic training data are viable for training models to predict 2D landmarks and reconstruct 3D meshes in patients with cleft lip and palate. This approach offers an accessible, low-cost alternative to traditional methods, using smartphone technology for noninvasive, rapid, and accurate 3D reconstruction in clinical settings.
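The negative Gaussian log-likelihood loss mentioned above has a compact closed form: the network predicts a mean (the landmark) and a log-variance (its confidence), and the loss trades squared error against a log-variance penalty, so uncertain predictions pay less for being wrong. A minimal PyTorch sketch with illustrative shapes:

```python
# Negative-Gaussian log-likelihood for landmark regression with confidence.
# Up to a constant, NLL = 0.5 * (log var + (target - mu)^2 / var).
import torch

def gaussian_nll(pred_mu, pred_logvar, target):
    """pred_mu, pred_logvar, target: (batch, n_landmarks, 2)."""
    var = pred_logvar.exp()
    nll = 0.5 * (pred_logvar + (target - pred_mu) ** 2 / var)
    return nll.mean()

mu = torch.randn(4, 8, 2, requires_grad=True)      # predicted landmarks
logvar = torch.zeros(4, 8, 2, requires_grad=True)  # predicted log-variances
target = torch.randn(4, 8, 2)                      # ground-truth landmarks
loss = gaussian_nll(mu, logvar, target)
loss.backward()
print(float(loss))
```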