Jain B, Varshney A, Vithal M

PubMed · Oct 21, 2025
Attention deficit hyperactivity disorder (ADHD) is a heterogeneous neurobehavioural condition that is common among children and adolescents. It is marked by symptoms such as inattention, hyperactivity, and impulsivity that substantially impact an individual's everyday living and well-being. Despite its prevalence, the complicated nature of the disorder makes an accurate and timely diagnosis challenging, and the lack of standardized testing techniques further deepens the problem. As a result, the number of undiagnosed cases is alarmingly high. This research presents a data-driven approach to enhance clinical decision-making in ADHD diagnosis using quantum machine learning and evolutionary algorithms. Leveraging the publicly available ADHD-200 dataset, which contains neuroimaging and associated phenotypic data from multiple global research sites, a comprehensive diagnostic framework is proposed. Feature extraction is carried out using a Quantum Convolutional Neural Network (QCNN), designed to identify nuanced patterns within high-dimensional MRI and behavioural data. To further refine the dataset, a novel Differential Evolution-Swarm Optimization (DE-Swarm) algorithm is introduced, combining the strengths of Differential Evolution and Particle Swarm Optimization for effective feature selection. This algorithm ensures minimal redundancy while improving interpretability and model robustness. The selected features are then fed into an AutoML system for model selection and hyperparameter optimization. Among several models tested, the Gradient Boosting Classifier achieved the highest test accuracy of 98.53%, outperforming existing techniques in terms of precision, recall, and specificity. By integrating quantum computing principles with optimized evolutionary strategies, this study contributes a robust and scalable framework for ADHD diagnosis. The work demonstrates how the integration of phenotypic and neuroimaging data, when processed through advanced machine learning pipelines, can significantly enhance diagnostic precision and support more personalized clinical assessments.
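
The abstract does not detail the DE-Swarm operators, so the sketch below only illustrates how a hybrid Differential Evolution / Particle Swarm feature selector of this kind might be wired around a Gradient Boosting fitness function. The population size, penalty weight, update coefficients, and the synthetic data are assumptions, not the paper's implementation.

```python
# Hypothetical DE-PSO hybrid ("DE-Swarm"-style) feature selection sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=40, n_informative=8, random_state=0)

def fitness(position):
    mask = position > 0.5                       # threshold continuous position to a feature mask
    if not mask.any():
        return 0.0
    clf = GradientBoostingClassifier(random_state=0)
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return acc - 0.002 * mask.sum()             # small penalty to discourage redundant features

n_pop, n_feat, F, CR, w, c1, c2 = 8, X.shape[1], 0.6, 0.9, 0.5, 1.5, 1.5
pos = rng.random((n_pop, n_feat))
vel = np.zeros_like(pos)
fit = np.array([fitness(p) for p in pos])
pbest, pbest_fit = pos.copy(), fit.copy()

for gen in range(5):
    gbest = pbest[pbest_fit.argmax()]
    for i in range(n_pop):
        # Differential Evolution: mutate with three distinct peers, then binomial crossover
        a, b, c = pos[rng.choice([j for j in range(n_pop) if j != i], 3, replace=False)]
        trial = np.where(rng.random(n_feat) < CR, a + F * (b - c), pos[i])
        # Particle Swarm step: pull the trial toward personal and global bests
        vel[i] = w * vel[i] + c1 * rng.random(n_feat) * (pbest[i] - trial) \
                            + c2 * rng.random(n_feat) * (gbest - trial)
        trial = np.clip(trial + vel[i], 0.0, 1.0)
        f_trial = fitness(trial)
        if f_trial > fit[i]:                    # greedy DE-style replacement
            pos[i], fit[i] = trial, f_trial
            if f_trial > pbest_fit[i]:
                pbest[i], pbest_fit[i] = trial, f_trial

selected = np.flatnonzero(pbest[pbest_fit.argmax()] > 0.5)
print("selected feature indices:", selected)
```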

Reschke P, Gruenewald LD, Koch V, Gotta J, Höhne E, Nica AI, Yel I, D'Angelo T, Ascenti G, Bucolo GM, Lanzafame LRM, Jubica E, Eichler K, Vogl TJ, Booz C

PubMed · Oct 21, 2025
Accurate detection of intracranial hemorrhage (ICH) on non-contrast CT is critical in emergency settings, where missed diagnoses may delay treatment and worsen outcomes. While artificial intelligence (AI) models demonstrate high standalone performance, their additive value as a second reader for radiology residents is not well established. This retrospective study included 1,337 non-contrast head CT scans from 2015 to 2019 (670 ICH-positive and 667 ICH-negative). A previously validated AI model was used for ICH detection. Two radiology residents reviewed all scans in consensus, first without and later with AI support after a 30-day washout. Ground truth was established by expert consensus, and diagnostic performance metrics were calculated. AI assistance significantly improved the residents' diagnostic performance: sensitivity increased from 0.85 to 0.94 and specificity from 0.87 to 0.97 (both p < 0.01), ROC-AUC rose from 0.86 to 0.95, and PR-AUC from 0.83 to 0.95 (p < 0.0001). The number of false negatives dropped from 101 to 41 with AI support. The greatest benefit was observed in subdural hematomas (SDH), where misses declined from 32 to 9 (20.3% to 5.7%; p < 0.001), corresponding to a 72% relative risk reduction (RRR). Misses also decreased for intraparenchymal hemorrhages (IPH: 37 to 20; RRR 46%) and subarachnoid hemorrhages (SAH: 30 to 11; RRR 63%). AI support reduced common error sources: small hemorrhage volume (48 to 21 errors), atypical locations (30 to 12), and image-degrading artifacts (23 to 8). False positives fell from 87 to 21. By reducing diagnostic errors and supporting learning, AI serves as a valuable second reader for radiology residents, enhancing both patient safety and resident training in ICH detection.
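
The headline sensitivity, specificity, and relative risk reduction figures follow directly from the reported counts; the short check below recomputes them (values round to those in the abstract).

```python
# Quick check of the headline numbers from the reported counts (1,337 scans:
# 670 ICH-positive, 667 ICH-negative; false negatives 101 -> 41, false positives 87 -> 21).
pos, neg = 670, 667

def sens_spec(fn, fp):
    sensitivity = (pos - fn) / pos
    specificity = (neg - fp) / neg
    return sensitivity, specificity

print("without AI: sens=%.2f spec=%.2f" % sens_spec(fn=101, fp=87))   # ~0.85 / 0.87
print("with AI:    sens=%.2f spec=%.2f" % sens_spec(fn=41,  fp=21))   # ~0.94 / 0.97

# Relative risk reduction for missed subdural hematomas: 20.3% -> 5.7%
rrr = (0.203 - 0.057) / 0.203
print("SDH miss relative risk reduction: %.0f%%" % (rrr * 100))        # ~72%
```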

Huang R, Wang X, Qiu Y, Lin W, Wei L, Zhou J, Huang C, Chen S

PubMed · Oct 21, 2025
This study utilized artificial intelligence (AI)-assisted chest computed tomography (CT) to assess bone mineral density (BMD) in patients requiring long-term glucocorticoid (GC) therapy. In this retrospective study, dual-energy X-ray absorptiometry (DXA) was used as the gold standard for BMD assessment, and the area under the receiver operating characteristic curve (AUC) was used to test diagnostic accuracy. We combined opportunistic chest CT and AI to measure vertebral BMD, analyzing 249 chest CT-DXA pairs from 235 patients requiring GC therapy. The prevalence of osteopenia and osteoporosis was 28.51% and 22.13%, respectively. The average thoracic 11 (T11) and T12 vertebral BMD measured by AI (AI-BMD) was 154.83 mg/cm³ and 147.56 mg/cm³, respectively. The AUC of AI-BMD for osteopenia/osteoporosis was 0.867 (95% confidence interval (CI): 0.823, 0.911) at T11 and 0.844 (95% CI: 0.795, 0.893) at T12, with no significant difference between T11 and T12 in the overall cohort. Eighty of the chest CT-DXA pairs came from patients already on long-term GC therapy; the AUC of AI-BMD for osteopenia/osteoporosis was excellent both in patients on long-term therapy and in those initiating GC therapy, with no significant difference between the two subgroups. We found that AI-assisted chest CT is a convenient diagnostic tool for BMD assessment: the AUC of AI-BMD for osteopenia/osteoporosis at T11 and T12 is excellent in patients requiring long-term GC therapy, and with AI-assisted opportunistic chest CT, patients can receive BMD assessment without extra burden.
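
As a rough illustration of the evaluation, the snippet below shows how an AUC of this kind can be computed with the AI-derived vertebral BMD as the continuous score and DXA-defined osteopenia/osteoporosis status as the reference label; the BMD distributions below are synthetic placeholders, not study data.

```python
# Sketch of the AI-BMD vs. DXA evaluation with synthetic values.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 249
dxa_low_bmd = rng.random(n) < 0.5                      # 1 = osteopenia/osteoporosis by DXA
ai_bmd = np.where(dxa_low_bmd,                         # mg/cm^3, lower in the low-BMD group
                  rng.normal(115, 25, n),
                  rng.normal(160, 25, n))

# Lower BMD indicates disease, so score with the negated measurement.
auc = roc_auc_score(dxa_low_bmd, -ai_bmd)
print(f"AUC of AI-BMD for osteopenia/osteoporosis: {auc:.3f}")
```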

Luo L, An X, Zhang J, Zhou W, Zhao X, Zhao H, Tian Y, Chen T, Zhao F

PubMed · Oct 21, 2025
To develop a deep learning model for predicting molecular subgroups of medulloblastoma (MB) using preoperative brain MRI. This study included a cohort of 350 patients with MB for model development. Preoperative multiparametric brain MRIs were acquired, and molecular classification data for tumor samples were analyzed. A dual-task deep learning model, composed of a 3D Swin Transformer backbone and a Transformer-based mask decoder, was developed for the prediction of MB molecular subgroups and was jointly optimized with a parallel task of tumor and cerebellum segmentation. Ablation analysis was conducted to verify the effectiveness of the dual-task model design. An independent test cohort of 126 patients with MB was established to validate the predictive performance of the dual-task model. Our dual-task deep learning model demonstrated superior performance for MB molecular subgroup prediction, achieving an AUC of 0.877, accuracy of 88.9%, sensitivity of 71.6%, and specificity of 91.9%. The performance remained robust across both adult and pediatric populations, with AUCs of 0.915 and 0.871, respectively. Furthermore, our approach generalized effectively to the independent test cohort, yielding an AUC of 0.853, accuracy of 89.7%, sensitivity of 73.5%, and specificity of 92.1%. Ablation analysis demonstrated a significant AUC improvement of 0.169 (95% CI 0.097-0.244) with the dual-task model design, and compared with the radiomics-based model, our deep learning model achieved an AUC that was 0.156 higher (95% CI 0.079-0.233). Our proposed dual-task deep learning model enables automated and accurate prediction of MB molecular subgroups.
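
A minimal sketch of the dual-task idea follows: a shared 3D backbone feeding a subgroup classification head and a tumor/cerebellum segmentation head, trained with a joint loss. A small 3D CNN stands in for the paper's 3D Swin Transformer backbone and Transformer-based mask decoder, and the layer sizes, class counts, and input volume size are assumptions.

```python
# Dual-task (classification + segmentation) sketch with a stand-in backbone.
import torch
import torch.nn as nn

class DualTaskMB(nn.Module):
    def __init__(self, n_subgroups=4, n_seg_classes=3):      # e.g. background/tumor/cerebellum
        super().__init__()
        self.backbone = nn.Sequential(                        # stand-in for the 3D Swin Transformer
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(                        # molecular subgroup prediction
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_subgroups),
        )
        self.seg_head = nn.Sequential(                        # stand-in for the mask decoder
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(16, n_seg_classes, 2, stride=2),
        )

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.seg_head(feats)

model = DualTaskMB()
mri = torch.randn(2, 1, 64, 64, 64)                           # batch of preoperative MRI volumes
subgroup = torch.randint(0, 4, (2,))
mask = torch.randint(0, 3, (2, 64, 64, 64))

logits_cls, logits_seg = model(mri)
loss = nn.CrossEntropyLoss()(logits_cls, subgroup) + \
       nn.CrossEntropyLoss()(logits_seg, mask)                # joint dual-task objective
loss.backward()
print(logits_cls.shape, logits_seg.shape, float(loss))
```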

Yasaka K, Katayama A, Sakamoto N, Sato Y, Asari Y, Kanzawa J, Sonoda Y, Suzuki Y, Amemiya S, Kiryu S, Abe O

PubMed · Oct 21, 2025
To evaluate the impact of a super-resolution deep learning reconstruction (SR-DLR) algorithm on the evaluation of pituitary neuroendocrine tumors (PitNET) and on the image quality of pituitary MRI compared with conventional images reconstructed with the zero-filling interpolation (ZIP) technique. This retrospective study included 29 patients with PitNET who underwent pituitary MRI. T2-weighted coronal images were reconstructed with SR-DLR and ZIP. Three readers assessed the images in terms of pituitary stalk deviation, noise, sharpness, depiction of PitNET, and diagnostic acceptability. A radiologist placed circular or ovoid regions of interest (ROIs) on the lateral ventricle and the tumor, from which the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. The radiologist also placed a linear ROI crossing the septum pellucidum perpendicularly; from the signal intensity profile along this ROI, the edge rise slope (ERS) and full width at half maximum (FWHM) were calculated. Inter-reader agreement in the evaluation of pituitary stalk deviation tended to be superior with SR-DLR (0.518) compared with ZIP (0.405). Scores in the qualitative image analyses were significantly better with SR-DLR than with ZIP for all evaluation items (p < 0.001). SNR and CNR were significantly higher with SR-DLR than with ZIP (p < 0.001). The ERS (5433 vs. 2177 for SR-DLR vs. ZIP) and FWHM (0.67 vs. 1.27 mm) indicated significantly enhanced spatial resolution with SR-DLR compared with ZIP. SR-DLR tended to enhance inter-reader agreement in the evaluation of pituitary stalk deviation and significantly improved the quality of pituitary MRI images compared with conventional ZIP images.
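
For readers unfamiliar with the line-profile metrics, the sketch below shows how ERS and FWHM can be derived from a signal-intensity profile sampled along a linear ROI crossing a thin bright structure such as the septum pellucidum; the synthetic profile and the sampling interval are illustrative assumptions, not measurements from this study.

```python
# ERS and FWHM from a 1-D signal-intensity profile (synthetic example).
import numpy as np

pixel_mm = 0.1                                   # assumed sampling interval along the ROI (mm)
x = np.arange(0, 6, pixel_mm)                    # position along the linear ROI (mm)
profile = 100 + 900 * np.exp(-((x - 3.0) ** 2) / (2 * 0.4 ** 2))   # background + bright septum

# Edge rise slope: steepest intensity change per mm on the rising edge
ers = np.max(np.diff(profile) / pixel_mm)

# FWHM: width of the region above half of the peak height relative to background
background = profile.min()
half_max = background + (profile.max() - background) / 2
above = np.flatnonzero(profile >= half_max)
fwhm_mm = (above[-1] - above[0]) * pixel_mm

print(f"ERS ~ {ers:.0f} intensity units/mm, FWHM ~ {fwhm_mm:.2f} mm")
```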

Neel Patel, Alexander Wong, Ashkan Ebadi

arXiv preprint · Oct 21, 2025
Tuberculosis remains a critical global health issue, particularly in resource-limited and remote areas. Early detection is vital for treatment, yet the lack of skilled radiologists underscores the need for artificial intelligence (AI)-driven screening tools. Developing reliable AI models is challenging due to the necessity for large, high-quality datasets, which are costly to obtain. To tackle this, we propose a teacher-student framework which enhances both disease and symptom detection on chest X-rays by integrating two supervised heads and a self-supervised head. Our model achieves an accuracy of 98.85% for distinguishing between COVID-19, tuberculosis, and normal cases, and a macro-F1 score of 90.09% for multilabel symptom detection, significantly outperforming baselines. The explainability assessments also show the model bases its predictions on relevant anatomical features, demonstrating promise for deployment in clinical screening and triage settings.
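
A minimal sketch of the three-head design follows: a shared chest X-ray encoder feeding a disease classification head, a multilabel symptom head, and a self-supervised head (a simple rotation-prediction pretext task is used here as a stand-in, since the abstract does not specify the self-supervised objective). The backbone, head sizes, symptom count, and loss weighting are assumptions.

```python
# Shared encoder with two supervised heads and one self-supervised head (sketch).
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
disease_head = nn.Linear(32, 3)     # COVID-19 / tuberculosis / normal
symptom_head = nn.Linear(32, 8)     # multilabel symptoms (count assumed)
ssl_head = nn.Linear(32, 4)         # predicts one of 4 rotations (self-supervised stand-in)

xray = torch.randn(4, 1, 224, 224)
disease = torch.randint(0, 3, (4,))
symptoms = torch.randint(0, 2, (4, 8)).float()
rot = torch.randint(0, 4, (4,))
rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(xray, rot)])

z = encoder(xray)
loss = nn.CrossEntropyLoss()(disease_head(z), disease) \
     + nn.BCEWithLogitsLoss()(symptom_head(z), symptoms) \
     + nn.CrossEntropyLoss()(ssl_head(encoder(rotated)), rot)
loss.backward()
print(float(loss))
```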

Anna Oliveras, Roger Marí, Rafael Redondo, Oriol Guardià, Ana Tost, Bhalaji Nagarajan, Carolina Migliorelli, Vicent Ribas, Petia Radeva

arXiv preprint · Oct 21, 2025
This work introduces a new latent diffusion model to generate high-quality 3D chest CT scans conditioned on 3D anatomical masks. The method synthesizes volumetric images of size 256x256x256 at 1 mm isotropic resolution using a single mid-range GPU, significantly lowering the computational cost compared to existing approaches. The conditioning masks delineate lung and nodule regions, enabling precise control over the output anatomical features. Experimental results demonstrate that conditioning solely on nodule masks leads to anatomically incorrect outputs, highlighting the importance of incorporating global lung structure for accurate conditional synthesis. The proposed approach supports the generation of diverse CT volumes with and without lung nodules of varying attributes, providing a valuable tool for training AI models or healthcare professionals.
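
The mask conditioning can be pictured as follows: the lung and nodule masks are resampled to the latent grid and concatenated to the noisy latent as extra input channels of the denoiser. The tiny stand-in denoiser, channel counts, and grid sizes below are assumptions, not the paper's architecture (which targets 256x256x256 volumes at 1 mm resolution).

```python
# Mask-conditioned denoising step sketch: masks become extra input channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_ch, mask_ch = 4, 2                          # latent channels; lung + nodule masks
denoiser = nn.Sequential(                          # stand-in denoiser (a full model would be a 3D U-Net)
    nn.Conv3d(latent_ch + mask_ch, 32, 3, padding=1), nn.SiLU(),
    nn.Conv3d(32, latent_ch, 3, padding=1),
)

z_t = torch.randn(1, latent_ch, 32, 32, 32)        # noisy latent on a coarse grid
masks = (torch.rand(1, mask_ch, 128, 128, 128) > 0.5).float()   # binary anatomical masks
masks_lowres = F.interpolate(masks, size=(32, 32, 32), mode="nearest")

noise_pred = denoiser(torch.cat([z_t, masks_lowres], dim=1))
print(noise_pred.shape)                            # (1, 4, 32, 32, 32)
```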

Maryam Dialameh, Hossein Rajabzadeh, Jung Suk Sim, Hyock Ju Kwon

arXiv preprint · Oct 21, 2025
Papillary thyroid microcarcinoma (PTMC) is increasingly managed with radio-frequency ablation (RFA), yet accurate lesion segmentation in ultrasound videos remains difficult due to low contrast, probe-induced motion, and heat-related artifacts. The recent Segment Anything Model 2 (SAM-2) generalizes well to static images, but its frame-independent design yields unstable predictions and temporal drift in interventional ultrasound. We introduce EMA-SAM, a lightweight extension of SAM-2 that incorporates a confidence-weighted exponential moving average pointer into the memory bank, providing a stable latent prototype of the tumour across frames. This design preserves temporal coherence through probe pressure and bubble occlusion while rapidly adapting once clear evidence reappears. On our curated PTMC-RFA dataset (124 minutes, 13 patients), EMA-SAM improves maxDice from 0.82 (SAM-2) to 0.86 and maxIoU from 0.72 to 0.76, while reducing false positives by 29%. On external benchmarks, including VTUS and colonoscopy video polyp datasets, EMA-SAM achieves consistent gains of 2-5 Dice points over SAM-2. Importantly, the EMA pointer adds <0.1% FLOPs, preserving real-time throughput of ~30 FPS on a single A100 GPU. These results establish EMA-SAM as a robust and efficient framework for stable tumour tracking, bridging the gap between foundation models and the stringent demands of interventional ultrasound. Code is available at https://github.com/mdialameh/EMA-SAM.
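
The core mechanism can be stated in a few lines: rather than trusting each frame's object pointer equally, the memory keeps a running prototype that updates quickly when the per-frame prediction is confident and barely moves during occlusion or probe-pressure artifacts. The update rule, pointer dimensionality, and alpha value below are illustrative assumptions based on the abstract, not the authors' code.

```python
# Confidence-weighted EMA pointer update sketch.
import torch

def ema_pointer_update(ema_ptr, frame_ptr, confidence, alpha=0.2):
    """Blend the per-frame pointer into the EMA prototype, scaled by confidence in [0, 1]."""
    gate = alpha * confidence                      # low confidence -> keep the old prototype
    return (1.0 - gate) * ema_ptr + gate * frame_ptr

ema_ptr = torch.zeros(256)                         # latent tumour prototype carried across frames
for t in range(100):
    frame_ptr = torch.randn(256)                   # pointer predicted for the current frame
    confidence = 0.1 if 40 <= t < 60 else 0.9      # e.g. bubble occlusion between frames 40-59
    ema_ptr = ema_pointer_update(ema_ptr, frame_ptr, confidence)
print(ema_ptr.norm())
```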

Schulz, M.-A., Siegel, N. T., Ritter, K.

bioRxiv preprint · Oct 21, 2025
This study critically reevaluates the utility of brain-age models within the context of detecting neurological and psychiatric disorders, challenging the conventional emphasis on maximizing chronological age prediction accuracy. Our analysis of T1 MRI data from 46,381 UK Biobank participants reveals that simpler machine learning models, and notably those with excessive regularization, demonstrate superior sensitivity to disease-relevant changes compared to their more complex counterparts, despite being less accurate in chronological age prediction. This counterintuitive discovery suggests that models traditionally deemed inferior might, in fact, offer a more meaningful biomarker for brain health by capturing variations pertinent to disease states. These findings challenge the assumption that accuracy-optimized brain-age models serve as useful normative models of brain aging. Optimizing for age accuracy appears misaligned with normative aims: it drives models to rely on low-variance aging features and to deemphasize higher-variance signals that are more informative about brain health and disease. Consequently, we propose a recalibration of focus towards models that, while less accurate in conventional terms, yield brain-age gaps with larger patient-control effect sizes, offering greater utility in early disease detection and understanding the multifaceted nature of brain aging.
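
To make the trade-off concrete, the sketch below computes the two quantities being compared, chronological-age MAE and the patient-control effect size of the brain-age gap (predicted minus chronological age), for a lightly and a heavily regularized ridge model on synthetic data. The data, feature construction, and group shift are placeholders; the sketch is not expected to reproduce the UK Biobank result, only to show what is measured.

```python
# Brain-age gap vs. age-accuracy evaluation sketch on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 2000, 100
X = rng.normal(size=(n, p))                        # stand-in for T1-derived brain features
age = 60 + 8 * X[:, 0] + rng.normal(0, 4, n)
patient = rng.random(n) < 0.2
X[patient] += 0.3                                  # patients look "older" across features

X_tr, X_te, age_tr, age_te, _, pat_te = train_test_split(
    X, age, patient, test_size=0.5, random_state=0)

for alpha in (1.0, 1e5):                           # lightly vs. heavily regularized model
    gap = Ridge(alpha=alpha).fit(X_tr, age_tr).predict(X_te) - age_te
    mae = np.abs(gap).mean()                       # chronological-age accuracy
    d = (gap[pat_te].mean() - gap[~pat_te].mean()) / gap.std()   # patient-control effect size
    print(f"alpha={alpha:g}: age MAE={mae:.1f} y, patient-control gap effect size d={d:.2f}")
```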

Levy N, Meslin S, Barthélémy R, Benremily F, Bourgeois C, Bourzeix P, Chousterman B, Djadi-Prat J, Ep A, Kezar A, Laidet C, Lanoy E, Léopold V, Pereira H, Plateker O, Rivoalen AS, de Roquetaillade C, Vignon P, Bruno J, Cholley B

PubMed · Oct 21, 2025
Stroke volume is a major determinant of tissue perfusion and, therefore, a key parameter to monitor in patients with haemodynamic instability and hypoperfusion. Left ventricular outflow tract (LVOT) velocity-time integral (VTI) measurement using pulsed-wave Doppler is widely used as an estimation of stroke volume and should be a competence required for every intensive care unit (ICU) physician. Artificial intelligence (AI) applied to ultrasound facilitates the acquisition of adequate images. The aim of the present study is to evaluate the interchangeability of LVOT VTI measurements obtained by minimally trained operators and expert physicians, both guided by AI. This is a prospective multicentre randomised controlled trial. ICU patients in whom fluid administration is considered necessary will be included. A minimally trained operator and an expert will independently measure LVOT VTI, guided by the UltraSight AI software to obtain the best five-chamber view, before and after a 250 mL fluid challenge. The order of acquisition between the two operators will be randomised. A total of 100 patients will be included. The primary endpoint is the relative difference in LVOT VTI between operators. Secondary outcomes include the concordance of the therapeutic decision made by the blinded physician in charge of the patient based on the measures obtained by each operator, and the agreement between absolute values of LVOT VTI obtained by minimally trained and expert operators. The study has been reviewed and approved by a regional ethics committee (Comité de Protection des Personnes-Ile de France II-n°24.00671.000291). An information note will be given to the participant before he or she participates in the study. The present study will be disseminated through peer-reviewed publications and academic and medical conferences. NCT06486467.
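
The link between LVOT VTI and stroke volume rests on a simple formula: stroke volume is the LVOT cross-sectional area (assumed circular) multiplied by the VTI of the pulsed-wave Doppler trace, so with a fixed LVOT diameter the relative change in VTI equals the relative change in stroke volume, which is what the primary endpoint exploits. The diameter and VTI values in the worked example below are illustrative, not study data.

```python
# Worked example: stroke volume from LVOT diameter and VTI.
import math

def stroke_volume_ml(lvot_diameter_cm, vti_cm):
    area_cm2 = math.pi * (lvot_diameter_cm / 2) ** 2   # LVOT assumed circular
    return area_cm2 * vti_cm                            # cm^2 * cm = mL

sv_before = stroke_volume_ml(lvot_diameter_cm=2.0, vti_cm=18)
sv_after = stroke_volume_ml(lvot_diameter_cm=2.0, vti_cm=21)   # after a 250 mL fluid challenge
print(f"SV before: {sv_before:.0f} mL, after: {sv_after:.0f} mL, "
      f"relative change: {100 * (sv_after - sv_before) / sv_before:.0f}%")
```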