ICCV preprint cover

I Am Big, You Are Little; I Am Right, You Are Wrong

We study how different vision architectures focus on minimal sufficient pixel sets that determine classifications. Using this notion of model 'concentration', we show architectures exhibit statistically distinct concentration patterns (notably ConvNeXt and EVA), and that misclassified images require larger pixel sets than correct classifications.

August 2025 · David A. Kelly, Akchunya Chanchal, Nathan Blake
SpecReX cover

SpecReX: Explainable AI for Raman Spectroscopy

Explainable AI method for Raman spectroscopy based on causal responsibility, with validation on simulated spectra.

March 2025 · Nathan Blake, David A. Kelly, Akchunya Chanchal, Sarah Kapllani-Mucaj, Geraint Thomas, Hana Chockler
Comparison of different explainability tools

Explainable AI for the Classification of Brain MRIs

Background: Machine learning applied to medical imaging labors under a lack of trust due to the inscrutability of AI models and the lack of explanations for their outcomes. Explainable AI has therefore become an essential research topic. Unfortunately, many AI models, both research and commercial, are unavailable for white-box examination, in which access to a model’s internals is required. There is therefore a need for black-box explainability tools in the medical domain. Several such tools for general images exist, but their relative strengths and weaknesses when applied to medical images have not been explored. Methods: We use a publicly available dataset of brain MRI images and a model trained to classify cancerous and non-cancerous slices to assess a number of black-box explainability tools (LIME, RISE, IG, SHAP and ReX) and one white-box tool (Grad-CAM) as a baseline comparator. We use several common measures to assess the concordance of the explanations with clinician-provided annotations, including the Dice Coefficient, Hausdorff Distance and Jaccard Index, and propose a Penalised Dice Coefficient which combines the strengths of these measures. Results: ReX (Dice Coefficient = 0.42±0.20) consistently performs relatively well across all measures, with performance comparable to Grad-CAM (Dice Coefficient = 0.33±0.22). A panel of images is presented for qualitative inspection, showing a number of failure modes. Conclusion: In contrast to general images, we find evidence that most black-box explainability tools do not perform well for medical image classifications when used with default settings.
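The Dice Coefficient and Jaccard Index used above have standard definitions (Dice = 2|A∩B| / (|A| + |B|), Jaccard = |A∩B| / |A∪B|). A minimal NumPy sketch for comparing binary segmentation masks; the function names and the convention of returning 1.0 for two empty masks are illustrative choices, not taken from the paper:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jaccard_index(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard = |A∩B| / |A∪B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Example: a covers two pixels, b covers one of them.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_coefficient(a, b))  # 2*1 / (2+1) ≈ 0.667
print(jaccard_index(a, b))     # 1 / 2 = 0.5
```

The paper's Penalised Dice Coefficient modifies this basic overlap score; its exact penalty term is not given here, so it is omitted from the sketch.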

August 2024 · Nathan Blake, David Kelly, Santiago Peña, Akchunya Chanchal and Hana Chockler
Some Regression Analysis

Investigating the Effect of Patient-Related Factors on Computed Tomography Radiation Dose Using Regression and Correlation Analysis

Analysis of 333 CT exams shows dose indices (CTDIvol, DLP, ED, SSDE) correlate with BMI and weight across chest/cardiac/abdomen studies, while age has little effect. Multiple regression improves correlations; CTDIvol and DLP are most strongly related—informing safer, optimized CT protocols.

December 2023 · Mohammad AlShurbaji, Sara El Haout, Akchunya Chanchal, Salam Dhou and Entesar Dalah
The system architecture for the AWS test bench

Comparison of Cloud-Computing Providers for Deployment of Object-Detection Deep Learning Models

Comparative study of deploying YOLOv8 on Azure vs AWS. Azure is faster for downloads, inference and throughput; AWS is stronger on upload speed, cost, deployment ease, ML services and SLAs—offering a practical guide to provider choice.

November 2023 · Prem Rajendran, Sarthak Maloo, Rohan Mitra, Akchunya Chanchal and Raafat Aburukba
The SpectMatch Semi-Supervised Algorithm

Exploring Semi-Supervised Learning for Audio-Based Automated Classroom Observations

Adapts FixMatch for audio-based classroom observation to cut labeling costs. Trained on real classroom audio, the model achieves near-supervised F1 (0.81 with 20% labels), indicating SSL can automate observation effectively with minimal labeled data.

September 2022 · Akchunya Chanchal and Imran Zualkernan