The system architecture for the AWS test bench

Comparison of Cloud-Computing Providers for Deployment of Object-Detection Deep Learning Models

As cloud computing rises in popularity across diverse industries, it becomes imperative to compare and select the most appropriate cloud provider for specific use cases. This research conducts an in-depth comparative analysis of two prominent cloud platforms, Microsoft Azure and Amazon Web Services (AWS), with a specific focus on their suitability for deploying object-detection algorithms. The analysis covers both quantitative metrics (upload and download times, throughput, and inference time) and qualitative criteria such as cost effectiveness, machine learning resource availability, ease of deployment, and service-level agreements (SLAs). By deploying the YOLOv8 object-detection model, this study measures these metrics on both platforms, providing empirical evidence for platform evaluation. Furthermore, this research examines general platform availability and information accessibility to highlight differences in the qualitative aspects. This paper concludes that Azure excels in download time (average 0.49 s/MB), inference time (average 0.60 s/MB), and throughput (1145.78 MB/s), while AWS excels in upload time (average 1.84 s/MB) and offers better cost effectiveness, easier deployment, a wider ML service catalog, and a stronger SLA. However, the choice between the two platforms depends on how these metrics are weighted against business-specific requirements. Hence, the paper concludes by presenting a comprehensive comparison organized around such requirements, aiding stakeholders in making informed decisions when selecting a cloud platform for their machine learning projects.

Prem Rajendran, Sarthak Maloo, Rohan Mitra, Akchunya Chanchal and Raafat Aburukba
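As a rough illustration of how the quantitative metrics above (upload and download time per MB, throughput, and inference time) could be measured on AWS, the following Python sketch times an S3 upload and download with boto3 and a YOLOv8 inference with the ultralytics package. The bucket name, image path, and weights file are illustrative placeholders, and this is not the paper's actual test bench.

```python
import os
import time

import boto3                    # AWS SDK for Python
from ultralytics import YOLO    # YOLOv8 implementation

# Placeholder names: bucket, image, and weights file are illustrative only.
BUCKET = "my-test-bucket"
IMAGE_PATH = "sample.jpg"

s3 = boto3.client("s3")
size_mb = os.path.getsize(IMAGE_PATH) / 1e6

# Upload time, normalised to seconds per megabyte as in the abstract.
start = time.perf_counter()
s3.upload_file(IMAGE_PATH, BUCKET, "sample.jpg")
upload_s_per_mb = (time.perf_counter() - start) / size_mb

# Download time per megabyte.
start = time.perf_counter()
s3.download_file(BUCKET, "sample.jpg", "downloaded.jpg")
download_s_per_mb = (time.perf_counter() - start) / size_mb

# Inference time with a pretrained YOLOv8 model on the same image.
model = YOLO("yolov8n.pt")
start = time.perf_counter()
model(IMAGE_PATH)
inference_s = time.perf_counter() - start

print(f"upload: {upload_s_per_mb:.2f} s/MB, "
      f"download: {download_s_per_mb:.2f} s/MB, "
      f"inference: {inference_s:.2f} s")
```

The same timing loop can be pointed at the corresponding Azure Blob Storage and compute services to produce the per-platform figures compared in the paper.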
Comparison of different explainability tools

Explainable AI for the Classification of Brain MRIs

Background: Machine learning applied to medical imaging labors under a lack of trust due to the inscrutability of AI models and the lack of explanations for their outcomes. Explainable AI has therefore become an essential research topic. Unfortunately, many AI models, both research and commercial, are unavailable for white-box examination, in which access to a model’s internals is required. There is therefore a need for black-box explainability tools in the medical domain. Several such tools exist for general images, but their relative strengths and weaknesses when applied to medical images have not been explored. Methods: We use a publicly available dataset of brain MRI images and a model trained to classify cancerous and non-cancerous slices to assess a number of black-box explainability tools (LIME, RISE, IG, SHAP, and ReX) and one white-box tool (Grad-CAM) as a baseline comparator. We use several common measures to assess the concordance of the explanations with clinician-provided annotations, including the Dice Coefficient, Hausdorff Distance, and Jaccard Index, and we propose a Penalised Dice Coefficient that combines the strengths of these measures. Results: ReX (Dice Coefficient = 0.42±0.20) consistently performs relatively well across all measures, with performance comparable to Grad-CAM (Dice Coefficient = 0.33±0.22). A panel of images is presented for qualitative inspection, showing a number of failure modes. Conclusion: In contrast to general images, we find evidence that most black-box explainability tools do not perform well for medical image classification when used with default settings.

Nathan Blake, David Kelly, Santiago Peña, Akchunya Chanchal and Hana Chockler
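For reference, the overlap measures named in the abstract have standard definitions; the sketch below computes the Dice Coefficient and Jaccard Index between a thresholded explanation map and a clinician-provided annotation mask. The threshold, array shapes, and masks are illustrative placeholders, and the paper's proposed Penalised Dice Coefficient is not reproduced here.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index between two binary masks: |A∩B| / |A∪B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Example: threshold a saliency map (e.g. from LIME or Grad-CAM) into a
# binary mask and compare it with a clinician-provided annotation mask.
saliency = np.random.rand(256, 256)           # placeholder explanation map
annotation = np.zeros((256, 256), dtype=bool)
annotation[100:150, 100:150] = True           # placeholder tumour annotation

explanation_mask = saliency > 0.8             # illustrative threshold
print(f"Dice = {dice(explanation_mask, annotation):.2f}, "
      f"Jaccard = {jaccard(explanation_mask, annotation):.2f}")
```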