Restricted Research - Award List, Note/Discussion Page
Fiscal Year: 2023
1841 The University of Texas at El Paso (143729)
Principal Investigator: Tizpaz-Niari, Saeid
Total Amount of Contract, Award, or Gift (Annual before 2011): $ 353,357
Exceeds $250,000 (Is it flagged?): Yes
Start and End Dates: 4/1/23 - 3/31/26
Restricted Research: YES
Academic Discipline: Computer Science
Department, Center, School, or Institute: Computer Science
Title of Contract, Award, or Gift: TR Allowed: Collaborative Research: SaTC: Core: Small: Detecting and Localizing Non-Functional Vulnerabilities in Machine Learning Libraries
Name of Granting or Contracting Agency/Entity: NATIONAL SCIENCE FOUNDATION
CFDA Link: NSF 47.070
Program Title: Computer and Information Science and Engineering
CFDA Linked: Computer and Information Science and Engineering
Note:
Machine learning (ML) has achieved excellent performance and has been widely adopted, both in classical tasks such as image classification and in emerging applications such as autonomous driving and malware detection. Such widespread adoption brings challenges that require reasoning about the functional correctness of ML systems. This has been the focus of previous work, which developed techniques to detect and defend against attacks such as adversarial examples. However, non-functional vulnerabilities in ML systems, including denial-of-service (DoS), side channels, and discrimination, have not been well studied. There is a growing number of reported DoS and side-channel vulnerabilities in key ML libraries such as TensorFlow and NumPy, but a systematic approach to identifying and debugging them has not been explored. Furthermore, the vast majority of previous work focused on the security of ML models and presented techniques to detect and mitigate functional vulnerabilities in those models. Arguably, a vulnerability in a core ML library, which is used to build individual ML models, can have serious consequences and impact many critical ML services. A few systems have considered ML libraries, but only focused on functional bugs.

Our goal is to help ML library developers detect and explain critical non-functional vulnerabilities in the presence of malicious users who try to compromise the availability and confidentiality of core ML libraries used by machine-learning-as-a-service (MLaaS) frameworks. We propose an approach that combines evolutionary algorithms with gradient-based search to detect non-functional vulnerabilities and quantify their strength. Our key observation is that the non-functional behaviors of ML systems can be approximated with deep learning methods, which are then used to guide the gradient-based search. For debugging, we propose statistical learning methods combined with causal inference to localize the root causes.
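The detection strategy the note describes (evolutionary search toward inputs that maximize a non-functional cost such as execution time) can be illustrated with a minimal sketch. This is not the project's implementation: `run_target` is a hypothetical stand-in for timing a real library call (for example, wrapping a TensorFlow or NumPy operation and measuring wall-clock time), and the deep-learning surrogate that guides the gradient-based half of the search is omitted.

```python
import random


def run_target(x):
    # Hypothetical cost oracle: in a real DoS-hunting setup this would
    # invoke the ML library under test and return measured resource
    # usage (time, memory). Here it is a simple function of the input.
    return sum(v * v for v in x)


def mutate(x, scale=0.5):
    # Perturb each coordinate of a candidate input.
    return [v + random.uniform(-scale, scale) for v in x]


def evolutionary_dos_search(dim=4, pop_size=10, generations=30, seed=0):
    """Evolutionary search for inputs that maximize a non-functional cost.

    Keeps the top half of each generation (elitism) and fills the rest
    with mutated copies of the elites, so the best cost never decreases.
    """
    random.seed(seed)
    population = [
        [random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)
    ]
    for _ in range(generations):
        scored = sorted(population, key=run_target, reverse=True)
        elites = scored[: pop_size // 2]
        population = elites + [
            mutate(random.choice(elites)) for _ in range(pop_size - len(elites))
        ]
    best = max(population, key=run_target)
    return best, run_target(best)
```

In the proposed approach, a learned approximation of the library's non-functional behavior would additionally supply gradients to steer mutations, rather than relying on random perturbation alone.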
Discussion: No discussion notes