Events

Jun. 24, 2020, 13:00-14:00

Building 96, Room 001

Speaker: Ishai Rosenberg

Title: Adversarial learning in API call-based RNN classifiers


Abstract: 

Machine learning, and especially deep learning, has been researched over the last decade as an effective way to augment the ability of today’s signature-based and heuristic-based detection methods to cope with unknown malware.

However, an adversary can attack such models by generating input that the model misclassifies. For instance, he or she can create a modified version of the malicious code that evades detection by the IDS. The study of methods for producing such attacks, termed adversarial machine learning, is an important domain: it allows researchers to understand the limitations and possible dangers of the ever-growing use of machine learning, and especially deep learning models, in areas critical to our lives, such as disease diagnosis, autonomous cars, and malware prevention, and to defend these models against such attacks.

In this presentation, we will focus on the cyber-security domain, especially on malware classifiers, as opposed to most research conducted so far in this area, which has focused on the computer vision domain. We will further focus on deep learning classifiers against which few attacks have been published: sequence-based neural networks. Such architectures have shown state-of-the-art performance as malware classifiers, e.g., when using the system calls executed by the inspected code as features.
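
To make the setting concrete, here is a minimal sketch of such a sequence-based classifier (not the speaker's implementation): it assumes API/system calls have been mapped to integer token IDs, and an LSTM labels each call trace as benign or malicious. The vocabulary size and layer dimensions are illustrative placeholders.

    import torch
    import torch.nn as nn

    class APICallRNNClassifier(nn.Module):
        """LSTM over a sequence of API call token IDs; outputs 2 logits."""
        def __init__(self, vocab_size=400, embed_dim=32, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, 2)   # benign / malicious logits

        def forward(self, call_ids):              # call_ids: (batch, seq_len)
            emb = self.embed(call_ids)            # (batch, seq_len, embed_dim)
            _, (h_n, _) = self.lstm(emb)          # final hidden state
            return self.out(h_n[-1])              # (batch, 2)

    # Dummy usage: classify two traces of 50 API calls each.
    model = APICallRNNClassifier()
    traces = torch.randint(0, 400, (2, 50))
    pred = model(traces).argmax(dim=1)            # 0 = benign, 1 = malicious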

The first part of our research is a novel black-box attack against all state-of-the-art machine learning based malware classifiers: recurrent neural networks, feed-forward deep neural networks, and conventional machine learning classifiers such as SVMs. This attack uses an adversary-trained surrogate model in place of white-box access to the attacked model in order to compute the gradients for the attack, and thus requires knowledge of the feature encoding used.
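
The sketch below illustrates the general surrogate/transferability idea in the simplest, continuous-feature setting, not the specific attack presented in the talk (in the API call setting the perturbation is discrete, e.g., inserting calls into the trace). Gradients are taken on a locally trained surrogate, FGSM-style, and the perturbed input is then sent to the black-box target. The surrogate architecture, `target_predict`, and epsilon are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Illustrative surrogate, assumed to have been trained locally on
    # query-labeled data (training loop omitted); 100 features and the
    # architecture are placeholders.
    surrogate = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_on_surrogate(x, true_label, epsilon=0.1):
        """Perturb x using the *surrogate's* gradient, hoping it transfers."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(surrogate(x_adv.unsqueeze(0)),
                       torch.tensor([true_label]))
        loss.backward()
        # Step in the direction that increases the surrogate's loss.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    # x_adv = fgsm_on_surrogate(encode(sample), true_label=1)  # 1 = malicious
    # evaded = target_predict(x_adv) == 0   # hypothetical black-box query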

The second part is a black-box attack that minimizes the number of queries to the black-box model (each query to a cloud service might cost money, etc.), does not require knowledge of the features, and does not use a surrogate model, which is computationally expensive to train and use.
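
One generic way to picture a query-budgeted attack of this kind is the toy random-mutation loop below; it is not the talk's actual method. Benign-looking no-op calls are inserted into the trace, the target is queried once per mutation, and the attack stops at a fixed budget. `target_predict` and the no-op call set are illustrative assumptions.

    import random

    NO_OP_CALLS = ["GetTickCount", "Sleep", "GetCurrentProcessId"]

    def evade_with_budget(trace, target_predict, budget=20):
        """Insert benign-looking calls; return an evading trace or None."""
        for _ in range(budget):  # one black-box query per iteration
            pos = random.randrange(len(trace) + 1)
            trace = trace[:pos] + [random.choice(NO_OP_CALLS)] + trace[pos:]
            if target_predict(trace) == 0:  # 0 = classified benign: evaded
                return trace
        return None  # query budget exhausted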

The last part is a paper on defense mechanisms against such attacks, such as using an ensemble of deep learning classifiers.
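
As a minimal sketch of the ensemble idea (not the paper's exact scheme): several independently trained classifiers vote, so an adversarial example must fool a majority of them rather than a single model. `models` stands in for any collection of trained classifiers exposing a `predict` method.

    from collections import Counter

    def ensemble_predict(models, sample):
        """Majority vote over the individual classifiers' predictions."""
        votes = Counter(m.predict(sample) for m in models)
        return votes.most_common(1)[0][0]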

Bio: Ishai Rosenberg is the manager of the Deep Learning group at Deep-Instinct, in charge of all deep learning and data science related features, capabilities, and algorithms across Deep-Instinct's product line. Ishai has over 15 years of experience in various cyber-security and machine learning R&D positions in both governmental and private organizations.

Ishai is a PhD candidate in the Software and Information Systems Engineering department at Ben-Gurion University, focusing on adversarial deep learning for RNNs.