4th Reversing and Offensive-oriented Trends Symposium 2020 (ROOTS)

Academic workshop co-located with DeepSec, November 19/20, Vienna

List of Accepted Papers


Exploiting Interfaces of Secure Encrypted Virtual Machines

Martin Radev (Fraunhofer AISEC, Garching near Munich, Germany), Mathias Morbitzer (Fraunhofer AISEC, Garching near Munich, Germany)

Cloud computing is a convenient model for processing data remotely. However, users must trust their cloud provider with the confidentiality and integrity of the stored and processed data. To increase the protection of virtual machines, AMD introduced SEV, a hardware feature which aims to protect code and data in a virtual machine. This makes it possible to store and process sensitive data in cloud environments without having to trust the cloud provider or the underlying software.
However, the virtual machine still depends on the hypervisor for certain tasks, such as the emulation of special CPU instructions or of devices. Yet, most code that runs in virtual machines was not written with an attacker model that considers the hypervisor malicious.
In this work, we introduce a new class of attacks in which a malicious hypervisor manipulates external interfaces of an SEV or SEV-ES virtual machine to make it act against its own interests. We start by showing how we can make use of virtual devices to extract encryption keys and secret data of a virtual machine. We then show how we can reduce the entropy of probabilistic kernel defenses in the virtual machine by carefully manipulating the results of the CPUID and RDTSC instructions. We continue by showing an approach for secret data exfiltration and code injection based on the forgery of MMIO regions over the VM’s address space. Finally, we show another attack which forces decryption of the VM’s stack and uses Return Oriented Programming to execute arbitrary code inside the VM.
While our approach is also applicable to traditional virtualization environments, its severity significantly increases with the attacker model of SEV-ES, which aims to protect a virtual machine from a benign but vulnerable hypervisor.
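The entropy-reduction attack on probabilistic kernel defenses is perhaps easiest to picture with a toy model. The Python sketch below is a hypothetical illustration, not the authors' code: it derives a KASLR-style kernel slide from an RDTSC reading and shows that once a malicious hypervisor answers every RDTSC exit with a constant, the "randomized" base becomes fully predictable. All constants and function names are made up for the example.

```python
# Hypothetical toy model: a KASLR-style slide seeded from RDTSC.
# Not the paper's code; it only shows why controlling RDTSC collapses entropy.
import random

SLIDE_SLOTS = 512          # number of possible base-address slots in this toy model
SLOT_SIZE = 0x200000       # 2 MiB alignment, as commonly used for kernel slides
BASE = 0xFFFFFFFF80000000  # nominal (unrandomized) kernel base

_tsc = 0

def rdtsc_honest() -> int:
    """Stand-in for a real RDTSC read: monotonically increasing and hard to guess."""
    global _tsc
    _tsc += random.randrange(1, 1 << 20)
    return _tsc

def rdtsc_forged() -> int:
    """A hypervisor intercepting RDTSC exits can hand the guest any value it likes."""
    return 0x1234

def kaslr_slide(tsc: int) -> int:
    """Toy model of a boot-time slide seeded from the timestamp counter."""
    return BASE + (tsc % SLIDE_SLOTS) * SLOT_SIZE

if __name__ == "__main__":
    print("honest TSC ->", hex(kaslr_slide(rdtsc_honest())))
    print("honest TSC ->", hex(kaslr_slide(rdtsc_honest())))
    print("forged TSC ->", hex(kaslr_slide(rdtsc_forged())))  # identical every "boot"
    print("forged TSC ->", hex(kaslr_slide(rdtsc_forged())))
```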
[ ACM ISBN 978-1-4503-8974-7/20/11 ] [ DOI 10.1145/3433667.3433668 ]
[ ACM Digital Library (PDF) ] [ Presentation (Video) ] [ Author's copy on arXiv (PDF) ]

No Need to Teach New Tricks to Old Malware: Winning an Evasion Challenge with XOR-based Adversarial Samples

Fabrício Ceschin (Federal University of Paraná, Curitiba, Paraná, Brazil), Marcus Botacin (Federal University of Paraná, Curitiba, Paraná, Brazil), Gabriel Lüders (Federal University of Paraná, Curitiba, Paraná, Brazil), Heitor Murilo Gomes (University of Waikato, Hamilton, Waikato, New Zealand), Luiz S. Oliveira (Federal University of Paraná, Curitiba, Paraná, Brazil), André Grégio (Federal University of Paraná, Curitiba, Paraná, Brazil)

Adversarial attacks on Machine Learning (ML) models have become such a concern that tech companies (Microsoft and CUJO AI's Vulnerability Research Lab) decided to launch contests to better understand their impact in practice. During the contest's first edition (2019), participating teams were challenged to bypass three ML models in a white-box manner. Our team bypassed all three of them and reported interesting insights about the models' weaknesses. In the second edition (2020), the challenge evolved into an attack-and-defense model: teams had to both propose defensive models and attack other teams' models in a black-box manner. Despite the increased difficulty, our team was able to bypass all models again. In this paper, we describe our insights from this year's contest on attacking models as well as defending them against adversarial attacks. In particular, we show how frequency-based models (e.g., TF-IDF) are vulnerable to the addition of dead function imports, and how models based on raw bytes are vulnerable to payload-embedding obfuscation (e.g., XOR and base64 encoding).
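The payload-embedding idea can be sketched in a few lines. The following Python is a hypothetical illustration of the general technique, not the team's released dropper or appender: a payload is XOR-encoded and appended to a benign carrier, so a raw-byte classifier mostly sees benign bytes while the original payload remains trivially recoverable. The MAGIC marker, key handling, and function names are assumptions made for the example.

```python
# Hypothetical sketch of an XOR-based "appender"; not the authors' released tool.
MAGIC = b"ROOTS20"  # hypothetical marker separating the carrier from the payload

def xor_bytes(data: bytes, key: int) -> bytes:
    """Single-byte XOR, applied symmetrically for encoding and decoding."""
    return bytes(b ^ key for b in data)

def embed(carrier: bytes, payload: bytes, key: int = 0x5A) -> bytes:
    """Append the XOR-encoded payload (plus marker and key byte) to the carrier."""
    return carrier + MAGIC + bytes([key]) + xor_bytes(payload, key)

def extract(blob: bytes) -> bytes:
    """Recover the original payload from a blob produced by embed()."""
    idx = blob.rindex(MAGIC)
    key = blob[idx + len(MAGIC)]
    return xor_bytes(blob[idx + len(MAGIC) + 1:], key)

if __name__ == "__main__":
    benign = b"MZ" + b"\x00" * 64                     # stand-in for a benign executable
    payload = b"this stands in for a real payload"
    blob = embed(benign, payload)
    assert extract(blob) == payload
    print("embedded", len(payload), "payload bytes into a", len(blob), "byte blob")
```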
[ ACM ISBN 978-1-4503-8974-7/20/11 ] [ DOI 10.1145/3433667.3433669 ]
[ ACM Digital Library (PDF) ] [ Presentation (Video) ] [ Source: Dropper ] [ Source: Appender ] [ Source: Detection Model ] [ Web based platform ]

A survey on practical adversarial examples for malware classifiers

Daniel Park (Rensselaer Polytechnic Institute), Bülent Yener (Rensselaer Polytechnic Institute)

Machine learning-based solutions have been very helpful in solving problems that deal with immense amounts of data, such as malware detection and classification. However, deep neural networks have been found to be vulnerable to adversarial examples, i.e., inputs that have been purposefully perturbed to result in an incorrect label. Researchers have shown that this vulnerability can be exploited to create evasive malware samples. However, many proposed attacks do not generate an executable and instead generate a feature vector. To fully understand the impact of adversarial examples on malware detection, we review practical attacks against malware classifiers that generate executable adversarial malware examples. We also discuss current challenges in this area of research, as well as suggestions for improvement and future research directions.
[ ACM ISBN 978-1-4503-8974-7/20/11 ] [ DOI 10.1145/3433667.3433670 ]
[ ACM Digital Library (PDF) ] [ Presentation (Video) ] [ arXiv (PDF) ]