DC-Area Anonymity, Privacy, and Security Seminar

Fall 2019 Seminar
Thursday, December 5th, 2019
Time: 9:30am - 5:00pm
Lunch nearby (not provided)

Location: Room B1220 (floor B1, one level directly below the lobby)
Science and Engineering Hall (SEH, 800 22nd Street, NW)
George Washington University
Host: Adam Aviv

9:30am - 9:35am Welcome and Introductions
9:35am - 10:00am Nitin Vaidya (Georgetown University)
Title: Security and Privacy for Distributed Optimization and Learning
Abstract: Consider a network of agents wherein each agent has a private cost function. In the context of distributed machine learning, the private cost function of an agent may represent the "loss function" corresponding to the agent's local data. The objective here is to identify parameters that minimize the total cost over all the agents. In machine learning for classification, the cost function is designed such that minimizing it should result in model parameters that achieve higher classification accuracy. Similar problems arise in other applications as well, including swarm robotics. Our work addresses the privacy and security of distributed optimization, with applications to machine learning. In privacy-preserving machine learning, the goal is to optimize the model parameters correctly while preserving the privacy of each agent's local data. Privacy-preserving machine learning is becoming important due to the increasing reliance on user-generated data for machine learning. In security, the goal is to identify the model parameters correctly while tolerating adversarial agents that may supply incorrect information. When a large number of agents participate in distributed optimization, security compromise of some of the agents becomes increasingly likely. We constructively show that such privacy-preserving and secure algorithms for distributed optimization exist. The talk will provide intuition behind the design and correctness of the algorithms.
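To make the problem setting concrete, here is a toy sketch (not the speaker's algorithm) of consensus-based distributed gradient descent: each agent holds a private quadratic cost f_i(x) = (x - a_i)^2, so the global minimizer of the total cost is the mean of the private points a_i, which the agents never share directly.

```python
# Toy sketch of distributed optimization: agents alternate a consensus
# (neighbor-averaging) step with a gradient step on their private cost.
# The network topology and costs here are illustrative only.

def distributed_minimize(private_points, neighbors, rounds=200, step=0.05):
    """Each agent averages its neighbors' estimates, then takes a
    gradient step on its own private cost f_i(x) = (x - a_i)^2."""
    x = list(private_points)  # initial estimates
    n = len(x)
    for _ in range(rounds):
        # consensus step: average over each agent's neighborhood (incl. self)
        avg = [sum(x[j] for j in neighbors[i]) / len(neighbors[i])
               for i in range(n)]
        # local gradient step: d/dx (x - a_i)^2 = 2 (x - a_i)
        x = [avg[i] - step * 2 * (avg[i] - private_points[i])
             for i in range(n)]
    return x

# Ring of 4 agents; the shared optimum is mean([1, 3, 5, 7]) = 4.
points = [1.0, 3.0, 5.0, 7.0]
ring = {0: [0, 1, 3], 1: [0, 1, 2], 2: [1, 2, 3], 3: [0, 2, 3]}
estimates = distributed_minimize(points, ring)
```

With a constant step size the estimates converge to a neighborhood of the true optimum; the privacy-preserving and Byzantine-tolerant variants discussed in the talk modify what each agent shares and how the aggregation step filters adversarial inputs.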
10:00am - 10:25am Omer Akgul (University of Maryland)
Title: A Qualitative Investigation of Insecure Code Propagation from Online Forums
Abstract: Research demonstrates that code snippets listed on programming-oriented online forums (e.g., Stack Overflow) – including snippets containing security mistakes – make their way into production code. Prior work also shows that software developers who reference Stack Overflow in their development cycle produce less secure code. While there are many plausible explanations for why developers propagate insecure code in this manner, there is little or no empirical evidence. To address this question, we identify Stack Overflow code snippets that contain security errors and find clones of these snippets in open-source GitHub repositories. We then survey (n=133) and interview (n=15) the authors of these GitHub repositories to explore how and why these errors were introduced. We find that some developers (perhaps mistakenly) trust their security skills to validate the code they import, but the majority admit they would need to learn more about security before they could properly perform such validation. Further, although some prioritize functionality over security, others believe that ensuring security is not, or should not be, their responsibility. Our results have implications for attempts to ameliorate the propagation of this insecure code.
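A hypothetical miniature of the kind of clone matching such a study relies on: normalize snippets (strip comments, collapse whitespace) and compare fingerprints, so trivially reformatted copies of a flawed snippet still match. The example snippet (Java's MD5 digest, a classic weak-hash mistake) is illustrative, not taken from the study's dataset.

```python
# Hedged sketch: fingerprint-based clone detection for code snippets.
import hashlib
import re

def fingerprint(code):
    code = re.sub(r'/\*.*?\*/', '', code, flags=re.S)  # strip block comments
    code = re.sub(r'//[^\n]*', '', code)               # strip line comments
    code = re.sub(r'\s+', ' ', code).strip()           # normalize whitespace
    return hashlib.sha256(code.encode()).hexdigest()

snippet = 'MessageDigest md = MessageDigest.getInstance("MD5");'
clone = '''MessageDigest md =
    MessageDigest.getInstance("MD5");  /* weak hash */'''
```

Here `fingerprint(snippet)` equals `fingerprint(clone)` despite the reformatting and added comment; real clone detectors add token- and AST-level normalization on top of this idea.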
10:25am - 10:50am Coffee break
10:50am - 11:15am Kelsey Fulton (University of Maryland)
Title: The Effect of Entertainment Media on Mental Models of Computer Security
Abstract: When people inevitably need to make decisions about their computer-security posture, they rely on their mental models of threats and potential targets. Research has demonstrated that these mental models, which are often incomplete or incorrect, are informed in part by fictional portrayals in television and film. Inspired by prior research in public health demonstrating that efforts to ensure accuracy in the portrayal of medical situations have had an overall positive effect on public medical knowledge, we explore the relationship between computer security and fictional television and film. We report on a semi-structured interview study (n=19) investigating what users have learned about computer security from mass media and how they evaluate what is and is not realistic within fictional portrayals. In addition to confirming prior findings that television and film shape users' mental models of security, we identify specific misconceptions that appear to align directly with common fictional tropes. We identify specific proxies that people use to evaluate realism and examine how they influence these misconceptions. We conclude with recommendations for security researchers, as well as creators of fictional media, when considering how to improve people's understanding of computer-security concepts and behaviors.
11:15am - 11:40am Michael Reininger (University of Maryland)
Title: Towards a Programmable Tor Network
Abstract: Today's Tor is largely relegated to web proxies and hidden services, and unfortunately neither of these applications can scale to handle dynamic workloads or large-scale attacks mounted by sophisticated adversaries. Conversely, services on the standard "non-anonymous" Internet, such as software-defined networking (SDN), network function virtualization (NFV), and content delivery networks (CDNs), have added robustness, scalability, and resiliency to the network. However, no basic primitives exist on anonymity networks to achieve such features. In this talk, I will present a vision for a programmable Tor in which users can write functions for relays to execute. I will then showcase function prototypes that help secure Tor users by preventing website fingerprinting, generating cover traffic, and load-balancing hidden services. Finally, I will discuss ongoing and future work on safely running functions in Tor.
11:40am - 12:05pm Zhou Li (University of California-Irvine)
Title: An End-to-End Large-Scale Measurement of DNS-over-Encryption: How Far Have We Come?
Abstract: To mitigate threats against plaintext DNS, several protocols (which we term DNS-over-Encryption) have been proposed to encrypt DNS queries between DNS clients and servers. While some proposals have been standardized and are gaining strong support from the industry, little has been done to understand their status from the view of global users. In this work, we perform by far the first end-to-end and large-scale analysis of DNS-over-Encryption. By collecting data from Internet scanning, user-end measurement, and passive monitoring logs, we have gained several unique insights. In general, the service quality of DNS-over-Encryption is satisfying: the packets are less likely to be disrupted by in-path interception, and the extra overhead is tolerable. Compared to traditional DNS, DNS-over-Encryption is used by far fewer users, but we have witnessed a growing trend.
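For readers unfamiliar with these protocols, here is a minimal sketch of what one of them, DNS-over-HTTPS (RFC 8484), looks like on the wire: the client base64url-encodes an ordinary DNS query in wire format and sends it over HTTPS, e.g. as `GET /dns-query?dns=<blob>`. The resolver URL below is illustrative; the encoding itself follows the RFC.

```python
# Sketch: build the base64url "dns" parameter for a DoH GET request.
import base64
import struct

def build_doh_query(name, qtype=1):  # qtype 1 = A record
    # Header: ID 0 (recommended for HTTP caching), RD=1, one question.
    header = struct.pack('!HHHHHH', 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b''.join(bytes([len(label)]) + label.encode()
                     for label in name.split('.')) + b'\x00'
    question = qname + struct.pack('!HH', qtype, 1)  # QTYPE, QCLASS=IN
    wire = header + question
    # RFC 8484 uses unpadded base64url.
    return base64.urlsafe_b64encode(wire).rstrip(b'=').decode()

blob = build_doh_query('example.com')
# The request would then be, e.g.:
#   GET https://resolver.example/dns-query?dns=<blob>
```

Because the query travels inside TLS, the in-path interception the abstract measures for plaintext DNS no longer sees the question name, which is exactly the property the study evaluates at scale.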
12:05pm - 1:15pm Lunch break (lunch nearby on your own)
1:15pm - 1:40pm Lucas Davi (University of Duisburg-Essen)
Title: Memory Corruption Attacks Against Intel SGX
Abstract: The Intel Software Guard Extensions (SGX) technology allows an application developer to isolate security-critical code and data inside a protected memory area called an enclave. While most research has studied side-channel attacks against SGX, this talk will investigate memory corruption attacks, such as return-oriented programming, in the context of SGX. We will demonstrate how an attacker can exploit the Intel SDK libraries to compromise enclaves and steal secret information. In addition, we will investigate the host-to-enclave boundary and its susceptibility to memory corruption attacks.
1:40pm - 2:05pm Khaled N. Khasawneh (George Mason University)
Title: SpecCFI: Mitigating Spectre Attacks using CFI Informed Speculation
Abstract: Spectre attacks and their many subsequent variants are a new vulnerability class for modern CPUs. The attacks rely on the ability to misguide/hijack speculative execution, generally by exploiting the branch prediction structures, to execute a vulnerable code sequence speculatively. In this paper, we propose to use Control-Flow Integrity (CFI), a security technique used to stop control-flow hijacking attacks on the committed path, to prevent speculative control flow from being hijacked to launch the most dangerous variants of the Spectre attacks (Spectre-BTB and Spectre-RSB). Specifically, CFI attempts to constrain the target of an indirect branch to a set of legal targets defined by a pre-calculated control-flow graph (CFG). As CFI is being adopted by commodity software (e.g., Windows and Android) and commodity hardware (e.g., Intel's CET and ARM's BTI), the CFI information could be readily available through the hardware CFI extensions. With the CFI information, we apply CFI principles to also constrain illegal control flow during speculative execution. Specifically, our proposed defense, SpecCFI, ensures that control-flow instructions target legal destinations, constraining dangerous speculation on forward control-flow paths (indirect calls and branches). We augment this protection with a precise speculation-aware hardware stack to constrain speculation on backward control-flow edges (returns). We combine this solution with existing solutions against branch target predictor attacks (Spectre-PHT) to close all known non-vendor-specific Spectre vulnerabilities. We show that SpecCFI results in small overheads, both in terms of performance and additional hardware complexity.
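As a software analogue of the check described above (hypothetical and for intuition only; SpecCFI enforces this in hardware, during speculation), an indirect call is allowed to proceed only if its target is in the set of legal destinations derived from the precomputed control-flow graph:

```python
# Toy CFI check: constrain an indirect branch to its legal CFG targets.
def open_file(req):
    return 'opened ' + req

def close_file(req):
    return 'closed ' + req

def secret_gadget(req):
    return 'leaked!'  # never a legal target in the CFG

# Legal-target set, as would be derived from a pre-calculated CFG.
LEGAL_TARGETS = {open_file, close_file}

def indirect_call(target, arg):
    if target not in LEGAL_TARGETS:  # the CFI check
        raise RuntimeError('CFI violation: illegal indirect-branch target')
    return target(arg)
```

In Spectre-BTB the branch target buffer is trained to steer speculation to a gadget like `secret_gadget`; the point of SpecCFI is that the same legality check is applied before the speculative target is followed, not just when the branch commits.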
2:05pm - 2:30pm Coffee break
2:30pm - 2:55pm Hooman Mohajeri Moghaddam (Princeton University)
Title: Watching You Watch: The Tracking Ecosystem of Over-the-Top TV Streaming Devices
Abstract: The number of Internet-connected TV devices has grown significantly in recent years, especially Over-the-Top ("OTT") streaming devices such as Roku TV and Amazon Fire TV. OTT devices offer an alternative to multi-channel television subscription services and are often monetized through behavioral advertising. To shed light on the privacy practices of such platforms, we developed a system that can automatically download OTT apps (also known as channels) and interact with them while intercepting the network traffic and performing best-effort TLS interception. We used this smart crawler to visit more than 2,000 channels on two popular OTT platforms, namely Roku and Amazon Fire TV. Our results show that tracking is pervasive on both OTT platforms, with traffic to known trackers present on 69% of Roku channels and 89% of Amazon Fire TV channels. We also discover the widespread practice of collecting and transmitting unique identifiers, such as device IDs, serial numbers, WiFi MAC addresses, and SSIDs, at times over unencrypted connections. Finally, we show that the countermeasures available on these devices, such as the limit-ad-tracking option and adblocking, are practically ineffective. Based on our findings, we make recommendations for researchers, regulators, policy makers, and platform/app developers.
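A hypothetical miniature of the traffic-analysis step: match the hostname of each intercepted request against a blocklist of known tracker domains, counting subdomains of a listed domain as matches. The two list entries below are well-known tracker domains used here only as examples, not the study's actual blocklist.

```python
# Toy tracker detection over intercepted hostnames (illustrative list).
TRACKERS = {'doubleclick.net', 'scorecardresearch.com'}

def is_tracker(hostname):
    parts = hostname.lower().split('.')
    # Match the listed domain itself or any subdomain of it.
    return any('.'.join(parts[i:]) in TRACKERS for i in range(len(parts)))

requests = ['ads.doubleclick.net', 'example.com', 'sb.scorecardresearch.com']
flagged = [h for h in requests if is_tracker(h)]
```

The study's per-channel tracker prevalence (69% of Roku channels, 89% of Fire TV channels) comes from exactly this kind of matching applied to crawled traffic, using established public tracker lists.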
2:55pm - 3:20pm Zhiqiang Lin (Ohio State University)
Title: Automatically Identifying Vulnerable BLE-IoT Devices From Google Play and Locating Them in Real World
Abstract: As an easy-to-deploy and cost-effective low-power wireless solution, Bluetooth Low Energy (BLE) has been widely used by Internet-of-Things (IoT) devices. In a typical IoT scenario, an IoT device first needs to be connected with its companion mobile app, which serves as a gateway for its Internet access. To establish a connection, a device often broadcasts an advertisement packet with a UUID to a nearby smartphone app. With the UUID, a companion app is able to identify the device, pair and bond with it, and allow further data communication. However, in this talk we will demonstrate that there is a flaw in the current design and implementation of the communication protocols between a BLE device and its companion mobile app, which allows an attacker to precisely fingerprint a BLE device using the static UUIDs from the apps. Based on this observation, we show that we can build a program analysis tool to automatically extract these UUIDs from IoT mobile apps in the Google Play Store, identify vulnerable BLE-IoT devices, and locate them in the real world with a long-range Bluetooth sniffer we built.
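As a simplified sketch of one step of such a pipeline: the static 128-bit service UUIDs embedded in a companion app follow the standard 8-4-4-4-12 hex format, so they can be pulled out of decompiled app code with a pattern match. (The talk's tool uses program analysis of the APK, which is more precise than scanning raw strings; the snippet of decompiled code below is fabricated for illustration.)

```python
# Hedged sketch: harvest candidate BLE service UUIDs from app text.
import re

UUID_RE = re.compile(
    r'\b[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
    r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\b')

def extract_uuids(app_source):
    return sorted({m.group(0).lower() for m in UUID_RE.finditer(app_source)})

decompiled = '''
  BluetoothGattService svc = gatt.getService(
      UUID.fromString("0000FFE0-0000-1000-8000-00805F9B34FB"));
'''
```

Because the same static UUID then appears in the device's advertisement packets, anyone who has harvested it from the app can fingerprint matching devices over the air.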
3:20pm - 3:45pm Break
3:45pm - 4:10pm Gang (Gary) Tan (Pennsylvania State University)
Title: Recent Advances in Automatic Privilege Separation
Abstract: Automatic privilege separation decomposes an input program into multiple modules, each with its own set of privileges and loaded into a separate protection domain. Privilege-separated software is more secure, since the compromise of one module does not directly lead to the compromise of other isolated modules. We present two systems that make advances in automatic privilege separation. The first system, PtrSplit, proposes a set of techniques that make it possible to privilege-separate C/C++ programs that pass pointer data between modules. The second system, called Program-mandering, allows programmers to make quantitative tradeoffs between security and performance during program partitioning. We will also discuss our experience using these systems to privilege-separate real-world software, as well as the remaining challenges.
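A minimal sketch of the underlying idea (not PtrSplit or Program-mandering themselves): run a risky module in a separate process, so that its compromise or crash cannot directly reach the trusted module's memory. Here the partition boundary is simply a subprocess; the real systems additionally assign each partition its own privileges and generate the cross-boundary marshalling automatically.

```python
# Toy privilege separation: an untrusted "parser" partition runs in its
# own address space and never sees the trusted partition's secret.
import subprocess
import sys

SECRET = 'api-key-123'  # lives only in the trusted partition

def parse_untrusted(data):
    # The parser child receives only the data, not SECRET, and returns
    # a simple result (here, the input length) over the process boundary.
    proc = subprocess.run(
        [sys.executable, '-c', 'import sys; print(len(sys.stdin.read()))'],
        input=data, capture_output=True, text=True)
    return int(proc.stdout)

length = parse_untrusted('some untrusted input')
```

The hard problems the talk addresses start exactly where this sketch stops: C/C++ partitions exchange pointers rather than flat strings (PtrSplit), and each extra boundary crossing costs performance, motivating a quantitative security/performance tradeoff (Program-mandering).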
4:10pm - 4:35pm Neil Gong (Duke University)
Title: Defending against Machine Learning based Inference Attacks via Adversarial Examples
Abstract: Attackers increasingly leverage machine learning (ML) to perform automated, large-scale inference attacks. In such an ML-equipped inference attack, an attacker has access to some data (called public data) of an individual or a system, and the attacker uses an ML classifier to automatically infer their private data, ranging from personal information such as gender and political views to cryptographic keys used by a system. In this talk, we will discuss defending against such ML-equipped inference attacks via adversarial examples. Our key observation is that attackers rely on ML classifiers in inference attacks, and ML classifiers are vulnerable to adversarial examples. To defend against inference attacks, we can add carefully crafted noise to the public data, turning it into adversarial examples such that attackers' classifiers make incorrect predictions for the private data. We will focus on protecting social media users from attackers like Cambridge Analytica. We will also briefly discuss other inference attacks, such as membership inference attacks, that can be defended against with adversarial examples.
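A toy illustration of the key observation (not the talk's actual system): for a linear attacker classifier sign(w·x + b), pushing the public features a small distance against w flips the attacker's predicted private attribute while barely changing the data. The weights and features below are made up for the example.

```python
# Toy adversarial-example defense against a linear inference classifier.
def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def adversarial_noise(w, b, x, margin=0.01):
    """Minimal perturbation along -w that crosses the decision boundary."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm2 = sum(wi * wi for wi in w)
    scale = (score + margin * (1 if score > 0 else -1)) / norm2
    return [xi - scale * wi for wi, xi in zip(w, x)]

w, b = [0.5, -1.0, 2.0], 0.1   # attacker's (hypothetical) classifier
x = [1.0, 0.2, 0.4]            # public data, e.g. page-like features
x_adv = adversarial_noise(w, b, x)
```

Against real (non-linear, unknown) attacker classifiers the defense must craft noise without white-box access and bound the utility loss of the public data, which is where the talk's techniques come in.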
4:35pm - 5:00pm Panos Papadimitratos (KTH Royal Institute of Technology)
Title: Security and privacy for large-scale mobile systems
Abstract: We discuss challenges and solutions for security and privacy in emerging large-scale mobile systems. First, we consider securing participatory sensing systems and protecting contributors' privacy, with data quality and user incentives as additional dimensions. Then, we consider security and privacy for vehicular communication systems, looking at two problems: (i) the management of credentials orders of magnitude more numerous than those issued by current general-purpose public key infrastructures; and (ii) the efficient validation of vehicular safety messages in dense network settings or when under a clogging denial-of-service attack. Last, we discuss secure and privacy-preserving location-based services, seeking to balance privacy protection, resilience to malicious disruptions, and quality of user-obtained data.

Directions: There are two building entrances on 22nd St. close to Eye and H Streets, respectively. See a university map here.

Transportation
By Car: There is visitor parking in the building, at a maximum of $23 for the day. The parking entrance is on H St, between 22nd and 23rd, on the left if approaching from 23rd. For details, see here.

By Metro: The workshop is 2 blocks from the Foggy Bottom Metro Station, which is on the Blue and Orange Metro lines. The Metro Station has only one exit, on 23rd and Eye (I) Streets.