DC-Area Anonymity, Privacy, and Security Seminar

Winter 2018 Seminar
Monday, February 26th, 2018
1:00 p.m. - 4:30 p.m.

Location: Volgenau School of Engineering
George Mason University
Host: Dov Gordon

1:00 p.m. - 1:25 p.m.
Speaker: Christine Task (Knexus Research)
Title: Communities, Information Propagation, and Differentially Private Crisis Detection
Abstract: Intuitively, differential privacy requires analytics algorithms to produce "blurry" results; random noise is added to obfuscate the impact of any single individual on the published results. This provides a provable, unconditional guarantee of privacy that does not depend on the distribution of the underlying dataset, which is reassuring. However, when differentially private analytics are put to use in critical applications, their utility becomes a point of concern, and the utility of noisy analytics can be deeply dependent on the distribution of the data. In this talk we'll discuss an ongoing differentially private social network analysis project focused on detecting the onset of public crises, such as shootings, from patterns in the call-text communication graph. We'll talk about the difficulties of modeling the data, testing, evaluation, and what we've learned so far about how privatized analytics work in practice.
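
As a toy illustration of the noise-addition idea (not the project's actual mechanism), a differentially private count can be released by adding Laplace noise calibrated to the query's sensitivity; the `dp_count` helper and the epsilon value below are hypothetical:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float, rng: random.Random) -> float:
    """Release a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one person changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
records = list(range(100))
noisy = dp_count(records, lambda x: x < 30, epsilon=1.0, rng=rng)
```

Each released answer is perturbed but the noise is unbiased, which is exactly why utility hinges on how the noise scale compares with the signal present in the underlying data distribution.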

1:25 p.m. - 1:50 p.m.
Speaker: Rob Jansen (U.S. Naval Research Laboratory)
Title: Inside Job: Applying Traffic Analysis to Measure Tor from Within
Abstract: In this paper, we explore traffic analysis attacks on Tor that are conducted solely with middle relays rather than with relays from the entry or exit positions. We create a methodology to apply novel Tor circuit and website fingerprinting from middle relays to detect onion service usage; that is, we are able to identify websites with hidden network addresses by their traffic patterns. We also carry out the first privacy-preserving popularity measurement of a single social networking website hosted as an onion service by deploying our novel circuit and website fingerprinting techniques in the wild. Our results show: (i) that the middle position enables wide-scale monitoring and measurement not possible from a comparable resource deployment in other relay positions, (ii) that traffic fingerprinting techniques are as effective from the middle relay position as prior works show from a guard relay, and (iii) that an adversary can use our fingerprinting methodology to discover the popularity of onion services, or as a filter to target specific nodes in the network, such as particular guard relays.
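
The fingerprinting idea — classifying a connection by its traffic pattern alone — can be sketched with a toy nearest-neighbor classifier over cell-direction sequences. The feature set and function names here are illustrative, not the paper's actual methodology:

```python
def direction_features(cells):
    """Map a sequence of relay cells (+1 outgoing, -1 incoming) to a
    small feature vector: trace length, fraction of outgoing cells,
    and the outgoing count among the first 20 cells."""
    head = cells[:20]
    return (
        len(cells),
        sum(1 for c in cells if c > 0) / len(cells),
        sum(1 for c in head if c > 0),
    )

def nearest_site(trace, labeled_traces):
    """1-nearest-neighbor match against known (label, trace) pairs."""
    feats = direction_features(trace)
    def dist(pair):
        other = direction_features(pair[1])
        return sum((a - b) ** 2 for a, b in zip(feats, other))
    return min(labeled_traces, key=dist)[0]
```

Real website fingerprinting uses far richer features and classifiers, but even this toy version conveys why traffic patterns observable from a middle relay can act as identifiers.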

1:50 p.m. - 2:20 p.m.
Coffee Break

2:20 p.m. - 2:45 p.m.
Speaker: Arkady Yerukhimovich (MIT Lincoln Laboratory)
Title: Cryptographically Protected Database Search
Abstract: Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies.

However, there is no best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases.

At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions:

  1. An identification of the important primitive operations across database paradigms. We find there are a small number of base operations that can be used and combined to support a large number of database paradigms.
  2. An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality.
  3. An analysis of attacks against protected search for different base queries.
  4. A roadmap for transforming a protected search system into a protected database.
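
One of the simplest base operations, equality search on a keyword, can be sketched with a deterministic-token index: the writer derives an opaque token from a secret key, and the server matches tokens without ever seeing plaintext keywords. This is a toy sketch (the class and names are hypothetical), and deterministic tokens leak query repetition — one of the tradeoffs such a survey must characterize:

```python
import hashlib
import hmac

def keyword_token(key: bytes, keyword: str) -> bytes:
    """Derive an opaque, deterministic search token from a keyword."""
    return hmac.new(key, keyword.encode("utf-8"), hashlib.sha256).digest()

class ProtectedIndex:
    """Server-side index: it stores only tokens and document ids."""

    def __init__(self):
        self._index = {}

    def add(self, token: bytes, doc_id: str) -> None:
        self._index.setdefault(token, []).append(doc_id)

    def search(self, token: bytes) -> list:
        return self._index.get(token, [])

# The writer indexes documents; a reader holding the key issues token queries.
key = b"shared-writer-reader-key"
idx = ProtectedIndex()
idx.add(keyword_token(key, "privacy"), "doc1")
idx.add(keyword_token(key, "privacy"), "doc2")
```

The key stays with the writer and reader roles; an administrator or attacker who compromises the server sees only opaque tokens and document ids.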

2:45 p.m. - 3:10 p.m.
Speaker: Samuel Ranellucci (George Mason University / University of Maryland)
Title: Secure 4-party computation with fewer than 8 bits of communication per gate
Abstract: We show how four parties can securely compute any Boolean circuit with fewer than 8 bits of communication per gate. Our protocol is secure against a malicious adversary.
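
As background for why communication per gate is the metric of interest (this is a generic building block, not the talk's 4-party protocol), secret-shared Boolean circuits typically evaluate XOR gates for free, with all communication spent on AND gates. A minimal XOR-sharing sketch:

```python
import secrets

def share_bit(bit: int):
    """Split a bit into three XOR shares (a generic additive sharing,
    not the specific sharing used in the talk's protocol)."""
    s1 = secrets.randbits(1)
    s2 = secrets.randbits(1)
    return (s1, s2, bit ^ s1 ^ s2)

def reconstruct(shares) -> int:
    out = 0
    for s in shares:
        out ^= s
    return out

def xor_gate(shares_a, shares_b):
    """Each party XORs its local shares: zero bits of communication."""
    return tuple(a ^ b for a, b in zip(shares_a, shares_b))
```

AND gates, by contrast, require interaction, so protocol design focuses on driving the per-gate communication down — here, to fewer than 8 bits.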

3:10 p.m. - 3:40 p.m.
Coffee Break

3:40 p.m. - 4:05 p.m.
Speaker: Mohammad Mahmoody (University of Virginia)
Title: Learning under p-Tampering Attacks
Abstract: Mahloujifar and Mahmoody (TCC'17) studied attacks against learning algorithms using a special case of Valiant's malicious noise, called p-tampering, in which the adversary can change training examples with independent probability p, but only using correct labels. They showed the power of such attacks by increasing the error probability in the so-called "targeted" poisoning model, in which the adversary's goal is to increase the loss of the generated hypothesis on a particular test example. At the heart of their attack was an efficient algorithm to bias the average output of any bounded real-valued function through p-tampering.

In this work, we present new attacks for biasing the average output of bounded real-valued functions, improving upon the biasing attacks of Mahloujifar and Mahmoody. Our improved biasing attacks directly imply improved p-tampering attacks against learners in the targeted poisoning model. As a bonus, our attacks come with a considerably simpler analysis than previous attacks.

We also study the possibility of PAC learning under p-tampering attacks in the non-targeted (a.k.a. indiscriminate) setting, where the adversary's goal is to increase the risk of the generated hypothesis (on a random test example). We show that PAC learning is possible under p-tampering poisoning attacks essentially whenever it is possible in the realizable setting without attacks. We further show that PAC learning under "no-mistake" adversarial noise is not possible if the adversary can choose which examples to tamper with (still limited to only a p fraction) and substitute them with adversarially chosen ones. Our formal model for such "bounded-budget" tampering attackers is inspired by the notions of (strong) adaptive corruption in secure multi-party computation.
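
The flavor of a p-tampering attack can be simulated with a toy bounded function — the average of n fair bits. With independent probability p, a greedy adversary replaces a sample with one that raises the function's output, biasing the mean from 1/2 toward 1/2 + p/2. The greedy rule below is an illustration, not the paper's biasing algorithm:

```python
import random

def p_tampered_bits(p: float, n: int, rng: random.Random):
    """Draw n fair bits; each is independently exposed to the adversary
    with probability p, who greedily sets it to 1."""
    bits = []
    for _ in range(n):
        b = rng.randint(0, 1)       # honest sample
        if rng.random() < p:        # tampering opportunity
            b = 1                   # greedy choice that raises the average
        bits.append(b)
    return bits

def average(bits) -> float:
    return sum(bits) / len(bits)
```

Note that each tampered bit is still a legal sample value, matching the constraint that p-tampering may only inject valid, correctly labeled examples.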

4:05 p.m. - 4:30 p.m.
Speaker: Allison Mankin (Salesforce)
Title: Deploying DNS privacy and security enhancements from a user perspective

Directions: The seminar will be held at the Volgenau School of Engineering.

By Car: The closest visitor parking is in the Shenandoah parking deck. You can also park in the Mason Pond or Rappahannock River parking decks. The cost for any of these is $3/hour (max: $15). If you purchase and print a daily parking permit ahead of time, it will cost $2-4 but will require you to park in a somewhat distant lot.

By Metro: George Mason University provides a shuttle service between campus and the Vienna metro stop (Orange line) called the Sandy Creek - Vienna Metro shuttle. The shuttles run every 15 to 30 minutes from 6:00 a.m. to 10:00 p.m., Monday through Friday. When exiting the Metro, take the North exit, and find the bus at Bay C. Take the shuttle between the Vienna stop and the Sandy Creek Transit Center.