
General Information

Location: Georgetown University Healey Family Student Center

The Healey Family Student Center is located on the southern end of the Hilltop Campus. It is less than a 5-minute walk from the Southwest Parking Garage, Shuttle Bus Turnaround, and the Front Gates (37th & O Street).

Hosts: Elissa Redmiles and Micah Sherr

Program time: 1–5 PM, Friday Feb 21

Georgetown's Accessibility Information

Assistive listening technology will be available during the talks at DCAPS.

Program Schedule

Introductory Remarks

Time: 1:00–1:10 PM


Talk #1: Paul Syverson (US Naval Research Laboratory)

Time: 1:10–1:35 PM

Abstract: Onion-Location makes it easy for websites offering onion service access to support automatic discovery in Tor Browser of the random-looking onion address associated with their domain. We provide the first measurement study of how many websites currently use Onion-Location, and we describe the open-source tools we created to conduct the study. Onion-Location has been criticized elsewhere for its lack of transparency and vulnerability to blocking. Perhaps even more troubling, we show that Onion-Location is vulnerable to very accurate fingerprinting. We present recommended changes to Onion-Location, alternatives to it, and steps toward even more secure onion discovery and association.
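For context on the mechanism being measured: a site advertises its onion counterpart by sending an Onion-Location HTTP header (or an equivalent meta tag), and Tor Browser only honors values that are http(s) URLs pointing at a .onion host. A minimal sketch of that validity check (the helper name is invented for illustration):

```python
from urllib.parse import urlparse

def is_valid_onion_location(value: str) -> bool:
    """Sketch of the check Tor Browser applies to an Onion-Location
    value: it must parse as an http(s) URL whose host is a .onion
    address. (Illustrative helper, not Tor Browser's actual code.)"""
    parsed = urlparse(value)
    return (
        parsed.scheme in ("http", "https")
        and parsed.hostname is not None
        and parsed.hostname.endswith(".onion")
    )
```

A value such as `https://example.com/` or a non-http scheme would be rejected, which is part of why discovery is limited to headers the browser can trust.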


Talk #2: Rupayan Mallick (Georgetown University, Massive Data Institute)

Time: 1:35–2:00 PM

Abstract: Risk assessment of generative models in digital replica creation

Generative models have been a talking point for the past couple of years, with models such as ChatGPT, DALL-E, and Stable Diffusion. These models can generate and personalize images, achieving outputs of high fidelity and detail. Such capabilities make them ideal for image editing and enhancement. But these models also exhibit a number of vulnerabilities when associated with an individual or with copyrighted content: deepfakes, memorization, and digital replica generation. There has been increasing interest in the privacy implications of these vulnerabilities. This talk will focus on risk assessment of state-of-the-art generative models with respect to the digital replica vulnerability.


Talk #3: Jan Tolsdorf (George Washington University)

Time: 2:00–2:10 PM

Abstract: Unfairness, bias, and untruth remain persistent concerns in AI chatbots. To explore how users perceive these issues, we conducted a study with 260 participants who audited an AI chatbot, evaluating its fairness, trustworthiness, and potential risks. Our findings offer insights into the strategies participants employed during the audit and how they assessed these key principles. This talk presents preliminary results on user perceptions of fairness, risk, and trust in AI chatbots.


Coffee Break #1

Time: 2:10–2:30 PM


Talk #4: Simson Garfinkel

Time: 2:30–2:55 PM

Abstract: In 2023, MIT Press asked me to write a book about differential privacy. That book will be published on March 25, 2025; you can pre-order a copy today from Amazon at this link: https://amzn.to/4hmJ41L. In this talk, I’ll share the story behind the book and show some of its cartoons by Ted Rall, the book’s award-winning illustrator.


Talk #5: Maurice Shih (University of Maryland)

Time: 2:55–3:20 PM

Abstract: zk-promises: Anonymous Moderation, Reputation, and Blocking from Anonymous Credentials with Callbacks

Anonymity is essential for free speech and expressing dissent, but platform moderators need ways to police bad actors. For anonymous clients, this may involve banning their accounts, docking their reputation, or updating their state in a complex access control scheme. Frequently, these operations happen asynchronously when some violation, e.g., a forum post, is found well after the offending action occurred. Malicious clients, naturally, wish to evade this asynchronous negative feedback. This raises a challenge: how can multiple parties interact with private state stored by an anonymous client while ensuring state integrity and supporting oblivious updates?

We propose zk-promises, a framework supporting stateful anonymous credentials where the state machines are Turing-complete and support asynchronous callbacks. Client state is stored in what we call a zk-object, zero-knowledge proofs ensure the object can only be updated as programmed, and callbacks allow third-party updates even for anonymous clients, e.g., for downvotes or banning. When clients authenticate, they anonymously assert some predicate on their state and that they have handled all previous callbacks.

zk-promises allows us to build a privacy-preserving account model. State that would normally be stored on a trusted server can be privately outsourced to the client while preserving the server’s ability to update the account.

To demonstrate the feasibility of our approach, we design, implement, and benchmark an anonymous reputation system with better-than-state-of-the-art performance and features, supporting asynchronous reputation updates, banning, and reputation-dependent rate limiting to better protect against Sybil attacks.
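To make the account-model idea concrete, here is a plain-Python toy of the *interface* zk-promises targets: a client-held state object plus a queue of asynchronous third-party callbacks that must be handled before the next authentication. All names are invented, and there is no cryptography or zero-knowledge here; this only illustrates the state/callback flow described above.

```python
class ZkObjectToy:
    """Toy stand-in for a client-held zk-object (no crypto)."""

    def __init__(self):
        self.reputation = 0
        self.banned = False
        self.pending_callbacks = []  # asynchronous third-party updates

    def post_callback(self, update):
        """A moderator schedules an update (e.g., a downvote or ban)
        well after the offending action occurred."""
        self.pending_callbacks.append(update)

    def handle_callbacks(self):
        """The client must apply every pending update; in the real
        scheme, proofs enforce that updates follow the programmed
        state machine."""
        while self.pending_callbacks:
            self.pending_callbacks.pop(0)(self)

    def authenticate(self, predicate):
        """Authentication asserts a predicate on the state *and* that
        all previous callbacks have been handled."""
        if self.pending_callbacks:
            raise RuntimeError("unhandled callbacks")
        return predicate(self)
```

For example, a downvote posted as a callback must be absorbed into the client's reputation before the client can anonymously prove "I am not banned" and authenticate again.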


Coffee Break #2

Time: 3:20–3:50 PM


Talk #6: Zhifan Lu (University of Virginia)

Time: 3:50–4:15 PM

Abstract: Tor is a well-known anonymous communication tool, used by people with various privacy and security needs. Prior works have exploited routing attacks to observe Tor traffic and deanonymize Tor users. Subsequently, location-aware relay selection algorithms have been proposed to defend against such attacks on Tor. However, location-aware relay selection algorithms are known to be vulnerable to information leakage about client locations and to guard placement attacks. Can we design a new location-unaware approach to relay selection while achieving a similar goal of defending against routing attacks?

Towards this end, we leverage the Resource Public Key Infrastructure (RPKI) to design new guard relay selection algorithms. We develop a lightweight Discount Selection algorithm that incorporates only Route Origin Authorization (ROA) information, and a more secure Matching Selection algorithm that incorporates both ROA and Route Origin Validation (ROV) information. Through custom Shadow simulations and benchmarking, our evaluation shows that the Matching Selection algorithm increases the number of ROA-ROV matched client-relay pairs, reaching 48.47% with minimal performance overhead.
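As a toy illustration of the matching idea (the data model, names, and fallback behavior here are assumptions for illustration, not the paper's actual algorithm): a ROA-covered client restricts its guard choice to relays that are both ROA-covered and ROV-enforcing, so that the resulting client-relay pair is ROA-ROV matched.

```python
import random

# Toy model: each relay is (nickname, has_roa, does_rov).
RELAYS = [
    ("relayA", True,  True),
    ("relayB", True,  False),
    ("relayC", False, False),
]

def matching_selection(relays, client_has_roa, rng=random):
    """Illustrative sketch of a 'matching' guard choice: a ROA-covered
    client prefers relays whose prefix is ROA-covered and whose network
    performs ROV, forming a ROA-ROV matched pair. Falls back to the
    full list if no relay matches."""
    if client_has_roa:
        matched = [r for r in relays if r[1] and r[2]]
    else:
        matched = list(relays)
    pool = matched or list(relays)
    return rng.choice(pool)
```

In this sketch, a ROA-covered client always ends up paired with `relayA`, the only relay that is both ROA-covered and ROV-enforcing; a real deployment would balance this restriction against bandwidth weighting and performance, as the talk's Shadow evaluation does.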


Talk #7: Micah Sherr (Georgetown University)

Time: 4:15–4:40 PM

Abstract: Exploring the collateral-damage argument of Internet censorship resistance.


Happy Hour

Time: 4:40–End

Join us for conversations and refreshments to conclude the day!