Online users engaging with apps face a range of threats from bad actors who monitor or modify their internet traffic, including surveillance, targeted attacks, and censorship. Researchers can help by identifying misbehaving apps whose poor cryptography puts users at risk, but doing so ethically and efficiently is often practically impossible. Individuals are understandably wary of granting access to their actual payloads or allowing their data to be written to disk, and relying on a combination of deductive reasoning and reverse engineering does not scale.
Yet without visibility into actual packets and payloads, how are researchers supposed to determine which network flows are putting users at risk and which apps are responsible for generating those flows?
From 2021 to 2022, Information Controls Fellowship Program Fellow Ben Mixon-Baca set out to create and test an inductive method for answering this difficult question. In collaboration with Diwen Xue and Dr. Roya Ensafi of the University of Michigan and Dr. Jedidiah R. Crandall of Arizona State University, Mixon-Baca developed CryptoSluice, a tool that automatically and ethically identifies weak or unencrypted traffic at scale. The tool, which is also the subject of a soon-to-be-published report, preserves both user privacy and data utility by using a modified form of content sifting to identify apps associated with flows that contain repeating byte patterns in anonymized and obfuscated data. Because of the team’s emphasis on ethical research, app attribution from real-time flows is accomplished without ever viewing or storing raw payloads from internet traffic; all analysis is conducted in memory. The upcoming report details Mixon-Baca’s groundbreaking approach as well as insights gained from deploying CryptoSluice at a major university’s internet gateway over a period of thirty days. For now, the following highlights can be shared.
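To give a flavor of the idea, the sketch below (not CryptoSluice’s actual implementation, which the forthcoming report describes) shows a content-sifting-style pass in Python: payload bytes are hashed in small sliding windows, and only the resulting fingerprints and flow identifiers are kept in memory, so raw payloads are never stored or written to disk. Fingerprints that recur far more often than random bytes would point to repeating, likely unencrypted or weakly encrypted content. The window size, threshold, and function names are illustrative assumptions.

```python
# Minimal, illustrative sketch (not the CryptoSluice implementation):
# tally hashed substrings of packet payloads entirely in memory so that
# raw payload bytes are never written to disk.

import hashlib
from collections import Counter, defaultdict

WINDOW = 16           # bytes per substring window (assumed value)
REPEAT_THRESHOLD = 5  # fingerprints seen at least this often count as "repeating"

fingerprint_counts = Counter()        # fingerprint -> global occurrence count
flow_fingerprints = defaultdict(set)  # flow id -> fingerprints seen in that flow

def sift_payload(flow_id: str, payload: bytes) -> None:
    """Hash sliding windows of a payload; keep only the hashes, never the bytes."""
    for i in range(len(payload) - WINDOW + 1):
        window = payload[i : i + WINDOW]
        fp = hashlib.blake2b(window, digest_size=8).digest()
        fingerprint_counts[fp] += 1
        flow_fingerprints[flow_id].add(fp)

def suspicious_flows() -> list[str]:
    """Flows containing byte patterns that recur across traffic; well-encrypted
    payloads should look uniformly random and rarely repeat."""
    hot = {fp for fp, n in fingerprint_counts.items() if n >= REPEAT_THRESHOLD}
    return [flow for flow, fps in flow_fingerprints.items() if fps & hot]
```

In practice, content sifting typically uses incremental Rabin fingerprints over sliding windows for efficiency; a general-purpose hash stands in here for brevity.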
Key Findings
- CryptoSluice’s newly developed methodology identifies problematic and leaky apps from the data alone; analysts no longer need to guess which apps to reverse engineer (the data informs the analyst, not the other way around).
- The inductive approach preserves user privacy throughout the entire analysis: all data remains in flight, and no payloads are ever written to disk.
- To test the methodology, the tool was deployed on a real university network, where it attributed real-time internet traffic flows to specific apps.
- Across 30 days of deployment, 105 apps with poor or no transport-layer security were identified. Six of these were subsequently reverse engineered and confirmed to be putting users at risk by requesting suspicious permissions or employing insecure cryptography (for example, transmitting Personally Identifiable Information in plaintext).
- The six apps, many of which have millions of downloads from Tencent and other Chinese app stores, are (1) Kuaishou, (2) Fliggy Travel (Taobao Trip), (3) Quark Browser, (4) Kuaifan VPN Accelerator, (5) Royal Flush, and (6) Ctrip. These apps do not implement proper encryption for their network communications.
- Identified information leaks include software identifiers (operating system and app names and version numbers), hardware identifiers (MAC addresses, screen resolution), and user identifiers (server and client IP addresses, geolocation, language setting, and unique identifiers); a minimal example of checking for this kind of plaintext leak follows this list.
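As an illustration only (not part of CryptoSluice and not the team’s verification code), the snippet below shows the kind of confirmation check a researcher might run against traffic captured from their own test device, never from real users: searching a payload for identifier values that are known in advance. The identifier values and the capture file name are placeholders.

```python
# Illustrative only: confirm that known identifiers from the researcher's
# own test device appear verbatim (i.e., in plaintext) in captured traffic.
# The identifier values and the capture file name below are placeholders.

KNOWN_IDENTIFIERS = {
    "mac_address": b"02:00:5e:ab:cd:ef",
    "android_id":  b"9774d56d682e549c",
    "app_version": b"ExampleApp/1.2.3",
}

def find_plaintext_leaks(payload: bytes) -> list[str]:
    """Return the names of known identifiers found verbatim in a payload."""
    return [name for name, value in KNOWN_IDENTIFIERS.items() if value in payload]

with open("test_device_capture.bin", "rb") as f:
    leaks = find_plaintext_leaks(f.read())
    if leaks:
        print("Plaintext identifiers observed:", ", ".join(leaks))
```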
The full report is currently under review for publication and will be made publicly available upon acceptance. Further inquiries about this project should be directed to [email protected] or [email protected].
This post is Part One of a two-part series on Mixon-Baca’s work with CryptoSluice. Part Two details how human rights activists, vulnerability researchers, and network operations teams can use CryptoSluice going forward to identify apps that are insecurely transmitting information across their networks.
About the program: OTF’s Information Controls Fellowship Program (ICFP) supports examination of how governments in countries, regions, or areas of OTF’s core focus are restricting the free flow of information, impeding access to the open internet, and implementing censorship mechanisms, thereby threatening the ability of global citizens to exercise basic human rights and democratic participation. The program supports fellows working within host organizations that are established centers of expertise by offering competitively paid fellowships of three, six, nine, or twelve months in duration.