  1. The JavaScript Black Hole

    A playbook for ethical engineering on the web.

    In the 25 years since JavaScript first shipped in Netscape Navigator, the language has evolved from a cute little toy into an integral part of the Internet. JavaScript frameworks such as React and Angular have transformed the web, bringing us fully fledged client-side applications with functionality that could only be imagined a decade ago. In the process, the web has become more powerful, but also much more dangerous. Malware and mass surveillance have become persistent threats, fueled by the ever-expanding amounts of user data exposed by new JavaScript features and sucked into the black hole of omnipresent tracking networks. These threats carry real human costs, and they have been worsened by the increasingly popular belief that "the web browser is an operating system, and everything is an app."

    This essay is written for web developers and people interested in the field. In it, I break down the problems mentioned above, demonstrate some commonly used JavaScript practices that can expose users to harm, provide examples of actual harm being done, and ultimately propose actionable alternatives that we, as developers, can adopt to prioritize ethical engineering and minimize harm to our users while still building feature-rich applications.

    Read More

    Posted 2020-04-04 11:40:00 CST by henriquez. 9 comments
  2. Chrome allows silent enumeration of USB devices

    User consent is baked into the spec, but Google skips it.

    Via the Web MIDI API, Google Chrome (up to at least version 70) allows silent monitoring of all connected USB MIDI devices, such as MIDI keyboards and audio interfaces. While this enables interesting web applications such as software synthesizers, it also gives shady ad networks and malicious actors a new vector for very precise device fingerprinting and tracking. The API is trivial to access; for example, run this in a JavaScript console:

    // Ask the browser for MIDI access (no system-exclusive messages needed).
    navigator.requestMIDIAccess({sysex: false})
        .then(
            function(midiAccess) {
                console.log(midiAccess);
                // midiAccess.inputs is map-like; iterating yields [id, MIDIInput] pairs.
                for (var entry of midiAccess.inputs) {
                    var input = entry[1];
                    console.log('Found device: ', input.manufacturer, input.name);
                }
            },
            function() { console.log('Error: no MIDI access'); }
        );

    Assuming you have MIDI devices connected, this will output something like:

    MIDIAccess {inputs: MIDIInputMap, outputs: MIDIOutputMap, sysexEnabled: false, onstatechange: null}
    Found device:  Microsoft Corporation 3- UA-25EX
    Found device:  Midiman MIDIIN3 (Axiom Pro 61)
    Found device:  Midiman MIDIIN4 (Axiom Pro 61)

    From here, it's possible to listen for input events on all connected MIDI devices (aka a MIDI keylogger!).
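
    As a minimal, self-contained sketch (the handler below only logs to the console; a malicious script would quietly ship the data elsewhere):

    // Sketch: listen to every connected MIDI input, in effect a MIDI keylogger.
    navigator.requestMIDIAccess({sysex: false}).then(function(midiAccess) {
        for (var entry of midiAccess.inputs) {
            entry[1].onmidimessage = function(event) {
                // event.data is a Uint8Array, e.g. [status, note, velocity] for notes.
                console.log('From', event.target.name, ':', event.data);
            };
        }
    });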

    Again, while Google most likely had noble intentions in providing this API, their implementation is half-assed. The Web MIDI Specification provides for a user consent step, similar to the confirmation dialogs that pop up around webcam access or push notifications, but Chrome skips over this and grants permission as soon as a script asks for it.

    Privacy implications

    On its face, the impact of allowing scripts to silently dump a list of USB MIDI devices seems minor: only a very small percentage of users will have MIDI keyboards or audio interfaces hooked up. But counterintuitively, that rarity increases the privacy impact. Precisely because so few users have such devices attached, a MIDI device list makes a browser highly distinctive, and Chrome's implementation of the Web MIDI API hands trackers a new vector for very precise device fingerprinting.
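
    To make that concrete, here is a hedged sketch of how a tracking script might fold the device list into one component of a fingerprint (the hashing approach is illustrative, not taken from any known tracker):

    // Sketch: derive a fingerprint component from the MIDI device list.
    // Illustrative only; a real tracker would mix this with other browser signals.
    navigator.requestMIDIAccess({sysex: false}).then(function(midiAccess) {
        var names = [];
        for (var entry of midiAccess.inputs) {
            names.push(entry[1].manufacturer + ' ' + entry[1].name);
        }
        // Hash the sorted device list with Web Crypto to get a stable token.
        var data = new TextEncoder().encode(names.sort().join('|'));
        return crypto.subtle.digest('SHA-256', data);
    }).then(function(digest) {
        var hex = Array.from(new Uint8Array(digest)).map(function(b) {
            return b.toString(16).padStart(2, '0');
        }).join('');
        console.log('MIDI fingerprint component:', hex);
    });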

    The Electronic Frontier Foundation (EFF) has a great write-up and demonstration of device fingerprinting techniques via their Panopticlick Project:

    When you visit a website, you are allowing that site to access a lot of information about your computer's configuration. Combined, this information can create a kind of fingerprint — a signature that could be used to identify you and your computer. Some companies use this technology to try to identify individual computers.

    To my knowledge, neither the EFF nor anyone else has researched the impact of Web MIDI device leakage in the context of device fingerprinting. In practice, it seems this could enable precise tracking of creative individuals in a manner that couldn't be blocked without disabling JavaScript entirely.

    Google can easily fix this!

    Again, the Web MIDI API specifies a user consent step, and Google Chrome already has generic UI components for user confirmation dialogs. It should be simple for the Chrome team to implement a consent prompt and stop malicious scripts from scooping up people's connected MIDI devices. While Google, as the world's biggest advertiser, has a perverse incentive to make its users easier to track, I believe the Chrome team had good intentions in building this API. They just did a bad job, and they should fix it.

    Posted 2018-10-20 12:14:00 PST by henriquez. Comments
  3. How the DEA covers up illegal evidence-gathering

    Secret phone records database used for “parallel construction” of evidence

    According to slides released by the EFF, law enforcement agencies have been using Hemisphere, a secret phone-records monitoring database, to build criminal cases against defendants, then covering up the database's role by “fortuitously” happening across other evidence gained through legitimate channels. The 24-page slide deck describes the program, along with the elaborate techniques used to conceal the true source of evidence from judges, prosecutors, and criminal defendants.

    Funded by the Office of National Drug Control Policy (ONDCP), the Hemisphere program is powered by a massive phone-metadata monitoring database with advanced pattern-recognition algorithms designed to track individual targets, including their location. Features include:

    • No need for a warrant! Near-realtime access to phone records and metadata

    • Pattern recognition to identify individuals, even when they change phones

    • Location information for “tracking targets and placing them in certain areas at certain times.”

    Sounds great, right? The only problem, which the presentation skillfully dances around without explicitly acknowledging, is that it’s most likely illegal and unconstitutional. That’s why you “DO NOT mention Hemisphere in any official reports or court documents.” Instead, you use Hemisphere to gather the evidence you need, and then, by sheer luck, obtain the same documents through official channels, or pull the right car over at the right place and right time. This is an evidence-laundering technique known as parallel construction, or as the Hemisphere presentation puts it, “Parallel Subpoenaing.” The presentation goes to great lengths to describe the technique, emphasizing that the program must remain secret.

    It's illegal.

    Under the U.S. Constitution, criminal defendants are entitled to due process of law, which means both that evidence against them must be obtained through legitimate means and that they must be given a chance to challenge it in court. The Hemisphere program flies in the face of both requirements. Obtaining evidence through warrantless mass surveillance clearly violates the Fourth Amendment, and parallel construction conceals the illegal origin of that evidence, making it impossible for defendants to challenge the practices of the agents who gathered it.

    Under the “fruit of the poisonous tree” doctrine, evidence gathered through illegal means is inadmissible, and any further evidence obtained as a result of it is inadmissible too. This precedent is designed to prevent exactly what the government is doing with Hemisphere. On paper, our criminal justice system recognizes that it’s better to let a few criminals walk free than to allow everyone’s constitutional rights to be systematically violated by shady law enforcement practices. Unfortunately, when evidence is concealed from the courts, they cannot put a stop to the practice, and justice cannot be served.

    Posted 2014-09-14 04:19:00 PST by henriquez. 3 comments
  4. The FBI’s plan to criminalize strong encryption

    Now that Congress has quietly backed away from CISPA and expansion of the CFAA, the Federal Government has wasted no time in introducing new half-baked Internet regulations. The latest comes courtesy of the FBI. Under their proposed expansion of CALEA, a federal wiretapping law, online service providers would be required to build wiretapping capabilities into their software, allowing law enforcement to secretly monitor user communications.

    According to the FBI, these expanded monitoring capabilities are required because child pornographers and terrorists are increasingly “going dark”: instead of calling each other on their wiretapped iPhones, they’re sending encrypted messages over the Internet, which the FBI can’t read. By expanding wiretapping requirements to cover online service providers, the FBI reasons, these tech-savvy villains can be brought to justice.

    On the surface, this looks like a reasonable proposition. But, as is the case with all technical regulations, the devil is in the details.

    For a typical online service like Gmail, complying with a wiretapping order would be little to no trouble. Because a user’s messages are stored centrally on Google’s servers, Google could simply give the FBI access to those servers and be done with it.

    But some online service providers allow users to exchange encrypted messages. Although the providers may run central servers that relay these messages, they cannot read them: the messages are encrypted end to end, and only the intended recipients can decrypt them. For these service providers, the only way to comply with a government wiretapping mandate is to bundle secret monitoring capabilities, or “backdoors,” into the actual apps that run on users’ computers or smartphones.
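
    As a rough sketch of why the provider itself cannot read this traffic, here is public-key encryption between two users, illustrated with the modern TweetNaCl library (the names and setup are illustrative, not any particular app’s protocol):

    // Sketch of end-to-end encryption (Node.js, using the tweetnacl package).
    // Illustrative only: real apps add key verification, forward secrecy, etc.
    var nacl = require('tweetnacl');

    var alice = nacl.box.keyPair(); // sender's keypair
    var bob = nacl.box.keyPair();   // recipient's keypair

    var nonce = nacl.randomBytes(nacl.box.nonceLength);
    var message = new TextEncoder().encode('meet at noon');

    // Alice encrypts to Bob's public key; only Bob's secret key can open it.
    var ciphertext = nacl.box(message, nonce, bob.publicKey, alice.secretKey);

    // A service provider relaying only {ciphertext, nonce} cannot decrypt it.

    // Bob decrypts with his secret key and Alice's public key.
    var plaintext = nacl.box.open(ciphertext, nonce, alice.publicKey, bob.secretKey);
    console.log(new TextDecoder().decode(plaintext)); // "meet at noon"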

    While this might sound like a crazy conspiracy theory, it is the primary concern of a leading group of computer security researchers, including cryptography legends Bruce Schneier and Phil Zimmermann. Last Friday, the Center for Democracy and Technology released a report condemning the FBI’s plan. They warn that requiring software providers to install backdoors on people’s devices would “lower the already low barriers to successful cybersecurity attacks” by giving hackers an easy way to attack apps while remaining undetected.

    But this could be exactly what the FBI wants. The FBI’s plan effectively gives developers a choice:

    • Install sketchy, easily-hacked monitoring software on your users’ devices.

    • Pay $25,000 a day in non-compliance fines (unpaid fines double daily after 90 days, quickly paying off the national debt; see the back-of-the-envelope sketch after this list)

    • —or— just don’t bother offering end-to-end encrypted software.
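
    To see how quickly those doubling fines become absurd, here’s a back-of-the-envelope sketch (the roughly $16.7 trillion figure for the 2013 national debt is my approximation):

    // Back-of-the-envelope: how fast do doubling fines pass the national debt?
    // Assumes the $25,000/day base described above, doubling every day.
    var fine = 25000;
    var days = 0;
    var NATIONAL_DEBT_2013 = 16.7e12; // roughly $16.7 trillion (approximation)
    while (fine < NATIONAL_DEBT_2013) {
        fine *= 2;
        days++;
    }
    console.log('The daily fine alone passes the national debt after ' +
                days + ' more days.'); // about 30 days after doubling begins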

    By making the first two options morally reprehensible and unrealistically burdensome, the FBI might hope that companies will simply stop offering encrypted software to their users, making it much easier to centrally wiretap people’s communications.

    Certainly, investors will be less willing to fund startups that are required to install backdoors on users’ devices. If such a backdoor is exploited by hackers, where does the liability fall? It is arguably negligent to include a feature that is unquestionably adverse to every user, regardless of whether the law required it. Unless the FBI also grants service providers immunity for damages related to the backdoors, these companies will quite likely be on the losing end of lawsuits when they inevitably get hacked.

    More important than concerns about stifling Silicon Valley innovation, though, are the questions the FBI’s proposed regulation raises about the government’s right to eavesdrop on its citizens. Since the 1990s, law-abiding people have taken for granted their ability to exchange encrypted digital communication with complete (or at least pretty good) privacy. Are child pornographers and terrorists a big enough threat to justify taking this away?

    I hate to end with a physical analogy, but this is a great way to explain the issue to someone less tech-savvy. Imagine you are a manufacturer of the locks used in bank and casino vaults. You take great pride in your craft, and your lock is secure against all but the most extreme attempts to break it. Now, suppose one day the FBI comes and tells you, “It’s fine and all that you built this lock, but we need you to add a second keyhole so we can open the vault and see if there are terrorists hiding inside.” The key they want to use is no more sophisticated than a house key, and the new keyhole could be picked by the most pedestrian of criminals. Is this really a good idea?

    Posted 2013-05-20 00:00:00 PST by henriquez. Comments