New America

In Depth

Cybersecurity Research Should Not Be A Crime

Why We Need Clear, Permanent CFAA and DMCA Exemptions


Introduction

In early 2021, the fate of the Bidens' Peloton became a brief sub-plot to the presidential transition. The question of whether the soon-to-be First Family's exercise bike could introduce a cybersecurity risk onto the very secure White House grounds was mostly treated as a "presidential bubble" story, in which security experts speculated about ways this incredibly popular internet-connected stationary bike could be vulnerable to hackers, and how it might be hardened. Among the concerns were whether the bike's cameras and microphones could be used for spying, whether it could leak other sensitive information, and whether it could serve as a stepping stone for gaining access to other White House systems. As it turns out, those concerns were not misplaced. On the day President Biden took his oath of office, a team of British security researchers reached out to Peloton to disclose a number of serious information leaks they had found on Peloton's servers.

The President's unique digital security needs are evaluated by top-notch cybersecurity professionals, who can take the necessary steps to mitigate security and privacy risks. These experts are tasked with figuring out whether the First Family's digital devices are safe for use in secure locations. They may also decide that certain devices are simply too insecure to introduce into such an environment. But lurking beneath the question of whether a product is secure enough for the White House is the question of whether it is secure enough for everyone else's house. Most American consumers don't have access to a team of security experts who could answer that question for them, and absent such a team, whether a product is digitally vulnerable is a question left to product manufacturers.

A piece of tech is considered "vulnerable" when a part of the code that runs it can be exploited by an attacker. A software vulnerability is like a hole in an otherwise sturdy fence: it may be hard to find, but it has the potential to let out what should stay inside (or let in what should stay outside). There can also be vulnerabilities in a product's hardware, which similarly provide entry points for technological attacks.
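To make the "hole in the fence" idea concrete, here is a deliberately simplified Python sketch of one of the most common classes of software vulnerability, SQL injection. The function names and query are invented for illustration; real applications would go through a database driver.

```python
def build_lookup_query(username: str) -> str:
    # Vulnerable pattern: user input is pasted directly into the SQL
    # string, so input like "x' OR '1'='1" rewrites the query's logic
    # and can expose every row in the table.
    return f"SELECT * FROM users WHERE name = '{username}'"

def build_lookup_query_safe(username: str):
    # Safer pattern: a parameterized query lets the database driver
    # treat the input strictly as data, never as SQL.
    return "SELECT * FROM users WHERE name = ?", (username,)
```

A single quote in the input is enough to change the first query's meaning, which is why parameterized queries are a standard defense against this class of flaw.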

Uncovering a "zero-day" (the term for an undiscovered vulnerability) requires an understanding of common coding mistakes, knowledge of previous vulnerabilities, and some skill in looking for all of that in code. Discovering a new vulnerability is valuable for the malicious actor who finds it, but not everyone looking for vulnerabilities has terrible intentions. There is a community of security experts, often labeled "white hat hackers," working to find those holes and responsibly disclose that information to manufacturers so vulnerabilities can be patched before they are maliciously exploited. This allows vendors to make their products more secure without spending the time and money necessary to conduct their own testing. Encouragingly, there has also been a growing trend among companies to implement processes for responsible vulnerability disclosure, and in some cases even to pay researchers for the vulnerabilities they find.

In the case of Peloton, a flaw left the data of the company's riders exposed. While the vulnerability was present, it could have allowed nefarious actors to gather those users' data and use it for their own ends. Researchers found this leak and disclosed it to Peloton through the company's vulnerability disclosure process. When those researchers released their report about the Peloton vulnerability in May of 2021, the company had already fixed the vulnerabilities in its system. Were it not for the responsible disclosures made by the researchers at Pen Test Partners, the private data of Peloton riders might still be broadly available.

Editorial disclosure: This brief discusses policies by Google and Microsoft (including LinkedIn, whose co-founder is on New America's Board), all of which are funders of work at New America but did not contribute funds directly to the research or writing of this piece. View our full list of donors at www.newamerica.org/our-funding.

Criminalization of Security Research

Despite some recent changes in norms around vulnerability disclosure, researchers have not always found a receptive audience with the companies whose products they test. Good-faith researchers work in fear of their efforts being met with legal threats, lawsuits, and even criminal penalties.

This fear is largely rooted in how two pieces of federal law, the Computer Fraud and Abuse Act of 1986 (CFAA) and the Digital Millennium Copyright Act (DMCA), apply to security research. Both laws were written to address new forms of crime enabled by new uses of technology, but they were written for a fundamentally different digital world. Each law penalizes conduct that includes methods vital to security testing, but neither carves out general or permanent exemptions that clearly distinguish between researchers trying to help and the sorts of intentional malicious behavior the laws seek to prohibit.

The legal uncertainty faced by researchers is something the Open Technology Institute has experienced firsthand. Over the last couple of years, OTI has spent considerable time testing connected consumer products using the Digital Standard, a framework for evaluating the privacy and security of internet-connected consumer products and software. The Standard was developed by Consumer Reports and a coalition of civil society organizations, and consists of a group of tests measuring how well a product adheres to a set of best practices for connected hardware and software. Some of the tests evaluate technical practices, like whether a product uses encryption or strong authentication processes, while others evaluate a company's policies on privacy issues like data collection and retention. In the course of this work, OTI conducted the same kinds of investigations that many security researchers do when searching for software vulnerabilities, including decompiling and examining mobile app code, monitoring network traffic to and from connected devices and/or mobile apps, and submitting form values intended to break an app. After considering OTI's own legal risk, decisions to test products and publish specific findings were limited by concerns about potential penalties under the CFAA and DMCA, particularly since many products have disclaimers in their Terms of Service saying that the company will take legal action against, or refer to law enforcement for prosecution, individuals who violate those terms.
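As a hedged illustration of the last kind of testing mentioned above, submitting form values intended to break an app, the sketch below shows the sort of malformed inputs a tester might try against a single form field. The payload list and the validator are hypothetical examples, not OTI's actual test suite.

```python
# Hypothetical "breaking" payloads a tester might submit to a form field.
BREAKING_INPUTS = [
    "",                            # empty value
    "A" * 10_000,                  # oversized value
    "<script>alert(1)</script>",   # markup injection attempt
    "0; DROP TABLE users",         # injection-style payload
    "\x00\xff\xfe",                # non-printable bytes
]

def validate_age_field(value: str) -> bool:
    # Example validator for an "age" form field: a robust app should
    # reject every payload above and accept only plausible numbers.
    return value.isdigit() and 0 < int(value) < 150
```

If any payload slips past validation and causes a crash or unexpected behavior, that is a signal of a deeper flaw worth investigating and disclosing.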

Background: the Landscape of Digital Security

In the last two years, a steady stream of hacks and ransomware attacks has kept cybersecurity in the headlines. Affecting everything from hospitals and local school systems to critical infrastructure to the federal government itself, these attacks have brought renewed attention to the current state of digital security. They have also made clear how broadly vulnerable to cyber attack the United States may be.

Shared Vulnerabilities

A piece of vulnerable software could be installed on millions of devices. Developers typically include "code libraries" when building software to handle functionality that is common across many other developers' software projects. This can include everything from how buttons look in a mobile app to how a smart product connects to a WiFi network. If a library is popular, it could be a component in thousands of products. The technical term for this library's relationship to those products is "upstream." When that upstream library has a weakness, all of the "downstream" systems running that code share it. These are known as shared, or common, vulnerabilities.
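The upstream/downstream relationship can be sketched as a simple dependency mapping. The product and library names below are invented for illustration:

```python
# Hypothetical products (downstream) and the shared libraries
# (upstream) each one is built on.
DEPENDENCIES = {
    "smart_plug_app": ["wifi_setup_lib", "ui_buttons_lib"],
    "fitness_tracker_app": ["wifi_setup_lib", "charting_lib"],
    "ebook_reader_app": ["ui_buttons_lib"],
}

def downstream_of(vulnerable_lib: str) -> list:
    # Every product that pulls in the vulnerable upstream library
    # shares its weakness until the product ships an update.
    return sorted(p for p, libs in DEPENDENCIES.items()
                  if vulnerable_lib in libs)
```

A flaw in `wifi_setup_lib` here would ripple into two unrelated products at once, which is exactly why one upstream bug can become an industry-wide event.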

The sharing of information about common vulnerabilities among researchers and developers when they are uncovered is vital, and product makers' distribution of updates and patches is a primary defense against cyber attack. However, developers do not always reliably update their upstream code, and users do not always reliably install available patches. Because attackers understand how many shared vulnerabilities persist in the world, it is common practice for them to use tools that scan the internet looking for evidence of a system running software known to be vulnerable.
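A minimal sketch of the version-matching idea behind such scanning tools follows. This is not a real scanner, and the product names and version numbers are invented; real tools probe hosts over the network and match what services report about themselves against databases of known-vulnerable releases.

```python
# Hypothetical list of (product, version) pairs with known flaws.
KNOWN_VULNERABLE = {
    ("ExampleHTTPd", "2.4.49"),
    ("ExampleHTTPd", "2.4.50"),
}

def parse_banner(banner: str):
    # Many network services announce themselves with a banner such as
    # "ExampleHTTPd/2.4.49"; split it into (product, version).
    name, _, version = banner.partition("/")
    return name.strip(), version.strip()

def looks_vulnerable(banner: str) -> bool:
    # Flag any host whose self-reported version matches a known flaw.
    return parse_banner(banner) in KNOWN_VULNERABLE
```

The same matching logic serves defenders too: inventorying your own systems against vulnerability databases is how organizations find unpatched software before attackers do.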

Proliferation of Vulnerable Hardware

When assessing cyber threat vectors, it is also important to remember that common vulnerabilities can exist in firmware (the code that controls digital hardware) or even in the hardware itself.

For example, Intel has faced a series of vulnerabilities in the security features available in some of its processor chips. Computer owners can generally find out if their system has an Intel processor, and if an Intel vulnerability is publicized they can at least know that they are affected. As one of the largest chip makers in the world, Intel has some ability to make updates for its hardware available, and many customers know to look for such firmware updates, but not all of them do. This means that unpatched chips are still out there. The task of identifying and fixing vulnerabilities in Intel chips is huge, but it pales in comparison to the situation for most Internet of Things (IoT) devices.

Unlike processors from major vendors, the chips and processors used in many IoT products are hard to trace. There are many different small chip makers, and most small chips are not well labeled. Even when they are labeled, information about the chip can be hard to find. Furthermore, due to factors like cost and availability, the chips used in a product can change between manufacturing runs. Given the frequency of these changes, it is sometimes hard to figure out what hardware is being used in any given device, even between versions of the same model. This reality of tech manufacturing means that a wide variety of products may end up using similarly vulnerable components.

IoT devices tend to be orders of magnitude less powerful than multi-function computers like laptops, or even smartphones. Because of these resource constraints, it can be hard, and in some cases impossible, to update code without special equipment. But even where updating code is technically feasible, a manufacturer's product life cycle may not include producing and distributing code updates for the entire usable life of the product, even when there are known vulnerabilities.

Responsible Disclosure Programs

Enthusiasts with technical talent and a willingness to help have long played an important role in finding bugs in software. The term "bug bounty" was coined in 1995 by Netscape Communications Corporation for a program to reward people who reported bugs "with various prizes depending on the bug class." Even in this early incarnation of a bug bounty, "users reporting significant security bugs" was already seen as the most valuable contribution: it was the only category that carried a cash prize, while all of the other prizes were Netscape Navigator merchandise. Despite these deep roots, an emphasis on responsible vulnerability disclosure has only come into its own as an industry standard over the last decade.

Many large companies, such as Google and Microsoft, have set up disclosure programs, and, as seen in the case of Peloton, smaller companies are doing the same.

Providing a way for researchers to report vulnerabilities takes more than setting up a dedicated email address. Incoming reports must be reviewed and validated, the vulnerability must be reproduced in order to test whether it has been fixed, and then a patch must be devised, written, and deployed. In addition, vulnerabilities may bring researchers into contact with data they should not have access to, which, depending on the type of data, may carry its own legal risk. Setting up a vulnerability disclosure program requires a willingness to work through that entire process. Given the relative complexity of these processes, it is important to create incentives so that smaller players can adopt these practices, and to spread vulnerability reporting as a standard practice in technology development.

In 2017, recognizing that "an increasing number of organizations in the public and private sectors are adopting vulnerability disclosure programs to improve their ability to detect security issues," the Cybersecurity Unit at the U.S. Department of Justice released a framework "to assist organizations interested in instituting a formal vulnerability disclosure program." While this framework sent an encouraging signal that the federal government recognizes the importance of responsible vulnerability disclosure, it also illustrates the challenges created by current law. The framework did not "dictate the form of or objectives for vulnerability disclosure programs," but rather outlined a design aimed at "reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act."

The Computer Fraud and Abuse Act

Three years before Tim Berners-Lee proposed the World Wide Web, Congress passed the Computer Fraud and Abuse Act. The law was written for a very different internet by a Congress whose frame of reference was a fairly specific kind of computer crime. The House Judiciary Committee report accompanying the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984 (the precursor to the CFAA) called the 1983 movie WarGames, in which a Seattle teen hunting for video games breaks into a top secret computer and nearly starts a thermonuclear war, a "realistic representation of the… access capabilities of the personal computer." With the CFAA, Congress was trying to create penalties for breaking into computers to view, change, or destroy data.

The CFAA makes it illegal to access any computer "without authorization." In addition to possible felony punishment, the CFAA also allows companies to sue in civil court alleging CFAA violations, even if law enforcement does not pursue charges.

Unfortunately, even with both criminal and civil penalties at stake, the CFAA does not define what accessing a computer "without authorization" means, or what it means to "exceed" that authorization. For most of the last decade there have been conflicting interpretations of these terms across federal circuits. The 2021 Supreme Court decision in Van Buren v. United States provided some guidance, requiring that technical definitions be used when words such as "access" carry technical meaning, and adopting a technical "gates up or down" standard for determining when access is exceeded.

This prolonged lack of clarity has allowed the CFAA to be used as a cudgel to stop independent security researchers from evaluating products, to limit competition between companies, and to forbid other normal internet behavior. Cease-and-desist letters to researchers citing possible CFAA violations have become an all-too-common tool for intimidation. It is impossible to know the exact number of cease-and-desist letters that chilled behavior without leading to litigation. In cases that were litigated, the CFAA claims in some of those letters rested on an argument that "authorize" in the CFAA was not meant technically, and that documents like Terms of Service could set the boundaries of "authorized access." Under this definition, no technical limits need to be placed on access. In other words, simply saying that information, or types of usage, are off limits counts as removing authorization under the CFAA.

This is what happened in the case of hiQ Labs v. LinkedIn. As part of its business, hiQ conducts research by gathering and analyzing data about how workers move from job to job from LinkedIn and other websites. LinkedIn sent a cease-and-desist letter in an attempt to stop hiQ from using automated tools to gather data from its site, a technique known as "scraping." LinkedIn claimed that the cease-and-desist letter itself constituted a revocation of authorization to access the data, and that hiQ's further scraping of LinkedIn after receipt of the letter was a violation of not only the CFAA but also the DMCA. In response, hiQ sued LinkedIn to challenge the claim that scraping public data constituted a CFAA violation. Ultimately the Ninth Circuit agreed with hiQ that exceeding access to a computer system only happens "when a person circumvents a computer's generally applicable rules regarding access permissions, such as username and password requirements, to gain access." The court added that, in the case of scraping, "when a computer network generally permits public access to its data, a user's accessing that publicly available data will not constitute access without authorization under the CFAA."

The Ninth Circuit's ruling in the hiQ Labs case held that exceeding access under the CFAA required the circumvention of technical barriers. That interpretation was at odds with an Eleventh Circuit ruling in United States v. Rodriguez, which found that "the plain language of the Act" meant "Rodriguez exceeded his authorized access and violated the Act when he obtained personal information for a nonbusiness reason." Van Buren v. United States sought to settle these diverging interpretations.

At issue in the Van Buren case was whether a former police officer caught taking money to run license plate numbers through a law enforcement database was correctly convicted of violating the CFAA. He had not broken any technical safeguards on the database, nor was he using stolen logins to get access to this information. Instead, he was misusing access to the database that he already had as a police officer, in violation of the use policy he agreed to as part of his job. The main question in the case was whether the violation of a use policy alone constitutes exceeding access to a computer system. Under such an interpretation of the CFAA, everything from browsing the web at work against employee policies to violating a website's terms of service by copying or using information from the site when it is forbidden could invite felony prosecution.

In the majority opinion, Justice Amy Coney Barrett wrote that "an individual 'exceeds authorized access' when he accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him." The opinion also advises courts to "take note of terms that carry 'technical meaning[s],'" and goes on to say that access "is one such term, long carrying a 'well established' meaning in the 'computational sense.'" The Court later introduced a "gates-up-or-down" approach to whether access has been exceeded: either someone is allowed to access data, or they are not. If there is no technical "gate," like password protection, that needs to be circumvented or broken to access data, the behavior is not a crime under the CFAA.

While clarity around "access" was welcomed by advocates and security researchers, the Court did not address concerns raised in amicus filings seeking clear protections for research under the CFAA. So while there is more certainty around simple rule violations, it is still unclear whether a researcher accessing data to test for security vulnerabilities in good faith, and without the intention to modify or destroy, is allowed. Researchers still face questions about the legal risk of conducting their work.

The Digital Millennium Copyright Act

The Digital Millennium Copyright Act (DMCA) was passed in 1998 as the United States sought to codify obligations agreed to under two World Intellectual Property Organization (WIPO) treaties. The treaties aimed to extend copyright protection to digital formats such as DVDs and MP3s, which could be more easily copied. But despite the DMCA's benefits for copyright holders, its rules both limit the freedom of people who purchase copyrighted materials and risk making certain types of security research a violation of copyright law.

Section 1201 of the DMCA prohibits the circumvention of "a technological measure that effectively controls access to a work." Also called a "technical protection measure," or TPM, these technologies can take many forms, for instance Digital Rights Management (DRM). DRM is a way for copyright holders to limit how a digital file can be used. For example, if you purchase an ebook but are prevented from copying it from your ebook reader to your phone, that is likely DRM at work. DRM can also be used to limit the way digitally enabled devices work, like a coffee maker built to work only with its maker's own pods. In addition to its anti-circumvention prohibitions, Section 1201 also prohibits distributing tools to break TPMs, which is commonly referred to as the anti-trafficking provision.

On its face, Section 1201's prohibition against breaking TPMs seems focused on protecting content from digital piracy (like copy protection on DVDs), but the DMCA covers other copyrighted digital works, including computer code. The statute's definition of what it means to "circumvent a technological measure" is quite broad, covering anyone who might "avoid, bypass, remove, deactivate, or impair a technological measure, without the authority of the copyright owner." This, by default, places most code (and products that use it) into an area of legal uncertainty for researchers.

Unlike the CFAA, the DMCA does recognize that there need to be reasonable exemptions to Section 1201's anti-circumvention prohibitions. Section 1201 contains several permanent exemption categories, including for encryption research and security testing. There are requirements to qualify for these exemptions, including a reliance on the CFAA's vague definition of exceeding authorized access. Researchers are also required to prove that any code they used in their research is not "primarily designed or produced for the purpose of circumventing a technological measure," which would break the anti-trafficking provision. In addition to the permanent exemptions, the law authorizes the Librarian of Congress (on the recommendation of the Register of Copyrights) to grant temporary exemptions to 1201's anti-circumvention ban. These three-year exemptions are decided in response to a public rulemaking conducted every three years by the Copyright Office. The most recent rules were released in late October 2021.

In contrast to the CFAA, this triennial process has allowed Section 1201's broad restrictions to be somewhat more adaptable to changing technology, as well as to the concerns of researchers. Over the last several rounds of this rulemaking process, advocates were able to gain very important, if temporary, tech-related exemptions, such as a 2015 exemption for the diagnosis and repair of "motorized land vehicles." The same rulemaking also included an exemption for "good-faith security research." This temporary "good-faith" exemption is a case study in how, despite the process's flexibility, there are gaps in the exemption process that put researchers at risk of violating the DMCA.

The original exemption covered a very limited set of devices (medical devices designed for implantation, land vehicles, and "machines primarily designed for use by individual consumers, including voting machines"), along with a list of other unclearly written limitations and requirements. Those ambiguities could cause good-faith researchers to hold back legitimate work for fear of falling outside the exemption's bounds.

Advocates succeeded in addressing some of these limitations in later rulemaking cycles, including removing limits on the list of covered devices and addressing ambiguity over the definition of the "controlled" environment required for conducting research. But researchers were not able to make a successful case for many other requested changes.

One such request was the removal of a requirement to "not violate any applicable law." Proponents of the change argued that "research can implicate numerous federal and state regulations, with legal uncertainty and uneven application in different jurisdictions," specifically citing the differing interpretations of the CFAA. Researchers feared that this provision pushed the DMCA "into other non-copyright legal regimes, exposing researchers to double liability." They also pointed out that "removal of this condition would not eliminate researchers' obligation to comply with other applicable laws." While the Register was unpersuaded, removing the "other laws" limitation was requested again as part of the next rulemaking. The 2021 recommendation acknowledged that such limits were "likely to impose an adverse effect on noninfringing security research," but noted that "this exemption does not serve as waiver of liability for violating other laws while performing good-faith security research."

This exemption's evolution through three cycles of rulemaking shows a process that is limiting and cumbersome, putting researchers at a disadvantage. By default, those seeking an exemption bear the burden of making their case. Through multiple rounds of the process, some of the issues identified in the exemption's original language were addressed. However, this means that the chilling effects identified in 2017 were prolonged into 2021. Additionally, these exemptions must be renewed, and possibly defended, every three years. Renewal is not guaranteed, meaning certain kinds of research could lose protection in 2024. Beyond the labor burden this creates for researchers needing to regularly participate in a year-long public process, the temporary exemption process has shown itself to be slow in responding to legal uncertainties that chill good-faith research. In the fast-paced world of tech, this seems untenable.

Policy Recommendations

We are in a vastly different technological landscape now than we were in 1998 when the DMCA was written, much less in 1986 when the CFAA was written. We need updated laws that reflect modern technological threats and how we combat them. Congress should take the following steps to ensure researchers are able to continue their vital work without fear of reprisal by government or companies. By protecting and encouraging research, Congress will expedite the spread of best practices around security reporting, making it possible for researchers to evaluate an ever-expanding catalog of internet-connected goods in the marketplace.

1. Congress must update the Digital Millennium Copyright Act to create a permanent exemption for legitimate security research.

Section 1201 of the Digital Millennium Copyright Act creates the existing process for requesting and granting temporary exemptions every three years. This section should be rewritten to create a research exemption that is clear, robust, and permanent. In particular, the anti-circumvention and anti-trafficking provisions need to specifically create permanent protections for researchers who are not seeking to further violate a copyright.

2. Congress must update the Computer Fraud and Abuse Act to create a permanent exemption for legitimate security research.

The Computer Fraud and Abuse Act should similarly be updated to include a clear and permanent exemption for those engaged in good-faith security research. In particular, the law should clarify its aim as a tool to be used only against those who appropriate, modify, or delete data.

3. Congress must update the Computer Fraud and Abuse Act to establish a clearer test for civil claims.

Under the current law, a company can sue someone based on an alleged CFAA violation even when that alleged violation is not criminally prosecuted, and without any required standard for what constitutes harm or any required intent to cause harm. The law should be updated in a way that preserves a right to civil action but clarifies the set of harms required to constitute a valid civil claim.

4. The Government should increase the speed at which it sets up vulnerability disclosure programs, and make them visible.

Many formal vulnerability disclosure programs have been implemented by government agencies, including the Department of Defense and the Cybersecurity and Infrastructure Security Agency (CISA), which has directed federal civilian agencies to establish their own. These processes create avenues for researchers to responsibly disclose any vulnerabilities they might find in public-facing government tech. As CISA noted in its directive: "Vulnerability disclosure policies enhance the resiliency of the government's online services by encouraging meaningful collaboration between federal agencies and the public." Expanding the use of such programs, and making them easy to find and understand, would serve as a model for how such programs could work at other levels of government, as well as at companies and other organizations.

5. The Government should incentivize more companies to implement their own responsible disclosure programs.

When companies create programs that allow researchers to conduct good-faith research and create avenues for them to disclose their findings, they are able to benefit from the talent and expertise of independent security researchers. These programs should include best practices, including those required by CISA of federal agencies, and include a commitment not to pursue legal action against individuals who are legitimate researchers seeking to discover vulnerabilities. Incentives could include government support for smaller players setting up disclosure programs, requiring vendors who sell the government "smart" tech to create their own disclosure programs, and more.

Conclusion

There have been several high-visibility, headline-grabbing hacks over the last two years, each followed by commentary that "this is a wake-up call." It seems that we are living in an age of cybersecurity wake-up calls.

The exponential growth of both cyber attacks and the number of connected devices illustrates two trends. There are already a lot of vulnerable pieces of tech out there, and with so much new tech it is likely that more vulnerabilities are introduced every day. Understanding the scope of current cyber threats will require vulnerability research on all manner of connected tech. Growth in the number of researchers in the field is currently held back by fears of civil liability and felony prosecution rooted in the Computer Fraud and Abuse Act of 1986 and the Digital Millennium Copyright Act鈥攁 pair of tech laws written in the 1980s and 90s.

Unambiguous protections for good-faith security research are needed. Such protections already have a strong track record under the DMCA, which provides for exemptions to be granted every three years. There have not been widespread problems arising from the three-year exemptions, suggesting that strengthening protections for research won鈥檛 create new problems.

Congress must take legislative action, updating federal law to expand and clarify protections for good-faith security research. The government can also play a key role in changing norms in areas like vulnerability disclosure and better testing standards.

Researchers who take the time to look for and responsibly disclose vulnerabilities should be able to do their work without fear of legal reprisal. At a minimum, this work should not be slowed down by the current fears rooted in outdated legislation. It鈥檚 time for these laws to catch up鈥攆aced with such a huge threat, expanding protections and incentives that will encourage good-faith security research is a vital step.

About the Authors

Nat Meysenburg

Technologist, Open Technology Institute
