EFF's Deeplinks Blog: Noteworthy news from around the internet
EFF to Japan: Reject Website Blocking
Fri, 13 Jul 2018 20:44:54 +0000

Website blocking to deal with alleged copyright infringement is like cutting off your hand to deal with a papercut. Sure, you don’t have a papercut anymore, but you’ve also lost a lot more than you’ve gained. The latest country to consider a website blocking proposal is Japan, and EFF has responded to the call for comment by sharing all the reasons that cutting off websites is a terrible solution for copyright violations.

In response to infringement of copyrighted material, specifically citing a concern for manga, the government of Japan began work on a proposal that would make certain websites inaccessible in Japan. We’ve seen proposals like this before, most recently in the European Union’s Article 13.

In response to Japan’s proposal, EFF explained that website blocking is not effective at the stated goal of protecting artists and their work. First, it can be easily circumvented. Second, it ends up capturing a lot of lawful expression. Blocking an entire website does not distinguish between legal and illegal content, punishing both equally. Blocking and filtering by governments has frequently been found to violate national and international principles of free expression [pdf].
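The circumvention point is easy to make concrete: national blocking orders are typically implemented at the ISP's DNS resolver, and a user who switches to any other resolver never sees the block. The toy simulation below illustrates this; all hostnames and addresses are hypothetical placeholders, and the "resolvers" are stand-in functions, not a real DNS client.

```python
# Toy simulation of resolver-level website blocking (illustrative only;
# all hostnames and addresses are hypothetical placeholders).
BLOCKLIST = {"blocked-site.example"}

def isp_resolver(hostname):
    """An ISP resolver that refuses to answer for blocked sites."""
    if hostname in BLOCKLIST:
        return None  # simulated NXDOMAIN: this is the "block"
    return "203.0.113.10"  # placeholder address (TEST-NET-3 range)

def open_resolver(hostname):
    """A third-party resolver outside the ISP's control."""
    return "203.0.113.10"

# The block only holds while the user keeps the ISP's default resolver:
assert isp_resolver("blocked-site.example") is None
# Switching resolvers, a one-line OS setting, bypasses it entirely:
assert open_resolver("blocked-site.example") == "203.0.113.10"
```

The blocked site never moves or changes; only the user's choice of resolver does, which is why this kind of order fails at its stated goal while still sweeping in lawful users.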

EFF also shared research that leading Internet engineers conducted in response to a proposed U.S. law that would have enabled website blocking. They concluded that website blocking would lead to network errors and security problems.

According to numerous studies, the best answer to the problem of online infringement is providing easy, lawful alternatives. Doing this also has the benefit of not penalizing legitimate expression the way blocking does.

Quite simply, website blocking doesn’t work, violates the right to free expression, and breaks the Internet. Japan shouldn’t go down this path; it should look to proven alternatives instead.

It’s already much too difficult to invalidate bad patents—the kind that never should have been issued in the first place. Now, unfortunately, the Patent Office has proposed regulation changes that will make it even harder. That’s the wrong path to take. This week, EFF submitted comments [PDF] opposing the Patent Office’s proposal.

Congress created some new kinds of Patent Office proceedings as part of the America Invents Act (AIA) of 2011. That was done with the goal of improving patent quality by giving third parties the opportunity to challenge patents at the Patent Trial and Appeal Board, or PTAB. EFF used one of these proceedings, known as inter partes review, to successfully challenge a patent that had been used to sue podcasters.

Congress didn’t explicitly say how these judges should interpret patent claims in AIA proceedings. But the Patent Office, until recently, read the statute as EFF still does: it requires the office to interpret patent claims in PTAB challenges the same way it does in all other proceedings. That approach requires giving the words of a patent claim their broadest reasonable interpretation (BRI). That’s different than the approach used in federal courts, which apply a standard that can produce a claim of narrower scope.

Using the BRI approach in AIA proceedings makes sense. Critically, it ensures the Patent Office reviews a wide pool of prior art (publications and products that pre-date the patent application). If the patent owner thinks this pool is too broad, it can amend claims to narrow their scope and avoid invalidating prior art. Requiring patent owners to amend their claims to avoid invalidating prior art encourages innovation and deters baseless litigation by giving the public clearer notice about what the patent does and does not claim.

But you don’t have to take our word for it. Barely two years ago, the Patent Office made the same argument to the Supreme Court to justify the agency’s use of the BRI approach in AIA proceedings. The Supreme Court agreed. In Cuozzo v. Lee [PDF], the court upheld the agency’s approach based on the text and structure of the AIA, a century of agency practice, and considerations of fairness and efficiency.

After successfully convincing the Supreme Court that the BRI standard should apply in AIA proceedings, why has the Patent Office changed its mind? Unfortunately, the Patent Office’s notice says little to explain its sudden change of course. Nor does it offer any reasons why this change would improve patent quality or the efficiency of patent litigation. Apparently, the Patent Office assumes that minimizing differences between two deliberately different types of proceedings will be more efficient. That assumption is flawed. The PTAB’s interpretation of claim language will only be relevant to a district court if similar terms are in dispute. If not, the change will only ensure that more lawsuits based on bad patents clog up the courts.

The timing of the Patent Office’s proposal may hint at its impetus. When the agency adopted and argued for the BRI standard, the Director was Michelle Lee. On February 8, 2018, Andrei Iancu became Director. Three months later, on May 9, the Patent Office proposed abandoning the BRI standard. In his keynote speech, Director Iancu referenced unfounded criticisms of AIA proceedings, from "some" who, "pointing to the high invalidation rates . . . hate the new system with vigor, arguing that it’s an unfair process that tilts too much in favor of the petitioner." The Patent Office’s sudden change of view on this topic may be a capitulation to these unfounded criticisms and a sign of further policy changes to come.

We hope the Patent Office will reconsider its proposal, after considering our comments, as well as those submitted by the R Street Institute and CCIA, a technology trade group. Administrative judges must remain empowered to weed out those patents that should never have issued in the first place.



Employees at Google, Microsoft, and Amazon have raised public concerns about those companies assisting U.S. military, law enforcement, and the Immigration and Customs Enforcement Agency (ICE) in deploying various kinds of surveillance technologies.

These public calls from employees raise important questions: What steps should a company take to ensure that government entities that purchase or license its technologies don’t misuse them? When should a company refuse to sell to a governmental entity?

Tech companies must step up and ensure that they aren’t assisting governments in committing human rights abuses.

While the specific context of U.S. law enforcement using new surveillance technologies is more recent, the underlying questions aren’t. In 2011, EFF proposed a basic Know Your Customer framework for these questions. The context then was foreign repressive governments’ use of the technology from U.S. and European companies to facilitate human rights abuses. EFF’s framework was cited favorably by the United Nations in its implementation guide for technology companies for its own Guiding Principles on Business and Human Rights.

Now, those same basic ideas about investigation, auditing, and accountability can be, and should be, deployed domestically.

Put simply, tech companies, especially those selling surveillance equipment, must step up and ensure that they aren’t assisting governments in committing human rights, civil rights and civil liberties abuses. This obligation applies whether those governments are foreign or domestic, federal or local.

One way tech companies can navigate this difficult issue is by adopting a robust Know Your Customer program, modeled on requirements that companies already have to follow in the export control and anti-bribery context. Below, we outline our proposal for sales to foreign governments from 2011, with a few updates to reflect shifting from an international to domestic focus. Employees at companies that sell to government agencies, especially agencies with a record as troubling as ICE, may want to advocate for this as a process to protect against future corporate complicity.

We propose a simple framework:

  1. Companies selling surveillance technologies to governments need to affirmatively investigate and "know your customer" before and during a sale. We suggest customer investigations similar to what many of these companies are already required to do under the Foreign Corrupt Practices Act and the export regulations for their foreign customers.
  2. Companies need to refrain from participating in transactions where their "know your customer" investigations reveal either objective evidence or credible concerns that the technologies provided by the company will be used to facilitate governmental human or civil rights or civil liberties violations.

This framework can be implemented voluntarily, and should include independent review and auditors, employee participation, and public reporting. A voluntary approach can be more flexible as technologies change and situations around the world shift. Nokia Siemens Networks has already adopted a Human Rights Policy that incorporates some of these guidelines. In a more recent example, Google's AI principles contain many of these steps along with guidance about how they should be applied.

If companies don’t act on their own, however, and don’t act with convincing transparency and commitment, then a legal approach may be necessary. Microsoft has already indicated not only that it would be open to a legal (rather than voluntary) approach, but that such an approach is necessary. For technology companies to be truly accountable, a legal approach can and should include extending liability to companies that knowingly and actively facilitate governmental abuses, such as through aiding and abetting liability. EFF has long advocated for corporate liability for aiding governmental surveillance, including in the Doe v. Cisco case internationally and in our Hepting v. AT&T case domestically.

Elaborating on the basic framework above, here are some guidelines:

[Note: These guidelines use key terms—Technologies, Transaction, Company, and Government—that are defined at the bottom and capitalized throughout.]

Affirmatively Investigate: The Company must have a process, led by a specifically designated person, to engage in an ongoing evaluation of whether Technologies or Transactions will be, or are being, used to aid, facilitate, or cover up human rights, civil rights, and civil liberties abuses ("governmental abuses"). 

This process needs to be more than lip service and needs to be verifiable (and verified) by independent outsiders. It should also include concerned employees, who deserve to have a voice in ensuring that the tools they develop are not misused by governments. This must be an organizational commitment, with effective enforcement mechanisms in place. It must include tools, training, and education of personnel, plus career consequences when the process is not followed. In addition, in order to build transparency and solidarity, a Company that decides to refuse (or continue) further service on the basis of these standards should, where possible, report that decision publicly so that the public understands the decisions and other companies can have the benefit of their evaluation.

The investigation process should include, at a minimum:

  1. Review what the purchasing Government and Government agents, and Company personnel and agents, are saying about the use of the Technologies, both before and during any Transaction. This includes, among other things, review of sales and marketing materials, technical discussions and questions, presentations, technical and contractual specifications, and technical support conversations or requests. For machine learning or AI applications, it must include review of training data and mechanisms to identify what questions the technology will be asked to answer or learn about. Examples include:
    1. Evidence in the Doe v. Cisco case, arising from Cisco’s participation with the Chinese government in building surveillance tools aimed at identifying Falun Gong, includes presentations made by Cisco employees that brag about how their technology can help the Chinese government combat the "Falun Gong Evil Religion."
    2. In 2016, the ACLU of Northern California published a report outlining how Geofeedia advertised that its location-based, social media surveillance system could be used by government offices and the police to monitor the protest activities of activists, including specifically activists of color, raising core First Amendment concerns.
  2. Review the Technology’s capacity to facilitate human rights abuses and consider possible mitigation measures, both technical and contractual.
    1. For instance, the fact that facial recognition software misidentifies people of color at a much higher rate than white people is a clear signal that the Technology is highly vulnerable to governmental abuses.
    2. Note that we do not believe that Companies should be held responsible merely for selling general purpose or even dual-use products to the government that are later misused, as long as the Company conducted a sufficient investigation that did not reveal governmental abuse as a serious risk.  
  3. Review the Government’s laws, regulations, and practices regarding surveillance, including approval of purchase of surveillance equipment, laws concerning interception of communications, access to stored communications, due process requirements, and other relevant legal process. For sellers of machine learning and artificial intelligence tools, a key factor should be whether the tool can be subject to true due process requirements, that is, whether a person impacted by a system's decision can have sufficient access to determine how an adverse decision was made.
    1. For instance, Nokia Siemens says that it will only provide core lawful intercept (i.e. surveillance) capabilities that are legally required and are "based on clear standards and a transparent foundation in law and practice." 
    2. In some instances, as with AI, this review may include interpreting and applying legal and ethics principles, rather than simply waiting for "generally accepted" ones to emerge, since law enforcement often implements technologies before those rules are clear. EFF and a broad international coalition have already interpreted key international legal doctrines on mass surveillance in the Necessary and Proportionate Principles.
  4. For domestic uses, this review must include an evaluation of whether sufficient local control is in place. EFF and the ACLU have worked to ensure this with a set of proposals called Community Control Over Police Surveillance (CCOPS). If local control and protections are not yet in place, the company should decline to provide the technology until they are, especially in locations where the population is already at risk from surveillance.
  5. Review credible reports about the Government and its human rights record, including news or other reports from nongovernmental sources or local sources that indicate whether the Government engages in the use or misuse of surveillance capabilities to conduct human rights abuses. 
    1. Internationally, this can include U.S. State Department reports as well as other governmental and U.N. reports, as well as those by well-respected NGOs and journalists.
    2. Domestically, this can include all of the above, plus Department of Justice reports about police departments, like the ones issued about Ferguson, MO, and San Francisco, CA.
    3. For both, this review can and should also include nongovernmental and journalistic sources.

Refrain from Participation: The Company must not participate in, or continue to participate in, a Transaction or provide a Technology if it appears reasonably foreseeable that the Transaction or Technology will directly or indirectly facilitate governmental abuses. This includes cases in which:

  1. The portion of the Transaction that the Company is involved in or the specific Technology provided includes building, customizing, configuring, or integrating into a system that is known or is reasonably foreseen to be used for governmental abuses, whether done by the Company or by others.
  2. The portion of the Government that is engaging in the Transaction or overseeing the Technologies has been recognized as committing governmental abuses using or relying on similar Technologies.
  3. The Government's overall record on human rights generally raises credible concerns that the Technology or Transaction will be used to facilitate governmental abuses.
  4. The Government refuses to incorporate contractual terms confirming the intended use or uses of the Technology, confirming local control similar to the CCOPS Proposals, or allowing the auditing of their use by the Government purchasers in sales of surveillance Technologies.
  5. The investigation reveals that the technology is not capable of operating in a way that protects against abuses, such as when due process cannot be guaranteed in AI/ML decision-making, or bias in training data or facial recognition outcome is endemic or cannot be corrected.

Key Definitions and the Scope of the Process: Who should undertake these steps? The field is actually pretty small: Companies engaging in Transactions to sell or lease surveillance Technologies to Governments, defined as follows:

  1. "Governmental Abuses" includes governmental violations of international human rights law, international humanitarian law, domestic civil rights violations, domestic civil liberties violations and other legal violations that involve governments doing harm to people. As noted above, in some instances involving new or evolving technology or uses of technology, this may include interpreting and applying those principles and laws, rather than simply waiting for legal interpretations to emerge.
  2. "Transaction" includes all sales, leases, rental or other types of arrangements where a Company, in exchange for any form of payment or other consideration, either provides or assists in providing Technologies, personnel or non-technological support to a Government. This also includes providing of any ongoing support to Governments such as software or hardware upgrades, consulting or similar services.
  3. "Technologies" include all systems, technologies, consulting services, and software that, through marketing, customization, government contracting processes, or otherwise are known to the company to be used or be reasonably likely to be used to surveil third parties. This includes technologies that intercept communications, packet-sniffing software, deep packet inspection technologies, facial recognition systems, artificial intelligence and machine learning systems aimed at facilitating surveillance, certain biometrics devices and systems, voting systems, and smart meters. 
    1. Note that EFF does not believe that general purpose technologies should be included in this, unless the Company has a clear reason to believe that they will be used for surveillance.
    2. Surveillance technologies like facial recognition systems are generally not sold to Governments off the shelf. Technology providers are almost inevitably involved in training, supporting, and developing these tools for specific governmental end users, like a specific law enforcement agency.
  4. "Company" includes subsidiaries, joint ventures (especially joint ventures directly with government entities), and other corporate structures where the Company has significant holdings or has operational control.
  5. "Government" includes all segments of government: local law enforcement, state law enforcement, and federal and even military agencies. It includes formal, recognized governments, including member states of the United Nations.
    1. It also includes governing or government-like entities, such as the Chinese Communist Party or the Taliban and other nongovernmental entities that effectively exercise governing powers over a country or a portion of a country.
    2. For these purposes "Government" includes indirect sales through a broker, reseller, systems integrator, contractor, or other intermediary or multiple intermediaries if the Company is aware or should know that the final recipient of the Technology is a Government.

If tech companies want to be part of making the world better, they must commit to making business decisions that consider potential governmental abuses. 

This framework is similar to the one in the current U.S. export controls and also to the steps required of Companies under the Foreign Corrupt Practices Act. It is based on the recognition that companies involved in domestic government contracting, especially for the kinds of expensive, service-heavy surveillance systems provided by technology companies, are already participating in a highly regulated process with many requirements. For larger federal contractors, these include providing complex cost or pricing data, doing immigration checks, and conducting drug testing. Asking these companies to ensure that they are not facilitating governmental abuses is not a heavy additional lift.

Regardless of how tech companies get there, if they want to be part of making the world better, not worse, they must commit to making business decisions that consider potential governmental abuses.  No reasonable company wants to be known as the company that knowingly helps facilitate governmental abuses. Technology workers are making it clear that they don’t want to work for those companies either. While the blog posts and public statements from a few of the tech giants are a good start, it’s time all tech companies take real, enforceable steps to ensure that they aren’t serving as "abuse’s little helpers."

On Tuesday, we wrote a report about how the Irvine Company, a private real estate development company, has collected automated license plate reader (ALPR) data from patrons of several of its shopping centers, and is providing the collected data to Vigilant Solutions, a contractor notorious for its contracts with state and federal law enforcement agencies across the country. 

The Irvine Company initially declined to respond to EFF’s questions, but after we published our report, the company told the media that it only collects information at three malls in Orange County (Irvine Spectrum Center, Fashion Island, and The Marketplace) and that Vigilant Solutions only provides the data to three local police departments (the Irvine, Newport Beach, and Tustin police departments). 

The next day, Vigilant Solutions issued a press release claiming that the Irvine Company ALPR data actually had more restricted access (in particular, denying transfers to the U.S. Immigration & Customs Enforcement [ICE] agency), and demanding EFF retract the report and apologize. As we explain below, the EFF report is a fair read of the published ALPR policies of both the Irvine Company and Vigilant Solutions. Those policies continue to permit broad uses of the ALPR data, far beyond the limits that Vigilant now claims exist.   

Vigilant Solutions’ press release states that the Irvine Company’s ALPR data "is shared with select law enforcement agencies to ensure the security of mall patrons," and that those agencies "do not have the ability in Vigilant Solutions' system to electronically copy this data or share this data with other persons or agencies, such as ICE."  

However, neither Vigilant Solutions nor the Irvine Company have updated their published ALPR policies to reflect these restrictions.  Pursuant to California Civil Code § 1798.90.51(d), an ALPR operator "shall" implement and publish a usage and privacy policy that includes the "restrictions on, the sale, sharing, or transfer of ALPR information to other persons."

This is important because the published policies are extremely broad. To begin with, the Irvine Company policy explains that "[t]he automatic license plate readers used by Irvine or its contractors are programmed to transmit the ALPR Information to" "a searchable database of information from multiple sources ('ALPR System') operated by Vigilant Solutions, LLC" "upon collection." 

Moreover, the Irvine Company policy still says that Vigilant Solutions "may access and use the ALPR System for any of the following purposes: (i) to provide ALPR Information to law enforcement agencies (e.g., for identifying stolen vehicles, locating suspected criminals or witnesses, etc.); or (ii) to cooperate with law enforcement agencies, government requests, subpoenas, court orders or legal process."

Under this policy, the use of ALPR data is not limited only to uses that "ensure the security of mall patrons," nor even to any particular set of law enforcement agencies, select or otherwise. The policy doesn’t even require legal process; instead it allows access where the "government requests." 

Likewise, Vigilant Solutions’ policy states that the "authorized uses of the ALPR system" include the very broad category of "law enforcement agencies for law enforcement purposes," and, unlike the policy claimed in its press release, does not state any restriction on access by any particular law enforcement agency or for any particular law enforcement purpose. ICE is a law enforcement agency. 

We appreciate that Vigilant Solutions is now saying that the information collected from customers of the Irvine Spectrum Center, Fashion Island, and The Marketplace will never be transferred to ICE and will only be used to ensure the security of mall patrons. But if they want to put that issue to rest, they should, at a minimum, update their published ALPR policies.

Better yet, given the inherent risks with maintaining databases of sensitive information, Irvine and Vigilant Solutions should stop collecting information about mall patrons and destroy all the collected information. As a mass-surveillance technology, ALPR can be used to gather information on sensitive populations, such as immigrant drivers, and may be misused. Further, once collected, ALPR may be accessible by other government entities—including ICE—through various legal processes. 

In addition, Vigilant Solutions’ press release takes issue with EFF’s statement that "Vigilant Solutions shares data with as many as 1,000 law enforcement agencies nationwide."  According to Vigilant Solutions press release, "Vigilant Solutions does not share any law enforcement data. The assertion is simply untrue. Law enforcement agencies own their own ALPR data and if they choose to share it with other jurisdictions, the[y] can elect to do so."   

This is a distinction without a difference.

As Vigilant Solutions’ policy section on "Sale, Sharing or Transfer of LPR Data" (emphasis added) states, "the company licenses our commercially collected LPR data to customers," "shares the results of specific queries for use by its customers" and "allows law enforcement agencies to query the system directly for law enforcement purposes." The only restriction is that, for information collected by law enforcement agencies, "we facilitate sharing that data only with other LEAs … if sharing is consistent with the policy of the agency which collected the data."  If Vigilant Solutions only meant to dispute "sharing" with respect to information collected by law enforcement, this is a non-sequitur, as the Irvine Company is not a law enforcement agency.
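The gap in Vigilant's denial can be made explicit by encoding the published policy's only stated sharing restriction as a predicate. This is a hypothetical sketch of our reading of the policy (the actual policy is prose, not code):

```python
# Sketch of the published sharing rules as we read them
# (a hypothetical encoding, not Vigilant's actual system logic).

def sharing_restricted(data_source, collecting_agency_allows=True):
    """Return True if the published policy restricts sharing this data.

    The only stated restriction applies to data collected by law
    enforcement agencies: it may be shared with other LEAs only if the
    collecting agency's policy permits. Commercially collected LPR
    data is licensed to customers with no comparable stated limit.
    """
    if data_source == "law_enforcement":
        return not collecting_agency_allows
    return False  # commercial data: no stated sharing restriction

# The Irvine Company is not a law enforcement agency, so the LEA-only
# restriction simply never applies to its data under the written policy:
assert sharing_restricted("commercial") is False
assert sharing_restricted("law_enforcement", collecting_agency_allows=False) is True
```

On this reading, a denial framed around "law enforcement data" says nothing about what can happen to commercially collected data like the Irvine Company's, which is the point of the paragraph above.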

Nevertheless, Vigilant Solutions’ dispute over whether it truly "shares" information puts an Irvine Company letter published yesterday in an interesting light. The Irvine Company reportedly wrote to Vigilant Solutions to confirm that "Vigilant has not shared any LPR Data generated by Irvine with any person or agency other than the Irvine, Newport Beach and Tustin police departments and, more specifically you have not shared any such data with U.S. Immigration and Customs Enforcement (ICE)." 

Under the cramped "sharing" definition in the Vigilant Solutions press release, any such "confirmation" would not prevent Vigilant from licensing the Irvine data, sharing results of specific queries, allowing law enforcement to query the system directly, or "facilitate sharing" with ICE if the police department policies allowed it. If Irvine and Vigilant didn’t mean to allow this ambiguity, they should be more clear and transparent about the actual policies and restrictions. 

The rest of the press release doesn’t really need much of a response, but we must take issue with one further claim. Vigilant Solutions complains that, while EFF reached out several times to the Irvine Company (with no substantive response), EFF did not reach out to them directly about the story. This assertion is both misleading and ironic. 

A year ago, EFF sent a letter to Vigilant Solutions with 31 questions about its policies and practices. To date, Vigilant Solutions has not responded to a single question. In addition, Vigilant Solutions had already told the press, "as policy, Vigilant Solutions is not at liberty to discuss or share any contractual details. This is a standard agreement between our company, our partners, and our clients." 

Indeed, Vigilant Solutions has quite a history of fighting EFF’s effort to shine a light on ALPR practices, issuing an open letter to police agencies taking EFF to task for using Freedom of Information Act and Public Records Act requests to uncover information on how public agencies collect and share data. A common Vigilant Solutions contract has provisions where the law enforcement agency "also agrees not to voluntarily provide ANY information, including interviews, related to Vigilant, its products or its services to any member of the media without the express written consent of Vigilant." 

Vigilant Solutions has built its business on gathering sensitive information on the private activities of civilians, packaging it, and making it easily available to law enforcement. It’s deeply ironic that Vigilant gets so upset when someone wants to take a closer look at its own practices.

The hope that filled Egypt's Internet after the 2011 January 25 uprising has long since faded away. In recent years, the country's military government has instead created a digital dystopia, pushing once-thriving political and journalism communities into closed spaces or offline, blocking dozens of websites, and arresting a large number of activists who once relied upon digital media for their work.

In the past two years, we’ve witnessed the targeting of digital rights defenders, journalists, crusaders against sexual harassment, and even poets, often on trumped-up grounds of association with a terrorist organization or "spreading false news." Now, the government has put forward a new law that will result in its ability to target and persecute just about anyone who uses digital technology.

The new 45-article cybercrime law, named the Anti-Cyber and Information Technology Crimes law, is divided into two parts. The first part of the bill stipulates that service providers are obligated to retain user information (i.e. tracking data) in the event of a crime, whereas the second part of the bill covers a variety of cybercrimes under overly broad language (such as "threat to national security").

Article 7 of the law, in particular, grants the state the authority to shut down Egyptian or foreign-based websites that "incite against the Egyptian state" or "threaten national security" through the use of any digital content, media, or advertising. Article 2 of the law authorizes broad surveillance capabilities, requiring telecommunications companies to retain and store users’ data for 180 days. And Article 4 explicitly enables foreign governments to obtain access to information on Egyptian citizens and does not make mention of requirements that the requesting country have substantive data protection laws.

The implications of these articles are described in detail in a piece written by the Association for Freedom of Thought and Expression (AFTE) and Access Now. In the piece, the organizations state "These laws serve to close space for civil society and deprive citizens of their rights, especially the right to freedom of expression and of association" and call for the immediate withdrawal of the law.

We agree—the law must be withdrawn. The bill’s underlying goal appears to be to set up legal frameworks to block undesirable websites, intimidate social media users, and solidify state control over websites. By expanding the government’s power to block websites, target individuals for their speech, and surveil citizens, the Egyptian parliament is helping the already-authoritarian executive branch inch ever closer toward its goal of repressing anyone who dares speak their mind. The overly broad language contained throughout the law will lead to the persecution of individuals who engage in online speech and create an atmosphere of self-censorship, as others shy away from using language that may be perceived as threatening to the government.

The Egyptian law comes at a time of increased repression throughout the Middle East. In the wake of the 2011 uprisings, a number of countries in the region began to crack down on online speech, implementing cybercrime-related laws that utilize broad language to ensure that anyone who steps out of line can be punished.

In a 2015 piece for the Committee to Protect Journalists, Courtney Radsch wrote: "Cybercrime legislation, publicly justified as a means of preventing terrorism and protecting children, is a growing concern for journalists because the laws are also used to restrict legitimate speech, especially when it is critical or embarrassing to authorities."

A June 2018 report from the Gulf Center for Human Rights maps both legal frameworks and violations of freedom of expression in the six Gulf states, as well as Jordan, Syria, and Lebanon, noting that "The general trend for prosecution was that digital rights and freedoms were penalised and ruled as 'cybercrime' cases delegated to general courts. Verdicts in these cases have been either based on an existing penal code where cybercrime laws are absent, in the process of being drafted, or under the penal code and a cybercrime law."

These are difficult times for free expression in the region. EFF continues to monitor the development of cybercrime and other relevant laws and offers our support to the many organizations in the region fighting back against these draconian laws.

When government agencies refuse to let the members of the public watch what they’re doing, drones can be a crucial journalistic tool. But now, some members of Congress want to give the federal government the power to destroy private drones it deems to be an undefined "threat." Even worse, they’re trying to slip this new, expanded power into unrelated, must-pass legislation without a full public hearing. Worst of all, the power to shoot these drones down will be given to agencies notorious for their absence of transparency, denying access to journalists, and lack of oversight.

Back in June, the Senate Homeland Security and Governmental Affairs Committee held a hearing on the Preventing Emerging Threats Act of 2018 (S. 2836), which would give the Department of Homeland Security and the Department of Justice sweeping new authority to counter privately owned drones. Congress shouldn’t grant DHS and DOJ such broad, vague authorities that allow them to sidestep current surveillance law.

Now, Chairman Ron Johnson is working to include language similar to this bill in the National Defense Authorization Act (NDAA). EFF opposes this idea for many reasons.

The NDAA is a complex annual bill to reauthorize military programs and is wholly unrelated to both DHS and DOJ. Hiding language in unrelated bills is rarely a good way to make public policy, especially when the whole Congress hasn’t had a chance to vet the policy.

But most importantly, expanding the agencies’ authorities without requiring that they follow the Wiretap Act, Electronic Communications Privacy Act, and the Computer Fraud and Abuse Act raises large First and Fourth Amendment concerns that must be addressed.

Drones are a powerful tool for journalism and transparency. Today, the Department of Homeland Security routinely denies reporters access to detention centers along the southwest border. On the rare occasions DHS does allow entry, the visitors are not permitted to take photos or record video. Without other ways to report on these activities, drones have provided crucial documentation of the facilities being constructed to hold children. Congress should think twice before granting the DHS the authority to shoot drones down, especially without appropriate oversight and limitations.

If S. 2836 is rolled into the NDAA, it would give DHS the ability to "track," "disrupt," "control," "seize or otherwise confiscate" any drone that the government deems to be a "threat," without a warrant or due process. DHS and DOJ might interpret this vague and overbroad language to include the power to stop journalists from using drones to document government malfeasance at these controversial children’s detention facilities.

As we said before, the government may have legitimate reasons for engaging drones that pose an actual, imminent, and narrowly defined "threat." Currently, the Department of Defense already has the authority to take down drones, but only in much more narrowly circumscribed areas directly related to enumerated defense missions. DHS and DOJ have not made it clear why existing exigent circumstance authorities aren’t enough. But even if Congress agrees that DHS and DOJ need expanded authority, that authority must be carefully balanced so as not to curb people’s right to use drones for journalism, free expression, and other non-criminal purposes.

EFF has been concerned about government misuse of drones for a long time. But drones also represent an important tool for journalism and activism in the face of a less-than-transparent government. We can’t hand the unchecked power to destroy drones to agencies not known for self-restraint, and we certainly can’t let Congress give them that power through an opaque, backroom process.

text All Hands on Deck: Join EFF
Tue, 10 Jul 2018 18:14:53 +0000

It’s easy to feel adrift these days. The rising tide of social unrest and political extremism can be overwhelming, but on EFF’s 28th birthday our purpose has never been more clear. With the strength of our numbers, we can fight against the scourge of pervasive surveillance, government and corporate overreach, and laws that stifle creativity and speech. That's why today we're launching the Shipshape Security membership drive with a goal of 1,500 new and renewing members. For two weeks only, you can join EFF for as little as $20 and get special member swag that will remind you to keep your digital cabin shipshape.

Join Today

Online Freedom Begins with You!

Digital security anchors your ability to express yourself, challenge ideas, and have candid conversations. It’s why EFF members fight for uncompromised online tools and ecosystems: the world can no longer resist tyranny without them. We also know that our impact is amplified when we approach security together and support one another. The future of online privacy and free expression depends on our actions today.

EFF's new logo member t-shirt

If you know people who care about online freedom, the Shipshape Security drive is a great time to encourage them to join EFF. On the occasion of our birthday, EFF has also released a new member t-shirt for this year featuring our fresh-from-the-oven logo. Members support EFF's work educating policymakers and the public with crucial analysis of the law, developing educational resources like Surveillance Self-Defense, building software tools like Privacy Badger, empowering you with a robust action center, and doing incisive work in the courts to protect the public interest.

Before the rise of the Internet, a crew of pioneers established EFF to help the world navigate the great promise and dangerous possibilities of digital communications. Today, precisely 28 years later, EFF is the flagship nonprofit leading a tenacious movement to protect online rights. Support from the public makes it possible, and EFF refuses to back down.

Come hell or high water, EFF is fighting for your rights online. Lend your support and join us today.

The Trump Administration’s "zero tolerance" program of criminally prosecuting all undocumented adult immigrants who cross the U.S.-Mexico border has had the disastrous result of separating as many as 3,000 children—many no older than toddlers—from their parents and family members. The federal government doesn’t appear to have kept track of where each family member has ended up. Now politicians, agency officials, and private companies argue DNA collection is the way to bring these families back together. DNA is not the answer.

Two main DNA-related proposals appear to be on the table. First, in response to requests from U.S. Representative Jackie Speier, two private commercial DNA-collection companies proposed donating DNA sampling kits to verify familial relationships between children and their parents. Second, the federal Department of Health and Human Services has said it is either planning to or has already started collecting DNA from immigrants, also to verify kinship.

Both of these proposals threaten not just the privacy, security, and liberty of undocumented immigrants swept up in Trump’s Zero Tolerance program but also the privacy, security, and liberty of everyone related to them.

Jennifer Falcon, communications director at RAICES, an organization that provides free and low-cost legal services to immigrant children, families, and refugees in Texas, succinctly summarized the problem:

These are already vulnerable communities, and this would potentially put their information at risk with the very people detaining them. They’re looking to solve one violation of civil rights with something that could cause another violation of civil rights.

Why is this a problem?

DNA reveals an extraordinary amount of private information about us. Our DNA contains our entire genetic makeup. It can reveal where our ancestors came from, who we are related to, our physical characteristics, and whether we are likely to get various genetically-determined diseases. Researchers have also theorized DNA may predict race, intelligence, criminality, sexual orientation, and even political ideology.

DNA collected from one person can be used to track down and implicate family members, even if those family members have never willingly donated their own DNA to a database. In 2012, researchers used genetic genealogy databases and publicly-available information to identify nearly 50 people from just three original anonymized samples. The police have used familial DNA searching to tie family members to unsolved crimes.

Once the federal government collects a DNA sample—no matter which agency does the collecting—the sample is sent to the FBI for storage, and the extracted profile is incorporated into the FBI’s massive CODIS database, which already contains over 13 million "offender" profiles ("detainees" are classified as "offenders"). It is next to impossible to get DNA expunged from the database, and once it’s in CODIS it is subject to repeated warrantless searches from all levels of state and federal law enforcement. Those searches have implicated people for crimes they didn’t commit.

Unanswered Questions

Both of the proposals to use DNA to verify kinship between separated family members raise many unanswered questions. Here are a few we should be asking:

Who is actually collecting the DNA samples from parents and children?
Is it the federal government? If so, which agency? If it’s a private entity, which entity?

What legal authority do they have to collect DNA samples?
DHS still doesn’t appear to have legal authority to collect DNA samples from anyone younger than 14. Children younger than 14 should not be deemed to have consented to DNA collection. And under these circumstances, parents cannot consent to the collection of DNA from their children because the federal government has admitted it has already lost track of which children are related to which adults.

How are they collecting and processing the DNA?
Are they collecting a sample via a swab of the cheek? Is collection coerced or is it with the consent and assistance of the undocumented person? Once the sample is collected, how is it processed? Is it processed in a certified lab? Is it processed using a Rapid DNA machine? How is chain of custody tracked, and how is the collecting entity ensuring samples aren’t getting mixed up?

What happens to the DNA samples after they are collected, and who has access to them?
Are samples deleted after a match is found? If not, and if they are collected by a private genetics or genetic genealogy company like 23andMe or MyHeritage, do these companies get to hold onto the samples and add them to their databanks? Are there any limits on who can access them and for what purpose? If the federal government collects the samples, where is it storing them and who has access to them?

Will the DNA profiles extracted from the samples end up in FBI’s vast CODIS criminal DNA database?
Currently DHS does not have its own DNA database. Any DNA it collects goes to the FBI, where it may be searched by any criminal agency in the country.

Will the collected DNA be shared with foreign governments?
The U.S. government shares biometric data with its foreign partners. Will it share immigrant DNA? Will this be used to target immigrants if or when they are sent back home?

What if the separated family members aren’t genetically related or don’t represent a parent-child relationship?
How is the U.S. government planning to determine who is a "family member" once agencies have lost track of the families who traveled here together? What if the parent is a step-parent or legal guardian? What if the child is adopted? What if the adult traveling with the child is a more distant relative? Will they still be allowed to be reunited with their children?

Undocumented families shouldn’t have to trade one civil rights violation for another

These proposals to use DNA to reunite immigrant families aren’t new. In 2008, the United Nations High Commissioner for Refugees (UNHCR) looked at this exact problem. In a document titled DNA Testing to Establish Family Relationships in the Refugee Context, it recognized that DNA testing "can have serious implications for the right to privacy and family unity" and should be used only as a "last resort." In 2012, we raised alarms about DHS’s proposals at that time to use DNA to verify claimed kinship in the refugee and asylum context. The concerns raised by DNA collection ten years ago have only grown more acute today.

The Trump administration shouldn’t be allowed to capitalize on the family separation crisis it created to blind us to these concerns. And well-meaning people who want to reunite families should consider other solutions to this crisis. Immigrant families shouldn’t have to trade the civil rights violation of being separated from their family members for the very real threats to privacy and civil liberties posed by DNA collection.

Free WiFi all across New York City? It might sound like a dream to many New Yorkers, until the public learned that it wasn’t "free" at all. LinkNYC, a communications network that is replacing public pay phones with WiFi kiosks across New York City, is paid for by advertising that tracks users, festooned with cameras and microphones, and has questionable processes for allowing the public to influence its data handling policies.

These kiosks also gave birth to ReThink LinkNYC, a grassroots community group that’s uniting New Yorkers from different backgrounds in standing up for their privacy. In a recent interview with EFF, organizers Adsila Amani and Mari Dej described the organization as a "hodgepodge of New Yorkers" who were shocked by the surveillance-fueled WiFi kiosks entering their neighborhoods. More importantly, they saw opportunity. As Dej described, "As we began scratching the surface, [we] saw that this was an opportunity as well to highlight some of the problems that are largely invisible with data brokers and surveillance capitalism."

ReThink LinkNYC, which has launched an informational website and hosts events across New York, has been pushing city officials for transparency and accountability. They have demanded a halt to construction on the kiosks until adequate privacy safeguards are enacted.

The group has already had some successes. As Dej described it, "We certainly got the attention of LinkNYC, and that itself is a victory – [they] know that there is an organized group of everyday peeps unhappy with the lack of transparency around the LinkNYC 'spy kiosks.'"

But Amani cautioned that it was too early to know whether early changes in response to the group’s advocacy—including a revised LinkNYC privacy policy, the creation of a Chief Privacy Officer role for the city, and a new city taskforce—will actually advance the privacy concerns of New Yorkers. "We would like to see the end of individualized tracking of location, faces, and all biometric data on the kiosks," Amani offered, "With LinkNYC having the means to collect this data and still not having figured out the path for community oversight of the hardware and software, it’s saying trust us, we won't hurt you. That's naive, especially in these times."

ReThink LinkNYC has thrived in part because it actively cultivated partnerships, and not just with the tech community. Dej noted, "Inasmuch as the structure of surveillance affects us all, all of us deserve to be aware, and welcomed into action.  A movement needs to extend beyond the tech community." 

To other groups around the country that might be interested in campaigning to defend civil liberties in their own communities, Amani advised organizers to examine the power structures they are opposing and cultivate personal connections: "Civic involvement remains a more or less fringe activity for a majority of people.  So appeal to what human community is—feelings of connection, acceptance, creating a safe world for our children, and a chance to be creative, 'seen', and given a sense that one’s participation is valued. If we'd like our tech future to be cooperative (versus dominated by wealth or authoritarian styles), then that's how we organize.  If we dedicate ourselves to unlearning the hierarchical behavioral model, we can more easily sense our power."

Dej agreed, adding "We have the power, we just have yet to realize it." 

ReThink LinkNYC joined the Electronic Frontier Alliance (EFA) over a year ago, and has used the network to help connect with other digital rights activists in New York City, get assistance with event promotion, and discuss strategies. Dej shared that EFA has been useful for connecting with other activists, saying, "It helps us connect to other people and other parts of this issue that you wouldn’t think of right off the bat, like Cryptoparty, who gave us insight into the technology part of all this… It’s also good to see people working and that we’re not the only ones going through this struggle. There are other people fighting different parts of this system as hard as they can." 

The Electronic Frontier Alliance was launched in March 2016 to help inspire and connect community and campus groups across the United States to defend digital rights. While each group is independent and has its own focus areas, every member group upholds five principles:

  1. Free expression: people should be able to speak their minds to whomever will listen.
  2. Security: technology should be trustworthy and answer to its users.
  3. Privacy: technology should allow private and anonymous speech, and allow users to set their own parameters about what to share with whom.
  4. Creativity: technology should promote progress by allowing people to build on the ideas, creations, and inventions of others.
  5. Access to knowledge: curiosity should be rewarded, not stifled.

To learn more about the Electronic Frontier Alliance, find groups in your area, or join the alliance, check out our website.  To learn more about ReThink LinkNYC, visit their website.

Interviews with ReThink LinkNYC were conducted by phone with follow up over email, and responses edited lightly for clarity.

Update July 12, 2018: On July 11, Vigilant Solutions issued a press release disputing EFF’s report. We have posted the details and our response in a new post.

Update 10:45 a.m., July 11, 2018: The Irvine Company has disclosed the three shopping centers are Irvine Spectrum Center, Fashion Island, and The Marketplace.  The local police departments are the Irvine, Newport Beach, and Tustin police departments.   

Update 7:30 p.m. July 10, 2018: The Irvine Company provided The Verge with the following response. 

"Irvine Company is a customer of Vigilant Solutions. Vigilant employs ALPR technology at our three Orange County regional shopping centers. Vigilant is required by contract, and have assured us, that ALPR data collected at these locations is only shared with local police departments as part of their efforts to keep the local community safe."

EFF urges the Irvine Company to release the names of the three regional shopping centers that are under surveillance and to provide a copy of the contract indicating the data is only shared with local police. The company should also release the names of which local agencies are accessing its data.  We remain concerned and skeptical.  EFF would appreciate any information that would clear up this matter. The public deserves greater transparency from The Irvine Company and Vigilant Solutions. 

A company that operates 46 shopping centers up and down California has been providing sensitive information collected by automated license plate readers (ALPRs) to Vigilant Solutions, a surveillance technology vendor that in turn sells location data to Immigrations & Customs Enforcement. 

The Irvine Company—a real estate company that operates malls and mini-malls in Irvine, La Jolla, Newport Beach, Redwood City, San Jose, Santa Clara and Sunnyvale—has been conducting the ALPR surveillance since just before Christmas 2016, according to an ALPR Usage and Privacy Policy published on its website (archived version). The policy does not say which of its shopping centers use the technology, only disclosing that the company and its contractors operate ALPRs at "one or more" of its locations.

Automated license plate recognition is a form of mass surveillance in which cameras capture images of license plates, convert each plate image into machine-readable text, and append a time, date, and GPS location. This data is usually fed into a database, allowing the operator to search for a particular vehicle’s travel patterns or identify visitors to a particular location. By adding certain vehicles to a "hot list," an ALPR operator can receive near-real-time alerts on a person’s whereabouts.
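In data-model terms, the pipeline described above is simple: each camera read becomes a timestamped, geotagged record appended to a searchable store, with an optional membership check against a hot list. The following Python sketch illustrates the idea; all names and values are hypothetical and not based on any vendor's actual system:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateRead:
    plate: str           # plate text as decoded by OCR
    timestamp: datetime  # when the image was captured
    lat: float           # GPS latitude of the camera
    lon: float           # GPS longitude of the camera

def ingest(read: PlateRead, database: list, hot_list: set) -> bool:
    """Store the read and report whether it matches the hot list."""
    database.append(read)
    return read.plate in hot_list

# A "hot list" is just a set of plates the operator wants alerts for.
hot_list = {"7ABC123"}
db: list = []

read = PlateRead("7ABC123", datetime.now(timezone.utc), 33.65, -117.74)
if ingest(read, db, hot_list):
    print("hot-list alert:", read.plate)
```

Because every read is retained whether or not it matches the hot list, the store accumulates the travel history of every vehicle the cameras see—which is why retention policy, not just alerting, is the core privacy question.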

EFF contacted the Irvine Company with a series of questions about the surveillance program, including which malls deploy ALPRs and how much data has been collected and shared about its customers and employees. After accepting the questions by phone, the Irvine Company did not respond further.

The Irvine Company's Shopping Centers in California: [embedded map; the original post serves this content from google.com]

The Irvine Company’s policy describes a troubling relationship between the retail world and the surveillance state. The cooperation between the two companies allows the government to examine the travel patterns of consumers on private property with little transparency and no consent from those being tracked. As private businesses, Vigilant Solutions and the Irvine Company are generally shielded from transparency measures such as the California Public Records Act. The information only came to light due to a 2015 law passed in California that requires ALPR operators—both public and private alike—to post their ALPR policies online. Malls in other states where no such law exists could well be engaged in similar violations of customer privacy without any public accountability.

In December 2017, ICE signed a contract with Vigilant Solutions to access its license-plate reader database. Data from Irvine Company’s malls directly feeds into Vigilant Solutions’ database system, according to the policy. This could mean that ICE can spy on mall visitors without their knowledge and receive near-real-time alerts when a targeted vehicle is spotted in a shopping center’s parking lot. 

Vigilant Solutions’ dealings with ICE have come under growing scrutiny in California as the Trump administration accelerates its immigrant enforcement. The City of Alameda rejected a contract with Vigilant Solutions following community outcry over its contracts with ICE. The City of San Pablo put an expansion of its surveillance network on hold due to the same concerns.

But ICE isn’t the only agency accessing ALPR data. Vigilant Solutions shares data with as many as 1,000 law enforcement agencies nationwide. Through its sister company, Digital Recognition Network, Vigilant Solutions also sells ALPR data to financial lenders, insurance companies, and debt collectors.

"Irvine is committed to limiting the access and use of ALPR Information in a manner that is consistent with respect for individuals' privacy and civil liberties," the Irvine Company writes in its policy. "Accordingly, contractors used to collect ALPR Information on Irvine's behalf and Irvine employees are not authorized to access or use the ALPR Information or ALPR System." And the Irvine Company says it deletes the data once it has been transmitted to Vigilant Solutions.

Although the Irvine Company pays lip service to civil liberties, the company undermines that position by allowing Vigilant Solutions to apply its own policy to the data. Vigilant Solutions does not purge data on a regular basis and instead "retains LPR data as long as it has commercial value."

The Irvine Company must shut down its ALPR system immediately. By conducting this location surveillance and working with Vigilant Solutions, the company is putting not only immigrants at risk, but invading the privacy of its customers by allowing a third-party to hold onto their data indefinitely.

We will update this post if and when the Irvine Company decides to respond to our questions.

Special thanks to Zoe Wheatcroft, the EFF volunteer who first spotted The Irvine Company's ALPR policy. 

text Announcing EFF’s New Logo (and Member Shirt)
Tue, 10 Jul 2018 00:27:04 +0000

EFF was founded on this day, exactly 28 years ago. Since that time, EFF’s logo has remained more or less unchanged. This helped us develop a consistent identity — people in the digital rights world instantly recognize our big red circle and the heavy black "E." But the logo has some downsides. It’s hard to read, doesn’t say much about our organization, and looks a bit out of date.

Today, we are finally getting around to a new look for our organization, thanks to the generosity of the top branding firm Pentagram. We’ve launched a new logo, nicknamed "Insider," created under the leadership of the amazing Michael Bierut.

To celebrate, we’re releasing our new EFF member shirt, featuring the new logo. It’s a simple black shirt with the logo in bright red and white. Join us or renew your membership and get a new shirt today! 

A photo of the front and back of the Insider EFF member shirt.

A photo of two EFF staffers wearing the new shirt.

There’s a good story behind how this new logo came about. 

Last year, EFF defended Kate Wagner, the blogger behind McMansion Hell, a popular blog devoted to the many flaws and failures of so-called "McMansions," those oversized suburban tract homes that many people love to hate. The online real estate database Zillow objected to Wagner's use of their photos, and threatened her with legal action.

EFF stepped in to defend Wagner. EFF sent a letter on the blogger’s behalf, explaining that her use of Zillow’s images was a textbook example of fair use. Zillow backed down, and her many supporters let out a collective cheer.  

One of those supporters was Michael Bierut, who also happens to be one of the best logo designers on the planet. (You have probably seen some of his work: among his many recognizable works are logos for MIT’s Media Lab, MasterCard, and Hillary Clinton.) Bierut said he loved EFF's letter, recognized it as great legal writing, and also saw that EFF needed a new logo. He and his team at Pentagram offered to make us a new one, pro bono. 

We were really touched and pleased by his offer. Over subsequent months, we worked with Bierut and his team to come up with something new. In describing what we were looking for, we told Pentagram that we wanted something simple, classic, and that matched the boldness of our vision for the Internet. 

After several rounds and revisions, they came up with this new logo, Insider. One of the great things about this logo is that, in true Pentagram fashion, this logo is really a logo system. The logo can be reconfigured and adjusted in multiple ways, allowing us to adjust our look for many purposes. This logo will look as good on a formal legal letter as it does in an activist campaign. It also uses a great open source typeface called League Gothic!

You can access your own copies of this logo in various configurations and file formats from our logo page. Please feel free to use them for any legal purpose.

We hope you like the new logo as much as we do—and that when you see it, wear it, or display it, it continues to convey our history of working for your online rights, and our plan to keep up the fight long into the future. 




A happy ending, shared with Kate Wagner and Michael Bierut’s consent.

After a hearing that stripped California’s gold-standard net neutrality bill of many of its protections, California legislators have negotiated new amendments that restore the vast majority of those protections to the bill. The big ISPs and their money did not defeat the voices of the many, many people who want and need a free and open Internet.

On June 20, the Communications and Conveyance Committee of the California Assembly, after having rejected proposed amendments to move Senator Scott Wiener’s S.B. 822 and Senator Kevin de León’s S.B. 460 forward as a package, also voted to gut S.B. 822's strong net neutrality protections. It was a move that resulted in a hollowed-out version of S.B. 822 that left huge loopholes for ISPs.

Since then, there’s been an outcry from Team Internet in California, making clear how important effective, strong net neutrality protections are. Senator Wiener, Senator de León, Assemblymember Rob Bonta, and Assemblymember Miguel Santiago, the Chair of the Assembly Committee on Communications and Conveyance that voted on the watered-down bill, have all come to an agreement that once again makes California’s proposed legislation the strongest net neutrality bill in the country.

The willingness of Assemblymember Santiago to listen to his constituents’ opinions and realize their needs, as opposed to those of large ISPs like AT&T, is laudable. And the resulting agreement puts California net neutrality back on track.

As was initially proposed by Senator Wiener and Senator de León, both net neutrality bills will now become a package. The ban on blocking, throttling, and paid prioritization remains—paid prioritization has been a particular target of misleading ISP arguments. The ban on certain kinds of zero rating—the kinds that lead consumers to services that ISPs want them to use rather than giving them choices—also remains. And so does the ban on access fees, which means ISPs will not be able to get around these protections by charging fees at the places where data enters their networks.

This is what real net neutrality looks like. And it all happened because people spoke out. You sent emails, called offices, crowdfunded a billboard—all of that was heard. People’s voices trumped company money this time.

The fight’s not over: these bills still need to be passed by the California legislature and signed by the governor. So keep telling them to vote for S.B. 822.

Take Action

Tell California Assemblymembers to Vote Yes on S.B. 822

Big companies are harvesting and monetizing your face print, fingerprints, and other sensitive biometric information, without your knowledge and consent. That’s why Illinois wisely enacted the Biometric Information Privacy Act (BIPA), which prohibits companies from gathering, using, or sharing your biometric information without your informed opt-in consent. Now companies are asking the Illinois Supreme Court to defang BIPA, by narrowly interpreting its enforcement tool and thus depriving injured parties of their day in court.

EFF has joined an amicus curiae brief urging the Illinois Supreme Court to adopt a robust interpretation of BIPA. Our fellow amici are ACLU, CDT, the Chicago Alliance Against Sexual Exploitation, PIRG, and Lucy Parsons Labs. In the case on appeal, Rosenbach v. Six Flags, an adolescent who purchased a season pass to an amusement park alleges the park scanned and stored his thumbprint biometrics without written consent or notice about its plan to collect, store, and use his biometric information.

The Illinois Supreme Court will decide the effectiveness of BIPA’s enforcement tool. BIPA provides that "any person aggrieved by a violation of this Act" may file their own lawsuit against the company that violated the Act. The question before the court is whether a person is "aggrieved," and may sue, based solely on the collection of their biometric information without their informed opt-in consent, or whether a person must also show some additional injury.

EFF and our fellow amici argue that a person is "aggrieved," and may sue, based just on capture of their biometric information without notice and informed consent. We offer several reasons. First, biometric surveillance is a growing menace to our privacy. Our biometric information can be harvested at a distance and without our knowledge, and we often have no ability as individuals to effectively shield ourselves from this grave privacy intrusion. Second, BIPA follows in the footsteps of a host of other privacy laws that prohibit the capture of private information absent informed opt-in consent, and that define capture without notice and consent by itself as an injury. Third, allowing private lawsuits is a necessary means to ensure effective enforcement of privacy laws.

Perhaps most importantly, more businesses than ever are capturing and monetizing our biometric information. Retailers use face recognition to surveil shoppers’ behavior as they move about the store, and to identify potential shoplifters. Employers use fingerprints, iris scans, and face recognition to manage employee access to company phones and computers. People have filed BIPA lawsuits against major technology companies like Facebook, Google, and Snapchat, alleging the companies applied face recognition to their uploaded photographs without their consent. The U.S. Chamber of Commerce recently filed an amicus brief in one of these lawsuits, urging a federal appellate court to gut BIPA.

Illinois’ BIPA is the strongest biometric privacy law in the United States. EFF and other privacy groups for years have resisted big business efforts to gut BIPA through the legislative process. Now we are proud to join our privacy allies in an amicus brief before the Illinois Supreme Court to push back against the latest effort to weaken BIPA.

It’s World Cup time. That means goals. And goals means goal celebrations. Here’s a compilation of U.S. soccer fans celebrating a last-second goal in the 2010 World Cup. Ah, memories. Anyway, FIFA apparently doesn’t like it when fans celebrate near their television sets. It sent a takedown notice aimed at a five-second video of a young boy celebrating in his living room.

Following a goal in the England-Tunisia match, Kathryn Conn posted a five-second video of her seven-year-old son celebrating. Conn explained that her son "is a massive Spurs fan and he absolutely worships Harry Kane so he started dancing around in the living room." Unfortunately, the dancing occurred in front of a television still playing the game. And if there’s one thing FIFA is serious about, it’s their copyright.

Conn says she woke up the next morning to find the video deleted from Twitter, along with a notice that it had been removed due to a DMCA takedown notice from FIFA, which apparently was worried that a blurry background shot of a soccer game in a five-second video would make people less likely to watch 2018’s most-viewed TV event in England.

Hmmm. A dancing child in a short video with copyrighted material playing incidentally in the background? We hope that it won’t take 10 years of litigation for FIFA to learn its lesson here. It should respect fair use and respect its fans.

Foreign languages have been taught, and studied, for thousands of years. People who teach languages are the last folks that should be dealing with patent threat letters—but incredibly, that’s exactly what has happened to Mihalis Eleftheriou. Hodder and Stoughton, a large British publisher, has sent a letter to Eleftheriou claiming that it has rights to a patent that covers recorded language lessons, and demanding that he stop providing online courses. 

Eleftheriou teaches a variety of online classes through his Language Transfer project. The courses are simple audio files uploaded to platforms like Soundcloud and YouTube. So you can imagine his surprise when he received a letter [PDF] from Hodder and Stoughton, saying that his project infringes a U.S. patent.

Hodder and Stoughton contends that Language Transfer infringes U.S. Patent No. 6,565,358, titled "Language teaching system." The patent essentially covers a language lesson on tape. In the patent’s words, it claims a particular sequence of "expression segments" to be played on a "recorded medium" using a "playing device." In plain English, the "expression segments" amount to the following sequence: the teacher asks how to translate a phrase, there is a short pause, an example student attempts to answer the question, and then the teacher provides the correct answer. 

At this point you might be asking yourself, wait, what? How on Earth did someone get a patent on putting a language lesson on tape? Those are good questions. The answer, frankly, is that the Patent Office needs to do a much better job.

Today EFF has sent a response [PDF] to Hodder and Stoughton on Eleftheriou’s behalf. We explain that the ’358 patent is plainly invalid under the Supreme Court’s 2014 decision in Alice v. CLS Bank. That decision holds that an abstract idea does not become eligible for patent protection merely by being implemented on conventional or generic technology. The ’358 patent—which claims a sequence of human expressions on an ordinary tape—is a quintessential example of the kind of patent that fails this test. It is no more patentable than a sequence of musical notes on tape. 

Our letter also explains that the ’358 patent is invalid as anticipated and obvious. Any student that has ever sat in a language class has probably heard the sequence of "expression segments" claimed in the patent. A search quickly revealed prior art. Indeed, the named inventor, Michel Thomas, was featured in a BBC documentary in 1997, more than three years before the patent application was filed. This documentary includes a number of sequences that match the patent’s claims. Hodder and Stoughton’s lawyer himself claimed that the patent would cover a recording done via "television system," so has essentially admitted that the documentary is invalidating prior art. 

Hodder and Stoughton not only demanded that Eleftheriou stop making the Language Transfer courses available in the United States, it also demanded that he abandon plans to publish a book about language instruction. This is an abuse of the patent system. First, Hodder and Stoughton has never even seen Eleftheriou’s book. The book will be Eleftheriou’s original work about languages and his language teaching method, and not about editing audio lessons. Second, the patent is invalid. But even more fundamentally, a patent does not allow this kind of censorship.

Hodder and Stoughton appears to be using a patent to make an end-run around the idea/expression dichotomy in copyright law. Copyright allows authors to protect particular expression (their prose), but not ideas (like building suspense via cliffhangers). In language teaching, this might play out so that someone can copyright their specific written lessons or recorded tapes, but not the idea of teaching a second language through immersion. Abstract ideas also cannot be patented. They are fundamental building blocks of knowledge, and not subject to exclusive ownership. Rather, they must remain available to future creators and inventors.

Last week, we visited Congress and presented the ’358 patent to staffers there as an example of how important it is to maintain common-sense limits on patentable subject matter. The patent lobby—in the form of the Intellectual Property Owners Association and the American Intellectual Property Law Association—wants Congress to undo Alice through legislation. These groups are pushing to change the law so that everything is eligible for patent protection unless it is "solely in the human mind." The ’358 patent shows what a disaster such legislation would be. It could make the patent system a kind of "super copyright" where people can monopolize ideas just by putting them on tape.

Eleftheriou will continue to offer free language courses to people within the United States. We hope Hodder and Stoughton comes to its senses and abandons its absurd demands.

For many years, EFF has urged technology companies and legislators to do a better job at protecting the privacy of technology users and other members of the public. We hoped the companies, particularly mature players, would realize the importance of implementing meaningful privacy protections. But this year’s Cambridge Analytica scandal, following on the heels of many others, was the last straw.  Corporations are willfully failing to respect the privacy of technology users, and we need new approaches to give them real incentives to do better—and that may include updating our privacy laws.

To be clear, any new regulations must be judicious and narrowly tailored, avoiding tech mandates and expensive burdens that would undermine competition—already a problem in some tech spaces. To accomplish that, policymakers must start by consulting with technologists as well as lawyers.  After the passage of SESTA/FOSTA, we know Congress can be insensitive about the potential consequences of the rules it embraces. Looking to experts would help.

Just as importantly, new rules must also take care not to sacrifice First Amendment protections in the name of privacy protections; for example, EFF opposes the "right to be forgotten," that is, laws that force search engines to de-list publicly available information. Finally, one size does not fit all: as we discuss in more detail below, new regulations should acknowledge and respect the wide variety of services and entities they may affect.  Rules that make sense for an ISP may not make sense for an open-source project, and vice versa.

With that in mind, policymakers should focus on the following: (1) addressing when and how online services must acquire affirmative user consent before collecting or sharing personal data, particularly where that data is not necessary for the basic operation of the service; (2) creating an affirmative "right to know," so users can learn what data online services have collected from and about them, and what they are doing with it; (3) creating an affirmative right to "data extraction," so users can get a complete copy of their data from a service provider; and (4) creating new mechanisms for users to hold companies accountable for data breaches and other privacy failures. 

But details matter. We offer some below, to help guide lawmakers, users, and companies alike in properly advancing user privacy without intruding on free speech and innovation.

Opt-in Consent to Online Data Gathering

Technology users interact with many online services. The operators of those services generally gather data about what the users are doing on their websites. Some operators also gather data about what the users are doing on other websites, by means of tracking tools. They may then monetize all of this personal data in various ways, including but not limited to targeted advertising, and selling the bundled data—largely unbeknownst to the users that provided it.

New legislation could require the operator of an online service to obtain opt-in consent to collect, use, or share personal data, particularly where that collection, use, or transfer is not necessary to provide the service. The request for opt-in consent should be easy to understand and clearly advise the user what data the operator seeks to gather, how the operator will use it, how long the operator will keep it, and with whom the operator will share it. The request should be renewed any time the operator wishes to use or share data in a new way, or gather a new kind of data. And the user should be able to withdraw consent, including for particular purposes.
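To make the elements of such a consent request concrete, here is a minimal sketch of what one consent record might track. All names and fields here are our own illustration, not taken from any proposed bill or existing service:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class ConsentRecord:
    """A hypothetical record of one user's opt-in consent.

    Field names are illustrative only: the request must spell out what
    data is gathered, how it is used, how long it is kept, and with
    whom it is shared.
    """
    user_id: str
    data_categories: List[str]   # what data the operator seeks to gather
    purposes: List[str]          # how the operator will use it
    retention_days: int          # how long the operator will keep it
    shared_with: List[str]       # with whom the operator will share it
    granted_at: Optional[datetime] = None
    withdrawn_at: Optional[datetime] = None

    def grant(self) -> None:
        self.granted_at = datetime.now()
        self.withdrawn_at = None

    def withdraw(self) -> None:
        # The user can withdraw consent at any time.
        self.withdrawn_at = datetime.now()

    def permits(self, category: str, purpose: str) -> bool:
        # A new kind of data, or a new use, falls outside this record
        # and would require a fresh, renewed request for consent.
        if self.granted_at is None or self.withdrawn_at is not None:
            return False
        return category in self.data_categories and purpose in self.purposes
```

The point of the sketch is that consent is scoped: any use not enumerated at the time of the request comes back `False` and triggers a new request, rather than being silently covered.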

Some limits are in order. For example, opt-in consent might not be required for a service to take steps that the user has requested, like collecting a user's mailing address in order to ship them the package they ordered. But the service should always give the user clear notice of the data collection and use, especially when the proposed use is not part of the transaction, like renting the shipping address for junk mail.

Finally, there is a risk that extensive and detailed opt-in requirements can lead to "consent fatigue." Any new regulations should encourage entities seeking consent to explore new ways of obtaining meaningful consent to avoid that fatigue. At the same time, research suggests companies are becoming skilled at manipulating consent, steering users to share personal data.

Right to Know About Data Gathering and Sharing

Users should have an affirmative "right to know" what personal data companies have gathered about them, where they got it, and with whom these companies have shared it (including the government).

Again, some limits are in order to ensure that the right to know doesn’t impinge on other important rights and privileges.  For example, there needs to be an exception for news gathering, which is protected by the First Amendment, when undertaken by professional reporters and lay members of the public alike. Thus, if a newspaper tracked visitors to its online edition, the visitors’ right-to-know could cover that information, but not extend to a reporter’s investigative file. 

Data Extraction

In general, users should have a legal right to extract a copy of the data they have provided to an online service. People might use this copy in myriad ways, such as self-publishing their earlier comments on social media. Also, this copy might help users to better understand their relationship with the service provider.

In some cases, it may be possible for users to take this copy of their extracted data to a rival service. For example, if a user is dissatisfied with their photo storage service, they could extract a copy of their photos (and associated data) and take it to another photo storage system. In such cases, data portability may promote competition, and hopefully over time will improve services.

However, this right to extraction may need limits for certain services, such as social media, where various users’ data is entangled. For example, suppose Alice posts a photo of herself on social media, under a privacy setting that allows only certain people to see the photo, and Bob (one of those people) posts a comment on the photo. If Bob seeks to extract a copy of the data he provided to that social media, he should get his comment, but might not necessarily also get Alice’s photo.
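A toy sketch of that entanglement rule, using the Alice-and-Bob example above (the data layout and function are entirely our own illustration, not any platform's actual export format):

```python
# Each item records who authored it; extraction returns a copy of only
# the items the requesting user provided, even when items are attached
# to one another (a comment on someone else's restricted photo).

posts = [
    {"id": 1, "author": "alice", "kind": "photo",
     "visible_to": ["alice", "bob"]},
    {"id": 2, "author": "bob", "kind": "comment", "parent": 1},
]

def extract(user: str, items: list) -> list:
    """Return a copy of the data this user provided, and only that data.

    Bob gets his comment back, but not Alice's photo, even though his
    comment is attached to it and he is allowed to view it.
    """
    return [dict(item) for item in items if item["author"] == user]

bobs_copy = extract("bob", posts)
```

The design choice the example highlights: "data you can see" and "data you provided" are different sets, and a portability right most naturally attaches to the latter.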

Data Breach

Many kinds of organizations gather sensitive information about large numbers of people, yet fail to securely store it. As a result, such data is often leaked, misused, or stolen. What is worse, some organizations fail to notify and assist the injured parties. Victims of data breaches often suffer financial and non-financial harms for years to come.

There are many potential fixes, some easier than others.  An easy one: it should be simple and fast to get a credit freeze from a credit reporting agency, which will help prevent any credit fraud following a data breach.

Also, where a company fails to adopt basic security practices, it should be easier for people harmed by data breaches—including those suffering non-financial harms—to take those companies to court.

Considerations When Drafting Any Data Privacy Law

  • One Size Does Not Fit All: Policymakers must take care that any of the above requirements don’t create an unfair burden for smaller companies, nonprofits, open source projects, and the like. To avoid that, they should consider tailoring new obligations based on size and purpose of the service in question. For example, policymakers might take account of the entity’s revenue, the number of people employed by the entity, or the number of people whose data the entity collects, among other factors.
  • Private Causes of Action: Policymakers should consider whether to include one of the most powerful enforcement tools: Giving ordinary people the ability to take violators to court.
  • Independent Audits: Policymakers should consider requiring periodic independent privacy audits. Audits are not a panacea, and policymakers should attend to the issues raised here.
  • Data Collection Is Complicated: Policymakers should consult with data experts so they understand what data can be collected and used, under what circumstances.
  • Preemption Should Not Be Used To Undermine Better State Protections: There are many benefits to having a uniform standard, rather than forcing companies to comply with 50 different state laws. That said, policymakers at the federal level should take care not to allow weak national standards to thwart better state-level regulations.
  • Waivers: Too often, users gain new rights only to effectively lose them when they "agree" to terms of service and end user license agreements that they haven’t read and aren’t expected to read. Policymakers should consider whether and how the rights and obligations they create can be waived, especially where users and companies have unequal bargaining power, and the "waiver" takes the form of a unilateral form contract rather than a fully negotiated agreement. We should be especially wary of mandatory arbitration requirements given that mandatory arbitration is often less protective of users than a judicial process would be.
  • No New Criminal Bans: Data privacy laws should not expand the scope or penalties of computer crime laws. Existing computer crime laws are already far too broad.

No privacy law will solve all privacy problems. And every privacy bill must be carefully scrutinized to ensure that it plugs existing gaps without inadvertently stifling free speech and innovation.

In April, Mexican federal police arrested Keith Raniere, taking him from the $10,000-per-week villa where he was staying and extraditing him to New York. According to the NY Daily News, Raniere, leader of self-help group NXIVM (pronounced "nexium"), is now being held without bail while he awaits trial on sex-trafficking charges. Through NXIVM, he preached "empowerment," but critics say the group was a cult, and engaged in extreme behavior, including branding some women with an iron.

This was not the first controversial program Raniere was involved in. In 1992, Raniere ran a multilevel marketing program called "Consumer Buyline," which was described as an "illegal pyramid" by the Arkansas Attorney General’s office. More recently, he has collected more than two dozen patents from the U.S. Patent Office, and has more applications pending—including this one, which is for a method of determining "whether a Luciferian can be rehabilitated."

The USPTO has granted Raniere protection for a variety of curious inventions, including a patent on "analyzing resonance," which eliminates unwanted frequencies in anything from musical instruments to automobiles. Raniere also received a patent on a virtual currency system, which he dubbed an "entrance-exchange structure and method." He applied for a patent on a method of "active listening," and received patents on a system for finding a lost cell phone, and a way of preventing a motor vehicle from running out of fuel. NXIVM members reportedly identified their levels with various colored sashes, which helps explain Raniere’s design patent on a "rational inquiry sash."

Today, we’re going to focus on Raniere’s U.S. Patent No. 9,421,447, a "method and apparatus for improving performance." The patent simply adds trivial limitations to the basic functioning of a treadmill, like timing the user and recording certain parameters (speed, heart rate, or turnover rate). Since most modern treadmills allow users to precisely measure performance on a variety of metrics, the patent is arguably broad enough that it could be used to sue treadmill manufacturers or sellers.

Given Raniere’s litigation history, that’s not such a remote possibility. NXIVM has sued its critics for defamation often enough that the Albany Times-Union called NXIVM a "Litigation Machine." And Raniere sued both AT&T and Microsoft for infringement of some patents relating to video conferencing. The latter suit ended very badly for Raniere, who was ordered to pay attorneys’ fees after he couldn’t prove that he still had ownership of the patents in question. So it’s worth taking a look at how Raniere got the ‘447 patent.

Raniere’s Law ™

Raniere has never been shy about proclaiming how special he is. His bio on a website for Executive Success Programs, a series of courses run by NXIVM, explains that he could "construct full sentences and questions" by the age of one, and read by the age of two. Raniere was an East Coast Judo Champion at age 11, recruits are told, and he entered college at Rensselaer Polytechnic Institute by age 16. The honorifics continue:

He has an estimated problem-solving rarity of one in 425,000,000 with respect to the general population. He has intellectual patents pending in the areas of human potential and ethics, expression, voice and musical training, athletic performance, commerce, education and learning, information processing and human modeling. He also holds several technological patents on computer inventions and a sleep guidance system.

Raniere may be able to convince NXIVM followers that he is a one-in-425 million level genius. A new article from Vanity Fair explains that, inside NXIVM, Raniere’s patents were often used as evidence of his brilliance. But how did Raniere convince the US Patent and Trademark Office of his inventing abilities?

Ultimately, he didn’t really have to. A close look at the history of Raniere’s patent application shows how the deck is stacked in favor of any applicant who is determined to prove they’re a great inventor and reasonably well-funded: the Patent Office can ultimately be cowed into compliance.

In this case, Raniere’s original patent application claimed a "performance system" with a "control system" and a sensor for monitoring "at least one parameter." His examples went beyond exercise: he intended to patent humans making mathematical calculations at increasing speed, or a weightlifter decreasing the time between repetitions.

Appropriately, the examiner rejected all 13 of his proposed claims. But nothing stops patent applicants from coming back and trying again—and again—and that’s exactly what Raniere did. To his bare-bones description of a "performance system" he added this dose of jargon:

Wherein said control system includes a device to determine a point of efficiency, said point of efficiency occurring when the linear proportional rate of change in [] at least one parameter of the subject being trained varies rapidly outside of the state of accommodation and the range of tolerance.

Whew! That’s a lot of verbiage just to explain that the same "performance system" is measuring how fast a change occurs. The patent would be infringed by any treadmill that could measure a changing variable. Even though earlier patents had described essentially the same thing, Raniere’s lawyers insisted that his idea of measuring the "rate of change" was "completely different" from a system that used a "precalculated range."

The examiner rejected Raniere’s application again, noting that an older patent for an exercise bike attached to a video game still fulfilled all the elements of Raniere’s new, jargon-filled patent.

But Raniere simply paid $470 to file a "request for continued examination," and kept pounding his fist on the proverbial table. Raniere, or his lawyers, bloated Claim 1 up with yet more language about the point of efficiency occurring "just prior to the subject no longer being able to accommodate additional stress" and entering a state of exhaustion, and claimed now that it was this more narrow description that was his stroke of genius.

"Nowhere in [earlier patent] Hall-Tipping is it suggested that the user be exercised to the point of exhaustion," pointed out Raniere’s lawyers, this time around.

Rejected again, they had an interview with the examiner before coming back with yet another $470 "continued examination" request. Then Raniere loaded up Claim 1 with almost twice as much language about the system repeating itself, and re-measuring new "points of efficiency."

This went on and on [PDF], with Raniere continuing to change language and add limitations. Eight times, the examiner threw out every single one of his claims. Finally, after he added language about the "range of tolerance" being plus or minus two percent, his claims were allowed.
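To show how mundane the allowed claims are, here is our own rough rendering of the claimed steps in a few lines of code. This is a sketch of the claim language as we read it, not an authoritative claim construction, and the two percent threshold comes straight from the amendment that finally got the claims allowed:

```python
def point_of_efficiency(samples, tolerance=0.02):
    """Watch a measured parameter (say, heart rate or speed) over
    repeated trials and flag the first trial where its rate of change
    jumps outside a plus-or-minus-two-percent "range of tolerance" --
    roughly the "point of efficiency" the allowed claims recite.
    """
    for i in range(1, len(samples)):
        prev, cur = samples[i - 1], samples[i]
        rate_of_change = (cur - prev) / prev
        if abs(rate_of_change) > tolerance:
            # The subject can no longer "accommodate" the added stress.
            return i
    return None  # no point of efficiency reached in these trials
```

Any treadmill firmware that compares successive readings against a threshold does essentially this, which is the point: eight rounds of added verbiage narrowed the claims to something a first-year programmer would write in an afternoon.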

In his specification, Raniere was typically un-self-effacing. He crowed that he had created "Raniere’s Maximal Efficiency Principle™" or "Raniere’s Law™." (The guy is clearly into branding.)

Unfortunately, this is par for the course. Determined patent applicants get an endless number of chances to create a piece of intellectual property that just barely avoids all the other patents and non-patent art that overworked patent examiners are able to find. The strategy is: find a basic process, and slowly add limitations until you get a patent. That’s how we get patents on filming a yoga class and Amazon’s patent on white-background photography. The fault lies not so much with the examiner here, but with the Federal Circuit for interpreting patent law’s obviousness standard in a way that effectively prohibits the Patent Office from relying on common sense.

So what’s the solution? We need the Federal Circuit to apply the Supreme Court’s decision in KSR v Teleflex more faithfully and allow the Patent Office to use common sense when faced with mundane claims. We also need to defend the Alice v. CLS Bank ruling so that examiners can reject patents that claim abstract ideas implemented with conventional tools (like treadmills). Patent law should also be changed so that applicants don’t get an endless number of bites at the apple.

As we reported last week, JURI, the key European Parliamentary committee working on copyright reform, voted on June 20th to support compulsory copyright filters for media platforms (Article 13), and to create a new requirement on websites to obtain a license before linking to news stories (Article 11).

That vote marked the last chance for the democratically-elected members of the European Parliament (MEPs) to submit fixes to the directive — under the usual procedures. But this is not an ordinary regulation, and there still remain a couple of unusual procedures that would let them throw out these two disastrous articles.

The first of these happens next week. Generally, the text agreed by the JURI committee would immediately become the basis of a "Trilogue" negotiation between the Parliament, the European Commission (the EU's executive) and the European Council (representatives of its member states). What comes out of that negotiation becomes EU law — and with the JURI vote, all three groups have agreed to include copyright filters and link taxes in the final directive.

However, given the controversy over the directive's contents, we expect some MEPs will invoke "Rule 69(c)" next week. That would lead to a vote of the full Parliament on the JURI text as a negotiating mandate, probably on July 5th.

As Julia Reda, the Pirate Party MEP, explains in the interview below, with enough noise, it may well be possible to get a majority of the Parliament to oppose the JURI decision. That would re-open the directive's text, and allow ordinary MEPs to vote on amendments. Even if we don't get a majority then, it will be important groundwork for the next, highly unusual step: another plenary vote on the negotiated directive some time later this year.

[Embedded video: interview with Pirate Party MEP Julia Reda, served from youtube-nocookie.com]

Will they rise to the challenge? Most MEPs — like most Europeans — were unaware of the controversy surrounding Articles 11 and 13 until this month. Right now, they're being heavily lobbied by the regulations' supporters, but they're also hearing from thousands of their constituents.

There are protests being organized across Europe; Internet-savvy figures like Stephen Fry and Neil Gaiman are raising the alarm. And MEPs are beginning to see the light.


Write, call or arrange to meet with your MEP or their staff now.

As Reda says, do a little research on where your MEP stands on the issues. Right now, progressive MEPs are being told that Articles 13 and 11 will teach the big tech companies a lesson and defeat fake news (no, we still don't understand that one); and conservative MEPs are being told that Europe's businesses support these new property rights. These arguments are deeply misleading — Google, Facebook, Apple and other giants are rarely happy with new regulations, but they'd be able to comply quickly and easily with Articles 13 and 11, unlike any emerging competitors who would have no negotiating powers to gain new licenses or build copyright scanning tools. And while it's true that the multinational media and rightsholder conglomerates have pushed for the link tax and the copyright filters, there are many other businesses and non-profit groups that would be caught in the directive's filtering and licensing net.

You don't have to be on the right or the left to decry this directive: the primary community it will affect doesn't have lobbyists in Brussels. They're just ordinary Internet and digital technology users and creators. Europe's lawmakers need to understand that the details of digital copyright are more than just a deal to be brokered between commercial giants — they're a matter of free expression, privacy, and human rights.

That's why the United Nations' human rights experts oppose these articles; why Wikipedia and Creative Commons are fighting them; why the Internet's pioneering technologists and creators told the EU to think again. And that's why your MEP needs to hear from you.

Call now at saveyourinternet.eu.

Fears of Criminal Charges Muzzle Online Speech about Sex Work and Force Community Forums Offline

San Francisco – Two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist have filed a lawsuit asking a federal court to block enforcement of FOSTA, the new federal law that silences online speech by forcing speakers to self-censor and requiring platforms to censor their users. The plaintiffs are represented by the Electronic Frontier Foundation (EFF), Davis Wright Tremaine LLP, Walters Law Group, and Daphne Keller.

In Woodhull Freedom Foundation et al. v. United States, the plaintiffs argue that FOSTA is unconstitutional, muzzling online speech that protects and advocates for sex workers and forcing well-established, general-interest community forums offline for fear of criminal charges and heavy civil liability for things their users might share.

FOSTA, or the Allow States and Victims to Fight Online Sex Trafficking Act, was passed by Congress in March. But instead of focusing on the perpetrators of sex trafficking, FOSTA goes after online speakers, imposing harsh penalties for any website that might "facilitate" prostitution or "contribute to sex trafficking." The vague language and multiple layers of ambiguity are driving constitutionally protected speech off the Internet at a rapid pace.

For example, plaintiff the Woodhull Freedom Foundation works to support the health, safety, and protection of sex workers, among other things. Woodhull wanted to publish information on its website to help sex workers understand what FOSTA meant to them. But instead, worried about liability under FOSTA, Woodhull was forced to censor its own speech and the speech of others who wanted to contribute to its blog. Woodhull is also concerned about the impact of FOSTA on its upcoming annual summit, scheduled for next month.

"FOSTA chills sexual speech and harms sex workers," said Ricci Levy, executive director of the Woodhull Freedom Foundation. "It makes it harder for people to take care of and protect themselves, and, as an organization working to protect people's fundamental human rights, Woodhull is deeply concerned about the damaging impact that this law will have on all people."

FOSTA calls into serious question the legality of online speech that advocates for the decriminalization of sex work, or provides health and safety information to sex workers. Human Rights Watch (HRW), an international organization that is also a plaintiff, advocates globally for ways to protect sex workers from violence, health risks, and other human rights abuses. The group is concerned that its efforts to expose abuses against sex workers and decriminalize voluntary sex work could be seen as "facilitating" "prostitution," or in some way assisting sex trafficking.

"HRW relies heavily on individuals spreading its reporting and advocacy through social media," said Dinah Pokempner, HRW General Counsel. "We are worried that social media platforms and websites may block the sharing of this information out of concern it could be seen as demonstrating a 'reckless disregard' of sex trafficking activities under FOSTA. This law is the wrong approach to the scourge of sex trafficking."

But FOSTA doesn’t just impede the work of sex educators and activists. It also led to the shutdown of Craigslist’s "Therapeutic Services" section, which has imperiled the business of a licensed massage therapist who is another plaintiff in this case. The Internet Archive joined this lawsuit against FOSTA because the law might hinder its work of cataloging and storing 330 billion web pages from 1996 to the present.

Because of the critical issues at stake, the lawsuit filed today asks the court to declare that FOSTA is unconstitutional, and asks that the government be permanently enjoined from enforcing the law.

"FOSTA is the most comprehensive censorship of Internet speech in America in the last 20 years," said EFF Civil Liberties Director David Greene. "Despite good intentions, Congress wrote an awful and harmful law, and it must be struck down."

For the full complaint in Woodhull v. United States:

For more on FOSTA:


We are asking a court to declare the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 ("FOSTA") unconstitutional and prevent it from being enforced. The law was written so poorly that it criminalizes a substantial amount of protected speech and, according to experts, actually hinders efforts to prosecute sex traffickers and aid victims.

In our lawsuit, two human rights organizations, an individual advocate for sex workers, a certified non-sexual massage therapist, and the Internet Archive are challenging the law as an unconstitutional violation of the First and Fifth Amendments. Although Congress passed the law for the worthy purpose of fighting sex trafficking, its broad language makes criminals of those who advocate for and provide resources to adult, consensual sex workers, and it actually hinders efforts to prosecute sex traffickers and aid victims.

EFF strongly opposed FOSTA throughout the legislative process. During the months-long Congressional debate on the law, we expressed our concern that it violated free speech rights and would do heavy damage to online freedoms. The version ultimately passed by Congress and signed into law by President Trump was the most egregious of those Congress had been considering.

What FOSTA Changed

FOSTA made three major changes to existing law. The first two involved changes to federal criminal law:

  • First, it created an entirely new federal crime by adding a new section to the Mann Act. The new provision makes it a crime to "own, manage or operate" an online service with the intent to "promote or facilitate" "the prostitution of another person," punishable by up to 10 years in prison. It becomes an "aggravated offense," punishable by up to 25 years in prison and subject to individual civil lawsuits, if the "facilitation" was of the prostitution of 5 or more persons, or if it was done with "reckless disregard" that it "contributed to sex trafficking." The prior version of the Mann Act only made it illegal to physically transport a person across state lines for the purposes of prostitution.
  • Second, FOSTA expanded existing federal criminal sex trafficking law. Before FOSTA, the law made it a crime to knowingly advertise the sexual services of a minor or of any person doing so only under force, fraud, or coercion, and also criminalized several other modes of conduct. The specific knowledge requirement for advertising (that one must know the advertisement was for sex trafficking) was an acknowledgement that advertising is entitled to some First Amendment protection. The prior law additionally made it a crime to financially benefit from "participation in a venture" of sex trafficking. FOSTA made a seemingly small change to the law: it defined "participation in a venture" extremely broadly to include "assisting, supporting, or facilitating." But this very broad new language has created great uncertainty about liability for speech other than advertising that someone might interpret as "assisting" or "supporting" sex trafficking, and about what level of awareness of sex trafficking the participant must have.

As is obvious, these expansions of the law are fraught with vague and ambiguous terms that have created great uncertainty about what kind of online speech is now illegal. FOSTA does not define "facilitate," "promote," "contribute to sex trafficking," "assisting," or "supporting" – but the inclusion of all of these terms shows that Congress intended the law to apply expansively. Plaintiffs thus reasonably fear it will be applied to them. Plaintiffs Woodhull Freedom Foundation and Human Rights Watch advocate for the decriminalization of sex work, both domestically and internationally. It is unclear whether that advocacy is considered "facilitating" prostitution under FOSTA. Plaintiffs Woodhull and Alex Andrews offer substantial resources online to sex workers, including important health and safety information. This protected speech, and other harm reduction efforts, can also be seen as "facilitating" prostitution. And although each of the plaintiffs vehemently opposes sex trafficking, Congress's expressed sense in passing the law was that sex trafficking and sex work are "inextricably linked." Plaintiffs are thus legitimately concerned that their advocacy on behalf of sex workers will be seen as being done in reckless disregard of some "contribution to sex trafficking."

The third change significantly undercut the protections of one of the Internet’s most important laws, 47 U.S.C. § 230, originally a provision of the Communications Decency Act, commonly known simply as Section 230 or CDA 230:

  • FOSTA significantly undermined the legal protections intermediaries had under 47 U.S.C. § 230, commonly known simply as Section 230. Section 230 generally immunized intermediaries from liability arising from content created by others—it was thus the chief protection that allowed Internet platforms for user-generated content to exist without having to review every piece of content posted to them for potential legal liability. FOSTA undercut this immunity in three significant ways. First, Section 230 already had an exception for violations of federal criminal law, so the expansion of federal criminal law described above automatically expanded that exception as well. Second, FOSTA eliminated the immunity for state criminal prosecutions under state laws that mirror the violations of federal law. And third, FOSTA allows lawsuits by individual civil litigants.

The possibility of these state criminal prosecutions and private civil lawsuits is very troubling. FOSTA vastly magnifies the risk an Internet host bears of being sued. Whereas federal prosecutors typically pick carefully which violations of law they pursue, the far more numerous state prosecutors may be prone to less selective prosecutions. And civil litigants often do not carefully consider the legal merits of an action before pursuing it in court. Past experience teaches us that they might file lawsuits merely to intimidate a speaker into silence – the cost of defending even a meritless lawsuit being quite high. Lastly, whereas the U.S. Department of Justice may offer clarifying interpretations of a federal criminal law that address concerns with its ambiguity, those interpretations are not binding on state prosecutors or the millions of potential private litigants.

FOSTA Has Already Censored The Internet

As a result of these hugely increased risks of liability, many platforms for online speech have shuttered or restructured. The following are just two examples:

  • Two days after the Senate passed FOSTA, Craigslist eliminated its Personals section, including non-sexual subcategories such as "Missed Connections" and "Strictly Platonic." Craigslist attributed this change to FOSTA, explaining "Any tool or service can be misused.  We can’t take such risk without jeopardizing all our other services, so we are regretfully taking craigslist personals offline.  Hopefully we can bring them back some day." Craigslist also shut down its Therapeutic Services section and will not permit ads that were previously listed in Therapeutic Services to be re-listed in other sections, such as Skilled Trade Services or Beauty Services.
  • VerifyHim formerly maintained various online tools that helped sex workers avoid abusive clients. It described itself as "the biggest dating blacklist database on earth."  One such resource was JUST FOR SAFETY, which had screening tools designed to help sex workers check to see if they might be meeting someone dangerous, create communities of common interest, and talk directly to each other about safety.  Following passage of FOSTA, VerifyHim took down many of these tools, including JUST FOR SAFETY, and explained that it is "working to change the direction of the site."

Plaintiff Eric Koszyk is a certified massage therapist running his own non-sexual massage business as his primary source of income. Prior to FOSTA he advertised his services exclusively in Craigslist's Therapeutic Services section. That forum is no longer available, and because he is unable to run his ad anywhere else on the site, his business has been seriously harmed.

Plaintiff the Internet Archive fears that, on account of FOSTA's changes to Section 230, it can no longer rely on that law to bar liability for content created by third parties and hosted by the Archive—content that comprises the vast majority of material in the Archive's collection. The Archive is concerned that some of that third-party content, such as archives of particular websites, information about books, and the books themselves, could be construed as promoting or facilitating prostitution, or assisting, supporting, or facilitating sex trafficking under FOSTA's expansive terms.

Plaintiff Alex Andrews maintains the website RateThatRescue.org, a sex worker-led, public, free, community effort to share information about both the organizations and services on which sex workers can rely, and those they should avoid. Because the site consists largely of user-generated content, Andrews relies on Section 230's protections. She is concerned that FOSTA now exposes her to potentially ruinous civil and criminal liability, and she has suspended work on an app that would offer harm reduction materials to sex workers.

Human Rights Watch relies heavily on individuals spreading its reporting and advocacy through social media. It is concerned that social media platforms and websites that host, disseminate, or allow users to spread its reports and advocacy materials may be inhibited from doing so because of FOSTA.

And many, many others—advocates, service providers, platforms, and platform users—have been experiencing the same uncertainty and fear of prosecution since FOSTA became law.

We have asked the court to preliminarily enjoin enforcement of the law so that the plaintiffs and others can exercise their First Amendment rights until the court can issue a final ruling. But there is another urgent reason to halt enforcement of the law. Plaintiff Woodhull Freedom Foundation is holding its annual Sexual Freedom Summit August 2-5, 2018. As in past years, the Summit features a track on sex work, this year titled "Sex as Work," that seeks to advance and promote the careers, safety, and dignity of individuals engaged in professional sex work. In presenting and promoting the Sexual Freedom Summit, and the Sex Work Track in particular, Woodhull operates and uses interactive computer services in numerous ways: it uses online databases and cloud storage services to organize, schedule, and plan the Summit; it exchanges emails with organizers, volunteers, website developers, promoters, and presenters during all phases of the Summit; it promotes the titles of all workshops on its Summit website; and it publishes the biographies and contact information of workshop presenters on its website, including those for the sex workers participating in the Sex Work Track and other tracks. Is publishing the name and contact information of a sex worker "facilitating the prostitution of another person"? If it is, FOSTA makes it a crime.

Moreover, most, if not all, of the workshops are also promoted by Woodhull on social media such as Facebook and Twitter; and Woodhull wishes to stream the Sex Work Track on Facebook, as it does other tracks, so that those who cannot attend can benefit from the information and commentary.

Without an injunction, the legality under FOSTA of all of these practices is uncertain. The preliminary injunction is necessary so that Woodhull can conduct the Sex as Work track without fear of prosecution.

It is worth emphasizing that Congress was repeatedly warned that it was passing a law that would censor far more speech than was necessary to address the problem of sex trafficking, and that the law would indeed hinder law enforcement efforts and pose great dangers to sex workers. During the Congressional debate on FOSTA and SESTA, anti-trafficking groups such as Freedom Network and the International Women’s Health Coalition issued statements warning that the laws would hurt efforts to aid trafficking victims, not help them.

Even Senator Richard Blumenthal, an original cosponsor of SESTA (the Senate bill), criticized the new Mann Act provision when it was proposed in the House bill, telling Wired that "there is no good reason to proceed with a proposal that is opposed by the very survivors it claims to support." Nevertheless, Senator Blumenthal ultimately voted to pass FOSTA.

In support of the preliminary injunction, we have submitted the declarations of several experts who confirm the harmful effects FOSTA is already having: sex workers are being driven back to far more dangerous street-based work as online classified sites disappear; online "bad date lists" that informed sex workers of risks associated with certain clients have been lost; and sex trafficking has become less visible to law enforcement, which can no longer scour and analyze the formerly public websites where it had been advertised. For more information see the Declarations of Dr. Alexandra Lutnick, Prof. Alexandra Frell Levy, and Dr. Kimberly Mehlman-Orozco.

The power of the Internet historically arose from its edges: innovation, growth, and freedom came from its users and their contributions, rather than from some centrally controlled core of overseers. But today, for an increasing number of users, there is a powerful center to the net—and a potentially uncompetitive and unrepresentative center at that.


The Internet itself is still vast and complex, enabling billions of users to communicate regardless of their physical location. Billions of websites, apps, and nearly costless communications channels remain open to all. Yet too many widely relied-upon functions are now controlled by a few giant companies. Worse, unlike previous technology cycles, the dominance of these companies has proven to be sticky. It's still easy and cheap to put up a website, build an app, or organize a group of people online—but a few large corporations dominate the key resources needed to do those things. That, in turn, gives those companies extraordinary power over speech, privacy, and innovation.

Some Specifics

Google and Facebook dominate the tools of information discovery and the advertising networks that track users’ every move across much of the Western world. Along with Apple, Microsoft, Twitter, and a few similar companies, they moderate an enormous volume of human communication. This gives them extraordinary power to censor and to surveil.

Amazon dominates online retail in the United States and back-end hosting across much of the globe, making it a chokepoint for a broad range of other services and activities. A few credit card networks process most online payments, giving them the power to starve any organization that relies on sales or donations. Even more fundamentally, most people in the U.S. have little or no ability to choose which company will connect them to the Internet in the first place. That gives a few broadband ISPs the power to block, throttle, and discriminate against Internet users.

Civil Liberties at Stake

A lack of competition and choice impacts nearly every facet of Internet users’ civil liberties. When so much of our interaction with friends, family, and broader social circles happens on Facebook, its arrangement and takedowns of content matter. When so much search happens on Google, and so much video discovery on YouTube, their rankings of results and recommendations matter. When Google, Facebook, and Amazon amass a huge trove of people’s communications as well as data about purchases, physical movements, and Internet use, their privacy policies and practices matter. When Comcast and AT&T are the only options for fixed broadband Internet access for millions of people, their decisions to block, throttle or prioritize certain traffic matter.

The influence of these companies is so great that their choices can impact our lives as much as any government’s. And as Amazon’s recent sale of facial recognition technology to local police demonstrates, the distance between the big tech companies and government is shrinking.

Diverse Voices Need Diverse Options

Careful action to bring a variety of options back in these important portions of the Internet could re-empower users. Competition—combined with and fostered by meaningful interoperability and data portability—could let users vote with their feet by leaving a platform or service that isn’t working for them and taking their data and connections to one that does. That would encourage companies to work to keep their users rather than hold them hostage.


More competition can also strengthen civil liberties. Innovators could develop alternative apps and platforms that safeguard their users’ speech, protect their privacy, foster community, and promote constructive debate, confident that those tools will have a level playing field to reach potential users. And those alternatives don’t have to be commercial: decentralized, federated, or other co-operative solutions can put power back into the hands of their users, giving them the ability to change and adapt tools.

Increasing competition by itself won’t fix all of these problems. But it’s one of the few strategies that, if handled correctly by courts and policymakers, has the promise of opening up space for innovation from the bottom up, driven by individuals, small businesses, and communities with great ideas.

The good news is some competition does exist. We have surveillance-free search by companies like DuckDuckGo and Qwant, open source social media tools like Mastodon and Secure Scuttlebutt, independent services like Snapchat and Yelp, and competitive ISPs like Sonic, just to name a few.  But many of these are under threat from the giants, and many, many more options are needed.

Antitrust Law Needs A Shot in the Arm

So how do we get there from here? One avenue may be antitrust law (or, internationally, competition law), which is supposed to focus directly on promoting competition and avoiding abuses of monopoly power. Unfortunately, although it was once a powerful "charter of freedom," U.S. antitrust law has lost its vigor in recent years, through a combination of lax enforcement and a narrow judicial and academic focus on avoiding higher short-term prices for consumers. The result is that current doctrine has little to say about the consolidation of power, and the resulting reduction in user choice, on an Internet where powerful services' business models make them seemingly "free" to users. The recent court decision approving the merger of AT&T and Time Warner is just the latest example.

Still, antitrust enforcement has played an important role in the Internet’s development. The explosive growth of the Internet in the 1990s owes a lot to the Department of Justice’s breakup of AT&T’s telephone monopoly in the ‘80s. That antitrust action spurred ISPs to use the telephone system to connect people to the Internet. And the government’s antitrust case against Microsoft over its abuse of the Windows operating system monopoly (1998-2002), though ultimately unsuccessful, did effectively force the company to abandon its practice of strangling newer competitors in their infancy (including the nascent Google and Amazon).

Today, voices from across the political spectrum are looking at new approaches to antitrust. The Federal Trade Commission, one of the U.S. government’s antitrust enforcers, has announced a series of public hearings on updating antitrust enforcement for today’s Internet, to take place in September. This is welcome, and we’ll be joining the conversation.


A fresh look at U.S. antitrust doesn’t require abandoning a rigorous approach grounded in economics and practical experience. Declines in the quality of products and services are a harm that antitrust law recognizes. And as EFF has long advocated, avoiding censorship and protecting users’ privacy are at the heart of any definition of quality for a digital service or product.

As a start, we encourage closer scrutiny of proposed mergers and acquisitions. Restraining Internet giants’ ability to squash new competitors can help allow new services and platforms to arise, including ones that are not based on a surveillance business model. We also need new ways to measure and describe the harms of censorship and loss of privacy as a basis for antitrust analysis. Where these harms flow from abuse of monopoly power, or improper attempts to gain or maintain such power, regulators may need to consider breaking up companies as well.

Competition Impacts of CFAA, DMCA 1201, and Terms of Service

Antitrust isn’t the only area of law that has a role to play here. EFF has long battled three legal doctrines that have been misused to thwart competition:  the Computer Fraud and Abuse Act (CFAA), section 1201 of the Digital Millennium Copyright Act (DMCA), and the unthinking enforcement of website terms of service.

The CFAA (and its state law counterparts) has been used to threaten interoperable tools. For example, Facebook sued a company, Power Ventures, for creating and deploying a tool that let users effectively unify their social media feeds and contact lists.

A variety of companies, from major entertainment companies to printer manufacturers, have used the DMCA to try to control the design and functionality of our devices.

And big Internet companies use overreaching terms of service to prohibit reverse engineering and similar activities, blocking competitors who would build upon and interact with existing services.

Any effort to spur competition needs to include reform of these legal tools.

Bad Solutions Are Not Solutions

It’s tricky to ensure that rules created to curb the Internet giants don’t cement their dominance. We’ve already seen proposals for government-imposed "platform neutrality," or filtering mandates like the EU’s proposed Article 13, which would require prohibitively expensive (and ineffective) forms of editorial control. We’ve also heard calls to further erode Section 230 of the Communications Decency Act, which protects Internet services from liability for the actions of their users. These proposals, which do not seem to be grounded in any evidence that Section 230 aids in the creation of monopoly power, paradoxically threaten to impose costly burdens that only the Googles and Facebooks of the world can meet.

Vigorous merger reviews, and potentially breaking up companies that use their dominance in one area to squelch competition in another, may be better alternatives. Other alternatives may arise from public pressure for the companies to use open standards for interaction between social networks, and the ability to use independently developed, user-empowering tools in connection with online services.

Promoting Diversity and Competition To Protect Speech and Privacy

In the coming weeks, we'll be delving into these areas: exploring what future competition and diversity in the networked world should look like, how interoperability and data portability could be achieved, and how antitrust, along with other tools, can help get us there. For people who care about speech, privacy, and innovation, it's time to take a hard look at how to level the playing field, reduce the power of the large platforms, and foster the re-emergence of a multiplicity of online services and tools that serve and empower us, not the other way around. Our free speech, privacy, and continued innovation depend on it.

"YouTube keeps deleting evidence of Syrian chemical weapon attacks"

"Azerbaijani faces terrorist propaganda charge in Georgia for anti-Armenian Facebook post"

"Medium Just Took Down A Post It Says Doxed ICE Employees"

These are just a sampling of recent headlines relating to the regulation of user-generated online content, an increasingly controversial subject that has civil society and Silicon Valley at loggerheads. Through Onlinecensorship.org and various other projects—including this year’s censorship edition of our annual Who Has Your Back? report—we’ve highlighted the challenges and pitfalls that companies face as they seek to moderate content on their platforms. Over the past year, we’ve seen this issue come into the spotlight through advocacy initiatives like the Santa Clara Principles, media such as the documentary The Cleaners, and now, featured in the latest report by Professor David Kaye, the United Nations' Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression.

Toward greater freedom, accountability, and transparency

The Special Rapporteur’s latest is the first-ever UN report to focus on the regulation of user-generated content online, and comes at a time of heated debate on the impact of disinformation, extremism, and hateful speech. The report focuses on the obligations of both State actors and ICT companies. It aims at finding user-centered, human rights law-aligned approaches to content policy-making, transparency, due process, and governance on platforms that host user-generated content.

Recognizing the complexities of governing such platforms at a time of conflicting interests and views about freedom of speech, the Special Rapporteur proposes a "framework for the moderation of user-generated online content that puts human rights at the very center" and investigates throughout the report the sometimes-conflicting laws, regulatory frameworks and other governance models that seek to find balance in corporate moderation practices. The report focuses on freedom of expression while acknowledging "the interdependence of rights, such as the importance of privacy as a gateway to freedom of expression."

Noting that "few companies apply human rights principles in their operations," the Special Rapporteur argues for companies to incorporate the UN Guiding Principles on Business and Human Rights into their operations (the Manila Principles on Intermediary Liability similarly call for their adoption). The Guiding Principles, which were endorsed by the UN Human Rights Council in 2011, provide a standard for States and companies to prevent and address the risk of adverse impacts on human rights.

The Special Rapporteur looks closely at the ways in which both government regulation and company content moderation practices can limit freedom of expression for users of platforms. The report specifically delves into areas of concern around content standards (from vague rules, hateful and abusive speech, and lack of context in adjudicating content decisions to real-name requirements and anonymity, and disinformation), as well as the processes and tools used by companies to moderate content (automated flagging, user and trusted flagging, human evaluation, action taken on accounts, notification given to users, and appeal and remedies).

The report further looks at the various modes of transparency (or lack thereof) undertaken by companies. Echoing our recent Who Has Your Back? research and a submission from our colleagues at Ranking Digital Rights, the Special Rapporteur notes that companies disclose "the least amount of information about how private rules and mechanisms for self- and co-regulation are formulated and carried out." As we have previously noted, most companies avoid transparency when it comes to their own proprietary content rules and practices.


The Special Rapporteur’s report—which additionally cites Onlinecensorship.org research and EFF’s own submission—puts forward a set of robust, even radical recommendations for companies (as well as a slightly more standard set of recommendations for State actors).

Private norms have created unpredictable environments for users, who often don’t know or understand how their speech is governed on private platforms. Similarly, national laws like Germany’s NetzDG create divisions on the inherently global internet. The Special Rapporteur argues that human rights standards could provide a framework capable of holding companies accountable to users worldwide.

Specifically, the Special Rapporteur recommends that terms of service and content policy models should move away from a "discretionary approach rooted in generic and self-serving ‘community’ needs" (indeed, companies all too often throw around the term "community" to refer to billions of diverse users with little in common) and adopt policy commitments that enable users to "develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law."

Furthermore, companies should develop tools that "prevent or mitigate the human rights risks caused by national laws or demands inconsistent with international standards." In a closing set of recommendations, the Special Rapporteur argues that companies should:

  • Practice meaningful transparency: Company reporting about State requests should be supplemented with granular data concerning the types of requests received and actions taken, with specific examples where possible (see our recent Who Has Your Back? report for where popular companies rank on this). Transparency reporting should also cover government demands made under TOS and account for public-private initiatives such as the EU Code of Conduct on countering extremism.
  • Implement safeguards to mitigate risks to freedom of expression posed by the development and enforcement of their own policies. Companies should engage in consultations with civil society and users, particularly in the Global South. Such consultations could help companies recognize how "seemingly benign or ostensibly ‘community-friendly’ rules may have significant, ‘hyper-local’ impacts on communities."
  • Be transparent about how they make their rules. They should at least seek comment on their impact assessments and should clearly communicate to the public the rules and processes that produced them.
  • Ensure that any automated technology employed in content moderation is rigorously audited, that users are given the ability to challenge content actions through a robust appeals mechanism, and that users can remedy the "adverse impacts" of decisions.
  • Allow for user autonomy through relaxed rules in affinity-based closed groups, the ability to mute or block other users or specific types of content, and even the ability to moderate their own content in private groups.
  • Develop transparency initiatives that explain the impact of their various moderation tools. A social media council—an idea detailed at length by Article 19 here—"could be a credible and independent mechanism to develop [such] transparency."

Lastly, the Special Rapporteur argues that this is "a moment for radical transparency, meaningful accountability and a commitment to remedy in order to protect the ability of individuals to use online platforms as forums for free expression, access to information and engagement in public life."

We couldn’t agree more. This is the time for companies to rethink content regulations, consider the implications of the status quo, and work toward creating an environment in which users are able to freely express themselves. States should repeal laws criminalizing or restricting expression, and refrain from establishing laws requiring the proactive monitoring or filtering of content, as well as models of regulation where government agencies become the arbiters of lawful expression.

Finally, as the Special Rapporteur argues, it’s time for tech companies to recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, and to re-evaluate their content standards accordingly. Companies must become more transparent and accountable to their users, and ensure that the right to due process and remedy is enshrined in their policies.

SAN FRANCISCO - Stephanie Lenz and Universal Music Publishing Group (UMPG) today announced they have amicably resolved Lenz v. Universal, the widely followed litigation sometimes referred to as the "Dancing Baby" case. Lenz filed her complaint in 2007, after UMPG requested the removal of a video in which Lenz’s then-toddler-aged son dances to music playing in the background.

David Kokakis, UMPG’s Chief Counsel, said, "UMPG takes great pride in protecting the rights of our songwriters. Inherent in that objective is our desire to take a thoughtful approach to enforcement matters. The Lenz case helped us to develop a fair and tempered process for evaluation of potential takedowns."

"From what I have seen, UMPG's current takedown review process is much better," said Stephanie Lenz. "If UMPG's current processes had been in place eleven years ago when I posted my video of my young son dancing, I probably wouldn’t have had to contact the Electronic Frontier Foundation."

About Stephanie Lenz

Stephanie Lenz is a writer and editor, and the child in the video—Holden Lenz—is now 12 years old and in middle school. Lenz is represented pro bono by the Electronic Frontier Foundation (EFF) and by Michael Kwun at Kwun Bhansali Lazarus LLP (formerly at Keker, Van Nest & Peters LLP).

About Universal Music Publishing Group

Universal Music Publishing Group (UMPG) is a leading global music publisher with 44 offices in 37 countries. Headquartered in Los Angeles, UMPG represents music across every genre from some of the world’s most important songwriters and catalogs. These include ABBA, Adele, Jhené Aiko, Alabama Shakes, Alex Da Kid, Axwell & Ingrosso, J Balvin, Bastille, Beach Boys, Beastie Boys, Bee Gees, Irving Berlin, Leonard Bernstein, Jeff Bhasker, Justin Bieber, Benny Blanco, Chris Brown, Kane Brown, Mariah Carey, Michael Chabon, Desmond Child, The Clash, Coldplay, J. Cole, Elvis Costello, Miley Cyrus, Jason Derulo, Alexandre Desplat, Neil Diamond, Disclosure, Dua Lipa, Danny Elfman, Eminem, Gloria and Emilio Estefan, Florence + the Machine, Future, Martin Garrix, Selena Gomez, Ariana Grande, Al Green, HAIM, Halsey, Emile Haynie, Jimi Hendrix, Don Henley, Kacy Hill, Hit-Boy, Sam Hunt, Imagine Dragons, Carly Rae Jepsen, Jeremih, Tobias Jesso Jr., Billy Joel, Elton John/Bernie Taupin, Joe Jonas, Nick Jonas, Lil Yachty, Linkin Park, Demi Lovato, the Mamas & the Papas, Steve Mac, Maroon 5, Shawn Mendes, Metro Boomin, Miguel, Nicki Minaj, Stephan Moccio, Mumford & Sons, Jimmy Napes, Randy Newman, New Order, Ne-Yo, Pearl Jam, Rudy Perez, Post Malone, Otis Redding, R.E.M., Rex Orange County, Carole Bayer Sager, Gustavo Santaolalla, Sex Pistols, Carly Simon, Paul Simon, Britney Spears, Bruce Springsteen, Stax (East Memphis Music), Harry Styles, SZA, Shania Twain, Justin Timberlake, U2, Keith Urban, Troy Verges, Diane Warren, Jack White, Zedd and many more.

Andy Fixmer
Universal Music Group 
+1 310-865-0132

Rebecca Jeschke
Electronic Frontier Foundation
+1 415-436-9333 x177 

Litigation can always take twists and turns, but when EFF filed a lawsuit against Universal Music Group in 2007 on behalf of Stephanie Lenz, few would have anticipated it would be ten years until the case was finally resolved. But today, at last, it is. Along the way, Lenz v. Universal helped strengthen fair use law and brought nationwide attention to the issues of copyright and fair use in new digital movie-making and sharing technologies.

It all started when Lenz posted a YouTube video of her then-toddler-aged son dancing while Prince’s song "Let's Go Crazy" played in the background, and Universal used copyright claims to get the link disabled. We brought the case hoping to get some clarity from the courts on a simple but important issue: can a rightsholder use the Digital Millennium Copyright Act to take down an obvious fair use, without consequence?

Congress designed the DMCA to give rightsholders, service providers, and users relatively precise rules of the road for policing online copyright infringement. The center of the scheme is the notice and takedown process. In exchange for substantial protection from liability for the actions of their users, service providers must promptly take down content on their platforms that has been identified as infringing and follow several other prescribed steps. Copyright owners, for their part, are given an expedited, extra-judicial procedure for obtaining redress against alleged infringement, paired with explicit statutory guidance regarding the process for doing so, and provisions designed to deter and ameliorate abuse of that process.

Without Section 512, the risk of crippling liability for the acts of users would have prevented the emergence of most of the social media outlets we use today. Instead, the Internet has become the most revolutionary platform for the creation and dissemination of speech that the world has ever known.

But Congress also knew that Section 512’s powerful incentives could also result in lawful material being censored from the Internet, without prior judicial scrutiny—much less advance notice to the person who posted the material—or an opportunity to contest the removal. To inhibit abuse, Congress made sure that the DMCA included a series of checks and balances, including Section 512(f), which gives users the ability to hold rightsholders accountable if they send a DMCA notice in bad faith.

In this case, Universal Music Group claimed to have a good faith belief that Ms. Lenz’s video of her child dancing to a short segment of barely-audible music infringed copyright. Yet the undisputed facts showed Universal never considered whether Ms. Lenz’s use was lawful under the fair use doctrine. If it had done so, it could not reasonably have concluded her use was infringing.  On behalf of Stephanie Lenz, EFF argued that this was a misrepresentation in violation of Section 512(f).

In response, Universal argued that rightsholders have no obligation to consider fair use at all. The U.S. Court of Appeals for the Ninth Circuit rejected that argument, correctly holding that the DMCA requires a rightsholder to consider whether the uses she targets in a DMCA notice are actually lawful under the fair use doctrine. However, the court also held that a rightsholder’s determination on that question passes muster as long as she subjectively believes it to be true. This leads to a virtually incoherent result: a rightsholder must consider fair use, but has no incentive to actually learn what such a consideration should entail. After all, if she doesn’t know what the fair use factors are, she can’t be held liable for not applying them thoughtfully.

We were disappointed in that part of the ruling, but it came with a big silver lining: the court also held that fair use is not simply a narrow defense to copyright infringement but an affirmative public right. For decades, rightsholders and scholars had debated the issue, with many preferring to construe fair use as narrowly as possible. Thanks to the Lenz decision, courts will be more likely to think of fair use, correctly, as a crucial vehicle for achieving the real purpose of copyright law: to promote the public interest in creativity and innovation. And rightsholders are on notice: they must at least consider fair use before sending a takedown notice.

Lenz and Universal filed petitions requesting that the Supreme Court review the Ninth Circuit’s ruling. The Supreme Court denied both petitions. This meant that the case returned to the district court for trial on the question of whether Universal’s takedown was a misrepresentation under the Ninth Circuit’s subjective standard. Rather than go to trial, the parties have agreed to a settlement.

Lenz v. Universal helped make some great law on fair use and also played a role in leading to better takedown processes at Universal. EFF congratulates Stephanie Lenz for fighting the good fight, and we thank our co-counsel at Keker, Van Nest & Peters LLP and Kwun Bhansali Lazarus LLP for being our partners through this long journey. And in case you are wondering: that toddler dancing in the original video is now in middle school.


Every three years, the US Copyright Office undertakes an odd ritual: they allow members of the public to come before their officials and ask for the right to use their own property in ways that have nothing to do with copyright law.

It's a strange-but-true feature of American life. Blame Congress. When they enacted the Digital Millennium Copyright Act in 1998, they included Section 1201, a rule that bans people from tampering with copyright controls on their devices. That means that manufacturers can use copyright controls to stop you from doing legitimate things, like taking your phone to an independent service depot; or modifying your computer so that you can save videos to use in remixes or to preserve old games. If doing these legal things requires that you first disable or remove a copyright control system, they can become illegal, even when you're using your own property in the privacy of your own home.

But every three years, the American people may go before the Copyright Office and ask for the right to do otherwise legal things with their own property, while lawyers from multinational corporations argue that this should not happen.

The latest round of these hearings took place in April, and of course, EFF was there, with some really cool petitions (as dramatized by the science fiction writers Mur Lafferty, John Scalzi, and Cory Doctorow [ahem]), along with many of our friends and allies, all making their own pleas for sanity in copyright law.

We commemorated the occasion with a collection of short video conversations between me and our pals. Here's a little guide:

We will learn the fate of all our petitions later this year, when the Copyright Office makes its recommendations and the Librarian of Congress decides. In the meantime, let's remember what's at stake here: the right to use the things you own in ways that make sense to you, not to the shareholders of distant and unaccountable corporations.

Today we’re announcing the launch of STARTTLS Everywhere, EFF’s initiative to improve the security of the email ecosystem.

Thanks to previous EFF efforts like Let's Encrypt and Certbot, as well as help from the major web browsers, we've seen significant wins in encrypting the web. Now we want to do for email what we’ve done for web browsing: make it simple and easy for everyone to help ensure their communications aren’t vulnerable to mass surveillance.

Note that this is a high-level, general post about STARTTLS Everywhere. If you’d like a deeper dive intended for mailserver admins, with all the technical details and caveats, click here.

It’s important to note that STARTTLS Everywhere is designed to be run by mailserver admins, not regular users. No matter your role, you can join in the STARTTLS fun and find out how secure your current email provider is at:


Enter your email domain (the part of your email address after the "@" symbol), and we’ll check if your email provider has configured their server to use STARTTLS, whether or not they use a valid certificate, and whether or not they’re on the STARTTLS Preload List—all different indications of how secure (or vulnerable) your email provider is to mass surveillance.
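Under the hood, a check like that boils down to connecting to the domain's mailservers and seeing whether they advertise STARTTLS. Here's a rough sketch in Python, not EFF's actual checker; the helper names are made up, and a real check would first resolve the domain's MX records rather than probing the bare domain:

```python
import smtplib

def email_domain(address: str) -> str:
    """Return the domain portion of an email address (the part after '@')."""
    return address.rsplit("@", 1)[-1].lower()

def probe_starttls(host: str, port: int = 25, timeout: float = 10.0) -> bool:
    """Connect to an SMTP host and report whether it advertises STARTTLS.

    Requires outbound network access on port 25, which many residential
    ISPs block, so expect this to work only from a server environment.
    """
    with smtplib.SMTP(host, port, timeout=timeout) as smtp:
        smtp.ehlo()
        return smtp.has_extn("starttls")

if __name__ == "__main__":
    domain = email_domain("user@example.com")
    print(f"Next step: resolve MX records for {domain!r} and probe each host")
```

Advertising STARTTLS is only the first of the indications listed above; checking the certificate and the preload list are separate steps.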

Wait, Email Is Vulnerable to Mass Surveillance?

Email relies on something called the Simple Mail Transfer Protocol, or SMTP. SMTP is the technical language email servers use to communicate with each other. It was one of the very first application protocols developed for the Internet. It’s even older than HTTP, the protocol your browser uses to talk to webservers when you want to load a website!

Just like HTTP, SMTP was not developed with encryption or authentication in mind, as the trust model on the Internet today is starkly different from what it was in the 70s. Like regular old snail mail, senders can write whatever they want in the "From:" field, or even choose to omit it. And in the same way your post office or your postal carrier can read what you write on a postcard, machines responsible for delivering emails can read their contents, as can anyone who’s watching the traffic they send and receive. But unlike regular mail, the cost of sending emails, spoofing emails, collecting copies of emails, and altering emails in-transit is extremely low.
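To make the postcard analogy concrete: the "From:" header is just text the sender types, and nothing in SMTP itself checks it (separate systems like SPF and DKIM were bolted on later to detect this). A quick illustration with Python's standard email library:

```python
from email.message import EmailMessage

# Nothing here is verified: SMTP will carry whatever "From:" the sender
# chooses, just as a postcard carries whatever return address you write.
msg = EmailMessage()
msg["From"] = "someone.else@example.org"   # an address the sender doesn't own
msg["To"] = "recipient@example.com"
msg["Subject"] = "Hello"
msg.set_content("The From header above was chosen freely by the sender.")

print(msg["From"])  # someone.else@example.org
```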

That means that without encryption, government agencies that perform mass surveillance, like the NSA, can easily sweep up and read everyone’s emails—no hacking or breaking encryption necessary.


STARTTLS is an extension to SMTP that allows one email server to say to the other, "I want to deliver this email to you over an encrypted communications channel." The recipient email server can then say "Sure! Let’s negotiate an encrypted communications channel." The two servers then set up the channel and the email is delivered securely, so that anybody listening in on their traffic only sees encrypted data. In other words, network observers gobbling up worldwide information from Internet backbone access points (like the NSA or other governments) won't be able to see the contents of messages while they’re in transit, and will need to use more targeted, low-volume methods.
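Concretely, the recipient advertises STARTTLS as one of the capability lines in its EHLO response, and the sender scans for it. A toy parser of that capability list (a simplified illustration with a made-up helper name, not production code):

```python
def supports_starttls(ehlo_response: str) -> bool:
    """Check whether an EHLO response advertises the STARTTLS extension.

    Each line of an EHLO response looks like '250-KEYWORD' or, for the
    final line, '250 KEYWORD'.
    """
    for line in ehlo_response.splitlines():
        # Strip the four-character '250-' / '250 ' status prefix,
        # then compare the first keyword on the line.
        keyword = line[4:].split(" ")[0].strip().upper()
        if keyword == "STARTTLS":
            return True
    return False

example = "250-mail.example.com\n250-SIZE 35882577\n250-STARTTLS\n250 SMTPUTF8"
print(supports_starttls(example))  # True
```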

It’s important to note that if you don’t trust your mail provider and don’t want them to be able to read your emails, STARTTLS isn’t enough. That’s because STARTTLS only provides hop-to-hop encryption, not end-to-end. For example, if a Gmail user sends email to an EFF staffer, the operators of the Google and EFF mailservers can read and copy the contents of that email even if STARTTLS is negotiated perfectly. STARTTLS only encrypts the communications channel between the Google and EFF servers so that an outside party can’t see what the two say to each other—it doesn’t affect what the two servers themselves can see.

Thus, STARTTLS is not a replacement for secure end-to-end solutions. Instead, STARTTLS allows email service providers and administrators to provide a baseline measure of security against outside adversaries.

Thanks to multiple efforts over the years, effective STARTTLS encryption is as high as 89% according to Google's Email Transparency Report—a big improvement from 39% just five years ago.

Great! So if STARTTLS Exists Everything’s Fine, Right?

Unfortunately, STARTTLS has some problems. Although many mailservers enable STARTTLS, most still do not validate certificates. Just like in HTTPS, certificates are what a server uses to prove it really is who it says it is. Without certificate validation, an active attacker on the network can get between two servers and impersonate one or both, allowing that attacker to read and even modify emails sent through your supposedly "secure" connection. Since it’s not common practice for email servers to validate certificates, there’s often little incentive to present valid certificates in the first place.

As a result, the ecosystem is stuck in a sort of chicken-and-egg problem: no one validates certificates because the other party often doesn’t have a valid one, and the long tail of mailservers continue to use invalid certificates because no one is validating them anyway.

Additionally, even if you configure STARTTLS perfectly and use a valid certificate, there’s still no guarantee your communication will be encrypted. That’s because when a sending email server says, "I want to deliver this email to you over an encrypted communications channel," that message is unencrypted. This means network attackers can jump in and block that part of the message, so that the recipient server never sees it. As a result, both servers think the other doesn’t support STARTTLS. This is known as a "downgrade attack," and ISPs in the U.S. and abroad have been caught doing exactly this. In fact, in 2014 several researchers found that STARTTLS encryption on outbound email from several countries was being regularly stripped.

STARTTLS Everywhere to the Rescue!

That’s where STARTTLS Everywhere comes in.

STARTTLS Everywhere provides software that a sysadmin can run on an email server to automatically get a valid certificate from Let’s Encrypt. This software can also configure their email server software so that it uses STARTTLS, and presents the valid certificate to other email servers. Finally, STARTTLS Everywhere includes a "preload list" of email servers that have promised to support STARTTLS, which can help detect downgrade attacks. The net result: more secure email, and less mass surveillance.

Mailserver admins can read more about how STARTTLS Everywhere’s list is designed, how to run it on your mailserver, and how to get your mailserver added to the preload list.

If you appreciate the work we’ve done on STARTTLS Everywhere, you can also donate to EFF! Your contribution will help further the development of projects like STARTTLS Everywhere that help raise everyone’s level of security.

Donate to EFF

With all that we have accomplished together to improve the state of encrypted communications on the Internet, it’s about time we focus on upgrading email, the backbone of communication for a large part of the world. STARTTLS Everywhere is a natural step in that direction, but there’s still plenty of work to do, so let’s get hopping on hop-to-hop encryption!

A Technical Deep Dive into STARTTLS Everywhere
Mon, 25 Jun 2018 16:13:53 +0000

Note that this is a technical deep dive into EFF’s new STARTTLS Everywhere project, which assumes familiarity with SMTP and STARTTLS. If you’re not familiar with those terms, you should first read our post intended for a general audience, available here.

The State of Email Security

There are two primary security models for email transmission: end-to-end, and hop-to-hop. Solutions like PGP and S/MIME were developed as end-to-end solutions for encrypted email, which ensure that only the intended recipient can decrypt and read a particular message.

Unlike PGP and S/MIME, STARTTLS provides hop-to-hop encryption (TLS for email), not end-to-end. Without requiring configuration on the end-user's part, a mailserver with STARTTLS support can protect email from passive network eavesdroppers. For instance, network observers gobbling up worldwide information from Internet backbone access points (like the NSA or other governments) won't be able to see the contents of messages, and will need more targeted, low-volume methods. In addition, if you are using PGP or S/MIME to encrypt your emails, STARTTLS prevents metadata leakage (like the "Subject" line, which is often not encrypted by either standard) and can negotiate forward secrecy for your emails.

Thanks to multiple efforts over the years, effective STARTTLS encryption is as high as 89% according to Google's Email Transparency Report—a big improvement from 39% just five years ago.

However, as we explain in our general STARTTLS Everywhere announcement, STARTTLS has some problems.

Nobody Validates Certificates, and It’s Hard to Blame Them

Although many mailservers enable STARTTLS, most still do not validate certificates. Without certificate validation, an active attacker on the network can read and even modify emails sent through your supposedly "secure" connection. Since it’s not common practice to validate certificates, there’s often little incentive to present valid certificates in the first place. A brief experiment on Censys shows that about half of the mailservers that support STARTTLS use self-signed certificates.
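For senders that do want to validate, modern TLS libraries make it a small step. As an illustration (not EFF's tooling), Python's smtplib accepts an SSL context, and a default context already requires a valid, hostname-matching certificate:

```python
import smtplib
import ssl

# A default context enables certificate validation and hostname checking.
context = ssl.create_default_context()

def deliver_with_validation(host: str, port: int = 25) -> None:
    """Upgrade an SMTP session to TLS, refusing invalid certificates.

    Network access required; shown here only to illustrate the API.
    """
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.ehlo()
        smtp.starttls(context=context)  # raises ssl.SSLError on a bad cert
        smtp.ehlo()  # re-EHLO over the now-encrypted channel

# The context itself is what enforces validation:
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

The catch, as described above, is that refusing a bad certificate means the mail doesn't send, which is exactly the breakage mailserver operators are reluctant to risk.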

On the web, when browsers encounter certificate errors, these errors are communicated to the end user, who can then decide whether to continue to the insecure site. With email, this is not an option, since an email user's client, like Thunderbird or the Gmail app on a user’s phone, runs separately from the machine responsible for actually sending the mail. Since breakage means the email simply won’t send, the email ecosystem is naturally more risk-averse than the browser ecosystem when it comes to breakages.

As a result, the ecosystem is stuck in a sort of chicken-and-egg problem: no one validates certificates because the other party often doesn’t have a valid one, and the long tail of mailservers continue to use invalid certificates because no one is validating them anyway.

Even If You’re Doing It Right, It Could Still Go Wrong

But let’s say you have STARTTLS enabled with a valid certificate, and so does the other party. You both validate certificates. What could go wrong?

When two mailservers support STARTTLS, their insecure connection is opportunistically upgraded to a secure one. In order to make that upgrade, the two mailservers ask each other if they support STARTTLS. Since this initial negotiation is unencrypted, network attackers can alter these messages to make it seem like neither server supports STARTTLS, causing any emails to be sent unencrypted. ISPs in the U.S. and abroad have been caught doing exactly this, and in 2014, several researchers found that encryption on outbound email from a number of countries was being regularly stripped.
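Mechanically, the attack is trivial: an on-path attacker only has to delete the STARTTLS capability line from the EHLO response as it passes by. A toy simulation (hypothetical helper name, not real attack tooling):

```python
def strip_starttls(ehlo_response: str) -> str:
    """Simulate an on-path attacker removing the STARTTLS capability
    from an EHLO response before it reaches the sending server."""
    kept = [
        line for line in ehlo_response.splitlines()
        if "STARTTLS" not in line.upper()
    ]
    return "\n".join(kept)

honest = "250-mail.example.com\n250-STARTTLS\n250 SMTPUTF8"
tampered = strip_starttls(honest)

print("STARTTLS" in honest)    # True
print("STARTTLS" in tampered)  # False: the sender falls back to plaintext
```

Because the sender has no authenticated record of what the recipient actually supports, it cannot tell this tampered response apart from a server that genuinely lacks STARTTLS, which is what preload lists and MTA-STS (below) aim to fix.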

Can DANE Fix These Problems?

Absolutely! If you are deep into the email world, you may have heard of DANE. DANE relies on DNSSEC, a protocol for publishing and validating signed DNS entries. Consistent and full DANE deployment presents a scalable solution for mailservers to clarify certificate validation rules and prevent downgrade attacks.

However, DANE is dependent on deployment and validation of DNSSEC, the latter of which has remained stagnant (at around 10-15% worldwide) for the past five years. STARTTLS Everywhere’s aim is to decouple secure email from DNSSEC adoption with a stop-gap, intermediate solution.

What About MTA-STS?

MTA-STS is a proposed standard that allows a domain to announce the security policy of its mailservers. In MTA-STS, a mailserver administrator creates a TXT record in their domain’s DNS entries, which indicates that the domain supports MTA-STS. They then post their security policy (whether to require STARTTLS or continue sending email on failure, which MX hosts to use, and how long the policy is valid) at a well-known HTTPS URL on their domain, so that senders can retrieve it and adhere to the policy.
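The policy itself is a short key-value text file, served at a well-known HTTPS URL of the form https://mta-sts.example.com/.well-known/mta-sts.txt. A minimal parser sketch (the sample values below are illustrative):

```python
def parse_mta_sts(policy_text: str) -> dict:
    """Parse an MTA-STS policy file into a dict.

    The 'mx' key may repeat, so its values are collected into a list.
    """
    policy: dict = {"mx": []}
    for line in policy_text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)
        else:
            policy[key] = value
    return policy

sample = """\
version: STSv1
mode: enforce
mx: mail.example.com
mx: *.example.net
max_age: 604800
"""
print(parse_mta_sts(sample)["mode"])  # enforce
```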

The problem with MTA-STS is that since most DNS requests are still unauthenticated (see the section on DANE above), an active attacker can still MitM the initial DNS request and convince the sender that the recipient doesn’t support MTA-STS, and then later MitM the STARTTLS messages, so the sender will never know the recipient supports STARTTLS.

Wow, Everything’s So Messed Up. How Is STARTTLS Everywhere Going to Help?

We have three primary goals for STARTTLS Everywhere:

Improve STARTTLS adoption.

We want to make it easy to deploy STARTTLS with valid certificates on mailservers. We’re developing Certbot plugins for popular MTA software, starting with Postfix, to make this a reality.

If you run a mailserver and use Postfix, help test out our Certbot plugin. Please note that the plugin is still very much beta—if you have problems with it, you can report an issue.

Not using Postfix? We’re also working on Certbot plugins for Dovecot and Sendmail, so stay tuned. We also welcome contributions of installer plugins for other MTAs!

Prevent STARTTLS downgrade attacks.

In order to detect downgrade attacks, we’re hosting a policy list of mailservers that we know support STARTTLS. This list acts essentially as a preload list of MTA-STS security policies. We’ve already preloaded a select number of big-player email domains, like Gmail, Yahoo, and Outlook.
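A sender consulting such a preload list does something like the following lookup before delivery. The miniature list here is a stand-in; the field names are illustrative and not necessarily the real list's exact schema:

```python
import json

# A miniature stand-in for the STARTTLS Everywhere policy list; the real
# list's exact JSON schema may differ (these field names are illustrative).
POLICY_LIST_JSON = """
{
  "timestamp": "2018-06-25T00:00:00Z",
  "policies": {
    "gmail.com":   {"mode": "enforce"},
    "example.com": {"mode": "testing"}
  }
}
"""

def require_starttls(domain: str, policy_list: dict) -> bool:
    """Return True if the preload list says delivery to this domain must
    use STARTTLS, i.e. the sender should refuse to fall back to plaintext."""
    policy = policy_list.get("policies", {}).get(domain.lower())
    return policy is not None and policy.get("mode") == "enforce"

policies = json.loads(POLICY_LIST_JSON)
print(require_starttls("gmail.com", policies))    # True
print(require_starttls("unknown.org", policies))  # False
```

Because the list is distributed out of band over HTTPS, an attacker who strips STARTTLS from the live SMTP negotiation can no longer convince the sender that a listed domain doesn't support it.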

If you’d like to add your email domain to the list, try out our website; otherwise, you can also email starttls-policy@eff.org with validation details or submit a pull request yourself to the code repository where we host the list.

If you’d like to use the list, check out our guidelines for how to do so.

Lower the barriers to entry for running a secure mailserver.

Email was designed as a federated and decentralized communication protocol. Since then, the ecosystem has centralized dramatically, and it has become increasingly difficult to run your own mailserver. The complexity of running an email service is compounded by the anti-spam arms race that small mail operators are thrust into. At the very least, we’d like to lower the barriers to entry for running a functional, secure mailserver.

Beyond developing and testing Certbot plugins for popular MTAs, we’re still brainstorming ideas to decentralize the email ecosystem. If you work on easy-to-deploy MTA software, let’s get in touch.

You can help, too!

All of our software packages are currently in a developer beta state, and our team is stretched thin working on all of these projects. You can help make the email ecosystem more secure by:

Of course, if you appreciate the work we’ve done on STARTTLS Everywhere, you can also donate to EFF! Your contribution will help further development of projects like STARTTLS Everywhere that help raise everyone’s level of security.

Donate to EFF

With all that we have accomplished together to improve the state of encrypted communications on the Internet, it’s about time we focus on upgrading email, the backbone of communication for a large part of the world. STARTTLS Everywhere is a natural step in that direction, but there’s still plenty of work to do, so let’s get hopping on hop-to-hop encryption!

The Supreme Court handed down a landmark opinion today in Carpenter v. United States, ruling 5-4 that the Fourth Amendment protects cell phone location information. In an opinion by Chief Justice Roberts, the Court recognized that location information, collected by cell providers like Sprint, AT&T, and Verizon, creates a "detailed chronicle of a person’s physical presence compiled every day, every moment over years." As a result, police must now get a warrant before obtaining this data.

This is a major victory. Cell phones are essential to modern life, but the way that cell phones operate—by constantly connecting to cell towers to exchange data—makes it possible for cell providers to collect information on everywhere that each phone—and by extension, each phone’s owner—has been for years in the past. As the Court noted, not only does access to this kind of information allow the government to achieve "near perfect surveillance, as if it had attached an ankle monitor to the phone’s user," but, because phone companies collect it for every device, the "police need not even know in advance whether they want to follow a particular individual, or when."

For years, the government has argued that the sensitive nature of this data doesn’t matter; the mere fact that it’s collected by phone companies makes it automatically devoid of constitutional protection.

This argument is based on an outdated legal principle called the "Third Party Doctrine," which was developed by the Supreme Court in two main cases from the 1970s involving records of phone calls and bank transactions. Courts around the country had long been deeply divided on whether the Third Party Doctrine should apply to cell phone location information or whether the invasiveness of the tracking it enables should require a more privacy-protective rule.

EFF has been involved in almost all of the significant past cases, and in Carpenter, EFF filed briefs both encouraging the court to take the case and urging it to reject the Third Party Doctrine. We noted that cell phone usage has exploded in the last 30 years, and with it, the technologies to locate users have gotten and continue to get ever more precise.

Thankfully, in Carpenter, Justice Roberts rejected the government’s reliance on the Third Party Doctrine, writing that there is a "world of difference between the limited types of personal information addressed in" prior Supreme Court cases and "the exhaustive chronicle of location information casually collected by wireless carriers today." The Court also explained that cell phone location information "is not truly ‘shared’ as one normally understands the term," particularly because a phone "logs a cell-site record by dint of its operation, without any affirmative act on the part of the user beyond powering up."

We were pleased that the Court cited our amicus brief in its opinion and agreed with many of the points we raised. In particular, Justice Roberts noted that because cell phones generate a record of location information all the time and "because location information is continually logged for all of the 400 million devices in the United States—not just those belonging to persons who might happen to come under investigation—this newfound tracking capacity runs against everyone." What’s more, cell phone tracking enables the government to compile an "exhaustive chronicle of location information" so that "unlike the nosy neighbor who keeps an eye on comings and goings, [phone carriers] are ever alert, and their memory is nearly infallible."

As we pointed out, this means that the government can engage in long-term monitoring. In Carpenter, for example, the government obtained 127 days of the defendant’s cell phone records from MetroPCS—without a warrant—to try to place him at the locations of several armed robberies around Detroit. Other cases have involved even longer periods of time. In a footnote, the Supreme Court declined to reach the question of whether very short periods of tracking, less than the 7 days used at trial in Carpenter, might not be covered by the Fourth Amendment. We think the right rule is to require a warrant for any cell phone tracking, but that will have to wait for another day.

Perhaps the most significant part of today’s ruling for the future is its explicit recognition that individuals can maintain an expectation of privacy in information that they provide to third parties. The Court termed that a "rare" case, but it’s clear that other invasive surveillance technologies, particularly those that can track individuals through physical space, are now ripe for challenge in light of Carpenter. Expect to see much more litigation on this subject from EFF and our friends.

Observers often forget that surveillance offends not only privacy, but also the right to dissent. A recently defeated Illinois bill illustrates how First and Fourth Amendment rights intersect, by proposing to undermine the right to dissent not obliquely, but rather directly. That’s why EFF joined the successful fight to defeat this spying proposal.
The proposal, promoted by the City of Chicago, was embodied in SB 2562 and its companion bill, HB 4405. They would have authorized police to use surveillance drones to monitor peaceful protests without first securing a judicial warrant. Had the measure been adopted, it would have permitted police to use facial recognition technology to identify individual demonstrators photographed by drones, even absent any suspicion of wrongdoing.
The defeated proposal would have rolled back a well-received state law passed in 2013 that led the country in protecting dissent from drone surveillance, and which enjoyed overwhelming bipartisan support. Illinois’ 2013 law sharply limits law enforcement from using drones, generally requiring agencies to first obtain a judicial warrant based on probable cause to suspect that a crime has been committed.
Warrants are important. They serve the crucial function of preventing police fishing expeditions against political dissenters, and the politicization of public safety measures to pursue personal vendettas. Moreover, they’re not a burden for police to secure. That makes a warrant requirement a reasonable, minimally burdensome way to protect vital (and increasingly threatened) rights on which democracy depends.
In sharp contrast, the defeated 2018 measure would have authorized drone surveillance of any gathering of more than 100 people for "legitimate public safety purposes," which expressly include "assessing public safety vulnerabilities or weaknesses…or identifying possible criminal activity."
As explained by the International Human Rights Clinic at the University of Chicago Law School, "Police already have the power to use drones in response to dangerous situations. What this legislation adds — and which current law explicitly rejects — is the active, continuous, and suspicion-less surveillance by drone of anyone and everyone at an event."
Karen Sheley, Director of the ACLU Police Practices Project, said, "This is too much unchecked power to give to the police – in Chicago or anywhere." The Chicago Sun-Times agreed, noting: "Unwarranted snooping, as any Chicagoan who knows our city’s history can attest, could become a real danger."
Ultimately, the proposed 2018 measure invited the kind of historically documented abuses and recurring problems that flourish behind a continuing wall of executive secrecy.
Incidentally, but of crucial relevance to state policymakers: President Trump is widely known for bearing petty grudges. The propensity of the President to pursue personal piques represents precisely why our Founders required warrants as a precondition to justify any police search: without review by a neutral arbiter, the executive branch is too prone to act arbitrarily. That’s why due process and access to justice are so important.
Beyond President Trump, even federal oversight bodies have been recently implicated in politicizing national security secrets. Closer to home, the Chicago Police Department (CPD) has also spied on political groups not only in the past, but also more recently.
Just two years ago, the CPD was caught spying for years on peaceful local dissenters including "union members, anti-Olympics protesters, anarchists, the Occupy movement, NATO demonstrators and critics of the Chinese government. And it has continued to [monitor them], according to…records….which the police department fought to withhold."
Political grudges should not be enough to trigger surveillance by legal authorities.
Molly Armour, a Chicago attorney whose clients include grassroots activists facing police investigations based on their speech, explained that "Surveillance stifles dissent. And that’s dangerous for all of us." And as explained by local activist Claude Walker in his letter to the editor:

"Giving City Hall or cops the right to dispatch drones to protests…without warrant – makes Red Squad tactics seem quaint….This technology has developed faster than our ability to use or regulate it….[L]awmakers should err on the side of privacy in drone laws."

The "Red Squad" was the Chicago police unit that spied on political dissent for much of the Twentieth Century.
Fortunately, advocates of free speech and privacy defeated the 2018 proposal. While the Illinois House and Senate each approved a version of this bill, the state legislative session expired on May 31 without reconciling their conflicting versions.
Illinois has retained its leading protections of dissent from drone surveillance for this year, but this struggle will likely recur. Fortunately, local grassroots allies including Lucy Parsons Labs and the Chicago Committee to Defend the Bill of Rights—both of which are members of the Electronic Frontier Alliance—are monitoring the situation. If the City of Chicago persists in trying to undermine constitutional rights by seeking more expansive powers to spy on demonstrators using surveillance drones without any basis for suspicion, we look forward to responding by raising the alarm.

Disabilities vs DRM: the World Cup Edition
Fri, 22 Jun 2018 21:34:10 +0000

When the Russian and Saudi teams squared off in a World Cup match on June 14, many fans were treated to an enthralling football match; but for a minority of fans with a visual disability, the match was more confusing than exciting.

You see, the Russian team wears red jerseys and the Saudi team wears green jerseys, and red/green color-blindness is the most common form of color vision deficiency, a hereditary condition that affects millions. For these people, the Saudi-Russia match was the red/green team versus the red/green team in a fight to the finish.

The good news is that color-blindness is no match for digital video analysis. Simple apps like DanKam can shift the colors in any video on your device, replacing the colors you can't see with the colors you can. For people with color-blindness, it's a profound and moving experience.
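The underlying transform is simple per-pixel color math. The following is a toy sketch of the idea using only Python's standard library; it assumes nothing about DanKam's actual algorithm (real daltonization tools use tuned matrices in a perceptual color space, not this crude hue rotation):

```python
import colorsys

def shift_confusable_hues(r: int, g: int, b: int, degrees: float = 40.0):
    """Rotate hues in the red/green band so that colors a red/green
    color-blind viewer confuses land on distinguishable hues.
    A toy per-pixel transform, not a production algorithm."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # Saturated pixels whose hue sits near red (h ~ 0.0) or green (h ~ 0.33)
    # are the ones a red/green color-blind viewer cannot tell apart.
    if s > 0.2 and (h < 0.15 or 0.2 < h < 0.45):
        h = (h + degrees / 360.0) % 1.0
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

# Applied to every pixel of every frame, a red jersey and a green jersey
# no longer collapse onto the same perceived hue, while grays and
# already-distinguishable colors pass through unchanged:
red_jersey = shift_confusable_hues(220, 30, 30)
green_jersey = shift_confusable_hues(20, 150, 60)
```

The important property is that this runs entirely on the viewer's device, after the video is decoded, which is exactly the kind of post-hoc adaptation that DRM schemes forbid.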

The bad news is that technologies designed to prevent you from making unauthorized uses of videos can't discriminate between uses that break the law (like copyright infringement) and ones that accomplish socially beneficial and legitimate ends like compensating for color-blindness.

Less than a year ago, the World Wide Web Consortium published its controversial "Encrypted Media Extensions" (EME) for video, which indiscriminately block any unauthorized alterations to videos, including color-shifting. During the long and often acrimonious fight over EME, EFF proposed a covenant for W3C members that would make them promise not to pursue legal action against people who bypassed EME to adapt videos for people with disabilities. That proposal was rejected by the major rightsholder and technology companies, who said that they and they alone should be the arbiters of how people with disabilities could use their products.

We (genuinely) hate to say we told them so. Seriously. Because this is just the start of the ways that EME -- which affects about 3 billion web users -- will interfere with accessibility. Existing technologies that protect people with photosensitive epilepsy from strobe effects in videos are already blocked by EME. As machine learning advances, EME will also block such adaptive technologies as automated captioning and descriptive tracks.

We are suing the US government to overturn Section 1201 of the Digital Millennium Copyright Act, the law that bans bypassing technologies like EME, as part of an overall strategy to end the practice of designing computers to control their owners (rather than the other way around).

Technologies like EME, designed to stop users from adapting technologies to their needs, have found their way into everything from automobiles to voting machines to medical implants. We've asked the Copyright Office to protect the public from the abuse of these technologies, and we continue to back state Right to Repair bills that protect your right to choose independent service centers to fix your stuff, even if it means bypassing a manufacturer's locks.

But while these efforts are making their slow progress through the courts and regulators, it's on the shoulders of technologists to learn the lesson of EME: contributing to technologies that stop the public from adapting or auditing their tools is a profoundly unethical act, one that widens the gap between people with disabilities and the (temporarily) abled people who don't (yet) need to make those adaptations.

The leak investigation involving a Senate staffer and a New York Times reporter raises significant issues about journalists, digital security, and the ability of journalists to protect confidential sources.

The New York Times recently revealed that the FBI had been investigating a former aide to the Senate Intelligence Committee, James Wolfe, for possibly leaking classified information to reporters. So far Wolfe has only been indicted for making false statements to investigators about his contacts with reporters.

The investigation appears to have been focused on how New York Times reporter Ali Watkins, when she worked for Buzzfeed News, learned that Russian spies had attempted to recruit a former advisor to President Trump, Carter Page.

Reading the New York Times article, three things jumped out at us.

First, according to the article, FBI agents "secretly seized years’ worth" of Watkins’ phone and email records. "Among the records seized were those associated with her university email address from her undergraduate years." However, "Investigators did not obtain the content of the messages themselves."

We read this to mean that the FBI obtained "metadata" such as to/from and date/time information for each call and email, probably using a subpoena or court order authorized by the Electronic Communications Privacy Act (ECPA)/Stored Communications Act (SCA).

Many digital security resources, including EFF’s own Security Self-Defense (SSD) guide, emphasize using end-to-end encryption. However, it’s important to understand that while encryption protects the contents of communications, encryption does not mask metadata. Thus, without listening to or reading the communications themselves, government agents can see who you talked to and when, and sometimes from what location.
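The distinction is easy to see in code. In this standard-library sketch the "encryption" is a deliberate stand-in and the addresses are invented, but the structural point holds for real end-to-end cryptography too: the cipher only ever touches the body, while the envelope the provider carries -- and can be compelled to produce -- stays readable:

```python
from base64 import b64encode
from email.message import EmailMessage

def pretend_encrypt(plaintext: bytes) -> str:
    """Stand-in for a real end-to-end cipher -- NOT cryptography.
    Whatever a real cipher did, it could only ever scramble the body."""
    return b64encode(plaintext[::-1]).decode()

msg = EmailMessage()
msg["From"] = "source@example.org"        # invented addresses
msg["To"] = "reporter@example.com"
msg["Date"] = "Thu, 07 Jun 2018 09:14:00 -0400"
msg["Subject"] = "(encrypted)"
msg.set_content(pretend_encrypt(b"the actual tip"))

wire = msg.as_string()   # what the provider carries and can log
# The carrier -- and anyone who subpoenas it -- still sees who, whom, and when:
assert "source@example.org" in wire and "reporter@example.com" in wire
# ...but the plaintext body is not there:
assert "the actual tip" not in wire
```

This is why the to/from and date/time records in the Wolfe investigation were obtainable with a subpoena or court order: they sit outside anything encryption can protect.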

Metadata can be extremely revealing. Just the fact that Wolfe denied talking to reporters, when the metadata showed otherwise, earned him criminal charges.

Unfortunately, completely masking communications metadata is nearly impossible. Creating a temporary email account through an anonymizing tool like Tor can make it more difficult to associate that account with a particular person. Features like Signal’s Disappearing Messages will automatically delete some metadata after a set period of time, making it harder for law enforcement to acquire it after the fact.

Second, the government obtained the contents of communications Wolfe had with reporters over encrypted messaging apps (apparently Signal and WhatsApp).

Our guess is that the FBI got a warrant for Wolfe's phone and somehow accessed the apps—perhaps his phone wasn’t locked, agents had his password, or they used a forensic tool to bypass the lock screen and any device-based encryption. It’s also possible investigators found backups stored in the cloud or on a hard drive that contained the unencrypted messages. (This issue has also come up in the Mueller investigation.)

If this is what happened, then it's important to understand that although end-to-end encryption thwarts interception of communications content, if that content is sitting unencrypted at an end point—that is, in an app or a backup—then anyone who has access to the journalist’s or suspected source’s phone or backup may be able see those messages. Therefore, deleting unencrypted messages is an added security precaution. Once again, Signal’s Disappearing Messages feature is an effective way to defend against future device searches.

Third, a non-technical question is: did the Justice Department follow its own news media regulations? These regulations have been around for four decades and were most recently revised in 2014 after the shocking revelation that President Obama’s Justice Department in 2013 seized two months’ worth of phone records for reporters and editors of the Associated Press.

Among other requirements, such as first exhausting other avenues of information, the regulations require Justice Department investigators to provide journalists with prior notice and an opportunity to negotiate before seizing their records. But this is not what happened—as the New York Times article explains, Watkins received a letter from the Justice Department only after her phone and email records had already been obtained.

It wouldn’t be surprising if it came to light that the Justice Department invoked the exception to the prior notice requirement: where "such negotiations would pose a clear and substantial threat to the integrity of the investigation, risk grave harm to national security, or present an imminent risk of death or serious bodily harm." But these details have not been revealed.

The bottom line is that journalists shouldn’t expect to always be notified ahead of time. Accordingly, they should take as many precautions as possible—digital and otherwise—to protect their confidential sources.

In addition to EFF’s Security Self-Defense (SSD) guide, we published a digital privacy guide to crossing the U.S. border that journalists might find helpful, as journalists have been harassed at airports and border crossings. Other journalism groups have useful digital privacy and security guides, such as those from Freedom of the Press Foundation, the Committee to Protect Journalists, and Reporters Without Borders.

Finally, the seizure of Watkins’ phone and email records has once again highlighted the desperate need for a federal shield law so that the government can’t go after journalists—whether through their service providers or in court—to try to uncover their confidential sources. Vice President Mike Pence was a lead sponsor of the Free Flow of Information Act when he was in the House of Representatives.

We renew our call for Congress to pass a robust federal shield law to protect not only journalists and their confidential sources—but also the public’s right to know.

The Supreme Court issued a disappointing opinion [PDF] today holding that a company could recover patent damages for lost profits overseas. The court’s reasoning could make overseas damages available in many patent cases. This will disadvantage companies that do research and development in the United States. When patent law discourages domestic innovation, it achieves the opposite of its intended purpose. 

The case, called WesternGeco LLC v. ION Geophysical Corp., involved a patent on a method of conducting marine seismic surveys. ION exported components that, when combined, were used to infringe the patent overseas. Under Section 271(f) of the Patent Act, exporting components of a patented invention for assembly abroad is considered infringement. WesternGeco received damages for the U.S. sales of the components. The court considered whether WesternGeco could also receive damages for lost profits for the use of the invention overseas.

Together with the R Street Institute, EFF filed an amicus brief [PDF] in the case explaining that worldwide damages are not consistent with the domestic focus of the patent act. Our brief, co-written with Professors Bernard Chao and Brian Love, provided an example of how such a ruling could harm U.S. innovation:

[C]onsider how such a regime might impact two hypothetical companies. Two companies, a domestic one A and a foreign one B, design and test semiconductor chips and contract with a foreign manufacturer to produce their designs. A patent owner claims that both companies’ testing processes infringe a patent, and demands damages for the manufactured chips on the theory that those chips’ manufacture and sale are proximately and factually caused by the infringing testing. [If the Court allows worldwide damages then] Company A could be liable for a reasonable royalty on its worldwide sales. In contrast, Company B would likely only be liable for royalties on its U.S. sales. This would effectively punish Company A for conducting research and development in the United States. 

Justices Gorsuch and Breyer broadly agreed with this reasoning. Indeed, Justice Gorsuch’s dissent includes a similar hypothetical and notes that it is a "very odd role for U. S. patent law to play in foreign markets." Unfortunately, the other seven justices were unpersuaded. 

Most patent cases are brought under Section 271(a) of the Patent Act, which concerns infringement "within the United States." As noted, today’s case considered a claim under Section 271(f), which concerns the export of components. It is tempting to hope that the court’s ruling will only apply to 271(f) cases. Unfortunately, the Supreme Court’s reasoning might result in patent owners arguing they deserve damages in all patent cases where domestic infringement supposedly causes harm overseas. In our view, that would be a terrible result. 

It may be that courts will apply proximate cause principles to find that overseas damages are not available for sales only loosely linked to U.S. research and development. We hope damages will not be awarded in cases where the research and development happened in the United States but the manufacture and sales occurred overseas. If worldwide damages became the norm instead, it would be a big disincentive to innovate within the United States.

This week marks the fourth anniversary of the Supreme Court’s decision in Alice v. CLS Bank. In Alice, the court ruled that an abstract idea does not become eligible for a patent simply by being implemented on a generic computer. Now that four years have passed, we know the case’s impact: bad patents went down, and software innovation went up.

Lower courts have applied Alice to throw out a rogues’ gallery of abstract software patents. Counting both federal courts and the Patent Trial and Appeal Board, there are more than 400 decisions finding patent claims invalid under Alice. These include rulings invalidating patents on playing bingo on a computer, computerized meal plans, updating games, and many more. Some of these patents had been asserted by patent trolls dozens or even hundreds of times. A single ruling threw out 168 cases where a troll claimed that companies infringed a patent on the idea of storing and labeling information.

EFF’s Saved By Alice project collects stories of small businesses that used the Alice decision to defend themselves against attacks by entities asserting abstract software patents. Our series includes a photographer sued for running a website where users could vote for their favorite photo. Another post discusses a medical startup accused of infringing an extremely broad patent on telehealth. Without the Alice ruling, many of these small businesses could have been bankrupted by a patent suit.

Meanwhile, software innovation has thrived in the wake of Alice. R&D spending on software and Internet development shot up 27% in the year following the Supreme Court’s decision and has continued to grow at a rapid rate. Employment growth for software developers is also vastly outpacing growth in other sectors. At the end of 2017, PwC concluded that the "computer and software industries still shine in the R&D stakes, outperforming all other organizations in terms of billions spent." A recent paper found evidence that the increase in software R&D was linked to the Alice decision.

Unfortunately, Alice is under threat both in Congress and the courts. The patent lobby—in the form of the Intellectual Property Owners Association and the American Intellectual Property Law Association—wants Congress to undo Alice through legislation. Two recent decisions from the Federal Circuit, in Berkheimer v. HP and Aatrix Software v. Green Shades Software, may make it more difficult for defendants to assert Alice early in litigation. We filed an amicus brief [PDF] in the Berkheimer case urging the Federal Circuit to reconsider, but the court recently denied that petition. These rulings could help patent trolls use the cost of defending a suit as leverage, even when the trolls are asserting patents that are invalid under Alice.

Opponents of the Alice decision ignore the post-Alice boom in software innovation. Instead, they complain that it has become harder to get certain business method and software patents. But the patent system exists for the constitutional purpose of promoting the progress of the useful arts—not to provide work for patent prosecutors and litigators. With software R&D accelerating ahead of all other sectors, there is no need to return to the pre-Alice world of "do-it-on-a-computer" patents.

The Border Security and Immigration Reform Act (H.R. 6136), introduced in Congress last week, would offer immigrants a new path to citizenship in exchange for increased high-tech government surveillance of citizens and immigrants alike. The bill calls for increased DNA and other biometric screening, updated automatic license plate readers, and expanded social media snooping. It also asks for drone surveillance along the southern U.S. border for 24 hours a day, five days a week.

This bill would give the U.S. Department of Homeland Security broad authority to spy on millions of individuals who live and work as far as 100 miles away from a U.S. border. It would enforce invasive biometric scans on innocent travelers, regardless of their citizenship or immigration status.

An Upcoming Vote

In mid-June, after months of stalled negotiations and failed legislative proposals, the Republican caucus of the House of Representatives agreed to a plan on immigration reform: Representatives would vote on two immigration bills.

Representatives smartly rejected one of those bills. The Securing America’s Future Act (H.R. 4760), which EFF opposed, failed in a 193-231 vote. That bill took a hardline stance on immigration and proposed the increased use of invasive surveillance technologies including biometric screening, social media monitoring, automatic license plate readers, and drones.

A vote is expected soon on the second bill: the Border Security and Immigration Reform Act. It would give children who came to this country without documentation—known as "Dreamers"—a path to citizenship. Unfortunately, this bill includes nearly the same bad border surveillance provisions as the bill that failed Thursday.

Given the grave impact this bill would have on individual privacy and rights, we urge Congress to vote the same way as it did Thursday and reject the Border Security and Immigration Reform Act.

More Surveillance Technologies and Drone Flights

The Border Security and Immigration Reform Act would fund multiple surveillance technologies across the United States. Near Detroit, for example, the bill calls for "mobile vehicle-mounted and man-portable surveillance capabilities" for U.S. Customs and Border Protection (CBP) agents. In Washington, the bill similarly calls for "advanced unattended surveillance sensors" and "ultralight aircraft detection capabilities."

The bill also requires that CBP’s Air and Marine operations fly unmanned drones "on the southern border of the United States for not less than 24 hours per day for five days per week."

This type of increased drone surveillance was proposed in H.R. 4760. As we previously wrote:

"Drones can capture personal information, including faces and license plates, from all of the people on the ground within the range and sightlines of a drone. Drones can do so secretly, thoroughly, inexpensively, and at great distances. Millions of U.S. citizens and immigrants live close to the U.S. border, and deployment of drones at the U.S. border will invariably capture personal information from vast numbers of innocent people."

Similar to H.R. 4760, the Border Security and Immigration Reform Act includes no meaningful limitations on the drones’ flight paths, or the collection, storage, and sharing of captured data. The bill could lead to deep invasions into innocent bystanders’ lives, revealing their private information and whereabouts.

More Biometric Screening

The Border Security and Immigration Reform Act also proposes the establishment of a "biometric exit data system" that would require everyone leaving the country—immigrant or citizen—to have their biometric data screened against government biometric databases.

Relatedly, the bill would authorize the CBP Commissioner, "to the greatest extent practicable," to use facial recognition scanning to inspect citizens traveling to the U.S. from nearly 40 visa waiver program countries, which include Japan, New Zealand, Australia, France, Germany, Italy, and Taiwan.

Further, the bill authorizes the Secretary of Homeland Security to "make every effort to collect biometric data using multiple modes of biometrics." That means that fingerprints, facial recognition data, and iris scans could all be up for grabs in the future, so long as the Secretary of Homeland Security deems it necessary.

These proposals are similar to those included in H.R. 4760. They are worrying for the very same reasons:

"Biometric screening is a unique threat to our privacy: it is easy for other people to capture our biometrics, and once this happens, it is hard for us to do anything about it. Once the government collects our biometrics, data thieves might steal it, government employees might misuse it, and policy makers might deploy it to new government programs. Also, facial recognition has significant accuracy problems, especially for people of color."

More Social Media Snooping on Visa Applicants

The Border Security and Immigration Reform bill also borrows the same deeply-flawed social media monitoring practices as those included in H.R. 4760.

The Border Security and Immigration Reform bill would authorize the Department of Homeland Security to look through the social media accounts of visa applicants from so-called "high-risk countries." As we said about the proposal in H.R. 4760:

"This would codify and expand existing DHS and State Department programs of screening the social media of certain visa applicants. EFF opposes these programs. Congress should end them. They threaten the digital privacy and freedom of expression of innocent foreign travelers, and the many U.S. citizens and lawful permanent residents who communicate with them. The government permanently stores this captured social media information in a record system known as 'Alien Files.'"

And similar to H.R. 4760, the Border Security and Immigration Reform Act authorizes the Secretary of Homeland Security to use literally any criteria they find appropriate to determine which countries are classified as "high-risk." This broad authority would allow the Secretary of Homeland Security to target Muslim-majority nations for social media collection.

No Compromising on Civil Liberties

As Congress weighs different factors in the ongoing immigration debate, we urge them to look closely at the expanded high-tech surveillance provisions in this proposed package. This bill would undermine the privacy of countless law-abiding Americans and visitors, regardless of citizenship. So, we urge a "no" vote.

On Wednesday, the European Parliament's Legal Affairs Committee narrowly voted to keep the two most controversial internet censorship and surveillance proposals in European history in the upcoming revision to the Copyright Directive -- as soon as July Fourth, the whole European Parliament could vote to make this the law of 28 EU member-states.

The two proposals were Article 11 (the link tax), which bans linking to news articles without paying for a license from each news-site you want to link to; and Article 13 (the copyright filters), requiring that everything that Europeans post be checked first for potential copyright infringements and censored if an algorithm decides that your expression might breach someone's copyright.

These proposals were voted through even though experts agree that they will be catastrophic for free speech and competition, raising the table-stakes for new internet companies by hundreds of millions of euros, meaning that the US-based Big Tech giants will enjoy permanent rule over the European internet. Not only did the UN's special rapporteur on freedom of expression publicly condemn the proposal; so did more than 70 of the internet's leading luminaries, including the co-creators of the World Wide Web, Wikipedia, and TCP.

We have mere days to head this off: the German Pirate Party has called for protests in Berlin this Sunday, June 24 at 11:45h outside European House Unter den Linden 78, 10117 Berlin. They'll march on the headquarters of Axel-Springer, a publisher that lobbied relentlessly for these proposals.

If you use the Internet to communicate, organize, and educate, it’s time to speak out. Show up, stand up, because the Internet needs you!

Update: The protests are spreading! Here's a list of more planned actions across the EU!

On the morning that S.B. 822 was to get its first hearing in front of a California Assembly committee, before the cameras were on to catch it, the Chair of the Assembly Committee on Communications and Conveyance introduced and got a vote on amendments that substantially weakened the bill’s net neutrality provisions. EFF had received word that this was his intent, and we were disappointed that he would carry out such a bait and switch on behalf of AT&T and Comcast.

Chair Miguel Santiago, along with seven other Assembly members, both Republican and Democratic, voted for those amendments. Amendments proposed at 10 p.m. the night before the hearing. Amendments voted on before the bill was heard, and before the bill’s author, State Sen. Scott Wiener, could argue against them. Amendments voted on before Wiener and the witnesses could argue for the bill as written.

This comes after the committee chair refused a move to join S.B. 822 and S.B. 460 into a single net neutrality package rather than two separate bills. That proposal was rejected in favor of new amendments that stripped out net neutrality protections, including provisions banning discriminatory zero-rating practices that hurt low-income Internet users.

Assemblymembers Quirk-Silva, Kamlager-Dove, Holden, Bonta, and Low abstained or were absent, while the remaining Democratic and Republican Assembly members joined together to vote for hostile amendments that gutted a whole array of the bill’s consumer protections.

Here are just some of the things they green-lighted with their amendment:

  • AT&T can continue to violate net neutrality under its zero rating program and will have even more power to discriminate over the internet with its ownership of Time Warner.
  • Comcast can create arbitrary charges on all websites and services simply for the "privilege" of allowing its customers to connect to those websites and services, which has been banned under federal law for years.
  • Comcast will be free to repeat its past abuses in the interconnection market, which resulted in consumer access to video services being slowed down arbitrarily in exchange for extortionate fees.

The result, no matter what, is not net neutrality.

Giant ISPs like AT&T and Comcast have worked overtime to defeat this bill, including donating a lot of money. Between the money, the disingenuous arguments of the telecoms, and the manipulated process that forced the hostile amendments into the bill, what happened this week shows just what giant corporations can accomplish with willing legislators. But that does not mean the net neutrality battle is over in California. Everyone, including Californians, deserves access to a free and open Internet. As the bill moves forward EFF will continue to support the work of Sen. Scott Wiener who has vowed to fight on.

New Data Shows Law Enforcement Abused Network 143 Times in 2017

San Francisco - Responding to years of investigations and pressure from the Electronic Frontier Foundation (EFF), the California Attorney General's Office has overhauled and improved its oversight of law enforcement access to a computer network containing the sensitive personal data of millions of state residents, which police abused 143 times in 2017.

The new policies and data will be presented at a regular oversight meeting on Thursday, June 21, 2018 at the Folsom City Council Chambers.

EFF has been investigating abuse of the California Law Enforcement Telecommunication System (CLETS)—the computer network that connects criminal record and DMV data with local and federal agencies across the state—since 2015. Law enforcement personnel access this data more than 2.8 million times daily.

EFF’s research found that misuse of this system was rampant. Examples include officers accessing confidential data for domestic disputes and running background checks on online dates. One particularly egregious case involved an officer who allegedly planned to hand sensitive information on witnesses to the family member of a convicted murderer.

Not only did the Attorney General’s CLETS Advisory Committee fail to hold these agencies accountable, in many cases it failed to enforce requirements that agencies disclose misuse investigations at all. As a result, the Attorney General has not maintained reliable data on misuse.

Earlier this month, the Attorney General’s office began implementing several changes to its oversight of law enforcement agencies, including stiffer penalties when agencies fail to report misuse. The office also directed a team to bring several hundred delinquent agencies into compliance with misuse disclosure requirements.

"Accountability starts with good data, and so it’s a great start for the Attorney General’s office to give better instructions to law enforcement agencies and to use the enforcement mechanism to ensure disclosure of database abuse," EFF Senior Investigative Researcher Dave Maass said. "But this should only be the first step. We will be watching closely to see if the Attorney General actually follows through on his threats to sanction agencies who sweep CLETS abuse under the carpet."

EFF hopes that accurate data on misuse of CLETS will lead to investigations and accountability for any agency that fails to adequately protect people’s privacy. In addition, EFF is calling on the California Attorney General’s office to tighten its scrutiny of federal agencies, including the Department of Homeland Security, to ensure that they are not abusing CLETS for immigration enforcement.

"The California Attorney General is finally taking police database abuse seriously," EFF Staff Attorney Aaron Mackey said. "It’s great that we will finally have good aggregate data on misuse. Now law enforcement needs to follow up on any improper behavior with thorough investigations."

For deeper analysis and links to the records:


In 2017, 22 law enforcement employees across California lost or left their jobs after abusing the computer network that grants police access to criminal histories and drivers' records, according to new data compiled by the California Attorney General’s office. The records obtained by EFF show a total of 143 violations of database rules—the equivalent of an invasion of privacy every two and a half days.

These numbers represent the first comprehensive accounting of misuse of the California Law Enforcement Telecommunications System (CLETS). While the acronym is not well known by the public, everyone with a driver’s license or criminal record has information accessible through CLETS. Police and other public safety employees access this sensitive information approximately 2.8 million times a day during the regular course of business.

For the last three years, EFF has exposed widespread misuse of CLETS, raising alarms about oversight deficiencies in the Attorney General’s office and its CLETS Advisory Committee. Among our findings: the Attorney General had lapsed in enforcing requirements that agencies who subscribe to CLETS report annually how many times they investigated misuse and what the outcomes were of the investigations. 

In response to EFF’s concerns, the Attorney General’s office issued new rules and cracked down on agencies that failed to report their misuse.

"The California Department of Justice, in response to increasingly low submissions of misuse reporting by subscribing agencies, will be instituting changes to reporting to achieve 100 percent reporting of CLETS misuse," California Justice Information Services Division Chief Joe Dominic wrote in a directive submitted to more than 1,200 law enforcement agencies. "The DOJ considers the failure to report CLETS misuse a serious matter and will proactively enforce this requirement."

In 2017, only 704 agencies disclosed these records—approximately 53 percent compliance. Following an overhaul of the oversight system, in 2018 the Attorney General gathered information from 1,285 agencies—98 percent compliance.

KINGS 14 0
KERN 4 0
YOLO 2 0
LAKE 1 0
NAPA 1 0

* Counties where agencies reported they conducted zero misuse investigations in 2017 are not listed.

While specific information about the nature of the violations is not recorded, the Attorney General has outlined a variety of behaviors that would qualify as misuse. These include querying the database for personal reasons, searching data on celebrities, sharing passwords or access, providing information to unauthorized third parties, and researching a firearm the officer intends to purchase.

CADOJ also updated its rules for accessing CLETS, known as the "Policies, Practices and Procedures" (PPP) manual, which warns agencies that failure to report misuse will make them "subject to sanctions, up to and including, removal of CLETS service." In addition, CADOJ will now require agencies who initially report the outcome of a misuse investigation as "pending" to update CADOJ when the investigation is completed. The PPP also now clearly states that any violation of CLETS policies will face discipline, including suspension or termination, and potential criminal prosecution.

According to the misuse data, law enforcement agencies reported that the 143 misuse cases resulted in 9 terminations, 13 resignations, and 18 suspensions. Four cases rose to the level of criminal charges for misdemeanors or infractions. Unfortunately, 53 violations resulted in no action being taken at all.

Notable among the records is the Los Angeles Police Department, which had failed to file misuse reports year after year with impunity. In 2017, LAPD reported three investigations, two of which resulted in no action being taken, while a third resulted in the suspension and resignation of an employee.

The special investigation unit in the Kings County Human Services Agency—which is charged with protecting at-risk families—racked up the most misuse cases: 13 cases in which the result was not disclosed. The Los Angeles County Sheriff’s Department reported six misuse cases, all of which resulted in suspensions. The Riverside County Sheriff’s Department saw four resignations in the wake of misuse investigations.

EFF applauds the Attorney General and the California Department of Justice officials who pushed law enforcement agencies to finally report misuse. We appreciate their hard work in ensuring the data is as complete as possible and that agencies are given clear instructions on how to report misuse. 

At the same time, it’s unclear whether the Attorney General or the CLETS Advisory Committee will follow up on the reports of widespread misuse in particular agencies or discipline those involved. Now that they have data, EFF urges these bodies to independently investigate these cases and hold public hearings on their findings. In addition, EFF urges the Attorney General to independently investigate access to CLETS by federal agencies to ensure they are not violating state law by accessing non-criminal records for immigration enforcement.

EFF is releasing the Attorney General’s spreadsheet of misuse and the misuse reporting forms for more than 1,200 agencies. Local news organizations may find untold stories about police misconduct in this data, and we urge reporters to call these law enforcement agencies to find out more about the nature of this misconduct.

CLETS Misuse Reporting Data (XLSX)

CLETS Misuse Reporting Forms (DocumentCloud)

CLETS Misuse Reporting Forms Bookmarked by County (DocumentCloud, 150MB PDF)

Note: DocumentCloud links are subject to that organization's Privacy Policy

Browser fingerprinting is on a collision course with privacy regulations. For almost a decade, EFF has been raising awareness about this tracking technique with projects like Panopticlick. Compared to more well-known tracking "cookies," browser fingerprinting is trickier for users and browser extensions to combat: websites can do it without detection, and it’s very difficult to modify browsers so that they are less vulnerable to it. As cookies have become more visible and easier to block, companies have been increasingly tempted to turn to sneakier fingerprinting techniques.

But companies also have to obey the law. And for residents of the European Union, the General Data Protection Regulation (GDPR), which entered into force on May 25th, is intended to cover exactly this kind of covert data collection. The EU has also begun the process of updating its ePrivacy Directive, best known for its mandate that websites must warn you about any cookies they are using. If you’ve ever seen a message asking you to approve a site’s cookie use, that’s likely based on this earlier Europe-wide law.

This leads to a key question: Will the GDPR require companies to make fingerprinting as visible to users as the original ePrivacy Directive required them to make cookies?

The answer, in short, is yes. Where the purpose of fingerprinting is tracking people, it will constitute "personal data processing" and will be covered by the GDPR.

What is browser fingerprinting and how does it work?

When a site you visit uses browser fingerprinting, it can learn enough information about your browser to uniquely distinguish you from all the other visitors to that site. Browser fingerprinting can be used to track users just as cookies do, but using much more subtle and hard-to-control techniques. In a paper EFF released in 2010, we found that the majority of users’ browsers were uniquely identifiable using existing fingerprinting techniques. Those techniques have only gotten more complex and obscure in the intervening years.

By using browser fingerprinting to piece together information about your browser and your actions online, trackers can covertly identify users over time, track them across websites, and build an advertising profile of them. The information that browser fingerprinting reveals typically includes a mixture of HTTP headers (which are delivered as a normal part of every web request) and properties that can be learned about the browser using JavaScript code: your time zone, system fonts, screen resolution, which plugins you have installed, and what platform your browser is running on. Sites can even use techniques such as canvas or WebGL fingerprinting to gain insight into your hardware configuration.

When stitched together, these individual properties tell a unique story about your browser and the details of your browsing interactions. For instance, yours is likely the only browser on central European time with cookies enabled that has exactly your set of system fonts, screen resolution, plugins, and graphics card.
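To see why a handful of mundane properties is so identifying, it helps to count bits: an attribute value shared by a fraction p of browsers carries -log2(p) bits of identifying information, and (roughly) independent attributes add up. Here is a minimal sketch using hypothetical frequencies—the real distributions come from measurement projects like Panopticlick, not from these made-up numbers:

```python
import math

# Hypothetical population frequencies for one visitor's attribute values.
# (Illustrative numbers only; real distributions are measured empirically.)
attribute_frequency = {
    "user_agent": 1 / 1500,       # this exact browser/OS build string
    "timezone": 1 / 8,            # e.g. central European time
    "screen_resolution": 1 / 50,
    "installed_fonts": 1 / 4000,  # this exact set of system fonts
    "plugins": 1 / 2000,
}

# Surprisal of each value in bits: rarer values reveal more.
bits = {name: -math.log2(p) for name, p in attribute_frequency.items()}

# Assuming the attributes vary independently, their bits simply add.
total_bits = sum(bits.values())
print(f"combined identifying information: {total_bits:.1f} bits")
```

Roughly 33 bits are enough to single out one person among eight billion, and even these five modest attributes exceed that—which is the arithmetic behind "yours is likely the only browser" with a given combination.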

By gathering that information together and storing it on its own servers, a site can track your browsing habits without the use of persistent identifiers stored on your computer, like cookies. Fingerprinting can also be used to recreate a tracking cookie for a user after the user has deleted it. Users that are aware of cookies can remove them within their browser settings, but fingerprinting subverts the built-in browser mechanisms that allow users to avoid being tracked.
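The server-side mechanics can be sketched in a few lines: the tracker canonicalizes whatever properties the browser reveals and hashes them into a stable identifier. This is not any particular tracker's code—the property names and values below are invented for illustration—but it shows why the identifier survives cookie deletion: nothing is stored on the visitor's machine.

```python
import hashlib
import json

def fingerprint(properties: dict) -> str:
    """Derive a stable identifier from browser properties alone.

    The same browser re-sends the same properties on every visit,
    so the same ID comes back even after the user clears cookies.
    (Illustrative sketch; real trackers combine far more signals.)
    """
    canonical = json.dumps(properties, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "timezone": "Europe/Berlin",
    "screen": "1920x1080",
    "fonts": ["DejaVu Sans", "Liberation Serif", "Noto Color Emoji"],
}

id_before = fingerprint(visitor)
id_after_clearing_cookies = fingerprint(visitor)  # same properties, same ID
assert id_before == id_after_clearing_cookies
```

Deleting cookies changes nothing in this scheme, which is exactly why fingerprinting subverts the browser's built-in tracking controls.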

And this doesn’t just apply to the sites you visit directly. The pervasive inclusion of remote resources, like fonts, analytics scripts, or social media widgets on websites means that the third parties behind them can track your browsing habits across the web, rather than just on their own websites.

Aside from the limited case of fraud detection (which needs transparency and opt-in consent for any further processing), browser fingerprinting offers no functionality to users. When the popular social media widget provider AddThis started using canvas fingerprinting in 2014, the negative reaction from their users was so overwhelming that they were forced to stop the practice.

Some fingerprinting tricks are potentially detectable by end-users or their software: for instance, a site changing some text into multiple fonts extremely quickly is probably scanning to see which fonts a user has installed. Privacy Badger, a browser extension that we develop at EFF, detects canvas fingerprinting to determine when a site looks like a tracker. And a W3C guidance document draft for web specification authors advises them to develop their specs with fingerprinting detectability in mind. Unfortunately, however, new and more covert techniques to fingerprint users are being discovered all the time.

Fingerprinting After the GDPR

You’ll struggle to find fingerprinting explicitly mentioned in the GDPR—but that’s because the EU has learned from earlier data protection laws and the current ePrivacy Directive to remain technologically neutral.

Apart from non-binding recitals (like Recital 30, discussing cookies), the GDPR avoids calling out specific technologies or giving exhaustive lists and examples. Instead, it provides general rules that the drafters felt should be neutral, flexible, and keep up with technological development beyond fingerprinting and cookies. Below we explain how those general rules apply to tracking Internet users, no matter what technique is used.

Browser Characteristics as Personal Data

The cornerstone of the GDPR is its broad definition of personal data.[1] Personal data is any information that might be linked to an identifiable individual. This definition not only covers all sorts of online identifiers (such as your computer’s MAC address, your network’s IP address, or an advertising user ID in a cookie) but also less specific features — including the combination of browser characteristics that fingerprinting relies upon. The key condition is that a given element of information relates to an individual who can be directly or indirectly identified.

It is also worth noting that under the GDPR "identification" does not require establishing a user’s identity. It is enough that an entity processing data can indirectly identify a user, based on pseudonymous data, in order to perform certain actions based on such identification (for instance, to present different ads to different users, based on their profiles). This is what EU authorities refer to as singling-out[2], linkability[3], or inference.[4]

The whole point of fingerprinting is the ability of the tracking company (the data controller) to indirectly identify unique users among the sea of Internet users in order to track them, create behavioural profiles of them, and, finally, present them with targeted advertising. Where the fingerprinting company has identification as its purpose, the Article 29 Working Party (an advisory board comprised of European data protection authorities) decided over ten years ago, regulators should assume that "the controller … will have the means ‘likely reasonably to be used’ to identify the people," because "the processing of that information only makes sense if it allows identification of specific individuals." As the Article 29 Working Party noted, "In fact, to argue that individuals are not identifiable, where the purpose of the processing is precisely to identify them, would be a sheer contradiction in terms."[5]

Thus, when several information elements are combined (especially unique identifiers such as your set of system fonts) across websites (e.g. for the purposes of behavioral advertising), fingerprinting constitutes the processing of personal data and must comply with GDPR.[6]

Can Fingerprinting Be Legal Under The GDPR?

According to the GDPR, every entity processing personal data (including tracking user behavior online, matching ads with user profiles, or presenting targeted ads on their website) must be able to prove that they have a legitimate reason (by the definitions of the law) to do so.[7] The GDPR gives six possible legal grounds that enable processing data, with two of them being most relevant in the tracking/advertising context: user consent and the "legitimate interest" of whoever is doing the tracking.

How should this work in practice? User consent means an informed, unambiguous action (such as changing a setting from "no" to "yes").[8] In order to rely on this legal ground, companies that use fingerprinting would have to first reveal the fingerprinting before it is executed, and then wait for the user to give their freely-given, informed consent. Since the very purpose of fingerprinting is to escape the user’s control, it is hardly surprising that trackers refuse to apply this standard.

It is more common for companies that use fingerprinting to claim a "legitimate interest" in doing so, whether their own or that of whoever is paying them to fingerprint users.

The concept of legitimate interest in the GDPR has been constructed as a compromise between privacy advocates and business interests.[9] It is much more vague and ambiguous than other legal grounds for processing data. In the coming months, you will see many companies who operate in Europe attempt to build their tracking and data collection of their users on the basis of their "legitimate interest."

But that path won’t be easy for covert web fingerprinters. To be able to rely on this specific legal ground, every company that considers fingerprinting has to, first, go through a balancing test[10] (that is, verify for itself whether its interest in obscure tracking is not overridden by "the fundamental rights and freedoms of the data subject, including privacy" and whether it is in line with "reasonable expectations of data subjects"[11]) and openly lay out its legitimate interest argument for end-users. Second, and more importantly, the site has to share detailed information with the person that is subjected to fingerprinting, including the scope, purposes, and legal basis of such data processing.[12] Finally, if fingerprinting is done for marketing purposes, all it takes for end-users to stop it (provided they do not agree with the legitimate interest argument that has been made by the fingerprinter) is to say "no."[13] The GDPR requires no further justification.

Running Afoul of the ePrivacy Rules

Fingerprinting also runs afoul of the ePrivacy Directive, which sets additional conditions on the use of device and browser identifiers. The ePrivacy Directive is a companion law, applying data protection rules more specifically in the area of communications. The Article 29 Working Party emphasised that fingerprinting—even if it does not involve processing personal data—is covered by Article 5(3) of the ePrivacy Directive (the section commonly referred to as the cookie clause) and thus requires user consent:

Parties who wish to process device fingerprints[14] which are generated through the gaining of access to, or the storing of, information on the user’s terminal device must first obtain the valid consent of the user (unless an exemption applies).[15]

While this opinion focused on device fingerprints, the logic still applies to browser fingerprints. Interpretations can vary according to national implementation and this has resulted in an inconsistent and ineffective application of the ePrivacy Directive, but key elements, such as the definition of consent, are controlled by the GDPR which will update its interpretation and operation. The EU aims to pass an updated ePrivacy Regulation in 2019, and current drafts target fingerprinting explicitly.

Looking at how web fingerprinting techniques have been used so far, it is very difficult to imagine companies moving from deliberate obscurity to full transparency and open communication with users. Fingerprinting companies will have to do what their predecessors in the cookie world did before now: face greater detection and exposure by coming clean about their practices, or slink even further behind the curtain, and hope to dodge European law.


When EFF first built Panopticlick in 2010, fingerprinting was largely a theoretical threat, in a world that was just beginning to wake up to the more obvious use of tracking cookies. Since then, we’ve seen more and more sites adopt the surreptitious methods we highlighted then, to disguise their behaviour from anti-tracking tools, or to avoid the increasing visibility and legal obligations of using tracking cookies within Europe.

With the GDPR in place, operating below the radar of European authorities and escaping the rules that apply to commercial fingerprinting will be very difficult and—potentially—very expensive. To avoid severe penalties, fingerprinting companies should, at the very least, be more upfront about their practices.

But that’s just in theory. In practice, we don’t expect the GDPR to make fingerprinting disappear any time soon, just as the ePrivacy Directive did not end the use of tracking cookies. The GDPR applies to any company that processes the personal data of individuals living within the European Economic Area for commercial purposes, or that monitors the behavior of individuals within the EEA. However, many non-EU sites that track individuals in Europe using fingerprinting may decide to ignore European law in the belief that they can escape the consequences. European companies will inevitably claim a "legitimate interest" in tracking, and may be prepared to defend this argument. Consumers may be worn down by requests for consent, or ignore artfully crafted confessions by the tracking companies.

The rationale behind fingerprinting, as it is used today, is to evade transparency and accountability and make tracking impossible to control. If this rationale holds, fingerprinters won’t be able to convince the EU’s courts and regulators that, indeed, it is their legitimate interest to do so. In fact, there’s nothing legitimate about this method of tracking: that’s what privacy laws like the GDPR recognize, and that’s what regulators will act upon. Before we see results of their actions, browser companies, standards organizations, privacy advocates, and technologists will still need to work together to minimize how much third-parties can identify about individual users just from their browsers.

[1] Article 29 Data Protection Working Party, Opinion 4/2007 on the concept of personal data; GDPR Rec. 26 and 30; Art 4 (1)

[2] Article 29 Data Protection Working Party, Opinion 05/2014 on Anonymisation Techniques, pp 11-12. Singling-out: "the possibility to isolate some or all records which identify an individual in the dataset."

[3] Article 29 Working Party, Opinion 05/2014 on Anonymisation Techniques, pp 11-12. Linkability: "the ability to link, at least, two records concerning the same data subject or a group of data subjects (either in the same database or in two different databases). If an attacker can establish (e.g. by means of correlation analysis) that two records are assigned to a same group of individuals but cannot single out individuals in this group, the technique provides resistance against ‘singling out’ but not against linkability."

[4] Article 29 Data Protection Working Party, Opinion 05/2014 on Anonymisation Techniques, pp 11-12. Inference: "the possibility to deduce, with significant probability, the value of an attribute from the values of a set of other attributes."

[5] Article 29 Data Protection Working Party, Opinion 4/2007 on the concept of personal data; see also Article 29 Data Protection Working Party, Opinion 9/2014 on the application of Directive 2002/58/EC to device fingerprinting.

[6] It is possible to collect information on a browser’s fingerprint without allowing for indirect identification of a user, and therefore without implicating "personal data" under the GDPR: for example, when no further operations, such as tracking user behaviour across the web or linking non-unique browser characteristics to other data about the user, take place. This would be unusual outside of rare cases like a fingerprinting research project. In any event, the ePrivacy Directive also applies to non-personal data. See Article 29 Data Protection Working Party, Opinion 9/2014 on the application of Directive 2002/58/EC to device fingerprinting; ePrivacy Directive Art 5(3).

[7] GDPR Rec 40 and Art. 5(1)(a)

[8] GDPR Rec 42 and Art. 4(11); Article 29 Data Protection Working Party, Guidelines on consent under Regulation 2016/679

[9] Article 29 Data Protection Working Party, Opinion 6/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC; GDPR Rec 47 and Art 6(1)(f)

[10] See Recital 47 EU GDPR, "The legitimate interests of a controller, including those of a controller to which the personal data may be disclosed, or of a third party, may provide a legal basis for processing, provided that the interests or the fundamental rights and freedoms of the data subject are not overriding, taking into consideration the reasonable expectations of data subjects based on their relationship with the controller."

[11] Article 29 Data Protection Working Party, Opinion 6/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC; GDPR Rec 47 and Art 6(1)(f)

[12] GDPR Art 13

[13] GDPR Art 21(2)

[14] See Article 29 Data Protection Working Party, Opinion 9/2014 on the application of Directive 2002/58/EC to device fingerprinting "The technology of device fingerprinting is not limited to the configuration parameters of a traditional web browser on a desktop PC. Device fingerprinting is not tied to a particular protocol either, but can be used to fingerprint a broad range of internet connected devices..." (p.4)

[15] Article 29 Data Protection Working Party, Opinion 9/2014 on the application of Directive 2002/58/EC to device fingerprinting

State Senators Kevin de León and Scott Wiener have joined forces to push their net neutrality bills through the Assembly Committee on Communications and Conveyance tomorrow as a joint package. This unified effort represents the most powerful way to move S.B. 460 and S.B. 822 together and to present Governor Brown with the strongest net neutrality bill in the country.

However, EFF has learned that their effort to move a strong package has been rejected by Communications and Conveyance Chairman Miguel Santiago. In essence, it appears now that the Chair of the Assembly Committee is ready to strike key provisions out of the legislative package on behalf of AT&T and Comcast rather than allow an up or down vote on the bills as they stand.

EFF had expressed concerns that lawmakers in Sacramento would be fooled into removing some of the strongest provisions designed to protect low-income Internet users after an intense lobbying campaign by AT&T, and it appears our concerns have been validated.

But we still have time to make our voices heard and declare that any changes to S.B. 822 that remove provisions on behalf of AT&T and Comcast are unacceptable. The Committee is still 24 hours away from voting on the bills. We need every Californian who supports net neutrality to call their Assemblymember right now, especially those who live in a district represented by a member of the Committee on Communications and Conveyance.

Never underestimate the power of making your voice heard. Never underestimate the power of our collective effort as Team Internet.

You can also make sure Chairman Santiago hears that California is watching and won’t stand for anything less than true, strong net neutrality protections. Tweet him to let him know.

Take Action

Tell Chairman Santiago to protect net neutrality

Using word searches to find infringement is a bad way to go about things. It is likely why Volkswagen filed three takedown requests for art of beetles. Not Beetles with four wheels and headlights. Beetles with six legs and hard, shiny carapaces. For the record, Volkswagen holds no rights to literal bugs.

Peggy Muddles is a scientist and an artist who marries her two lives by making science-themed art. Among her many digital prints are a number of works featuring beetles—the type of insect. And, well, Volkswagen was not having any of that.

Muddles sells some of her prints through the website RedBubble. On December 1, 2017, she received a takedown notice for her rove beetle art from Volkswagen. Now, the rove beetle is a common insect found throughout Europe. A Volkswagen Beetle is a car.

Volkswagen, it turns out, does not own beetles the insect, the largest group of animals on this planet. Nor does it own rove beetles, the largest group of beetles alive. And it does not own the depiction of the species Paederus fuscipes, the species Muddles depicted in her art.

In response, Muddles did the right thing: she consulted a lawyer, crafted a counter-notice explaining that her bug was not the same as a car named for a bug, and sent it to RedBubble and Volkswagen. "After VW’s option to pursue expired, I repeatedly attempted to contact RedBubble to have my listing reinstated, but received only automated replies indicating that my email had been received," Muddles told EFF. "After about two months, I chalked it up to a simple error and re-uploaded the design."

Oh, if only that were the end of it. Mistakes made, corrected, and everyone moves on having learned something. However, months later, in mid-2018, Muddles received two more takedowns for drawings of beetles from Volkswagen. Once again, the art was of insects and not cars.

Faced with the takedown of prints titled "Buprestic rufipes - red-legged Buprestis beetle" and "Rhipicera femoralis - feather horned beetle" (you can see how Volkswagen got confused and thought these were prints of cars), Muddles once again went to her lawyer.

This time, the lawyer sent a letter saying "the beetles that are the subject matter of our client’s works of art evolved over 300 million years ago, pre-dating the fine motor vehicles manufactured by your company by approximately 300 million years."

"The next morning we had an apology and my listings were reinstated," said Muddles in an email to EFF. "While my illustrations on RedBubble do not net me a huge amount of money, my sales there do contribute to my financial stability, and so I was immensely frustrated."

Muddles is lucky. She knew the law and had access to a lawyer. Not everyone is in her position. Deciding to file a counter-notice can be a very fraught thing, even if you know you’re in the right.

This is why it’s important that actual human eyes, backed by actual human judgment, look at things before takedown notices are sent. Simple logic says that a sweep for the word "beetle" is going to turn up a lot of false positives for Volkswagen. And not every artist is going to be as knowledgeable as Muddles.
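A minimal sketch makes the point concrete. The listing titles below are hypothetical, but a sweep for the bare word "beetle" flags insect art just as readily as anything car-related:

```python
# Hypothetical sketch of why a bare keyword sweep over listing titles
# produces false positives: "beetle" matches insect art just as
# readily as it matches car-related listings. All titles are invented.
listings = [
    "Rhipicera femoralis - feather horned beetle art print",   # insect art
    "Paederus fuscipes rove beetle illustration",              # insect art
    "Classic VW Beetle 1967 repair decal",                     # car-related
    "Red-legged Buprestis beetle watercolor",                  # insect art
]

def keyword_sweep(titles, keyword="beetle"):
    """Flag every title containing the keyword, with no human review."""
    return [t for t in titles if keyword in t.lower()]

flagged = keyword_sweep(listings)
# All four listings are flagged, but only one plausibly relates to
# Volkswagen's trademark -- a 75% false-positive rate on this toy data.
print(len(flagged))  # 4
```

Any sweep this crude needs a human review step between the match and the takedown notice; without one, the false positives become legal threats.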

It’s also concerning that RedBubble didn’t get back to Muddles after the December incident. It should not take a sternly worded letter from a lawyer, after a third ridiculous takedown notice, to get a response. Once Muddles sent her counter-notice and human eyes confirmed that there was no infringement going on, her art should have been restored to the site.

At the very least, she shouldn’t have had to guess whether or not she was in the clear. Especially since receiving repeated, unresolved takedown notices can result in someone losing their account on a site. She should have known if she had a strike against her or not.

This kind of story really bugs us. And, in case Volkswagen is reading, that’s in the colloquial sense, not the car.

California’s net neutrality bill, S.B. 822, is often referred to as the "gold standard" of state-based net neutrality laws. The bill tackles the full array of issues the FCC had addressed right up until the end of 2016 before it began repealing net neutrality. One such issue is the discriminatory use of zero rating, where ISPs could choose to give users access to certain content for "free"—that is, without digging into their data plans. ISPs can use zero rating to drive users to their own content and services to the detriment of competitors.

The FCC found that both AT&T’s and Verizon’s use of zero rating appeared to be in violation of the 2015 Open Internet Order, only to have those findings and investigations terminated as one of the first acts of President Trump’s FCC Chairman Ajit Pai. The core issue is the fact that companies like AT&T were simply exempting their own affiliated services from their data caps in a blatant effort to drive wireless Internet users to their preferred products. Undoubtedly, AT&T’s recent victory over the Department of Justice’s antitrust lawsuit that sought to prevent the giant telecom company from becoming even bigger with Time Warner content will result in even greater levels of self-dealing through discriminatory zero rating policies.

California’s legislature has so far opted to ban discriminatory uses of zero rating and prevent the major wireless players from picking winners and losers online. But new and increased resistance by the ISP lobby (led by AT&T and their representative organization CALinnovates) unfortunately has legislators contemplating whether discriminatory zero rating practices should remain lawful despite the harm they do to low-income Internet users. In fact, AT&T and their representatives are even going so far as to argue that their discriminatory self-dealing practices that violate net neutrality are actually good for low-income Internet users.

S.B. 822’s Zero Rating Provisions Ensure Low-Income Internet Users Get the Same Internet as All Other Internet Users

Studies by the Pew Research Center show that when an Internet user has limited income to purchase Internet access, they opt to get their entire Internet usage from a wireless device. As a result, the zero rating policies of wireless ISPs have a profound impact on shaping those users’ Internet experience. Users who depend on their wireless device for Internet access are highly likely to pay overage fees when they try to take advantage of the full, open web. These overage fees are part of a scheme to force wireless Internet users to use only the products and services that the wireless ISP has exempted from its own arbitrary data caps—and to punish users when they stray from those products and services. The CTIA’s own study confirms that carriers can drive Internet users to their chosen zero-rated products to the detriment of potentially superior services.
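The economics are easy to see with a back-of-the-envelope sketch. The cap and overage figures below are hypothetical, not any real carrier's pricing; the point is only the structure, where identical traffic costs nothing from the carrier's own service and real money from a competitor:

```python
# Illustrative arithmetic with hypothetical plan numbers (not any real
# carrier's pricing): a data cap plus zero rating makes the carrier's
# own service "free" while a competitor's identical traffic costs extra.
CAP_GB = 5             # monthly data cap (hypothetical)
OVERAGE_PER_GB = 15.0  # dollars per GB over the cap (hypothetical)

def monthly_overage(streaming_gb, zero_rated):
    """Overage fee for a user who streams streaming_gb of video a month."""
    counted = 0 if zero_rated else streaming_gb  # zero-rated traffic is exempt
    return max(0, counted - CAP_GB) * OVERAGE_PER_GB

# The same 20 GB of video, radically different bills:
print(monthly_overage(20, zero_rated=True))   # carrier's own service: 0.0
print(monthly_overage(20, zero_rated=False))  # competitor: 225.0
```

Since the cap itself is set by the carrier, the carrier controls both the scarcity and the exemption from it; that is the self-dealing S.B. 822 targets.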

This is why California organizations that promote the digital civil rights of communities of color—such as the Center for Media Justice and Color of Change as well as experts who represent low income Californians such as the Western Center on Law and Poverty—have all come out in strong support for S.B. 822’s zero rating provisions.

S.B. 822 bans this practice of self-dealing and discriminatory gatekeeping by ISPs outright, which is why those same ISPs will fight to take it out of the legislation before it becomes law. It is why they are actively attempting to mislead legislators in Sacramento with bogus, superficial studies from groups that represent ISP interests, like CALinnovates. Those studies ignore the fact that the data cap is an artificial construct designed to raise rates on wireless users, and that zero rating is how ISPs exploit that structure. Simply declaring that the ISP’s selected services carry no additional fees does not benefit Internet users, and nothing about the current structure is “free”: we have all compensated companies like AT&T and Verizon to the tune of $26 billion in profits in 2016 alone.

Without the ability to profit from discriminatory conduct, the wireless carriers will lose the financial incentive to use zero rating to create an inferior wireless Internet for those with limited income and will no longer be able to exploit their gatekeeper power.

Do Not Forget That the FCC Found That AT&T’s Zero Rating Practices Violated Net Neutrality Right Up Until It Began Repealing Net Neutrality

The FCC’s core issue with AT&T’s zero rating practices was that AT&T explicitly exempted its own products, such as DirecTV, while capping products that would compete with DirecTV. In effect, using something that was not owned by AT&T was more expensive for their wireless users, forcing users with limited income to use only what AT&T had blessed. Even the Trump Administration’s Department of Justice, in its antitrust lawsuit against AT&T, cited concerns with the company weaponizing its ownership of content (in this instance HBO) against online video competitors. The only federal entity that did not seem concerned with AT&T’s discriminatory practices was the current FCC, which intentionally abandoned oversight over the industry and is even contemplating a new proposal by AT&T to impair private competition to the incumbents today.

Upholding S.B. 822 means upholding a free, open Internet for all Californians. Without it, ISPs may have free rein to create two Internets, divided by how much income you have, to the benefit of their own services and partners. With AT&T's recent victory in the courts over the Department of Justice and the expiration of federal net neutrality rules, S.B. 822's net neutrality protections have become more important than ever.

Take Action

Defend net neutrality in California


Vint Cerf, Tim Berners-Lee, and Dozens of Other Computing Experts Oppose Article 13

As Europe's latest copyright proposal heads to a critical vote on June 20-21, more than 70 Internet and computing luminaries have spoken out against a dangerous provision, Article 13, that would require Internet platforms to automatically filter uploaded content. The group, which includes Internet pioneer Vint Cerf, the inventor of the World Wide Web Tim Berners-Lee, Wikipedia co-founder Jimmy Wales, co-founder of the Mozilla Project Mitchell Baker, Internet Archive founder Brewster Kahle, cryptography expert Bruce Schneier, and net neutrality expert Tim Wu, wrote in a joint letter that was released today:

By requiring Internet platforms to perform automatic filtering of all of the content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet, from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users.

The prospects for the elimination of Article 13 have continued to worsen. Until late last month, there was hope that the Member States (represented by the Council of the European Union) would find a compromise. Instead, their final negotiating mandate doubled down on it.

The last hope for defeating the proposal now lies with the European Parliament. On June 20-21 the Legal Affairs (JURI) Committee will vote on the proposal. If it votes against upload filtering, the fight can continue in the Parliament's subsequent negotiations with the Council and the European Commission. If not, then automatic filtering of all uploaded content may become a mandatory requirement for all user content platforms that serve European users. Although this will pose little impediment to the largest platforms such as YouTube, which already uses its Content ID system to filter content, the law will create an expensive barrier to entry for smaller platforms and startups, which may choose to establish or move their operations overseas in order to avoid the European law.

For those platforms that do establish upload filtering, users will find that their contributions—including video, audio, text, and even source code—will be monitored and potentially blocked if the automated system detects what it believes to be a copyright infringement. Inevitably, mistakes will happen. There is no way for an automated system to reliably determine when the use of a copyright work falls within a copyright limitation or exception under European law, such as quotation or parody.
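A stripped-down sketch of fingerprint-style matching shows why. The filter below (hypothetical, stdlib-only) hashes overlapping word windows and asks only "does this match a registered work?"; it has no way to ask whether a match is a lawful quotation or parody:

```python
# Minimal sketch of fingerprint-style upload filtering and why it
# cannot recognize lawful uses: it only asks "does this match a
# registered work?", never "is this a quotation, parody, or other
# permitted use?". The texts and threshold are invented.
import hashlib

def shingles(text, n=6):
    """Hash every n-word window -- a crude content fingerprint."""
    words = text.lower().split()
    return {hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
            for i in range(len(words) - n + 1)}

registered_work = "to be or not to be that is the question whether tis nobler"
index = shingles(registered_work)

def filter_upload(upload):
    return "BLOCKED" if shingles(upload) & index else "ALLOWED"

# A critic quoting a short passage for commentary -- lawful in most of
# Europe under the quotation exception -- still trips the filter:
review = "the line to be or not to be that is the question still resonates"
print(filter_upload(review))   # BLOCKED, despite being lawful
print(filter_upload("an original essay about beetles and copyright law"))  # ALLOWED
```

Whether a use falls within an exception depends on context and purpose, which no amount of better matching recovers from the bytes of the upload alone.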

Moreover, because these exceptions are not consistent across Europe, and because there is no broad fair use right as in the United States, many harmless uses of copyright works in memes, mashups, and remixes probably are technically infringing even if no reasonable copyright owner would object. If an automated system monitors and filters out these technical infringements, then the permissible scope of freedom of expression in Europe will be radically curtailed, even without the need for any substantive changes in copyright law.

The upload filtering proposal stems from a misunderstanding about the purpose of copyright. Copyright isn't designed to compensate creators for each and every use of their works. It is meant to incentivize creators as part of an effort to promote the public interest in innovation and expression. But that public interest isn't served unless there are limitations on copyright that allow new generations to build and comment on the previous contributions. Those limitations are both legal, like fair dealing, and practical, like the zone of tolerance for harmless uses. Automated upload filtering will undermine both.

The authors of today's letter write:

We support the consideration of measures that would improve the ability for creators to receive fair remuneration for the use of their works online. But we cannot support Article 13, which would mandate Internet platforms to embed an automated infrastructure for monitoring and censorship deep into their networks. For the sake of the Internet’s future, we urge you to vote for the deletion of this proposal.

What began as a bad idea offered up to copyright lobbyists as a solution to an imaginary "value gap" has now become an outright crisis for the future of the Internet as we know it. Indeed, if those who created and sustain the operation of the Internet recognize the scale of this threat, we should all be sitting up and taking notice.

If you live in Europe or have European friends or family, now could be your last opportunity to avert the upload filter. Please take action by clicking the button below, which will take you to a campaign website where you can phone, email, or Tweet at your representatives, urging them to stop this threat to the global Internet before it's too late. 


It’s not just the Department of Justice and the FBI that want to undermine your right to private communications and secure devices—some state lawmakers want to weaken encryption, too. In recent years, a couple of state legislatures introduced bills to restrict or outright ban encryption on smartphones and other devices. Fortunately, several Congress members recently introduced their own bill to stop this dangerous trend before it goes any further.

The bill is called the ENCRYPT Act. EFF gladly supports it and thanks Representatives Ted Lieu (D-CA), Mike Bishop (R-MI), Suzan DelBene (D-WA), and Jim Jordan (R-OH) for sponsoring and co-sponsoring the bill.

Encryption—the technology used to secure data on phones and computers and keep digital messages safe from eavesdroppers—is under threat around the world. In the U.S., some of those threats have come from the Department of Justice and FBI, which want technology companies to purposefully and irresponsibly weaken encryption so that law enforcement can more easily get their hands on the contents of encrypted data and messages.
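To see what is at stake, here is a toy, stdlib-only sketch of the basic property: without the key, the ciphertext is unreadable; with it, the message recovers exactly. Real devices use vetted ciphers such as AES rather than this one-time-pad illustration, but the all-or-nothing role of the key is the same:

```python
# Toy one-time-pad illustration (stdlib-only) of the property at stake:
# without the key, ciphertext is unreadable noise; with the key, the
# message recovers exactly. Real systems use vetted ciphers (e.g. AES),
# not this sketch, but the role of the key is the same.
import secrets

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # random key, as long as the message

ciphertext = xor_bytes(message, key)     # what an eavesdropper sees
recovered = xor_bytes(ciphertext, key)   # what the key holder sees

assert recovered == message
# "Weakening encryption" means arranging for someone other than the
# user -- a manufacturer, or law enforcement -- to also hold `key`.
```

The legislative fights described below are, at bottom, fights over who else gets to hold that key.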

But the threats have come from individual U.S. states, too.

Two years ago, lawmakers in California and New York introduced statewide legislation that would’ve significantly limited their residents’ access to encrypted devices and services. In California, for example, Assembly Bill 1681 would have originally required that any smartphone sold in the state be "capable of being decrypted and unlocked by its manufacturer or its operating system provider." To help compel this, manufacturers could have been subject to fines of $2,500 for every non-compliant device sold in the state.

This piecemeal approach to encryption is not just wrong-headed, it simply won’t work. If state legislatures individually meddle with encryption policy, we could see a landscape where Illinois residents can buy the latest iPhone and download messaging apps like Signal and WhatsApp, but Californians can’t. But the California and New York state bills, intended to help law enforcement catch criminals, ignored the reality that people could still cross into states where the technology is unrestricted to purchase encrypted devices. What’s more, it would be trivially easy for anyone to download encrypted messaging apps online, regardless of state laws.

The ENCRYPT Act would make sure this scenario doesn’t come to pass. In fact, the bill was originally introduced in 2016 as a bulwark against the California and New York state bills—both of which failed on their own.

The ENCRYPT Act would prevent U.S. states and local governments from compelling companies to weaken their encrypted products or store decryption keys for use on demand by law enforcement. It would also prevent states from prohibiting the sale and offering of certain devices and services based solely on their encryption capabilities. That means everyone across the United States, no matter what state they live in, could have equal access to strong encryption.

Of course, there are threats to encryption at the federal level as well, which is why EFF also supports the Secure Data Act. The Secure Data Act, which also has bipartisan sponsorship, would act as a perfect complement to the ENCRYPT Act by prohibiting courts and federal agencies from mandating weakened encryption or otherwise intentionally introducing security vulnerabilities. Together, the two bills would go a long way toward ensuring that strong encryption remains free of government interference in the United States.

In the meantime, the ENCRYPT Act gets encryption policy right. Your zip code shouldn’t determine your digital security.

On Monday, June 11, the FCC's rollback of net neutrality rules goes into effect, but don't expect the Internet to change overnight.

We still have promising avenues to restore net neutrality rules, meaning that Internet Service Providers need to be careful how much ammunition they give us in that political fight. If they're overt about discrimination or gouging customers, they increase the chance that we'll succeed and restore binding net neutrality rules.

Much like the ten years before the Open Internet Order in 2015, ISPs are still disciplined by the threat of regulation if they generate too many examples of abuse.

What will happen, though, and what we have already seen under the Trump FCC, is that ISPs play games at the margins. Both landline and mobile ISPs with data caps have already been pushing customers to particular services and media with zero-rating and throttling. And they've been pushing hard to stick us all in slow lanes unless the sites we visit pay protection money -- Verizon even told federal judges it would do this if there were no net neutrality rules.

ISPs stand to gain from creating artificial scarcity -- reducing the available bandwidth to reach their customers so they can make people bid for the privilege. We know this because they turn down offers to build up the infrastructure that would prevent congestion, as when Netflix offered to build a content delivery network for Comcast, for free. Comcast refused and was ultimately able to use congestion to force Netflix to pay up.

Removing net neutrality won't lead to more investment but rather less, because it means ISPs have the option of auctioning off limited access to customers.

You can look forward to an Internet that's slower when you're trying to visit less popular sites, and where online services get a bit more expensive because they have to pay protection money to the ISPs. It will be harder for new companies to come in and compete with the ones that paid for fast lanes, and the nonprofit information resources on the web will be harder to use.

It's not going to be a flashy apocalypse; it will be a slow decline into the Internet of ISP gatekeeping, and you probably won't even know what neat services and helpful resources you're missing. And one day, when the ISPs are secure in their victory, they'll test the waters and see if you'll pay extra to access anything that's not Facebook, or Comcast's video platform, or AT&T's paying partners.

There's still time to avoid this future, though. We won in the Senate and now it's time for the House of Representatives to vote to reinstate the Open Internet Order and protect the neutral, vibrant Internet.

Take Action

Save the net neutrality rules

Last week, the New York Times and others reported that Facebook allowed hardware companies, including some in China, access to a broad range of Facebook users’ information, possibly without the users’ knowledge or consent. This included not only a given user’s personal information, but also that of their Facebook friends and friends-of-friends.

Right now, it's unclear precisely how much Facebook user data was shared through partnerships with third-party hardware manufacturers—but it is clear that Facebook has a consent problem. And the first step toward solving that problem is greater transparency about the full extent of Facebook’s data-sharing practices.

It might be tempting to think that the solution is for Facebook to cut off third-party hardware manufacturers and app developers entirely, but that would be a mistake. The solution to this latest issue is not to lock away user information. If we choose that as our aim, we risk enshrining Facebook as the sole guardian of its users’ data and leaving users with even less power to use third-party tools that they do trust to explore the data held by Facebook and hold the company accountable.

Instead, the problem is Facebook’s opacity about its data sharing practices. Facebook should have made available a list of all the third parties that might have had access to users’ data even after those users made clear they did not want their data shared. Facebook said that its agreements with device partners "strictly limited use of [user] data, including any stored on partners’ servers," but more transparency is necessary if Facebook is to gain users’ informed consent and fulfill their right to know who has their personal data.

Understanding how this happened—and why the resolution should be transparency, not locking away data—requires a brief smartphone history lesson. About 10 years ago, app stores did not exist, and apps like Facebook were not widely available on most phones and mobile operating systems. To get Facebook on more phones, the company built "device-integrated" APIs that allowed device manufacturers to write and serve their own version of Facebook-like experiences for their users. Over the past decade, Facebook partnered with about 60 device manufacturers for this purpose—but the scope of these partnerships had not been fully reported until now.

The revelations of Facebook’s device partnerships seem to be inconsistent with reasonable interpretations of Facebook’s privacy settings and recent API changes, announcements, and even congressional testimony in the wake of Cambridge Analytica. The New York Times report also questions whether the sharing agreements violate a 2011 consent decree Facebook reached with the FTC, which required Facebook to get explicit consent before changing the way it shares users’ data.

Facebook changed its Graph API in 2015 to limit third-party developers’ access to users’ friends’ and friends-of-friends' data. But even after that change, device manufacturers—another type of third party—could still obtain data about a user’s Facebook friends and friends-of-friends, even those who had changed their settings to ostensibly prevent third-party sharing. In response to allegations that this violates the FTC consent decree, Facebook pointed out a difference in the legal consent requirements when sharing user friend data with third-party "developers" as opposed to with third-party "service providers."
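The structural problem with friend permissions can be sketched in a few lines. In the toy graph below (all names and fields invented), a single user's opt-in exposes data belonging to friends who never consented:

```python
# Hypothetical sketch of why "friend permissions" leak: when one user
# authorizes a third party, that party can read data belonging to
# friends who never consented. All names and fields are invented.
social_graph = {
    "alice": {"friends": ["bob", "carol"], "hometown": "Fresno"},
    "bob":   {"friends": ["alice"],        "hometown": "Oakland"},
    "carol": {"friends": ["alice"],        "hometown": "San Jose"},
}

def data_visible_to_partner(consenting_user):
    """Everything a partner sees when only `consenting_user` opted in."""
    visible = {consenting_user: social_graph[consenting_user]}
    for friend in social_graph[consenting_user]["friends"]:
        visible[friend] = social_graph[friend]  # friends never consented
    return visible

exposed = data_visible_to_partner("alice")
print(sorted(exposed))  # ['alice', 'bob', 'carol'] -- three people, one consent
```

On a networked service, consent granted by one node always implicates its neighbors, which is why per-user settings alone cannot resolve the problem.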

But to users, this is just a new twist on Cambridge Analytica: Facebook has shared our and our friends’ information with third parties without our knowledge or consent, and we only learn about it after the genie is already out of the bottle.

Protecting user privacy on a networked service poses a unique challenge—and Facebook has consistently failed to rise to that challenge. Much of the value of using Facebook comes from being able to see and engage with information from friends, raising the question of who must reasonably consent to what kind of sharing and to what degree. Until Facebook can navigate user expectations around meaningful, informed, ongoing consent and the transparency that requires, the company will continue to face these scandals and users’ trust in it will continue to diminish.

California Can Lead the Way in Open Access
Mon, 11 Jun 2018 20:26:02 +0000

There’s a bill in the California legislature that would be a huge win for open access to scientific research. The California Assembly recently passed A.B. 2192 unanimously. We hope to see it pass the Senate soon, and for other states to follow California’s lead in passing strong open access laws.

Under A.B. 2192, all peer-reviewed, scientific research funded by the state of California would be made available to the public no later than a year after publication. Under current law, research funded by the California Department of Public Health is covered by an open access law, but that provision is set to expire in 2020. A.B. 2192 would extend it indefinitely and expand it to cover research funded by any state agency.

A.B. 2192 is a huge step in the right direction. When scientific research is available only to people with access to expensive journal subscriptions or subscription-based academic databases, it puts those without institutional connections at a severe disadvantage.

When EFF’s Ernesto Falcon testified to the CA Assembly on A.B. 2192, he pointed out that locking science behind a paywall often has the unintended consequence of keeping that research out of the hands of the people who most need it.

In 2012, malaria researcher Bart Knols noted that while Western societies had made great advances in treatments for malaria, progress was slow going in sub-Saharan Africa. The cause for this disparity? More than half of the requisite information researchers needed for treatments was locked behind a paywall (while the other half was free to access). Researchers and medical professionals in some of the most impoverished parts of the world simply could not make use of the knowledge that had already been established.

While the California bill would be a big win for open access, it leaves a few things to be desired. Under the bill, grantees would be required to put their works in a state-provided open access repository within a year of publication. An earlier version of the bill set that embargo period at six months, but it was changed to a year under pressure from lobbyists.

It’s not a coincidence that the 12-month embargo matches the one set by most federal agencies that fund scientific research: since 2013, when the White House directed government agencies to adopt open access policies, publishers have largely fallen in line with the one-year embargo period. (We’ve also been advocating for years that Congress pass a bill to lock the U.S. government’s open access policies into law.)

But let’s face it: science moves quickly and a one-year embargo is simply too long. In our letter to the Legislature about A.B. 2192, we urged lawmakers to find ways to ensure that more state-funded research is published under a gold open access model; that is, published in open access journals, available to the public with no fee:

EFF recommends the legislature also consider additional ways to ensure that more state-funded research becomes available to the public immediately upon publication, not just within the six-month embargo period the bill permits. In the fast-moving world of scientific research, a six-month embargo can put scientists without access to paid repositories at a severe disadvantage. One way to achieve that goal would be to require that publications be either shared in a public repository upon publication or published in an open access journal, similar to the University of California system’s excellent open access policy.

We also urged the legislature to consider passing an open licensing requirement for the research that it funds. Requiring that grantees publish research under a license that allows others to republish, remix, and add value ensures that the public can get the maximum benefit of state-funded science.

We hope to see A.B. 2192 pass quickly and become a model for similar open access laws in other states.

June 11, 2018 is the day that the FCC’s so-called "Restoring Internet Freedom Order" goes into effect. This represents the FCC’s abdication of authority in upholding the hard-won net neutrality protections of the 2015 Open Internet Order. But this does not mean the fight is over.

While the FCC ignored the will of the vast majority of Americans and voted not to enforce bans on blocking, throttling, and paid prioritization, it doesn’t get the final say. Congress, states, and the courts can all work to restore these protections. As we have seen, net neutrality needs and deserves as many strong protections as possible, be they state or federal. ISPs who control your access to the Internet shouldn’t get to decide how you use it once you get online.

Three states (Oregon, Washington, and Vermont) have passed state net neutrality laws. Six more (Hawai’i, Montana, New Jersey, New York, Rhode Island, and Vermont) have executive orders doing the same. Overall, 35 states have some form of net neutrality protections in the works.

Congress can overturn the FCC’s decision and reinstate the 2015 Open Internet Order with a simple majority vote under the Congressional Review Act (CRA). It passed the Senate on May 16 by a vote of 52-47. So now we have to ask the House of Representatives to follow suit. Even though House leadership has said they will not schedule a vote, one can still be called if a majority of representatives sign a discharge petition.

You can see where your representative stands and email them to support the CRA here. Now that the FCC repeal is in effect, we need to tell the House to restore protections and keep large ISPs from changing how we use the Internet.

Take Action

Save the net neutrality rules

Earlier this week, the Senate Homeland Security and Governmental Affairs Committee held a hearing on the Preventing Emerging Threats Act of 2018 (S. 2836), which would give the Department of Justice and the Department of Homeland Security sweeping new authority to counter malicious drones. Officials from both those agencies as well as the Federal Aviation Administration were present to discuss the government’s current response to drones, and how it would like to be able to respond. Interestingly, both the Senators and the witnesses seem to agree that there are some large, unresolved constitutional questions in this bill. In light of those questions, EFF strongly opposes this bill. 

Among other things, the bill would authorize DOJ and DHS to "track," "disrupt," "control," "seize or otherwise confiscate," or even "destroy" unmanned aircraft that pose a "threat" to certain facilities or areas in the U.S. The bill also authorizes the government to "intercept" or acquire communications around the drone for these purposes, which could be read to include capturing video footage sent from the drone. Most concerning, many of the bill’s key terms are undefined, but it is clear that it provides extremely broad authority, exempting officials from following procedures that ordinarily govern electronic surveillance and hacking, such as the Wiretap Act, Electronic Communications Privacy Act, and the Computer Fraud and Abuse Act.

Given the breadth of these proposed new powers, you would expect officials to have a strong case for passing the bill. But even after the hearing, it’s not clear why DHS and DOJ need any expanded authority to go after "malicious" drones. For example, the FAA already has the ability to impose public flight restrictions for non-military aircraft, including drones. S. 2836 would expand those restrictions to any "covered facility or asset," but does not narrowly define what is covered. Instead, the Secretary of Homeland Security or the Attorney General can make that determination, on their own, without public input and without public notice. While Committee Chairman Ron Johnson claimed that the new authority would not give DHS the authority to "knock down drones flying around your backyard," that’s not exactly true.

The authorities in S. 2836 are explicitly written to support DHS missions, including those from U.S. Immigration and Customs Enforcement and U.S. Customs and Border Protection. If your backyard is on the border in San Diego or El Paso, nothing in this bill prevents DHS from determining that your backyard is now a covered area, allowing federal law enforcement to intercept or destroy a drone overhead. EFF has been concerned about government-owned drones operating along the border, capturing images of innocent bystanders on their own property. If this bill passes, DHS could access those same images, revealing those bystanders’ private activities and lives.

Even if the need were clear, the new authority would need to be specific and narrow to guard against misuse. But in fact many of the actions authorized by the bill are vague and even contradictory. On one hand, officials are authorized to "disrupt" threatening drones by "intercepting… communications used to control the unmanned aircraft" and are specifically exempted from following the Wiretap Act, which imposes stringent requirements to go to court for a "super warrant" before intercepting communications. On the other, the bill directs the agencies to develop procedures to ensure that interceptions are conducted "consistent with the Fourth Amendment . . . and applicable provisions of Federal law." The problem is that the most "applicable" federal law is the Wiretap Act, which is often said to embody the requirements of the Fourth Amendment as well. Wiretaps are among the most sensitive forms of surveillance, and the bill seems to allow the government to dispense with these protections merely on the say-so of DOJ and DHS officials.

Similarly, the bill does not define "threat," except to say the Attorney General and DHS Secretary determine what actions are necessary to mitigate threats. Hayley Chang, Deputy General Counsel of DHS, suggested that it was not necessary to define "threat" in this bill, because "we don’t want to be back here in six months asking for new authorities." However, if Congress does not define what threats DHS is allowed to target, this authority could be used to prevent journalists and private citizens from capturing footage of government activities or other legitimate news events. Additionally, S. 2836 would allow states to request federal law enforcement support at "mass gatherings," which could include protests or other First Amendment-protected activities. It is easy to imagine a scenario where drone footage of protesters clashing with police could be perceived as a threat and destroyed before the public views it, especially because the bill explicitly allows the agencies to "intercept, acquire or access communications" without a warrant and without any court oversight, even after the fact. The seizure and destruction of drones also raises due process and additional Fourth Amendment questions, none of which are adequately cabined by the bill’s language. Again, even though DHS and DOJ say that they do not intend to use all of the broadly constructed powers in this bill, there is nothing to prevent them from doing so.

To be clear, the government may have legitimate reasons for engaging drones that pose an actual, imminent, and narrowly defined "threat." And the Department of Defense already has the authority to take down drones, but only in much more narrowly circumscribed areas directly related to enumerated defense missions. EFF is aware of the threat that drones can pose to public safety and privacy—we have been concerned about government drones for a long time. While we definitely see a need for protection, we don’t think the solution requires handing the government such unfettered authority to interfere with drones.

S. 2836 is currently scheduled for a markup in the Homeland Security & Governmental Affairs Committee on June 13. During the hearing, Ms. Chang acknowledged the concerns with giving the agencies the "broad, categorical" carve-out from Title 18, but still didn’t articulate why the agencies should be allowed to sidestep current law with broad, vague authorities. We don’t think they should, and we recommend that the Committee not pass this bill.

In what now appears to be an annual ritual, a bad right of publicity law is being rushed through at the end of the legislative session in New York. Assembly Bill 8155-B (and its counterpart Senate Bill 5857-B) would dramatically expand New York’s right of publicity, making it a property right that can be passed on to your heirs – even if you aren’t a New York resident. EFF has sent a memorandum [PDF] to members of the New York State Legislature urging them not to support the bill.

The right of publicity is an offshoot of state privacy law that gives a person the right to limit the public use of her name, likeness, or identity for commercial purposes. A limited version of this right makes sense—for example, allowing you to stop a company falsely claiming that you endorse its products. But the right of publicity has been expanded in recent years thanks to misguided legislation and court decisions. In some states, the right covers just about any speech that even "evokes" a person’s identity. Celebrities have brought right of publicity cases against movies, rap lyrics, magazine features, and computer games. The right of publicity has even been invoked to silence criticism of celebrities. Since the right of publicity can impact a huge range of speech, any changes to the law should be considered carefully.

We are asking New York legislators to oppose this bill. It has several problems, including:

  • Reframing the Right of Publicity as a Property Right: The bill would reframe a well-established privacy right into a freely transferable property right. But the right of publicity only makes sense as a cause of action that gives people control over their own image. In this sense, it can be seen as a form of false advertising law. When the right is treated like property that can be assigned, celebrities can lose control. For example, a celebrity might assign publicity rights to settle a debt and then find her image pasted over advertisements for products or causes the celebrity finds reprehensible.
  • Pressuring heirs to commercialize the image of deceased relatives: In a large estate, an inheritable and transferable right of publicity may add to the tax burden, leaving heirs with no choice but to pursue advertising deals or some other commercial venture.
  • Creation of an unprecedented worldwide right: The bill would turn the State of New York into a litigation destination for celebrities from all over the world.
  • Unconstitutionally vague provisions: The bill includes a provision prohibiting use of a digital replica in a "pornographic work." But the bill does not include a definition of pornographic work and that term does not have a settled legal meaning (and appears to be broader than the First Amendment obscenity standard). Many works of art include R-rated depictions of real persons, including public figures who died within 40 years of the film being produced. These include award-winning movies such as Henry and June, Before Night Falls, and Milk. The bill’s vague statutory language will likely chill creative works protected by the First Amendment. 

Right of publicity expert Jennifer Rothman has listed some additional problems with the bill. We hope legislators take the time to consider all of these objections and oppose the latest attempt to expand the right of publicity.