EFF's Deeplinks Blog: Noteworthy news from around the internet

For all intents and purposes, the fate of net neutrality this year rests entirely in the hands of a majority of the House of Representatives. For one thing, the Senate has already voted to reverse the FCC. For another, if 218 members of the House sign a discharge petition and force a vote to the floor, nothing can stop it procedurally. This represents the last, best chance for a 2018 end to the FCC’s misguided abandonment of consumer protection authority over ISPs such as Comcast and AT&T.

But we need you to take the time to contact your elected officials and make your voice heard. Do not underestimate your power to protect the Internet. You’ve done it before, when we stopped Congress from passing the Stop Online Piracy Act (SOPA) as it barreled toward passage. We’ve even done it on net neutrality just this year. Every time it seemed the ISP lobby had control over the state legislative process and was going to ruin progress on net neutrality laws, we collectively overcame their influence. In fact, every state that has passed net neutrality legislation as interim protection so far has done so on a bipartisan basis.

That should come as no surprise as 86 percent of Americans opposed the FCC decision to repeal net neutrality. At the end of the day the House of Representatives is the political body that is explicitly designed to represent the majority opinion in this country. That means you, your friends, and your family have to speak out now to force the change. No amount of special interest influence is more important or more powerful than Team Internet.

To help you make your voice heard, EFF has provided a guide on how to contact your Member of Congress and navigate the process of meeting your representative. You can also look up who represents you by going here and contact them.

Take Action

Tell Congress to Sign the Discharge Petition to Support Net Neutrality

The Senate passed a new version of the Music Modernization Act (MMA) as an amendment to another bill this week, a marked improvement over the version passed by the House of Representatives earlier in the year. This version contains a new compromise amendment that could preserve early sound recordings and increase public access to them.

Until recently, the MMA (formerly known as the CLASSICS Act) was looking like the major record labels’ latest grab for perpetual control over twentieth-century culture. The House of Representatives passed a bill that would have given the major labels—the copyright holders for most recorded music before 1972—broad new rights in those recordings, ones lasting all the way until 2067. Copyright in these pre-1972 recordings, already set to last far longer than even the grossly extended copyright terms that apply to other creative works, would a) grow to include a new right to control public performances like digital streaming; b) be backed by copyright’s draconian penalty regime; and c) be without many of the user protections and limitations that apply to other works.

Fundamentally, Congress should not be adding new rights in works created decades ago.

The drafting process was also troubling. It seemed a return to the pattern of decades past, where copyright law was written behind closed doors by representatives from a few industries and then passed by Congress without considering the views of a broader public. Star power, in the form of famous musicians flown to Washington to shake hands with representatives, eased things along.

Two things changed the narrative. First, a broad swath of affected groups spoke up and demanded to be heard. Tireless efforts by library groups, music libraries, archives, copyright scholars, entrepreneurs, and music fans made sure that the problems with MMA were made known, even after it sailed to near-unanimous passage in the House. You contacted your Senators to let them know the House bill was unacceptable to you, and that made a big difference.

Second, the public found a champion in Senator Ron Wyden, who proposed a better alternative in the ACCESS to Recordings Act. Instead of layering bits of federal copyright law on top of the patchwork of state laws that govern pre-1972 recordings, ACCESS would have brought these recordings completely under federal law, with all of the rights and limitations that apply to other creative works. While that still would have brought them under the long-lasting and otherwise deeply-flawed copyright system we have, at least there would be consistency.

Weeks of negotiation led to this week’s compromise. The new "Classics Protection and Access Act" section of MMA clears away most of the varied and uncertain state laws governing pre-1972 recordings, and in their place applies nearly all of federal copyright law. Copyright holders—again, mainly record labels—gain a new digital performance right equivalent to the one that already applies to recent recordings streamed over the Internet or satellite radio. But older recordings will also get the full set of public rights and protections that apply to other creative work. Fair use, the first sale doctrine, and protections for libraries and educators will apply explicitly. That’s important, because many state copyright laws—California’s, for example—don’t contain explicit fair use or first sale defenses.

The new bill also brings older recordings into the public domain sooner. Recordings made before 1923 will exit from all copyright protection after a 3-year grace period. Recordings made from 1923 to 1956 will enter the public domain over the next several decades. And recordings from 1957 onward will continue under copyright until 2067, as before. These terms are still ridiculously long—up to 110 years from first publication, which is longer than any other U.S. copyright. But our musical heritage will leave the exclusive control of the major record labels sooner than it would have otherwise.

The bill also contains an "orphan works"-style provision that could allow for more use of old recordings even if the rightsholder can’t be found. By filing a notice with the Copyright Office, anyone can use a pre-1972 recording for non-commercial purposes, after checking first to make sure the recording isn’t in commercial use. The rightsholder then has 90 days to object. And if they do, the potential user can still argue that their use is fair. This provision will be an important test case for solving the broader orphan works problem.

The MMA still has many problems. With the compromise, the bill becomes even more complex, extending to 186 pages. And fundamentally, Congress should not be adding new rights in works created decades ago. Copyright law is about building incentives for new creativity, enriching the public. Adding new rights to old recordings doesn’t create any incentives for new creativity. And copyrights as a whole, including sound recording copyrights, still last for far too long.

Still, this compromise gives us reason for hope. Music fans, non-commercial users, and the broader public have a voice—a voice that was heard—in shaping copyright law as long as legislators will listen and act.

Hill-Climbing Our Way to Defeating DRM
Tue, 18 Sep 2018 20:17:44 +0000

Computer science has long grappled with the problem of unknowable terrain: how do you route a packet from A to E when B, C, and D are nodes that keep coming up and going down as they get flooded by traffic from other sources? How do you shard a database when uncontrollable third parties are shoving records into it all the time? What's the best way to sort some data when spammers are always coming up with new tactics for re-sorting it in ways that suit them, but not you or your users?

One way to address the problem is the very useful notion of "hill-climbing." Hill-climbing is modeled on a metaphor of a many-legged insect, like an ant. The ant has forward-facing eyes and can't look up to scout the terrain and spot the high ground, but it can still ascend towards a peak by checking to see which foot is highest and taking a step in that direction. Once it's situated in that new place, it can repeat the process, climbing stepwise toward the highest peak that is available to it (of course, that might not be the highest peak on the terrain, so sometimes we ask our metaphorical ant to descend and try a different direction, to see if it gets somewhere higher).
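The ant's stepwise climb, plus the occasional restart from a fresh spot, maps directly onto the classic hill-climbing algorithm. Here is a minimal sketch in Python; the toy landscape and all function names are illustrative inventions for this post, not anything EFF ships:

```python
import random

def hill_climb(f, start, step=0.1, max_iters=1000):
    """Greedy hill-climbing: from the current point, check the two
    neighboring footholds and move to the better one; stop when
    neither neighbor improves on where we stand."""
    x = start
    for _ in range(max_iters):
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):
            return x  # no higher footing nearby: a (possibly local) peak
        x = best
    return x

def climb_with_restarts(f, restarts=20, lo=-10.0, hi=10.0, seed=0):
    """A single climb can stall on a minor peak, so — as the essay
    suggests — descend, start over somewhere else, and keep the best."""
    rng = random.Random(seed)
    climbs = [hill_climb(f, rng.uniform(lo, hi)) for _ in range(restarts)]
    return max(climbs, key=f)

# A simple landscape whose true peak sits at x = 2.
f = lambda x: -(x - 2) ** 2
peak = climb_with_restarts(f)
print(round(peak, 1))  # → 2.0
```

The restart loop is the "descend and try a different direction" move from the ant metaphor: it trades a little wasted climbing for a much better chance of finding the highest reachable ground.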

This metaphor is not just applicable to computer science: it's also an important way to think about big, ambitious, fraught policy fights, like the ones we fight at EFF. Our Apollo 1201 Project aims to kill all the DRM in the world inside of a decade, but we don't have an elaborate roadmap showing all the directions we'll take on the way.

There's a good reason for that. Not only is the terrain complex to the point of unknowability; it's also adversarial: other, powerful entities are rearranging the landscape as we go, trying to head us off. As the old saying goes, "The first casualty of any battle is the plan of attack."

Instead of figuring out the whole route from A to Z, we deploy heuristics: rules of thumb that help us chart a course along this complex, adversarial terrain as we traverse it.

Like the ant climbing its hill, we're feeling around for degrees of freedom where we can move, ascending towards our goal. There are four axes we check as we ascend:

1. Law: What is legal? What is illegal? What chances are there to change the law? For example, we're suing the US government to invalidate Section 1201 of the Digital Millennium Copyright Act (DMCA), the legislation that imposes penalties for breaking DRM, even for lawful reasons. If it were legal to break DRM for a lawful purpose, the market would be full of products that let you unlock more value in the products you own, and companies would eventually give up on trying to restrict legal conduct.

We're also petitioning the US Copyright Office to grant more exemptions to DMCA 1201, despite the fact that those exemptions are limited in practice (e.g., "use" exemptions that let you jailbreak a device, but not "tools" exemptions that let you explain to someone how to jailbreak their device or give them a tool to do so).

Why bother petitioning the Copyright Office if they can only make changes that barely rise above the level of cosmetic? Glad you asked.

2. Norms: What is socially acceptable? A law that is widely viewed as unreasonable is easier to change than a law that is viewed as perfectly understandable. Copyright law is complicated and boring, and overshadowed by emotive appeals to save wretched "creators" (like me—my full-time job is as a novelist, and I work part-time for EFF as an activist because sitting on the sidelines while technology was perverted to control and oppress people was unbearable).

But in the twenty-first century, a tragic category error (using copyright, a body of law intended to regulate the entertainment industry's supply chain, to regulate the Internet, which is the nervous system of the entire digital world) has led to disastrous and nonsensical results. Thanks to copyright law, computer companies and car companies and tractor companies and voting machine companies and medical implant companies and any other company whose product has a computer in it can use copyright to make it a crime to thwart their commercial plans—to sell you expensive ink, or to earn a commission on every app, or to monopolize the repair market.

From long experience, I can tell you that the vast majority of people do not and will never care about copyright or DRM. But they do care about the idea that vast corporations have bootstrapped copyright and DRM into a doctrine that amounts to "felony contempt of business model." They care when their mechanic can't fix their car any longer, or the insulin for their artificial pancreas goes up 1000 percent, or when security experts announce that they can't audit their state's voting machines.

The Copyright Office proceedings can carve out some important freedoms, but more importantly, they are a powerful normative force, an official recognition from the branch of the US government charged with crafting and regulating copyright that DRM is messed up and getting in the way of legitimate activity.

3. Code: What is technically possible? DRM is rarely technologically effective. For the most part, DRM does not survive contact with the real world, where technologists take it apart, see how it works, find its weak spots, and figure out how to switch it off. Unfortunately, laws like DMCA 1201 make developing anti-DRM code legally perilous, and people who try face both civil and criminal jeopardy. But despite the risks, we still see technical interventions like papers at security conferences on the weaknesses in DRM or tools for bypassing and jailbreaking DRM. EFF's Coders' Rights project stands up for the right of developers to create these legitimate technologies, and our intake desk can help coders find legal representation when they're threatened.

4. Markets: What's profitable? When a policy goal intersects with someone else's business model, you get an automatic power-up. People who want to sell jailbreaking tools, third-party inkjet cartridges, and other consumables, independent repair services, apps and games for locked platforms are all natural opponents of DRM, even if they're not particularly worried about DRM itself, and only care about the parts of it that get in the way of earning their own living.

There are many very successful products that were born with DRM—like iPhones—around which no competing commercial interests were ever able to develop. It's a long battle to convince app makers that competition in app stores would let them keep more of the 30 percent commission they currently pay to Apple.

But in other domains, like the independent repair sector, there are huge independent commercial markets that are thwarted by DRM. Independent repair shops create local, middle-class jobs (no one sends a phone or a car overseas for service!), and they rely on third-party replacement parts and diagnostic tools. Farmers are a particularly staunch ally in the repair fight, grossly affronted at the idea of having to pay John Deere a service charge to unlock the parts they swap into their own tractors (and even more furious at having to wait days for a John Deere service technician to put in an appearance in order to enter the unlock code).

Law, Norms, Code, and Markets: these are the four forces that former EFF Board member Lawrence Lessig first identified in his 1999 masterpiece Code and Other Laws of Cyberspace, the forces that regulate all our policy outcomes. The fight to rescue the world from DRM needs all four.

When we're hill-climbing, we're always looking for chances to invoke one of these four forces, or better yet, to combine them. Is there a business that's getting shafted by DRM who will get their customers to write to the Copyright Office? Is there a country that hasn't yet signed a trade agreement banning DRM-breaking, and if so, are they making code that might help the rest of us get around our DRM? Is there a story to tell about a ripoff in DRM (like the time HP pushed a fake security update to millions of printers in order to insert DRM that prevented third-party ink) and if so, can we complain to the FTC or a state Attorney-General to punish them? Can that be brought to a legislature considering a Right to Repair bill?

On the way, we expect more setbacks than victories, because we're going up against commercial entities who are waxing rich and powerful by using DRM as an illegitimate means to cement monopolies, silence critics, and rake in high rents.

But even defeats are useful: as painful as it is to lose a crucial battle, such a loss can galvanize popular opposition, convincing apathetic or distracted sideliners that there's a real danger that the things they value will be forever lost if they don't join in (that would be a "normative" step towards victory).

As we've said before, the fight to keep technology free, fair and open isn't a destination, it's a journey. Every day, there are new reasons that otherwise reasonable people will find to break the tech we use in increasingly vital and intimate ways—and every day, there will be new people who are awoken to the need to fight against this temptation.

These new allies may get involved because they care about Net Neutrality, or surveillance, or monopolies. But these are all part of the same information ecology: what would it gain us to have a neutral internet if all the devices we connect to it use DRM to control us to the benefit of distant corporations? How can we end surveillance if our devices are designed to treat us as their enemies, and thus able to run surveillance code that, by design, we're not supposed to be able to see or stop? How can we fight monopolies if corporations get to use DRM to decide who can compete with them—or even criticize the security defects in their products?

On this Day Against DRM, in a year of terrible tech setbacks and disasters, it could be easy to despair. But despair never got the job done: when life gives you SARS, you make sarsaparilla. Every crisis and catastrophe brings new converts to the cause. And if the terrain seems impassable, just look for a single step that will take you to higher ground. Hill-climbing algorithms may not offer the most direct route, but as every programmer knows, they're still the best way to traverse unknowable terrain.

What step will you take today?

(Image: Jacob_Eckert, Creative Commons Attribution 3.0 Unported)

EFF has submitted an amicus brief [PDF] to the New Hampshire Supreme Court asking it to affirm a lower court ruling that found criticism of a patent owner was not defamatory. The trial judge hearing the case ruled that "patent troll" and other rhetorical characterizations are not the type of factual statements that can be the basis of a defamation claim. Our brief explains that both the First Amendment and the common law of defamation support this ruling.

This case began when patent assertion entity Automated Transactions, LLC ("ATL") and inventor David Barcelou filed a defamation complaint [PDF] in New Hampshire Superior Court. Barcelou claims to have come up with the idea of connecting automated teller machines to the Internet. As the complaint explains, he tried to commercialize this idea but failed. Later, ATL acquired an interest in Barcelou’s patents and began suing banks and credit unions.

ATL’s patent litigation did not go well. In one case, the Federal Circuit ruled that some of ATL’s patent claims were invalid and that the defendants did not infringe. ATL’s patents were directed to ATMs connected to the Internet and it was "undisputed" that the defendants’ products "are not connected to the Internet and cannot be accessed over the Internet." ATL filed a petition asking the U.S. Supreme Court to overturn the Federal Circuit. The Supreme Court denied that petition.

Unsurprisingly, ATL’s licensing revenues went down after its defeat in the federal courts. Rather than accept this, ATL and Barcelou filed a defamation suit in New Hampshire state court blaming their critics for ATL’s financial decline.

In the New Hampshire litigation, ATL and Barcelou allege that statements referring to them as a "patent troll" are defamatory. They also claim that characterizations of ATL’s litigation campaign as a "shakedown," "extortion," or "blackmail" are defamatory. The Superior Court found these statements were the kind of rhetorical hyperbole that is not capable of defamatory meaning and dismissed the complaint. ATL and Barcelou appealed.

EFF’s amicus brief [PDF], filed together with ACLU of New Hampshire, explains that Superior Court Judge Brian Tucker got it right. The First Amendment provides wide breathing room for public debate and does not allow defamation actions based solely on the use of harsh language. The common law of defamation draws a distinction between statements of fact and pure opinion or rhetorical hyperbole. A term like "patent troll," which lacks any settled definition, is classic rhetorical hyperbole. Similarly, using terms like "blackmail" to characterize patent litigation is non-actionable opinion.

ATL and Barcelou, like some other critics of the Superior Court’s ruling, spend much of their time arguing that "patent troll" is a pejorative term. This misunderstands the Superior Court’s decision. At one point in his opinion, Judge Tucker noted that some commentators have presented the patent assertion, or troll, business model in a positive light. But the court wasn’t saying that "patent troll" is never used pejoratively or even that the defendants didn’t use it pejoratively. The law reports are filled with cases where harsh, pejorative language is found not capable of defamatory meaning, including "creepazoid attorney," "pitiable lunatics," "stupid," "asshole," "Director of Butt Licking," etc.

ATL and Barcelou may believe that their conduct as inventors and patent litigants should be praised rather than criticized. They are entitled to hold that view. But their critics are also allowed to express their opinions, even with harsh and fanciful language. Critics of patent owners, like all participants in public debate, may use the "imaginative expression" and "rhetorical hyperbole" which "has traditionally added much to the discourse of our Nation."

Government can’t be accountable unless it is transparent. Voters and taxpayers can only know whether they approve of the actions of public officials and public employees if they know what they’re doing. That transparency is especially important when it comes to the actions of local police, who carry weapons and have the power of arrest.

In the age of the Internet, for most of us, access to the state, local and federal laws that we must follow is just a click away. But if a resident of a particular city wants to know the rules that the police she pays for must follow, it’s a lot more difficult. In the state of California, accessing records about basic police policies often requires the filing of a California Public Records Act (CPRA) request.

There’s a chance now to make it much easier. Both houses of the California legislature have passed S.B. 978, which requires local police departments to publish their "training, policies, practices, and operating procedures" on their websites. That’s exactly as it should be, with transparency as the default—not a special privilege that journalists or activists have to request.

In an age when police are enhancing their powers with extraordinary surveillance tools like automated license plate readers, facial recognition, drones, and social media monitoring, transparency in police procedures is especially important—because without it, it's much harder to hold law enforcement personnel accountable. 

The bill has exceptions that give us real concern. Governor Brown vetoed a similar bill last year that we also supported, which led the bill’s author to exempt several important state agencies that would have been covered under the earlier bill, including the Department of Justice and the Department of Corrections and Rehabilitation. Also, S.B. 978 doesn’t provide enforcement mechanisms or consequences for police agencies that fail to post the required information.

Despite those limitations, S.B. 978 will be a big step forward in creating a more transparent government, at a time when trust between police and vulnerable communities needs to be rebuilt. Join us in urging Governor Jerry Brown to sign this important bill.

Take Action

Tell the Governor to Sign S.B. 978

Five of the largest U.S. technology companies pledged support this year for a dangerous law that makes our emails, chat logs, online videos and photos vulnerable to warrantless collection by foreign governments.

Now, one of those companies has voiced a meaningful pivot, instead pledging support for its users and their privacy. EFF appreciates this commitment, and urges other companies to do the same.

Microsoft’s long-titled "Six Principles for International Agreements Governing Law Enforcement Access to Data" serves as the clearest set of instructions by a company to oppose the many privacy invasions possible under the CLOUD Act. (Dropbox published similar opposition earlier this year, advocating for many safeguards.)

In brief, Microsoft’s principles are:

  • The universal right to notice
  • Prior independent judicial authorization and required minimum showing
  • Specific and complete legal process and clear grounds to challenge
  • Mechanisms to resolve and raise conflicts with third-country laws
  • Modernizing rules for seeking enterprise data
  • Transparency

To understand how these principles could serve as a bulwark for privacy, we have to first revisit how the CLOUD Act does the opposite.

The CLOUD Act, Revisited

Bypassing responsible legislative procedure and robbed of a stand-alone floor vote before being signed into law in March, the CLOUD Act created new mechanisms for U.S. and foreign police to seize data across the globe.

Under the CLOUD Act, the president can enter into "executive agreements" that allow police in foreign countries to request data directly from U.S. companies, so long as that data does not belong to a U.S. person or person living in the United States. Now, you might wonder: Why should a U.S. person worry about their privacy when foreign governments can’t specifically request their data? Because even though foreign governments can’t request U.S. person data, that doesn’t mean they won’t get it.

As we wrote before, here is an example of how a CLOUD Act data request could work:

"London investigators want the private Slack messages of a Londoner they suspect of bank fraud. The London police could go directly to Slack, a U.S. company, to request and collect those messages. The London police would receive no prior judicial review for this request. The London police could avoid notifying U.S. law enforcement about this request. The London police would not need a probable cause warrant for this collection.

Predictably, in this request, the London police might also collect Slack messages written by U.S. persons communicating with the Londoner suspected of bank fraud. Those messages could be read, stored, and potentially shared, all without the U.S. person knowing about it. Those messages could be used to criminally charge the U.S. person with potentially unrelated crimes, too."

Many of the CLOUD Act’s privacy failures—failure to require notice, failure to require prior judicial authorization, and the failure to provide a clear path for companies and individuals to challenge data requests—are addressed by Microsoft’s newly released principles.

The Microsoft Principles

Microsoft’s principles encompass both itself and other U.S. technology companies that handle foreign data, including cloud technology providers. That’s because the principles sometimes demand changes to the actual executive agreements—changes that will affect how any company that receives CLOUD Act data requests can publicize, respond to, or challenge them. (No agreements have been finalized, but EFF anticipates the first one between the United States and the United Kingdom to be released later this year.)

Microsoft has committed to the "universal right to notice," saying that "absent narrow circumstances, users have a right to know when the government accesses their data, and cloud providers must have a right to tell them."

EFF agrees. For years, we have graded companies explicitly on their policies to inform users about U.S. government data requests prior to fulfilling such requests, barring narrow emergency exceptions. It is great to see Microsoft’s desire to continue this practice for any CLOUD Act data request it receives. The company has also demanded that it and other companies be allowed to fight nondisclosure orders that are tied to a data request. This is similar to another practice that EFF supports.

Providing notice is vital to empowering individuals to legally defend themselves from overbroad government requests. The more companies that do this, the better.

Further, Microsoft committed itself to "transparency," saying that "the public has a right to know how and when governments seek access to digital evidence, and the protections that apply to their data."

Again, EFF agrees. This principle, while similar to universal notice, serves a wider public. Microsoft’s desire is not only to inform users whose data is requested about those data requests, but also to spread broader information to everyone. For instance, Microsoft wants all cloud providers to "have the right to publish regular and appropriate transparency reports" that unveil the number of data requests a company receives, which governments are making requests, and how many users are affected by requests. This type of information is crucial to understanding, for instance, whether certain governments make a disproportionate number of requests and, if so, which countries’ persons, if any, they are targeting. Once again, EFF has graded companies on this issue.

Microsoft’s interpretation on transparency also includes a demand that any executive agreement negotiated under the CLOUD Act must be published "prior to its adoption to allow for meaningful public input." This is the exact type of responsible procedure that Congressional leadership robbed from the American public when sneaking the CLOUD Act into the back of a 2,232-page government spending bill just hours before a vote. Removing the public from a conversation about their right to privacy was unacceptable then, and it remains unacceptable now.

Microsoft additionally demanded that any CLOUD Act data requests include "prior independent judicial authorization and required minimum showing." This is a big deal. Microsoft is demanding a "universal requirement" that all data requests for users’ content and "other sensitive digital evidence" be first approved by a judicial authority before being carried out. This safeguard is nowhere in the CLOUD Act itself.

One strong example of this approval process, which Microsoft boldly cites, is the U.S. requirement for a probable cause warrant. This standard requires a judicial authority, often a magistrate judge, to approve a government search application prior to the search taking place. It is one of the strongest privacy standards in the world and a necessary step in preventing government abuse. It serves as a bedrock to the right to privacy, and we are happy to see Microsoft mention it.

Elsewhere in the principles, Microsoft said that all CLOUD Act requests must include a "specific and complete legal process and clear grounds to challenge."

Currently, the CLOUD Act offers individuals no avenue to fight a request that sweeps up their data, even if that request was wrongfully issued, overbroad, or illegal. Instead, the only party that can legally challenge a data request is the company that receives it. This structure forces individuals to rely on technology companies to serve as their privacy stewards, battling for their rights in court.

Microsoft’s demand is for a clear process to do just that, both for itself and other companies. Microsoft wants all executive agreement data requests to show proof that prior independent judicial review was obtained, that a serious crime is under investigation as defined by the executive agreement, and that the data request is not for an investigation that infringes human rights.

Finally, a small absence: EFF would like to see Microsoft commit to "minimization procedure" safeguards for how requested data is stored, used, shared, and eventually deleted by governments.

You can read the full set of principles here.

A Broader Commitment

Microsoft’s principles are appreciated, but it must be noted that some of their demands require the work of people outside the company’s walls. For example, lawmakers will decide how much to include the public when negotiating executive agreements under the CLOUD Act. And lawmakers will decide what actually goes in those agreements, including restrictions on the universal right to notice, language about prior judicial review, and instructions for legal challenges.

That said, Microsoft is powerful enough to influence CLOUD Act negotiations. And so are the four companies that, as far as we know, still unconditionally support the CLOUD Act—Apple, Google, Facebook, and Oath (formerly Yahoo). EFF urges these four companies to make the same commitment as Microsoft and to publish principles that put privacy first when responding to CLOUD Act data requests.

EFF also invites all companies affected by the CLOUD Act to publish their own set of principles similar to Microsoft’s.

As for Microsoft, Apple, Google, Facebook, and Oath, we can at least say that some have scored well on EFF’s Who Has Your Back reports, and some have shown a healthy appetite for defending privacy in court, challenging government gag orders, search warrants, and surveillance requests. And, of course, if these companies falter, EFF and its supporters will hold them accountable.

The CLOUD Act has yet to produce its first executive agreement. Before that day comes, we urge technology companies: support privacy and fight this dangerous law, both for your users and for everyone.

The Senate Commerce Committee is getting ready to host a much-anticipated hearing on consumer privacy—and consumer privacy groups don’t get a seat at the table. Instead, the Committee is seeking only the testimony of big tech and Internet access corporations: Amazon, Apple, AT&T, Charter Communications, Google, and Twitter. Some of these companies have spent heavily to oppose consumer privacy legislation and have never supported consumer privacy laws. They know policymakers are considering new privacy protections, and are likely to view this hearing as a chance to encourage Congress to adopt the weakest privacy protections possible—and eviscerate stronger state protections at the same time.

The upcoming hearing at the Senate Commerce Committee may be the launch pad for this strategy of undoing stronger state laws.

It is no coincidence that, in the past week, two leading industry groups (the Chamber of Commerce and the Internet Association) have called for federal preemption of state data privacy laws in exchange for weaker federal protections. For example, laws in California and Illinois require companies to have user consent to certain uses of their personal information (Nevada and Minnesota have these requirements for Internet access providers), while the industry proposals would only require transparency. That means that companies would be allowed to collect information without your permission as long as they tell you they’re doing it. The upcoming hearing at the Senate Commerce Committee may be the launch pad for this strategy of undoing stronger state laws.

Since we can’t be there to say this ourselves, we’ll say it here: EFF will oppose any federal legislation that weakens today’s hard-fought privacy protections or destroys the states’ ability to protect their citizens’ personal information. EFF has had a long and continuous battle with some of the testifying companies, such as Google and AT&T, regarding your right to data privacy, and we’re not going to give up now.

To be clear, we would look closely at sensible federal legislation that offers meaningful protections for data privacy. Uniform laws offer predictability, making life easier for smaller companies, nonprofits, and others that may struggle to meet the rules of different states. But a uniform law is only a good alternative if it’s actually a good law—not a weak placeholder designed only to block something stronger.

The State Consumer Privacy Laws That Big Tech and ISPs Want Congress to Nullify

California’s recently passed consumer privacy legislation has some valuable protections as well as room for improvement, but even this modest set of privacy protections is apparently too much for some big tech companies and the ISPs. If Congress passes the industry’s wish list, it won’t just kill the California privacy law. It will also preempt Illinois’ biometric privacy law, which landed Facebook in a class action lawsuit for allegedly collecting facial data without permission. And there’s more: Such a federal law would also block strong state data breach notification laws that forced companies like Equifax to tell us when they compromised the data of 145.5 million Americans. The upcoming one-sided congressional hearing will not yield valuable insights to the Senate Commerce Committee, but rather give the industry ample time to repeat talking points that reinforce their lobbyists’ arguments in hopes of persuading Congress to once again vote against our privacy rights.

The state legislators in California and Illinois who passed these laws did what they were supposed to do: protect the privacy of their residents. The absence of these state laws would mean that big companies face fewer consequences for compromising our personal information.

This Congress Has a Terrible Record on Protecting Privacy

There’s a reason states are taking action: They are filling a void. What did this Congress do when Facebook’s Cambridge Analytica scandal broke, besides hold a hearing? What did it do when Equifax failed to protect the personal data of 145 million Americans, causing lasting damage to their financial security, besides hold a hearing? Absolutely nothing. Despite overwhelming public support for privacy—a resounding 89 percent of Americans support privacy being a legal right and 91 percent believe we have lost control over our privacy—this legislature has taken little real action.

In fact, when this Congress has taken action on privacy hazards, whether from the government or from corporations, it has proactively stripped us of our privacy protections. When companies like AT&T, Verizon, and Comcast wanted to escape strong federal broadband privacy regulations, Congress took the dramatic step of repealing those protections. When the NSA requested an expansion of its warrantless surveillance program, Congress readily agreed.

Given this track record, Internet users should wonder whether the upcoming Senate Commerce hearing is just a prelude to yet another rollback of privacy protections. If so, policymakers can expect to hear the voices they excluded loud and clear in opposition.

This week, two California jurisdictions joined the growing movement to subject government surveillance technology to democratic transparency and civilian control. Each culminated a local process spearheaded by concerned residents who campaigned for years.

First, on Monday, the City of Palo Alto voted 8-1 to adopt an ordinance to "Establish Criteria and Procedures for Protecting Personal Privacy When Considering the Acquisition and Use of Surveillance Technologies, and Provide for Ongoing Monitoring and Reporting." Like a handful of similar ordinances adopted across the Bay Area over the past two years, it includes several requirements.

The new ordinance requires any proposed acquisition of surveillance technology to go through a public process. First, law enforcement must announce the proposal publicly, provide a formal analysis supporting its rationale, and document potential impacts on privacy. Then, there is an opportunity for public comment to inform a transparent, public vote by local elected officials. Only with their approval may the proposal proceed.

We are disappointed that the Palo Alto measure lacks a provision through which the public can enforce its protections. Instead, it empowers only Council members to hold law enforcement accountable if they violate the ordinance’s process requirements. This weakness aside, the adoption of the measure is an important step forward in the expansion of civilian oversight across the Bay Area, California, and beyond.

Three days later, the Board of Bay Area Rapid Transit (BART) voted unanimously to adopt a similar measure. This comes on the heels of a controversial proposed BART face surveillance program that lacked any public process. It also follows the activation of automated license plate readers (ALPRs) at a BART station without the Board’s prior approval, and the transfer of the resulting ALPR data to a regional fusion center, where it was accessible to U.S. Immigration and Customs Enforcement (ICE). Thus, the new oversight ordinance reflects a dramatic turn for BART.

Like the Palo Alto ordinance, the one adopted by BART is flawed in some respects. It includes a potentially dangerous exception for law enforcement to conduct a "trial" period use of unapproved spy tech for up to 60 days at a single station. We hope the limited duration for a trial suggests that it will not become a back door to permanence. The BART Board will need to actively ensure that potential trials remain truly temporary.

In June 2016, the first local surveillance oversight measure in the nation was adopted in Santa Clara County, the heart of Silicon Valley. These laws have also been adopted in Berkeley, Davis, and Oakland. By subjecting any proposed surveillance technology to a public process, these laws not only ensure community control over whether police acquire these tools. They also force into the open the increasingly common domestic use of powerful spy tech designed for use in foreign battlefields, which has proceeded largely in secret, despite being the subject of explicit warnings by the last U.S. President to command a wartime army.

Each of these measures was spearheaded by local community organizations, including Oakland Privacy, a member of the Electronic Frontier Alliance. Oakland Privacy was formed during the Occupy movement in response to a proposed Domain Awareness Center, and continues to champion civilian oversight across Oakland and beyond. It was joined in Palo Alto by the Peninsula Peace and Justice Center, another group in the Alliance.

Aboard the Arctic Sunrise, a working icebreaker that has sailed to the Arctic Circle, the Congo, and the Amazon under the Greenpeace flag, EFF joined several civil liberties and environmental rights groups to send a message: no longer will we be bullied by malicious lawsuits that threaten our freedom of speech.

"We have the Constitution, we have our rights, and now, we have each other," said Greenpeace executive director Annie Leonard.

On September 5, EFF helped launch Protect the Protest, a coalition of nearly 20 organizations committed to fighting back against Strategic Lawsuits Against Public Participation, also known as SLAPPs. The coalition includes EFF, ACLU, Greenpeace, Freedom of the Press Foundation, Amnesty International, and Human Rights Watch.


(Left to right) Mother Jones CEO Monika Bauerlein, Greenpeace executive director Annie Leonard, Rainforest Action Network director of communications Christopher Herrera, Wikimedia legal counsel Jacob Rogers, and EFF Civil Liberties Director David Greene discuss their civil liberties work aboard the Greenpeace ship The Arctic Sunrise.

SLAPPs are malicious lawsuits often filed by large corporations and wealthy individuals to silence journalists, activists, nonprofit organizations, and those who speak truth to power. The goal is not to win a SLAPP based on legal merits. Instead, it is to abuse the court system in a way that forces a victim to spend time and money to fight the lawsuit itself—draining their resources and chilling their right to free speech.

Countless Americans are hit with these lawsuits every year.

From 2014 to 2016, the online technology blog Techdirt published multiple detailed articles disputing claims from Shiva Ayyadurai that he invented email. In 2016, Techdirt published an article with the headline "Here’s the Truth: Shiva Ayyadurai Didn’t Invent Email." Months later, Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit. The lawsuit and anticipated legal fees threatened Masnick’s entire business.

"It affects just about everything we do," Masnick said last week.

Last year, former Weed, CA mayor Bob Hall was attacked with a SLAPP for standing up for his city’s water rights. At the launch event, Hall empathized with every SLAPP victim who feels bullied into backing down.

"How many times has what’s good and right been destroyed because you don’t have the financial wherewithal to fight?" Hall said.

Every SLAPP recipient speaking at the Protect the Protest launch realized that they were a lucky minority: while many SLAPP victims are eventually silenced by crushing legal fees and intimidation, the men and women on stage found lawyers to fight for them, experts to help their cases, and support within their communities.

For the individuals and organizations that feel alone, Protect the Protest is here to help.

No longer will SLAPPs be fought in the dark. No longer will their recipients feel isolated. No longer will we defend the First Amendment without one another. If one of our organizations faces a SLAPP, Protect the Protest is committed to amplifying that wrongful attack. If our group is truly effective, said Greenpeace’s Leonard, perhaps SLAPP will no longer mean Strategic Lawsuits Against Public Participation. Perhaps it will mean Strategic Lawsuits Accelerating Public Participation. With enough resistance from us, hopefully First Amendment opponents will no longer benefit from filing SLAPPs at all and we can put an end to this entire practice that hurts organizations and individuals alike.

EFF feels right at home in Protect the Protest. We’ve represented individuals facing SLAPPs, we’ve connected others with legal help, and we’ve repeatedly advocated for a strong federal anti-SLAPP law.

The Internet should allow every person—no matter their income, assets, or connections in high places—the opportunity to participate in public debates. That is only possible when everyone can speak freely without the fear of legal bullying. Our constitutionally protected right to free speech carries through both online and off.

As EFF civil liberties director David Greene explained at the event, this is a crucial moment for our organizations and communities to stand together.

"We realized that we, the organizations that we are, have an obligation to make sure that there is a structure in place to support those who don’t always have a group around them all the time," Greene said.

Together with our allies in the Protect the Protest coalition, EFF is committed to providing that structure and support.

Here’s the thing about different people playing the same piece of music: sometimes, they’re going to sound similar. And when music is by a composer who died 268 years ago, putting his music in the public domain, a bunch of people might record it and some of them might put it online. In this situation, a combination of copyright bots and corporate intransigence led to a Kafkaesque attack on music.

Musician James Rhodes put a video of himself playing Bach on Facebook. Sony Music Entertainment claimed that 47 seconds of that performance belonged to them. Facebook muted the video as a result.

So far, this is stupid but not unusually stupid in the world of takedowns. It’s what happened after Rhodes got Sony’s notice that earned it a place in the Hall of Shame.

One argument in favor of this process is that there are supposed to be checks and balances. Takedown notices are supposed to only be sent by someone who owns the copyright in the material and actually believes that copyright’s been infringed. And if a takedown notice is wrong, a counter-notice can be sent by someone explaining that they own the work or that it’s not infringement.

Counter-notices have a lot of problems, not the least of which is that the requirements are onerous for small-time creators, requiring a fair bit of personal information. There’s always the fear, even for someone who knows they own the work, that the other side will sue them anyway, a fight they cannot afford.

Rhodes did dispute the claim, explaining that "this is my own performance of Bach. Who died 300 years ago. I own all the rights." Sony rejected this reasoning.

While we don’t know for sure what Sony’s process is, we can guess that a copyright bot, or a human acting just as mechanically, was at the center of this mess. A human doing actual analysis would have looked at a video of a man playing a piece of music older than American copyright law and determined that it was not something Sony owned. It seems likely that an automated response rejected Rhodes’ appeal as well, because we would certainly hope that a thoughtful person reviewing his dispute would have accepted it.

Rhodes took his story to Twitter, where it picked up some steam, and emailed the heads of Sony Classical and Sony’s public relations, eventually getting his audio restored. He tweeted "What about the thousands of other musicians without that reach…?" He raises a good point.

None of the supposed checks worked. Public pressure and Rhodes’ persistence were the only reasons this complaint went away, despite how the rules are supposed to protect fair use and the public domain.

How many more ways do we need to say that copyright bots and filters don’t work? That mandating them, as the European Union is poised to do, is dangerous and shortsighted? We hear about these misfires roughly the same way they get resolved: because they generate enough noise. How many more lead to a creator’s work being taken down with no recourse?

A decade ago, before social media was a widespread phenomenon and blogging was still a nascent activity, it was nearly unthinkable outside of a handful of countries—namely China, Tunisia, Syria, and Iran—to detain citizens for their online activity. Ten years later, the practice has become all too common, and remains on the rise in dozens of countries. In 2017, the Committee to Protect Journalists found that more than seventy percent of imprisoned journalists were arrested for online activity, while Reporters Without Borders’ 2018 press freedom barometer cited 143 imprisoned citizen journalists globally, and ten citizen journalists killed. While Tunisia has inched toward democracy, releasing large numbers of political prisoners following the 2011 revolution, China, Syria, and Iran remain major offenders, and are now joined by several countries, including the Philippines, Saudi Arabia, and Egypt.

When we first launched Offline in 2015, we featured five cases of imprisoned or threatened bloggers and technologists, and later added several more. We hoped to raise awareness of their plight, and advocate for their freedom, but we knew it would be an uphill struggle. In two cases, our advocacy helped to secure their release: Ethiopian journalist Eskinder Nega was released from prison earlier this year, and the Zone 9 Bloggers, also from Ethiopia, were acquitted in 2015 following a sustained campaign for their freedom.


Award-winning Ethiopian journalist Eskinder Nega on the power of the Internet and journalism. 

Today, the situation in several countries is dire. In Egypt, where a military coup brought the country back toward dictatorship, dozens of individuals have been imprisoned for expressing themselves. Activist Amal Fathy was detained earlier this year after a video she posted to Facebook detailing her experiences with sexual harassment in Cairo went viral, and awaits trial. And Wael Abbas, an award-winning journalist whose experiences with censorship we’ve previously documented, has been detained without trial since May 2018. We also continue to advocate for the release of Alaa Abd El Fattah, the Egyptian activist whose five-year sentence was upheld by an appeals court last year.

Three new Offline cases demonstrate the lengths to which states will go to silence their critics. Eman Al-Nafjan, a professor, blogger, and activist from Saudi Arabia, was arrested in May for her advocacy against the country’s ban on women driving, which was repealed just one month later. Ahmed Mansoor is currently serving a ten-year sentence for "cybercrimes" in his home country of the United Arab Emirates after being targeted several times in the past for his writing and human rights advocacy. And Dareen Tatour, a Palestinian citizen of Israel, recently began a five-month prison sentence after several years of house arrest and a lengthy trial for content she posted on social media that had been misinterpreted by police.

Advocacy and campaigns on behalf of imprisoned technologists, activists, and bloggers can make a difference. In the coming months, we will share more details and actions that the online community can take to support these individuals, defend their names, and keep them safe.

To learn more about these and other cases, visit Offline.

What We Mean When We Say "Data Portability"
Thu, 13 Sep 2018 16:42:05 +0000

"Data portability" is a feature that lets a user take their data from a service and transfer or "port" it elsewhere. This often comes up in discussions about leaving a particular social media platform and taking your data with you to a rival service. But bringing data to a competing service is just one use for data portability; other, just-as-important goals include analyzing your data to better understand your relationship with a service, building something new out of your data, self-publishing what you learn, and generally achieving greater transparency.

Regardless of whether you are "porting" your data to a different service or to a personal spreadsheet, data that is "portable" should be easy to download, organized, tagged, and machine-parsable.
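To make "machine-parsable" concrete, here is a minimal sketch of what working with a well-structured export could look like. The JSON layout, field names, and values below are entirely invented for illustration; no real service's export format is being described.

```python
import json

# Hypothetical example: a service's "portable" export might be a JSON
# archive like this one -- organized, tagged, and machine-parsable.
# (All fields and values here are invented.)
export = json.loads("""
{
  "profile": {"name": "alice", "joined": "2015-03-01"},
  "posts": [
    {"created": "2017-06-12", "text": "hello world", "tags": ["intro"]},
    {"created": "2018-01-30", "text": "new year", "tags": []}
  ]
}
""")

# Because the format is structured, simple self-analysis is easy,
# e.g. counting your own posts per year:
posts_per_year = {}
for post in export["posts"]:
    year = post["created"][:4]
    posts_per_year[year] = posts_per_year.get(year, 0) + 1

print(posts_per_year)  # {'2017': 1, '2018': 1}
```

A ZIP file of unlabeled HTML pages, by contrast, would satisfy none of these goals: it is technically a download, but not usefully portable.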

EFF supports users’ legal right to obtain a copy of the data they have provided to an online service provider. Once you move beyond that, however, the situation gets more complicated. Data portability interacts, and sometimes even conflicts, with other digital rights priorities, including privacy and security, transparency, interoperability, and competition. Here are some of the considerations EFF keeps in mind when looking at the dynamics of data portability.

Privacy and Security

Any conversation about data portability in practice should keep privacy and security considerations front and center.

First off, security is a critical concern. Ported data can contain extremely sensitive information about you, and companies need to be clear about the potential risks before users move their data to another service. Users shouldn’t be encouraged to share information with untrustworthy third parties. And data must always be protected with strong security in transit and at its new location.

How do we unravel the data you provide about yourself to a service from the data your friends provide about you?

Second, it’s not always clear what data a user should have the right to port. There are a lot of questions to grapple with here: When does "data portability" presume inclusion of one's social graph, including friends' contact information? What are all the ways that can go wrong for those friends’ privacy and security? How do we unravel the data you provide about yourself, the data your friends provide about you, and all the various posts, photos, and comments you may interact with? And then, how can we ensure data portability respects all of those users’ right to have control over their information?

While there are no easy answers, the concept of consent is a starting point. For example, a service could ask friends for their specific, informed consent to share contact information when you initiate a download of all your data. Companies should also explore technical solutions that might allow users to export lists of friends in an obfuscated, privacy-protective form.
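One way to picture an "obfuscated, privacy-protective form" is a salted-hash export: a receiving service could match mutual contacts who also opt in (by comparing hashes computed with the same shared salt) without ever seeing raw email addresses. The sketch below is a simplified illustration of that idea, not any company's actual scheme; the salt value and addresses are invented.

```python
import hashlib

def obfuscate_contacts(emails, salt):
    """Return sorted salted SHA-256 digests of normalized addresses,
    so the export reveals no raw contact information."""
    return sorted(
        hashlib.sha256(salt + email.strip().lower().encode()).hexdigest()
        for email in emails
    )

# In practice the salt would be negotiated per export, not hardcoded.
salt = b"per-export-shared-salt"
exported = obfuscate_contacts(["Friend@Example.com", "pal@example.org"], salt)
print(exported[0][:16])  # a hex digest prefix, not an address
```

Note that hashing alone is only weakly protective, since email addresses can be guessed and tested; more robust designs use techniques like private set intersection. The point of the sketch is simply that "export my social graph" need not mean "hand over my friends' addresses in the clear."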


Transparency

Portability works hand-in-hand with transparency. If some of your data is easy to download and use (portable) but the rest is secret (not transparent), then you are left with an incomplete picture of your relationship with a service. Conversely, if you are able to find out all the information a company has about you (transparent) but have no way to take it and interact with it (not portable), you are denied opportunities to further understand and analyze it.

Companies first should be transparent about the profile data that they collect or generate about you for marketing or advertising purposes, including data from third parties and inferences the company itself makes about you. Comprehensive portability should include this information, too; these data should be just as easy for you to access and use as the information you share voluntarily.

Portability works hand-in-hand with transparency to return power to users.

Both portability and transparency return power to users. For example, a comprehensive download of the data Facebook stores about a user’s browsing habits and advertising preferences might help her reverse-engineer Facebook’s processes for making inferences about users for targeted advertising. Or, in another example, the ability to take complete metadata about one’s music preferences and listening patterns from Spotify to another streaming service might make for a better user experience; Spotify might have figured out over time that you can’t stand a certain genre of music, and your next streaming service can immediately accommodate that too.


Interoperability

Data portability can also work alongside "interoperability." Interoperability refers to the extent to which one platform’s infrastructure can work with others. In software parlance, interoperability is usually achieved through Application Programming Interfaces (APIs)—interfaces that allow other developers to interact with an existing software service.

This can allow "follow-on innovators" to not only interact with and analyze but also build on existing platforms in ways that benefit users. For example, PadMapper started by organizing data about rental housing pulled from Craigslist posts and presenting it in a useful way; Trillian allowed users to use multiple IM services through the same client and added features like encryption on top of AIM, Skype, and email. On a larger scale, digital interoperability enables decentralized, federated services like email, modern telephony networks, and the World Wide Web.
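The multi-protocol pattern that made clients like Trillian possible can be sketched as a single client written against a shared interface, with each service plugged in as an interchangeable backend. The classes and message formats below are invented for illustration, not real service APIs.

```python
# Hypothetical sketch of interoperability through a shared interface:
# one client that can talk to multiple messaging backends, the way a
# multi-protocol client wrapped several IM services. All names invented.

class AIMBackend:
    def send(self, to, text):
        return f"AIM->{to}: {text}"

class EmailBackend:
    def send(self, to, text):
        return f"mailto:{to} body={text}"

class UnifiedClient:
    """Works with any backend exposing send(to, text) -- the 'API'."""
    def __init__(self, backends):
        self.backends = backends

    def broadcast(self, to, text):
        # The client never needs to know which protocol it is using.
        return [b.send(to, text) for b in self.backends]

client = UnifiedClient([AIMBackend(), EmailBackend()])
results = client.broadcast("alice", "hi")
print(results)
```

The design choice matters: because the client depends only on the interface, a follow-on innovator can add a new service without touching the client at all, which is exactly the kind of building-on-top that open APIs enable.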


Competition

Depending on the context and platform, data portability is vital but not sufficient for encouraging competition. In many markets, it’s hard for competition to exist without portability, so we must get this part right.

Data portability can support users’ right to "vote with their feet" by leaving a platform or service that isn’t working for them.

But on its own, data portability cannot magically improve competition; the ability to take your data to another service is not helpful if there are no viable competitors. Similarly, data portability cannot fend off increasing centralization as big players buy up or squash smaller competitors. Initiatives like the Data Transfer Project among Facebook, Microsoft, Twitter, and Google could ultimately be important, but won’t meaningfully help competition unless they allow users to move their data beyond a small cabal of incumbent services. Right now they don’t.

Combined with other substantive changes, data portability can support users’ right to "vote with their feet" by leaving a platform or service that isn’t working for them and taking their data and connections to one that does. Making these options real for people can encourage companies to work to keep their users, rather than hold them hostage.

Despite waves of calls and emails from European Internet users, the European Parliament today voted to accept the principle of a universal pre-emptive copyright filter for content-sharing sites, as well as the idea that news publishers should have the right to sue others for quoting news items online – or even using their titles as links to articles. Out of all the potential amendments offered that would fix or ameliorate the damage caused by these proposals, they voted for the worst on offer.

There are still opportunities, at the EU level, at the national level, and ultimately in Europe’s courts, to limit the damage. But make no mistake, this is a serious setback for the Internet and digital rights in Europe.

It also comes at a trepidatious moment for pro-Internet voices in the heart of the EU. On the same day as the vote on these articles, another branch of the European Union’s government, the Commission, announced plans to introduce a new regulation on "preventing the dissemination of terrorist content online". Doubling down on speedy unchecked censorship, the proposals will create a new "removal order", which will oblige hosting service providers to remove content within one hour of being ordered to do so. Echoing the language of the copyright directive, the Terrorist Regulation "aims at ensuring smooth functioning of the digital single market in an open and democratic society, by preventing the misuse of hosting services for terrorist purposes"; it encourages the use of "proactive measures, including the use of automated tools."

Not content with handing copyright law enforcement to algorithms and tech companies, the EU now wants to expand that to defining the limits of political speech too.

And as bad as all this sounds, it could get even worse. Elections are coming up in the European Parliament next May. Many of the key parliamentarians who have worked on digital rights in Brussels will not be standing. Marietje Schaake, author of some of the better amendments for the directive, announced this week that she would not be running again. Julia Reda, the German Pirate Party representative, is moving on; Jan Philipp Albrecht, the MEP behind the GDPR, has already left Parliament to take up a position in domestic German politics. The European Parliament’s reserves of digital rights expertise, never that full to begin with, are emptying.

The best that can be said about the Copyright in the Digital Single Market Directive, as it stands, is that it is so ridiculously extreme that it looks set to shock a new generation of Internet activists into action – just as the DMCA, SOPA/PIPA and ACTA did before it.

If you’ve ever considered stepping up to play a bigger role in European politics or activism, whether at the national level, or in Brussels, now would be the time.

It’s not enough to hope that these laws will lose momentum or fall apart from their own internal incoherence, or that those who don’t understand the Internet will refrain from breaking it. Keep reading and supporting EFF, and join Europe’s powerful partnership of digital rights groups, from Brussels-based EDRi to your local national digital rights organization. Speak up for your digital business, open source project, for your hobby or fandom, and as a contributor to the global Internet commons.

This was a bad day for the Internet and for the European Union: but we can make sure there are better days to come.

Yes, You Can Name A Website "Fucknazis.us"
Wed, 12 Sep 2018 22:52:18 +0000

Jeremy Rubin just wanted to speak out about the rise of white supremacist groups in the U.S. and raise some money to fight against those groups. But the Internet domain name he registered in late 2017 for his campaign—"fucknazis.us"—ran afoul of a U.S. Department of Commerce policy banning certain words from .US domain names. A government contractor took away his domain name, effectively shuttering his website. Last month, after EFF and the Cyberlaw Clinic at Harvard Law School intervened, Mr. Rubin got his site back.

A government agency shutting down an Internet domain based on the contents of its name runs afoul of the First Amendment. After a long back-and-forth with EFF and the Cyberlaw Clinic, the Commerce Department’s contractor Neustar agreed to give Mr. Rubin back his domain, and to stop banning "dirty words." fucknazis.us has proudly returned to the Internet.

As anyone with a business or personal website knows, having a meaningful domain name can be the cornerstone of an online presence. Mr. Rubin, moved to act after anti-Semitic and white supremacist incidents last summer, created a "virtual lapel pin" through the Ethereum computing platform as a fundraiser for opposition to these causes. The virtual pins, and the domain he registered to sell them, declared his message in a pithy fashion: "fucknazis.us."

The Internet’s domain name system as a whole is governed by ICANN, an independent nonprofit organization. While ICANN imposes questionable rules from time to time, a blanket ban on naughty words in domain names has never been one of them. Unluckily for Mr. Rubin, the .US top-level domain is a different animal, because it’s controlled by the U.S. government.

The .US domain was originally set up with a complex regional and topical hierarchy, but today anyone with a connection to the U.S. can register a second-level domain name under .US. Since 1998, it’s been controlled by the National Telecommunications and Information Administration (NTIA), a part of the Department of Commerce. And it’s managed by registry operator Neustar, Inc., under contract with NTIA.

Shortly after Mr. Rubin registered "fucknazis.us," Neustar suspended the domain, calling it a violation of an NTIA "seven dirty words" policy, a phrase with particular First Amendment significance.

As a general rule, First Amendment law makes clear that the government can rarely impose restrictions on speech based on the content of that speech, and when it does, it must show some level of necessity. The well-known case of Federal Communications Commission v. Pacifica Foundation upheld the FCC’s decision to reprimand, though not fine or revoke the license of, a public broadcaster after it aired George Carlin’s famous monologue "Filthy Words." In so doing, the Court approved of the FCC’s definition of "indecency," a word otherwise without a constitutional definition. But the Supreme Court explained that "indecency" as a legal concept was limited to over-the-air broadcast media, because broadcasts made use of limited radio spectrum (a scarce and highly regulated public resource) and were easily overheard by children in their everyday surroundings. Many years later, the Supreme Court directly rejected the U.S. government’s attempt to impose a similar indecency regime on the Internet, and that regime has never been applied to any medium other than over-the-air radio and television broadcasts.

Last month, we learned that Neustar and NTIA were reversing course, allowing Mr. Rubin to proceed with the use of fucknazis.us, and more generally removing these kinds of restrictions from future .US domain name registrations.

Thanks to the First Amendment, the .US domain, advertised as "America’s Address," is a place where one can say "Fuck Nazis" without censorship.

Corrected to reflect that .US originally allowed non-government registrants, but only at third- and fourth-level domains.

Today, in a vote that split almost every major EU party, Members of the European Parliament adopted every terrible proposal in the new Copyright Directive and rejected every good one, setting the stage for mass, automated surveillance and arbitrary censorship of the internet: text (like tweets and Facebook updates), photos, videos, audio, software code -- any and all media that can be copyrighted.

Three proposals passed the European Parliament, each of them catastrophic for free expression, privacy, and the arts:

1. Article 13: the Copyright Filters. All but the smallest platforms will have to defensively adopt copyright filters that examine everything you post and censor anything judged to be a copyright infringement.

2. Article 11: Linking to the news using more than one word from the article is prohibited unless you're using a service that bought a license from the news site you want to link to. News sites can charge anything they want for the right to quote them or refuse to sell altogether, effectively giving them the right to choose who can criticise them. Member states are permitted, but not required, to create exceptions and limitations to reduce the harm done by this new right.

3. Article 12a: No posting your own photos or videos of sports matches. Only the "organisers" of sports matches will have the right to publicly post any kind of record of the match. No posting your selfies, or short videos of exciting plays. You are the audience, your job is to sit where you're told, passively watch the game and go home.

At the same time, the EU rejected even the most modest proposals to make copyright suited to the twenty-first century:

1. No "freedom of panorama." When we take photos or videos in public spaces, we're apt to incidentally capture copyrighted works: from stock art in ads on the sides of buses to t-shirts worn by protestors, to building facades claimed by architects as their copyright. The EU rejected a proposal that would make it legal Europe-wide to photograph street scenes without worrying about infringing the copyright of objects in the background.

2. No "user-generated content" exemption, which would have made EU states carve out an exception to copyright for using excerpts from works for "criticism, review, illustration, caricature, parody or pastiche."

I've spent much of the summer talking to people who are delighted with this outcome, trying to figure out why they think this could possibly be good for them. Here's what I've discovered:

* They don't understand filters. They really don't.

The entertainment industry has convinced creators that there is a technology out there that can identify copyrighted works and prevent them from being shown without a proper license and that the only thing holding us back is the stubbornness of the platforms.

The reality is that filters primarily stop legitimate users (including creators) from doing legitimate things, while actual infringers find them relatively easy to get around.

Put it this way: if your occupation is figuring out how filters work and tinkering with ways around them, you can become skilled in the art. The filters the Chinese government uses to block images, for example, can be defeated by simple measures, even though those filters are bound to be thousands of times more effective than any copyright filter: they're doing a much more modest job, with far more money and technical talent on hand.

But if you're a professional photographer, or just a regular person posting your own work, there's no time in your life to become a hardcore filter-warrior. When a filter mistakes your work for copyright infringement, you can't just bypass the filter with a trick from the copyright infringing underground: you have to send an appeal to the platform that blocked you, getting in line behind millions of other poor suckers in the same situation as you. Cross your fingers and hope that the overworked human reviewing the appeals decides that you're in the right.

Of course, the big entertainment and news companies aren't worried about this outcome: they have backchannels direct into the platforms, priority access to help-lines that will unstick their content when it gets stuck in a filter. Creators who align themselves with large entertainment corporations will be shielded from filters -- while independents (and the public) will have to fend for themselves.

* They grossly underestimate the importance of competition for improving their lot in life.

Building the filters the EU just mandated will cost hundreds of millions of dollars. There are precious few companies in the world who have that kind of capital: the US-based tech giants, and the Chinese-based tech giants, and a few others, like Russia's VK.

The mandate to filter the Internet puts a floor on how small the pieces can be when antitrust regulators want to break up the big platforms: only the largest companies can afford to police the whole net for infringement, so the largest companies can't be made much smaller. The latest version of the Directive has exemptions for smaller companies, but they will have to stay small or constantly anticipate the day they must take the leap to becoming copyright police. Today, the EU voted to increase consolidation in the tech sector and to make it vastly more difficult to function as an independent creator. We’re seeing two major industries, both with competitiveness problems, negotiate a deal that works for them but decreases competition for the independent creator caught in the middle. What we needed were solutions to tackle the consolidation of both the tech and the creative industries; instead, we got a compromise that works for the giants and shuts out everyone else.

How did this terrible state of affairs come to pass?

It's not hard to understand, alas. The Internet has become a part of everything we do, and so every problem we have has some intersection with the Internet. For people who don't understand technology very well, there's a natural way to solve those problems: "fix the technology."

Arthur C Clarke famously said that "Any sufficiently advanced technology is indistinguishable from magic." Some technological accomplishments do seem like magic, and it's natural to witness these workaday miracles and assume that tech can do anything.

An inability to understand what tech can and can't do is the source of endless mischief: from the people who blithely assert that networked voting machines can be made secure enough to run a national election; to the officials who insist that we can make cryptography that stops crooks from breaking into our data, but allows the police to break into crooks' data; to the hand-waving insistence that a post-Brexit Irish border can be "solved" with some undefined technical fix.

Once a few powerful entertainment industry figures were persuaded that filtering at scale was possible and consequence-free, it became an article of faith, and when technologists (including a who's who of the world's top experts on the subject) say it's not possible, they're accused of mulish stubbornness and lack of vision, not a well-informed perspective on what is and isn't possible.

That's a familiar-enough pattern, but in the case of the EU's Copyright Directive, there were exacerbating factors. Tying a proposal for copyright filters to a proposal to transfer a few million euros from tech giants to newspaper proprietors guaranteed favorable coverage from the very press looking for a solution to its problems.

Finally, there's the problem that the Internet promotes a kind of tunnel vision in which we assume that the part of the net we interact with is the whole thing. The Internet handles trillions of articles of public communication every day: birthday wishes and messages of condolences, notices of upcoming parties and meetings, political campaigns and love notes. A tiny, sub-one-percent slice of those communications are the kind of copyright infringement that Article 13 seeks to address, but the advocates for Article 13 keep insisting that the "primary purpose" of the platforms is to convey copyrighted works of entertainment.

There's no doubt that people from the entertainment industry interact with a lot of entertainment works online, in the same way that the police see a lot of people using the Internet to plan crimes and fashionistas see a lot of people using the Internet to show off their outfits.

The Internet is more vast than any of us can know, but that doesn't mean we should be indifferent to all the other Internet users and the things they lose when we pursue our own narrow goals at the expense of the wider electronic world.

Today's Copyright Directive vote not only makes life harder for creators, handing a larger share of their incomes to Big Content and Big Tech -- it makes life harder for all of us. Yesterday, a policy specialist for a creators' union that I'm a member of told me that their job isn't to "protect people who want to quote Shakespeare" (who might be thwarted by bogus registration of his works in the copyright filters) -- it's to protect the interests of the photographers in the union whose work is being "ripped off." Not only did my union's support of this catastrophic proposal do no good for photographers -- it will also do enormous damage to anyone whose communications are caught in the crossfire. An error rate of even one percent will still mean tens of millions of acts of arbitrary censorship, every day.

So what is to be done?

Practically speaking, there are several more junctures where Europeans can influence their elected leaders on this issue.

* Immediately: the Directive will now go into "trilogues" -- secretive, closed-door meetings between representatives from national governments and the European Union; these will be hard to influence, but they will determine the final language put before the Parliament for the next vote (Difficulty: 10/10)

* Next spring: The European Parliament will vote on the language that comes out of the trilogues. It's unlikely that they'll be able to revise the text any further, so this will probably come to a vote on whether to pass the Directive itself. It's very difficult to defeat the Directive at this stage. (Difficulty: 8/10)

* After that: 28 member states will have to debate and enact their own versions of the legislation. In many ways, it's going to be harder to influence 28 individual parliaments than it was to fix this at the EU level, but on the other hand, the parliamentarians in member states will be more responsive to individual Internet users, and victories in one country can be leveraged for others ("See, they got it right in Luxembourg, let's do the same") (Difficulty: 7/10)

* Somewhere around there: Court challenges. Given the far-reaching nature of these proposals, the vested interests involved, and the unresolved questions about how to balance all the rights implicated, we can expect this to rise — eventually — to the European Court of Justice. Unfortunately, court challenges are slow and expensive. (Difficulty: 7/10)

In the meantime, there are upcoming EU elections, in which EU politicians will have to fight for their jobs. There aren't many places where a prospective Member of the European Parliament can win an election by boasting about expansions of copyright, but there are lots of potential electoral opponents who will be only too happy to campaign on "Vote for me, my opponent just broke the Internet."

As we've seen with Net Neutrality in the USA, the movement to protect the free and open Internet has widespread popular support and can turn into a potential third rail for politicians.

Look, this was never going to be a fight we "won" once and for all -- the fight to keep the Internet free, fair and open is ongoing. For so long as people have:

a) problems; that

b) intersect with the Internet;

there will always be calls to break the Internet to solve them.

We suffered a crushing setback today, but it doesn't change the mission. To fight, and fight, and fight, to keep the Internet open and free and fair, to preserve it as a place where we can organise to fight the other fights that matter, about inequality and antitrust, race and gender, speech and democratic legitimacy.

If this vote had gone the other way, we'd still be fighting today. And tomorrow. And the day after.

The fight to preserve and restore the free, fair and open Internet is a fight you commit yourself to, not a fight that you win. The stakes are too high to do otherwise.

Donate to EFF

Help Us Protect the Free, Fair, and Open Internet

The Senate Judiciary Committee is charged with scrutinizing whether U.S. Circuit Judge Brett Kavanaugh’s nomination to the U.S. Supreme Court by President Trump should be confirmed. Three days of lengthy hearings, however, failed to meaningfully address crucial questions about how old law designed for analog situations might apply to our digital age. It would be premature for Congress to vote on any confirmation before a record of the judge’s views on new technologies and markets is thoroughly developed. 

This summer, we published a detailed list of suggestions for questions the Senate should ask Judge Kavanaugh during his confirmation hearings. We recommended that Senators explore the nominee’s views of mass surveillance and law enforcement access to digital information, net neutrality and innovation (spanning both patent and copyright law), and competition and antitrust law.

But after three days of hearings, the Senate process yielded only breadcrumbs as answers to the questions we identified as crucial for assessing how Judge Kavanaugh might rule in cases impacting your rights online.

Of the topics we identified, mass surveillance was one of the few discussed in last week’s hearings. During an exchange with Senator Patrick Leahy (D-VT), Judge Kavanaugh—who worked in a senior capacity in the White House under the Bush administration—flatly declared that he had no role in the "Terrorist Surveillance Program," one of the Bush administration’s many mass surveillance programs. 

Kavanaugh did, however, allude to other mass surveillance programs. In response to Sen. Leahy’s questions about his role, he stated that he "can’t rule out" having participated in the approval process. His answer raises important questions, including whether he misrepresented key facts under oath during his confirmation to the U.S. Court of Appeals for the District of Columbia, as some documents may suggest.

EFF has been fighting in the courts for well over a decade, across multiple federal court proceedings, to establish the unconstitutionality of warrantless bulk surveillance. The Supreme Court may one day rule on whether the government’s warrantless mass surveillance programs violate constitutional rights, as we (and many others) have argued.

Judge Kavanaugh’s potential participation in crafting these controversial and unconstitutional programs means not only that he may be inclined to uphold them, but also that he may have an interest in their continuation. Judge Kavanaugh refused to comment on circumstances in which he might recuse himself from a case. His refusal to address how he would handle such a conflict of interest is deeply troubling, especially because Supreme Court Justices face no higher ethical authority and have previously ruled on cases despite apparent conflicts of interest.

Setting aside unresolved ethical questions, substantive questions remain not only about Judge Kavanaugh’s previous role in approving unconstitutional government programs, but also his views on the legal justification for mass surveillance. 

In another line of questioning from Senator Leahy, Judge Kavanaugh dodged Leahy’s question about the Fourth Amendment rationale for surveillance. In Klayman v. Obama, Judge Kavanaugh presented a troubling analysis: he found that phone companies could turn over customer records to surveillance agencies under an outdated theory known as the "third-party doctrine," and that even if that rationale were insufficient, the national security interest presents a "special need" to disregard the Fourth Amendment’s warrant requirement.

Senator Leahy asked Judge Kavanaugh why he went out of his way to find that the government has the authority to collect records of all domestic phone calls even though Congress had recently passed the Leahy-Lee USA Freedom Act and the Privacy and Civil Liberties Oversight Board had recently issued a report concluding that mass surveillance of telephone records was neither necessary to stop terrorism, nor even helpful.

Under oath, Judge Kavanaugh acknowledged that the recent Carpenter v. United States decision negates his analysis of the third-party doctrine, but he side-stepped his troubling legal analysis regarding the Fourth Amendment justification for mass surveillance, which was at the heart of Sen. Leahy’s question.

Judge Kavanaugh’s failure to address his dangerous reasoning in Klayman only presents more questions: Would the judge find that warrantless domestic mass surveillance programs violate the Fourth Amendment when they collect enough data to create a detailed picture of the lives of millions of Americans—including where they go and who they associate with, like the records at issue in Carpenter? Or would he find such programs justified on national security grounds? Also, would the judge recognize the violation of First Amendment freedom of association implicit in such pervasive surveillance of Americans? 

These are questions that must be answered before the Senate votes on whether to confirm Judge Kavanaugh’s nomination.

Net neutrality was another topic on our list that Senators discussed. Judge Kavanaugh was more forthcoming about his opposition to net neutrality, which a majority of Americans support, than about his position on mass surveillance. Senator Amy Klobuchar (D-MN) asked why the judge went "beyond the bounds of what the parties argued to reach a constitutional issue" in his dissent from the D.C. Circuit’s decision upholding net neutrality. Kavanaugh attempted to defend his unconvincing position, and indicated that if a net neutrality case were to come before the Supreme Court, he would likely find (again) that requiring Internet service providers to carry content violates their First Amendment rights. This admission should alarm anyone concerned about digital rights.

Senators have a constitutional obligation to scrutinize the nominee, and that requires getting meaningful answers about how his jurisprudence could impact the future of digital rights. Having not yet secured those answers, the Senate cannot cast an informed vote on Judge Kavanaugh’s nomination.

We urge the Senate to continue the examination process and to vote on Judge Kavanaugh’s confirmation only after securing answers to the crucial questions we’ve raised. In addition to exploring his judicial analysis of mass surveillance, Senators should also probe his views on law enforcement access to digital content, as well as the antitrust, patent, and copyright principles that play so crucial a role in innovation and competition.

The future of digital rights hangs in the balance.

In modern society, getting young people an education isn’t optional. For youths who are under the care of the state—whether in foster care, or in the juvenile justice system—it’s the state that must be responsible for making sure they get a proper education.

While incarcerated youths don’t lose their right to an education, current law doesn’t guarantee them Internet access. That’s a serious problem. With so much information in our society moving online, the Internet has become a critical starting point for research of all kinds. Getting kids a proper education also maximizes their chance of making a successful integration back into society.

For the second year in a row, the California legislature has moved to correct this problem. Last week, lawmakers passed A.B. 2448, a bill that would mandate that kids who are incarcerated or in foster care in California get Internet access so they can further their education.

We supported a similar bill last year that, unfortunately, was vetoed by the governor. While Governor Jerry Brown said he agreed with the bill’s intent, he had concerns about vague language and the costs of providing Internet access.

This year’s bill has been substantially thinned down, and the relevant agencies have had time to prepare for the budgetary impact. As we said last year, if climbers can tweet from Mount Everest, California should be able to manage safe and supervised Internet access from its own facilities.

EFF is part of a coalition of more than a dozen groups that support the bill, including the Youth Law Center, the ACLU, the Los Angeles LGBT Center, and the California chapter of the National Association of Social Workers. 

This bill also took one step backward from the previous version, which we hope gets corrected in future legislation. Last year’s bill required Internet access for purposes of family communication as well as education, while this year’s bill makes online family communication merely a suggestion. In the meantime, however, there’s no reason for the governor not to sign this bill into law.

All kids in California deserve to be educated, and an education today requires Internet access. According to the California Department of Education, juvenile court schools served more than 25,000 students during the 2015-16 school year. That’s far too many kids to leave behind. Tell Governor Brown to sign A.B. 2448 today.

Take Action

tell the governor to sign ab 2448

In the United States, a secret federal surveillance court approves some of the government’s most enormous, opaque spying programs. It is near-impossible for the public to learn details about these programs, and, as it turns out, even the court itself has trouble.

According to new opinions obtained by EFF last month, the Foreign Intelligence Surveillance Court (FISC) struggled to get full accounts of the government’s misuse of its spying powers for years. After learning about the misuse, the court also struggled to rein it in.

In a trio of opinions, a judge on the FISC raised questions about unauthorized surveillance and potential misuse of a request he had previously granted. In those cases, the secrecy inherent in the proceedings and the government’s obfuscation of its activities made it difficult for the court to grasp the scope of the problems and to prevent them from happening again.

The opinions were part of a larger, heavily redacted set—31 in total—released to EFF in late August as part of a Freedom of Information Act lawsuit we filed in 2016 seeking all significant FISC opinions. The government has released 73 FISC opinions to EFF in response to the suit, though it is continuing to completely withhold another six. We are fighting the government’s secrecy in court and hope to get the last opinions disclosed soon. You can read the newly released opinions here. To read the previous opinions released in the case, click here, here, and here.

Although many of the newly released opinions appear to be decisions approving surveillance and searches of particular individuals, several raise questions about how well equipped FISC judges are to protect individuals’ statutory and constitutional rights when the government is less than candid with the court, underscoring EFF’s concerns with the FISC’s ability to safeguard individual privacy and free expression.

Court Frustrated by Government’s "Chronic Tendency" to Not Disclose the Full Scope of Its Surveillance

An opinion written by then-FISC Judge Thomas F. Hogan shows that even the judges approving foreign intelligence surveillance on specific targets have difficulty understanding whether the NSA is complying with its orders, much less the Constitution.

The opinion, the date of which is redacted, orders the deletion of materials the NSA collected without court authorization. The opinion recounts how after the court learned that the NSA had exceeded an earlier issued surveillance order—resulting in surveillance it was not authorized to conduct—the government argued that it had not actually engaged in unauthorized surveillance. Instead, the government argued that it had only violated "minimization procedures," which are restrictions on the use of the material, not the collection of it.

Judge Hogan, who served on the FISC from 2009-16 and was its chief judge from 2014-16, expressed frustration both with the government’s argument and with its lack of candor, as the court believed officials had previously acknowledged that the surveillance was unauthorized. The opinion then describes how the surveillance failed to comply with several provisions of the Foreign Intelligence Surveillance Act (FISA) in collecting the intelligence. Although the redactions make it difficult to know exactly which FISA provisions the government did not comply with, the statute requires the government to identify a specific target for surveillance and to show some proof that the facilities being surveilled were used by a foreign power or an agent of one.

As a result, the court ruled that the surveillance was unauthorized. It went on to note that the government’s failure to meet FISA’s requirements also inhibited the court’s ability to do its job, writing that "the Court was deprived of an adequate understanding of the facts known to the NSA and, even if the government were correct that acquisition [redacted] was authorized, a clear and express record of that authorization is lacking."

The opinion goes on to note that the government’s conduct provided additional reasons to rule that the surveillance was unauthorized. It wrote:

Moreover, the government’s failures in this case are not isolated ones. The government has exhibited a chronic tendency to mis-describe the actual scope of NSA acquisitions in its submissions to this Court. These inaccuracies have previously contributed to unauthorized electronic surveillance and other forms of statutory and constitutional deficiency.

FISC Judge Frustrated by Government’s Years-Long Failure to Disclose the Scope of Its Surveillance

In another order, Judge Hogan required the government to answer a series of questions after it appeared that the NSA’s surveillance activities went beyond what the court authorized. The order shows that, though the FISC approved years-long surveillance, government officials knowingly collected information about individuals that the court never approved.

The court expressed concern that the "government has not yet provided a full account of non-compliance in this case." Although the particular concerns the court had with the government are redacted, the court appeared frustrated by the fact that it had been kept in the dark for so long:

It is troubling that, for many years, NSA failed to disclose the actual scope of its surveillance, with the result that it lacked authorization for some of the surveillance that it conducted. It is at least troubling that, once the NSA and the Department of Justice had finally recognized that unauthorized surveillance was being conducted, they failed to take prompt measures to discontinue the surveillance, or even to obtain prospective authorization for the already-ongoing collection.

As a result, the court ordered the government to respond to several questions: How and why was the surveillance allowed to continue after officials realized it was likely unauthorized? What steps were being taken to prevent something like it from happening again? What steps were officials taking to identify the information the government obtained through the unauthorized surveillance?

The court wrote that it would examine the government’s responses "and determine whether a hearing is required to complete the record on these issues."

Court Concerned By FBI’s Use of Ambiguity in Order to Conduct Unauthorized Surveillance

In another order with its date redacted, Judge Hogan describes a case in which the FBI used some ambiguous language in an earlier order to conduct surveillance that the court did not authorize.

Although the specifics of the incident are unclear, it appears as though the FISC had previously authorized surveillance of a particular target and identified certain communications providers—such as those that provide email, phone, or messaging services—in the order that would be surveilled. The FBI later informed the court that it had engaged in "roving electronic surveillance" and targeted other communications providers. The court was concerned that the roving surveillance "may have exceeded the scope of the authorization reflected" in the earlier order.

Typically, FISA requires that the government identify the "facilities or places" used by a target that it will surveil. However, the law contains a provision that allows the government to engage in "roving electronic surveillance," which is when the court allows the government to direct surveillance at unspecified communications providers or others that may help follow a target who switches services.

To get an order granting it authority to engage in roving electronic surveillance, the government has to show with specific facts that the surveillance target’s actions may thwart its ability to identify the service or facility the target uses to communicate. For example, the target may frequently change phone numbers or email accounts, making it difficult for the government to identify a specific communications provider.

The problem in this particular case, according to the court, was that the FBI didn’t seek authority to engage in roving electronic surveillance. "The Court does not doubt that it could have authorized" roving electronic surveillance, it wrote. "But the government made no similar request in the above-captioned docket." Moreover, the government never provided facts establishing that the target might thwart its ability to identify the service provider.

Although the court was concerned with the government’s unauthorized surveillance, it acknowledged that perhaps its order was not clear and that it "sees no indication of bad faith on the part of the agents or attorneys involved."

Other FISC decisions authorize various surveillance and searches

The other documents released to EFF detail a variety of orders and opinions issued by the court authorizing various forms of surveillance. Because many are heavily redacted, it is difficult to know precisely what concerned the court. For example:

  • One opinion explains the FISC’s reasoning for authorizing an order to install a pen register/trap and trace device—which allows for the collection of communications’ metadata—and allow the government to acquire business records. The court cites the Supreme Court’s 1978 decision in Smith v. Maryland to rule that the surveillance at issue does not violate the Fourth Amendment.

  • Another opinion concerns an issue that other, previously disclosed FISC opinions have also wrestled with: the government’s aggressive interpretation of FISA and similar laws that authorize phone call metadata collection that can sometimes also capture the content of communications. The government asked to be able to record the contents of the communications it captured, though it said it would not use those contents in its investigations unless there was an emergency. The court ordered the government to submit a report explaining how it was ensuring that it did not make use of any contents of communications it had recorded.

  • Several other opinions, including this one, authorize electronic surveillance of specific targets along with approving physical searches of property.

  • In another case the court authorized a search warrant to obtain "foreign intelligence information." The warrant authorized the government to enter the property without consent of the owner or resident, though it also ordered that the search "shall be conducted with the minimum physical intrusion necessary to obtain the information being sought."

Obtaining these FISC opinions is extraordinarily important, both for government transparency and for understanding how the nation’s intelligence agencies have gone beyond what even the secret surveillance court has authorized.

Having successfully pried the majority of these opinions away from the government’s multi-layered regime of secrecy, we are all the more hopeful to receive the rest.

You can review the full set of documents here.

As the European Parliament prepares for tomorrow's vote on the new Copyright Directive with its provisions requiring mass-scale filtering of the majority of public communications to check for copyright infringement (Article 13) and its provisions requiring paid permission to link to the news if you include as little as two words from the headline in your link text (Article 11), a dismaying number of "creators groups" are supporting it, telling their members that this will be good for them and their flagging financial fortunes.

The real incomes of real creators are really important (disclosure: my primary income source comes from writing science fiction novels for Tor Books, a division of Macmillan). Improving the incomes of the creators who enliven our days, inform, shock, delight and frighten us is a genuine Good Thing.

And creators are not faring well: as both the entertainment industry and tech industry have consolidated, our power to negotiate for a fair slice of the pie has been steadily eroded. While it's never been the case that the majority of people who wanted to be artists managed to make a career out of it, we're also at a very low point in how much of the money generated by artists' work finds its way into artists' pockets.

Enter the Copyright Directive. Under Article 11, tech platforms are expected to send license fees to news companies, and the journalists whose work appears on news sites are presumed to get a share of these new profits.

But this will not happen on its own. A tax on linking means that smaller news sites—where writers are paid to analyze and criticize the news—will be frozen out of the market. They will face legal jeopardy if they link to the news they are discussing, and they will be unable to pay expensive linking fees geared to multinational tech platforms. Publishers have little incentive to negotiate licenses with small players – particularly if those writers wish to criticize the publisher’s work. Meanwhile, experience has shown that in the absence of competitive or legal pressure, news proprietors are more apt to disburse profits to shareholders, not journalists. The most likely outcome of Article 11 is fewer places to sell our work, and a windfall for the corporations who have been slicing our pay for decades.

Even worse, though, is Article 13, the copyright filters. Creative people worry that their works are displayed and sold wholesale, without permission, and to the benefit of unscrupulous "content farmers" and other unsavory networked bottom-feeders.

To address this, Article 13 mandates that all online platforms create databases of copyrighted works and block people from posting things that match items in the database. Any site that lets the public post text (whether that's whole articles or works of fiction or short tweets and Facebook updates), still images, videos, audio, software code, etc, will have to create and run these filters.

Will the filters work? Experience says they won't. Defeating these filters is just not hard, because it's not hard to trick computers. The most sophisticated image filters in the world have been deployed by Chinese internet giants in concert with the Chinese government's censorship effort. As a recent analysis from the University of Toronto's world-leading Citizen Lab has shown, it's relatively straightforward to beat these filters—and they were built by the most skilled engineers on the planet, who operated with an effectively unlimited budget, and who have the backing of a state that routinely practices indefinite detention and torture for people who defy its edicts.
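
The weakness is easy to demonstrate. As a toy sketch (assuming an exact-match filter; real perceptual filters are fuzzier, but are likewise defeated by cropping, mirroring, re-encoding, or adding noise), altering a single byte of a file produces a completely different fingerprint, so the altered copy no longer matches the censor's database:

```python
import hashlib

# A blocklist of fingerprints of banned files (toy illustration only).
banned_file = b"pixel data of a forbidden image"
blocklist = {hashlib.sha256(banned_file).hexdigest()}

def is_blocked(upload: bytes) -> bool:
    # Block the upload if its fingerprint matches a banned one.
    return hashlib.sha256(upload).hexdigest() in blocklist

# The exact file is caught...
assert is_blocked(banned_file)
# ...but appending a single byte evades the filter entirely.
assert not is_blocked(banned_file + b"\x00")
```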

The Chinese censorship system is attempting something far more modest than the EU's proposed copyright filters—checking for matches against a paltry few hundred thousand images, using unimaginably giant pools of talent and money—and they can't make it work.

Neither can YouTube, as it turns out. For more than a decade, YouTube has operated an industry-leading copyright filter called "Content ID." Despite costing more than $60,000,000 to build, Content ID has not prevented copyright infringement—far from it! The failure of Content ID to catch infringing material has been the subject of frequent, sharp criticism from rightsholder groups, and indeed, has been cited as a major factor in the case for Article 13 and its copyright filters.

But that's not the only problem with Content ID: not only does it fail to catch copyright infringement, it also fails by falsely blocking works that don't infringe copyright. Content ID has falsely accused NASA of infringing when it uploaded its own Mars lander footage; it blocks classical pianists from posting their own performances (because Sony has claimed the whole catalogues of long-dead composers like Bach and Beethoven); it wipes out the whole audio track of scientific conference proceedings because the mics picked up the background music played in the hall during the lunch break; it even prevents you from posting silence because half a dozen companies have claimed to own nothing at all.

The thing is, Content ID is much less censorious than the Article 13 filters will be. Content ID only accepts mass submissions from a few "trusted rightsholders" (Article 13 filters will allow any or all of the internet's 3,000,000,000 users to claim copyright in works); it only checks videos (Article 13 filters will check text, audio, video, stills, code, etc); and it only affects a single service (Article 13 will affect all major services operating in Europe).

As creators, we often have cause to quote materials, both from the public domain and from copyrighted sources, under fair dealing. Our own works, then, are quite liable to trigger automated censorship from these filters, while actual infringers, who have plenty of time and motivation, dance around them as they have done with Content ID and as Chinese dissidents do with the country's social media filters.

If you happen to work for a giant media corporation, this may not be a problem. When I've had my books wrongfully taken down due to fraudulent copyright claims, I've been able to bring the might of Macmillan to bear to get them reinstated. But try calling Google or Facebook or Twitter in your individual capacity and ask them to task a human staffer to pay real attention to the thorny question of deciding whether your photo infringes copyright (say, because it captured an advertisement on the side of a bus with a copyrighted stock image that triggered a filter).

What's more, these filters aren't cheap. The $60 million price tag on Content ID is just for starters, for a system that only filters part of one media type. The table stakes for competing with the US tech giants in Europe are about to skyrocket to hundreds of millions of dollars for filters that don't stop infringers but do interfere with our legitimate creativity.

Article 13 will disadvantage any creator who isn't sheltered under the wing of a large corporation, and it will reduce competition in the tech sector, and thus the kind of deal we can get if we try to go it alone—it gets us coming and going.

In the face of real problems, we have to work out real solutions. We have to resist the temptation to adopt harmful responses that are pitched as solutions to the problem but only make it worse.

Seventeen years ago, some terrible people committed a terrorist atrocity on a scale never seen. In response to this genuine horror, the public and politicians demanded that Something Must Be Done. They should have been more specific.

In the seventeen years since the September 11th attacks, we've spent trillions on war and surveillance, eroded human rights and free speech, and still we fear terrorism. The security theater that followed 9/11 is a sterling example of the security syllogism: "Something must be done. There, I've done something."

The big entertainment and newspaper companies would be glad to have a few million directed from the coffers of the big tech companies, regardless of the consequences to the creators whom they're claiming to represent. But Articles 11 and 13 are a catastrophe for both competition and free expression, the two most important values for creators who want to speak freely and get paid for it.

Wanting it badly is not enough. If we allow ourselves to be stampeded into support for half-baked measures that line the pockets of big business and hope that the money will trickle down to us, we're digging ourselves even deeper into the hole. It's not too late to ask your MEPs to vote against this: visit Save Your Internet to contact them.

On Wednesday, the EU will vote on whether to accept two controversial proposals in the new Copyright Directive; one of these clauses, Article 13, has the potential to allow anyone, anywhere in the world, to effect mass, rolling waves of censorship across the Internet.

The way things stand today, companies that let their users communicate in public (by posting videos, text, images, etc) are required to respond to claims of copyright infringement by removing their users' posts, unless the user steps up to contest the notice. Sites can choose not to remove work if they think the copyright claims are bogus, but if they do, they can be sued for copyright infringement (in the United States at least), alongside their users, with huge penalties at stake. Given that risk, the companies usually do not take a stand to defend user speech, and many users are too afraid to stand up for their own speech because they face bankruptcy if a court disagrees with their assessment of the law.

This system, embodied in the United States' Digital Millennium Copyright Act (DMCA) and exported to many countries around the world, is called "notice and takedown," and it offers rightsholders the ability to unilaterally censor the Internet on their say-so, without any evidence or judicial oversight. This is an extraordinary privilege without precedent in the world of physical copyright infringement (you can't walk into a cinema, point at the screen, declare "I own that," and get the movie shut down!).

But rightsholders have never been happy with notice and takedown. Because works that are taken down can be reposted, sometimes by bots that automate the process, rightsholders have called notice and takedown a game of whac-a-mole, where they have to keep circling back to remove the same infringing files over and over.

Rightsholders have long demanded a "notice and staydown" regime. In this system, rightsholders send online platforms digital copies of their whole catalogs; the platforms then build "copyright filters" that compare everything a user wants to post to this database of known copyrights, and block anything that seems to be a match.
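
Roughly, such a filter works like this. A toy sketch, under simplifying assumptions: real systems such as Content ID use proprietary fuzzy perceptual fingerprints rather than exact cryptographic hashes, but the workflow is the same:

```python
import hashlib

class StaydownFilter:
    """Toy 'notice and staydown' filter: rightsholders submit their
    catalog, and every user upload is checked against it before posting."""

    def __init__(self):
        self.claimed = set()  # fingerprints of works claimed by rightsholders

    @staticmethod
    def fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def register_claim(self, work: bytes) -> None:
        # Note: nothing in the scheme verifies the claimant actually owns the work.
        self.claimed.add(self.fingerprint(work))

    def allow_upload(self, upload: bytes) -> bool:
        # Every user post is checked against the database before it appears.
        return self.fingerprint(upload) not in self.claimed

f = StaydownFilter()
f.register_claim(b"song master recording")
assert not f.allow_upload(b"song master recording")  # exact copy stays down
assert f.allow_upload(b"song master recording!")     # trivially altered copy gets through
```

The two assertions at the end also preview the system's core flaw: an exact copy stays down, but a trivially altered one sails through.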

Tech companies have voluntarily built versions of this system. The most well-known of the bunch is YouTube's Content ID system, which cost $60,000,000 to build, and which works by filtering the audio tracks of videos to categorise them. Rightsholders are adamant that Content ID doesn't work nearly well enough, missing all kinds of copyrighted works, while YouTube users report rampant overmatching, in which legitimate works are censored by spurious copyright claims: NASA gets blocked from posting its own Mars rover footage; classical pianists are blocked from posting their own performances; birdsong results in videos being censored; entire academic conferences lose their presenters' audio because the hall they rented played music at the lunch break—you can't even post silence without triggering copyright enforcement. Besides that, no bot can judge whether something that does use copyrighted material is fair dealing. Fair dealing is protected under the law, but not under Content ID.

If Content ID is a prototype, it needs to go back to the drawing board. It overblocks (catching all kinds of legitimate media) and underblocks (missing stuff that infuriates the big entertainment companies). It is expensive, balky, and ineffective.

It's coming soon to an Internet near you.

On Wednesday, the EU will vote on whether the next Copyright Directive will include "Article 13," which makes Content-ID-style filters mandatory for the whole Internet, and not just for the soundtracks of videos—also for the video portions, for audio, for still images, for code, even for text. Under Article 13, the services we use to communicate with one another will have to accept copyright claims from all comers, and block anything that they believe to be a match.

This measure will censor the Internet, and it won't even help artists get paid.

Let's consider how a filter like this would have to work. First of all, it would have to accept bulk submissions. Disney and Universal (not to mention scientific publishers, stock art companies, real-estate brokers, etc) will not pay an army of data-entry clerks to manually enter their vast catalogues of copyrighted works, one at a time, into dozens or hundreds of platforms' filters. For these filters to have a hope of achieving their stated purpose, they will have to accept thousands of entries at once—far more than any human moderator could review.

But even if the platforms could hire, say, 20 percent of the European workforce to do nothing but review copyright database entries, this would not be acceptable to rightsholders. Not because those workers could not be trained to accurately determine what was, and was not, a legitimate claim—but because the time it would take for them to review these claims would be absolutely unacceptable to rightsholders.

It's an article of faith among rightsholders that the majority of sales take place immediately after a work is released, and that therefore infringing copies are most damaging when they're available at the same time as a new work is released (they're even more worried about pre-release leaks).

If Disney has a new blockbuster that's leaked onto the Internet the day it hits cinemas, they want to pull those copies down in seconds, not after precious days have trickled past while a human moderator plods through a queue of copyright claims from all over the Internet.

Combine these three facts:

1. Anyone can add anything to the blacklist of "copyrighted works" that can't be published by Internet users;

2. The blacklists have to accept thousands of works at once; and

3. New entries to the blacklist have to go into effect instantaneously.

It doesn't take a technical expert to see how ripe for abuse this system is. Bad actors could use armies of bots to block millions of works at a go (for example, jerks could use bots to bombard the databases with claims of ownership over the collected works of Shakespeare, adding them to the blacklists faster than they could possibly be removed by human moderators, making it impossible to quote Shakespeare online).

But more disturbing is targeted censorship: politicians have long abused takedown to censor embarrassing political revelations or take critics offline, as have violent cops and homophobic trolls.

These entities couldn't use Content ID to censor the whole Internet: instead, they had to manually file takedowns and chase their critics around the Internet. Content ID only works for YouTube — plus it only allows "trusted rightsholders" to add works wholesale to the notice and staydown database, so petty censors are stuck committing retail copyfraud.

But under Article 13, everyone gets to play wholesale censor, and every service has to obey their demands: just sign up for a "rightsholder" account on a platform and start telling it what may and may not be posted. Article 13 has no teeth for stopping this from happening; in any event, if you get kicked off the service, you can just pop up under a new identity and start again.

Some rightsholder lobbyists have admitted that there is potential for abuse here, but they insist that it will all be worth it, because it will "get artists paid." Unfortunately, this is also not true.

For all that these filters are prone to overblocking and ripe for abuse, they are actually not very effective against someone who actually wants to defeat them.

Let's look at the most difficult-to-crack content filters in the world: the censoring filters used by the Chinese government to suppress "politically sensitive" materials. These filters have a much easier job than the ones European companies will have to implement: they only filter a comparatively small number of items, and they are built with effectively unlimited budgets, subsidized by the government of one of the world's largest economies, which is also home to tens of millions of skilled technical people, and anyone seeking to subvert these censorship systems is subject to relentless surveillance and risks long imprisonment and even torture for their trouble.

Those Chinese censorship systems are really, really easy to break, as researchers from the University of Toronto's Citizen Lab demonstrated in a detailed research report released a few weeks ago.

People who want to break the filters and infringe copyright will face little difficulty. The many people who want to stay on the right side of copyright law, but who find themselves inadvertently on the wrong side of the filters, will face insurmountable trouble, begging for appeal from a tech giant whose help systems all dead-end in brick walls. And any attempt to tighten the filters to catch these infringers will, of course, make it more likely that they will block non-infringing content.

A system that allows both censors and infringers to run rampant while stopping legitimate discourse is bad enough, but it gets worse for artists.

Content ID cost $60,000,000 and does a tiny fraction of what the Article 13 filters must do. When operating an online platform in the EU requires a few hundred million in copyright filtering technology, the competitive landscape gets a lot more bare. Certainly, none of the smaller EU competitors to the US tech giants can afford this.

On the other hand, US tech giants can afford this (indeed, have pioneered copyright filters as a solution, even as groups like EFF protested it), and while their first preference is definitely to escape regulation altogether, paying a few hundred million to freeze out all possible competition is a pretty good deal for them.

The big entertainment companies may be happy with a deal that sells a perpetual Internet Domination License to US tech giants for a bit of money thrown their way, but that will not translate into gains for artists. The fewer competitors there are for the publication, promotion, distribution and sale of creative works, the smaller the share will be that goes to creators.

We can do better: if the problem is monopolistic platforms (and indeed, monopolistic distributors), tackling that directly as a matter of EU competition law would stop those companies from abusing their market power to squeeze creators. Copyright filters are the opposite of antitrust, though: they will make the biggest companies much bigger, to the great detriment of all the "little guys" in the entertainment industry and in the market for online platforms for speech.

Expanded Government Authority to Destroy Drones Expected As Part of Routine FAA Bill

When government agencies hide their activities from the public, private drones can be a crucial tool for transparency and public oversight. But now, some members of Congress want to give the federal government the power to intercept and destroy private drones it considers a "threat," with no safeguards ensuring that power isn’t abused.

Even more troubling, they’re specifically aiming to give those powers to the Department of Homeland Security and the Department of Justice, two government offices notorious for their hostility to public oversight. And worst of all, we expect these powers to come in a routine Federal Aviation Administration (FAA) reauthorization bill, with no chance for meaningful debate on how best to limit the government’s authority to intercept or destroy drones.

Please join us in telling Congress to reject the FAA reauthorization unless these provisions are removed from it.

Take Action

Don’t Give DHS and DOJ Free Rein to Shoot Down Private Drones

The Department of Homeland Security routinely denies reporters access to detention centers. On the rare occasions DHS does allow entry, the visitors are not permitted to take photos or record video. Without other ways to report on these activities, drones have provided crucial documentation of the facilities being constructed to hold children.

While the language expected to appear in the FAA bill hasn’t been made public yet, a similar bill was introduced in Congress earlier this year. It would have given DHS the ability to "track," "disrupt," "control," and "seize or otherwise confiscate" any drone that the government deems to be a "threat," without a warrant or due process. It’s easy to see how language this broad and vague could be abused: DHS and DOJ could interpret it to include the power to stop journalists or private citizens from using drones to document their activities, including abuses at DHS detention facilities.

The FAA reauthorization is simply not the place for these dangerous provisions. If lawmakers want to give the government the power to hack or destroy private drones, then Congress and the public should have the opportunity to debate how best to provide adequate oversight and limit those powers to protect our right to use drones for journalism, activism, and recreation.

Take Action

Don’t Give DHS and DOJ Free Rein to Shoot Down Private Drones

Among the many bills awaiting the signature—or veto—of Governor Jerry Brown is AB 3131, a measure that would ensure transparency about police militarization across the State of California. While we are disappointed in recent legislative amendments that weakened the original bill, we remain eager to see it signed into law. Today’s pervasive secrecy about police acquisition of military hardware—including high tech spying devices—impedes a long overdue public debate. 

EFF is troubled by the transfer of powerful spying technologies from our armed forces to state and local police. 

Police militarization has prompted concerns from across the political spectrum. For instance, progressives and racial justice advocates decry the frequency with which police use force, which all too often has lethal consequences partly driven by the use of military training and equipment. Meanwhile, libertarians and many conservatives with fiscal concerns bemoan spending public tax dollars on expensive weapons (often built by powerful corporations).

EFF is especially troubled by the transfer of powerful spying technologies, built for combat on foreign battlefields, from our armed forces to our state and local police. This phenomenon has proceeded aggressively across California, where law enforcement agencies have gained access to thermal imaging equipment, drones, and sonic crowd control devices worth over $130 million. Currently, police acquisition and use of this equipment is not subject to civilian oversight by any legislative body.

In the face of these concerns, A.B. 3131 would at last ensure transparency in law enforcement acquisition of military equipment. And it would apply to law enforcement agencies generally, including sheriff’s departments that sometimes escape civilian oversight by county boards.

We ultimately hope to establish robust opportunities for community control over whether police agencies may acquire high tech military spying equipment, and not just (as in this bill) transparency when police agencies decide to acquire such equipment. Such community control would have been provided by earlier versions of A.B. 3131. We are disappointed that recent amendments removed this provision, watering down the influence of communities over law enforcement militarization. However, the transparency provided by A.B. 3131—into both the acquisition of military equipment and policies governing its use—pierces the wall of secrecy that has prevented public discussion for entirely too long in far too many places.

Concerns about the role of militarization in undermining our rights were articulated by none other than President Eisenhower, the last President who commanded a wartime army. It may have taken the state legislature over 55 years to respond to his historic speech, but—even though today’s response does not go far enough—late is better than never. 

We encourage Governor Brown to sign A.B. 3131 into law, so that Californians across the state can finally learn about how police are acquiring and using military equipment, including powerful spying technologies, in our communities.

Online harassment is a serious problem, and one that defies easy solutions. As the digital world grapples with potential strategies to make online life safer, we have to also fight back against misguided approaches that would undercut what makes the Internet an essential tool for modern life. That’s why EFF filed an amicus brief in Herrick v. Grindr, asking an appeals court to affirm a lower court’s dismissal of the case.

The plaintiff, Matthew Herrick, alleges that he has been mercilessly harassed online by an ex-boyfriend, who appears to have created a series of fake profiles of Herrick on the gay-dating app Grindr. Herrick says that more than 1000 men have arrived at his home and his work, thinking that they were invited for sex. In his lawsuit, Herrick is asking that Grindr be held responsible for the fake profiles and the damage his ex-boyfriend has done. But this strategy risks both free speech on the Internet, as well as the future of online innovation.

A provision of the Communications Decency Act called Section 230—short for 47 U.S.C. § 230—protects intermediaries like ISPs, social media sites, and dating sites like Grindr from liability for what their users say or do. But this is not for the platforms’ sake: it’s for the users. When Congress passed Section 230, it recognized that if our legal system failed to robustly protect intermediaries, it would fail to protect free speech online. Intermediary platforms are the essential architecture of today’s Internet. They are often the primary way in which the majority of people engage with one another online. Platforms from giants like Facebook and Twitter to small community forums and local news sites allow users to connect with family and friends and people all over the world—all without learning to code or expending significant financial resources. Protecting intermediaries protects users.

Section 230 encourages intermediaries to host a vast array of content, without having to worry about the devastating litigation costs they would incur if they could be sued for what their users say online. Without Section 230, intermediaries would likely limit who could use their service and censor more speech than ever before. Smaller platforms that could not afford to take these steps would cease to exist, meaning users would have fewer tools to communicate online.

Section 230 does not mean that victims of online harassment have nowhere to turn. Most jurisdictions have laws against abusive speech. Law enforcement needs to get smarter about online harassment so it can protect people in danger, while courts should become comfortable with legal remedies against online perpetrators. We hope that the appeals court recognizes that holding platforms responsible is not the answer, and dismisses this case.

The European Copyright Directive vote is in three days and it will be a doozy: what was once a largely uncontroversial grab bag of fixes to copyright is now a political firestorm, thanks to the actions of Axel Voss, the German MEP who changed the Directive at the last minute, sneaking in two widely rejected proposals on the same day the GDPR came into effect, forming a perfect distraction (you can contact your MEP about these at Save Your Internet).

These two proposals are:

1. "Censorship Machines": Article 13, which forces online providers to create databases of text, images, videos, code, games, mods, etc. that anyone can add anything to -- if a user tries to post something that may match a "copyrighted work" in the database, the system has to censor them.

2. "Link Tax": Article 11, which will only allow internet users to post links to news sites if the service they're using has bought a "linking license" from the news-source they're linking to; under a current proposal, links that repeat more than two consecutive words from an article's headline could be made illegal without a license.
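The "more than two consecutive words" threshold in the current Article 11 proposal is concrete enough to sketch as code. The Python function below is a toy illustration of how such a rule would mechanically have to be checked; the function name and the threshold parameter are my own, not anything from the Directive's text:

```python
def shares_long_quote(anchor: str, headline: str, limit: int = 2) -> bool:
    """Return True if the anchor text repeats more than `limit`
    consecutive words from the headline (the threshold floated
    in the current proposal)."""
    a = anchor.lower().split()
    h = headline.lower().split()
    n = limit + 1  # any run of n consecutive words would exceed the limit
    # All runs of n consecutive words that appear in the headline.
    runs = {tuple(h[i:i + n]) for i in range(len(h) - n + 1)}
    # Does the anchor text contain any of those runs?
    return any(tuple(a[i:i + n]) in runs for i in range(len(a) - n + 1))

headline = "Parliament votes on new copyright rules"
print(shares_long_quote("votes on new rules today", headline))  # True: 3-word run shared
print(shares_long_quote("copyright vote news", headline))       # False: no 3-word run shared
```

Even this toy version has to make arbitrary choices (case folding, what counts as a "word"), which hints at how much a real linking rule would leave undefined.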

We're all busy and we all rely on trusted experts to give us guidance on what side of an issue to take, and creators often take their cues from professional societies and from the entertainment industry, but in this case, both have proven to be unreliable.

In a recent tweetstorm, Niall from the UK's Society of Authors sets out his group's case for backing these proposals. As a UK author, I was alarmed to see an organisation that nominally represents me taking such misguided positions and I tried to rebut them, albeit within Twitter's limitations.

Here's a less fragmented version.

Niall writes that Article 11 ("link taxes") will not stop you from linking to the news. That's just wrong. The Article calls for new rights for publishers to block even very short quotations of articles and headlines. Those pushing the Article have suggested that quoting a "single word" might be acceptable to them, but not more.

Article 11 doesn't actually define what level of quotation is permitted (this is a pretty serious oversight). But Article 11 is an EU-wide version of local laws that were already attempted in Spain and Germany, and under those laws, links that included the headline in "anchor text" (that's the underlined, blue text that goes with a hyperlink) were banned. In the current amendments, Axel Voss has proposed that using more than two consecutive words from a headline would not be allowed without a license.

Niall says that memes and other forms of parody will not be blocked by Article 13's filters, because they are exempted from European copyright. That's doubly wrong.

First, there's no EU-wide exemption for parody. Under the 2001 Copyright Directive, European countries get to choose zero or more exemptions from a list of twenty permissible ones. And as you can see from this patchwork map of those exceptions, there are plenty of countries where you can still be sued for infringement for a parody. Which means that a site operating in that country will be liable.

Second, even in countries where parody is legal, Article 13's copyright filters won't be able to detect it. No one has ever written a software tool that can tell parody from mere reproduction, and such a thing is so far away from our current AI tools as to be science fiction (as both a science fiction writer and a Visiting Professor of Computer Science at the UK's Open University, I feel confident in saying this).

Niall says that Wikipedia won't be affected by Article 13 and Article 11. This is so wrong, I published a long article about it. tl;dr: Wikipedia's articles rely on being able to link to analyses of the news, which Article 11 will limit; Wikipedia's projects like Wikimedia Commons are not exempted from Article 13; and commercial Wikipedia offshoots lose what little carveouts are present in Article 13.

Niall says Article 13 will not hurt small businesses, only make them pay their share. This is wrong. Article 13's copyright filters will cost hundreds of millions to build (existing versions of these filters, like Youtube's Content ID, cost $60,000,000 and only filter a tiny slice of the media Article 13 requires), which will simply destroy small competitors to the US-based multinationals.

What's more, these filters are notorious for blocking lawful uses, blocking copyrighted works that have been uploaded by their own creators (because they are similar to something claimed by a giant corporation), and even missing copyrighted works.

Niall says Article 13 is good for creators' rights. This is wrong. Creators benefit when there is a competitive market for our works. When a few companies monopolise the channels of publication, payment, distribution, and promotion, creators can't shop around for better deals, because those few companies will all converge on the same rotten policies that benefit them at our expense.

We've seen this already: once Youtube became the dominant force in online video, they launched a streaming music service and negotiated licenses from all the major labels. Then Youtube told the independent labels and indie musicians that they would have to agree to the terms set by the majors -- or be shut out of Youtube forever. In a market dominated by Youtube, they were forced to take the terms. Without competition, Youtube became just another kind of major label, with the same rotten deals for creators.

Niall says that Article 13 will stop abuses of copyright like when the fast-fashion brand Zara ripped off designers for its clothing. This is wrong (and a bit silly, really). Zara's clothes are physical objects in shops (and not files that Zara posts to user-generated content sites), so web filters do not address any infringement of this type.

Niall says that Article 13 isn't censorship. This is wrong. Copyright filters always overblock, catching dolphins in their tuna-nets. It's easy to demonstrate that these filters are grossly overblocking. When the government orders private actors to take measures that stop you from posting lawful communications, that's censorship.

Niall says that multinational companies will get a "huge victory" if Article 13 is stopped. That's wrong. While it's true that the Big Tech companies would prefer not to have any rules, they could very happily live with these rules, because they would effectively end any competition from new entrants into the field. Spending a few hundred million to comply with the Copyright Directive is a cheap alternative to having to buy out or crush any new companies that pose a threat.

I sympathise with Niall. As someone who has volunteered as a regional director for other creators' rights groups, I understand that they're well-intentioned and trying to stand up for their members' interests.

But the Society of Authors and its allies have it wrong here. Articles 11 and 13 are catastrophes for both free expression and artists' livelihoods. They're a bargain in which Europe's big entertainment companies propose to sell Big Tech an Eternal Internet Domination license for a few hundred mil, cementing both Big Content and Big Tech's strangleholds on our ability to earn a living and reach an audience.

Don't take my word for it. David Kaye, the UN's Special Rapporteur on freedom of opinion and expression, has condemned the proposals in the strongest possible terms.

And Wyclef Jean from the Fugees agrees, seeing Article 13 as a measure that will get between him and his audience by limiting his fans' ability to promote his work and pay his bills.

Meanwhile, Pascal Nègre (who recently stepped down after 20 years as President of Universal Music France) agrees, saying that the deal was "a net negative for artists, for the industry and, ultimately, for the public good."

Link taxes are a bad idea. In an era of fake news, anything that limits the ability of internet users to link to reliable news sources deals a terrible blow to our already weakened public discourse.

Copyright filters are an even worse idea. Not only will these both overblock and underblock, they'll also be ripe for abuse. Because the filters' proponents have rejected any penalties for fraudulently claiming copyright in works in order to censor them, anyone will be able to censor anything. You could claim all of Shakespeare's works on Wordpress's filters, and no one would be able to quote Shakespeare until the human staff at the company had hand-deleted those entries.

More seriously, corrupt politicians and other public figures have already made a practice of using spurious copyright claims in order to censor unflattering news. Automating the process is a gift to any politician who wants to suppress video of an embarrassing campaign-event remark and any corrupt employer who wants to suppress video of an unsafe and abusive workplace incident.

Creators in the 21st Century struggle to earn a living -- just as we have in all the centuries since the invention of the printing press -- and we will forever be busy making things, and reliant on our professional organisations for guidance on which political currents run in our favour.

But there is a simple rule of thumb we can always follow that will keep us from being led astray: creators should always, always be on the side of free expression and always, always be opposed to censorship. We should always oppose anything that makes it easier to silence legitimate speech, anything that narrows who can control our public discourse by concentrating power into a few hands.

Creators, you have three days to talk to your lawmakers. Save Your Internet is the place to go to call, write and tweet them. This travesty is being undertaken in our name and we have a duty to stop it.

In July, millions of Europeans called on the Members of European Parliament (MEPs) to vote down a proposal that would impose copyright filters on European social media, and create a new power for newspapers to charge or sue anyone linking to their news stories. The MEPs listened to that call, and in an historic rejection of the standard procedure, voted to revisit that language.

Now they have a more complicated choice to make, and less than a week to learn how to make it.

Next Wednesday at Noon CET, all 751 MEPs will be voting on 203 brand-new amendments to the "Copyright in the Digital Single Market" directive, including genuine reforms as well as innocuous-looking language that would double down on the copyright filters and link taxes from the previous, rejected, draft.

If you’re someone who understands how the Internet works, and a voter in an EU member state – or you know someone who is – you may be those MEPs’ best and most honest guide to those amendments.

Your voice will be up against a renewed lobbying campaign from the music and mainstream news industry, a conflicted and wavering set of Internet tech giants, plus a heavily-lobbied set of fake compromise proposals. But the odds are better than they were in July – and we won then. We just have to win again.

Here’s where matters stand with 6 days to go, and what you can do right now to steer Europe’s Internet away from the IP iceberg it’s heading toward.

What’s At Stake on September 12

A large part of the European Parliament’s job is to revise – and hopefully improve – proposals that come to it from the EU’s other branches of government: the European Commission and European Council. The "Copyright in the Digital Single Market" directive unfortunately started bad and ended worse, including language that would require compulsory copyright filters (Article 13), and a wide new power for news media to license and control the text of links to their news stories (Article 11).

Most of the language of these proposals had been pushed in Parliament by MEP Axel Voss through the JURI committee. On July 5th, the Parliament agreed to reset the text back to its original proposal, without the Voss amendments. A vote on a new set of amendments, open for any group of MEPs to suggest, was scheduled instead for September 12th.

The deadline for these new amendments was just a week before, on September 5th. This Thursday was the first chance MEPs and their advisors had to look at over 200 new proposals to the original European Council/Commission text.

The Fake Voss "Compromises" on Article 13 and Article 11

The faint hope that Axel Voss would hammer out a reasonable deal with his critics faded when he announced his "compromise amendments" to MEPs last week. Just as previously, when Voss claimed he had fixed the problems with his plans by pasting in clumsy exceptions for "online encyclopedias" and "open source code-sharing sites" (Wikipedia and Github weren’t convinced, and still lead the charge against Article 13), Voss’s changes look friendly, but do little. This time they will make Article 13 even worse.

The new Voss amendment on Article 13 no longer includes an explicit mention of copyright filters – it just makes it impossible for any community content-sharing website to survive without them. In fact, most sites would struggle to stay legal even with their upload filters turned up to the max.

Voss’s new Article 13 now simply declares that:

Without prejudice to Article 3(1) and (2) of Directive 2001/29/EC, online content sharing service providers perform an act of communication to the public.

By defining the sharing websites themselves as "performing an act of communication" (rather than their users), this language strips away existing liability protections websites have if their users are accused of copyright infringement. From remixes to memes, sharing sites would suddenly be liable en masse for all the errors in rights management committed by their users.

The rest of the new Article says that the sites can be saved if they pre-emptively negotiate licenses for everything they show. Which means that they might be saved from being sued by the major music labels and Hollywood studios, as long as they put up perfect copyright filters – but they will still remain liable for any other suit.

Far from being an improvement, the change makes bad language even worse.

There are more fake concessions in Voss’s Article 11 amendment too, which now says "individual words" can be used to link to a news story. That’s language that nobody opposing the link tax thinks improves the proposal, and it only exists so that Voss can claim that you’ll still be able to use your own text in a link to a news story – as long as you create a hostage-letter-style cut-up that doesn’t quote more than two consecutive words from the article.

Better Deals on The Table

The Articles should be deleted from the text completely. That's already the position of Eurosceptic groups on both the left and right.

Far better compromise proposals have primarily come from Julia Reda, the German Pirate Party MEP, and Marietje Schaake, the Dutch MEP who has specialized in digital issues throughout her career in the European Parliament. (You can read the Reda and Schaake language, together with their explanations and comparison with the other proposals, on Julia Reda’s website.)

Barring a travesty in the voting process, MEPs will be able to vote both for deletion of the articles, and then the Reda/Schaake amendments on September 12. The complete deletion amendments will go first, and then if each fails, the MEPs will go on to vote on more detailed language changes.

We are encouraging all MEPs to reject Article 13 and Article 11 entirely. This is a vote for the status quo as it has stood for nearly two decades, supported across the political spectrum, and from all sides of industry – far from an extreme stance. If the votes aren't there for deletion, the Reda/Schaake amendments are the best fallback.

As Pascal Nègre, who was the CEO of Universal Music France and now owns his own music label, said in an editorial this week in Le Monde:

"Thirty-five years of experience in the music industry lead me to believe that this directive would be a net negative for artists, for the industry and, ultimately, for the public good."

What You Can Do Now

Next week the various parties and pressure groups will be writing up a more detailed set of voting lists on all the amendments – including the other articles of the directive, such as those on data-mining, that we’ve barely had a chance to touch upon.

For now though, you should already know the direction the EU should take on the most infamous parts of this directive.

Call your MEP now and if you know others who are residents in the EU, tell them to call or write to their MEPs too.

You can find info on each of your MEPs on the SaveYourInternet.eu site, or on Parliament’s own site (just click on the maps).

If you’re outside of Europe, please consider sharing this blog post with your European friends and family and let them know that this is a red alert. We have just days until the vote.

Tell them to reject the Voss amendments, and reject Article 11 and 13, reject copyright filters, and reject ancillary rights on press snippets. Encourage your MEP to choose options that avoid filtering uploads or restricting links.

When the San Diego police targeted black children for DNA collection without their parents' knowledge in 2016, it highlighted a critical loophole in California law. The California State Legislature recently passed a new bill, A.B. 1584, to ensure that law enforcement cannot stop-and-swab youth without either judicial approval or the consent of a parent or attorney. The bill, introduced by Assemblymember Lorena Gonzalez Fletcher, is now on Gov. Jerry Brown’s desk. EFF has strongly supported this bill from the start and now urges the governor to sign the bill into law. 

DNA can reveal an extraordinary amount of private information about a person, from familial relationships to medical history to predisposition for disease. Children should not be exposed to this kind of privacy invasion without strict guidelines and the advice and consent of a parent, legal guardian, or attorney. Kids need to have an adult present who represents their interests and can help them understand both their rights and the lifelong implications of handing one’s sensitive genetic material over to law enforcement.

A.B. 1584 would require law enforcement to obtain a court order, a search warrant, or the written consent of both the minor and their parent, legal guardian, or attorney before collecting DNA. There are a few narrow exceptions, such as when DNA collection is already required under existing law. As the ACLU said in their statement of support for the bill, this is sensible, common sense legislation: "If police officers don’t have a warrant, they must not be allowed to ask a minor child to give consent to this invasion of privacy without also obtaining the consent of a parent. Common sense dictates that involving parents in these kinds of situations is a good police practice and is in the best interest of every child."

A.B. 1584 is a direct response to abuse of existing legal protections designed to protect California kids. The legislation is needed to create a higher standard for local law enforcement when collecting DNA samples from minors.

In San Diego, as Voice of San Diego reported, law enforcement has taken advantage of loopholes in existing law and targeted black children for unlawful DNA collection. The San Diego Police Department instituted a policy of collecting samples from minors for "investigative purposes" based on a signed consent form that was used for both minors and adults alike, without any parental notification or consent. In at least one case, police stopped a group of kids who were walking through a park after leaving a basketball game at a rec center and asked each to sign a form "consenting" to a cheek swab. The ACLU has sued SDPD over the incident.

California's existing DNA collection law, Proposition 69, attempts to place limitations on when law enforcement can collect DNA from kids, but SDPD found a gaping loophole in the law and crafted a policy to take advantage of that loophole. Under Proposition 69, law enforcement can collect DNA from minors only in extremely limited circumstances. That includes after a youth is convicted or pleads guilty to a felony, or if they are required to register as a sex offender. But here's the loophole: this only applies to DNA that law enforcement seizes for inclusion in statewide or federal databases. That means local police departments have been able to maintain local databases not subject to these strict limitations.

A.B. 1584 will fix this loophole by requiring law enforcement to obtain a court order, a search warrant, or the written consent of both the minor and their parent, legal guardian, or attorney before collecting DNA directly from the minor. In cases where law enforcement collects a minor’s DNA with proper written consent, A.B. 1584 also requires law enforcement to provide kids with a form for requesting expungement of their DNA sample. Police must make reasonable efforts to promptly comply with such a request. Police must also automatically expunge after two years any voluntary sample collected from a minor if the sample doesn’t implicate the minor as a suspect in a criminal offense.

We urge Governor Brown to take a stand for California’s kids and sign A.B. 1584 into law.

The Justice Department’s announcement yesterday that it will meet with states to discuss whether social media companies are "intentionally" stifling free speech represents a potentially dangerous new step in the wrong direction. Instead of focusing on making social media accountable and transparent, the Justice Department’s effort seems aimed at pressuring social media platforms to censor or promote speech in accordance with the government’s views of which speech it prefers.

If the Justice Department is aiming at convincing state attorneys general to exercise their new powers under FOSTA, or worse, further undermine platforms' ability to host our speech by making them liable for what we say, all of us who rely on social media platforms will lose out. We know that governmental censorship schemes almost always silence the less powerful, quiet the opposition voices, and harm legal speech that the government doesn’t like. This power is dangerous in the hands of any government, which is why the First Amendment places it beyond governmental control.

Social media platforms play an outsize role in what we can say on the Internet. The industry is under massive pressure to do a better job at handling hate speech and misinformation. The answer to this must include increased accountability and due process for speakers online. We should demand that platforms be open about their takedown rules and follow a consistent, fair, and transparent process, as outlined in the Santa Clara Principles.

Facebook, Twitter, and other platforms need to step up their game to protect their users—but the way to do that is through increasing accountability, not by government strong-arming them into silencing government critics and promoting government favorites.

In exactly one week, the European Parliament will hold a crucial debate and vote on a proposal so terrible, it can only be called an extinction-level event for the Internet as we know it.

At issue is the text of the new EU Copyright Directive, which updates the 17-year-old copyright regulations for the 28 member-states of the EU. It makes a vast array of technical changes to EU copyright law, each of which has stakeholders rooting for it, guaranteeing that whatever the final text says will become the law of the land across the EU.

The Directive was pretty uncontroversial, right up to the day last May when the EU started enforcing the General Data Protection Regulation (GDPR), a seismic event that eclipsed all other Internet news for weeks afterward. On that very day, a German MEP called Axel Voss quietly changed the text of the Directive to reintroduce two long-discarded proposals — "Article 11" and "Article 13" — proposals that had been evaluated by the EU's own experts and dismissed as dangerous and unworkable.

Under Article 11 — the "link tax" — online services are banned from allowing links to news services on their platforms unless they get a license to make links to the news; the rule does not define "news service" or "link," leaving 28 member states to make up their own definitions and leaving it to everyone else to comply with 28 different rules.

Under Article 13 — the "censorship machines" — anyone who allows users to communicate in public by posting audio, video, stills, code, or anything else that might be copyrighted must send those posts to a copyright enforcement algorithm. The algorithm will compare each post to all the known copyrighted works (anyone can add anything to the algorithm's database) and censor it if it seems to be a match.
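As a rough sketch of the workflow such a mandate implies, here is a deliberately simplified filter in Python. Every name here is hypothetical, and real systems (like YouTube's Content ID) use perceptual fingerprints rather than exact hashes; the point is only the shape of the system: an open claims database that every upload is checked against before publication.

```python
import hashlib

# Toy model of an Article 13-style upload filter. Hypothetical names
# throughout; exact hashing stands in for real perceptual matching.

claimed_works = set()

def fingerprint(data: bytes) -> str:
    """Stand-in for a content fingerprint (here, just a SHA-256 hash)."""
    return hashlib.sha256(data).hexdigest()

def register_claim(data: bytes) -> None:
    """Add a work to the claims database. Note: anyone can add anything."""
    claimed_works.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """A post is published only if it matches no claimed work."""
    return fingerprint(data) not in claimed_works

register_claim(b"an embarrassing recording")
print(allow_upload(b"an embarrassing recording"))  # False: blocked
print(allow_upload(b"original commentary"))        # True: published
```

Because the claims database is open, anyone who registers a work, legitimately or not, silently gains the power to block it everywhere the filter runs.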

These extreme, unworkable proposals represent a grave danger to the Internet. The link tax means that only the largest, best-funded companies will be able to offer a public space where the news can be discussed and debated. The censorship machines are a gift to every petty censor and troll (just claim copyright in an embarrassing recording and watch as it disappears from the Internet!), and will add hundreds of millions to the cost of operating an online platform, guaranteeing that Big Tech's biggest winners will never face serious competition and will rule the Internet forever.

That's terrible news for Europeans, but it's also alarming for all the Internet's users, especially Americans.

The Internet's current winners — Google, Facebook, Twitter, Apple, Amazon — are overwhelmingly American, and they embody the American regulatory indifference to surveillance and privacy breaches.

But the Internet is global, and that means that different regions have the power to export their values to the rest of the world. The EU has been a steady source of pro-privacy, pro-competition, public-spirited Internet rules and regulations, and European companies have a deserved reputation for being less prone to practicing "surveillance capitalism" and for being more thoughtful about the human impact of their services.

In the same way that California is a global net exporter of lifesaving emissions controls for vehicles, the EU has been a global net exporter of privacy rules, anti-monopoly penalties, and other desperately needed corrections for an Internet that grows more monopolistic, surveillant, and abusive by the day.

Many of the cheerleaders for Articles 11 and 13 talk as though these are a black eye for Google and Facebook and other U.S. giants, and it's true that they would result in hundreds of millions in compliance expenditures by Big Tech, but it's money that Big Tech (and only Big Tech) can afford to part with. Europe's much smaller Internet companies need not apply.

It's not just Europeans who lose when the EU sells America's tech giants the right to permanently rule the Internet: it's everyone, because Europe's tech companies, co-operatives, charities, and individual technologists have the potential to make everyone's Internet experience better. The U.S. may have a monopoly on today's Internet, but it doesn't have a monopoly on good ideas about how to improve tomorrow's net.

The global Internet means that we have friends and colleagues and family all over the world. No matter where you are in the world today, please take ten minutes to get in touch with two friends in the EU, send them this article, and then ask them to get in touch with their MEPs by visiting Save Your Internet.

Take Action

There's only one Internet and we all live on it. Europeans rose up to kill ACTA, the last brutal assault on Internet freedom, helping Americans fight our own government's short-sighted foolishness; now the rest of the world can return the favor to our friends in the EU.

UPDATE September 14, 2018: This blog has been updated at the bottom to include information about two Senators’ reactions to the NSA’s call detail record deletion.

In late June, the NSA announced a magic trick—hundreds of millions of collected call records would disappear. Its lovely assistant? Straight from the agency’s statement: "Technical irregularities."

These "technical irregularities" are part of a broad and troubling pattern within the NSA—it has repeatedly blamed its failure to comply with federal laws on technical problems purportedly beyond its control. EFF has a long history of criticizing Congress for giving the NSA broad authority for its surveillance programs, but allowing the NSA to flout what limits Congress has put on the programs because of vague "technical" issues is wholly unacceptable. If the NSA can’t get its technology in order, Congress should question whether the NSA should be conducting mass surveillance at all.

For example, the NSA is currently required to report numbers called "unique identifiers" in a transparency report compiled annually by the agency’s Office of the Inspector General (OIG). These numbers could help the public understand just how many Americans are burdened by NSA surveillance. But the NSA didn’t report the numbers this year, or the two years prior, because, according to the report, "the government does not have the technical ability."

And in May 2018, the agency discovered that its massive telephone metadata surveillance program was surveilling too massively. During call detail record collection authorized under Section 215 of the Patriot Act, as amended by the USA Freedom Act of 2015, the NSA said it also collected records that it had no legal authority to obtain. Countless records were, in effect, illegally collected and stored for years. The NSA blamed this on "technical irregularities."

The same "technical irregularities" that led to improper data collection also made it impossible to separate improperly collected call records from properly collected ones, the NSA claimed. Apparently unable to disentangle this techno-Gordian knot, the agency decided to just throw the whole thing out. All 685 million call detail records collected from telecommunications companies since 2015 would be deleted, the agency said. (Confusingly, even though the NSA said it found this problem "several months" prior, it waited until late May to act—and then took another month to tell the public what happened.)

Something is clearly amiss here. The NSA has repeatedly insisted to the American public and Congress that these call records are necessary for "national security," and yet, the agency’s solution to discovering the over-collection was to delete everything it had grabbed for the past three years.

The NSA may blame its computer systems, but Senator Ron Wyden (D-OR), who sits on the Senate Select Committee on Intelligence, does not. Sen. Wyden instead blamed telecommunication providers for the over-collection, telling the New York Times:

"Telecom companies hold vast amounts of private data on Americans. This incident shows these companies acted with unacceptable carelessness, and failed to comply with the law when they shared customers’ sensitive data with the government."

Because the NSA only offered a sparse, uninformative public statement, many questions are left unanswered. What technical problem did the agency actually discover? What was its root cause? How did the NSA originally identify the problem, and why did it take three years to find it? Considering Sen. Wyden’s comments, who is at fault for the over-collection? The companies? The NSA? Both?

Let’s not get lost here. Whether the NSA over-collected or the companies over-delivered is only tangential to the core problem—there are no legal consequences for violating the rules. 

Most importantly, how is it that the NSA—which has consistently defended its mass surveillance as necessary for "national security"—decided that national security was not at risk when deleting these records? Does the NSA’s about-face mean that, as we’ve said for years, the agency doesn’t actually need to collect these types of records in the first place?

In a sense, the deletion of these records is good news. The fewer records the NSA has on us, the better. (Although the telecom companies’ troubling retention of these records remains). The warrantless collection of Americans’ private data is something EFF has fought for years, advocating for meaningful reform both in court cases and in legislation. We need more answers, and we need to stop letting the NSA blame "technical irregularities" for its failures, something it has done for years.  

Between 2009 and 2017, the NSA cited technology failures for more than 15 violations of federal law regarding a separate NSA surveillance program that sweeps up Americans’ online communications including emails, web chats, and browsing history. According to released court opinions and documents, the NSA’s remedy to these technical failures is often unknown. The NSA could have fixed its errors, or it could have ignored them. We simply don’t know.

This lack of transparency only compounds the NSA’s irresponsibility in its failure to comply with the law. When the NSA has admitted a technical error, it has done next to nothing to explain the problem in any detail, why the problem is allegedly too hard to fix, or how the problem began in the first place.

For the NSA’s failure to report unique identifiers this year, the OIG transparency report offered a one-sentence explainer and then hand-waved the problem away, saying that, if anything, the statistics reported were "over-inclusive" because of potentially duplicated counts of single call records.

As for the agency’s mass deletion of call detail records, the public received no further explanation of the "technical irregularities" themselves. Instead, the NSA claimed that it had fixed the problem, and that all future call detail record collection would be compliant with federal law.

These statements mean little to us by now. Too often, the NSA has responded to its own mistakes and outside attempts at oversight with one of three options: neglect, denial, or misleading statements. We saw a similar reaction when, in 2015, Congress passed the USA Freedom Act, the first successful, legislative attempt to meaningfully restrict the NSA’s surveillance under Section 215—the very same program under which the NSA has now deleted hundreds of millions of call records. 

Former NSA general counsel Glenn Gerstell initially expressed concern about the potentially "cumbersome" collection requirements under the USA Freedom Act, but he still said:

"NSA is confident, however, that it can operate the new scheme in compliance with the law."

We now see that this confidence was misplaced. It’s a shame it took three years for us to find out.

With the NSA’s call record surveillance program up for reauthorization in 2019, we must demand meaningful explanations for the NSA’s failures, refusing to accept the agency’s bland assurances. We worry that meaningful reforms, even if successfully approved by Congress, could go ignored once again.

We ask the NSA to finally explain what is happening inside its databases, what it is doing to fix these continued problems, and what it is doing to protect the Fourth Amendment privacy rights of all Americans. Finally, Congress, we urge you to find out: if the NSA’s collection is so easily deleted, why can’t we stop it entirely?

Update: Last month, Senators Ron Wyden and Rand Paul sent a letter to NSA Inspector General Robert Storch asking his office to investigate many of the same concerns we wrote above. We thank the Senators for their work. You can read the letter here.

Whistleblower Chelsea Manning was released from prison more than a year ago, after former President Barack Obama commuted her sentence for releasing military and diplomatic records to WikiLeaks. But her case still continues, as Manning wants to appeal her original conviction—including one charge under a controversial federal anti-hacking law.

The Computer Fraud and Abuse Act (CFAA) is intended to punish people for breaking into computer systems. Yet Manning didn’t break into anything. Instead, she was found guilty of violating the CFAA for using a common software utility called Wget to access a State Department database—a database she was generally authorized to access—in violation of a computer use policy. The policy prohibited the use of unauthorized software, even though the prohibition, which covers everything from computer games to simple automated Web browsing tools like Wget, is rarely enforced by the chain of command. Prosecutors have argued that Manning’s use of the Wget software violates the law’s provision against intentionally exceeding "authorized access" to a computer connected to the Internet.

But as EFF and the National Association of Criminal Defense Lawyers (NACDL) argued in an amicus brief filed last week in Manning’s request for a hearing on appeal, violating an employer’s policy on computer use is not a crime under the CFAA. If it were, then it would turn scores of people into criminals for things like browsing Facebook or viewing online sports scores at work. It would also threaten the work of researchers and journalists, who increasingly rely on common automated Web browsing tools to more efficiently access publicly available information on the Internet so that they can do their work, even though such tools are often prohibited in websites’ terms of service. Overzealous prosecutors and private companies have long taken advantage of the CFAA’s vague language to threaten criminal charges that go beyond Congress’s original goal to police computer crime, and Manning is only one of the latest high-profile victims.

We can’t have ordinary online behavior—such as the use of simple, common tools for making it easier to collect publicly available information—become a federal criminal offense. Four other circuit courts have agreed. We hope the United States Court of Appeals for the Armed Forces takes Manning’s case and helps bring some fairness to the CFAA.

Since the Supreme Court’s 2014 Alice v. CLS Bank decision, courts have invalidated hundreds of patents that should never have been issued. Unfortunately, the Patent Office may restrict the impact of that ruling on patent applications under examination.

The Patent Office has issued a request for comment on a proposal to give guidance to examiners that would put a thumb on the scale in favor of patent applicants. If adopted, the guidance would make it too hard for examiners to reject applications that claim abstract ideas. We’ve argued before against Patent Office proposals that water down the Supreme Court’s Alice decision. We have submitted new comments urging the office to apply Alice comprehensively and correctly, rather than biasing the process in favor of applicants hoping to patent generic computer functions.

The Alice ruling was a big win for software developers and users. The decision empowered district courts across the country to invalidate hundreds of patents that should not have issued, and to do so at the earliest stages of a lawsuit, before litigation costs become prohibitive. But lawsuits over patents on basic ideas, like the idea of using categories to store and retrieve information, keep coming. These patents may use technical jargon, but actually require no technology beyond an off-the-shelf general-purpose computer.

Examiners need to understand the change in the law that Alice made. Our comments emphasize the key part of Alice’s landmark holding—describing generic computers performing generic computer functions can’t save a patent.

So why is this guidance coming now? The Patent Office’s new request comes on the heels of the Federal Circuit’s decision in Berkheimer v. HP, Inc., the first case to find evidence outside a patent necessary to decide whether the patent is abstract under Alice. If courts take the direction of Berkheimer, it could mean that those accused of infringement will have to present evidence to a jury at trial before they get a decision on eligibility under Alice.

But Berkheimer is just one outlier case. The Federal Circuit has heard cases both before and after it that confirm courts can make patent-eligibility decisions without litigating the extra evidence demanded in Berkheimer. Patent examiners should consider all of those cases, and not be encouraged to see Berkheimer as a loophole.

In any case, guidance that expands Berkheimer beyond its limits could quickly become obsolete. That’s because the Berkheimer decision could be reviewed by the Supreme Court, where the patent at issue could meet the same fate as the bad patent in Alice. In fact, that’s exactly what EFF believes should happen. The patent in the case covers a conventional parser that’s found in any general purpose computer. It’s exactly the kind of generic implementation the Supreme Court rejected in Alice. No additional facts can change the generic structure and operation of the parser in this case. The Federal Circuit should be overruled, and the patent thrown out.

Until then, EFF hopes the Patent Office takes our comments into account. The Berkheimer case is a mistake that needs correcting. The Patent Office should not view it as an opportunity to skew the outcome of decisions on pending patent applications in ways that undermine the rules so recently set forth in Alice.

Victory! California Passes Net Neutrality Bill
Fri, 31 Aug 2018 22:30:26 +0000

California’s net neutrality bill, S.B. 822 has received a majority of votes in the Senate and is heading to the governor’s desk. In this fight, ISPs with millions of dollars to spend lost to the voice of the majority of Americans who support net neutrality. This is a victory that can be replicated.

ISPs like Verizon, AT&T, and Comcast hated this bill. S.B. 822 bans blocking, throttling, and paid prioritization, classic ways that companies have violated net neutrality principles. It also incorporates much of what the FCC learned and incorporated into the 2015 Open Internet Order, preventing new assaults on the free and open Internet. This includes making sure companies can’t circumvent net neutrality at the point of interconnection within the state of California. It also prevents companies from using zero rating—the practice of not counting certain apps or services against a data limit—in a discriminatory way. That is to say, an ISP could offer a plan where all media streaming services are zero-rated, but not a plan where only a single service is, such as one that paid for the privilege or one the ISP itself owns. In that respect, it’s a practice much like discriminatory paid prioritization, where ISPs create fast lanes for those who can pay or for other companies they own.

ISPs and their surrogates waged a war of misinformation on this bill. They argued that net neutrality made it impossible to invest in expanding and upgrading their service, even though they make plenty of money. Lobbying groups sent out robocalls that didn’t mention net neutrality—which remains overwhelmingly popular—but merely cited the bill’s number and claimed, with no evidence, that it would force ISPs to raise their prices by $30. And they argued against the zero-rating provision even though we know those practices disproportionately affect lower-income consumers [pdf].

There was a brief moment in this fight when it looked like the ISPs had won. Amendments offered in the Assembly Committee on Communication and Conveyance after the bill had passed the California Senate mostly intact gutted the bill. But you made your voices heard again and again until the bill’s strength was restored and we turned opponents into supporters in the legislature.

In the middle of all of this, the story broke that Verizon had throttled the service of a fire department in California during a wildfire. During the largest wildfire in California history, the Santa Clara fire department found that its "unlimited" data plan was being throttled by Verizon and, when contacted, the ISP told the fire department they needed to pay more for a better plan. Under the 2015 Open Internet Order, the FCC would have been able to investigate Verizon’s actions. But since that order’s been repealed, Verizon might escape meaningful punishment for its actions.

The story underscored the importance of FCC oversight and its public safety implications. On August 30, S.B. 822 passed the California Assembly and then, on August 31, it received enough Senate votes to continue to the governor. With the governor’s signature, California will have passed a model net neutrality bill.

California’s fight is a microcosm of the nation’s. Net neutrality is popular across the country. The same large ISPs that led the fight against it in California are the ones that serve the rest of the country, a majority of which don’t have a choice of provider. The arguments that they made in California are the same ones they made to the FCC to get the Open Internet Order repealed. The only thing preventing what happened to California’s firefighters from happening elsewhere is Verizon saying it won’t.

We need net neutrality protections at as many levels as we can get them. And Congress can still vote to restore the FCC’s 2015 Open Internet Order. In fact, the Senate already did. So contact your member of the House of Representatives and tell them to vote for the Congressional Review Act resolution and save national net neutrality protections. Californians, tell Gov. Jerry Brown to sign S.B. 822.

Take Action

Tell the Governor to Sign the California Net Neutrality Bill

Prevalence of Link Shorteners and Other Tools Means You Might Not Know Where You Are Going

San Francisco - The Electronic Frontier Foundation (EFF) has asked an appeals court to ensure that a click on a URL isn’t enough to get a search warrant for your house.

In U.S. v. Nikolai Bosyk, law enforcement discovered a link to a file-sharing service that was suspected of being used to share child pornography. Prosecutors got a warrant to search Bosyk’s home based only on the fact that someone attempted to access the link from his home. The warrant application included no information on why or how the user encountered the link, or if he had any knowledge of what it linked to.

In an amicus brief filed in the United States Court of Appeals for the Fourth Circuit, EFF argues that law enforcement should gather more evidence before subjecting someone to an invasive home search. It’s not always clear what kinds of sites URLs link to, particularly with the prevalence of link shorteners or other tools that obscure a link’s destination.

"It’s easy to see how someone could be misdirected to content that they never intended to access. In fact, it happens all the time, as part of ‘Rickrolling’ and other pranks," said EFF Staff Attorney Stephanie Lacambra. "Police officers shouldn’t be able to get a warrant to rifle through your house just because you clicked on something you might never have wanted to see. Without further evidence that the person suspected of the crime knew what the content was and intended to access it, courts should decline to authorize such search warrants."

This case is one of many stemming from the investigation of the "Playpen" site, a hidden web service that hosted child pornography. The government has used a number of legally questionable tactics in trying to find Playpen users, including using a single warrant to target hundreds of different people, and installing malware into thousands of computers around the world.

"Although it may be tempting to overlook law enforcement overreach when it comes to tracking down potential pedophiles, the ramifications for our Fourth Amendment rights are dire," said EFF Staff Attorney Aaron Mackey. "If we let police trample on the privacy rights of individuals suspected of reprehensible crimes, we erode everyone’s constitutional rights."

For the full amicus brief:


At EFF, we often criticize software patents that claim small variations on known techniques. These include a patent on updating software over the Internet, a patent on out-of-office email, and a patent on storing data in a database. Now, Google is trying to patent the use of a known data compression algorithm, called asymmetric numeral systems (ANS), for video compression. In one sense, this patent application is fairly typical. The patent system seems designed to encourage tech giants to flood the Patent Office with applications for every little thing they do. Google’s application stands out, however, because the real inventor of ANS did everything he could to dedicate his work to the public domain.

Jarek Duda developed ANS from 2006-2013. When he published his work, he wanted it to be available to the public free of restrictions. So he was disappointed to learn that Google was trying to patent the use of his algorithm. In his view, Google’s patent application merely applied ANS to a standard video compression pipeline. Earlier this summer, Timothy B. Lee of Ars Technica published a detailed article about the patent application and Duda’s attempt to stop it.

This week, the Patent Office issued a non-final rejection of all claims in Google’s application. The examiner rejected the claims on a number of grounds. First, he found the three broadest claims ineligible under Alice v. CLS Bank, which holds that abstract ideas do not become eligible for a patent merely because they are implemented on a generic computer. The examiner rejected all of the claims for lack of clarity and for claiming functions that are not described with sufficient detail (applicants are often able to overcome these kinds of rejections with an amendment).

The examiner also rejected all of Google’s claims as obvious in light of Duda’s work, in combination with an article by Fabian Giesen and a 20-year-old patent on data management in a video decoder. Duda had made a third-party submission to ensure his work was before the examiner. Notably, this is a non-final rejection (and even final rejections at the Patent Office are not really final). This means Google can still amend its claims and/or argue that the examiner was wrong.

It is time for Google to abandon its attempt to patent the use of ANS for video compression. Even if it could overcome the examiner’s rejection, that would only reflect the failings of a patent system that hands out patents for tiny variations on existing methods. It may be that Google is seeking the patent solely for defensive purposes. In other contexts, Google has worked to make video codecs royalty free. But that doesn’t make it okay for one of the world’s biggest companies to get a software patent on a minor tweak to someone else’s work. Perhaps it is unlikely that Google would assert an ANS patent in the short or medium term. But many once-dominant companies have turned to their patent portfolios as their star has faded.

ANS should not belong to tech giants willing to push applications through a compliant Patent Office. ANS should belong to all of us.

Update 8-30-2018: The vote count has been updated to reflect the final tally.

After a long and hard-fought battle, one where you made your voices heard, California’s Assembly passed S.B. 822, the net neutrality bill. But we’re not quite done yet.

In a bipartisan vote of 61-18, S.B. 822 passed the Assembly. Now it needs to pass the Senate again.

ISPs have tried hard to gut and kill this bill, pouring money and robocalls into California. There was a moment when that campaign looked like it might have been successful, but you spoke out and got strong net neutrality protections restored. But that hiccup means that, although a version of the bill already passed the California Senate, the current version is different enough that it must be voted on again.

We’re in the home stretch here. California could pass a gold standard net neutrality bill, providing a template for states going forward. California can prove that ISP money can’t defeat real people’s voices. So, one more time, contact your California state senator and tell them to vote yes on S.B. 822.

Take Action

Tell California Senators to Vote Yes on S.B. 822

Cases Among First Since Landmark Supreme Court Decision in Carpenter

Portland, Maine—The Electronic Frontier Foundation (EFF) and the ACLU are urging the state’s highest courts in Massachusetts and Maine to rule that law enforcement agents need a warrant to access real-time location information from cell phones, a clear application of a landmark U.S. Supreme Court ruling from June.

EFF, in partnership with ACLU chapters in Massachusetts and Maine, is asking the state courts to recognize, as the Supreme Court did in U.S. v. Carpenter, that people have a constitutional right to expect privacy in their physical movements, which can be revealed in minute detail by the cell phones they carry. Cell phone use is ubiquitous in our society. People have their phones with them all the time, and the location information produced by the phone can reveal our every move—where we live, socialize, visit, vacation, worship, and whom we meet with, including friends, colleagues, relatives, doctors, partners, political associates, and much more. In Carpenter, the Supreme Court said that government cell phone tracking "achieves near perfect surveillance," and is like the government attaching an ankle monitor to a cell phone user. Cell phone location information searches fall under the Fourth Amendment and require a warrant, the court ruled.

Courts around the country are now being asked to address the scope of this ruling. Cases in Massachusetts and Maine, which were pending on appeal when the Carpenter ruling was issued, are among the first to deal directly with how the Supreme Court’s ruling should be applied when police track and locate people in real-time.

In a brief filed today in Maine and one filed August 20 in Massachusetts, EFF said that while the Carpenter decision involved historical cell phone location data, the rule articulated by the Supreme Court—that collection of cell phone location data from third party phone companies is a Fourth Amendment search that requires a warrant—applies equally to real-time collection.

In State of Maine v. O’Donnell, police asked Verizon to provide cell phone location information on the phones of two burglary suspects. The carrier "pinged" the phones—surreptitiously accessing GPS functions and causing the phones to send their coordinates back to Verizon—and transmitted the locations to police, who arrested the pair. A trial court ruled the suspects’ Fourth Amendment rights weren’t violated because the location information was obtained from a third-party: Verizon.

In Commonwealth of Massachusetts v. Almonor, police had a phone carrier ping the cell phone of a suspect in a murder case. The real-time location search pinpointed the suspect in a private home. The state contends it can warrantlessly get cell phone location data to locate anyone, anytime, at any place for up to six hours. A trial court disagreed and the state appealed.

"Our right to privacy in our cell phone location information is the same whether the police seek data in real-time or past data. Both implicate our rights to keep our everyday travels private," said EFF Senior Staff Attorney Jennifer Lynch. "The Maine and Massachusetts courts should clarify that Carpenter establishes important limitations on government searches of location information, without which, as the Supreme Court said in Carpenter, law enforcement agencies will have unfettered powers of surveillance."

The Maine court should correct the O’Donnell trial court’s reliance on the "Third Party Doctrine," an outdated legal standard that says people don’t have an expectation of privacy in information they share with a third-party.

"The Supreme Court expressly ruled that the doctrine doesn’t apply to cell phone location information because cell phone use is so pervasive and indispensable in modern life," said EFF Staff Attorney Andrew Crocker. "The court recognized that when police seek location information from carriers, that’s an intrusion on privacy that requires a warrant. The Maine and Massachusetts courts should do the same."

For the O'Donnell brief:

For the Almonor brief:

For more on the Carpenter decision:


Here’s the not-so-secret recipe for strong passphrases: a random element like dice, a long list of words, and math. And as long as you have the first two, the third takes care of itself. Altogether, this adds up to diceware, a simple but powerful method for creating a passphrase that would take even the most sophisticated computer at least thousands of years to guess.

In short, diceware involves rolling a series of dice to get a number, and then matching that number to a corresponding word on a wordlist. You then repeat the process a few times to create a passphrase consisting of multiple words. 
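In code, the roll-and-look-up process might look like the following sketch (a minimal illustration, not EFF’s actual tooling; the tiny wordlist here is hypothetical):

```python
import secrets

# Hypothetical miniature wordlist for illustration; EFF's real lists hold
# 7,776 words (for five six-sided dice) or 4,000 words (for three 20-sided dice).
WORDLIST = ["aardvark", "banjo", "crouton", "dolphin", "ember", "fjord"]

def roll(sides):
    """One fair die roll, using a cryptographically strong random source."""
    return secrets.randbelow(sides) + 1

def pick_word(wordlist, dice=5, sides=6):
    """Combine several rolls into a single index and look up that word."""
    index = 0
    for _ in range(dice):
        index = index * sides + (roll(sides) - 1)
    # The modulo is only needed because this demo list is far smaller
    # than the 6^5 = 7,776 outcomes that five d6 rolls can produce.
    return wordlist[index % len(wordlist)]

def passphrase(num_words=5):
    return " ".join(pick_word(WORDLIST) for _ in range(num_words))

print(passphrase())
```

With a real 7,776-word list, the modulo disappears and every roll combination maps to exactly one word, which is what keeps the selection uniformly random.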

In 2016, EFF debuted a series of wordlists that can be used with five six-sided dice to generate strong passphrases. This year, we’re upping our game. At Dragon Con 2018 in Atlanta over Labor Day weekend, EFF will be testing new wordlists optimized for three 20-sided dice. Since Dragon Con is largely a fantasy and science fiction convention, we’ve also created four new wordlists drawn from fan-created Wikia pages for Star Trek, Star Wars, Game of Thrones, and Harry Potter.

If you’re at Dragon Con, come visit our table on the second floor of the Hilton Atlanta. EFF and Access Now are teaming up to teach people how to create passwords using giant 20-sided dice. Attendees will also be encouraged to write sentences or little stories using the words to help remember their passphrases. Participants who successfully create a strong passphrase will receive a gift (while supplies last).

We’re also releasing the wordlists and password worksheet online, so folks at home can play along:

(Note: Any trademarks within the wordlist are the property of their respective trademark holders, who are not affiliated with the Electronic Frontier Foundation and do not sponsor or endorse these passwords.)

How We Created the Wordlists

A diceware passphrase is just a set of rare and unusual words that is easy for humans to remember, but hard for computers to guess. When we set out to create fandom-specific wordlists, we weren’t sure where to gather unique but relevant words. Official encyclopedias for Star Trek and Star Wars only had hundreds of entries—nowhere close to the thousands of possible rolls of three 20-sided dice.

So, we began to look at the FANDOM Wikia pages for various science fiction and fantasy universes. At first, we tried using the unique page titles for sections like Memory Alpha and Wookieepedia. While we were easily able to gather enough words for wordlists, too many of the words were complicated, obscure names or words from fictional languages. They would have been too difficult for most fans to memorize, and memorability is one of the key features of the diceware technique.

Instead, we narrowed in on some of the most popular pages for various fandoms, such as limiting ourselves to the main Star Wars films, a selection of Star Trek episodes from the original series and Discovery, the Harry Potter books, and a few episodes from each season of Game of Thrones. Then, we filtered the text of each page to just its unique words. As a result, our wordlists are mostly regular English words with a distinct flavor of the corresponding fandom.

Each wordlist is 4,000 unique words, repeated once to match the possible 8,000 outcomes of the three 20-sided dice.

The Math

For this method, it’s important to use carefully constructed wordlists. It’s also important that the user not modify the words after they’ve been chosen or re-roll for new words because they don’t like the original ones that came up. This process relies on randomness—so, the second some words on the list are prioritized over others or changed in the generation process, the mathematical analysis starts to fall apart.

To see why, we need to understand how to analyze the security of a passphrase.

Let’s assume an attacker trying to crack our passphrase knows the method we used (in this case, a particular fandom wordlist and three 20-sided dice). We also assume the attacker is going to use the most effective attack for that particular method. For our method, that means trying all combinations of words in the wordlist, rather than, say, trying every individual letter combination.

Assuming the attacker knows that our passphrase is made up of words from a particular list, then the security of a passphrase is determined by how many possibilities there are. In our wordlists, there are 4,000 words, and we’re choosing five of them, so the number of possibilities is 4000 × 4000 × 4000 × 4000 × 4000, which is about 10^18 possibilities. Around 10^18 to 10^24 is usually a good number to aim for, for most people. The easiest way to increase this number is by adding another word to the passphrase using the same dice-rolling method.
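The arithmetic is easy to verify:

```python
words_in_list = 4000
words_per_passphrase = 5

# Each of the five positions can be any of the 4,000 words.
possibilities = words_in_list ** words_per_passphrase
print(f"{possibilities:,}")  # 1,024,000,000,000,000,000 — about 10^18
```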

How long will it take for an attacker to crack this password in practice? That depends on how fast the attacker’s computer is. A typical desktop computer today can try about 15 million passwords per second. The world’s fastest supercomputer can try about 92 trillion passwords per second.

If you assume the attacker has a copy of the wordlist you used and a computer that can try 15 million passwords a second, it would take them over two thousand years to try every possible combination, cracking the password in just over a thousand years on average.

The world’s fastest supercomputer could crack that same password in an hour and a half on average, but not to worry: adding two more words to the password increases that time to almost three thousand years for even the fastest supercomputer.
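These estimates follow from dividing the number of possibilities by the attacker’s guessing rate. A back-of-the-envelope sketch, using the figures above:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_crack(list_size, num_words, guesses_per_second):
    """Worst case tries every combination; the average case is half that."""
    worst = list_size ** num_words / guesses_per_second / SECONDS_PER_YEAR
    return worst, worst / 2

# Desktop attacker, five-word passphrase: just over a thousand years on average.
worst, average = years_to_crack(4000, 5, 15e6)
print(f"desktop, 5 words: {average:,.0f} years on average")

# Fastest supercomputer, seven-word passphrase: almost three thousand years.
worst, average = years_to_crack(4000, 7, 92e12)
print(f"supercomputer, 7 words: {average:,.0f} years on average")
```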

Check out EFF's full list of panels at Dragon Con here

Earlier this week, we joined with Human Rights Watch, Amnesty International, Article 19, and 10 other international human rights groups in a letter to Google’s senior leadership, calling on the company to come clean on its intentions in China – both to the public, and within the company.

A little background: it’s been almost a month since The Intercept first broke the story that Google was planning to release a censored version of its search service inside China. Since that time, very little new information about the effort, known as Project Dragonfly, has come to light. Over 1,400 employees have asked Google to be more transparent about the search giant’s plans, but at an all-hands meeting executives only responded with generalities before the conversation was cut short. Google certainly hasn’t provided the public with any details, leaving many in the human rights community to continue wondering how Google plans to avoid becoming complicit in human rights abuses by the Chinese government.

Google still owes both audiences—Google employees and the public—an explanation.  

Google has committed to following certain human rights principles wherever it chooses to operate.

Google has weighed the pros and cons of Internet companies entering the Chinese market before. As we mentioned in our last post, Google is a founding member of the Global Network Initiative (GNI)—a group dedicated to ensuring that "technology companies [can] best respect the rights of their users." That means Google has committed to following certain human rights principles wherever it chooses to operate.

The GNI was launched in 2008 following the controversy over Yahoo’s Chinese operations handing over the private emails of a journalist, Shi Tao, who subsequently served eight years of forced hard labor in prison because he trusted a Western company with his communications. To prevent such tragedies from happening again, Google, Yahoo, and other technology companies formed the organization in cooperation with human rights groups, including EFF. The GNI provides process, transparency, and accountability for decisions to enter jurisdictions like China, where tech companies may be pressured to comply with similar demands that violate human rights law related to privacy and freedom of expression.

To address this, the GNI Principles explicitly state that:

"ICT companies should comply with all applicable laws and respect internationally recognized human rights, wherever they operate. If national laws, regulations and policies do not conform to international standards, ICT companies should avoid, minimize, or otherwise address the adverse impact of government demands, laws, or regulations, and seek ways to honor the principles of internationally recognized human rights to the greatest extent possible. ICT companies should also be able to demonstrate their efforts in this regard."

China’s Internet censorship and surveillance programs have repeatedly violated international standards regarding privacy and freedom of expression, association, and religion. Because of that, Google now needs to live up to the GNI principles and "demonstrate its efforts" to "address the adverse impact" of China’s demands for censorship.

Google executives can start by answering the following questions.

How does Project Dragonfly fit into Google’s obligation to support the freedom of expression of Chinese users?

As a member of the GNI, Google is obligated to "respect and work to protect the freedom of expression of [its] users by seeking to avoid or minimize the impact of government restrictions…on the information available to users".

Further, Google has committed to "respect and work to protect the freedom of expression rights of users when confronted with government demands, laws and regulations to… limit access to communications, ideas and information in a manner inconsistent with internationally recognized laws and standards."

It’s hard to see how Google plans to create a search app that somehow abides by China’s arbitrary, secretive and overtly politicized censorship regime while still protecting the freedom of expression of users. If Google thinks they have a way around this, they should publicly explain how.

How does Project Dragonfly fit into Google’s obligation to protect the privacy of Chinese users?

The second principle of the GNI is privacy. As a member of the GNI, Google has promised "to protect the privacy rights of users when confronted with government demands, laws or regulations that compromise privacy in a manner inconsistent with internationally recognized laws and standards."

In 2009, a group with ties to the Chinese government launched several sophisticated cyberattacks targeting the Gmail contents of Chinese human rights activists. This event, among others, contributed to Google’s decision to pull out of China. Since then, the Chinese government has doubled down on laws which would make targeted surveillance even easier.

How does Google plan to protect the privacy of its users from the Chinese government, which has sought access to personal data in order to persecute Internet users for their political speech, religious associations, and for writing code? So far, no answers have been forthcoming.

How will Google prevent Project Dragonfly from leading Google to become complicit in Chinese human rights abuses?

Companies have an obligation to protect the human rights of their users. The United Nation’s Guiding Principles on Business and Human Rights (the "Ruggie Principles") explicitly "require[] that business enterprises: avoid causing or contributing to adverse human rights impacts through their own activities, and address such impacts when they occur; [and] seek to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts."

Nobody at Google has explained how providing censored search won’t contribute to "adverse human rights impacts."

Google’s Customers and Its Employees Deserve Answers

When Google first considered entering China, the constraints it set on its actions, and its subsequent decision to leave, were determined by a group who knew intimately the risks of what it was doing: its employees. We know from our work advocating to Google over a decade ago that there was a wide-ranging and often contentious debate within that younger, smaller Google. It wasn’t just from high principles (though they played their part): there was also a recognition that if Google ended up, like Yahoo, being complicit in human rights abuses, the reputational damage would demolish the trust that Google needed to conduct the rest of its business.

The modern Google has shown that it’s sometimes willing to listen when its employees raise similar concerns in new contexts. When news came to light that Google was contracting with the U.S. military, outcry from Google employees helped bring that project to account.

Now, we see the same happening over Project Dragonfly. Over 1,400 Google employees have already signed a letter demanding clarity over the Project’s aims and plans, arguing that Googlers "currently…do not have the information required to make ethically-informed decisions about [their] work, [their] projects, and [their] employment."

That conversation, based on facts, was cut short in the last internal Google discussion, but it still needs to take place.

Google employees are uniquely positioned to demand that their company does the right thing by China, and by the world. After all, if Google does the wrong thing, it will be the employees’ own work that will be affected.

Without a proper debate by Google’s staff, fueled by complete answers to core questions about Project Dragonfly, the company’s Chinese project risks setting a bad precedent for the global tech sector. Not only is Google walking down a path to complicity in acts that no Googler would want on their conscience or their resume; what Google does—given the behemoth that it is—will surely influence other companies that are contemplating entering the Chinese market—for good or for ill. Human rights organizations will keep up the pressure from the outside. We urge Google employees to keep up the pressure within. Google’s management should listen to both — and remember its history, its GNI obligations, and its moral obligations as a global company whose actions have real-world consequences for millions of users.

In Passing A.B. 2192, California Leads the Country in Open Access

The California legislature just scored a huge win in the fight for open access to scientific research. Now it’s up to Governor Jerry Brown to sign it.

Under A.B. 2192—which passed both houses unanimously—all peer-reviewed, scientific research funded by the state of California would be made available to the public no later than one year after publication. There’s a similar law on the books in California right now, but it only applies to research funded by the Department of Public Health, and it’s set to expire in 2020. A.B. 2192 would extend it indefinitely and expand it to cover research funded by any state agency. EFF applauds the legislature for passing the bill, and especially Assemblymember Mark Stone for introducing it and championing it at every step.

A.B. 2192’s fate was much less certain a few weeks ago. Lawmakers briefly put the bill in the Suspense File, a docket of bills to be put on the back burner because of their potential impact on the California budget. Fortunately, the Senate Appropriations Committee removed A.B. 2192 from the file after EFF explained that its fiscal impact would be negligible.

We hope that the governor signs A.B. 2192 and that it becomes a model for similar bills around the country. As I said when I testified to the California Senate about the bill, the traditional system for publishing scientific papers puts researchers who can afford expensive journal subscriptions at an advantage over those who can’t. While California can’t do anything to stop that disparity for most scientific research, it can and should find ways to make sure that research funded by the state doesn’t exacerbate it.

mytubethumb play
Privacy info. This embed will serve content from youtube.com

While we’re delighted to see A.B. 2192 pass, it’s only one step in the right direction. Science moves quickly, and a one-year embargo period is simply too long. Lawmakers should work to ensure that more grantees publish their papers in open access journals, available free of cost to the public on the date of publication.

Lawmakers in California and elsewhere should also consider requiring open licenses in future laws. Requiring that grantees publish research under a license that allows others to republish, remix, and add value ensures that the public can get the maximum benefit of state-funded science.

Finally, it’s time for Congress to pass a federal open access bill. Despite having strong support in both parties, the Fair Access to Science and Technology Research Act (FASTR, S. 1701, H.R. 3427) has been stuck in Congressional gridlock for five years. Take a moment to celebrate the passage of A.B. 2192 by writing your members of Congress and urging them to pass FASTR.

Take action

Tell Congress: It’s time to move FASTR

One of the oldest challenges in journalism is deciding what goes on the front page. How big should the headline be? What articles merit front-page placement? When addressing these questions, publishers deal with a physical limit in the size of the page. Digital publishing faces a similar constraint: the storage capacity of the user’s device. You can only put as much content on the device as will fit. If that sounds to you like a fundamental, and unpatentable, idea, we agree. Unfortunately, the Patent Office does not. It recently issued our latest Stupid Patent of the Month: U.S. Patent No. 10,042,822, titled "Device, Method, and System for Displaying Pages of a Digital Edition by Efficient Download of Assets."

The ’822 patent adds nothing remotely inventive or technological to the basic idea of providing a portion of a periodical—i.e., a newspaper—based on the amount of space available. The patent owner, Nuglif, makes an application for distributing news and media content.

Even a cursory glance at the patent reveals the limits of its technological reach. It explains: "The present invention is concerned with a processor-implemented method for displaying a digital edition readable by a dedicated software application running on a data processing device having a display screen, even though the digital edition is not completely downloaded on the data processing device." The specification is typically elusive as to what that invention actually is, instead repeating the boilerplate phrase beloved by patent applicants, that "the description set forth herein is merely exemplary to the present invention and is not intended to limit the scope of protection."

For the limits of the patent, we look to its claims, which define the applicant’s legal rights rather than describing the operation of the "invention" to which the claims supposedly correspond. The patent has only one independent claim, which includes steps of (a) receiving a pre-generated file linking to at least some content from current and upcoming digital editions, (b) requesting the linked content for display, and (c) determining how much content from the upcoming edition to download based on publication date and device capacity.

Notably, the patent does not claim as the invention the processor, the network, the digital edition, the software application for reading the digital edition on the device, or any other technical aspect. Instead, it claims the combination of receiving, requesting, and determining, without limiting it to any particular device or manner of operation. Aside from the reference to a "processor-implemented" method in the preamble to the claim, nothing in the claim indicates these steps would even have to be performed by machinery rather than a human. Nor does it indicate why providing a partial edition would be challenging once a complete edition can be provided.
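To underline how little the claim adds, the claimed steps reduce to everyday programming logic. Here is our own illustrative sketch (the function name, thresholds, and data shapes are our inventions, not the patent’s):

```python
def assets_to_download(upcoming_assets, days_until_publication, free_bytes):
    """Pick which assets of an upcoming edition to prefetch, based on
    publication date and remaining device capacity: the essence of the
    claim's 'determining' step."""
    # Prefetch aggressively when publication is imminent, sparingly otherwise.
    budget = free_bytes // 2 if days_until_publication <= 1 else free_bytes // 10
    selected, used = [], 0
    for name, size in upcoming_assets:  # assets as (name, size-in-bytes) pairs
        if used + size <= budget:
            selected.append(name)
            used += size
    return selected
```

Any working programmer could write this in minutes, which is precisely the point: determining "how much to provide" from a resource budget is not an invention.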

In 2014, the Supreme Court’s Alice v. CLS Bank decision confirmed what numerous earlier decisions had already established: to be eligible for a patent, an applicant must actually invent something. Patents on abstract ideas, laws of nature, and naturally-occurring phenomena are prohibited. These represent the fundamental building blocks of innovation and scientific progress that must remain available to the public. When a patent claims something in these prohibited categories and adds nothing to transform the claims into a specific invention, the patent takes from the public domain, and adds nothing in return.

Abstract ideas are basic principles that often represent methods of organizing human activity that people have known and used for years without technological intervention. Too often, applicants obtain patents on abstract ideas by claiming systems or methods that merely apply these ideas using off-the-shelf computer hardware and software, without adding anything that is inventive and patent-eligible—i.e., something attributable to the applicant, other than the abstract idea or pre-existing computer technology, that supposedly makes it concrete.

The ’822 patent issued on August 7, 2018, and has a priority date of January 10, 2014. That means the Alice decision came out in plenty of time to block its issuance. The idea of providing less based on resource constraints is not even technological, let alone innovative. It is a basic idea that drives human activity every day: from our decision not to consume an entire day’s worth of food at breakfast, to our decision to fill our bag with only what we can carry, and actually need, for work or school.

Nothing in the patent suggests that the applicant came up with anything beyond the idea of making a determination based on timing and capacity. Even the patent relies on the obvious analog analogies, explaining that Saturday editions are typically "more voluminous" and thus demand more capacity than "lighter" Sunday editions with fewer sections. But that was just as true for paper editions distributed by newspaper carriers as for digital editions distributed on devices today. The need to adapt to the constraints of a medium is not a problem tied to any particular technological tool or environment.

Right now, we have no concerns about the conduct of the assignee, Nuglif. But we are worried that the Patent Office is still issuing patents like this one. Because the ’822 patent issued so recently, it has the potential to be used to threaten or bring suit until it expires in 2034. Since it directly relates to the distribution of news content, these threats could add to risks and costs of creating and distributing newspapers, magazines, and other creative content—activities the First Amendment protects.

On Monday, the Second Circuit Court of Appeals in New York held argument in United States v. Hasbajrami, an important case involving surveillance under Section 702 of the FISA Amendments Act. It is only the second time a federal appeals court has been asked to rule on whether the government can collect countless numbers of electronic communications—including those of Americans—and use these communications in criminal investigations, all without a warrant. In a lengthy and engaged argument [.mp3], a three-judge panel of the Second Circuit heard from lawyers for the United States and the defendant Agron Hasbajrami, as well as from ACLU attorney Patrick Toomey representing ACLU and EFF, which filed a joint amicus brief in support of the defendant. As we explained to the court in our amicus brief and at the argument, this surveillance violates Americans' Fourth Amendment rights on a massive scale.

Hasbajrami is a U.S. resident who was arrested at JFK airport in 2011 on his way to Pakistan and charged with providing material support to terrorists. Only after his conviction did the government explain that its case was premised in part on emails between Hasbajrami and an unnamed "Individual #1"—a foreigner associated with terrorist groups—obtained using PRISM, one of the government’s Section 702 programs.

Under Section 702, the government is authorized to warrantlessly intercept private online communications of foreigners located outside the U.S., an authority that the government claims extends to conversations between foreigners and Americans, so long as it doesn’t intentionally target specific Americans.

Much of the argument was spent probing at the legal justification for this "incidental collection" of Americans’ private communications. The government pointed out that the Fourth Amendment does not protect foreigners outside the U.S., so it does not need a warrant to surveil them. Even though Hasbajrami does have Fourth Amendment rights, the government likens its ability to read emails between Individual #1 and him to a traditional wiretap in which someone suspected of bank robbery might be "incidentally overheard" discussing prostitution with others not named in the wiretap order. The problem with this analogy is that a wiretap must be individually and closely supervised by a court to avoid violating the privacy of bystanders, something not required by Section 702.

The government’s analogy also obscures the actual operation and scope of Section 702 and PRISM. Agents do not sit with headphones in a darkened room listening to a wiretap in near real-time. Instead, the majority of the millions of emails, chats, calls, and other private communications collected are placed unread into vast databases, accessible to a number of federal agencies including the FBI. It seems likely that the government came across the emails in question in Hasbajrami long after the fact by searching these databases.

We don’t know for certain whether this search involved a query using Hasbajrami’s name, a practice known as a "backdoor search," or someone else’s. During the argument, Judge Gerard Lynch pressed the government on this question, noting that its briefing was ambiguous. Judge Lynch pointed out that under the government’s "incidental overhear" argument, assuming the emails were lawfully collected in the first place, "we’ve got the government saying we could do kinda anything we want with that information." So why, he asked, wouldn’t the government say definitively whether it used a backdoor search in this case so that the court would know whether it needed to rule on its constitutionality? As Patrick Toomey of the ACLU argued, at best, the government’s evasive answers showed the need for more public disclosure of the relevant facts in this case. And, he reminded the judges, recent decisions by the Supreme Court and the Second Circuit itself show that even when the government has an initially valid reason to collect information, its continued retention of that information may still violate the Fourth Amendment.

The court’s questions at oral argument demonstrated not only its close reading of the record but a keen awareness that its decision in Hasbajrami will not be written on a blank slate. It will follow on the heels of United States v. Mohamud, in which the Ninth Circuit upheld a similar use of Section 702 surveillance. That decision was so riddled with flaws that it led Orin Kerr, perhaps the most influential law professor on digital search issues, to write that some parts of its reasoning "border on the incoherent." The Mohamud court uncritically accepted the government’s justification for incidental collection and, as Judge Lynch noted, it believed that backdoor searches weren’t at issue despite a similarly unclear record. Given the importance of the case, we hope the Second Circuit rules more carefully.

The fight to secure net neutrality protections for Californians keeps showing how far ISPs and their surrogates will go to make a buck off of ending the free and open Internet. The latest maneuver is a flood of deceptive robocalls targeting seniors and stating that net neutrality will raise their cell phone bills by $30 a month and slow down the Internet. It’s not just a lie, it’s proof that you’ve successfully put them on the defensive by contacting your representatives about net neutrality.

The robocalls don’t mention net neutrality by name. Instead, they simply assert that S.B. 822 will raise their bills and slow down their Internet. If ISPs decided to make this true by coordinating to raise prices in reaction to net neutrality legislation it would probably be illegal under federal antitrust law. There is no evidence that says net neutrality harms ISPs to the point where they must raise prices to make money. In fact, the evidence says the exact opposite. The fact that this is even possible reveals that we seriously lack sufficient competition in the wireless market. Such intentional misrepresentations demonstrate the extent major ISPs oppose any legal requirements to keep the Internet free and open, even after it has been discovered that they would go so far as to upsell public safety during an emergency in California.

The thing is, we know that none of these large companies is operating on so small a margin that complying with net neutrality would "force" them to raise their prices. We also know net neutrality rules have never raised their operational costs. We know these things because the evidence is already publicly available.

Major ISPs Have Had Their Profits Enhanced by Billions in Tax Cuts and Have Delivered Nothing in Return

This year, the two major wireless and wireline providers (Verizon and AT&T) that are leading the effort to oppose California passing net neutrality legislation are expected to receive an additional $7 billion in cash in hand from Congress’ tax cuts (Verizon: $4 billion; AT&T: $3 billion). That’s after having their 2017 net income receive a one-time jump of approximately $38.7 billion ($20 billion to AT&T, $18.7 billion to Verizon) in deductions from those tax cuts. Yet these high profits augmented by tax policy changes give them no pause in deploying their surrogates to falsely state that they must raise everyone’s bills simply because they do not like consumer protection.

We should ask why these companies feel comfortable engaging in such a misinformation campaign during their most profitable year on record, rather than aggressively improving the competitive landscape in high-speed Internet access. EFF recently noted to the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) in their recent inquiries into ISP access competition that nearly 85 percent of Americans live in a market where only their cable provider offers broadband access at 100 Mbps and above. Yet we have seen no signs of the two largest companies in the telephone industry deploying fiber to the home in their markets, despite it being a proven technology that is cheaper to upgrade once installed and capable of reaching speeds 400 times those most Americans have access to now. That means they are not actually trying to compete with each other anymore; instead, we are seeing them work to stop laws from entitling Internet users to a free and open platform.

ISPs Have Never Raised Prices Because of Net Neutrality and None Have Dropped Prices in Response to its Repeal

When talking to their stockholders, ISPs have never claimed that net neutrality has forced them to raise their prices. In not one legal document or financial disclosure report that carries potential liability for lying have large ISPs represented that net neutrality would require them to raise prices. In fact, at least one ISP flat-out admitted that the entire 2015 Open Internet Order, with its legal landscape change in ISP privacy, competition, and consumer protection, did little to affect its business plans.

This is because net neutrality efforts by the FCC dating back more than a decade were meant to preserve the status quo and keep the Internet as we have known it free and open. ISPs have long made their profits from charging their customers a subscription fee for service and the monthly bills we already pay them yield tremendous profits in return. However, as the ISP industry grew more concentrated and vertically integrated with content companies such as Time Warner and NBC Universal, so too have their ambitions to reshape the Internet with things such as arbitrary fees and preferential treatment for their own products. Verizon itself asserted under penalty of perjury when it sued to block net neutrality years ago that it would have already explored violating net neutrality if not for the FCC.

California, and Every State, Should Respond to the Public’s Demand for Net Neutrality With State Laws Until the Federal Rules Are Restored

The FCC’s decision to abandon the 2015 Open Internet Order and surrender oversight over the ISP industry will go down as the biggest mistake in Internet policy history. Already the U.S. Senate has voted to reverse the FCC and, with enough pressure, the House of Representatives may follow in September. An overwhelming number of businesses, educational institutions, civil rights activists, and individuals across the political spectrum weighed in in opposition but were ignored by the federal agency. It should come as no surprise that dozens of states have introduced bills, with many having enacted various protections.

California stands on the brink of passing what many have called the "gold standard" of state-based net neutrality laws. You’ve already beaten back big ISPs’ attempts to gut and kill this bill once, and you can do it again. If you live in the state, take the time to call your state representative today before the bill is voted on this week. Real voices, not ISP robocalls, need to be heard. Tell your California assemblymember to vote "yes" on S.B. 822.

Take Action

Tell Your California Representative to Vote Yes on S.B. 822

Back to School Essentials for Security
Tue, 28 Aug 2018 18:28:00 +0000

Going back to school? This is a perfect time for a digital security refresh to ensure the privacy of you and your friends is protected!

It’s a good time to change your passwords. The best practice is to have passwords that are unique, long, and random. In order to keep track of these unique, long and random passwords, consider downloading a password manager.
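For instance, a genuinely random password can be produced with a few lines of Python’s `secrets` module (a sketch of the idea; a password manager does this for you automatically):

```python
import secrets
import string

def random_password(length=20):
    """Return a long, random password drawn from letters, digits, and symbols,
    using the OS's cryptographically secure random number generator."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # a different 20-character password each run
```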

As a great additional measure: You can add login notifications to your accounts, so that you can monitor logins from devices you don’t recognize.

If you’re a regular user of a public computer, like at the school library or lab, keep in mind that public computers can remember information from your logins. Adding two-factor authentication to your accounts is a great way to bolster your security. Adding a second factor in addition to your unique, long, and random password makes it much harder for someone else to access your account. There are many types of two-factor authentication, including SMS text messages, apps like Authenticator, or hardware tokens like the YubiKey.
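Under the hood, authenticator apps compute a time-based one-time password (TOTP) as specified in RFC 6238: an HMAC of the current 30-second interval, truncated to a handful of digits. A minimal Python sketch of the algorithm (for illustration only; use a maintained authenticator app in practice):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both the app and the server derive the code from a shared secret and the current time, the code changes every 30 seconds and is useless to anyone who captures an old one.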

Applying for an internship, job, fellowship, or for further education at a school? Worried about an embarrassing photo being found by a recruiter? Now’s a great time to check your social media privacy settings. Helping your friends with this daunting task? Consider looking through the Security Education Companion’s lesson plan on Locking Down Social Media. If your study group, student organizing club, or class uses Facebook Groups, you can help members understand who can see what is posted.

Applying for student loans, scholarships and grants? Maybe you’ve started to get a flood of new emails and some of them seem phishy. You might want a refresher on how to spot phishing from Surveillance Self-Defense.

Looking for an app that has disappearing messages that are actually just between you and your recipient? You might want to try an end-to-end encrypted messaging app like Signal. A service is no fun without friends on it: teach your friends and family how to use end-to-end encrypted messaging with the Security Education Companion’s lesson plan.

Exciting new technology in the classroom can also mean privacy violations, including the chance that your personal devices and online accounts may be demanded for searches. If you’re a student, parent, or teacher, we’ve written tips for you.

If you’re a teacher, librarian, professor, or extracurricular leader looking for fresh material, try out our lesson plans from the Security Education Companion at sec.eff.org! We have an assortment of lesson plans on basic digital security concepts, such as threat modeling, end-to-end encrypted messaging, and password managers.

In the last few years, we’ve discovered just how much trust — whether we like it or not — we have all been obliged to place in modern technology. Third-party software, of unknown composition and security, runs on everything around us: from the phones we carry around, to the smart devices with microphones and cameras in our homes and offices, to voting machines, to critical infrastructure. The insecurity of much of that technology, and increasingly discomforting motives of the tech giants that control it from afar, has rightly shaken many of us.

But the latest challenge to our collective security comes not from Facebook or Google or Russian hackers or Cambridge Analytica: it comes from the Australian government. Its newly proposed "Assistance and Access" bill would require the operators of all of that technology to comply with broad and secret government orders, free from liability and hidden from independent oversight. Software could be rewritten to spy on end users; websites re-engineered to deliver spyware. Our technology would have to serve two masters: its users, and whatever a broad array of Australian government departments decides are the "interests of Australia’s national security." Australia would not be the last to demand these powers: a long line of countries is waiting to demand the same kind of "assistance."

In fact, Australia is not the first nation, even in the West, to think of granting itself such powers. In 2016, the British government took advantage of the country’s political chaos at the time to push through, largely untouched, the first post-Snowden law that expanded, rather than contracted, Western domestic spying powers. At the time, EFF warned of its dangers, particularly orders called "technical capability notices," which could allow the UK to demand modifications to tech companies’ hardware, software, and services to deliver spyware or place backdoors in secure communications systems. These notices would remain secret from the public.

Last year we predicted that the other members of Five Eyes (the intelligence-sharing coalition of Canada, New Zealand, Australia, the United Kingdom, and the United States) might take the UK law as a template for their own proposals, and that Britain "… will certainly be joined by Australia" in proposing IPA-like powers.

That’s now happened. This month, in the midst of a similar period of domestic political chaos, the Australian government introduced their proposal for the "Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018." The bill unashamedly lifts its terminology and intent from the British law.

But if the Australian law has taken elements of the British bill, it has also whittled them into a far sharper tool. The UK bill created a hodge-podge of new powers; Australia’s bill recognizes the key new powers in the IPA and has zeroed in on their key abilities: those of assistance and access.

If this bill passes, Australia will — like the UK — be able to demand complete assistance in conducting surveillance and planting spyware, from a vast slice of the Internet tech sector and beyond. Rather than having to come up with ways to undermine the increasing security of the Net, Australia can now simply demand that the creators or maintainers of that technology re-engineer it as they ask.

It’s worth underlining here just how sweeping such a power is. To give one example: our smartphones are a mass of sensors. They have microphones and cameras, GPS locators, fingerprint and facial scanners. The behavior of those sensors is only loosely tied to what their user interfaces tell us.

Australia seeks to give its law enforcement, border and intelligence services, the power to order the creators and maintainers of those tools to do "acts and things" to protect "the interests of Australia’s national security, the interests of Australia’s foreign relations or the interests of Australia’s national economic well-being".

The "acts and things" are largely unspecified — but they include enabling surveillance, hacking into computers, and remotely pulling data from private computers and public networks.

The range of people who would have to secretly comply with these orders is vast. The orders can be served on any "designated communications provider", which includes telcos and ISPs, but is also defined to include a "person [who] develops, supplies or updates software used, for use, or likely to be used, in connection with: (a) a listed carriage service; or (b) an electronic service that has one or more end users in Australia"; or a "person [who] manufactures or supplies customer equipment for use, or likely to be used, in Australia".

Examples of electronic services may "include websites and chat fora, secure messaging applications, hosting services including cloud and web hosting, peer-to-peer sharing platforms and email distribution lists, and others."

You can see the full list in the draft bill in section 317C, page 16.

As Mark Nottingham, co-chair of the IETF’s HTTP group and member of the Internet Architecture Board, notes, this seems to include "Everyone who’s ever written an app or hosted a Web site — worldwide, since one Australian user is the trigger — is a potential recipient, whether they’re a multimillion dollar company or a hobbyist." It includes Debian ftpmasters and Linux developers; Mozilla and Microsoft; certificate authorities like Let’s Encrypt; and DNS providers.

This is not an error: when we were critiquing a similarly broad definition in the UK’s IPA, we pointed out that the wording would allow the authorities to target a particular developer at a company (while requiring them not to inform their boss), or a non-technical bystander who would not know the impact of what they were being asked to do. Commentators close to GCHQ denied this would be the case and said that this would be clarified in later documents — but subsequent draft codes of practice actually doubled down on the breadth of the orders, saying that it was deliberately broad, and that even café owners who operated a wifi hotspot could be served with an order.

There are some signs that the companies affected by these orders have learned the lessons of the IPA, and pushed back during the Assistance and Access Bill’s preliminary stages. Unlike the UK bill, there are clauses forbidding providers from being required to "implement or build [a] systemic weakness or systemic vulnerability into a form of electronic protection" (S.317ZG), and preventing actions in some cases that would cause material loss to others lawfully using a targeted computer (e.g. S.199(3), pg. 163). Companies have an opportunity to be paid for their troubles, and billing departments can’t be targeted. There is some attempt to prevent government agencies forcing providers to "make false or misleading statements or engage in dishonest conduct" (S.317E).

But these are tiny exceptions in a sea of permissions, and easily circumvented. You may not have to make false statements, but if you "disclose information", the penalty is five years’ imprisonment (S.317ZF). What is a "systemic weakness" is determined entirely by the government. There is no independent judicial oversight. Even counselling an ISP or telco to not comply with an assistance or capability order is a civil offence.

If the passage of the UK surveillance law is any guide, Australian officials will insist that while the language is broad, no harm is intended, and the more reasonable, narrower interpretations were meant. But none of those protestations will result in amendments to the law, because Australia, like Britain, wants the luxury of broad and secret powers. There will be no true oversight, and there can be none; the kind of malpractice we have seen in the surveillance programs of the U.S. and U.K. intelligence services will spread to Australia’s law enforcement. Trust and security in the Australian corner of the Internet will diminish, and other countries will follow the lead of the anglophone nations in demanding full and secret control over the technology, the personal data, and the individual innovators of the Internet.

"The government," says Australia’s Department of Home Affairs web site, "welcomes your feedback" on the bill. Comments are due by September 10th. If you are affected by this law — and you almost certainly are — you should read the bill, and write to the Australian government to rethink this disastrous proposal. We need more trust and security in the future of the Internet, not less. This is a bill that will breed digital distrust, and undermine the security of us all.

Sen. Ron Wyden has sent a letter to the U.S. Department of Justice concerning disruptions to 911 emergency services caused by law enforcement’s use of cell-site simulators (CSS, also known as IMSI catchers or Stingrays). In the letter, Sen. Wyden states that:

Senior officials from the Harris Corporation—the manufacturer of the cell-site simulators used most frequently by U.S. law enforcement agencies—have confirmed to my office that Harris’ cell-site simulators completely disrupt the communications of targeted phones for as long as the surveillance is ongoing. According to Harris, targeted phones cannot make or receive calls, send or receive text messages, or send or receive any data over the Internet. Moreover, while the company claims its cell-site simulators include a feature that detects and permits the delivery of emergency calls to 9-1-1, its officials admitted to my office that this feature has not been independently tested as part of the Federal Communication Commission’s certification process, nor were they able to confirm this feature is capable of detecting and passing-through 9-1-1 emergency communications made by people who are deaf, hard of hearing, or speech disabled using Real-Time Text technology.

The full text of the letter can be read here [PDF].

Researchers of CSS technology have long suspected that using such devices, even professionally designed and marketed CSSs, would have a detrimental effect on emergency services, and now—for the first time—we have confirmation.

It is striking, but unfortunately not surprising, that law enforcement has been allowed to use these technologies and has continued to use them despite the significant and undisclosed risk to public safety posed by disabling 911 service, not to mention the myriad privacy concerns related to CSS use. What’s more, a cell-site simulator wouldn’t just disrupt service for the specific person or persons being tracked, but would likely disrupt service for every mobile device in the area, since it tricks every nearby phone into connecting to the fake base station in search of the target phone. This could be especially dangerous during a natural disaster, when IMSI catchers are being used to locate missing persons in damaged buildings or other infrastructure; cutting off 911 service at such a time could gravely endanger others trapped in dangerous situations.

Harris Corporation claims that they have the ability to detect and deliver calls to 911, but they admit that this feature hasn’t been tested. Put bluntly, there is no way for the public or policy makers to know if this technology works as intended. Thanks to the onerous non-disclosure agreements that Harris Corp and other CSS vendors have regularly required their customers to enter into, there is very little public information about how CSSs work and what their capabilities are. Even if a security researcher did audit a CSS, the results would be unlikely to ever see the light of day.

Furthermore, even if Harris’ technology works the way they claim it does, they are far from the only manufacturer of CSS devices. There are several other companies that manufacture such technology and we know even less about the workings of their technologies or whether they have any protections against blocking 911 calls. Cell-site simulators are now easy to acquire or build, with homemade devices costing less than $1000 in parts. Criminals, spies, and anyone else with malicious intent could easily build a CSS specifically to disrupt phone service, or use it without caring whether it disrupts 911 service.

The only way to stop the public safety and public privacy threats that cell-site simulators pose is to increase the security of our mobile communications infrastructure at every layer. All companies involved in mobile communications, from the network layer (AT&T, T-Mobile, Verizon, etc.) to the hardware layer (Qualcomm, Samsung, Intel) to the software layer (Apple, Google), need to work together to ensure that our cellular infrastructure is safe, secure, and private from attacks by spies, criminals, and rogue law enforcement. For their part, policymakers such as Sen. Wyden can help by continuing to provide transparency on how IMSI catchers work and are used, and funds to upgrade our aging cellular infrastructure.

For more information about cell-site simulators please consult our Street-Level Surveillance guides on law enforcement surveillance technology.

Right now, the U.S. Senate is debating an issue that’s critical to our democratic future: secure elections. Hacking attacks were used to try to undermine the 2016 U.S. election, and in recent years, elections in Latin America and Ukraine were also subject to cyber attacks.

It only makes sense to harden the security of U.S. voting machines, which are perhaps the most direct route to impacting an election’s results. But the current bill that’s advancing in the Senate, the Secure Elections Act, is no solution at all. If it isn’t strengthened dramatically, senators should vote against this deeply flawed bill.

The best solution to stop a possible hack of voting machines is clear: all machines must use a paper trail that’s regularly audited. Many states with voting machines already use paper, but more than a dozen are using at least some machines that provide no paper trail. In five states—New Jersey, Delaware, South Carolina, Georgia, and Louisiana—not a single jurisdiction has a paper trail.

As important as they are, paper trails only work if they’re checked. As we’ve said since the aftermath of the 2016 election, we not only need elections to be auditable, we need them to be audited.

Currently, U.S. elections are usually audited only when they are extremely close or in other unusual situations. There is a cheap and effective way to audit all of our elections, using a system that statisticians call "risk-limiting audits." By hand-verifying a small number of randomly chosen ballots, election officials can check, with a high degree of certainty, that the election results were recorded properly. Because they don’t involve massive statewide recounts, such audits can and should be performed after each election. Election audits should be like an annual checkup, not like a visit to the emergency room.
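To make the statistics concrete, here is a minimal sketch of the ballot-polling flavor of a risk-limiting audit (the approach statisticians call BRAVO), simplified to a two-candidate race. The function name and parameters are our own illustration, not part of any state's actual audit procedure: the audit sequentially examines randomly drawn ballots and stops as soon as the evidence for the reported winner exceeds the chosen risk limit.

```python
def bravo_audit(reported_winner_share, risk_limit, sampled_ballots):
    """Illustrative ballot-polling risk-limiting audit (two candidates).

    reported_winner_share: the winner's reported vote share (> 0.5).
    risk_limit: the maximum chance of confirming a wrong outcome (e.g. 0.05).
    sampled_ballots: 'winner'/'loser' strings, in random draw order.

    Returns True once the sample confirms the reported outcome at the
    stated risk limit; False means keep sampling (or escalate to a
    full hand count).
    """
    t = 1.0                          # sequential likelihood-ratio statistic
    threshold = 1.0 / risk_limit     # stop when evidence exceeds 1/alpha
    for ballot in sampled_ballots:
        if ballot == "winner":
            t *= reported_winner_share / 0.5
        else:
            t *= (1 - reported_winner_share) / 0.5
        if t >= threshold:
            return True
    return False
```

With a reported 60% winner share and a 5% risk limit, a run of 17 straight ballots for the winner is already enough to confirm the outcome, which is why such audits are cheap compared to a recount: the sample size scales with the margin, not with the total number of ballots cast.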

The current bill moving ahead in the Senate, S. 2593, falls far short. The bill once included both of these measures, but following amendments, now has neither. It isn’t a mystery how to get this done. A competing bill introduced by Sen. Ron Wyden would mandate both risk-limiting audits and a verifiable paper trail, and has gained three more cosponsors since S. 2593 was watered down.

Secure and verifiable voting isn’t optional. Tell the Senate to either pass a strong bill or oppose the Secure Elections Act.

Take Action

Tell your Senators to pass a strong election security bill

There is room to debate what makes an invention patentable, but one thing should be uncontroversial: patentable inventions should actually be new. That’s what EFF and the R Street Institute told the Supreme Court this week in an amicus brief urging it to grant certiorari and reverse the Federal Circuit’s decision in Ariosa v. Illumina [PDF]. We explained that the Federal Circuit’s decision is wrong on the law and bad for innovation, access to knowledge, and the patent system.

In Ariosa, the Federal Circuit departed from more than a century of case law to uphold a patent that claimed an "invention" that someone else had already described in a published patent application. According to the court, the description didn’t qualify as material that could invalidate the patent being challenged because it did not appear in the "claims"—the section specifying the legal boundaries of the applicant’s rights—but rather in the section of the patent application describing the nature and operation of the applicant’s work.

This hair-splitting exception flies in the face of the Patent Act, which treats published patent applications just like granted patents in that they can invalidate later-filed patents based on all they describe, not just what they claim, based on the earliest filing date associated with the application. The Supreme Court has twice held that that rule applies to granted patents, but has not yet had a chance to confirm that the same rule applies to published patent applications.

EFF and R Street’s brief emphasizes the need for the Supreme Court to confirm what should be uncontroversial: to be patentable, inventions must be new. That follows from the Constitution’s mandate that the patent system promote innovation and technological progress. It is also consistent with the words of the Patent Act and the statements that Congress made when writing those words into law.

It also makes sense: Patents claiming advances made by others deplete from, rather than contribute to, the stock of public knowledge. The teachings of the earlier patent would be free for the public to use if not for the second-comer’s efforts at the Patent Office. Given the volume of patent applications and the resulting backlog at the Patent Office, the longstanding rule against such patents makes more sense now than ever.

By swinging open the Patent Office’s door to patents on old ideas, Ariosa’s approach cuts against the pro-innovation goal the Constitution sets forth for the patent system. We hope the Supreme Court will grant certiorari to confirm that the Patent Office must say no to second-comers seeking to patent the achievements of others.


People are mad about the revelation that Verizon throttled the wireless service of the Santa Clara Fire Department in the middle of fighting a massive fire. In response, Verizon is making the very narrow claim that this wasn’t a clear violation of the 2015 Open Internet Order’s ban on throttling. That intentionally misses the point. The 2015 order, by reclassifying ISPs under Title II of the Communications Act, would have likely made what happened with the fire department illegal.

Under the 2015 Open Internet Order, the Federal Communications Commission did two things. First, it established that all broadband Internet service providers were common carriers subject to the federal laws that protect consumers, promote competition, and guard user privacy. Second, it established a set of "net neutrality" rules based on its Title II authority through the bright line rules of "no blocking, no throttling, no paid prioritization" as well as a general conduct rule.

So when the FCC repealed that order, it not only ended the ban on blocking, throttling, and paid prioritization; it also declared that the federal laws directly applicable to Verizon’s conduct no longer apply.

Verizon Upselling Fire Fighters While the State Was Burning Would Likely Have Been an Unjust and Unreasonable Practice Under the 2015 FCC Order

Under the now-repealed Open Internet Order, all common carriers were subject to legal obligations requiring all of their practices to be just and reasonable; anything that was not just and reasonable would be illegal. The industry complains that such a broad standard makes it hard to know whether they are in compliance, and they are right that the term "reasonable" is not an ironclad rule (this is why the FCC issues regulations: they clarify carriers’ legal responsibilities). But I think we can all agree that trying to upsell firefighters, after giving them the runaround for four weeks, right in the middle of fighting a fire is an unreasonable and unjust practice. In fact, trying to upsell anyone in an emergency—someone trying to check in on their family after a natural disaster, say—instinctively feels like something companies should not be able to do.

And under the 2015 Open Internet Order, the FCC could have investigated the issue, penalized Verizon for its conduct, and subsequently adopted a regulation stating that ISPs cannot throttle public safety agencies during an emergency. Wireless providers claimed during the net neutrality debate that they needed flexibility to address the needs of first responders. In response, the order stated that the FCC found it acceptable for ISPs to prioritize first responders during an emergency, and it was only after the repeal that Verizon did the opposite to firefighters.

The FCC is Now Prohibited From Looking into the Practice of Throttling 4G Wireless Services Down to Dial-up Speeds Despite Clear Public Safety Implications

What Verizon leaves out of its defense of its throttling of public safety is that throttling a service from 50 Mbps down to effectively kilobit dial-up speeds in today’s world basically shuts down wireless service. This is what it did to the Santa Clara fire department.

Furthermore, it has nothing to do with managing congestion on the network, because it is such a drastic drop in service. Congestion with wireless service, and the related throttling to address congestion, has been about dividing up the resource of wireless bandwidth amongst customers to ensure the network operates as efficiently as possible. That is not what is happening here. What we have here is a business model that separates wireless data packages with a strong negative incentive: public safety faced crippled service unless it switched from the lower-tier plan to one that cost twice as much.

There Still Might be a Violation of the Net Neutrality Rules, But We Have No Agency to Investigate the Question

That was the point of the Restoring Internet Freedom Order. It was to strip away federal oversight over the ISP industry. Within the documents submitted by Santa Clara, there appears to be some confusion as to what exactly Verizon was selling to the fire department. Twice the public safety officials asserted they thought they were buying an unlimited plan, and twice they discovered during the emergency that they had not. It could be that Verizon was upfront about the plans being purchased and Santa Clara was mistaken, in which case Verizon would be in the clear under the net neutrality transparency rules.

While proponents of repealing net neutrality will argue the Federal Trade Commission (FTC) can manage this specific issue of transparency (they are right to a limited extent), they ignore the most critical difference between FTC power and the now-repealed FCC power. The FTC can act only after the fact, and nothing more. Meaning, in a literal sense, after the fire. And if this came up again in another state, the FTC would have to wait until after the fire burned there, too. Notably, the FTC can’t ban throttling and upselling during an emergency.

The FCC, however, could have proactively addressed this problem by establishing a federal rule, applicable in all fifty states, that ISPs are not allowed to throttle public safety services during emergencies. The FCC under its Title II authority could even have gone so far as to categorically declare it illegal to upsell wireless plans to public safety agencies, or anyone else, during official emergencies as a matter of public safety. The agency would have good arguments for setting these rules, and, thanks to Verizon, the evidence to demonstrate that threats to health and life are at stake in their absence. But the Restoring Internet Freedom Order abandoned those powers, so right now in America, firefighters don’t have a government agency they can turn to for help.

Here is how we can change that. The House of Representatives can pass a Congressional Review Act resolution to reverse the Restoring Internet Freedom Order. States can empower themselves by passing their own laws that exert oversight over ISP broadband practices. For example, California’s S.B. 822, headed for an Assembly vote in the next week, would provide that oversight power. And lastly, the FCC can abandon the biggest mistake in Internet policy history and reinstate its authority over broadband providers. Until then, public safety has no recourse when the next emergency comes.

Take Action

Tell Your Representative to Stand Up for Net Neutrality

text Don’t Shoot Messenger
Thu, 23 Aug 2018 22:43:49 +0000

Late last week, Reuters reported that Facebook is being asked to "break the encryption" in its Messenger application to assist the Justice Department in wiretapping a suspect's voice calls, and that Facebook is refusing to cooperate. The report alarmed us in light of the government’s ongoing calls for backdoors to encrypted communications, but on reflection we think it’s unlikely that Facebook is being ordered to break encryption in Messenger and that the reality is more complicated.  

The wiretap order and related court proceedings arise from an investigation of the MS-13 gang in Fresno, California, and are entirely under seal. So while we don’t know exactly what method for assisting with the wiretap the government is proposing Facebook use, if any, we can offer our informed speculation based on how Messenger works. This post explains our best guess(es) as to what’s going on, and why we don’t think this case should result in a landmark legal precedent on encryption.  

We do fear that this is one of a series of moves by the government that would allow it to chip away at users’ security, done in a way such that the government can claim it isn’t "breaking" encryption. And while we suspect that most people don’t use Messenger for secure communications—we certainly don’t recommend it—we’re concerned that this move could be used as precedent to attack secure tools that people actually rely on.

The nitty gritty: 

Messenger is Facebook’s flagship chat product, offering users the ability to exchange text messages and stickers, send files, and make voice and video calls. Unlike Signal and WhatsApp (also a Facebook product), however, Messenger is not marketed as a "secure" or encrypted means of communication. Messenger does have the option of enabling "secret" text conversations, which are end-to-end encrypted and make use of the Signal protocol (also used by WhatsApp).  

End-to-end encryption of messages

However, end-to-end encryption is not an option for Messenger voice calls.

At issue here is a demand by the government that Facebook help it intercept Messenger voice calls. While Messenger’s protocol isn’t publicly documented, we believe that we have a basic understanding how it works—and how it differs from actual secure messaging platforms. But first, some necessary background on how Messenger handles non-voice communications. 

Encryption of messages in Facebook Messenger

When someone uses Messenger to send a text chat to a friend, the user’s client (the app on their smartphone, for example) sends the message to Facebook’s servers, encrypted so that only Facebook can read it. Facebook then saves and logs the message, and forwards it on to the intended recipient, encrypted so that only the intended recipient can read it. When the government wants to listen in on those conversations, because Facebook sees every message before it’s delivered, the company can turn those chats over in real time (in response to a wiretap order) or turn over some amount of the user’s saved chat history (in response to a search warrant).
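The relay model described above can be sketched in a few lines of code. All names here are hypothetical, and a deliberately insecure toy XOR cipher stands in for the real link encryption; the point is only the architecture: each leg is encrypted, but the relay holds the keys for both legs, so it sees (and can log) every plaintext message that passes through.

```python
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a real cipher. Illustration only; not secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class RelayServer:
    """Holds a per-client key, so it can decrypt both legs of a chat."""
    def __init__(self):
        self.keys = {}
        self.plaintext_log = []    # what a wiretap order could tap into

    def register(self, client: str) -> bytes:
        self.keys[client] = secrets.token_bytes(32)
        return self.keys[client]

    def forward(self, sender: str, recipient: str, ciphertext: bytes) -> bytes:
        msg = xor_stream(self.keys[sender], ciphertext)   # server decrypts leg 1
        self.plaintext_log.append(msg)                    # ...and can log it
        return xor_stream(self.keys[recipient], msg)      # re-encrypts for leg 2

server = RelayServer()
alice_key = server.register("alice")
bob_key = server.register("bob")

wire = server.forward("alice", "bob", xor_stream(alice_key, b"meet at noon"))
received = xor_stream(bob_key, wire)

assert received == b"meet at noon"                 # Bob reads the message...
assert server.plaintext_log == [b"meet at noon"]   # ...and so can the server
```

Nothing here is broken from the user's perspective: both legs are encrypted against outside eavesdroppers. The relay simply is not an outside party.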

However, when someone uses Messenger to initiate a voice call, the process is different. Messenger uses a standard protocol called WebRTC for voice (and video) connections. WebRTC relies on Messenger to set up a connection between the two parties to the call that doesn’t go through Facebook’s servers. Rather—for reasons having to do with cost, efficiency, latency, and to ensure that the audio skips as little as possible—the data that makes up a Messenger voice call takes a shorter route between the two parties. That voice data is encrypted with something called the "session key" to ensure that a nosy network administrator sitting somewhere between the two parties to the call can’t listen in.  

This two-step process is typical in Voice over IP (VoIP) calling applications: first the two parties each communicate with a central server which assists them in setting up a direct connection between them, and once that connection is established, the actual voice data (usually) takes the shortest route.  

Step 1: A central server facilitates a key exchange between two devices. The server cannot see or derive these keys.

Step 2: The session keys are then used for encrypting the call between the devices.

But in Messenger, some information related to the voice call does go through Facebook’s servers, especially when the call is first initiated. That data includes the session key that encrypts the voice data.

Step 1: The two devices communicate with a Facebook central server, sending their keys through the server.

Step 2: The two devices use the session keys to encrypt the call.

This differs in a major way from other secure messaging applications like Signal, WhatsApp, and iMessage. All of those apps use protocols that encrypt that initial session key—the key to the voice data—in a way that renders it unreadable by anyone other than the intended participants in the conversation. 

So even though Facebook doesn’t actually have the encrypted voice data, if it did somehow have that data, we’re pretty sure that it would have the technical means to decrypt it. In other words, despite the fact that the voice data is encrypted all the way between the two callers, it’s not really what we refer to as "end-to-end encrypted" because someone other than the intended recipient of the call—in this case Facebook—could decrypt it with the session key. 
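This distinction can be sketched with a toy Diffie-Hellman exchange. The parameters below are illustrative only (real systems use vetted groups or elliptic curves such as X25519), and neither model is Facebook's actual protocol: in the end-to-end model the server relays only public values, from which it cannot derive the session key, while in the relayed-key model the session key itself crosses the server.

```python
import secrets

# Toy Diffie-Hellman parameters for illustration; not a standardized group.
P = 2**255 - 19   # a large prime
G = 2

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

# End-to-end model: only *public* values transit the server.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
server_sees_e2e = (a_pub, b_pub)       # not enough to derive the session key
a_key = pow(b_pub, a_priv, P)          # computed on Alice's device
b_key = pow(a_pub, b_priv, P)          # computed on Bob's device
assert a_key == b_key                  # shared secret, unknown to the server

# Relayed-key model (how we believe Messenger call setup works):
session_key = secrets.token_bytes(32)  # chosen by one endpoint...
server_sees_relayed = session_key      # ...and visible to the server in transit
```

In the first model, an order to "preserve the session key" would be useless to the government, because the server never holds it; in the second, the key is already passing through the server's hands.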


So what’s at stake in this case: 

Assuming our technical understanding is roughly correct, Facebook can’t currently turn over unencrypted voice communications to the government without additional engineering effort. The question is what sort of engineering would be required, and what effect it would have on user security, both within Facebook and more generally. We’ve been able to identify at least four possible ways the government might ask Facebook to assist with its wiretap: 

  1. Force Facebook to retain the session key to the suspect’s conversation and turn it over to the government. The government would then use that key to decrypt voice data separately captured by the subject’s ISP (likely a mobile provider in this case).
  2. Force Facebook to construct a man-in-the-middle attack by directing the suspect’s phone to route Messenger voice data through Facebook’s servers, then capture and use the session key to decrypt the data.
  3. Force Facebook to push out a custom update to the suspect’s version of Messenger that would record conversations on the device and send them directly to the government.
  4. Demand that Facebook just figure out how to record the suspect’s conversations and turn them over—decrypted—to the government.


In broad strokes, these scenarios look similar to the showdown between Apple and the FBI in the San Bernardino case: the government compelling a tech company to alter its product to effectuate a search warrant (here a wiretap order). One obvious difference on the legal front is that the Apple case turned on the All Writs Act, whereas here the government is almost certainly relying on the technical assistance provision of the Wiretap Act, 18 U.S.C. § 2518(4). As we saw in the Apple case, the All Writs Act is a general-purpose gap-filling statute that allows the government to get orders necessary to further existing court orders, including search warrants. The Wiretap Act’s technical assistance provision is narrower and more specific, requiring communication service providers to furnish "technical assistance necessary to accomplish the interception unobtrusively and with a minimum of interference with the services."   

What are the limits of this duty to provide necessary technical assistance, and would it extend to the four possible demands we listed above? While we’re not aware of a judicial decision that’s directly on point, the Ninth Circuit Court of Appeals wrote in a well-known case interpreting this "minimum of interference" language that private companies' obligations to assist the government have "not extended to circumstances in which there is a complete disruption of a service they offer to a customer as part of their business." And, invoking case law on the All Writs Act, the court held that an "intercept order may not impose an undue burden on a company enlisted to aid the government." 

The government could of course be expected to argue that the options above are not unreasonably burdensome and that Messenger service would not be significantly disrupted. These arguments might have some force if Facebook’s participation is limited to preserving the session key for the suspect’s conversations. After all, this information already likely passes through Facebook’s servers in a way that Facebook could choose to capture it. One unknown is to what extent Facebook sees its role in facilitating Messenger calls as ensuring the security of the calls. If, as in the Apple case, Facebook tried to make it difficult to bypass security features in the system, cooperation would potentially be quite disruptive. But the government might say that in this context Facebook is much like a webmail provider such as Gmail that uses TLS to encrypt mail between the user and Google. Google has the keys to decrypt this data, so it can comply with a wiretap. Facebook’s role isn’t exactly the same, but it certainly can obtain the session keys.

In the scenario where Facebook is being asked to push a custom update, the company might raise more forceful arguments like those made by security experts in the Apple case about the risks of undermining public trust in automatic security updates. Computer security is hard, and using a trusted channel to turn a suspect’s phone into a surveillance device could have disastrous consequences. And if the government is simply telling Facebook to "figure it out," (option 4), Facebook might have reason to question the necessity of its assistance as well as its feasibility, since the government would not have demonstrated why other techniques would be unsuccessful in carrying out the surveillance. 

All of this points to a strong need for the public to know more about what’s going on in the Fresno federal court. The Reuters article indicates that Facebook is opposing the order in some respect, and we at EFF would love the opportunity to weigh in as amicus, as we did in San Bernardino. We hope the company will do its utmost to get the court to unseal at least the legal arguments in the case. It should also ask the court to allow amicus participation on any issues involving novel or significant interpretations of the Wiretap Act or other technical assistance law. 


Most important, we cannot allow the government to weaponize any ruling in this case in its larger push to undermine strong encryption and digital security. The government’s narrative has long been that there is a "middle ground," and that companies should engage in "responsible encryption." It loves to point to services that use TLS as examples of encrypted data that can yield to lawful court orders for plaintext. Similarly, in the San Bernardino case, the FBI did not technically ask Apple to "break the encryption" in iOS, but instead to reengineer other security features that protected that encryption. These are dangerous requests that still put users at risk, even though they don’t involve tampering with the math supporting strong encryption.

We will follow this case closely as it develops, and we’ll push back on all efforts to undermine user security.

The Federal Trade Commission (FTC) is wondering whether it might be time to change how the U.S. approaches competition and consumer protection. EFF has been thinking the same thing, and we have concluded that yes, it is. On August 20, we filed six comments with the FTC on a variety of related topics, sharing the history, current problems, and recommendations EFF has developed over our 28 years working in this space.

Back in June 2018, the FTC announced it would hold hearings on "competition and consumer protection in the 21st century" and invited comment on 11 topics. As part of our continuing work on these areas as they intersect with the future of technology, EFF submitted comments on six of the topics listed by the FTC: competition and consumer protection issues in communication, information, and media technology networks; the identification and measurement of market power and entry barriers, and the evaluation of collusive, exclusionary, or predatory conduct, or conduct that violates the consumer protection statutes enforced by the FTC, in markets featuring "platform" businesses; the intersection between privacy, big data, and competition; evaluating the competitive effects of corporate acquisitions and mergers; the role of intellectual property and competition policy in promoting innovation; and the consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics.

Our goal in submitting these comments was to provide information and recommendations to the FTC about these complicated areas of Internet and technology policy. The danger is always that reactionary policies created in response to a high-profile incident may result in rules that restrict the rights of users and are so onerous that only established, big companies can afford to comply.

On the other hand, the Internet status quo has moved increasingly away from the ideal of decentralization and towards a few large companies acting as gatekeepers, so some thoughtful regulation or scrutiny is warranted.

Take, for example, our comments on the topic of competition and consumer protection issues in communication, information, and media technology networks. When everyone is talking on Twitter or Facebook, those platforms become important venues for speech. And so the rules used to prevent someone from using those platforms need to be carefully considered and made transparent.

Another obvious example of how consolidation hurts users is found in consumers’ lack of choices for broadband Internet service. A majority of Americans find themselves with little or no choice in high-speed ISPs, giving those providers little to no incentive to improve or expand their service. And these companies have a history of net neutrality violations, an area that is up to the FTC to police since the FCC’s "Restoring Internet Freedom Order" went into effect.

We point out a similar tension in our comment on the intersection between privacy, big data, and competition. The notion that something needs to be done about what tech companies do with the data they have on their users has gained momentum since the revelations of what Cambridge Analytica did with Facebook data. But many proposed rules to address this issue risk creating burdens that only companies of Facebook’s size and reach can meet, further cementing their dominance.

A healthier ecosystem would come from promoting meaningful opt-in consent; "right to know" rules that let users see their data and learn how it is being used; the right to take your data with you somewhere else ("data portability") and use it there ("data interoperability"); and new ways to hold companies accountable when they fail to secure customer privacy.

The problem of access to data is also included in our comment on market power and entry barriers and the evaluation of collusive, exclusionary, or predatory conduct by platforms. Companies further control their data by using computer crime laws to ensure that only they have access to it.

We also encourage policymakers to account for privacy and data concerns when looking at acquisitions and mergers. Google and Facebook both purchased companies that track what their users are doing online, augmenting the large amounts of data they already have access to. In the case of Facebook, that means it can follow you when you click a link that you might think takes you away from Facebook, and it can tie that information to your profile. In the case of Google, the company initially said the data gathered by DoubleClick, the service it acquired, would be kept separate from its own. But the company eventually ended that siloing.

Intellectual property and competition is another topic that requires competition policymakers to look beyond the usual concerns. Intellectual property is by its very nature exclusionary: copyright and patent holders own exclusive rights in things. When a patent covers a standard (that is, a technology or process required to build a compatible product), the holder can charge huge license fees, knowing everyone has to pay. Small businesses and new businesses are especially vulnerable. Among other recommendations, we told the FTC that we need to make sure these standards-essential patents are licensed on fair, reasonable, and non-discriminatory terms (also called "FRAND" or "RAND" obligations). We see intellectual property harm competition again when products are licensed to us rather than sold, meaning we can’t reverse engineer them and build our own versions, test their security, or make our own repairs. We’re further restricted by the Digital Millennium Copyright Act’s section 1201, which bars circumvention of access controls and technical protection measures. That law creates legal risk for people who tinker with their own devices or make repairs, promoting obsolescence and raising the cost of "authorized" repair services.

Another new concern comes from the use of algorithmic decision tools, artificial intelligence, and predictive analytics. These are tools rapidly growing in importance and ubiquity. They can also, however, insinuate imaginary correlations, suggest misleading conclusions, and technologically launder longstanding discrimination and bias. This can affect consumer welfare, making it an issue of concern to the FTC.

As with data privacy, recent events have illuminated concerns with these methods as used by companies like Facebook. Newsfeed algorithms have been seen to perpetuate misinformation and be vulnerable to manipulation. And algorithms used for content moderation have falsely flagged and taken down posts by public figures, including a European head of state whose criticism of U.S. foreign policy, illustrated with an iconic war photograph published long ago, was silenced.

Information about these tools has been jealously guarded by their owners. AI and its many potential applications could spur innovation and foster new enterprises. On the other hand, the same sampling bias and secrecy that prevent AI tools from being replicated and tested scientifically can skew their operation in practice and entrench firms wielding market power. Requiring transparency into the code and data sets used in significant AI systems would serve the public interest, helping prevent discrimination that would inevitably grow worse and more deeply entrenched if the sector remains entirely unregulated.

These comments are distillations of work EFF has spent decades on and represent areas we know will become major issues in the future. We shared this expertise with the FTC in the hopes of making sure the policymakers there understand the way civil rights, consumer protections, and competition play out on the Internet and in emerging technology.