Surveillance powers in UK’s Online Safety Bill are risk to E2EE, warns legal expert

Independent legal analysis of a controversial UK government proposal to regulate online speech under a safety-focused framework — aka the Online Safety Bill — says the draft bill contains some of the broadest mass surveillance powers over citizens ever proposed in a Western democracy, powers that it also warns pose a risk to the integrity of end-to-end encryption (E2EE).

The opinion, written by the barrister Matthew Ryder KC of Matrix Chambers, was commissioned by Index on Censorship, a group that campaigns for freedom of expression.

Ryder was asked to consider whether provisions in the bill are compatible with human rights law.

His conclusion is that, as drafted, the bill lacks essential safeguards on its surveillance powers, meaning that, without further amendment, it will likely breach the European Convention on Human Rights (ECHR).

The bill’s progress through parliament was paused over the summer — and again in October — following political turbulence in the governing Conservative Party. After the arrival of a new digital minister, and two changes of prime minister, the government has indicated it intends to make amendments to the draft — however these are focused on provisions related to so-called ‘legal but harmful’ speech, rather than the gaping human rights hole identified by Ryder.

We reached out to the Home Office for a response to the issues raised by his legal opinion.

A government spokesperson replied with an emailed statement, attributed to minister for security Tom Tugendhat, which dismisses any concerns:

“The Online Safety Bill has privacy at the heart of its proposals and ensures we’re able to protect ourselves from online crimes including child sexual exploitation. It’s not a ban on any type of technology or service design.

“Where a company fails to tackle child sexual abuse on its platforms, it is right that Ofcom as the independent regulator has the power, as a last resort, to require these companies to take action.

“Strong encryption protects our privacy and our online economy but end-to-end encryption can be implemented in a way which is consistent with public safety. The Bill ensures that tech companies do not provide a safe space for the most dangerous predators online.”

Ryder’s analysis finds key legal checks are lacking in the bill, which grants the state sweeping powers to compel digital providers to surveil users’ online communications “on a generalised and widespread basis” — yet fails to include any form of independent prior authorisation (or independent ex post facto oversight) for the issuing of content scanning notices.

In Ryder’s assessment this lack of rigorous oversight would likely breach Articles 8 (right to privacy) and 10 (right to freedom of expression) of the ECHR.

Existing very broad surveillance powers granted to UK security services, under the (also highly controversial) Investigatory Powers Act 2016 (IPA), do contain legal checks and balances for authorizing the most intrusive powers — involving the judiciary in signing off intercept warrants.

But the Online Safety Bill leaves it up to the designated Internet regulator to make decisions to issue the most intrusive content scanning orders — a public body that Ryder argues is not adequately independent for this function.

“The statutory scheme does not make provision for independent authorisation for 104 Notices even though it may require private bodies – at the behest of a public authority – to carry out mass state surveillance of millions of users’ communications. Nor is there any provision for ex post facto independent oversight,” he writes. “Ofcom, the state regulator, cannot in our opinion, be regarded as an independent body in this context.”

He also points out that given existing broad surveillance powers under the IPA, the “mass surveillance” of online comms proposed in the Online Safety Bill may not meet another key human rights test — of being “necessary in a democratic society”.

Bulk surveillance powers under the IPA must be linked to a national security concern — and cannot be used solely for the prevention and detection of serious crime between UK users. The Online Safety Bill, which his legal analysis argues grants similar “mass surveillance” powers to Ofcom, covers a much broader range of content than pure national security issues. So it looks far less bounded.

Commenting on Ryder’s legal opinion in a statement, Index on Censorship’s chief executive, Ruth Smeeth, denounced the bill’s overreach — writing:

“This legal opinion makes clear the myriad issues surrounding the Online Safety Bill. The vague drafting of this legislation will necessitate Ofcom, a media regulator, unilaterally deciding how to deploy massive powers of surveillance across almost every aspect of digital day-to-day life in Britain. Surveillance by regulator is perhaps the most egregious instance of overreach in a Bill that is simply unfit for purpose.”

Impact on E2EE

While much of the controversy attached to the Online Safety Bill — which was published in draft last year but has continued being amended and expanded in scope by government — has focused on risks to freedom of expression, there are a range of other notable concerns, including how the content scanning provisions in the legislation could impact E2EE, with critics like the Open Rights Group warning the law will essentially strong-arm service providers into breaking strong encryption.

Concerns have stepped up since a government amendment this July, which proposed new powers for Ofcom to force messaging platforms to implement content-scanning technologies even if comms are strongly encrypted on their service. The amendment stipulated that a regulated service could be required to use “best endeavours” to develop or source technology for detecting and removing child sexual exploitation and abuse (CSEA) content in private comms — and the inclusion of private comms puts it on a collision course with E2EE.

E2EE remains the ‘gold standard’ for encryption and online security — and is found on mainstream messaging platforms like WhatsApp, iMessage and Signal, to name a few — providing essential security and privacy for users’ online comms.

So any laws that threaten use of this standard — or open up new vulnerabilities for E2EE — could have a massive impact on web users’ security globally.

In the legal opinion, Ryder focuses most of his attention on the Online Safety Bill’s content scanning provisions — which are creating this existential risk for E2EE.

The bulk of his legal analysis centers on Clause 104 of the bill — which grants the designated Internet watchdog (the existing media and comms regulator, Ofcom) a new power to issue notices to in-scope service providers requiring them to identify and take down terrorism content that’s communicated “publicly” by means of their services, or CSEA content being communicated “publicly or privately”. And, again, the inclusion of “private” comms is where things look really sticky for E2EE.

Ryder takes the view that the bill, rather than forcing messaging platforms to abandon E2EE altogether, will push them towards deploying a controversial technology called client side scanning (CSS) — as a way to comply with 104 Notices issued by Ofcom — predicting that’s “likely to be the primary technology whose use is mandated”.

“Clause 104 does not refer to CSS (or any technology) by name. It mentions only ‘accredited technology’. However, the practical implementation of 104 Notices requiring the identification, removal and/or blocking of content leads almost inevitably to the concern that this power will be used by Ofcom to mandate CSPs [communications service providers] using some form of CSS,” he writes, adding: “The Bill notes that the accredited technology referred to c.104 is a form of ‘content moderation technology’, meaning ‘technology, such as algorithms, keyword matching, image matching or image classification, which […] analyses relevant content’ (c.187(2)(11)). This description corresponds with CSS.”

He also points to an article published by two senior GCHQ officials this summer — which he says “endorsed CSS as a potential solution to the problem of CSEA content being transmitted on encrypted platforms” — further noting that their comments were made “against the backdrop of the ongoing debate about the OLSB [Online Safety Bill].”

“Any attempt to require CSPs to undermine their implementation of end-to-end encryption generally, would have far-reaching implications for the safety and security of all global online communications. We are unable to envisage circumstances where such a destructive step in the security of global online communications for billions of users could be justified,” he goes on to warn.

Client side scanning risk

CSS refers to controversial scanning technology in which the content of encrypted communications is scanned with the goal of identifying objectionable content. The process entails a message being converted to a cryptographic digital fingerprint prior to it being encrypted and sent, with this fingerprint then compared with a database of fingerprints to check for any matches with known objectionable content (such as CSEA). The comparison of these cryptographic fingerprints can take place either on the user’s own device — or on a remote service.
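To make that matching step concrete, here is a minimal sketch in Python of the kind of comparison described above. The SHA-256 fingerprint, the hard-coded fingerprint set and the function names are all illustrative stand-ins rather than any real CSS implementation; deployed proposals typically rely on perceptual hashing so that near-duplicates of known imagery also match.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited content
# (here just the SHA-256 of the bytes b"test", purely for demonstration).
KNOWN_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(content: bytes) -> str:
    """Derive a fingerprint of the message content before it is encrypted and sent."""
    return hashlib.sha256(content).hexdigest()

def client_side_scan(content: bytes) -> bool:
    """Return True if the content matches a fingerprint of known material."""
    return fingerprint(content) in KNOWN_FINGERPRINTS

message = b"test"
if client_side_scan(message):
    print("match: content flagged before encryption")
else:
    print("no match: content encrypted and sent as normal")
```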

Wherever the comparison takes place, privacy and security experts argue that CSS breaks the E2E trust model since it fundamentally defeats the ‘zero knowledge’ purpose of end-to-end encryption and generates new risks by opening up novel attack and/or censorship vectors.

For example, they point to the prospect of embedded content-scanning infrastructure enabling ‘censorship creep’, as a state could mandate comms providers scan for an increasingly broad range of ‘objectionable’ content (from copyrighted material all the way up to expressions of political dissent that are displeasing to an autocratic regime, since scanning tools developed within a democratic system are unlikely to stay confined to democracies).

An attempt by Apple to deploy CSS last year on iOS users’ devices — when it announced it would begin scanning iCloud Photo uploads for known child abuse imagery — led to a huge backlash from privacy and security experts. Apple first paused — and then quietly dropped reference to the plan in December, so it appears to have abandoned the idea. However governments could revive such moves by mandating deployment of CSS via laws like the UK’s Online Safety Bill which relies on the same claimed child safety justification to embed and enforce content scanning on platforms.

Notably, the UK Home Office has been actively supporting development of content-scanning technologies which could be applied to E2EE services — announcing a “Tech Safety Challenge Fund” last year to splash taxpayer cash on the development of what it billed at the time as “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption”.

Last November, five winning projects were announced as part of that challenge. It’s not clear how ‘developed’ — and/or accurate — these prototypes are. But the government is moving ahead with Online Safety legislation that this legal expert suggests will, de facto, require E2EE platforms to carry out content scanning and drive uptake of CSS — regardless of the state of development of such tech.

Discussing the government’s proposed amendment to Clause 104 — which envisages Ofcom being able to require comms service providers to ‘use best endeavours’ to develop or source their own content-scanning technology, achieving the same purposes as the accredited technology the bill also envisages the regulator signing off — Ryder predicts: “It seems likely that any such solution would be CSS or something akin to it. We think it is highly unlikely that CSPs would instead, for example, attempt to remove all end-to-end encryption on their services. Doing so would not remove the need for them to analyse the content of communications to identify relevant content. More importantly, however, this would fatally compromise security for their users and on their platforms, almost certainly causing many users to switch to other services.”

“[I]f 104 Notices were issued across all eligible platforms, this would mean that the content of almost all internet-based communications by millions of people — including the details of their personal conversations — would be constantly surveilled by service providers. Whether this happens will, of course, depend on how Ofcom exercises its power to issue 104 Notices but the inherent tension between the apparent aim, and the need for proportionate use is self-evident,” he adds.

Failure to comply with the Online Safety Bill will put service providers at risk of a range of severe penalties — so very large sticks are being assembled and put in place alongside sweeping surveillance powers to force compliance.

The draft legislation allows for fines of up to 10% of global annual turnover (or £18M, whichever is higher). The bill would also enable Ofcom to apply to court for “business disruption measures” — including blocking non-compliant services within the UK market — while senior execs at providers who fail to cooperate with the regulator could risk criminal prosecution.
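For concreteness, the penalty ceiling works out as sketched below; this is just a restatement of the stated maximum (10% of global annual turnover or £18M, whichever is higher), not text drawn from the bill itself.

```python
def max_fine_gbp(global_annual_turnover_gbp: float) -> float:
    """Maximum fine under the draft bill: 10% of global annual turnover
    or £18 million, whichever is higher."""
    return max(0.10 * global_annual_turnover_gbp, 18_000_000)

# A provider with £2B global turnover faces a ceiling of £200M;
# a small provider with £50M turnover still faces the £18M floor.
print(max_fine_gbp(2_000_000_000))  # 200000000.0
print(max_fine_gbp(50_000_000))     # 18000000.0
```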

For its part, the UK government has — so far — been dismissive of concerns about the impact of the legislation on E2EE.

In a section on “private messaging platforms”, a government fact-sheet claims content scanning technology would only be mandated by Ofcom “as a last resort”. The same text also suggests these scanning technologies will be “highly accurate” — without providing any evidence in support of the assertion. And it writes that “use of this power will be subject to strict safeguards to protect users’ privacy”, adding: “Highly accurate automated tools will ensure that legal content is not affected. To use this power, Ofcom must be certain that no other measures would be similarly effective and there is evidence of a widespread problem on a service.”

The notion that novel AI will be “highly accurate” for a wide-ranging content scanning purpose at scale is obviously questionable — and demands robust evidence to back it up.

You only need to consider how blunt a tool AI has proven to be for content moderation on mainstream platforms, hence the thousands of human contractors still employed to review automated reports. So it seems highly fanciful that the Home Office has been, or will be, able to foster development of a far more effective AI filter than tech giants like Google and Facebook have managed to devise over the past decades.

As for limits on use of content scanning notices, Ryder’s opinion touches on safeguards contained in Clause 105 of the bill — but he questions whether these are sufficient to address the full sweep of human rights concerns attached to such a potent power.

“Other safeguards exist in Clause 105 of the OLSB but whether those additional safeguards will be sufficient will depend on how they are applied in practice,” he suggests. “There is currently no indication as to how Ofcom will apply those safeguards and limit the scope of 104 Notices.

“For example, Clause 105(h) alludes to Article 10 of the ECHR, by requiring appropriate consideration to be given to interference with the right to freedom of expression. But there is no specific provision ensuring the adequate protection of journalistic sources, which will need to be provided in order to prevent a breach of Article 10.”

In further remarks responding to Ryder’s opinion, the Home Office emphasized that Section 104 Notice powers will only be used where there are no alternative, less intrusive measures capable of achieving the necessary reduction in illegal CSEA (and/or terrorism content) appearing on the service — adding that it will be up to the regulator to assess whether issuing a notice is necessary and proportionate, taking into account matters set out in the legislation, including the risk of harm occurring on a service, as well as the prevalence of harm.

Tesla more than tripled its Austin gigafactory workforce in 2022

Tesla’s 2,500-acre manufacturing hub in Austin, Texas, more than tripled its workforce last year, according to the company’s annual compliance report filed with county officials. Bloomberg first reported on the news.

The report filed with Travis County’s Economic Development Program shows that Tesla increased its Austin workforce from just 3,523 contingent and permanent employees in 2021 to 12,277 by the end of 2022. Bloomberg reports that just over half of Tesla’s workers reside in the county, with the average full-time employee earning a salary of at least $47,147. Outside of Tesla’s factory, the average salary of an Austin worker is $68,060, according to data from ZipRecruiter.

TechCrunch was unable to acquire a copy of the report, so it’s not clear if those workers are all full-time. If they are, Tesla has hired far more full-time employees than it is contracted to. According to the agreement between Tesla and Travis County, the company is obligated to create 5,001 new full-time jobs over the next four years.

The contract also states that Tesla must invest about $1.1 billion in the county over the next five years. Tesla’s compliance report shows that the automaker last year invested $5.81 billion in Gigafactory Texas, which officially launched a year ago at a “Cyber Rodeo” event. In January, Tesla notified regulators that it plans to invest another $770 million into an expansion of the factory to include a battery cell testing site and cathode and drive unit manufacturing site. With that investment will come more jobs.

Tesla’s choice to move its headquarters to Texas and build a gigafactory there has helped the state lead the nation in job growth. The automaker builds its Model Y crossover there and plans to build its Cybertruck in Texas, as well. Giga Texas will also be a model for sustainable manufacturing, CEO Elon Musk has said. Last year, Tesla completed the first phase of what will become “the largest rooftop solar installation in the world,” according to the report, per Bloomberg. Tesla has begun the second phase of installation, and there are already reports that the rooftop array can be seen from space. The goal is to generate 27 megawatts of power.

Musk has also promised to turn the site into an “ecological paradise,” complete with a boardwalk and a hiking/biking trail that will open to the public. There haven’t been many updates on that front, and locals have been concerned that the site is actually more of an environmental nightmare that has led to noise and water pollution. The site, located at the intersection of State Highway 130 and Harold Green Road, east of Austin, is along the Colorado River and could create a climate catastrophe if the river overflows.

The site of Tesla’s gigafactory has also historically been the home of low-income households and has a large population of Spanish-speaking residents. It’s not clear if the jobs at the factory reflect the demographic population of the community in which it resides.

Launch startup Stoke Space rolls out software tool for complex hardware development

Stoke Space, a company that’s developing a fully reusable rocket, has unveiled a new tool to let hardware companies track the design, testing and integration of parts. The new tool, Fusion, is targeting an unsexy but essential aspect of the hardware workflow.

It’s a solution born out of “ubiquitous pain in the industry,” Stoke CEO Andy Lapsa said in a recent interview. The current parts tracking status quo is marked by cumbersome, balkanized solutions built on piles of paperwork and spreadsheets. Many of the existing tools are not optimized “for boots on the ground,” but for finance or procurement teams, or even the C-suite, Lapsa explained.

In contrast, Fusion is designed to optimize simple inventory transactions and parts organization, and it will continue to track parts through their lifespan: as they are built into larger assemblies and go through testing. In an extreme case, such as a hardware failure, Fusion will help teams connect anomalous data to the exact serial numbers of the parts involved.

Image credit: Stoke Space

“If you think about aerospace in general, there’s a need and a desire to be able to understand the part pedigree of every single part number and serial number that’s in an assembly,” Lapsa said. “So not only do you understand the configuration, you understand the history of all of those parts dating back to forever.”
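To illustrate what a “part pedigree” involves, here is a minimal, hypothetical sketch; it is not Stoke’s data model or Fusion’s API, just an illustration of how serial-numbered parts, their histories and their membership in assemblies can be linked so that anomalous data traces back to exact units.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    part_number: str                # design identifier shared by all units of a design
    serial_number: str              # identifier of this specific physical unit
    history: list[str] = field(default_factory=list)  # build, test and rework events

@dataclass
class Assembly:
    name: str
    parts: list[Part] = field(default_factory=list)

    def pedigree(self) -> dict[str, list[str]]:
        """Map each serial number in the assembly to its full event history."""
        return {p.serial_number: list(p.history) for p in self.parts}

# Example: trace an anomaly back to the exact unit involved.
valve = Part("VLV-100", "SN-0042", history=["machined 2023-01-10", "proof test passed"])
engine = Assembly("engine-1", parts=[valve])
valve.history.append("anomaly: pressure spike during hot fire")
print(engine.pedigree()["SN-0042"])
```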

While Lapsa clarified that Fusion is the result of an organic in-house need for better parts management – designing a fully reusable rocket is complicated, after all – turning it into a sellable product was a decision the Stoke team made early on. It’s a notable example of a rocket startup generating pathways to revenue while its vehicle is still under development.

Fusion is particularly relevant to startups. Many existing tools are designed for production runs – not the fast-moving research and development environment that many hardware startups find themselves in, Lapsa added. In these environments, speed and accuracy are paramount.

Brent Bradbury, Stoke’s head of software, echoed these comments.

“The parts are changing, the people are changing, the processes are changing,” he said. “This lets us capture all that as it happens without a whole lot of extra work.”

Amid a boom in AI accelerators, a UC Berkeley-focused outfit, House Fund, swings open its doors

Companies at the forefront of AI would naturally like to stay at the forefront, so it’s no surprise they want to stay close to smaller startups that are putting some of their newest advancements to work.

Last month, for example, Neo, a startup accelerator founded by Silicon Valley investor Ali Partovi, announced that OpenAI and Microsoft have offered to provide free software and advice to companies in a new track focused on artificial intelligence.

Now, another Bay Area outfit — House Fund, which invests in startups with ties to UC Berkeley — says it is launching an AI accelerator and that, similarly, OpenAI, Microsoft, Databricks, and Google’s Gradient Ventures are offering participating startups free and early access to tech from their companies, along with mentorship from top AI founders and executives at these companies.

We talked with House Fund founder Jeremy Fiance over the weekend to get a bit more color about the program, which will replace a broader-based accelerator program House Fund has run. Alums of that earlier program include the additive manufacturing software company Dyndrite and the managed app development platform Chowbotics, whose most recent round in January brought the company’s total funding to more than $60 million.

For founders interested in learning more, the new AI accelerator program runs for two months, kicking off in early July and ending in early September. Six or so companies will be accepted, with the early application deadline coming up next week on April 13th. (The final application deadline is on June 1.) As for the time commitment involved across those two months, every startup could have a different experience, says Fiance. “We’re there when you need us, and we’re good at staying out of the way.”

There will be the requisite kickoff retreat to launch the program and let founders get to know one another. Candidates who are accepted will also have access to some of UC Berkeley’s renowned AI professors, including Michael Jordan, Ion Stoica, and Trevor Darrell. And they can opt into dinners and events in collaboration with these various constituents.

As for some of the financial dynamics, every startup that goes through the program will receive a $1 million investment on a $10 million post-money SAFE note. Importantly, too, as with the House Fund’s venture dollars, its AI accelerator is seeking startups that have at least one Berkeley-affiliated founder on the co-founding team. That includes alumni, faculty, PhDs, postdocs, staff, students, dropouts, and other affiliates.

There is no demo day. Instead, says Fiance, founders will receive “directed, personal introductions” to the VCs who best fit with their startups.

Given the buzz over AI, the new program could supercharge House Fund, the venture organization, which is already growing fast. Fiance launched it in 2016 with just $6 million and it now manages $300 million in assets, including on behalf of Berkeley Endowment Management Company and the University of California.

At the same time, the competition out there is fierce and growing more so by the day.

Though OpenAI has offered to partner with House Fund, for example, the San Francisco-based company announced its own accelerator back in November. Called Converge, that cohort was made up of 10 or so founders who received $1 million each and admission to five weeks of office hours, workshops and other events; their funding came from the OpenAI Startup Fund.

Y Combinator, the biggest accelerator in the world, is also oozing with AI startups right now, all of them part of a winter class that will be talking directly with investors this week via demo days that are taking place tomorrow, April 5th, and on Thursday.
