Draft post-quantum cryptography (PQC) standards have been published by the US National Institute of Standards and Technology (NIST). The new framework is designed to help organizations protect themselves from future quantum-enabled cyber-attacks.
The draft documents were published on August 24, 2023, and comprise three draft Federal Information Processing Standards (FIPS).
These standards were selected by NIST following a process that began in December 2016, when the agency issued a public call for submissions to the PQC Standardization Process.
After several rounds of evaluation, NIST announced in July 2022 the four algorithms that would form its PQC standards. The CRYSTALS-Kyber algorithm was chosen for general encryption (used for access to secure websites), while CRYSTALS-Dilithium, FALCON and SPHINCS+ were selected for digital signatures. The three new drafts cover Kyber, Dilithium and SPHINCS+; NIST has said a draft standard for FALCON is to follow.
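For a sense of what these algorithms do in practice, the sketch below shows a Kyber key-encapsulation round trip. It assumes the community-maintained liboqs-python bindings from the Open Quantum Safe project; that library, the “Kyber512” identifier and the exact method names are illustrative assumptions, not part of the NIST drafts, and may vary between library versions.

```python
# Minimal sketch of a post-quantum key exchange with CRYSTALS-Kyber,
# assuming the open-source liboqs-python bindings (Open Quantum Safe).
# The "Kyber512" identifier and API shape may differ across versions.
import oqs

KEM_ALG = "Kyber512"

with oqs.KeyEncapsulation(KEM_ALG) as receiver, oqs.KeyEncapsulation(KEM_ALG) as sender:
    public_key = receiver.generate_keypair()              # receiver publishes a public key
    ciphertext, secret_sender = sender.encap_secret(public_key)  # sender encapsulates a shared secret
    secret_receiver = receiver.decap_secret(ciphertext)   # receiver recovers the same secret
    assert secret_sender == secret_receiver               # both sides now hold one symmetric key
```

The shared secret would then drive an ordinary symmetric cipher such as AES; Kyber replaces only the key-exchange step that a future quantum computer could break.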
The technology secretary has drawn the ire of encryption experts by repeating false claims and half-truths about the Online Safety Bill.
The proposed legislation would effectively force private messaging companies that use end-to-end encryption to scan their users’ content for child abuse material. This would require users to install client-side scanning software that reads messages on their devices before they are encrypted.
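To make the technical point concrete, here is a deliberately simplified sketch of that client-side scanning pattern: the message is checked against a database of known-content digests before encryption takes place. Everything here (the blocklist, the reporting stub, the use of exact SHA-256 matching) is a hypothetical illustration; real proposals rely on perceptual hashes that survive re-encoding.

```python
# Simplified illustration of client-side scanning: the plaintext is inspected
# on the device *before* end-to-end encryption, which is the crux of the
# experts' objection. All names and data here are hypothetical placeholders.
import hashlib

from cryptography.fernet import Fernet  # pip install cryptography

# Stand-in for an externally supplied database of known illegal-content hashes.
BLOCKED_DIGESTS = {hashlib.sha256(b"known-bad-example").hexdigest()}

def report_match(digest: str) -> None:
    """Hypothetical reporting hook; a real client would notify an authority."""
    print(f"scan hit reported: {digest}")

def send_message(plaintext: bytes, key: bytes) -> bytes | None:
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKED_DIGESTS:
        report_match(digest)   # content is flagged before it is ever encrypted
        return None
    return Fernet(key).encrypt(plaintext)  # only unflagged content is encrypted

key = Fernet.generate_key()
print(send_message(b"hello", key))  # encrypted token: the scanner saw the plaintext first
```

The objection critics raise is visible in the control flow: whatever the matching technology, the scanner must see the plaintext, so the encryption is no longer end to end.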
Michelle Donelan told Radio 4’s Today program: “Technology is in development to enable you to have encryption as well as to be able to access this particular information.”
This prompted a furious backlash from experts.
Matthew Hodgson, CEO of secure messaging app Element, branded the statement as “factually incorrect.”
“No technology exists which allows encryption and access to ‘this particular information.’ Detecting illegal content means all content must be scanned in the first place. By adding the ability to use scanning technology at all, you open the floodgates to those who would exploit and abuse it,” he said.
“You put the mechanism in place for mass surveillance on UK citizens by the ‘good guys’ and the bad. It is utterly unacceptable to attempt to force tech companies to implement mass surveillance within their products.”
Donelan added that “the onus is on tech companies to invest in technology to solve this issue.” It’s an argument often repeated by lawmakers and law enforcement agencies, but one that technology experts roundly dismiss as either disingenuous or ignorant.
“Countless experts, from private companies to academics and civil society organizations have told you this technology is impossible to build,” Hodgson responded. “Is the government expecting every tech company to plough money into a never-ending R&D project that will never result in a workable product?”
The head of the Financial Conduct Authority (FCA) has warned that artificial intelligence (AI) could disrupt the financial services sector “in ways and at a scale not seen before”, and that the regulator will be forced to take action against AI-based fraud.
In a speech delivered to company executives in central London, FCA CEO Nikhil Rathi noted the risks of “cyber fraud, cyber attacks and identity fraud increasing in scale, sophistication and effectiveness” as AI becomes more widespread.
Prime Minister Rishi Sunak is keen to make the UK a center for AI regulation, and the FCA’s work in this area is part of a much broader effort to work out how to regulate big tech as it increasingly offers financial products.
In his speech, Rathi warned that AI will increase risks for financial firms in particular. Senior managers at those firms will be “ultimately accountable for the activities of the business”, including decisions taken by AI.
“As AI is further adopted,” observed Rathi, “the investment in fraud prevention and operational and cyber resilience will have to accelerate simultaneously. We will take a robust line on this. There’s going to be full support for beneficial innovation alongside proportionate protections.”
Deepfake video
Rathi cited the example of a recent deepfake video of personal finance expert Martin Lewis supposedly selling speculative investments. Lewis described the video as “terrifying” and has called on regulators to force big technology companies to take action to prevent similar scams.
Responding to Rathi’s comments, cybersecurity specialist Suid Adeyanju, CEO of RiverSafe, said: “AI is set to become a regulatory minefield for the FCA, so maintaining a clear line of communication with businesses about the challenges and opportunities ahead is going to be critical in terms of maintaining high standards within the market.”
Adeyanju continued: “The tidal wave of AI-enabled cyber attacks and online scams adds an even greater level of complexity, so it’s vital that financial services firms beef up their cyber credentials and capabilities in order to identify and neutralise these threats before they can establish a foothold.”