- Their declaration said AI could ‘confidently lie and easily deceive’ its users
- They called for more regulation based on ‘hard laws with enforcement powers’
Two leading Japanese communications and media companies have warned that AI could cause ‘social collapse and wars’ if governments do not act to regulate the technology.
Nippon Telegraph and Telephone (NTT) – Japan’s biggest telecoms firm – and Yomiuri Shimbun Group Holdings – the owners of the nation’s largest newspaper – today published a joint manifesto on the rapid development of generative AI.
The media giants recognise the benefits of the technology, describing it as ‘already indispensable to society’, citing its accessibility and ease of use for consumers and its potential to boost productivity.
But the declaration said AI could ‘confidently lie and easily deceive’ users, and may be used for nefarious purposes, including the undermining of democratic order by interfering ‘in the areas of elections and security… to cause enormous and irreversible damage’.
The companies also suggested widespread integration of AI could worsen the ‘attention economy’ – the notion that an overwhelming supply of information is making human attention increasingly scarce.
That scarcity pushes governments and corporations to compete for attention, engineering ever new ways to capture as much of it from citizens and customers as possible – something NTT and Yomiuri Shimbun say has ‘made the information space unhealthy and damages the dignity of the individual’, and could become far more damaging with the rise of AI.
In response, the Japanese firms said countries worldwide must ensure that education on the benefits and drawbacks of AI is incorporated into compulsory school curriculums, and declared ‘a need for strong legal restrictions on the use of generative AI – hard laws with enforcement powers’.
It comes as the EU prepares to implement new legislation seen as the most comprehensive regulation of AI the world has seen thus far.
European governments are at the forefront of regulating AI at present, with all 27 EU member states endorsing the ‘Artificial Intelligence Act’ proposed in December.
Set to come into effect later this year, the Act categorises different AI products into four risk classes, implementing a blanket ban on AI considered to have ‘unacceptable risk’ and enforcing a range of restrictions and obligations on the developers and providers of high-risk systems.
Any AI systems deemed capable of ‘deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making’ will be prohibited, along with any technology that seeks to ‘exploit persons’ vulnerabilities’ or categorise people according to attributes like race, gender, sexuality, or social behaviour.
Developers whose AI systems are deemed high-risk must subject their products to a raft of risk management, cybersecurity and data governance controls, and must ensure human oversight is maintained over the system at all times.
Even general-purpose AI systems not deemed a security risk will still be subject to specific controls.
The Act is set to be implemented this summer and bans will begin to be enforced six months later, with entities found to be flouting the regulations liable to pay hefty fines – though it is unclear whether senior individuals would face further legal action for a serious violation.
And until the Act comes into force, the use of AI in Europe can continue largely unrestricted.
The EU’s approach is far more comprehensive than that of the US, where the regulation of AI is more decentralised, with each state largely responsible for its own legislation.
But the interest in AI regulation has gathered pace – at least 25 US states considered AI-related legislation in 2023, and 15 passed laws or resolutions, according to a Bloomberg report and data from the National Conference of State Legislatures.
Meanwhile, the White House said last month it will require federal agencies using artificial intelligence to adopt ‘concrete safeguards’ by December 1, 2024, to protect Americans’ rights and ensure safety.
The Office of Management and Budget issued a directive to federal agencies to monitor, assess and test AI’s impacts ‘on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI.’
Agencies must also conduct risk assessments and set operational and governance metrics.
And in October last year, President Biden signed an executive order invoking the Defense Production Act to require developers of AI systems posing risks to US national security, the economy, public health or safety to share the results of safety tests with the US government before rolling out their products.
However, many have criticised governments’ efforts to regulate AI, arguing that excessive legislation drafted by uninformed politicians could stifle innovation.
Others say that restricting the development of AI, particularly for security or military applications, could leave Western countries vulnerable to more advanced systems pioneered by rivals with fewer legal barriers.
The US has attempted to ward off these criticisms by announcing plans to hire more than 100 AI professionals to advise policymakers, and in March declared that federal agencies will be required to designate chief AI officers within 60 days.
Robert Johnson is a UK-based business writer specializing in finance and entrepreneurship. With an eye for market trends and a keen interest in the corporate world, he offers readers valuable insights into business developments.