Scarlett Johansson’s AI row has echoes of Silicon Valley’s bad old days


By Zoe Kleinman, Technology editor

“Move fast and break things” is a motto that continues to haunt the tech sector, some 20 years after it was coined by a young Mark Zuckerberg.

Those five words came to symbolise Silicon Valley at its worst – a combination of ruthless ambition and a rather breathtaking arrogance – profit-driven innovation without fear of consequence.

I was reminded of that phrase this week when the actor Scarlett Johansson clashed with OpenAI. Ms Johansson claimed that both she and her agent had declined requests for her to voice its new ChatGPT product – and that when it was unveiled, it sounded just like her anyway. OpenAI denies the voice was an intentional imitation.

It’s a classic illustration of exactly what the creative industries are so worried about – being mimicked and eventually replaced by artificial intelligence.

Last week Sony Music, the largest music publisher in the world, wrote to Google, Microsoft and OpenAI demanding to know whether any of its artists’ songs had been used to develop AI systems, saying they had no permission to do so.

There are echoes in all this of the macho Silicon Valley giants of old, seeking forgiveness rather than permission as an unofficial business plan.

But the tech firms of 2024 are extremely keen to distance themselves from that reputation.

OpenAI wasn’t shaped from that mould. It was originally created as a non-profit organisation that would invest any extra profits back into the business.

In 2019, when it formed a profit-oriented arm, the company said the profit-making side would be led by the non-profit side, and that a cap would be imposed on the returns investors could earn.

Not everybody was happy about this shift – it was said to have been a key reason behind original co-founder Elon Musk’s decision to walk away.

When OpenAI CEO Sam Altman was suddenly fired by his own board late last year, one of the theories was that he wanted to move further away from the original mission. We never found out for sure.

But even if OpenAI has become more profit-driven, it still has to face up to its responsibilities.

In the world of policy-making, almost everyone is agreed that clear boundaries are needed to keep companies like OpenAI in line before disaster strikes.

So far, the AI giants have largely played ball on paper. At the world’s first AI Safety Summit six months ago, a bunch of tech bosses signed a voluntary pledge to create responsible, safe products that would maximise the benefits of AI technology and minimise its risks.

Those risks, originally identified by the event organisers, were the proper stuff of nightmares. When I asked back then about the more down-to-earth threats to people posed by AI tools discriminating against them, or replacing them in their jobs, I was quite firmly told that this gathering was dedicated to discussing the absolute worst-case scenarios only – this was Terminator, Doomsday, AI-goes-rogue-and-destroys-humanity territory.

Six months later, when the summit reconvened, the word “safety” had been removed entirely from the conference title.

Last week, a draft UK government report from a group of 30 independent experts concluded that there was “no evidence yet” that AI could generate a biological weapon or carry out a sophisticated cyber attack. The plausibility of humans losing control of AI was “highly contentious”, it said.

Some people in the field have been saying for quite a while that the more immediate threat from AI tools is that they could replace people's jobs or fail to recognise a range of skin colours. AI ethics expert Dr Rumman Chowdhury says these are “the real problems”.

The AI Safety Institute declined to say whether it had safety-tested any of the new AI products launched in recent days, notably OpenAI’s GPT-4o and Google’s Project Astra, both among the most powerful and advanced generative AI systems I have seen made available to the public so far. In the meantime, Microsoft has unveiled a new laptop containing AI hardware – the start of AI tools being physically embedded in our devices.

The independent report also states that there is currently no reliable way of understanding exactly why AI tools generate the output that they do – even among developers – and that the established safety testing practice of Red Teaming, in which evaluators deliberately try to get an AI tool to misbehave, has no best-practice guidelines.

At the follow-up summit running this week, hosted jointly by the UK and South Korea in Seoul, the firms have committed to shelving a product if it doesn’t meet certain safety thresholds – but these will not be set until the next gathering in 2025.

Some fear that all these commitments and pledges don’t go far enough.

“Volunteer agreements essentially are just a means of firms marking their own homework,” says Andrew Straight, associate director of the Ada Lovelace Institute, an independent research organisation. “It’s essentially no replacement for legally binding and enforceable rules which are required to incentivise responsible development of these technologies.”

OpenAI has just published its own 10-point safety process which it says it is committed to – but one of its senior safety-focused engineers recently resigned, writing on X that his department had been “sailing against the wind” internally.

“Over the past years, safety culture and processes have taken a backseat to shiny products,” posted Jan Leike.

There are, of course, other teams at OpenAI who continue to focus on safety and security.

Currently though, there’s no official, independent oversight of what any of them are actually doing.

“We have no guarantee that these companies are sticking to their pledges,” says Professor Dame Wendy Hall, one of the UK’s leading computer scientists.

“How do we hold them to account on what they’re saying, like we do with drugs companies or in other sectors where there is high risk?”

We may also find that these powerful tech leaders grow less amenable once push comes to shove and the voluntary agreements become a bit more enforceable.

When the UK government said it wanted the power to pause the rollout of security features from big tech companies if there was the potential for them to compromise national security, Apple threatened to remove services from Britain, describing it as an “unprecedented overreach” by lawmakers.

The legislation went through and so far, Apple is still here.

The European Union’s AI Act has just been signed into law, and it is both the first comprehensive AI legislation of its kind and the strictest so far. There are tough penalties for firms which fail to comply. But it creates more leg-work for AI users than for the AI giants themselves, says Nader Henein, VP analyst at Gartner.

“I would say the majority [of AI developers] overestimate the impact the Act will have on them,” he says.

Any company using AI tools will have to categorise and risk-score them – and the firms which supplied those tools will have to provide enough information for them to be able to do that, he explains.

But this doesn’t mean that they’re off the hook.

“We need to move towards legal regulation over time but we can’t rush it,” says Prof Hall. “Setting up global governance principles that everyone signs up to is really hard.”

“We also need to make sure it’s genuinely worldwide and not just the western world and China that we are protecting.”

Those who attended the AI Seoul Summit say it felt useful. It was “less flashy” than Bletchley but with more discussion, said one attendee. Interestingly, the concluding statement of the event was signed by 27 countries but not China, although it had representatives there in person.

The overriding issue, as ever, is that regulation and policy move a lot more slowly than innovation.

Prof Hall believes the “stars are aligning” at government levels. The question is whether the tech giants can be persuaded to wait for them.

