“Our theoretical analysis is so far off what these models can do.”
Mystery Machine
The AI industry exploded in late 2022, following the wild success of OpenAI’s ChatGPT release in November of that year. Generative AI tools — from chatbots to remarkably lifelike music and voice generators to image and video creators — continue to dazzle the public, while AI and machine learning advances keep finding applications in fields like healthcare and drug discovery.
Just one problem: not even the folks creating all this AI fully understand how it really works.
“Obviously, we’re not completely ignorant,” University of California, San Diego computer scientist Mikhail Belkin told MIT Technology Review. “But our theoretical analysis is so far off what these models can do.”
Indeed, as MIT Tech explains, many AI models are notorious black boxes: an algorithm might produce a useful output, but it’s unclear to researchers how it actually got there. This has been the case for years, with AI systems often defying statistics-based theoretical models. Regardless, the AI industry is careening ahead, fueled by billions of investment dollars and a hefty share of near-fanatical belief. (And, of course, by the C-suite vision of eliminating vast swaths of the workforce.)
In other words, AI is already everywhere. But as it’s increasingly integrated into human life, the scientists building the tech are still trying to fully understand how it learns and functions.
Spaghetti, Meet Wall
Some experts chalk the lack of understanding up to the field’s youth, arguing that AI’s nascency means researchers will sometimes have to work backward from experimental results and outputs.
“These are exciting times,” Boaz Barak, a Harvard University computer scientist, told MIT Tech. “Many people in the field often compare it to physics at the beginning of the 20th century.”
“We have a lot of experimental results that we don’t completely understand,” Barak continued, “and often when you do an experiment it surprises you.”
No Guarantees
If the industry’s willingness to charge ahead despite that minimal understanding seems cavalier, that’s because it is.
To be sure, experimentation and uncertainty are natural, if not inherent, to the scientific process. But AI models are no longer confined to metaphorical Silicon Valley test tubes; the AI industry is a lucrative financial behemoth, and as with other technologies, the familiar “move fast and break things” approach that much of the industry has embraced could well present challenges down the road.
After all, as Belkin told MIT Tech, guarantees are important. And here? They don’t exist quite yet.
“I’m very interested in guarantees,” Belkin said. “If you can do amazing things but you can’t really control it, then it’s not so amazing.”
“What good is a car that can drive 300 miles per hour,” he added, “if it has a shaky steering wheel?”
More on shaky AI steering wheels: Researchers Create AI-Powered Malware That Spreads on Its Own