Should AI be stopped before it is too late?

Technology Desk | banglanews24.com
Update: 2023-05-30 12:21:06

Steve Wozniak is no fan of Elon Musk. In February, the Apple co-founder described the Tesla, SpaceX and Twitter owner as a “cult leader” and called him dishonest.

Yet, in late March, the tech titans came together, joining dozens of high-profile academics, researchers and entrepreneurs in calling for a six-month pause in training artificial intelligence systems more powerful than GPT-4, the latest version of ChatGPT, the chatbot that has taken the world by storm.

Their letter, penned by the United States-based Future of Life Institute, said the current rate of AI progress was becoming a “dangerous race to ever-larger unpredictable black-box models” with “emergent capabilities”. Research and development, the letter said, should instead be “refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal”.

ChatGPT, released in November 2022 by research lab OpenAI, employs what is known as generative artificial intelligence – which allows it to mimic humans in producing and analysing text, answering questions, taking tests, and performing other language and speech-related tasks. Terms previously limited to the world of computer programming, such as “large language model” and “natural language processing”, have become commonplace in a growing debate around ChatGPT and its many AI-based rivals, like Google’s Bard or Stability AI’s image generator, Stable Diffusion.

But so have stories of unhinged answers from chatbots and AI-generated “deepfake” images and videos – like the Pope sporting a large white puffer jacket, Ukrainian President Volodymyr Zelenskyy appearing to surrender to Russian forces and, last week, of an apparent explosion at the Pentagon.

Meanwhile, AI is already shaking up industries from the arts to human resources. Goldman Sachs warned in March that generative AI could wipe out 300 million jobs in the future. Other research shows that teachers could be among the most affected.

Then there are more nightmarish scenarios of a world where humans lose control of AI – a common theme in science fiction writing that suddenly seems not quite so implausible. Regulators and legislators the world over are watching closely. And sceptics gained a prominent ally when computer scientist and psychologist Geoffrey Hinton, considered one of the pioneers of AI, left his job at Google to speak publicly about the “dangers of AI” and how it could become capable of outsmarting humans.

So, should we pause or stop AI before it’s too late?

The short answer: An industry-wide pause is unlikely. In May, OpenAI CEO Sam Altman told a US Senate hearing that the company would not develop a new version of ChatGPT for six months, and he has used his new mainstream fame to call for more regulation. But competitors could still push forward with their own work. For now, experts say, the biggest fear involves the misuse of AI to destabilise economies and governments. The risks extend beyond generative AI, and experts say it is governments – not the market – that must set guardrails before the technology goes rogue.

At the moment, much of the technology on offer revolves around what the European Parliamentary Research Service calls “artificial narrow intelligence” or “weak AI”. This includes “image and speech recognition systems … trained on well-labelled datasets to perform specific tasks and operate within a predefined environment”.

While this form of AI cannot in itself escape human control, it has problems of its own: it reflects the biases of its designers, and some of it has been used for nefarious purposes, such as surveillance through facial recognition software.

On the positive side, this AI also includes things like digital voice assistants and the tech behind self-driving cars.

Generative AI like GPT-4 is also technically “weak AI”, but even OpenAI admits it does not fully understand how the chatbot works, raising some troubling questions and speculation. GPT-4 has also reportedly shown “sparks of artificial general intelligence”, according to a major study by Microsoft Research released in April, owing to its “core mental capabilities”, the “range of topics on which it has gained expertise”, and the “variety of tasks it is able to perform”.

Artificial general intelligence, or strong AI, is still a hypothetical scenario, but one in which AI could operate autonomously and even surpass human intelligence. Think 2001: A Space Odyssey or Blade Runner.

It is exciting and troubling.

“It’s worth remembering that nuclear weapons, space travel, gene editing, and machines that could converse fluently in English were all once considered science fiction,” Stuart Russell, director of the Center for Human-Compatible AI at the University of California, Berkeley, told Al Jazeera. “But now they’re real.”

Source: Al Jazeera 

