ChatGPT gave the world a glimpse of recent advances in computer science, even if not everyone figured out quite how it works or what to do with it. (Pixabay)

2023: The year of grappling with artificial intelligence and its uncertain implications

In 2023, artificial intelligence finally broke into the mainstream. Yet the technology still has a long way to go before it meets the expectations of people who envision human-like machines as depicted in science fiction.

ChatGPT fueled the year’s AI fanfare. The chatbot gave the world a glimpse of recent advances in computer science, even though not everyone understood how it worked or what it was supposed to do.

“I would call this an inflection moment,” AI researcher Fei-Fei Li said. “The year 2023 will hopefully be remembered in history for both the profound changes in the technology and the awakening of the public. It also shows how messy this technology is.”

It was the year people figured out “what this is, how to use it, what the impact is — all the good, the bad and the ugly,” she said.

PANIC OVER AI

The first AI panic of 2023 came shortly after New Year’s Day, when classrooms reopened and schools from Seattle to Paris began blocking ChatGPT. Teenagers had already begun asking the chatbot, released at the end of 2022, to write essays and answer take-home tests.

The big AI language models behind technology like ChatGPT work by repeatedly guessing the next word in a sentence after “learning” the patterns of a huge body of human writing. They often get facts wrong. But the results seemed so natural that they sparked curiosity about future advances in artificial intelligence — and about its potential use for deception and fraud.
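The idea of “repeatedly guessing the next word” can be illustrated with a toy sketch. Real language models use neural networks trained on vast corpora; the bigram counter below is only a minimal stand-in for the same next-word-prediction loop, with an invented miniature corpus.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "a huge body of human writing".
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Learn" the patterns: count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in training."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Generate text by repeatedly guessing the most likely next word.
word = "the"
generated = [word]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))
```

The sketch also shows why such models “often get the facts wrong”: the output is whatever continuation was statistically common in the training text, with no check against reality.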

Concerns grew as this new set of generative AI tools — spewing out not just words, but new images, music, and synthetic voices — threatened the livelihood of anyone who wrote, drew, composed, or coded. It helped fuel strikes by Hollywood writers and actors, as well as legal challenges from visual artists and bestselling authors.

Some of the most respected researchers in the field warned that the technology’s unchecked development was marching toward surpassing human intelligence and might threaten humanity’s existence, while other researchers dismissed those worries as exaggerated or pointed to more immediate risks.

By spring, AI-generated deepfakes — some more convincing than others — had jumped into US election campaigns, with one falsely depicting the country’s former top infectious disease expert embracing Donald Trump. The technology made it increasingly difficult to distinguish real from fake war footage in Ukraine and Gaza.

By the end of the year, the AI crises had come home to ChatGPT’s own maker, the San Francisco startup OpenAI, which was nearly destroyed by corporate turmoil over its charismatic CEO, and to a government building in Brussels, where exhausted European Union political leaders emerged after days of intense negotiations with an agreement on the world’s first major legal safeguards for artificial intelligence.

The new AI law won’t go into effect until 2025, and other legislative bodies – including the US Congress – are still a long way from enacting their own legislation.

TOO MUCH HYPE?

There is no doubt that the commercial AI products announced in 2023 included technical achievements that were not possible in earlier stages of AI research dating back to the mid-20th century.

But the latest generative AI wave may be peaking, says market research firm Gartner, which has tracked the “hype cycle” of emerging technologies since the 1990s. Picture a wooden roller coaster clicking up its highest hill before plunging into what Gartner calls a “trough of disillusionment” and then leveling back out toward reality.

“Generative AI is right at the peak of inflated expectations,” Gartner analyst Dave Micko said. “Vendors and producers of generative AI are making huge claims about its capabilities and its ability to deliver on them.”

Google came under fire this month for editing a video demonstration of its AI model called Gemini in a way that made it appear more impressive — and human.

Micko said the leading AI developers are pushing certain ways of applying the latest technology, most of which correspond to their current product portfolio – whether it’s search engines or workplace productivity software. That doesn’t mean the world uses it that way.

“As much as Google, Microsoft, Amazon and Apple would like us to adopt the way they think about their technology and produce it, I think the adoption will be from the bottom up,” he said.

HAVE WE BEEN HERE BEFORE?

It’s easy to forget that this isn’t the first wave of AI commercialization. Computer vision techniques developed by Li and other researchers helped sort through a huge database of photographs to recognize objects and individual faces and guide self-driving cars. Advances in speech recognition made voice assistants like Siri and Alexa a part of many people’s lives.

“When we launched Siri in 2011, it was the fastest-growing consumer app and the only major AI application people had ever experienced,” said Tom Gruber, co-founder of Siri Inc., which Apple bought and made a built-in iPhone feature.

But Gruber believes what’s happening now is the “biggest wave ever” of AI, unleashing new opportunities and dangers.

“We were surprised that this amazing language ability could emerge, almost by accident, from training a machine to play solitaire with the whole internet,” Gruber said. “It’s kind of amazing.”

The dangers could come quickly in 2024, as major national elections in the US, India and elsewhere could be flooded with AI-generated deepfakes.

In the longer term, AI technology’s rapidly evolving language, visual perception and step-by-step reasoning capabilities could fulfill the vision of a true digital assistant — but only if it is given access to “the inner loop of our digital life stream,” Gruber said.

“They can control your attention like, ‘You should watch this video. You should read this book. You should respond to this person’s communication,’” Gruber said. “That’s what a real executive assistant does. And we could have that, but at a really high risk to personal information and privacy.”
