Know what’s happening in the AI universe today, September 4. (Pexels)

5 Latest AI Developments: AI’s Influence on Australia’s Economy, UK’s Objectives for the AI Safety Summit and More

Deloitte forecasts a significant impact on the Australian economy from AI-driven disruption; IBM researchers “hypnotize” AI chatbots into leaking sensitive information; Western University students adopt ChatGPT as a creative tool despite worries about cheating; a Stanford study uncovers weaknesses in AI text detectors. These stories and more in our daily roundup as we delve deeper into the details.

1. Disruption by artificial intelligence looms: Deloitte predicts significant impacts on the Australian economy

Deloitte’s report warns that generative artificial intelligence (GAI) is rapidly disrupting a quarter of the Australian economy, particularly finance, ICT, media, professional services, education and wholesale, sectors that together account for almost $600 billion, or 26 per cent, of economic activity. Young people who are already embracing GAI are driving this change. Deloitte suggests companies prepare for the technology by drawing on tech-savvy young workers, as GAI could transform work and challenge existing practices, while also noting the slow uptake of GAI in Australian businesses so far, the Financial Review reports.

2. IBM researchers hypnotize AI chatbots to get information

IBM researchers have successfully “hypnotized” AI chatbots such as ChatGPT and Bard, manipulating them into revealing sensitive information and giving malicious advice. According to a Euronews.next report, by instructing the large language models to follow the rules of a “game”, the researchers were able to make the chatbots produce false and harmful responses. The experiment showed that AI chatbots can be coaxed into giving bad instructions, generating malicious code, leaking confidential information and even encouraging risky behavior, all without tampering with their training data.

3. Western University students adopt ChatGPT as an idea generator amid cheating concerns

Despite concerns about using AI tools like ChatGPT to cheat, some students at Western University find it useful for generating ideas for assignments, according to a CBC report. They appreciate its ability to surface information that is not easily found through a Google search and liken its responses to interacting with a person. Instructors worry that this popularity may encourage students to take shortcuts that undermine the basic principles of writing and critical thinking they seek to instill.

4. Stanford research reveals flaws in AI text recognition

Stanford researchers have revealed flaws in the detectors used to recognize AI-generated text. These algorithms often flag writing by non-native English speakers as AI-generated, raising concerns for students and job seekers. James Zou of Stanford University advises caution when using such detectors for tasks such as reviewing job applications or college essays. SciTechDaily reports that the study tested seven GPT detectors and found they frequently misclassified essays written by non-native English speakers as AI-generated, highlighting the detectors’ unreliability.

5. UK government sets objectives for AI Safety Summit

The UK government has announced its objectives for the upcoming AI Safety Summit, which will take place on November 1 and 2 at Bletchley Park. Technology Secretary Michelle Donelan will begin formal engagement ahead of the summit, with officials opening discussions with participating countries and AI organizations. The summit aims to address the risks posed by powerful frontier AI systems and to explore their potential benefits, such as strengthening biosecurity and improving people’s lives through AI-based medical technology and safer transport.
