Forecast predicts top 3 tech risks by 2040: runaway AI competition, generative AI eroding truth, and invisible cyber attacks
Computer systems are changing at astonishing speed, with remarkable advances in artificial intelligence, the vast network of connected devices known as the “Internet of Things”, and wireless connectivity. Unfortunately, these improvements bring potential risks alongside their benefits. To ensure a secure future, we need to anticipate what might happen in computing and address it proactively. So what do experts think will happen, and what can we do to prevent major problems?
To answer this question, our research team from the Universities of Lancaster and Manchester turned to the science of looking into the future, called ‘forecasting’. No one can predict the future, but we can put together predictions: descriptions of what might happen based on current trends.
Long-term predictions about technology trends can indeed turn out to be very accurate. And an excellent way to get predictions is to combine the ideas of several different experts to find where they agree.
We consulted 12 expert “futurists” for a new study. These are people whose job involves long-term forecasting of the effects of changes in information technology out to 2040.
Using a technique called a Delphi study, we combined futurists’ predictions into risks and their recommendations for dealing with those risks.
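As a rough illustration of how a Delphi study works, the sketch below simulates one feedback round: each expert rates a risk, the group median is fed back, and experts revise until ratings converge. The rating scale, numbers, and consensus rule are purely hypothetical and are not drawn from the study itself.

```python
import statistics

def delphi_round(ratings):
    """Return the group median that is fed back to the experts."""
    return statistics.median(ratings)

def consensus_reached(ratings, tolerance=1):
    """Toy consensus rule: every rating lies within `tolerance` of the median."""
    m = statistics.median(ratings)
    return all(abs(r - m) <= tolerance for r in ratings)

# Round 1: five experts rate a risk on a 1-5 scale (illustrative numbers).
round1 = [5, 4, 2, 5, 3]
feedback = delphi_round(round1)          # median shown back to the panel

# Round 2: experts revise their ratings after seeing the feedback.
round2 = [4, 4, 3, 5, 4]
done = consensus_reached(round2)         # True once opinions have converged
```

In a real Delphi study the feedback also includes the experts' written reasoning, and rounds continue until the panel's answers stabilise.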
I. Software Problems
Experts foresee rapid advances in artificial intelligence (AI) and connected systems, leading to a much more computer-driven world than today’s. Surprisingly, however, they expected little impact from two much-hyped innovations. Blockchain, a way of storing information that makes it difficult or impossible to tamper with, they suggested is mostly irrelevant to today’s problems. And quantum computing is still in its infancy and may have little impact over the next 15 years.
Futurists highlighted three major risks associated with computer software development as follows.
1. Artificial intelligence competition leading to problems
Our experts suggested that many countries treat AI as an area where they want to gain a competitive technological edge, which encourages software developers to take risks in their use of AI. This, combined with AI’s complexity and its potential to surpass human abilities, could lead to disasters.
Imagine, for example, that shortcuts in testing lead to an error in the control systems of cars built after 2025, one that goes unnoticed amid all the complex AI programming. The error could even be tied to a specific date, causing large numbers of cars to begin behaving erratically at the same time, killing many people worldwide.
2. Generative artificial intelligence
Generative AI may make it impossible to determine the truth. For years, photos and videos have been very difficult to fake, so we have learned to treat them as authentic. Generative AI has already radically changed that situation, and we expect its ability to produce convincing fake media to keep improving, making it very difficult to tell whether any image or video is genuine.
Let’s say someone in a position of trust, such as a respected executive or celebrity, uses social media mostly to post authentic content but occasionally slips in convincing fakes. For those following them, there is no reliable way to tell the difference; knowing the truth becomes impossible.
3. Invisible cyber attacks
Finally, the complexity of the systems being built—networks of systems owned by different organizations that all depend on each other—has an unexpected consequence. It becomes difficult, if not impossible, to get to the bottom of what makes things go wrong.
Imagine that a cybercriminal hacks an app used to control appliances such as ovens or refrigerators, causing all the appliances to turn on at once. This increases the demand for electricity in the grid and causes large power outages.
Electric company experts find it hard even to identify which devices caused the spike, let alone notice that they are all controlled by the same application. Cyber sabotage becomes invisible, indistinguishable from ordinary failures.
II. Software jujitsu
The purpose of such forecasts is not to alarm, but to give us the chance to start addressing the problems. Perhaps the simplest suggestion the experts offered was a kind of software jujitsu: using software to protect against itself. We can make computer programs perform their own safety checks by writing additional code that validates their output: code, in effect, that checks itself.
Similarly, we can insist that the methods already used to ensure software operates safely continue to be applied to new technologies, and that the novelty of these systems is not used as an excuse to ignore good security practice.
III. Strategic solutions
However, the experts agreed that technical answers alone are not enough. Instead, solutions are found in the interactions between people and technology.
We need to develop the skills to handle these human-technology problems, and new forms of education that cross disciplines. Governments must strengthen security requirements in their own AI procurement and regulate AI security across the sector, encouraging responsible development and deployment.
These predictions give us a set of tools to solve potential future problems. Let’s use these tools to realize the exciting promise of our technological future.