Three main technology risks by 2040: AI rivalry, generative AI and invisible cyber attacks

Surprisingly rapid changes are occurring in the technology and reach of computer systems. There are exciting advances in artificial intelligence, in the masses of tiny interconnected devices we call the “Internet of Things” and in wireless connectivity. Unfortunately, these improvements bring potential dangers as well as benefits. To have a secure future, we need to anticipate what might happen in computing and deal with it early. So what do experts think will happen and what could we do to prevent bigger problems?

To answer that question, our research team from the universities of Lancaster and Manchester turned to the science of looking into the future, called “forecasting”. No one can predict the future, but we can make forecasts: descriptions of what might happen based on current trends.

Indeed, long-term forecasts of technology trends can prove incredibly accurate. And a great way to get forecasts is to combine the ideas of many different experts to find where they agree.

We consulted 12 “futurist” experts for a new research paper: people whose roles involve making long-term forecasts of the effects of changes in computer technology, looking ahead to 2040.

Using a technique called a Delphi study, we combined futurists’ predictions into a set of risks, along with their recommendations for addressing those risks.

I. Software problems

Experts predicted rapid advances in artificial intelligence (AI) and related systems, leading to a world far more computer-driven than today’s. Surprisingly, however, they expected little impact from two much-hyped innovations. Blockchain, a way of recording information that makes it difficult or impossible to tamper with, is largely irrelevant to today’s problems, they suggested; and quantum computing is still in its early stages and may have little impact over the next 15 years.

Futurists have highlighted three main risks associated with computer software development, as follows.

1. AI competition leads to problems

Our experts suggested that because many countries view artificial intelligence as an area in which to gain a competitive technological edge, software developers will be encouraged to take risks in their use of AI. Combined with AI’s complexity and its potential to surpass human capabilities, this could lead to disasters.

For example, imagine that testing shortcuts introduce an error into the control systems of cars built after 2025, one that goes unnoticed amid all the complex AI programming. The fault could even be tied to a specific date, causing large numbers of cars to start misbehaving at the same time and killing many people around the world.

2. Generative AI

Generative artificial intelligence could make truth impossible to determine. For years, photos and videos were very difficult to fake, so we came to assume they are authentic. Generative AI has already radically changed this. We expect its ability to create convincing fake media to improve to the point where it becomes extremely difficult to tell whether an image or video is real.

Suppose someone in a position of trust, a respected leader or celebrity, uses social media to post authentic content but occasionally slips in plausible fakes. For those who follow them, there is no way to tell the difference; it will become impossible to know the truth.

3. Invisible cyber attacks

Finally, the sheer complexity of the systems to be built—networks of systems owned by different organizations, all dependent on one another—has an unexpected consequence. It will become difficult, if not impossible, to get to the root of what is causing things to go wrong.

Imagine a cybercriminal hacking an application used to control appliances such as ovens or refrigerators and switching every appliance on at once. The resulting spike in electricity demand could overload the grid and cause major power outages.

It will be difficult for experts in the electricity industry even to identify which devices caused the spike, let alone notice that they are all controlled by the same application. Cyber sabotage will become invisible and indistinguishable from normal problems.

II. Software jujitsu

The point of such forecasts is not to sow alarm, but to let us start solving the problems now. Perhaps the simplest suggestion the experts put forward was a kind of software jujitsu: using software to protect itself. We can make computer programs perform their own safety audits by adding extra code that checks the program’s output, in effect code that checks itself.
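As a rough illustration of that self-checking idea, here is a minimal Python sketch: a decorator wraps a function and audits every output against a safety condition before releasing it. The `cruise_speed` controller and its speed bounds are hypothetical examples invented for illustration, not details from the study.

```python
# Minimal sketch of "software jujitsu": extra code that audits a
# program's own output at runtime and refuses unsafe results.

def audited(check, message="output failed self-check"):
    """Decorator: run `check` on every result and reject bad outputs."""
    def wrap(fn):
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not check(result):
                raise RuntimeError(f"{fn.__name__}: {message}")
            return result
        return inner
    return wrap

# Hypothetical controller: the 0-130 km/h bound is an invented example.
@audited(lambda speed: 0 <= speed <= 130, "speed outside safe range")
def cruise_speed(target, road_limit):
    # Never exceed the posted road limit.
    return min(target, road_limit)

print(cruise_speed(120, 100))  # 100: the self-check accepts this output
```

The point of the pattern is that the safety check is separate, simpler code than the program it audits, so an error hidden in the complex logic still has to pass a sanity gate before its output reaches the real world.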

Similarly, we can insist that the methods already used to ensure software operates safely continue to be applied to new technologies, and that the newness of these systems is not used as an excuse to ignore good safety practice.

III. Strategic solutions

But experts agreed that technical answers alone will not be enough. Instead, solutions will be found in the interaction between people and technology.

We need to build the skills to deal with these human and technological problems, along with new forms of education that cross disciplines. And governments must establish safety principles for their own AI procurement and legislate for AI safety across the sector, encouraging responsible development and deployment practices.

These forecasts give us a number of tools to solve possible problems in the future. Let’s adopt these tools to realize the exciting promises of our technological future.

