Skype co-founder Jaan Tallinn has identified the biggest threats to human existence in this century, money.bg writes.
While climate emergencies and the coronavirus pandemic are seen as problems that require urgent global solutions, Tallinn told CNBC that the biggest existential risks by 2100 will come from artificial intelligence and synthetic biology.
“Climate change will not be an existential risk as long as there are scenarios for avoiding it,” he said. “And there are many.”
The United Nations has recognized the climate crisis as a “defining issue for our time”, describing its impact as global and unprecedented in scale.
The organization also warned that there is alarming evidence that “tipping points, leading to irreversible changes in major ecosystems and the planetary climate system, may already have been reached.”
Citing a book by Oxford professor Toby Ord, Tallinn says there is a one-in-six chance that humanity will not survive this century. One of the biggest potential threats in the near future is artificial intelligence, the book says.
Meanwhile, the book puts the probability of climate change causing human extinction at less than 1%.
Synthetic biology is the design and construction of new biological parts, devices and systems, says Tallinn.
However, he is more concerned about the development of artificial intelligence (AI), which is why he is investing millions of dollars to try to ensure the technology develops safely.
This includes early investments in AI laboratories such as DeepMind (in part so that he can monitor what they are doing) and funding for AI safety research at universities such as Oxford and Cambridge.
Predicting the future of AI
As for AI, no one knows how intelligent machines will become – it is impossible to predict how advanced AI technology will be in the next 10, 20 or 100 years.
Attempts to predict the future of AI are further complicated by the fact that AI systems are beginning to create other AI systems without human input.
According to the Skype co-founder, there are two main scenarios that AI safety research needs to examine. The first is a laboratory accident in which a research team leaves an AI system training on a cluster of servers overnight and in the morning “the world is gone”.
The second is one in which a research team simply produces a technology that is then adopted and applied across many domains, “which will ultimately have an unfortunate effect”. Tallinn is more focused on the first scenario because scientists think about it less.
Open and closed laboratories
The world’s largest technology companies are spending billions of dollars to improve the state of AI. Although some of their research is published openly, much of it is not, and this is alarming.
Some companies take AI safety more seriously than others, Tallinn said. DeepMind, for example, maintains regular contact with AI safety researchers at places such as the Future of Humanity Institute in Oxford. It also employs dozens of people who are focused on AI safety.
At the other end of the scale, corporate hubs such as Google Brain and Facebook AI Research are less engaged with the AI safety community. Neither company responded to CNBC’s request for comment.
According to Tallinn, many problems in AI cannot be solved in isolation. One of them is the competitive race between developers. If AI development turns into such a race, it would be better to have fewer players in the world.
He cites the invention of the nuclear bomb as an example, noting how many research groups from different countries actually worked on the technology. “I think the situation is similar,” he said.
“If it turns out that artificial intelligence will not be very destructive in the near future, then it would certainly be useful for companies to try to solve some of these problems in a more distributed way,” Tallinn added.