According to physicist Stephen Hawking, humanity likely only has about 1,000 years left on Earth. He also warns that the only thing that could save us from certain extinction is creating colonies in other parts of the Solar System.
“[W]e must . . . continue to go into space for the future of humanity,” Hawking explained. “I don’t think we will survive another 1,000 years without escaping beyond our fragile planet.”
Hawking’s concerns about humanity’s future have also led him to weigh in on artificial intelligence, which he has said will be either “the best, or the worst, thing ever to happen to humanity.”
Meanwhile, billionaire entrepreneur Elon Musk has announced his hope to establish a human colony on Mars in the next few decades through his aerospace firm SpaceX. “I don’t have a doomsday prophecy,” Musk said, “but history suggests some doomsday event will happen.”
But Hawking has estimated that self-sustaining human colonies on Mars won’t be a practical option for at least another 100 years, and emphasizes our need to be extremely careful in the coming decades.
“Although the chance of disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next 1,000 or 10,000 years,” Hawking noted. “By that time, we should have spread out into space and to other stars, so a disaster on Earth would not mean the end of the human race.”
Setting aside the severe effects of climate change, global pandemics driven by antibiotic resistance, and the growing nuclear capabilities of warring nations, we may soon be confronted with a kind of enemy we have no experience dealing with.
Last year Hawking was a part of a coalition that included Elon Musk and more than 20,000 researchers and experts who called for a ban on the development of autonomous weapons capable of firing on targets without human intervention.
Musk’s new research initiative, dedicated to the ethics of AI, described today’s robots as completely submissive to humans, but the concern remains over what happens when we remove too many of their limitations. “AI systems today have impressive but narrow capabilities,” the founders explained.
“It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”
To make matters more complex, imagine if we were to create erratic robots with even more intelligence and strength than we have, only to find out that aliens have picked up on the signals we’ve been putting out into the Universe. And what if, as we struggle with increasingly aggressive climate change, those aliens become emboldened, sniffing out a weakened enemy on a habitable planet? We would likely face a kind of war we never imagined: one against extraterrestrial life.
“I am more convinced than ever that we are not alone,” Hawking notes in his new online film, Stephen Hawking’s Favourite Places. And if the aliens do know of us, “they will be vastly more powerful and may not see us as any more valuable than we see bacteria.”
Hawking’s 1,000-year deadline also underscores our need to establish a presence elsewhere in the Solar System. Nevertheless, Hawking said it is a “glorious time to be alive and doing research into theoretical physics,” praising how far our fundamental understanding of the universe has advanced in his lifetime. “Our picture of the universe has changed a great deal in the last 50 years and I am happy if I have made a small contribution.”
“The fact that we humans, who are ourselves mere collections of fundamental particles of nature, have been able to come this close to understanding the laws that govern us and the universe is certainly a triumph.”
Source: collective-evolution.com