
Understanding the AI alignment problem
As DeepMind moves deeper into the creation of an artificial general intelligence (AGI), it must confront the alignment problem. Will Iason Gabriel's new work finally resolve it?
SafeLife hopes to address the unintended consequences of AI systems, but Matt Beane argues that it could lead to even more complex unintended consequences.
In 1998, two thinkers made a bet: Christof Koch wagered a case of fine wine that by 2023 someone would discover a signature for consciousness; David Chalmers disagreed. How do new findings in consciousness studies bear on the two positions?
It has been seven years since transhumanist George Dvorsky wrote this landmark article on mind uploading, but despite advances in science and technology, his points remain as true as ever.
Eric Holloway has reached an important conclusion: no materialist theory can completely explain the nature of consciousness.
Artificial general intelligence is like Pandora's box: once it is opened, it will be very difficult for us to prevent the eventual extinction of our species.
How do we take control of the intelligence explosion? How will we navigate the inevitable singularity of the future?
Here's one idea for recontextualizing science: bringing back the imaginative and creative aspect of all scientific activity.
The best way we can look toward the future is by stopping ourselves from dwelling in the past.
Not all technological development will benefit us. Some of it, unknown to its developers, worsens pre-existing conditions.