Creating a superintelligent machine is a top priority for transhumanists, and they know that AI cannot be regulated. Understanding the philosophy they adhere to [see Transhumanists Want Homo Sapiens To Become Extinct] makes it clear why this is not a major concern for them. This article, on the other hand, strengthens our call for applying the Precautionary Principle to the development of generalized intelligence.