The problem with the AI we have today is that it can only learn one thing at a time. AlphaGo Zero, for example, may have learned to play Go on its own, but it would have to forget Go in order to learn chess, a limitation known as catastrophic forgetting.
Drawing on neuroscience, Google DeepMind has developed an AI program that can learn continuously: by mimicking the way the brain consolidates important connections, it retains the key lessons from previous tasks while it learns new ones.
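DeepMind calls the technique elastic weight consolidation (EWC). In rough terms: after a task is learned, each network weight is scored by how important it was to that task (via the Fisher information), and training on the next task adds a penalty that keeps the important weights close to their old values. Here is a minimal sketch of that idea, assuming PyTorch; the function names, the data loader, and the `ewc_lambda` hyperparameter are illustrative placeholders, not DeepMind's actual code:

```python
import torch

def fisher_diagonal(model, data_loader, loss_fn):
    """Estimate the diagonal of the Fisher information for each parameter.

    Parameters with large Fisher values mattered most to the old task,
    so EWC will resist changing them while training on a new task.
    """
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for inputs, targets in data_loader:
        model.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                # Squared gradients approximate the Fisher diagonal.
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(data_loader) for n, f in fisher.items()}

def ewc_penalty(model, old_params, fisher, ewc_lambda=100.0):
    """Quadratic penalty anchoring important weights near their values
    from the previous task: (lambda / 2) * F_i * (theta_i - theta*_i)^2."""
    penalty = torch.zeros(())
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * ewc_lambda * penalty

# Training on a new task then becomes:
#   loss = task_loss_fn(model(x), y) + ewc_penalty(model, old_params, fisher)
# where old_params is a snapshot of the model's parameters taken after
# the previous task finished training.
```

The effect is that weights the old task barely used stay free to adapt, while the ones it depended on are held in place, so the network learns the new task without erasing the old one.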
Implications for AI
While Google engineers say there is still a long way to go before a true artificial general intelligence (AGI) can be built, one should remember how AI experts in 2015 inaccurately predicted that it would take 12 years for an AI to beat a human player at Go. With the amount of data and funding Google controls, it would not be a surprise if the company has already reached AGI milestones it hasn't publicized.
One should also remember that Ray Kurzweil, Google's director of engineering, predicted that the singularity will be achieved in 30 years. We might think we still have 26 years to go, but that timeline may have just been shortened by this new innovation from DeepMind.
Source: https://www.theguardian.com/global/2017/mar/14/googles-deepmind-makes-ai-program-that-can-learn-like-a-human