It’s been a little over a month since the team at Google’s DeepMind announced a major breakthrough in human voice mimicking. Called WaveNet, the system models the individual sound waves of human speech, and when the researchers compared its output against existing text-to-speech programs, including Google’s own, they say it outperformed all of them by at least 50 percent, bringing us closer to a more realistic text-to-speech future.
Today, though, isn’t about WaveNet but about another DeepMind milestone: a system that can learn from its own memory. Put simply, it no longer requires constant human input, because it can draw on data it has already stored.
According to an article on Science Alert, the new hybrid system – called a differentiable neural computer (DNC) – pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank.
This means the DNC works somewhat like the human brain, which relies on stored data (what we see or hear) to make judgements. In the DNC’s case, it combines external memory (similar to hard drive storage) with a neural network, which is made up of many interconnected nodes.
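To make that pairing concrete, here is a minimal sketch (not DeepMind’s actual implementation) of the core mechanism that lets a network use external memory: content-based addressing, where a "key" vector is compared against every memory slot and the read is a softmax-weighted blend of the slots. Because the weighting is smooth, the whole read is differentiable and trainable. All numbers and dimensions below are made up for illustration.

```python
import numpy as np

def cosine_similarity(key, memory):
    # Similarity between the key and each row (slot) of the memory matrix.
    key_norm = key / (np.linalg.norm(key) + 1e-8)
    mem_norm = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    return mem_norm @ key_norm

def content_read(key, memory, sharpness=10.0):
    # Softmax over similarities yields differentiable "attention" weights,
    # so the read operation can be trained end to end by gradient descent.
    sim = sharpness * cosine_similarity(key, memory)
    weights = np.exp(sim - sim.max())
    weights /= weights.sum()
    return weights @ memory  # weighted blend of memory slots

# Toy external memory: 4 slots holding 3-dimensional vectors.
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 1.0, 0.0]])

# A key resembling slot 1 retrieves (mostly) slot 1's contents.
read_vector = content_read(np.array([0.1, 0.9, 0.0]), memory)
print(np.round(read_vector, 2))
```

The point of the sketch is the design choice: the memory lives outside the network, so its capacity can grow without retraining the network itself.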
DeepMind researchers Alex Graves and Greg Wayne explain: “These models… can learn from examples like neural networks, but they can also store complex data like computers.” In doing so, the DNC can continually improve, comparing its output against the desired, correct results.
They have essentially tried to build an improving, brain-like system capable of self-learning and self-correction. It is a major achievement for Alphabet (Google’s parent company), which just a month ago announced another breakthrough in artificial intelligence technology that it says beats what currently powers Apple’s Siri and other artificial intelligence platforms.
Google Assistant already outranks its rivals, and a future infusion of DeepMind’s work could make it better still, leaving this space Google’s to control.
In the attached video, the DNC, given basic information about a family tree, was able to keep drawing additional connections, and it performed the task better in the future without any further input.
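To illustrate the kind of inference involved (the names and relations below are invented): given only parent facts, a grandparent relation can be derived by composing the parent relation with itself. The DNC learns such rules from examples; here the rule is hand-coded purely to show what a correct inference looks like.

```python
# Toy family tree: child -> parent (hypothetical names).
PARENT = {
    "Tom": "Anna",
    "Ben": "Anna",
    "Anna": "Ruth",
}

def grandparent(child):
    # Compose the parent relation with itself: parent of a parent.
    p = PARENT.get(child)
    return PARENT.get(p) if p else None

print(grandparent("Tom"))  # Anna is Tom's parent, Ruth is Anna's
```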
Similarly, with the London Underground map, once the DNC learned the basics it could work out more complex relationships and routes without any extra help, relying on what it already held in its memory banks.
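For a sense of what the route-finding task asks, here is the same problem solved the conventional way, with an ordinary breadth-first search over a tiny, hypothetical subset of the Underground map. This is not the DNC’s learned solution; the DNC is notable precisely because it answers such queries without being given a search algorithm.

```python
from collections import deque

# Toy transit graph: station -> neighbouring stations (hypothetical subset).
TUBE = {
    "Bond Street":          ["Oxford Circus", "Green Park"],
    "Oxford Circus":        ["Bond Street", "Victoria", "Tottenham Court Road"],
    "Green Park":           ["Bond Street", "Victoria"],
    "Victoria":             ["Oxford Circus", "Green Park"],
    "Tottenham Court Road": ["Oxford Circus"],
}

def shortest_route(start, goal):
    # Breadth-first search explores stations in order of hop count,
    # so the first path to reach the goal is a shortest route.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TUBE.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

print(shortest_route("Bond Street", "Tottenham Court Road"))
```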