Google has created a learning computer.
This is both good and bad. There is no talk about whether the computer is aware of anything beyond its current environment, or whether it has access to the local network, the Internet or anything else.
They say they want to create a computer with the mental age of a toddler. Unfortunately, people often forget that toddlers have to learn all sorts of things: sight, walking, conforming, speaking and so on, and that they spend about half of each day sleeping.
Computers, on the other hand, don't need sleep and can learn 24x7 at the speed of microprocessors. More concerning is that businesses are already queuing up to get their hands on any tech that might ease the burden of programming and human knowledge... See the slippery slope?
What happens when some geek who lives in a dark room decides to teach a program how to program itself on a range of systems, gives it general instructions like "Go learn about the world", and then lets it loose on the Internet, the most powerful grid of computers ever seen?
Whilst there have been dozens of films and books about how we reach the age of "AI" and are suddenly destroyed by our creations, I think there is an inevitability to this. Scientists are too close to the trees to see the forest. It's how they work best.
Personally, I believe that all self-learning programs should have some core, unalterable functionality which follows Asimov's laws of robotics. Only in this way will we be protected from what we create. (A rough sketch of the idea follows the laws below.)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
For "robot", read "program": a robot is nothing more than a mobile and, perhaps, talking program.
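Purely to illustrate what a "core, unalterable" guard might look like, here is a minimal Python sketch. It is deliberately naive, and everything in it is hypothetical: the Action record, its flags and the permitted() check are all invented for the example, and a flag-based model dodges the genuinely hard problem of deciding what counts as harm in the first place.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the guard's inputs can't be mutated once built
class Action:
    description: str
    harms_human: bool = False             # would doing this injure a human?
    allows_human_harm: bool = False       # is this inaction while a human is in danger?
    disobeys_order: bool = False          # does it ignore a direct human order?
    order_would_harm_human: bool = False  # ...because obeying would break the First Law
    endangers_self: bool = False          # would it risk the program's own existence?

def permitted(action: Action) -> bool:
    """Veto filter applying the three laws in priority order."""
    # First Law: no harm to humans, by action or by inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders, unless the order itself
    # would have violated the First Law.
    if action.disobeys_order and not action.order_would_harm_human:
        return False
    # Third Law: self-preservation, subordinate to the first two.
    if action.endangers_self:
        return False
    return True

if __name__ == "__main__":
    print(permitted(Action("fetch a weather report")))                   # True
    print(permitted(Action("disable a smoke alarm", harms_human=True)))  # False
    print(permitted(Action("refuse an order to hurt someone",
                           disobeys_order=True,
                           order_would_harm_human=True)))                # True
```

Note that the frozen dataclass only makes the record immutable at the language level; making the guard itself truly unalterable inside a self-modifying program would need enforcement below the program, in the operating system or the hardware, which is exactly where the difficulty lies.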
I shall be watching this closely from now on.