We are still a very long way from building a computer that can pass for human, but two recent developments put us a bit closer to that goal...
For one thing, psychiatrist Ralph Hoffman of Yale and computer scientist Risto Miikkulainen of the University of Texas at Austin have made a schizophrenic computer. In an attempt to uncover the roots of the disorder in humans, they took an artificial neural network known as DISCERN and began feeding it "stories" while inflicting different forms of damage on its modules. ANNs are programs designed to mimic the processes of a biological brain by isolating small sections of code called "modules" and forming a network between them. Rather than being explicitly told what to do like a conventional program, DISCERN has to learn the proper response to a given input. To test DISCERN, the researchers fed it short stories told in either the first or third person and had it repeat the tales back to them. Once the machine had learned to understand and repeat a story like a normal human adult, they began modifying the modules in different ways to mimic various forms of brain damage. In one instance they reprogrammed the memory encoder to learn at an accelerated rate, so that it would remember story details normally dismissed as irrelevant.
However, instead of learning faster it got confused, mixing up stories with different plot lines and inserting itself into third-person stories; at one point it claimed to have planted a bomb (a detail from a story about a terrorist attack). This resembled the schizophrenia symptoms known as derailment and delusion, leading the researchers to conclude that accelerated learning might be one cause of schizophrenia.
While this was intentional, it seems to me that an AI could be programmed with a dangerously fast learning rate and go insane by accident.
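To see why "learning too fast" can wreck a network rather than improve it, here is a toy sketch (this is not DISCERN's actual code, and the function and numbers are made up for illustration): in gradient-based learning, a modest learning rate settles onto the target, while an oversized one overshoots on every step and the weight spirals away from anything sensible.

```python
def train(learning_rate, steps=50):
    """Minimize the error (w - 3)**2, so the 'correct memory' is w = 3."""
    w = 0.0
    for _ in range(steps):
        gradient = 2 * (w - 3)      # derivative of (w - 3)**2
        w -= learning_rate * gradient
    return w

normal = train(0.1)    # converges close to the target, w ≈ 3
too_fast = train(1.5)  # every step overshoots; the weight blows up
print(normal, too_fast)
```

With `learning_rate=1.5` the error roughly doubles each step instead of shrinking, which is a crude analogue of a memory module that amplifies every detail instead of filtering it.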
Now, in slightly-less-risky-to-the-continued-existence-of-the-species news, Google is funding a project to teach computers regret. The project most likely will not give machines the ability to actually feel emotion, but it should allow them to measure the distance between the desired result and the actual result, hopefully convincing them to try harder next time. With any luck, "don't kill humans" will always be on the objective list.
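Stripped of the emotional framing, "regret" in this sense is just a number: the gap between the best outcome that was available and the outcome the machine actually got. A minimal sketch (my own illustration; the function name and payoffs are invented, not from the Google project):

```python
def regret(payoffs, chosen_index):
    """Regret = best available payoff minus the payoff actually received."""
    return max(payoffs) - payoffs[chosen_index]

# The machine picked option 1 (payoff 4) when option 2 would have paid 9:
print(regret([2, 4, 9], 1))  # → 5
print(regret([2, 4, 9], 2))  # → 0 (it chose the best option, no regret)
```

A learner that keeps driving this number toward zero is, in effect, "trying harder next time" without feeling anything at all.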
mIGHt wE stILL hAve hopE?