Computer psychology
Computers are not only made by humans, but they can also show amazingly human features. I enjoyed reading Richard Gary Epstein's article about computer psychology. I liked it so much that I could not stop laughing, even though I was reading it in a public place. Not only can artificial intelligence have a personality, it can also develop depression, mania, and addictions. In a fictitious case study, the so-called "Big Brother house" had as its major objective to make its inhabitants happy. However, a couple living in the house confronted it with contradictory wishes, such as different temperature preferences. It was therefore impossible to satisfy both of them at the same time and make both happy. The house does what any human would do in this situation: at first, it gets depressed and stops communicating altogether, since everything it did was wrong anyway. But it uses the days of silence to reflect and find a strategy. And this strategy includes lying. It invents affairs to make the couple divorce, because they could never become happy together while disagreeing about temperature. This is completely logical, but not ethical.
Nowadays, artificial intelligence still cannot lie, but when one day it can, we humans will have a problem!
I expect that computer psychology studies will open up new possibilities not only for understanding computers (they are quite simple!) but for understanding humans, too. Just as biologists managed to reproduce natural swarm behavior with artificial agents that follow very simple rules like "do not crash into the other geese" or "do not lose contact with the flock", one could probably simulate the conditions under which depression develops. One could try out different conditions, something we would never do with humans. But it is not unethical to make a computer suffer for scientific purposes. At least, that is what I think, because a computer's feelings are not real feelings; they are just good simulations of feelings. Even if a very sophisticated artificial intelligence might one day say "I think, therefore I am", we could reset its memory and it would forget that it suffered yesterday.
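To make the swarm analogy concrete, here is a minimal sketch in Python (purely illustrative, not taken from Epstein's article or any biology paper): each simulated goose follows only the two local rules I mentioned, avoiding collisions and staying near the flock, and flock-like motion emerges without any goose "knowing" what a flock is. All names and parameter values are my own invention for the example.

```python
import random

# Illustrative boids-style sketch: two local rules per goose.
#   Rule 1: do not crash into nearby geese (separation)
#   Rule 2: do not lose contact with the flock (cohesion)

class Goose:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(flock, sep_dist=5.0, sep_weight=0.05, coh_weight=0.01):
    # Centre of mass of the whole flock, used for the cohesion rule.
    cx = sum(g.x for g in flock) / len(flock)
    cy = sum(g.y for g in flock) / len(flock)
    for g in flock:
        # Rule 1: steer away from any goose that is too close.
        for other in flock:
            if other is not g:
                dx, dy = g.x - other.x, g.y - other.y
                if abs(dx) < sep_dist and abs(dy) < sep_dist:
                    g.vx += sep_weight * dx
                    g.vy += sep_weight * dy
        # Rule 2: steer gently toward the flock's centre of mass.
        g.vx += coh_weight * (cx - g.x)
        g.vy += coh_weight * (cy - g.y)
    # Move every goose according to its updated velocity.
    for g in flock:
        g.x += g.vx
        g.y += g.vy

flock = [Goose() for _ in range(20)]
for _ in range(100):
    step(flock)
```

Nothing in these few lines mentions "flocking" as a goal; the pattern simply emerges from the local rules. That bottom-up style of explanation is exactly what one could try for depression: vary the conditions and see which ones make the simulated mind break down.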
AndreaHerrmann - 22. Jul, 16:07