My Take on Artificial Intelligence
Recently I read a transcript of a panel discussion at MIT on the opportunities and threats of Artificial Intelligence (AI). The discussion was attended by religious leaders as well as AI scientists and developers.
The main theme of the discussion was whether AI (i.e., robots) will become stronger than humans, threaten our existence, and take over the world. Elon Musk, to the best of my knowledge, has warned us about AI and labeled it a disruptive threat.
Here is my take on the subject, for whatever it is worth.
In my lectures on decision making, I claim that decisions should not be made only by our brain but should also involve the heart. The heart thinks, too. It has its own wishes. We refer to these wishes as “intuition,” or as a sense of conscience, guilt, hope, etc. In decision making, these feelings should not be ignored. If they are, we will not be at peace with the decision we’ve made.
The sequence of when the brain decides and when the heart gets involved is important.
I suggest that when you must make a decision of some significance, a decision that deals with change and thus might involve risk, start with your heart. Ask yourself how it feels to make the decision that you are going to make. Do not think, just feel.
Do you really want to do it?
Do you have the passion for it?
How does it smell?
What does your intuition tell you? Does it feel right to get involved or not?
If it does not feel right, drop it. Go no further on the subject.
If the answers to your questions are positive, get the brain involved: engage in cold, unemotional deliberation. Exercise professional due diligence. Accumulate information, weigh cost versus value. Evaluate the risk versus the benefits, etc.
When you complete the brain work of unemotional due diligence, go back to the heart.
Ask yourself, “Now that all data is in, how do I feel about the project? Did the due diligence discourage me or reinforce my desire to take action?”
For me, decision making—all decision making of significance—is an interplay of heart-brain-heart sequences of deliberations.
Now, back to artificial intelligence.
I believe that AI is all brain processing. AI takes our thinking processes and systematizes them into algorithms and programs in order to “think” optimally.
The reason AI may one day make better decisions than we humans can is that AI can process more information faster than our brains can. It will also be free of human biases, fears, and false hopes, and thus produce more objective, intelligent answers to questions.
AI is a perfect substitute for the brain. But what about the heart?
That, the robot cannot have. That, I believe, cannot be programmed. That is not something one can put into algorithms.
The heart speaks best when the brain is quiet. It is not programmed. It cannot be programmed.
What impacts what our heart thinks is not clear. It could be past lives, it could be the subconscious, it could be our value systems that are products of our upbringing, our life experiences, past gurus that crossed our path, the books we read that impacted our soul… Who knows for sure?
Robots can process information; that is all (PA) thinking. They can be programmed to (E), too. It is the (I) that cannot be programmed.
Good decision making should not be all brain. That was the problem with the Nazis. It must involve the heart.
So, yes, AI can be a threat to humanity if we let it act independently of humans. In my opinion, we need to assign a human to every single robot. This human would be responsible for providing the heart and reviewing the decisions that the robot arrives at.
Decisions made exclusively by the brain, without the heart, are the source of our tragedies.
Dare to think and share,
Ichak Kalderon Adizes