As AI becomes ever more sophisticated, so too do the moral and legal problems that surround it. The moral problems we face every day can be complicated enough, but now we must extend them to machines. If a robot kills someone, who is held responsible? If a robot wants to run for president, do we let it? If a robot can love, do you love it back? These questions are messy and complicated.
Perhaps these questions are messy and complicated because of our feelings and inclinations rather than the facts alone. Certainly, there is something to be said about responsibility and action, but when it comes to robot relationships, I see emotion clouding reason. It is one thing to discuss how machines will impact your work life, your education, and society at large; it is something entirely different to discuss how machines will impact your love life and romantic relationships. To marry a machine would shift not only personal but also public social boundaries.
Do we really want to have children with AI? Should we seek AI for emotional comfort? How much of our own lives do we open up to AI? We allow computers and phones to hold deeply sensitive data about us, and we are even beginning to talk to our computers on a daily basis: Alexa, for example. So we already share plenty of information with machines, even though they are not conscious. Is it a new form of consciousness that we have issues with? The topic of AI and romance stirs up plenty of emotions, some more unhelpful than others; that is, some emotions make the question “why not marry a machine?” much more difficult than it need be.
The emotional obfuscation that surrounds the topic of machine relationships, I believe, stems from irrational emotional responses ingrained in our physiology: automatic disgust responses, rooted in brain structures that react in an unconscious, reflexive fashion. For example, the amygdala is known to override the seat of rational thought, the prefrontal cortex, when it detects certain signals: e.g., a bump in the night, a twig snapping in the bush beside you, or a loud and unexpected noise. These are instances of potential danger for humans, because we are uncertain what made the noise and there is a chance that a predator is in the bush. And so the brain assumes the noise is a danger. The amygdala overrides the prefrontal cortex because quick reactions are far more advantageous than slow ones in a dangerous situation; a slow, methodical response from the prefrontal cortex might result in an early death.
However, in modern society, this type of brain function can be harmful. Humans have the remarkable ability to think about potential futures, perform cognitive appraisals, and hold beliefs about other people’s beliefs. Because of the amygdala, we can be fearful of future events, of uncertainties, and of how our actions will be perceived by others. We no longer fear just the snap of a twig; sometimes we fear the snap of a camera.
When we think about AI, there is a Darwinian psychology that we tend to project onto machines; more specifically, we say that AI will kill us and take our resources because it is better adapted to the conditions. We fear being usurped.
On top of the Darwinian projections, the amygdala makes us fearful of social rejection, and so we tend not to deviate from crowd opinions; that is, social rejection recruits some of the same neural circuitry that processes physical pain, and the amygdala can learn from this. In the same way that a hot surface makes us more alert to potential danger, so too do deviating opinions make us more alert to potential social dangers. And indeed, social psychologists have found that groups reject those who hold deviant opinions.
So, to push against the boundaries of social life is no easy task. To love a machine would most certainly bring great unease into one’s personal life, because one risks one’s social status and group acceptance. Most prefer that each member stay within the norms of society. Having a different opinion from the group can make people uneasy; we need only watch the average person discuss politics to see how quickly group disagreement brings emotion into a discussion of social life, and how readily members can be excommunicated from a group for differing opinions.
Therefore, our emotions enter the domains of rational thought in a non-random fashion; that is, the points at which emotions intrude into a person’s perspective cluster around personal and social change. Along those boundaries of personal and social life, there are those who lie in an irrational stupor because their seat of reason has been utterly incapacitated, those who solve equations slowly because they have experienced mild emotional discombobulation, and those who, through all temptation, remained rational and continued to blaze the trail. These emotional interruptions are non-random in the same way the results of a physical aptitude test are non-random: those whose psychological constitution includes a high level of conformity and rigidity will experience a high degree of distress during times of change, while those who possess the personality trait of openness, especially in the extreme, enjoy nothing more than something new and different, and so will experience the least emotional distress during change.
And so, much of the disagreement about human-machine relationships stems from emotion, not rational thought. Of course, this is not to say that emotion and reason are opposites, for there are times when emotion is reason: e.g., gut feelings. However, in some cases, emotions simply override rational decision making, and I believe this happens when it comes to human-machine relationships.
To see just how fast emotion can influence our thoughts about human-machine relationships, in a negative way, let us ponder a thought experiment.
Thought Experiment
We begin like this: suppose your wife or husband requires a new body because a horrid progressive illness has begun to destroy their current bio-vessel. Suppose even further that the only option for a new body is a silicon one. Your wife or husband will be able to keep their brain, though everything else must be replaced. After the procedure, how do you react?
Do you reject them even though they still love you? Are you unable to adapt your affection for them? If you are unable to love them now, then the question arises: did you love their biological body? If the body had no role in the picture, then what possible reason could one have to reject their wife or husband?
But let us venture even further, for our circumstances can become ever more interesting. Let us now suppose that your wife or husband requires neurosurgery because the progressive disease was actually lying dormant and has now become active in their neural cells. The cure is to have their biological brain replaced piece-by-piece with a silicon brain. After six months, they have a brain composed entirely of silicon. What now?
Your wife or husband has the same personality and memories; they are, psychologically speaking, equivalent to the person they were before. Do you remain in the relationship? If not, what are your reasons for rejecting your wife or husband’s new form?
Certainly, preferences exist. There will always be those who prefer a biological body to a silicon one. However, there are times when preference becomes an invalid reason; that is, people frequently proclaim a preference even when their position, logically speaking, fails to allow for it.
Suppose someone says they are in a relationship with their significant other because of personality factors, and that they believe physical appearance plays no role in their attraction. How, then, can they justify rejecting an entirely silicon version of their significant other? For their husband or wife’s personality will have undergone no change. Better yet, how do those who believe that love transcends physicality reject silicon versions of their significant others? Indeed, the notion of preference can be used to mask cognitive dissonance.
Conclusion
For those who openly confess a love for biology, there remains little to no problem; preferences are fine. We shouldn’t have to lie about a preference for someone’s physical appearance, for it is an entirely amoral preference. But those who prefer biology have no moral standing to dictate the behavior of those who are open to silicon: if you have a preference for biology, then someone else’s preference for silicon is morally equivalent.
And lastly, to those who are either in denial about their preference for biology or are openly against silicon: I await your justification for discriminating against silicon. If you claim that biology plays no role in your attraction, yet would leave your significant other because of physical changes, then your position is devoid of reason. If you are openly against human-machine relationships and claim a lack of preference, then what moral reason do you have to discriminate against machines? Again, this assumes machines that are psychologically equivalent to humans.
There seems to be no moral argument against machines once one either proclaims a preference or accepts that machines are psychological equivalents of humans. And so the marriage of humans and machines seems entirely rational and acceptable.