Metacognitions About a Robot


by Roman Taraban, Ph.D., Texas Tech University

Imagine a time when intelligent robots begin interacting with humans in sophisticated ways. Is this a bit far-fetched? Probably not, as compelling examples already exist. Sophia, a robot, so impressed her Saudi audience at an investment summit in 2017 that she was granted Saudi citizenship. Nadine, another robot, is an emotionally intelligent companion whose "intelligent behavior is almost indistinguishable from that of a human." The coming expansion of artificial intelligence into all aspects of human behavior requires a consideration of possible consequences. If a machine becomes a billion times more intelligent than a human, as some predict will happen by 2045, what will cognitive and social interactions with such superhuman machines be like? Chris Frith (2012) argues that a remarkable human capacity is metacognition that concerns others. But what if the "other" is an intelligent machine, like a robot? Is metacognition about a robot feasible? That is the question posed here. Four aspects of metacognition are considered: the metacognitive experience, theory of mind, teamwork, and trust. Other aspects could be considered, but these four should be sufficient to convey a sense of the human-machine metacognitive possibilities.

[Image: a robot and a human hand fist-bumping]

Flavell (1979) defined metacognitive experiences as follows: “Metacognitive experiences are any conscious cognitive or affective experiences that accompany and pertain to any intellectual enterprise. An example would be the sudden feeling that you do not understand something another person just said” (p. 906). Other examples include wondering whether you understand what another person is doing, or believing that you are not adequately communicating how you feel to a friend. We can easily apply these examples to intelligent machines. For instance, I might have a sudden feeling that I did not understand what a robot said, I might wonder if I am understanding what a robot is doing, or I may believe that I am communicating poorly with the robot. So it appears to be safe to conclude that we can have metacognitive experiences involving robots.

Other instances of metacognition involving intelligent machines, like robots, are problematic. Take, for instance, mentalizing, or Theory of Mind. In mentalizing, we take account of (monitor) others' mental states and use that knowledge to predict (control) others' behavior and our own. In humans, the ability to reason about the mental states of others emerges between the ages of 4 and 6 years and continues to develop across the lifespan. In a typical test of this ability, a child observes a person place an object in drawer A. The person then leaves the room. The child observes another person move the object to drawer B. When the first person returns, the child is asked to predict where that person will look for the object. Predicting drawer A is evidence that the child can think about what the other person believes, and that the child recognizes that the other person's beliefs may not be the same as the child's own knowledge. Theory of mind metacognition directed toward humans is effective and productive; however, theory of mind metacognition directed toward intelligent machines is not likely to work. The primary reason is that theory of mind is predicated on having a model of the other person and being able to simulate that person's experience. Because intelligent machines process information using algorithms and representations that differ from those humans use, it is not possible to anticipate the "thinking" of these machines and thereby predict their behavior in a metacognitive manner, that is, by having a theory of the other mind. Presently, for instance, intelligent machines use deep learning networks and naïve Bayes algorithms to "think" about a problem. The computational methods employed by these machines differ from those employed by humans.
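To make that contrast concrete, here is a minimal sketch, in Python with hypothetical toy data and feature names, of how a naïve Bayes "decision" is computed: the machine counts co-occurrences and multiplies conditional probabilities. Nothing in this computation represents another agent's beliefs, which is exactly what a human theory-of-mind simulation would require.

```python
# Minimal naive Bayes sketch (hypothetical toy task: guess the intent of an
# utterance from two crude features). The point is that the "decision" is
# nothing but counting and multiplying probabilities.
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (features, label) pairs; features is a tuple of strings."""
    label_counts = Counter(label for _, label in examples)
    feature_counts = defaultdict(Counter)  # (position, label) -> value counts
    for features, label in examples:
        for i, value in enumerate(features):
            feature_counts[(i, label)][value] += 1
    return label_counts, feature_counts

def predict(features, label_counts, feature_counts):
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in label_counts.items():
        # log P(label) + sum_i log P(feature_i | label), with add-one smoothing
        score = math.log(count / total)
        for i, value in enumerate(features):
            counts = feature_counts[(i, label)]
            score += math.log((counts[value] + 1) /
                              (sum(counts.values()) + len(counts) + 1))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training data: (utterance type, length) -> intent
data = [(("greeting", "short"), "social"), (("greeting", "long"), "social"),
        (("question", "short"), "task"), (("question", "long"), "task")]
model = train(data)
print(predict(("question", "short"), *model))  # -> "task"
```

Whatever label the program returns, it holds no representation of what the human speaker knows or believes; the false-belief task described above, by contrast, hinges entirely on representing another mind's possibly mistaken belief.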

What about teamwork? According to Frith (2012), humans are remarkable in their ability to work together in groups, and teamwork accounts for many of humans' most impressive achievements. The ability to work together is due, in large part, to metacognition. The specific factor cited by Frith is individuals' willingness to share and explain the metacognitive considerations that prompted their decisions. For group work to succeed, participants need to know the goals, values, and intentions of others in the group. As has already been pointed out, machine intelligence is qualitatively different from human cognition, and that is one barrier to human-machine group work. Further, the benefits of group work depend on a sense of shared responsibility. It is currently unknown whether or how a sense of cooperation and shared responsibility would arise in human-machine decision making and behavior.

There is one more concern related to machine intelligence that is separate from the fact that machines "think" in qualitatively different ways than humans do. It is an issue of trust. In some cases of social interaction, understanding the information being presented is not the issue. We may understand the message but wonder whether its source is reliable. Flavell (1979) anticipated this case when he wrote: "In many real-life situations, the monitoring problem is not to determine how well you understand what a message means but to determine how much you ought to believe it or do what it says to do" (p. 910). When machines get super smart, will we be able to trust them? Benjamin Kuipers suggests the following: "For robots to be acceptable participants in human society, they will need to understand and follow human social norms. They also must be able to communicate that they are trustworthy in the many large and small collaborations that make up human society" (https://vimeo.com/253813907).

What role will metacognitions about super-intelligent machines have in the future? Here I argue that we will have metacognitive experiences involving these machines. Those experiences will occur when we monitor and regulate our interactions with the machines. However, it is not clear that we will be able to attain deeper aspects of metacognition, like theory of mind, because the computations underlying machine intelligence are qualitatively different from human computation. Finally, will we be able to trust robots with our wealth, our children, our societies, our lives? That will depend on how we decide to regulate the construction, training, and deployment of super-intelligent machines. Flavell (1979) often brings affect, emotion, and feelings into the discussion of metacognitive experiences. Kuipers emphasizes the notions of trust and ethics. These are all factors that computer scientists have not yet begun to address in their models of metacognition in intelligent machines (Anderson & Oates, 2007; Cox, 2005). Hopefully, solutions can be found that enable rich and trustworthy relationships with smart machines.

References

Anderson, M. L., & Oates, T. (2007). A review of recent research in metareasoning and metalearning. AI Magazine, 28(1), 12.

Cox, M. T. (2005). Field review: Metacognition in computation: A selected research review. Artificial Intelligence, 169(2), 104-141.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.

Frith, C. D. (2012). The role of metacognition in human social interactions. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1599), 2213-2223.