A conversation with Professor Nathan Michael, Shield AI’s Chief Technology Officer. This is a continuation of our conversation about Trust and Robotic Systems.
How do you define intelligence in the context of robotic systems?
Within the context of organic systems -- humans, animals, plants -- there are more formal definitions of what we mean by intelligence. Organic intelligence refers to mental acuteness, the ability to learn, to understand and deal with new situations, and the ability to apply knowledge to think abstractly or to manipulate one’s environment.
To connect that notion of intelligence to robotic systems, the question really is, "What is artificial intelligence in the context of robotic systems?" And that is the application of machine learning and the other domains under the umbrella of AI to mechanical, electrical, and computational systems that are integrated together and operate in the real world. Robotic systems that are intelligent exhibit capabilities in terms of autonomy -- they are able to complete certain tasks on their own. And the characteristics of intelligence as they relate to autonomy have well-formed definitions.
What are the characteristics of intelligence that we are looking for in robotic systems?
When we talk about intelligence in humans, we are referencing things like logic and understanding, reasoning, planning, critical thinking, creativity, and learning. These are the attributes that we associate with an organic intelligent system. These characteristics of intelligence manifest in organic systems in varying degrees. We recognize that there are different levels of intelligence; we don’t look at animals and expect that they will engage with the world in the same manner as people.
We also understand that intelligence includes not only our ability to infer or think critically, but also to retain information and use that knowledge in the future. And so, when we talk about an artificially intelligent system, we're talking about this confluence of different capabilities that we would expect to see in intelligent systems. And our inspiration for this idea of intelligence is how we look at it for ourselves -- we use ourselves as our metric. We understand intelligence as it applies to organic systems, even when we talk about it in reference to artificial systems.
So, as we start building artificially intelligent systems, we build in the capability to engage in logic and critical thinking and problem solving and planning. All of these different attributes or capabilities are developed through a combination of machines and algorithms. It requires machine learning, robotics and autonomy. And so, each one of these characteristics that we attribute to organic intelligence appears within the context of artificial intelligence.
How does this idea of intelligence relate back to learning?
When we talk about learning, we're talking about a subset of intelligence. That's an important point. Artificial intelligence is a large, broad field that encompasses many different aspects of what it means to create an intelligent system. Learning, by contrast, is about identifying relationships. For robotic systems, learning may mean identifying a causal relationship -- recognizing a cause and effect -- or identifying a correlation between observations from information sources such as sensors.
Systems -- both organic and artificial -- that are able to engage in an intelligent manner must be able to track what they are learning, understand it, and leverage what is learned in order to operate. As humans, we learn how to take data in, categorize it and store it for recollection and communication. We do this in a less explicit manner than machines, but as we grow and mature, we go through a process that causes us to be able to understand how to consume data, remember and maintain data, and recall it over time. And these kinds of concepts are refined and improved the older an individual gets.
Within the context of machine learning, computational systems are able to recognize and identify relationships through the use of models that map what the robot is perceiving to other kinds of data taken at the same time or in the past. So what we're really asking when we explore how a robotic system learns is, "How do we formulate different types of models that identify relationships? How do we create algorithms that map inputs to outputs?" It's a question of how relationships are mathematically formulated. There are many different techniques for developing these models, but there are two types of models that we generally consider: generative and discriminative.
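To make the idea of "a model that maps inputs to outputs" concrete, here is a minimal sketch in Python: fitting a linear relationship between a sensor input and an observed output by least squares. The sensor readings are invented for illustration; they are not from any real system.

```python
# A minimal sketch of learning as identifying a relationship:
# fit a linear model y = a*x + b mapping an input to an output.
# The data below are invented for illustration.

def fit_linear(xs, ys):
    """Closed-form least-squares fit of y = a*x + b from paired observations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical paired readings: e.g. a raw range-sensor value
# versus an independently measured distance.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

a, b = fit_linear(xs, ys)
print(f"learned mapping: y = {a:.2f}*x + {b:.2f}")
```

Once the relationship is identified, the model can predict outputs for inputs it has never seen -- which is the sense in which the system has "learned" something it can use later.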
What is the difference between generative and discriminative models?
The easiest way to differentiate between generative and discriminative models is to consider a system. A system represents the mapping between input and output data.
A generative model is concerned with how the output signal is generated by the system from the input data. It contemplates the nature of the system and what it is doing in order to take that input data and create that output data. We use generative models whenever we predict how the world is going to change around us, for example: I see a door; the door is moving; doors move in a specific way; if I do not move out of the way, the door will hit me.
Discriminative models are concerned only with the output data and what category that data fits into. Image classification, for example, is typically discriminative because you want to know what is represented in the image. A generative model applied to the same image, by contrast, might seek to identify the camera characteristics that created it.
And so, we use discriminative algorithms when we want to better understand the data that we have at hand. And we use generative algorithms when we care about how that data may change as a function of changing conditions in the environment.
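The contrast can be sketched with a toy one-dimensional example, assuming invented data and deliberately simple models: the generative side fits a Gaussian to each class (modeling how each class produces data, so it can also synthesize new samples), while the discriminative side keeps only a decision boundary between the classes.

```python
import math
import random

# Toy sketch (invented data) contrasting generative and discriminative models.
class_a = [1.0, 1.2, 0.8, 1.1, 0.9]   # e.g. sensor readings near a wall
class_b = [3.0, 3.2, 2.8, 3.1, 2.9]   # e.g. readings in open space

def fit_gaussian(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Generative model: one Gaussian per class, i.e. p(x | class).
mu_a, var_a = fit_gaussian(class_a)
mu_b, var_b = fit_gaussian(class_b)

def classify_generative(x):
    # With equal priors, compare the class-conditional likelihoods.
    return "a" if gaussian_pdf(x, mu_a, var_a) > gaussian_pdf(x, mu_b, var_b) else "b"

def generate_sample(which):
    # Only the generative model can synthesize new data.
    mu, var = (mu_a, var_a) if which == "a" else (mu_b, var_b)
    return random.gauss(mu, math.sqrt(var))

# Discriminative model: only the boundary between classes, i.e. p(class | x).
# Here the boundary is simply the midpoint of the class means -- a stand-in
# for a learned decision rule.
boundary = (mu_a + mu_b) / 2

def classify_discriminative(x):
    return "a" if x < boundary else "b"

print(classify_generative(1.05), classify_discriminative(1.05))
print(classify_generative(2.95), classify_discriminative(2.95))
```

Both models classify the same points the same way here, but only the generative one can answer "what would a new reading from class a look like?" -- which is exactly the prediction capability described above.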
In either case, these types of models play a role in how the system is able to take data in, think about it and associate it with certain properties. These algorithms represent the learning aspect of robotic systems and allow the system to identify relationships and make predictions about future behavior or about potential future rewards.