On How The Future of AI Depends on Trust
How does trust play into the system’s ability to evolve and adapt to what it is learning?
I can best respond with an analogy. Think about a child: as we teach a child, we are teaching them to engage with the world, to learn about their surroundings, to learn about the implications of the actions they take and the decisions they make, and to understand how those shape their ability to interact with the world and everything in it. We are trusting that we are teaching that individual, the child learning how to be part of society, how to think. We are teaching them how to learn, and we trust that what they learn will be correct.
With children, we use quizzes to make sure that they are learning how to learn and are using that capacity to learn the right things, based on our criteria for what is correct. We measure what they are learning by testing them. The quantitative metrics from these tests and quizzes allow us to judge whether what the child is learning makes sense.
We use these same types of processes and methods to evaluate the learning of AI systems. We introduce quantitative metrics to cross-validate that what our systems are learning is correct, and to test that what they are learning makes sense. We don't know precisely what an AI system is going to learn, but we can say that it will fall within a set of bounds. We can articulate those bounds based on theory and fundamentals, and then validate that those bounds are being preserved. The more the systems operate, and the more we can test them, the more we can understand what they are learning. Then we can tighten those bounds to get a better and better idea of how the system is learning, how it is drawing conclusions, and the nature of those conclusions.
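The bound-checking idea described above can be sketched in code. The snippet below is purely illustrative (the functions, the simulated "learned" outputs, and the numeric limits are all assumptions for the example, not Shield AI's actual validation pipeline): predictions are first checked against broad bounds derived from theory, and as more observations accumulate, an empirical interval tightens around the system's observed behavior.

```python
import math
import random

def validate_within_bounds(predictions, lower, upper):
    """Check that every prediction stays inside the theory-derived bounds."""
    return all(lower <= p <= upper for p in predictions)

def tightened_bounds(samples, z=3.0):
    """Empirical mean +/- z standard errors: the interval shrinks as data grows."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean - half_width, mean + half_width

random.seed(0)
# Hypothetical "learned" outputs: noisy estimates of a true value of 5.0.
outputs = [5.0 + random.gauss(0, 0.2) for _ in range(200)]

# Broad a-priori bounds from theory (e.g., physical limits on the quantity).
assert validate_within_bounds(outputs, 0.0, 10.0)

# With more observations, the bounds tighten around the learned behavior.
lo_small, hi_small = tightened_bounds(outputs[:20])
lo_large, hi_large = tightened_bounds(outputs)
assert (hi_large - lo_large) < (hi_small - lo_small)
```

The design choice mirrors the passage: coarse bounds come first from theory, then repeated testing justifies progressively tighter, data-driven bounds.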
As these systems become more capable, how does the future of AI depend on trust?
One of the greatest challenges with AI is the overwhelming impression that some underlying magic makes it fantastic. It is not magic. It is mathematics. What's being accomplished by AI is exciting, but it is also just theory and fundamentals and math. We are developing engineered systems. They are fantastic engineered systems that can think for themselves and improve over time, so it's an amazing accomplishment from that perspective. But from a science, technology, engineering, and mathematics perspective, it really is just logic and reasoning running forward given data, time, and compute.
Because of this, what we are going to see, as we improve our understanding, is that these systems will perform more and more in a manner aligned with what we would expect. That expectation will eventually be met consistently by a set of concepts, ideas, and technologies. But in the process we'll see lots of proposed strategies that fail miserably. We will go through periods of seeing things work really well, and of seeing things work not-so-well. This will unfold the same way it did when we were learning how to build complex technologies like airplanes: we created many designs that failed miserably, then one that worked just okay, then another that worked better, and then another that worked better still. No one trusted them at first, then eventually a few people started to trust them, and so on. So, technological adoption curves for AI will start to appear.
How does trust factor into the products being developed at Shield AI?
At Shield AI, the notion of trust is central to everything we do and to all the technologies we build. As such, we establish rigorous internal standards and constantly assess the reliability of our systems and their AI. This assessment enables us to understand how we can improve our systems and ensure that these systems are correctly improving themselves.
Stay tuned here for the next round of The Role of Trust, or sign up for the Shield AI Newsletter to be notified of future posts.