A conversation with Andrew Reiter, Shield AI’s Co-Founder and Technical Fellow.
How do you view the role of trust in autonomy? Why is trust so important?
The usefulness of autonomous systems only extends as far as people are willing to trust them. A self-driving car is useless if no driver is willing to let go of the wheel, and a rocket guidance system will never get the chance to navigate space if no one believes that it can.
Trust depends on two factors:
Does it function reliably? When there are consequences to failure, a single error (as judged by humans) is enough to wipe out the trust built up by tens -- or even thousands -- of successes. Technology is often a convenience more than a necessity: we automate tasks not because we are incapable of doing them ourselves, but because doing so would be tedious and would require substantial human capital (or risk). We are only willing to take the route of convenience and efficiency if we trust that the technology will work at least as well as the human effort it replaces.
Do I understand it? A critical component of achieving trust is enabling people to build a mental model of an autonomous system’s behavior. The mental model need not reflect the true underlying algorithms, as long as it can predict the system’s future actions. With a model, a user can quickly gain confidence that the system is behaving as the model predicts; without one, every action is unexpected and unexplainable, leaving the user to question whether the system is working properly. People can learn to trust systems they do not understand, but the reliability threshold becomes substantially higher.
What is the state of trust in technology today? What impact does this have on innovations?
In today’s tech-saturated world, where we can hold conversations with the AI in our pockets, there is not just a growing trust, but a growing expectation, that our technology can do everything we imagine. It seems like every month, AI manages to beat humans at another task, to the point that we’ve come to expect it and now wait impatiently for technology to take the next task off our plate. This expectation boosts innovation, as failure to innovate will rapidly leave a company behind its competitors.
Does the level of trust in technology depend on the type of product? For instance, trusting your email versus trusting an autonomous vehicle?
Absolutely -- trust is inversely proportional to risk. We trusted industrial automation decades ago because the only risk was to the product being produced. Then we began to trust automated systems with our money -- fraud detection, risk assessment and even trading. Soon we will trust autonomous systems with protecting our lives, as we let them drive us on highways, through traffic, and near drivers more hot-headed than the computers in our own cars.
With autonomy, the natural question that arises is: Can it be trusted? Why is it so important to continue questioning the nature of trust in these new technologies?
The holy grail of achieving trust is to be able to make performance guarantees. If we can assert that the error of each subcomponent is bounded, we can compute the error bounds of the full system. Starting from fundamental quantities like accelerometer noise and wind speed probability distributions, we can guarantee that a quadrotor will stay within some distance of its desired trajectory, and thus guarantee that it will not crash.
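The composition described above can be sketched numerically. The snippet below is a minimal illustration, not an actual flight-control analysis: the error bounds, the two-second correction horizon, and the worst-case model (a constant acceleration error integrated over the horizon) are all assumptions chosen for the example.

```python
# Illustrative worst-case bound propagation (all numbers are hypothetical).
# If each error source is bounded, the bounds compose: a bounded
# acceleration error, double-integrated over a time horizon, yields a
# bounded position deviation from the desired trajectory.

ACCEL_NOISE = 0.05   # m/s^2, assumed accelerometer error bound
WIND_ACCEL = 0.30    # m/s^2, assumed worst-case wind disturbance
HORIZON = 2.0        # s, assumed time between trajectory corrections

def position_error_bound(accel_bound: float, horizon: float) -> float:
    """Worst case: the full acceleration error acts for the whole horizon.
    Double integration of constant acceleration gives x = 0.5 * a * t**2."""
    return 0.5 * accel_bound * horizon ** 2

# In the worst case, independent error bounds simply add.
total_accel_bound = ACCEL_NOISE + WIND_ACCEL
bound = position_error_bound(total_accel_bound, HORIZON)
print(f"guaranteed within {bound:.2f} m of the desired trajectory")
```

If that computed bound is smaller than the clearance to the nearest obstacle, the "it will not crash" guarantee follows, which is the shape of argument the paragraph describes.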
Where we are unable to make guarantees, we test -- and test, and test, and test. In fact, we test where we can make guarantees as well. We test hundreds or thousands of times in real life, and thousands or millions more in simulation. We test in conditions well beyond what the average user would experience so that we can find the edge of failure that defines the operational envelope, and then have confidence that the expected conditions are well within it. As new use cases for a technology come up, we add them to the test suite, always working to stay several steps ahead of what a user might want to do.
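A Monte Carlo sweep like the one described can be sketched as follows. Everything here is a toy stand-in: the deviation model, the one-metre tolerance, and the wind-speed sweep are invented for illustration, with the real work living inside a full vehicle simulator.

```python
import random

# Hypothetical pass/fail criterion: a run succeeds if the trajectory
# deviation stays under a tolerance threshold.
TOLERANCE = 1.0  # m, hypothetical maximum acceptable deviation

def simulate_flight(wind_speed: float, rng: random.Random) -> bool:
    """Toy simulation: deviation grows with wind speed plus sensor noise."""
    deviation = 0.1 * wind_speed + rng.gauss(0.0, 0.05)
    return deviation < TOLERANCE

def failure_rate(wind_speed: float, trials: int = 10_000) -> float:
    """Estimate the failure probability at one test condition."""
    rng = random.Random(0)  # fixed seed so test runs are repeatable
    failures = sum(not simulate_flight(wind_speed, rng) for _ in range(trials))
    return failures / trials

# Sweep conditions well past the expected envelope to find the edge
# of failure that defines the operational envelope.
for wind in (5, 8, 10, 12, 15):
    print(f"wind {wind:>2} m/s -> failure rate {failure_rate(wind):.3f}")
```

The sweep makes the "edge of failure" concrete: the failure rate stays near zero through the expected conditions and climbs sharply past some wind speed, and that transition is what bounds the operational envelope.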