How Does Deep Learning Relate to AI?

 

A conversation with Chris Barngrover, Director of Engineering - AI - Scene Understanding. This is a continuation of our conversation about Deep Learning.


How does deep learning relate to AI and machine learning?

Artificial intelligence, or machine intelligence as we often think of it, is when a robot or machine runs algorithms that allow it to make discrete and sometimes complex decisions, providing the appearance of intelligence. Much of the AI that exists on platforms like the ones we develop at Shield AI uses carefully designed and provable algorithms that control the choices of exploration and path planning.


Machine learning is a subset of AI. It refers to fitting a model to example data such that the model can later be used to inform perception, cognition, or action. In robotics, this data-driven approach to producing models can be used for everything from sensor modeling to controls to scene understanding. The learning approach and the amount of data will differ depending on the application.
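Fitting a model to example data can be as simple as a least-squares line fit. The toy sketch below (plain Python, with made-up data points) shows the basic pattern that more elaborate learning approaches generalize: choose a model family, then pick the parameters that best match the examples.

```python
def fit_line(points):
    """Least-squares fit of y = m*x + c to example (x, y) pairs --
    the simplest case of 'fitting a model to data'."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

# Illustrative data that happens to lie exactly on y = 2x + 1.
m, c = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
```

Once fitted, the model (here just `m` and `c`) can be used to make predictions on new inputs, which is the same role a learned model plays in perception, cognition, or action.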

Deep learning is a form of machine learning that uses neural networks with many layers, which are adept at fitting complex models to large amounts of data. If the network uses convolutions to process images into features, then deep learning can be used to fit a model for object classification. This is a very common application, but deep neural networks can be used to fit models for many other problems -- for example, state estimation using available sensor data as input.
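As an illustration of "many layers," here is a minimal forward pass through a small fully connected stack in plain Python. The layer sizes and weights are made up for the example; a real deep network would have many more layers and learned parameters, and for images would typically use convolutional layers.

```python
def dense(x, weights, biases):
    """One fully connected layer: y_j = sum_i x_i * weights[i][j] + biases[j]."""
    return [sum(xi * w for xi, w in zip(x, col)) + b
            for col, b in zip(zip(*weights), biases)]

def relu(x):
    """Elementwise nonlinearity between layers."""
    return [max(0.0, v) for v in x]

def forward(x, layers):
    """Pass the input through a stack of (weights, biases) layers,
    applying ReLU between layers but not after the last one."""
    for k, (W, b) in enumerate(layers):
        x = dense(x, W, b)
        if k < len(layers) - 1:
            x = relu(x)
    return x

# A toy two-layer network: 3 inputs -> 4 hidden units -> 2 outputs.
layers = [
    ([[0.1, -0.2, 0.3, 0.0],
      [0.4, 0.1, -0.1, 0.2],
      [-0.3, 0.2, 0.1, 0.1]], [0.0, 0.1, 0.0, -0.1]),
    ([[0.2, -0.1],
      [0.1, 0.3],
      [-0.2, 0.1],
      [0.0, 0.2]], [0.0, 0.0]),
]
output = forward([1.0, 0.5, -0.5], layers)
```

"Deep" simply means stacking many more such layers, so the network can compose simple transformations into very complex ones.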


What are the applications of deep learning?

Deep learning can be applied to a great many problems in processing sound, text, user data, images, robotic sensors, and more. It is important to remember that there are other learning models available in addition to neural networks, and that the adjective “deep” raises the question, “How deep?”


The design decision of depth is driven by the available compute on the target platform and the amount of data available for training. Notably, deep neural networks continue to show improvements with increased amounts of training data where other approaches tend to plateau. But there are also limitations to deep learning with neural networks that are important to consider.


What are the limitations? Are they inherent or will they be able to be solved through further research and development?

Like all machine learning, deep learning is only as good as the data, but unlike other machine learning approaches, deep learning models require huge amounts of data to see results. Since the learning type that produces the best results is still supervised learning, where each training example is given with the appropriate ground-truth label, deep neural networks need huge amounts of labeled data -- acquiring and labeling that data can sometimes be more challenging and costly than the actual machine learning engineering.
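A minimal sketch of the supervised setup: every training example carries a ground-truth label, and the parameters are fitted to match them. This toy uses a single-neuron logistic model and made-up data rather than a deep network, but the labeled-data requirement it illustrates is the same.

```python
import math

# Toy labeled dataset: (feature, label) pairs.
# Supervised learning needs BOTH the input and the correct answer.
data = [(0.0, 0), (1.0, 0), (2.0, 1), (3.0, 1)]

w, b = 0.0, 0.0   # model parameters, learned from the labels
lr = 0.5          # learning rate

def predict(x):
    """Sigmoid of a linear model: the estimated probability the label is 1."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Stochastic gradient descent on the logistic loss.
for _ in range(1000):
    for x, y in data:
        p = predict(x)
        w -= lr * (p - y) * x
        b -= lr * (p - y)
```

After training, `predict` assigns low probability to inputs that were labeled 0 and high probability to those labeled 1; without the labels, there would be nothing for the gradient steps to push toward.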


Another limitation of deep neural networks is that their many layers have no direct or human-discernible output. These “hidden” layers are why deep learning models are often referred to as “black boxes”. There is an entire area of research around visualizing and understanding the hidden layers of neural networks.


There are also compute considerations for the training and runtime inference of deep networks. The depth of the network and the design of each layer determine the number of parameters to train and therefore the training complexity. Training a complex network on millions of examples may take days and require access to large compute clusters. At inference time, the resulting network may demand substantial compute or run slowly on the available resources.
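The link between depth, width, and training cost can be made concrete by counting parameters. This sketch counts trainable parameters for hypothetical fully connected networks; the layer sizes are illustrative, not from any particular system.

```python
def dense_params(layer_sizes):
    """Trainable parameters in a fully connected network: each layer has
    (inputs * outputs) weights plus one bias per output."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Depth and width drive cost: the deeper, wider stack has far more
# parameters to fit, hence more compute per gradient step and at inference.
shallow = dense_params([784, 32, 10])             # 25,450 parameters
deep = dense_params([784, 512, 512, 512, 10])     # 932,362 parameters
```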


The limitations of deep learning are primarily inherent but can be mitigated by continued research and development in academia and industry. Despite the known deficiencies, deep neural networks are very powerful and are generating monumental breakthroughs in the domains of image processing, natural language processing, and speech recognition, to name a few.


On the topic of research, what does state of the art mean in relation to deep learning? What are most of the research papers of late focused on?

One current buzz topic within the domain also relates to the limitations around large amounts of data and the hidden layers of the network -- generative adversarial networks (GANs). Recent research has shown that deep-learned models are susceptible to adversarial attacks, where small changes to an image can produce incorrect results. These perturbed images can be generated by a network trained to limit the visibly perceptible changes.
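As a toy illustration of an adversarial attack (a sketch, not any particular system's method), here is a fast-gradient-sign-style perturbation applied to a hypothetical linear classifier: every input value changes by at most `eps`, yet the prediction flips. Deep networks are attacked the same way, using the gradient of the loss with respect to the input instead of the weights directly.

```python
def score(x, w, b):
    """Linear classifier: positive score -> class 1, negative -> class 0."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def perturb(x, w, eps):
    """Fast-gradient-sign-style attack: nudge each input by eps in the
    direction that increases the score (for a linear model the gradient
    of the score w.r.t. the input is just w), so each change stays tiny."""
    return [xi + eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

w, b = [0.5, -1.0, 0.25], 0.0      # illustrative trained weights
x = [0.2, 0.3, 0.4]                 # score is negative: class 0
x_adv = perturb(x, w, eps=0.1)      # each value changes by at most 0.1,
                                    # but the score turns positive: class 1
```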


Beyond the popularity of GAN research, there is continued research on improving the training of networks, pruning networks to reduce their size, and making networks faster. There is research on more complex problems like 3D point cloud processing. There is also a lot of research into interesting ways to use semi-supervised and unsupervised learning, where there is less or no need to prepare annotated datasets.
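Pruning, mentioned above, is often done by weight magnitude. A minimal sketch, assuming a flat list of trained weights (the numbers are illustrative): zero out the smallest-magnitude weights, which makes the network sparser and cheaper to store and run.

```python
def prune_by_magnitude(weights, keep_fraction):
    """Magnitude pruning: keep roughly the top `keep_fraction` of weights
    by absolute value and zero out the rest."""
    ranked = sorted((abs(w) for w in weights), reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    threshold = ranked[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Keeping half the weights zeroes out the three smallest ones.
pruned = prune_by_magnitude([0.9, -0.05, 0.4, 0.01, -0.7, 0.02], 0.5)
```

In practice pruning is usually followed by fine-tuning, since removing weights perturbs the model's outputs.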


What are some of the strengths and weaknesses of deep learning as compared to other branches of machine learning?

Deep neural networks are particularly effective at producing quality models on very large datasets, where other types of machine learning often have diminishing returns. They also do well on complex problems for which the best features are either non-obvious or there is no expert available to produce them. When the features can be well curated, other machine learning techniques are worth considering. Neural networks with many layers are more costly in time and compute for both training and inference. Finally, the resulting model from deep learning will produce outputs that are often more difficult to understand and explain.

