
This New Google Technique Helps Us Understand How Neural Networks are Thinking


Understanding what Hidden Layers Do | How Nodes are Activated | How Concepts are Formed

Interpretability remains one of the biggest challenges of modern deep learning applications. Recent advances in computation models and deep learning research have enabled the creation of highly sophisticated models that can include thousands of hidden layers and tens of millions of neurons. While it's relatively simple to create incredibly advanced deep neural network models, understanding how those models create and use knowledge remains a challenge. Recently, researchers from the Google Brain team published a paper proposing a new method called Concept Activation Vectors (CAVs) that takes a new angle on the interpretability of deep learning models.

To understand the CAV technique, it is important to understand the nature of the interpretability challenge in deep learning models. In the current generation of deep learning technologies, there is a permanent friction between the accuracy of a model and our ability to interpret its knowledge. The interpretability-accuracy friction is the tension between being able to accomplish complex knowledge tasks and understanding how those tasks were accomplished. Knowledge vs. Control, Performance vs. Accountability, Efficiency vs. Simplicity…pick your favorite dilemma and they can all be explained by balancing the tradeoffs between accuracy and interpretability.



Do you care about obtaining the best results, or do you care about understanding how those results were produced? That’s a question data scientists need to answer in every deep learning scenario. Many deep learning techniques are complex in nature and, although they can be very accurate in many scenarios, they can become incredibly difficult to interpret. If we plot some of the best-known deep learning models on a chart that correlates accuracy and interpretability, we get something like the following:

Interpretability in deep learning models is not a single concept and can be seen across multiple layers:


Achieving interpretability across each one of the layers defined in the previous figure requires several fundamental building blocks. In a recent paper, researchers from Google outlined what they considered some of the foundational building blocks of interpretability.

Google summarizes the principles of interpretability as the following:


— Understanding what Hidden Layers Do: The bulk of the knowledge in a deep learning model is formed in the hidden layers. Understanding the functionality of the different hidden layers at a macro level is essential for interpreting a deep learning model.


— Understanding How Nodes are Activated: The key to interpretability is not to understand the functionality of individual neurons in a network, but rather the groups of interconnected neurons that fire together in the same spatial location (a short activation-capture sketch follows this list). Segmenting a network by groups of interconnected neurons provides a simpler level of abstraction for understanding its functionality.


— Understanding How Concepts are Formed: Understanding how a deep neural network forms individual concepts that can then be assembled into the final output is another key building block of interpretability.
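
As a rough, hedged illustration of the second principle (this is not part of Google's method), the snippet below shows one common way to capture a hidden layer's activations in PyTorch using a forward hook, so that groups of co-activating units can be examined. The toy network and the layer being hooked are placeholders rather than a real trained model.

import torch
import torch.nn as nn

# Toy network standing in for a real trained model (placeholder).
model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),   # hidden layer whose activations we inspect
    nn.Linear(32, 10),
)

captured = {}

def save_activations(module, inputs, output):
    # Store the hooked layer's activations for later analysis, e.g.,
    # finding groups of units that tend to fire together.
    captured["hidden"] = output.detach()

hook = model[1].register_forward_hook(save_activations)  # hook the ReLU output
_ = model(torch.randn(8, 64))                            # run a batch of inputs
hook.remove()

print(captured["hidden"].shape)  # torch.Size([8, 32]): per-example hidden activations

Activations captured this way are also the raw material for the CAV sketch shown later in this post.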


Those principles were the theoretical foundation behind Google’s new CAV technique.

Following the ideas discussed in the previous section, the natural approach to interpretability should be to describe a deep learning model’s predictions in terms of the input features it considers. A classic example would be a logistic regression classifier, in which coefficient weights are often interpreted as the importance of each feature. However, most deep learning models operate on features, such as pixel values, that do not correspond to high-level concepts that humans easily understand. Furthermore, a model’s internal values (e.g., neural activations) can seem incomprehensible. While techniques such as saliency maps are effective at measuring the importance of specific pixel regions, they fail to correlate with higher-level concepts.
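
To make this concrete, here is a minimal, hedged sketch of the CAV idea: train a small linear classifier that separates a hidden layer's activations on examples of a human-defined concept (say, images of striped textures) from its activations on random examples, and take the vector orthogonal to the decision boundary as the concept's direction in activation space. This follows the general recipe described in the CAV paper, but the array names and the choice of classifier below are illustrative placeholders, not Google's implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    # concept_acts, random_acts: arrays of shape (n_examples, n_units) holding
    # one hidden layer's activations for concept examples and random counterexamples.
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()           # normal to the learned decision boundary
    return cav / np.linalg.norm(cav)  # unit-length concept direction

Just as in the logistic regression example above, it is the coefficients of this small linear model, operating on activations rather than raw pixels, that carry the human-interpretable meaning: the resulting direction can then be compared against the gradient of a class score at the same layer to ask whether the concept pushes predictions toward that class.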

