AUBER: Automated BERT Regularization

How can we effectively regularize BERT? Although BERT proves its effectiveness in various downstream natural language processing tasks, it often overfits when there are only a small number of training instances. A promising direction for regularizing BERT is to prune its attention heads using a proxy score for head importance. However, heuristic-based methods are usually suboptimal since they predetermine the order in which attention heads are pruned. To overcome this limitation, we propose AUBER, an effective regularization method that leverages reinforcement learning to automatically prune attention heads from BERT. Instead of depending on heuristics or rule-based policies, AUBER learns a pruning policy that determines which attention heads should or should not be pruned for regularization. Experimental results show that AUBER outperforms existing pruning methods, achieving up to 10% better accuracy. In addition, our ablation study empirically demonstrates the effectiveness of our design choices for AUBER.


INTRODUCTION
How can we effectively regularize BERT (Devlin et al. (2018))? In natural language processing, it has been observed that generalization can be greatly improved by fine-tuning a large-scale language model pre-trained on a large unlabeled corpus. In particular, BERT demonstrated such effectiveness on a wide range of downstream natural language processing tasks including question answering and language inference. Despite its recent success and wide adoption, fine-tuning BERT on a downstream task is prone to overfitting due to over-parameterization; BERT-base has 110M parameters and BERT-large has 340M parameters. This problem worsens when only a small number of training instances are available. Some observations report that fine-tuning sometimes fails when a target dataset has fewer than 10,000 training instances (Devlin et al. (2018); Phang et al. (2018)).
To mitigate this critical issue, multiple studies attempt to regularize BERT by pruning parameters or using dropout to decrease its model complexity (Michel et al. (2019); Voita et al. (2019); Lee et al. (2020)). Among these approaches, we regularize BERT by pruning attention heads since pruning yields simple and explainable results and it can be used along with other regularization methods. In order to avoid combinatorial search, whose computational complexity grows exponentially with the number of heads, the existing methods measure the importance of each attention head based on heuristics such as an approximation of sensitivity of BERT to pruning a specific attention head. However, these approaches are based on hand-crafted heuristics that are not guaranteed to be directly related to the model performance, and therefore, would result in a suboptimal performance.
In this paper, we propose AUBER, an effective method for regularizing BERT. AUBER overcomes the limitation of past attempts to prune attention heads from BERT by leveraging reinforcement learning. Rather than relying on a predetermined rule-based policy or heuristics, our method automates the pruning process by learning policies. AUBER prunes BERT sequentially in a layer-wise manner. For each layer, AUBER extracts features that help the reinforcement learning agent determine which attention heads to prune from the current layer. The final pruning policy found by the reinforcement learning agent is used to prune the corresponding layer. Before AUBER proceeds to the next layer, BERT is fine-tuned to recapture the information lost due to pruning attention heads. An overview of AUBER transitioning from the second to the third layer of BERT is shown in Figure 1.
Our contributions are summarized as follows: • Regularization. BERT is prone to overfitting when the training dataset is too small. AUBER effectively prunes appropriate attention heads to decrease the model capacity and regularizes BERT. • Automation. By leveraging reinforcement learning, we automate the process of regularization of BERT. Instead of depending on hand-crafted policies or heuristics which often yield suboptimal results, AUBER inspects the current state of BERT and automatically chooses which attention head should be pruned. • Experiments. We perform extensive experiments, and show that AUBER successfully regularizes BERT improving the performance metric by up to 10% and outperforms other head pruning methods. Through ablation study, we empirically show that our design choices for AUBER are effective.
The rest of this paper is organized as follows. Section 2 explains preliminaries. Section 3 describes our proposed method, AUBER. Section 4 presents experimental results. After discussing related works in Section 5, we conclude in Section 6.

MULTI-HEADED SELF-ATTENTION
A self-attention function maps a query vector and a set of key-value vector pairs to an output. We compute the query, key, and value vectors by multiplying the input embeddings $Q, K, V \in \mathbb{R}^{N \times d}$ with the parameterized matrices $W^Q \in \mathbb{R}^{d \times n}$, $W^K \in \mathbb{R}^{d \times n}$, and $W^V \in \mathbb{R}^{d \times m}$ respectively, where $N$ is the number of tokens in the sentence, and $n$, $m$, and $d$ are the query, value, and embedding dimensions respectively. In multi-headed self-attention, $H$ independently parameterized self-attention heads are applied in parallel to project the input embeddings into multiple representation subspaces. Each attention head $i$ contains parameter matrices $W_i^Q \in \mathbb{R}^{d \times n}$, $W_i^K \in \mathbb{R}^{d \times n}$, and $W_i^V \in \mathbb{R}^{d \times m}$. The output matrices of the $H$ independent self-attention heads are concatenated and once again projected by a matrix $W^O \in \mathbb{R}^{Hm \times d}$ to obtain the final result. This process can be represented as:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(O_1, \ldots, O_H)\, W^O, \qquad O_i = \mathrm{softmax}\!\left(\frac{Q W_i^Q (K W_i^K)^\top}{\sqrt{n}}\right) V W_i^V$$
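As a concrete illustration, here is a minimal NumPy sketch of the multi-headed self-attention computation described above (the function names are ours, not from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo):
    """Q, K, V: (N, d) input embeddings. Wq[i], Wk[i]: (d, n); Wv[i]: (d, m);
    Wo: (H*m, d). Returns the (N, d) multi-headed self-attention output."""
    heads = []
    for Wq_i, Wk_i, Wv_i in zip(Wq, Wk, Wv):
        q, k, v = Q @ Wq_i, K @ Wk_i, V @ Wv_i           # per-head projections
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (N, N) attention weights
        heads.append(attn @ v)                           # (N, m) head output O_i
    return np.concatenate(heads, axis=-1) @ Wo           # concat, then project by W^O
```

Each head attends over the same inputs in its own representation subspace; only the final projection by `Wo` mixes information across heads.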

BERT
BERT (Devlin et al. (2018)) is a multi-layer Transformer (Vaswani et al. (2017)) pre-trained on masked language model and next sentence prediction tasks. It is then fine-tuned on specific tasks including question answering and language inference. It achieved state-of-the-art performance on a variety of downstream natural language processing tasks. BERT-base has 12 Transformer blocks and each block has 12 self-attention heads. Despite its success in various natural language processing tasks, BERT sometimes overfits when the training dataset is small due to over-parameterization: 110M parameters for BERT-base. Thus, there has been a growing interest in BERT regularization through various methods such as pruning or dropout (Lee et al. (2020)).

DEEP Q-LEARNING
A deep Q network (DQN) is a multi-layered neural policy network that outputs a vector of action-value pairs for a given state $s$. For a $d_s$-dimensional state space and an action space containing $d_a$ actions, the neural network is a function from $\mathbb{R}^{d_s}$ to $\mathbb{R}^{d_a}$. Two important aspects of the DQN algorithm as proposed by Mnih et al. (2013) are the use of a target network and the use of experience replay. The target network is the same as the policy network except that its parameters are copied every $\tau$ steps from the policy network. For the experience replay, observed transitions are stored for some time and sampled uniformly from this memory bank to update the network. Both the target network and the experience replay dramatically improve the performance of the algorithm.
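The two mechanisms can be sketched in a few lines of PyTorch. This is a generic illustration of the DQN update (the dimensions and hyperparameters here are placeholders, not the paper's settings):

```python
import random
from collections import deque
import torch
import torch.nn as nn

policy_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target_net.load_state_dict(policy_net.state_dict())  # parameters copied every tau steps
replay = deque(maxlen=5000)                          # experience replay memory bank
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

def dqn_update(batch_size=32, gamma=0.99):
    """One DQN step: sample uniformly from replay, regress Q(s, a) onto the
    bootstrapped target computed with the (frozen) target network."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(replay, batch_size))   # uniform sampling
    s, s2 = torch.stack(s), torch.stack(s2)
    a, r = torch.tensor(a), torch.tensor(r)
    q = policy_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) for taken actions
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values   # target-network bootstrap
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```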

PROPOSED METHOD
We propose AUBER, our method for regularizing BERT by automatically learning to prune attention heads from BERT. After presenting the overview of the proposed method in Section 3.1, we describe how we frame the problem of pruning attention heads as a reinforcement learning problem in Section 3.2 and how we represent the state in Section 3.3. Then, we propose AUBER in Section 3.4.

OVERVIEW
We observe that BERT is prone to overfitting for tasks with little training data. However, the existing head pruning methods rely on hand-crafted heuristics and hyperparameters, which give sub-optimal results. The goal of AUBER is to automate the pruning process for successful regularization. Designing such a regularization method entails the following challenges: 1. Automation. How can we automate the head pruning process for regularization without resorting to sub-optimal heuristics? 2. State representation. When framing the regularization process as a reinforcement learning problem, how can we represent the state of BERT in a way that is useful for pruning? 3. Action search space scalability. BERT has many parameters, many layers, and many attention heads in each layer. When framing the regularization process of BERT as a reinforcement learning problem, how can we handle the prohibitively large action search space for pruning?
We propose the following main ideas to address the challenges: 1. Reinforcement learning. We exploit reinforcement learning, specifically DQN, with accuracy enhancement as the reward. It is natural to leverage DQN for these problems since they have a discrete action space (Sutton & Barto (2018)). Experience replay also allows efficient usage of previous experiences and stable convergence (Mnih et al. (2013)). 2. L1 norm of value matrix. We use the L1 norm of the value matrix of each attention head to represent the initial state of a layer. When a head is pruned, the corresponding entry is set to 0. 3. Dually-greedy manner. We prune the attention heads layer by layer sequentially to reduce the search space. Moreover, we prune one attention head at a time instead of considering all possible pruning combinations at once, so that the action search space becomes more scalable.

AUTOMATED REGULARIZATION WITH REINFORCEMENT LEARNING
AUBER leverages reinforcement learning for efficient search of regularization strategy without relying on heuristics. We exploit DQN among various reinforcement learning frameworks to take advantage of experience replay and to easily handle discrete action space. Here we introduce the detailed setting of reinforcement learning framework.
Initial state As mentioned in Section 2.2, layer $l$ has multiple attention heads, each of which has its own query, key, and value matrices. For layer $l$ of BERT, we derive the initial state $s_l$ using the L1 norm of the value matrix of each attention head. Further details of this computation are elaborated in Section 3.3.
Action The action space a of AUBER is discrete. For a BERT model with H attention heads per layer, the number of possible actions is H + 1 (i.e. a ∈ {1, 2, . . . , H, H + 1}). When the action a = i ∈ {1, 2, . . . , H − 1, H} is chosen, the corresponding i th attention head is pruned. The action a = H + 1 signals the DQN agent to quit pruning. With a continuous action space (e.g. effective sparsity ratio), a separate heuristic-based pruning algorithm must be used in order to choose which attention heads should be pruned. However, having a discrete action space allows the reinforcement learning agent to automatically infer the expected reward for each possible pruning policy, thereby minimizing the usage of error-prone heuristics.
Next state After the $i$-th head is pruned, the value at the $i$-th index of $s_l$ is set to 0. This modified state is provided as the next state to the agent. When the action $a = H + 1$ is chosen, the next state is set to None. This mechanism allows the agent to recognize which attention heads have been pruned and to decide the next best pruning policy based on past decisions.
Reward The reward of AUBER is the change in accuracy,

$$\Delta_{acc} = acc_{current} - acc_{prev}, \qquad (3)$$

where $acc_{current}$ is the accuracy of the current BERT model evaluated on a validation set, and $acc_{prev}$ is the accuracy obtained from the previous state, or the accuracy of the original BERT model if no attention heads have been pruned. If we set the reward simply to $acc_{current}$, DQN cannot capture the differences among reward values when the changes in accuracy are relatively small. Setting the reward to the change in accuracy has a normalizing effect, thus stabilizing the training process of the DQN agent. The reward for the action $a = H + 1$ is a hyper-parameter that can be adjusted to encourage or discourage active pruning. In AUBER, it is set to 0 to encourage the DQN agent to prune only when the expected change in accuracy is positive.
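Putting the action, next-state, and reward definitions together, one pruning step of the environment might look like the following sketch, where `evaluate` and `prune_head` are hypothetical stand-ins for validation-set evaluation and head removal (not the paper's API):

```python
def env_step(state, action, model, prev_accuracy, evaluate, prune_head):
    """One pruning step. `state`: per-head state vector of length H;
    `action` in {0, ..., H}, where H is the extra "quit" action.
    `evaluate(model)` returns validation accuracy; `prune_head(model, i)`
    removes head i. Both callables are assumed stand-ins."""
    H = len(state)
    if action == H:                            # extra action: quit pruning
        return None, 0.0, True                 # quit reward fixed to 0 in AUBER
    model = prune_head(model, action)
    next_state = list(state)
    next_state[action] = 0.0                   # record which head was pruned
    reward = evaluate(model) - prev_accuracy   # reward = change in accuracy
    return next_state, reward, False
```

Returning the zeroed state lets the agent condition its next choice on which heads are already gone, as described above.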
Fine-tuning After the best pruning policy for layer $l$ of BERT is found, the BERT model pruned according to the best pruning policy is fine-tuned with a smaller learning rate. This fine-tuning step is crucial since it adjusts the weights of the remaining attention heads to compensate for the information lost due to pruning. Then, the initial state of layer $l + 1$ is calculated and provided to the agent.
Since frequent fine-tuning may lead to overfitting, we split the training dataset into two: a mini-validation dataset and a mini-training dataset. The mini-validation dataset is the dataset on which the pruned BERT model is evaluated to return a reward. After the optimal pruning policy is determined using the mini-validation dataset, the mini-training dataset is used to fine-tune the pruned model. When all layers have been pruned by AUBER, the final model is fine-tuned on the entire training dataset with early stopping.
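A minimal sketch of this split, using the 1:2 mini-training to mini-validation ratio stated in the experimental setup (the function name and seeding are our assumptions):

```python
import random

def split_mini(dataset, seed=0):
    """Randomly split the training set 1:2 into a mini-training dataset
    (for fine-tuning) and a mini-validation dataset (for reward evaluation)."""
    idx = list(range(len(dataset)))
    random.Random(seed).shuffle(idx)           # deterministic shuffle for the sketch
    cut = len(dataset) // 3
    mini_train = [dataset[i] for i in idx[:cut]]   # 1 part: fine-tuning
    mini_val = [dataset[i] for i in idx[cut:]]     # 2 parts: reward evaluation
    return mini_train, mini_val
```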

STATE REPRESENTATION
The initial state $s_l$ of layer $l$ of BERT is computed through the following procedure. We first calculate the L1 norm of the value matrix of each attention head. Then, we standardize the norm values to have mean $\mu = 0$ and standard deviation $\sigma = 1$. Finally, the softmax function is applied to the standardized norm values to yield $s_l$. We devise this method based on the following lemma.

Lemma 1. For a layer with $H$ heads, let $N$ be the number of tokens in the sentence and $m$, $n$, and $d$ be the value, query, and embedding dimensions respectively. Let $Q, K, V \in \mathbb{R}^{N \times d}$ be the input query, key, and value matrices, and $W_i^Q$, $W_i^K$, and $W_i^V$ be the weight parameters of the $i$-th head such that $W_i^Q, W_i^K \in \mathbb{R}^{d \times n}$ and $W_i^V \in \mathbb{R}^{d \times m}$. Let $O_i$ be the output of the $i$-th head. Then, $\|O_i\|_1 \leq C \|W_i^V\|_1$ for the constant $C = N \|V\|_1$.

This lemma provides the theoretical insight that the L1 norm of the value matrix of a head bounds the L1 norm of its output matrix, which reflects the importance of the head in the layer.
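The three-step state computation can be sketched as follows (a NumPy illustration; the function name is ours):

```python
import numpy as np

def initial_state(value_matrices):
    """Compute s_l from the per-head value matrices W^V_i:
    entrywise L1 norm, standardize to mean 0 / std 1, then softmax."""
    norms = np.array([np.abs(W).sum() for W in value_matrices])  # L1 norm per head
    z = (norms - norms.mean()) / norms.std()                     # standardize
    e = np.exp(z - z.max())
    return e / e.sum()                                           # softmax yields s_l
```

The resulting vector is a probability distribution over heads, with larger entries for heads whose value matrices (and hence, by Lemma 1, output magnitudes) are larger.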

AUBER: AUTOMATED BERT REGULARIZATION
Our DQN agent processes the BERT model in a layer-wise manner. For each layer $l$ with $H$ attention heads, the agent receives an initial layer embedding $s_l$ which encodes useful characteristics of this layer. Then, the agent outputs the index of an attention head that is expected to increase or maintain the training accuracy when removed. After an attention head $i$ is pruned, the value at the $i$-th index of $s_l$ is set to 0, and the result is provided as the next state to the agent. This process is repeated until the action $a = H + 1$ is chosen. The model pruned up to layer $l$ is fine-tuned on the training dataset, and a new initial layer embedding $s_{l+1}$ is calculated from the fine-tuned model.
Algorithm 1 illustrates the process of AUBER.
Algorithm 1: AUBER. Input: a BERT model $B_t$ fine-tuned on task $t$. Output: a regularized BERT model with attention heads pruned.
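A high-level sketch of this layer-wise loop in Python (all callables are hypothetical stand-ins for components described in the text, not the paper's implementation):

```python
def auber(model, num_layers, H, agent, get_state, env_step, fine_tune):
    """Layer-wise AUBER loop. Assumed stand-ins: `agent.act` picks an action,
    `agent.observe` stores/learns from the reward, `env_step` prunes and
    evaluates, `fine_tune` retrains the pruned model."""
    for layer in range(num_layers):
        state = get_state(model, layer)          # s_l from L1 norms of value matrices
        while True:
            action = agent.act(state)            # head index to prune, or H to quit
            state, reward, done = env_step(model, layer, action, state)
            agent.observe(reward)                # e.g. push transition, update DQN
            if done:                             # quit action ends this layer
                break
        fine_tune(model)                         # recover information lost to pruning
    return model
```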

EXPERIMENTS
We conduct experiments to answer the following questions of AUBER.
• Q1. Accuracy (Section 4.2). Given a BERT model fine-tuned on a specific natural language processing task, how well does AUBER improve the performance of the model? • Q2. Ablation Study (Section 4.3). How useful is the L1 norm of the value matrices of attention heads in representing the state of BERT? How does the order in which the layers are processed by AUBER affect regularization?

EXPERIMENTAL SETUP
Datasets. We perform downstream natural language processing tasks on four GLUE datasets: MRPC, CoLA, RTE, and WNLI. We test AUBER on datasets that contain fewer than 10,000 training instances since past experiments report that fine-tuning sometimes fails when a target dataset has fewer than 10,000 training instances (Devlin et al. (2018); Phang et al. (2018)). Detailed information on these datasets is given in Table 1. BERT Model. We use the pre-trained bert-base-cased model provided by huggingface. We fine-tune this model on each dataset mentioned in Table 1 to obtain the initial model. Initial models for MRPC, CoLA, and WNLI are fine-tuned on the corresponding dataset for 3 epochs, and the initial model for RTE is fine-tuned for 4 epochs. The max sequence length is set to 128, and the batch size per GPU is set to 32. The learning rate for fine-tuning the initial models for MRPC, CoLA, and WNLI is set to 0.00002, and the learning rate for fine-tuning the initial model for RTE is set to 0.00001.

Reinforcement Learning.
We use a 4-layer feedforward neural network for the DQN agent. The input, output, and hidden dimensions are set to 12, 13, and 512, respectively. LeakyReLU is applied after all layers except for the last one. We use the epsilon-greedy strategy for choosing actions. The initial and final epsilon values are set to 1 and 0.05 respectively, and the epsilon decay value is set to 256. The replay memory size is set to 5000, and the batch size for training the DQN agent is set to 128. The discount value $\gamma$ for the DQN agent is set to 1. The learning rate is set to 0.000002 when fine-tuning BERT after processing a layer. Before processing each layer, the training dataset is randomly split in a 1:2 ratio to yield a mini-training dataset and a mini-validation dataset. When fine-tuning the final model, the patience value of early stopping is set to 20.
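For concreteness, the agent architecture described above can be written as follows (the exact epsilon-decay formula is our assumption; a common exponential schedule consistent with the stated values is shown):

```python
import math
import torch
import torch.nn as nn

# 4-layer feedforward DQN agent: input 12 (one entry per attention head),
# output 13 (12 pruning actions plus the quit action), hidden dimension 512,
# LeakyReLU after every layer except the last.
dqn = nn.Sequential(
    nn.Linear(12, 512), nn.LeakyReLU(),
    nn.Linear(512, 512), nn.LeakyReLU(),
    nn.Linear(512, 512), nn.LeakyReLU(),
    nn.Linear(512, 13),
)

def epsilon(t, eps_start=1.0, eps_end=0.05, decay=256):
    # Exponential epsilon-greedy schedule; the functional form is an assumption,
    # but the start (1), end (0.05), and decay (256) values follow the text.
    return eps_end + (eps_start - eps_end) * math.exp(-t / decay)
```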
Competitors. We compare AUBER with other methods that prune BERT's attention heads. As a simple baseline, we examine a random pruning policy, denoted as Random. We also examine two pruning methods based on importance scores. In both methods, if AUBER prunes $P$ attention heads from BERT, we also prune the $P$ attention heads with the smallest importance scores to obtain the competitor model. We denote the pruning method using the confidence score as Confidence. The confidence score of an attention head is the average of its maximum attention weight; a high confidence score indicates that the weight is concentrated on a single token. On the other hand, Michel et al. (2019) performs a forward and backward pass to calculate gradients and uses them to assign an importance score to each attention head. Voita et al. (2019) constructs a new loss function that minimizes both the classification error and the number of heads in use, so that unproductive heads are pruned while maintaining the model performance. For a fair comparison, we tune hyperparameters so that the same number of heads as AUBER is pruned.
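The Confidence baseline's score can be sketched as follows (a NumPy illustration; the function name is ours): for each sentence, take the maximum attention weight in each row of the head's attention matrix and average over tokens, then average over sentences.

```python
import numpy as np

def confidence_score(attention_weights):
    """Confidence score of one head: the average, over sentences and tokens,
    of the maximum attention weight per token. Each element of
    `attention_weights` is an (N, N) row-stochastic attention matrix."""
    return float(np.mean([A.max(axis=-1).mean() for A in attention_weights]))
```

A score near 1 means the head concentrates its weight on a single token; a score near 1/N means it attends nearly uniformly.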
Implementation. We construct all models using PyTorch framework. All the models are trained and tested on GeForce GTX 1080 Ti GPU.

ACCURACY
We evaluate the performance of AUBER against its competitors. Table 2 shows the results on the four GLUE datasets specified in Table 1. Note that AUBER outperforms its competitors in regularizing BERT fine-tuned on MRPC, CoLA, RTE, or WNLI. While most of its competitors fail to improve the performance of BERT on the dev datasets of MRPC and CoLA, AUBER improves the performance of BERT by up to 4%.

ABLATION STUDY
Here we empirically demonstrate the effectiveness of our design choices for AUBER. More specifically, we validate that the L1 norm of the value matrix of each attention head effectively guides AUBER to predict the best action. Moreover, we show that AUBER successfully regularizes BERT regardless of the direction in which the layers are processed. Table 3 shows the performance of the variants of AUBER on the four GLUE datasets listed in Table 1.

AUBER WITH THE KEY/QUERY MATRICES AS THE STATE VECTOR
Among the query, key, and value matrices of each attention head, we show that the value matrix best represents the current state of BERT. Here we evaluate the performance of AUBER against AUBER-Query and AUBER-Key. AUBER-Query and AUBER-Key use the query and key matrices respectively to obtain the initial state. Note that AUBER, which uses the value matrix to obtain state vectors, outperforms AUBER-Query and AUBER-Key on all four tasks.

AUBER WITH L2 NORM OF THE VALUE MATRICES AS THE STATE VECTOR
The L1 norm of the value matrices is used to compute the state vector based on the theoretical derivation. In this ablation study, we experimentally show that the L1 norm of the value matrices is appropriate for the state vector. We introduce a new variant, AUBER-L2, which leverages the L2 norm of the value matrices instead of the L1 norm to compute the initial state vector. AUBER performs far better than AUBER-L2 in most cases, supporting the claim that the L1 norm of the value matrices effectively represents the state of BERT.

EFFECT OF PROCESSING LAYERS IN A DIFFERENT ORDER
We empirically demonstrate how the order in which the layers are processed affects the final performance. We evaluate the performance of AUBER against AUBER-Reverse. AUBER-Reverse processes BERT in the opposite direction (i.e. from Layer 12 to Layer 1 for BERT-base). Note that both AUBER and AUBER-Reverse effectively regularize BERT, proving the effectiveness of AUBER regardless of the order in which BERT layers are pruned. The differences in the final performance and the number of attention heads pruned can be attributed to the fine-tuning step after pruning each layer. Since the fine-tuning step adjusts the weights of the remaining attention heads in order to take the previous pruning policies into account, processing BERT in different directions may lead to different adjustments in weights. Varying updates on weights may make a previously important attention head become unimportant and vice versa, thus resulting in different pruning policies and final accuracies.


RELATED WORK
Several studies prune attention heads to compress Transformer-based models (Michel et al. (2019); Voita et al. (2019)). These studies evaluate the importance of each attention head by measuring heuristics such as the average of its maximum attention weight, where the average is taken over tokens in a set of sentences used for evaluation, or the expected sensitivity of the model to attention head pruning. Their results show that a large percentage of attention heads with low importance scores can be pruned without significantly impacting performance. However, these methods usually yield suboptimal results since they predetermine the order in which the attention heads are pruned by using heuristics.
To prevent overfitting of BERT on downstream natural language processing tasks, various regularization techniques are proposed. A variant of dropout improves the stability of fine-tuning a big, pre-trained language model even with only a few training examples of a target task (Lee et al. (2020)). Other existing heuristics to prevent overfitting include choosing a small learning rate or a triangular learning rate schedule, and a small number of iterations.
To automate the process of Convolutional Neural Network pruning, He & Han (2018) leverages reinforcement learning to determine the best sparsity ratio for each layer. Important features that characterize a layer are encoded and provided to a reinforcement learning agent to determine how much of the current layer should be pruned. To the best of our knowledge, AUBER is the first attempt to use reinforcement learning to prune attention heads from Transformer-based models such as BERT.

CONCLUSION
We propose AUBER, an effective method to regularize BERT by automatically pruning attention heads. Instead of depending on heuristics or rule-based policies, AUBER leverages reinforcement learning to learn a pruning policy that determines which attention heads should be pruned for better regularization. Experimental results demonstrate that AUBER effectively regularizes BERT, increasing the performance of the original model on the dev dataset by up to 10%. In addition, we experimentally demonstrate the effectiveness of our design choices for AUBER.

A APPENDIX
A.1 PROOF FOR LEMMA 1

Lemma 1. For a layer with $H$ heads, let $N$ be the number of tokens in the sentence and $m$, $n$, and $d$ be the value, query, and embedding dimensions respectively. Let $Q, K, V \in \mathbb{R}^{N \times d}$ be the input query, key, and value matrices, and $W_i^Q$, $W_i^K$, and $W_i^V$ be the weight parameters of the $i$-th head such that $W_i^Q, W_i^K \in \mathbb{R}^{d \times n}$ and $W_i^V \in \mathbb{R}^{d \times m}$. Let $O_i$ be the output of the $i$-th head. Then, $\|O_i\|_1 \leq C \|W_i^V\|_1$ for the constant $C = N \|V\|_1$.

Proof. For the $i$-th head in the layer, let

$$\mathrm{softmax}_i = \mathrm{softmax}\!\left(\frac{Q W_i^Q (K W_i^K)^\top}{\sqrt{n}}\right) \in \mathbb{R}^{N \times N} \quad \text{and} \quad v_i = V W_i^V \in \mathbb{R}^{N \times m}.$$

The output of the head, $O_i$, is evaluated as $O_i = \mathrm{softmax}_i\, v_i$. Then,

$$\|O_i\|_1 = \|\mathrm{softmax}_i\, v_i\|_1 \leq \|\mathrm{softmax}_i\|_1 \|v_i\|_1 \leq \|\mathrm{softmax}_i\|_1 \|V\|_1 \|W_i^V\|_1,$$

where the norm of the matrices is the entrywise norm, $\|A\|_1 = \sum_j \sum_k |A_{jk}|$. Each of the $N$ rows of $\mathrm{softmax}_i$ sums to 1, so $\|\mathrm{softmax}_i\|_1 = N$. All heads in the same layer take the same $V$ as input, so $N$ and $\|V\|_1$ are constant across heads. Thus, $\|O_i\|_1 \leq C \|W_i^V\|_1$ for the constant $C = N \|V\|_1$.
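The bound can also be checked numerically. The following sketch verifies the inequality $\|O_i\|_1 \leq N \|V\|_1 \|W_i^V\|_1$ for random inputs (function names are ours):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lemma1_holds(Q, K, V, Wq, Wk, Wv):
    """Check ||O_i||_1 <= N * ||V||_1 * ||W_i^V||_1 for one head,
    where ||.||_1 is the entrywise L1 norm."""
    n = Wq.shape[1]
    O = softmax((Q @ Wq) @ (K @ Wk).T / np.sqrt(n)) @ (V @ Wv)  # head output O_i
    C = Q.shape[0] * np.abs(V).sum()                            # C = N * ||V||_1
    return np.abs(O).sum() <= C * np.abs(Wv).sum()
```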