{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Weight Decay\n",
"\n",
"In the previous section, we observed overfitting: the model's training error was far smaller than its error on the test set. Although enlarging the training data set may mitigate overfitting, acquiring additional training data is often costly. This section introduces a common method for dealing with overfitting: weight decay.\n",
"\n",
"\n",
"## Method\n",
"\n",
"Weight decay is equivalent to $L_2$ norm regularization. Regularization, a common technique against overfitting, adds a penalty term to the model's loss function so that the learned parameter values are small. We first describe $L_2$ norm regularization and then explain why it is also called weight decay.\n",
"\n",
"$L_2$ norm regularization adds an $L_2$ norm penalty term to the model's original loss function, yielding the function that training minimizes. The $L_2$ norm penalty term is the product of a positive constant and the sum of the squares of every element of the model's weight parameters. Take the linear regression loss function from the [\"Linear Regression\"](linear-regression.ipynb) section\n",
"\n",
"$$\\ell(w_1, w_2, b) = \\frac{1}{n} \\sum_{i=1}^n \\frac{1}{2}\\left(x_1^{(i)} w_1 + x_2^{(i)} w_2 + b - y^{(i)}\\right)^2$$\n",
"\n",
"as an example, where $w_1, w_2$ are the weight parameters, $b$ is the bias parameter, the inputs of sample $i$ are $x_1^{(i)}, x_2^{(i)}$, its label is $y^{(i)}$, and the number of samples is $n$. Writing the weight parameters as the vector $\\boldsymbol{w} = [w_1, w_2]$, the new loss function with the $L_2$ norm penalty term is\n",
"\n",
"$$\\ell(w_1, w_2, b) + \\frac{\\lambda}{2} \\|\\boldsymbol{w}\\|^2,$$\n",
"\n",
"where the hyperparameter $\\lambda > 0$. The penalty term is minimized when all the weight parameters are 0. When $\\lambda$ is large, the penalty term carries more weight in the loss function, which usually drives the elements of the learned weight parameters closer to 0. When $\\lambda$ is set to 0, the penalty term has no effect at all. Expanding the squared $L_2$ norm $\\|\\boldsymbol{w}\\|^2$ above gives $w_1^2 + w_2^2$. With the $L_2$ norm penalty term, in mini-batch stochastic gradient descent the updates for the weights $w_1$ and $w_2$ from the [\"Linear Regression\"](linear-regression.ipynb) section become\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"w_1 &\\leftarrow \\left(1- \\eta\\lambda \\right)w_1 - \\frac{\\eta}{|\\mathcal{B}|} \\sum_{i \\in \\mathcal{B}}x_1^{(i)} \\left(x_1^{(i)} w_1 + x_2^{(i)} w_2 + b - y^{(i)}\\right),\\\\\n",
"w_2 &\\leftarrow \\left(1- \\eta\\lambda \\right)w_2 - \\frac{\\eta}{|\\mathcal{B}|} \\sum_{i \\in \\mathcal{B}}x_2^{(i)} \\left(x_1^{(i)} w_1 + x_2^{(i)} w_2 + b - y^{(i)}\\right).\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"As we can see, $L_2$ norm regularization first multiplies the weights $w_1$ and $w_2$ by a number smaller than 1, and then subtracts the gradient that does not involve the penalty term. This is why $L_2$ norm regularization is also called weight decay. By penalizing model parameters with large absolute values, weight decay constrains the model to be learned, which may help against overfitting. In practice, we sometimes also add the sum of the squares of the bias elements to the penalty term.\n",
"\n",
"## High-Dimensional Linear Regression Experiment\n",
"\n",
"Below, we use high-dimensional linear regression as an example to introduce an overfitting problem and apply weight decay to address it. Let the dimension of the data sample features be $p$. For any sample in the training or test data set with features $x_1, x_2, \\ldots, x_p$, we generate its label with the following linear function:\n",
"\n",
"$$y = 0.05 + \\sum_{i = 1}^p 0.01x_i + \\epsilon,$$\n",
"\n",
"where the noise term $\\epsilon$ follows a normal distribution with mean 0 and standard deviation 0.01. To make overfitting easy to observe, we consider a high-dimensional problem, e.g. with dimension $p=200$; at the same time, we deliberately keep the number of training samples low, e.g. 20."
]
},
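{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check on the update rule above, here is a minimal pure-Python sketch (the function name and the toy values of `lr` and `lambd` are hypothetical, not part of this notebook's experiment) showing that each step first scales the weight by $1-\\eta\\lambda$ and then applies the ordinary gradient step:\n",
"\n",
"```python\n",
"# One weight-decay SGD step for a single weight w.\n",
"# grad is the mini-batch gradient of the unpenalized loss.\n",
"def sgd_weight_decay_step(w, grad, lr, lambd):\n",
"    # Shrink the weight by (1 - lr * lambd), then take the usual SGD step\n",
"    return (1 - lr * lambd) * w - lr * grad\n",
"\n",
"w = 1.0\n",
"# With a zero gradient, the weight simply decays geometrically\n",
"for _ in range(3):\n",
"    w = sgd_weight_decay_step(w, grad=0.0, lr=0.25, lambd=1.0)\n",
"print(w)  # 0.75 ** 3 = 0.421875\n",
"```"
]
},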
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "2"
}
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import d2lzh as d2l\n",
"from mxnet import autograd, gluon, init, nd\n",
"from mxnet.gluon import data as gdata, loss as gloss, nn\n",
"\n",
"n_train, n_test, num_inputs = 20, 100, 200\n",
"true_w, true_b = nd.ones((num_inputs, 1)) * 0.01, 0.05\n",
"\n",
"features = nd.random.normal(shape=(n_train + n_test, num_inputs))\n",
"labels = nd.dot(features, true_w) + true_b\n",
"labels += nd.random.normal(scale=0.01, shape=labels.shape)\n",
"train_features, test_features = features[:n_train, :], features[n_train:, :]\n",
"train_labels, test_labels = labels[:n_train], labels[n_train:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implementation from Scratch\n",
"\n",
"We first show how to implement weight decay from scratch, by adding the $L_2$ norm penalty term to the target function.\n",
"\n",
"### Initializing Model Parameters\n",
"\n",
"First, define a function that randomly initializes the model parameters. This function attaches a gradient to each parameter."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "5"
}
},
"outputs": [],
"source": [
"def init_params():\n",
" w = nd.random.normal(scale=1, shape=(num_inputs, 1))\n",
" b = nd.zeros(shape=(1,))\n",
" w.attach_grad()\n",
" b.attach_grad()\n",
" return [w, b]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Defining the $L_2$ Norm Penalty Term\n",
"\n",
"Next, define the $L_2$ norm penalty term. Here, only the model's weight parameters are penalized."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "6"
}
},
"outputs": [],
"source": [
"def l2_penalty(w):\n",
" return (w**2).sum() / 2"
]
},
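{
"cell_type": "markdown",
"metadata": {},
"source": [
"This matches the $\\frac{1}{2}\\|\\boldsymbol{w}\\|^2$ term from the method section; the $\\lambda$ factor is applied later when the penalty is added to the loss. As a sanity check, the same computation in plain Python on a list (an illustration only, not the MXNet implementation used in this notebook):\n",
"\n",
"```python\n",
"def l2_penalty_py(w):\n",
"    # Half the sum of squared weights: (1/2) * ||w||^2\n",
"    return sum(x * x for x in w) / 2\n",
"\n",
"print(l2_penalty_py([3.0, 4.0]))  # (9 + 16) / 2 = 12.5\n",
"```"
]
},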
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Defining Training and Testing\n",
"\n",
"The following defines how to train and test the model on the training data set and the test data set respectively. Unlike in the previous sections, here the $L_2$ norm penalty term is added when computing the final loss function."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "7"
}
},
"outputs": [],
"source": [
"batch_size, num_epochs, lr = 1, 100, 0.003\n",
"net, loss = d2l.linreg, d2l.squared_loss\n",
"train_iter = gdata.DataLoader(gdata.ArrayDataset(\n",
" train_features, train_labels), batch_size, shuffle=True)\n",
"\n",
"def fit_and_plot(lambd):\n",
" w, b = init_params()\n",
" train_ls, test_ls = [], []\n",
" for _ in range(num_epochs):\n",
" for X, y in train_iter:\n",
" with autograd.record():\n",
"                # Add the L2 norm penalty term; through broadcasting it\n",
"                # becomes a vector of length batch_size\n",
" l = loss(net(X, w, b), y) + lambd * l2_penalty(w)\n",
" l.backward()\n",
" d2l.sgd([w, b], lr, batch_size)\n",
" train_ls.append(loss(net(train_features, w, b),\n",
" train_labels).mean().asscalar())\n",
" test_ls.append(loss(net(test_features, w, b),\n",
" test_labels).mean().asscalar())\n",
" d2l.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',\n",
" range(1, num_epochs + 1), test_ls, ['train', 'test'])\n",
" print('L2 norm of w:', w.norm().asscalar())"
]
},
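{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the loss line above, `lambd * l2_penalty(w)` is a scalar, and broadcasting adds it to every element of the per-example loss vector before `backward` is called. A pure-Python sketch of that broadcasting effect (the loss values and penalty below are made up for illustration):\n",
"\n",
"```python\n",
"per_example_loss = [0.25, 0.5, 0.75]  # one loss value per example in the batch\n",
"penalty = 0.125                       # lambd * l2_penalty(w), a single scalar\n",
"# Broadcasting a scalar over a vector adds it to each element\n",
"l = [x + penalty for x in per_example_loss]\n",
"print(l)  # [0.375, 0.625, 0.875]\n",
"```"
]
},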
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Observing Overfitting\n",
"\n",
"Next, let us train and test the high-dimensional linear regression model. When `lambd` is set to 0, no weight decay is used. As a result, the training error is far smaller than the error on the test set. This is a typical case of overfitting."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "8"
}
},
"outputs": [
{
"data": {
"text/plain": [
"