{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 单发多框检测(SSD)\n",
"\n",
"我们在前几节分别介绍了边界框、锚框、多尺度目标检测和数据集,下面我们基于这些背景知识来构造一个目标检测模型:单发多框检测(single shot multibox detection,SSD)[1]。它简单、快速,并得到了广泛应用。该模型的一些设计思想和实现细节常适用于其他目标检测模型。\n",
"\n",
"\n",
"## 定义模型\n",
"\n",
"图9.4描述了单发多框检测模型的设计。它主要由一个基础网络块和若干个多尺度特征块串联而成。其中基础网络块用来从原始图像中抽取特征,因此一般会选择常用的深度卷积神经网络。单发多框检测论文中选用了在分类层之前截断的VGG [1],现在也常用ResNet替代。我们可以设计基础网络,使它输出的高和宽较大。这样一来,基于该特征图生成的锚框数量较多,可以用来检测尺寸较小的目标。接下来的每个多尺度特征块将上一层提供的特征图的高和宽缩小(如减半),并使特征图中每个单元在输入图像上的感受野变得更广阔。如此一来,图9.4中越靠近顶部的多尺度特征块输出的特征图越小,故而基于特征图生成的锚框也越少,加之特征图中每个单元感受野越大,因此更适合检测尺寸较大的目标。由于单发多框检测基于基础网络块和各个多尺度特征块生成不同数量和不同大小的锚框,并通过预测锚框的类别和偏移量(即预测边界框)检测不同大小的目标,因此单发多框检测是一个多尺度的目标检测模型。\n",
"\n",
"![单发多框检测模型主要由一个基础网络块和若干多尺度特征块串联而成](../img/ssd.svg)\n",
"\n",
"\n",
"接下来我们介绍如何实现图中的各个模块。我们先介绍如何实现类别预测和边界框预测。\n",
"\n",
"### 类别预测层\n",
"\n",
"设目标的类别个数为$q$。每个锚框的类别个数将是$q+1$,其中类别0表示锚框只包含背景。在某个尺度下,设特征图的高和宽分别为$h$和$w$,如果以其中每个单元为中心生成$a$个锚框,那么我们需要对$hwa$个锚框进行分类。如果使用全连接层作为输出,很容易导致模型参数过多。回忆[“网络中的网络(NiN)”](../chapter_convolutional-neural-networks/nin.ipynb)一节介绍的使用卷积层的通道来输出类别预测的方法。单发多框检测采用同样的方法来降低模型复杂度。\n",
"\n",
"具体来说,类别预测层使用一个保持输入高和宽的卷积层。这样一来,输出和输入在特征图宽和高上的空间坐标一一对应。考虑输出和输入同一空间坐标$(x,y)$:输出特征图上$(x,y)$坐标的通道里包含了以输入特征图$(x,y)$坐标为中心生成的所有锚框的类别预测。因此输出通道数为$a(q+1)$,其中索引为$i(q+1) + j$($0 \\leq j \\leq q$)的通道代表了索引为$i$的锚框有关类别索引为$j$的预测。\n",
"\n",
"下面我们定义一个这样的类别预测层:指定参数$a$和$q$后,它使用一个填充为1的$3\\times3$卷积层。该卷积层的输入和输出的高和宽保持不变。"
]
},
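{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the channel layout above concrete, here is a small pure-Python sketch (the helper `cls_channel` is our own illustration, not part of the book's code) of the mapping from anchor index $i$ and category index $j$ to output channel $i(q+1)+j$:\n",
"\n",
"```python\n",
"# Hypothetical helper: channel index holding category j of anchor i\n",
"def cls_channel(i, j, q):\n",
"    return i * (q + 1) + j\n",
"\n",
"a, q = 5, 10                # 5 anchors per unit, 10 object categories\n",
"num_channels = a * (q + 1)  # 55 output channels in total\n",
"assert num_channels == 55\n",
"# The background score (j = 0) of anchor 2 sits at channel 2 * 11 + 0 = 22\n",
"assert cls_channel(2, 0, q) == 22\n",
"```"
]
},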
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "1"
}
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import d2lzh as d2l\n",
"from mxnet import autograd, contrib, gluon, image, init, nd\n",
"from mxnet.gluon import loss as gloss, nn\n",
"import time\n",
"\n",
"def cls_predictor(num_anchors, num_classes):\n",
" return nn.Conv2D(num_anchors * (num_classes + 1), kernel_size=3,\n",
" padding=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 边界框预测层\n",
"\n",
"边界框预测层的设计与类别预测层的设计类似。唯一不同的是,这里需要为每个锚框预测4个偏移量,而不是$q+1$个类别。"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "2"
}
},
"outputs": [],
"source": [
"def bbox_predictor(num_anchors):\n",
" return nn.Conv2D(num_anchors * 4, kernel_size=3, padding=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 连结多尺度的预测\n",
"\n",
"前面提到,单发多框检测根据多个尺度下的特征图生成锚框并预测类别和偏移量。由于每个尺度上特征图的形状或以同一单元为中心生成的锚框个数都可能不同,因此不同尺度的预测输出形状可能不同。\n",
"\n",
"在下面的例子中,我们对同一批量数据构造两个不同尺度下的特征图`Y1`和`Y2`,其中`Y2`相对于`Y1`来说高和宽分别减半。以类别预测为例,假设以`Y1`和`Y2`特征图中每个单元生成的锚框个数分别是5和3,当目标类别个数为10时,类别预测输出的通道数分别为$5\\times(10+1)=55$和$3\\times(10+1)=33$。预测输出的格式为(批量大小, 通道数, 高, 宽)。可以看到,除了批量大小外,其他维度大小均不一样。我们需要将它们变形成统一的格式并将多尺度的预测连结,从而让后续计算更简单。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "3"
}
},
"outputs": [
{
"data": {
"text/plain": [
"((2, 55, 20, 20), (2, 33, 10, 10))"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def forward(x, block):\n",
" block.initialize()\n",
" return block(x)\n",
"\n",
"Y1 = forward(nd.zeros((2, 8, 20, 20)), cls_predictor(5, 10))\n",
"Y2 = forward(nd.zeros((2, 16, 10, 10)), cls_predictor(3, 10))\n",
"(Y1.shape, Y2.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"通道维包含中心相同的锚框的预测结果。我们首先将通道维移到最后一维。因为不同尺度下批量大小仍保持不变,我们可以将预测结果转成二维的(批量大小, 高$\\times$宽$\\times$通道数)的格式,以方便之后在维度1上的连结。"
]
},
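{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a shape check (plain arithmetic, independent of `NDArray`), the transpose-then-flatten step turns a (batch, channels, height, width) prediction into (batch, height $\\times$ width $\\times$ channels):\n",
"\n",
"```python\n",
"# Width of the flattened prediction: (batch, c, h, w) -> (batch, h * w * c)\n",
"def flat_width(shape):\n",
"    b, c, h, w = shape\n",
"    return h * w * c\n",
"\n",
"y1, y2 = (2, 55, 20, 20), (2, 33, 10, 10)\n",
"assert flat_width(y1) == 22000\n",
"assert flat_width(y2) == 3300\n",
"# Concatenating along dimension 1 then gives (2, 25300)\n",
"assert flat_width(y1) + flat_width(y2) == 25300\n",
"```"
]
},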
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "4"
}
},
"outputs": [],
"source": [
"def flatten_pred(pred):\n",
" return pred.transpose((0, 2, 3, 1)).flatten()\n",
"\n",
"def concat_preds(preds):\n",
" return nd.concat(*[flatten_pred(p) for p in preds], dim=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这样一来,尽管`Y1`和`Y2`形状不同,我们仍然可以将这两个同一批量不同尺度的预测结果连结在一起。"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "6"
}
},
"outputs": [
{
"data": {
"text/plain": [
"(2, 25300)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"concat_preds([Y1, Y2]).shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 高和宽减半块\n",
"\n",
"为了在多尺度检测目标,下面定义高和宽减半块`down_sample_blk`。它串联了两个填充为1的$3\\times3$卷积层和步幅为2的$2\\times2$最大池化层。我们知道,填充为1的$3\\times3$卷积层不改变特征图的形状,而后面的池化层则直接将特征图的高和宽减半。由于$1\\times 2+(3-1)+(3-1)=6$,输出特征图中每个单元在输入特征图上的感受野形状为$6\\times6$。可以看出,高和宽减半块使输出特征图中每个单元的感受野变得更广阔。"
]
},
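{
"cell_type": "markdown",
"metadata": {},
"source": [
"The receptive-field arithmetic above can be spelled out step by step; the sketch below walks backwards from one output unit through the pooling layer and the two convolutions:\n",
"\n",
"```python\n",
"# Walk backwards from a single output unit of down_sample_blk\n",
"rf = 1             # one unit of the block's output\n",
"rf = rf * 2        # the 2x2, stride-2 max pooling covers 2 units\n",
"rf = rf + (3 - 1)  # second 3x3, stride-1 convolution widens by 2\n",
"rf = rf + (3 - 1)  # first 3x3, stride-1 convolution widens by 2\n",
"assert rf == 6     # a 6x6 receptive field on the block's input\n",
"```"
]
},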
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "7"
}
},
"outputs": [],
"source": [
"def down_sample_blk(num_channels):\n",
" blk = nn.Sequential()\n",
" for _ in range(2):\n",
" blk.add(nn.Conv2D(num_channels, kernel_size=3, padding=1),\n",
" nn.BatchNorm(in_channels=num_channels),\n",
" nn.Activation('relu'))\n",
" blk.add(nn.MaxPool2D(2))\n",
" return blk"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"测试高和宽减半块的前向计算。可以看到,它改变了输入的通道数,并将高和宽减半。"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "8"
}
},
"outputs": [
{
"data": {
"text/plain": [
"(2, 10, 10, 10)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"forward(nd.zeros((2, 3, 20, 20)), down_sample_blk(10)).shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 基础网络块\n",
"\n",
"基础网络块用来从原始图像中抽取特征。为了计算简洁,我们在这里构造一个小的基础网络。该网络串联3个高和宽减半块,并逐步将通道数翻倍。当输入的原始图像的形状为$256\\times256$时,基础网络块输出的特征图的形状为$32 \\times 32$。"
]
},
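{
"cell_type": "markdown",
"metadata": {},
"source": [
"The $256 \\to 32$ spatial size above follows from three successive halvings, one per stage of `base_net`; a quick check:\n",
"\n",
"```python\n",
"# Three halving blocks: 256 -> 128 -> 64 -> 32\n",
"h = w = 256\n",
"for _ in [16, 32, 64]:  # channel counts of the three stages\n",
"    h, w = h // 2, w // 2\n",
"assert (h, w) == (32, 32)\n",
"```"
]
},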
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "9"
}
},
"outputs": [
{
"data": {
"text/plain": [
"(2, 64, 32, 32)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def base_net():\n",
" blk = nn.Sequential()\n",
" for num_filters in [16, 32, 64]:\n",
" blk.add(down_sample_blk(num_filters))\n",
" return blk\n",
"\n",
"forward(nd.zeros((2, 3, 256, 256)), base_net()).shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 完整的模型\n",
"\n",
"单发多框检测模型一共包含5个模块,每个模块输出的特征图既用来生成锚框,又用来预测这些锚框的类别和偏移量。第一模块为基础网络块,第二模块至第四模块为高和宽减半块,第五模块使用全局最大池化层将高和宽降到1。因此第二模块至第五模块均为图9.4中的多尺度特征块。"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "10"
}
},
"outputs": [],
"source": [
"def get_blk(i):\n",
" if i == 0:\n",
" blk = base_net()\n",
" elif i == 4:\n",
" blk = nn.GlobalMaxPool2D()\n",
" else:\n",
" blk = down_sample_blk(128)\n",
" return blk"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"接下来,我们定义每个模块如何进行前向计算。与之前介绍的卷积神经网络不同,这里不仅返回卷积计算输出的特征图`Y`,还返回根据`Y`生成的当前尺度的锚框,以及基于`Y`预测的锚框类别和偏移量。"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "11"
}
},
"outputs": [],
"source": [
"def blk_forward(X, blk, size, ratio, cls_predictor, bbox_predictor):\n",
" Y = blk(X)\n",
" anchors = contrib.ndarray.MultiBoxPrior(Y, sizes=size, ratios=ratio)\n",
" cls_preds = cls_predictor(Y)\n",
" bbox_preds = bbox_predictor(Y)\n",
" return (Y, anchors, cls_preds, bbox_preds)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们提到,图9.4中较靠近顶部的多尺度特征块用来检测尺寸较大的目标,因此需要生成较大的锚框。我们在这里先将0.2到1.05之间均分5份,以确定不同尺度下锚框大小的较小值0.2、0.37、0.54等,再按$\\sqrt{0.2 \\times 0.37} = 0.272$、$\\sqrt{0.37 \\times 0.54} = 0.447$等来确定不同尺度下锚框大小的较大值。"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "12"
}
},
"outputs": [],
"source": [
"sizes = [[0.2, 0.272], [0.37, 0.447], [0.54, 0.619], [0.71, 0.79],\n",
" [0.88, 0.961]]\n",
"ratios = [[1, 2, 0.5]] * 5\n",
"num_anchors = len(sizes[0]) + len(ratios[0]) - 1"
]
},
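{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `sizes` table above follows the rule stated in the text; the following sketch regenerates it by evenly spacing the smaller scales over $[0.2, 1.05)$ and pairing each with the geometric mean $\\sqrt{s_k s_{k+1}}$ (the rounding precision is our choice for illustration):\n",
"\n",
"```python\n",
"from math import sqrt\n",
"\n",
"# Smaller scales: 5 values starting at 0.2 with step (1.05 - 0.2) / 5\n",
"smaller = [round(0.2 + 0.17 * k, 2) for k in range(5)]\n",
"# Larger scale at each level: geometric mean of consecutive smaller scales\n",
"upper = smaller[1:] + [1.05]\n",
"larger = [round(sqrt(s * u), 3) for s, u in zip(smaller, upper)]\n",
"assert smaller == [0.2, 0.37, 0.54, 0.71, 0.88]\n",
"assert larger == [0.272, 0.447, 0.619, 0.79, 0.961]\n",
"# 2 sizes and 3 ratios give 2 + 3 - 1 = 4 anchors per feature-map unit\n",
"assert len([0.2, 0.272]) + len([1, 2, 0.5]) - 1 == 4\n",
"```"
]
},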
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在,我们就可以定义出完整的模型`TinySSD`了。"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "13"
}
},
"outputs": [],
"source": [
"class TinySSD(nn.Block):\n",
" def __init__(self, num_classes, **kwargs):\n",
" super(TinySSD, self).__init__(**kwargs)\n",
" self.num_classes = num_classes\n",
" for i in range(5):\n",
" # 即赋值语句self.blk_i = get_blk(i)\n",
" setattr(self, 'blk_%d' % i, get_blk(i))\n",
" setattr(self, 'cls_%d' % i, cls_predictor(num_anchors,\n",
" num_classes))\n",
" setattr(self, 'bbox_%d' % i, bbox_predictor(num_anchors))\n",
"\n",
" def forward(self, X):\n",
" anchors, cls_preds, bbox_preds = [None] * 5, [None] * 5, [None] * 5\n",
" for i in range(5):\n",
" # getattr(self, 'blk_%d' % i)即访问self.blk_i\n",
" X, anchors[i], cls_preds[i], bbox_preds[i] = blk_forward(\n",
" X, getattr(self, 'blk_%d' % i), sizes[i], ratios[i],\n",
" getattr(self, 'cls_%d' % i), getattr(self, 'bbox_%d' % i))\n",
" # reshape函数中的0表示保持批量大小不变\n",
" return (nd.concat(*anchors, dim=1),\n",
" concat_preds(cls_preds).reshape(\n",
" (0, -1, self.num_classes + 1)), concat_preds(bbox_preds))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们创建单发多框检测模型实例并对一个高和宽均为256像素的小批量图像`X`做前向计算。我们在之前验证过,第一模块输出的特征图的形状为$32 \\times 32$。由于第二至第四模块为高和宽减半块、第五模块为全局池化层,并且以特征图每个单元为中心生成4个锚框,每个图像在5个尺度下生成的锚框总数为$(32^2 + 16^2 + 8^2 + 4^2 + 1)\\times 4 = 5444$。"
]
},
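{
"cell_type": "markdown",
"metadata": {},
"source": [
"The anchor count of 5444 and the flattened bounding box prediction width can be checked with simple arithmetic (we use the name `total_anchors` here to avoid clobbering the `num_anchors` global defined earlier):\n",
"\n",
"```python\n",
"# Feature-map heights/widths at the five scales, 4 anchors per unit\n",
"fmap_sizes = [32, 16, 8, 4, 1]\n",
"total_anchors = sum(s * s for s in fmap_sizes) * 4\n",
"assert total_anchors == 5444\n",
"# Flattened bbox predictions hold 4 offsets per anchor box\n",
"assert total_anchors * 4 == 21776\n",
"```"
]
},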
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"output anchors: (1, 5444, 4)\n",
"output class preds: (32, 5444, 2)\n",
"output bbox preds: (32, 21776)\n"
]
}
],
"source": [
"net = TinySSD(num_classes=1)\n",
"net.initialize()\n",
"X = nd.zeros((32, 3, 256, 256))\n",
"anchors, cls_preds, bbox_preds = net(X)\n",
"\n",
"print('output anchors:', anchors.shape)\n",
"print('output class preds:', cls_preds.shape)\n",
"print('output bbox preds:', bbox_preds.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 训练模型\n",
"\n",
"下面我们描述如何一步步训练单发多框检测模型来进行目标检测。\n",
"\n",
"### 读取数据集和初始化\n",
"\n",
"我们读取[“目标检测数据集(皮卡丘)”](object-detection-dataset.ipynb)一节构造的皮卡丘数据集。"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "14"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading ../data/pikachu/train.rec from https://apache-mxnet.s3-accelerate.amazonaws.com/gluon/dataset/pikachu/train.rec...\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading ../data/pikachu/train.idx from https://apache-mxnet.s3-accelerate.amazonaws.com/gluon/dataset/pikachu/train.idx...\n",
"Downloading ../data/pikachu/val.rec from https://apache-mxnet.s3-accelerate.amazonaws.com/gluon/dataset/pikachu/val.rec...\n"
]
}
],
"source": [
"batch_size = 32\n",
"train_iter, _ = d2l.load_data_pikachu(batch_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"在皮卡丘数据集中,目标的类别数为1。定义好模型以后,我们需要初始化模型参数并定义优化算法。"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "15"
}
},
"outputs": [],
"source": [
"ctx, net = d2l.try_gpu(), TinySSD(num_classes=1)\n",
"net.initialize(init=init.Xavier(), ctx=ctx)\n",
"trainer = gluon.Trainer(net.collect_params(), 'sgd',\n",
" {'learning_rate': 0.2, 'wd': 5e-4})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 定义损失函数和评价函数\n",
"\n",
"目标检测有两个损失:一是有关锚框类别的损失,我们可以重用之前图像分类问题里一直使用的交叉熵损失函数;二是有关正类锚框偏移量的损失。预测偏移量是一个回归问题,但这里不使用前面介绍过的平方损失,而使用$L_1$范数损失,即预测值与真实值之间差的绝对值。掩码变量`bbox_masks`令负类锚框和填充锚框不参与损失的计算。最后,我们将有关锚框类别和偏移量的损失相加得到模型的最终损失函数。"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "16"
}
},
"outputs": [],
"source": [
"cls_loss = gloss.SoftmaxCrossEntropyLoss()\n",
"bbox_loss = gloss.L1Loss()\n",
"\n",
"def calc_loss(cls_preds, cls_labels, bbox_preds, bbox_labels, bbox_masks):\n",
" cls = cls_loss(cls_preds, cls_labels)\n",
" bbox = bbox_loss(bbox_preds * bbox_masks, bbox_labels * bbox_masks)\n",
" return cls + bbox"
]
},
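{
"cell_type": "markdown",
"metadata": {},
"source": [
"The effect of `bbox_masks` can be illustrated with a tiny pure-Python example (the numbers are made up for illustration): masked entries are multiplied by 0 on both sides, so only positive anchor boxes contribute to the $L_1$ loss:\n",
"\n",
"```python\n",
"# Four predicted offsets, their labels, and a mask marking positive anchors\n",
"preds  = [0.5, -0.2, 0.1, 0.3]\n",
"labels = [0.4,  0.0, 0.0, 0.1]\n",
"masks  = [1.0,  0.0, 0.0, 1.0]   # anchors 1 and 2 are negative/padding\n",
"l1 = sum(abs(p * m - t * m) for p, t, m in zip(preds, labels, masks))\n",
"assert abs(l1 - 0.3) < 1e-6      # |0.5 - 0.4| + |0.3 - 0.1| = 0.3\n",
"```"
]
},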
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们可以沿用准确率评价分类结果。因为使用了$L_1$范数损失,我们用平均绝对误差评价边界框的预测结果。"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "18"
}
},
"outputs": [],
"source": [
"def cls_eval(cls_preds, cls_labels):\n",
" # 由于类别预测结果放在最后一维,argmax需要指定最后一维\n",
" return (cls_preds.argmax(axis=-1) == cls_labels).sum().asscalar()\n",
"\n",
"def bbox_eval(bbox_preds, bbox_labels, bbox_masks):\n",
" return ((bbox_labels - bbox_preds) * bbox_masks).abs().sum().asscalar()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 训练模型\n",
"\n",
"在训练模型时,我们需要在模型的前向计算过程中生成多尺度的锚框`anchors`,并为每个锚框预测类别`cls_preds`和偏移量`bbox_preds`。之后,我们根据标签信息`Y`为生成的每个锚框标注类别`cls_labels`和偏移量`bbox_labels`。最后,我们根据类别和偏移量的预测和标注值计算损失函数。为了代码简洁,这里没有评价测试数据集。"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "19"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 5, class err 2.99e-03, bbox mae 3.23e-03, time 8.3 sec\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 10, class err 2.60e-03, bbox mae 2.82e-03, time 8.1 sec\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 15, class err 2.44e-03, bbox mae 2.69e-03, time 8.1 sec\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 20, class err 2.45e-03, bbox mae 2.57e-03, time 8.2 sec\n"
]
}
],
"source": [
"for epoch in range(20):\n",
" acc_sum, mae_sum, n, m = 0.0, 0.0, 0, 0\n",
" train_iter.reset() # 从头读取数据\n",
" start = time.time()\n",
" for batch in train_iter:\n",
" X = batch.data[0].as_in_context(ctx)\n",
" Y = batch.label[0].as_in_context(ctx)\n",
" with autograd.record():\n",
" # 生成多尺度的锚框,为每个锚框预测类别和偏移量\n",
" anchors, cls_preds, bbox_preds = net(X)\n",
" # 为每个锚框标注类别和偏移量\n",
" bbox_labels, bbox_masks, cls_labels = contrib.nd.MultiBoxTarget(\n",
" anchors, Y, cls_preds.transpose((0, 2, 1)))\n",
" # 根据类别和偏移量的预测和标注值计算损失函数\n",
" l = calc_loss(cls_preds, cls_labels, bbox_preds, bbox_labels,\n",
" bbox_masks)\n",
" l.backward()\n",
" trainer.step(batch_size)\n",
" acc_sum += cls_eval(cls_preds, cls_labels)\n",
" n += cls_labels.size\n",
" mae_sum += bbox_eval(bbox_preds, bbox_labels, bbox_masks)\n",
" m += bbox_labels.size\n",
"\n",
" if (epoch + 1) % 5 == 0:\n",
" print('epoch %2d, class err %.2e, bbox mae %.2e, time %.1f sec' % (\n",
" epoch + 1, 1 - acc_sum / n, mae_sum / m, time.time() - start))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 预测目标\n",
"\n",
"在预测阶段,我们希望能把图像里面所有我们感兴趣的目标检测出来。下面读取测试图像,将其变换尺寸,然后转成卷积层需要的四维格式。"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "20"
}
},
"outputs": [],
"source": [
"img = image.imread('../img/pikachu.jpg')\n",
"feature = image.imresize(img, 256, 256).astype('float32')\n",
"X = feature.transpose((2, 0, 1)).expand_dims(axis=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们通过`MultiBoxDetection`函数根据锚框及其预测偏移量得到预测边界框,并通过非极大值抑制移除相似的预测边界框。"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "21"
}
},
"outputs": [],
"source": [
"def predict(X):\n",
" anchors, cls_preds, bbox_preds = net(X.as_in_context(ctx))\n",
" cls_probs = cls_preds.softmax().transpose((0, 2, 1))\n",
" output = contrib.nd.MultiBoxDetection(cls_probs, bbox_preds, anchors)\n",
" idx = [i for i, row in enumerate(output[0]) if row[0].asscalar() != -1]\n",
" return output[0, idx]\n",
"\n",
"output = predict(X)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"最后,我们将置信度不低于0.3的边界框筛选为最终输出用以展示。"
]
},
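{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, each row of the `MultiBoxDetection` output has the format (category, confidence, $x_{min}$, $y_{min}$, $x_{max}$, $y_{max}$), where category $-1$ marks boxes suppressed by non-maximum suppression. A pure-Python sketch of the thresholding, with made-up rows for illustration:\n",
"\n",
"```python\n",
"# Made-up detection rows: [category, confidence, xmin, ymin, xmax, ymax]\n",
"rows = [[0.0, 0.92, 0.10, 0.20, 0.40, 0.50],\n",
"        [-1.0, 0.45, 0.00, 0.00, 0.30, 0.30],  # removed by NMS\n",
"        [0.0, 0.12, 0.60, 0.60, 0.90, 0.90]]   # confidence below 0.3\n",
"kept = [r for r in rows if r[0] != -1 and r[1] >= 0.3]\n",
"assert len(kept) == 1 and kept[0][1] == 0.92\n",
"```"
]
},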
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "22"
}
},
"outputs": [
{
"data": {
"image/svg+xml": [
"\n",
"\n",
"\n",
"\n"
],
"text/plain": [
"