【MMEngine】Computing per-layer parameters, total parameters, and FLOPs with MMEngine's built-in get_model_complexity_info
2023-12-20 11:47:29
🚗🚗Definitions
- FLOPs (computation) correspond to time complexity; the parameter count corresponds to space complexity.
- FLOPs govern how long the network takes to execute; the parameter count governs how much (GPU) memory it occupies.
- The most common measure of the compute a CNN requires is FLOPs: floating-point operations, i.e. a count of the arithmetic operations the model performs. Here the lowercase s is a plural.
- FLOPS (floating-point operations per second), by contrast, is a hardware throughput metric ("peak speed"), commonly used to estimate computer performance, especially in floating-point-heavy scientific computing. The capital S in FLOPS stands for "second", not a plural, so it cannot be dropped. Don't confuse the two terms.
- activations: the total size of the output feature maps of all convolution layers, used as a measure of network complexity. Although activations are not a universal complexity metric, they can heavily affect runtime on memory-bound hardware accelerators (e.g. GPUs, TPUs); activations correlate more strongly with inference time than FLOPs do.
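These quantities can be checked by hand for a single layer. A minimal sketch in plain Python, counting one multiply-accumulate as one FLOP (the convention the mmengine table below appears to follow):

```python
# FLOPs and activations of one convolution layer.
def conv_flops(c_in, c_out, k_h, k_w, h_out, w_out):
    # Each of the c_out * h_out * w_out output elements needs
    # c_in * k_h * k_w multiply-accumulates.
    return c_out * h_out * w_out * c_in * k_h * k_w

def conv_activations(c_out, h_out, w_out):
    # "activations" = number of elements in the output feature map.
    return c_out * h_out * w_out

# ResNet-50's first conv: 3 -> 64 channels, 7x7 kernel, stride 2,
# so a 224x224 input yields a 112x112 output.
print(conv_flops(3, 64, 7, 7, 112, 112))   # 118013952 ~ 0.118G
print(conv_activations(64, 112, 112))      # 802816    ~ 0.803M
```

Both numbers match the conv1 row of the table further down.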
🌱🌱Defining the model
Taking ResNet-50 as an example, first define the model. The Python code is as follows:
import torch.nn.functional as F
import torchvision
from mmengine.model import BaseModel


class MMResNet50(BaseModel):
    def __init__(self):
        super().__init__()
        self.resnet = torchvision.models.resnet50()

    def forward(self, imgs, labels=None, mode='tensor'):
        x = self.resnet(imgs)
        if mode == 'loss':
            return {'loss': F.cross_entropy(x, labels)}
        elif mode == 'predict':
            return x, labels
        elif mode == 'tensor':
            return x
🌲🌲Computing FLOPs and the parameter count
MMEngine's built-in get_model_complexity_info function computes both the FLOPs and the parameter count of a model. The Python code is as follows:
from mmengine.analysis import get_model_complexity_info
input_shape = (3, 224, 224)
model = MMResNet50()
analysis_results = get_model_complexity_info(model, input_shape)
🎅🎅Per-layer output in table form
The FLOPs and parameter counts can be printed in two forms: as a table or annotated onto the model structure. First, the table form. The Python code is as follows:
print(analysis_results['out_table'])
📌Output
As shown below, ResNet-50 has 25.557M parameters in total, 4.145G FLOPs, and 11.115M activations.
+------------------------+----------------------+------------+--------------+
| module | #parameters or shape | #flops | #activations |
+------------------------+----------------------+------------+--------------+
| resnet | 25.557M | 4.145G | 11.115M |
| conv1 | 9.408K | 0.118G | 0.803M |
| conv1.weight | (64, 3, 7, 7) | | |
| bn1 | 0.128K | 4.014M | 0 |
| bn1.weight | (64,) | | |
| bn1.bias | (64,) | | |
| layer1 | 0.216M | 0.69G | 4.415M |
| layer1.0 | 75.008K | 0.241G | 2.007M |
| layer1.0.conv1 | 4.096K | 12.845M | 0.201M |
| layer1.0.bn1 | 0.128K | 1.004M | 0 |
| layer1.0.conv2 | 36.864K | 0.116G | 0.201M |
| layer1.0.bn2 | 0.128K | 1.004M | 0 |
| layer1.0.conv3 | 16.384K | 51.38M | 0.803M |
| layer1.0.bn3 | 0.512K | 4.014M | 0 |
| layer1.0.downsample | 16.896K | 55.394M | 0.803M |
| layer1.1 | 70.4K | 0.224G | 1.204M |
| layer1.1.conv1 | 16.384K | 51.38M | 0.201M |
| layer1.1.bn1 | 0.128K | 1.004M | 0 |
| layer1.1.conv2 | 36.864K | 0.116G | 0.201M |
| layer1.1.bn2 | 0.128K | 1.004M | 0 |
| layer1.1.conv3 | 16.384K | 51.38M | 0.803M |
| layer1.1.bn3 | 0.512K | 4.014M | 0 |
| layer1.2 | 70.4K | 0.224G | 1.204M |
| layer1.2.conv1 | 16.384K | 51.38M | 0.201M |
| layer1.2.bn1 | 0.128K | 1.004M | 0 |
| layer1.2.conv2 | 36.864K | 0.116G | 0.201M |
| layer1.2.bn2 | 0.128K | 1.004M | 0 |
| layer1.2.conv3 | 16.384K | 51.38M | 0.803M |
| layer1.2.bn3 | 0.512K | 4.014M | 0 |
| layer2 | 1.22M | 1.043G | 3.111M |
| layer2.0 | 0.379M | 0.379G | 1.305M |
| layer2.0.conv1 | 32.768K | 0.103G | 0.401M |
| layer2.0.bn1 | 0.256K | 2.007M | 0 |
| layer2.0.conv2 | 0.147M | 0.116G | 0.1M |
| layer2.0.bn2 | 0.256K | 0.502M | 0 |
| layer2.0.conv3 | 65.536K | 51.38M | 0.401M |
| layer2.0.bn3 | 1.024K | 2.007M | 0 |
| layer2.0.downsample | 0.132M | 0.105G | 0.401M |
| layer2.1 | 0.28M | 0.221G | 0.602M |
| layer2.1.conv1 | 65.536K | 51.38M | 0.1M |
| layer2.1.bn1 | 0.256K | 0.502M | 0 |
| layer2.1.conv2 | 0.147M | 0.116G | 0.1M |
| layer2.1.bn2 | 0.256K | 0.502M | 0 |
| layer2.1.conv3 | 65.536K | 51.38M | 0.401M |
| layer2.1.bn3 | 1.024K | 2.007M | 0 |
| layer2.2 | 0.28M | 0.221G | 0.602M |
| layer2.2.conv1 | 65.536K | 51.38M | 0.1M |
| layer2.2.bn1 | 0.256K | 0.502M | 0 |
| layer2.2.conv2 | 0.147M | 0.116G | 0.1M |
| layer2.2.bn2 | 0.256K | 0.502M | 0 |
| layer2.2.conv3 | 65.536K | 51.38M | 0.401M |
| layer2.2.bn3 | 1.024K | 2.007M | 0 |
| layer2.3 | 0.28M | 0.221G | 0.602M |
| layer2.3.conv1 | 65.536K | 51.38M | 0.1M |
| layer2.3.bn1 | 0.256K | 0.502M | 0 |
| layer2.3.conv2 | 0.147M | 0.116G | 0.1M |
| layer2.3.bn2 | 0.256K | 0.502M | 0 |
| layer2.3.conv3 | 65.536K | 51.38M | 0.401M |
| layer2.3.bn3 | 1.024K | 2.007M | 0 |
| layer3 | 7.098M | 1.475G | 2.158M |
| layer3.0 | 1.512M | 0.376G | 0.652M |
| layer3.0.conv1 | 0.131M | 0.103G | 0.201M |
| layer3.0.bn1 | 0.512K | 1.004M | 0 |
| layer3.0.conv2 | 0.59M | 0.116G | 50.176K |
| layer3.0.bn2 | 0.512K | 0.251M | 0 |
| layer3.0.conv3 | 0.262M | 51.38M | 0.201M |
| layer3.0.bn3 | 2.048K | 1.004M | 0 |
| layer3.0.downsample | 0.526M | 0.104G | 0.201M |
| layer3.1 | 1.117M | 0.22G | 0.301M |
| layer3.1.conv1 | 0.262M | 51.38M | 50.176K |
| layer3.1.bn1 | 0.512K | 0.251M | 0 |
| layer3.1.conv2 | 0.59M | 0.116G | 50.176K |
| layer3.1.bn2 | 0.512K | 0.251M | 0 |
| layer3.1.conv3 | 0.262M | 51.38M | 0.201M |
| layer3.1.bn3 | 2.048K | 1.004M | 0 |
| layer3.2 | 1.117M | 0.22G | 0.301M |
| layer3.2.conv1 | 0.262M | 51.38M | 50.176K |
| layer3.2.bn1 | 0.512K | 0.251M | 0 |
| layer3.2.conv2 | 0.59M | 0.116G | 50.176K |
| layer3.2.bn2 | 0.512K | 0.251M | 0 |
| layer3.2.conv3 | 0.262M | 51.38M | 0.201M |
| layer3.2.bn3 | 2.048K | 1.004M | 0 |
| layer3.3 | 1.117M | 0.22G | 0.301M |
| layer3.3.conv1 | 0.262M | 51.38M | 50.176K |
| layer3.3.bn1 | 0.512K | 0.251M | 0 |
| layer3.3.conv2 | 0.59M | 0.116G | 50.176K |
| layer3.3.bn2 | 0.512K | 0.251M | 0 |
| layer3.3.conv3 | 0.262M | 51.38M | 0.201M |
| layer3.3.bn3 | 2.048K | 1.004M | 0 |
| layer3.4 | 1.117M | 0.22G | 0.301M |
| layer3.4.conv1 | 0.262M | 51.38M | 50.176K |
| layer3.4.bn1 | 0.512K | 0.251M | 0 |
| layer3.4.conv2 | 0.59M | 0.116G | 50.176K |
| layer3.4.bn2 | 0.512K | 0.251M | 0 |
| layer3.4.conv3 | 0.262M | 51.38M | 0.201M |
| layer3.4.bn3 | 2.048K | 1.004M | 0 |
| layer3.5 | 1.117M | 0.22G | 0.301M |
| layer3.5.conv1 | 0.262M | 51.38M | 50.176K |
| layer3.5.bn1 | 0.512K | 0.251M | 0 |
| layer3.5.conv2 | 0.59M | 0.116G | 50.176K |
| layer3.5.bn2 | 0.512K | 0.251M | 0 |
| layer3.5.conv3 | 0.262M | 51.38M | 0.201M |
| layer3.5.bn3 | 2.048K | 1.004M | 0 |
| layer4 | 14.965M | 0.812G | 0.627M |
| layer4.0 | 6.04M | 0.374G | 0.326M |
| layer4.0.conv1 | 0.524M | 0.103G | 0.1M |
| layer4.0.bn1 | 1.024K | 0.502M | 0 |
| layer4.0.conv2 | 2.359M | 0.116G | 25.088K |
| layer4.0.bn2 | 1.024K | 0.125M | 0 |
| layer4.0.conv3 | 1.049M | 51.38M | 0.1M |
| layer4.0.bn3 | 4.096K | 0.502M | 0 |
| layer4.0.downsample | 2.101M | 0.103G | 0.1M |
| layer4.1 | 4.463M | 0.219G | 0.151M |
| layer4.1.conv1 | 1.049M | 51.38M | 25.088K |
| layer4.1.bn1 | 1.024K | 0.125M | 0 |
| layer4.1.conv2 | 2.359M | 0.116G | 25.088K |
| layer4.1.bn2 | 1.024K | 0.125M | 0 |
| layer4.1.conv3 | 1.049M | 51.38M | 0.1M |
| layer4.1.bn3 | 4.096K | 0.502M | 0 |
| layer4.2 | 4.463M | 0.219G | 0.151M |
| layer4.2.conv1 | 1.049M | 51.38M | 25.088K |
| layer4.2.bn1 | 1.024K | 0.125M | 0 |
| layer4.2.conv2 | 2.359M | 0.116G | 25.088K |
| layer4.2.bn2 | 1.024K | 0.125M | 0 |
| layer4.2.conv3 | 1.049M | 51.38M | 0.1M |
| layer4.2.bn3 | 4.096K | 0.502M | 0 |
| fc | 2.049M | 2.048M | 1K |
| fc.weight | (1000, 2048) | | |
| fc.bias | (1000,) | | |
| avgpool | | 0.1M | 0 |
+------------------------+----------------------+------------+--------------+
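A few rows of the table above can be verified with closed-form parameter counts; the layer shapes come straight from the table:

```python
# Parameter counts for three rows of the table.
conv1 = 64 * 3 * 7 * 7    # Conv2d(3, 64, kernel_size=7, bias=False)
bn1 = 64 + 64             # BatchNorm2d(64): one weight and one bias per channel
fc = 2048 * 1000 + 1000   # Linear(2048 -> 1000) with bias

print(conv1)  # 9408    -> 9.408K
print(bn1)    # 128     -> 0.128K
print(fc)     # 2049000 -> 2.049M
```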
🌟🌟Per-layer output in network-structure form
MMEngine supports both the table form and the network-structure form. To print the FLOPs and parameter counts annotated onto the network structure, the Python code is as follows:
print(analysis_results['out_arch'])
📌Output
MMResNet50(
#params: 25.56M, #flops: 4.14G, #acts: 11.11M
(data_preprocessor): BaseDataPreprocessor(#params: 0, #flops: N/A, #acts: N/A)
(resnet): ResNet(
#params: 25.56M, #flops: 4.14G, #acts: 11.11M
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
#params: 9.41K, #flops: 0.12G, #acts: 0.8M
)
(bn1): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.13K, #flops: 4.01M, #acts: 0
)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
#params: 0.22M, #flops: 0.69G, #acts: 4.42M
(0): Bottleneck(
#params: 75.01K, #flops: 0.24G, #acts: 2.01M
(conv1): Conv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 4.1K, #flops: 12.85M, #acts: 0.2M
)
(bn1): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.13K, #flops: 1M, #acts: 0
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 36.86K, #flops: 0.12G, #acts: 0.2M
)
(bn2): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.13K, #flops: 1M, #acts: 0
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 16.38K, #flops: 51.38M, #acts: 0.8M
)
(bn3): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 4.01M, #acts: 0
)
(relu): ReLU(inplace=True)
(downsample): Sequential(
#params: 16.9K, #flops: 55.39M, #acts: 0.8M
(0): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 16.38K, #flops: 51.38M, #acts: 0.8M
)
(1): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 4.01M, #acts: 0
)
)
)
(1): Bottleneck(
#params: 70.4K, #flops: 0.22G, #acts: 1.2M
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 16.38K, #flops: 51.38M, #acts: 0.2M
)
(bn1): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.13K, #flops: 1M, #acts: 0
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 36.86K, #flops: 0.12G, #acts: 0.2M
)
(bn2): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.13K, #flops: 1M, #acts: 0
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 16.38K, #flops: 51.38M, #acts: 0.8M
)
(bn3): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 4.01M, #acts: 0
)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
#params: 70.4K, #flops: 0.22G, #acts: 1.2M
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 16.38K, #flops: 51.38M, #acts: 0.2M
)
(bn1): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.13K, #flops: 1M, #acts: 0
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 36.86K, #flops: 0.12G, #acts: 0.2M
)
(bn2): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.13K, #flops: 1M, #acts: 0
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 16.38K, #flops: 51.38M, #acts: 0.8M
)
(bn3): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 4.01M, #acts: 0
)
(relu): ReLU(inplace=True)
)
)
(layer2): Sequential(
#params: 1.22M, #flops: 1.04G, #acts: 3.11M
(0): Bottleneck(
#params: 0.38M, #flops: 0.38G, #acts: 1.3M
(conv1): Conv2d(
256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 32.77K, #flops: 0.1G, #acts: 0.4M
)
(bn1): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.26K, #flops: 2.01M, #acts: 0
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
#params: 0.15M, #flops: 0.12G, #acts: 0.1M
)
(bn2): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.26K, #flops: 0.5M, #acts: 0
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 65.54K, #flops: 51.38M, #acts: 0.4M
)
(bn3): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 2.01M, #acts: 0
)
(relu): ReLU(inplace=True)
(downsample): Sequential(
#params: 0.13M, #flops: 0.1G, #acts: 0.4M
(0): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
#params: 0.13M, #flops: 0.1G, #acts: 0.4M
)
(1): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 2.01M, #acts: 0
)
)
)
(1): Bottleneck(
#params: 0.28M, #flops: 0.22G, #acts: 0.6M
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 65.54K, #flops: 51.38M, #acts: 0.1M
)
(bn1): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.26K, #flops: 0.5M, #acts: 0
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 0.15M, #flops: 0.12G, #acts: 0.1M
)
(bn2): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.26K, #flops: 0.5M, #acts: 0
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 65.54K, #flops: 51.38M, #acts: 0.4M
)
(bn3): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 2.01M, #acts: 0
)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
#params: 0.28M, #flops: 0.22G, #acts: 0.6M
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 65.54K, #flops: 51.38M, #acts: 0.1M
)
(bn1): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.26K, #flops: 0.5M, #acts: 0
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 0.15M, #flops: 0.12G, #acts: 0.1M
)
(bn2): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.26K, #flops: 0.5M, #acts: 0
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 65.54K, #flops: 51.38M, #acts: 0.4M
)
(bn3): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 2.01M, #acts: 0
)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
#params: 0.28M, #flops: 0.22G, #acts: 0.6M
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 65.54K, #flops: 51.38M, #acts: 0.1M
)
(bn1): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.26K, #flops: 0.5M, #acts: 0
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 0.15M, #flops: 0.12G, #acts: 0.1M
)
(bn2): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.26K, #flops: 0.5M, #acts: 0
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 65.54K, #flops: 51.38M, #acts: 0.4M
)
(bn3): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 2.01M, #acts: 0
)
(relu): ReLU(inplace=True)
)
)
(layer3): Sequential(
#params: 7.1M, #flops: 1.48G, #acts: 2.16M
(0): Bottleneck(
#params: 1.51M, #flops: 0.38G, #acts: 0.65M
(conv1): Conv2d(
512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.13M, #flops: 0.1G, #acts: 0.2M
)
(bn1): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 1M, #acts: 0
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
#params: 0.59M, #flops: 0.12G, #acts: 50.18K
)
(bn2): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 0.2M
)
(bn3): BatchNorm2d(
1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 2.05K, #flops: 1M, #acts: 0
)
(relu): ReLU(inplace=True)
(downsample): Sequential(
#params: 0.53M, #flops: 0.1G, #acts: 0.2M
(0): Conv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
#params: 0.52M, #flops: 0.1G, #acts: 0.2M
)
(1): BatchNorm2d(
1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 2.05K, #flops: 1M, #acts: 0
)
)
)
(1): Bottleneck(
#params: 1.12M, #flops: 0.22G, #acts: 0.3M
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 50.18K
)
(bn1): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 0.59M, #flops: 0.12G, #acts: 50.18K
)
(bn2): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 0.2M
)
(bn3): BatchNorm2d(
1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 2.05K, #flops: 1M, #acts: 0
)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
#params: 1.12M, #flops: 0.22G, #acts: 0.3M
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 50.18K
)
(bn1): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 0.59M, #flops: 0.12G, #acts: 50.18K
)
(bn2): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 0.2M
)
(bn3): BatchNorm2d(
1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 2.05K, #flops: 1M, #acts: 0
)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
#params: 1.12M, #flops: 0.22G, #acts: 0.3M
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 50.18K
)
(bn1): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 0.59M, #flops: 0.12G, #acts: 50.18K
)
(bn2): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 0.2M
)
(bn3): BatchNorm2d(
1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 2.05K, #flops: 1M, #acts: 0
)
(relu): ReLU(inplace=True)
)
(4): Bottleneck(
#params: 1.12M, #flops: 0.22G, #acts: 0.3M
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 50.18K
)
(bn1): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 0.59M, #flops: 0.12G, #acts: 50.18K
)
(bn2): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 0.2M
)
(bn3): BatchNorm2d(
1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 2.05K, #flops: 1M, #acts: 0
)
(relu): ReLU(inplace=True)
)
(5): Bottleneck(
#params: 1.12M, #flops: 0.22G, #acts: 0.3M
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 50.18K
)
(bn1): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 0.59M, #flops: 0.12G, #acts: 50.18K
)
(bn2): BatchNorm2d(
256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 0.51K, #flops: 0.25M, #acts: 0
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.26M, #flops: 51.38M, #acts: 0.2M
)
(bn3): BatchNorm2d(
1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 2.05K, #flops: 1M, #acts: 0
)
(relu): ReLU(inplace=True)
)
)
(layer4): Sequential(
#params: 14.96M, #flops: 0.81G, #acts: 0.63M
(0): Bottleneck(
#params: 6.04M, #flops: 0.37G, #acts: 0.33M
(conv1): Conv2d(
1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 0.52M, #flops: 0.1G, #acts: 0.1M
)
(bn1): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 0.5M, #acts: 0
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
#params: 2.36M, #flops: 0.12G, #acts: 25.09K
)
(bn2): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 0.13M, #acts: 0
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 1.05M, #flops: 51.38M, #acts: 0.1M
)
(bn3): BatchNorm2d(
2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 4.1K, #flops: 0.5M, #acts: 0
)
(relu): ReLU(inplace=True)
(downsample): Sequential(
#params: 2.1M, #flops: 0.1G, #acts: 0.1M
(0): Conv2d(
1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
#params: 2.1M, #flops: 0.1G, #acts: 0.1M
)
(1): BatchNorm2d(
2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 4.1K, #flops: 0.5M, #acts: 0
)
)
)
(1): Bottleneck(
#params: 4.46M, #flops: 0.22G, #acts: 0.15M
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 1.05M, #flops: 51.38M, #acts: 25.09K
)
(bn1): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 0.13M, #acts: 0
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 2.36M, #flops: 0.12G, #acts: 25.09K
)
(bn2): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 0.13M, #acts: 0
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 1.05M, #flops: 51.38M, #acts: 0.1M
)
(bn3): BatchNorm2d(
2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 4.1K, #flops: 0.5M, #acts: 0
)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
#params: 4.46M, #flops: 0.22G, #acts: 0.15M
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 1.05M, #flops: 51.38M, #acts: 25.09K
)
(bn1): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 0.13M, #acts: 0
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
#params: 2.36M, #flops: 0.12G, #acts: 25.09K
)
(bn2): BatchNorm2d(
512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 1.02K, #flops: 0.13M, #acts: 0
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
#params: 1.05M, #flops: 51.38M, #acts: 0.1M
)
(bn3): BatchNorm2d(
2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
#params: 4.1K, #flops: 0.5M, #acts: 0
)
(relu): ReLU(inplace=True)
)
)
(avgpool): AdaptiveAvgPool2d(
output_size=(1, 1)
#params: 0, #flops: 0.1M, #acts: 0
)
(fc): Linear(
in_features=2048, out_features=1000, bias=True
#params: 2.05M, #flops: 2.05M, #acts: 1K
)
)
)
🎃🎃Printing the total FLOPs
print("Model Flops:{}".format(analysis_results['flops_str']))
📌Output
Model Flops:4.145G
🎉🎉Printing the total parameter count
print("Model Parameters:{}".format(analysis_results['params_str']))
📌Output
Model Parameters:25.557M
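Besides the formatted strings, the result dict also carries raw numeric counts (under keys such as 'flops' and 'params'). The K/M/G formatting itself is easy to reproduce; a minimal sketch (the helper name humanize is my own, not part of mmengine):

```python
def humanize(count, precision=3):
    # Format a raw count the way the *_str fields above are formatted:
    # decimal units, e.g. 25557032 -> "25.557M".
    for unit, scale in (('G', 1e9), ('M', 1e6), ('K', 1e3)):
        if count >= scale:
            return f'{count / scale:.{precision}f}{unit}'
    return str(count)

print(humanize(25557032))  # 25.557M -- ResNet-50's parameter count
print(humanize(9408))      # 9.408K  -- its first conv layer
```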
💪💪Other packages for computing parameter counts and FLOPs
🌲1.thop
- Step 1: install the package
pip install thop
- Step 2: compute
# -*- coding: utf-8 -*-
import torch
import torchvision
from thop import profile
# Model
print('==> Building model..')
model = torchvision.models.alexnet(pretrained=False)
dummy_input = torch.randn(1, 3, 224, 224)
flops, params = profile(model, (dummy_input,))
print('flops: ', flops, 'params: ', params)
print('flops: %.2f M, params: %.2f M' % (flops / 1000000.0, params / 1000000.0))
Or:
import torch
from torchvision.models import resnet18
from thop import profile

model = resnet18()
input = torch.randn(1, 3, 224, 224)  # shape of the model input, batch_size=1
flops, params = profile(model, inputs=(input, ))
print(flops / 1e9, params / 1e6)  # FLOPs in G, params in M
📌Result
==> Building model..
[INFO] Register count_convNd() for <class 'torch.nn.modules.conv.Conv2d'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.activation.ReLU'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.pooling.MaxPool2d'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.container.Sequential'>.
[INFO] Register count_adap_avgpool() for <class 'torch.nn.modules.pooling.AdaptiveAvgPool2d'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.dropout.Dropout'>.
[INFO] Register count_linear() for <class 'torch.nn.modules.linear.Linear'>.
flops: 714206912.0 params: 61100840.0
flops: 714.21 M, params: 61.10 M
Note: in profile(net, (inputs,)), the comma in (inputs,) is required so that the argument is a tuple; omitting it raises an error.
🌲2.ptflops
- Step 1: install the package
pip install ptflops
- Step 2: compute
import torchvision
from ptflops import get_model_complexity_info
model = torchvision.models.alexnet(pretrained=False)
flops, params = get_model_complexity_info(model, (3, 224, 224), as_strings=True, print_per_layer_stat=True)
print('flops: ', flops, 'params: ', params)
📌Result
AlexNet(
61.101 M, 100.000% Params, 0.716 GMac, 100.000% MACs,
(features): Sequential(
2.47 M, 4.042% Params, 0.657 GMac, 91.804% MACs,
(0): Conv2d(0.023 M, 0.038% Params, 0.07 GMac, 9.848% MACs, 3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
(1): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 0.027% MACs, inplace=True)
(2): MaxPool2d(0.0 M, 0.000% Params, 0.0 GMac, 0.027% MACs, kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(0.307 M, 0.503% Params, 0.224 GMac, 31.316% MACs, 64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(4): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 0.020% MACs, inplace=True)
(5): MaxPool2d(0.0 M, 0.000% Params, 0.0 GMac, 0.020% MACs, kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(0.664 M, 1.087% Params, 0.112 GMac, 15.681% MACs, 192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(7): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 0.009% MACs, inplace=True)
(8): Conv2d(0.885 M, 1.448% Params, 0.15 GMac, 20.902% MACs, 384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(9): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 0.006% MACs, inplace=True)
(10): Conv2d(0.59 M, 0.966% Params, 0.1 GMac, 13.936% MACs, 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 0.006% MACs, inplace=True)
(12): MaxPool2d(0.0 M, 0.000% Params, 0.0 GMac, 0.006% MACs, kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GMac, 0.001% MACs, output_size=(6, 6))
(classifier): Sequential(
58.631 M, 95.958% Params, 0.059 GMac, 8.195% MACs,
(0): Dropout(0.0 M, 0.000% Params, 0.0 GMac, 0.000% MACs, p=0.5, inplace=False)
(1): Linear(37.753 M, 61.788% Params, 0.038 GMac, 5.276% MACs, in_features=9216, out_features=4096, bias=True)
(2): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 0.001% MACs, inplace=True)
(3): Dropout(0.0 M, 0.000% Params, 0.0 GMac, 0.000% MACs, p=0.5, inplace=False)
(4): Linear(16.781 M, 27.465% Params, 0.017 GMac, 2.345% MACs, in_features=4096, out_features=4096, bias=True)
(5): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 0.001% MACs, inplace=True)
(6): Linear(4.097 M, 6.705% Params, 0.004 GMac, 0.573% MACs, in_features=4096, out_features=1000, bias=True)
)
)
flops: 0.72 GMac params: 61.1 M
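Note that both thop and ptflops count multiply-accumulate operations (hence ptflops' "GMac" unit). Under the common convention that one MAC equals two FLOPs, the two tools' AlexNet numbers agree:

```python
# thop reported 714206912 "flops" (actually MACs) for AlexNet;
# ptflops reported 0.716 GMac for the same model.
macs = 714206912
flops = 2 * macs  # common convention: 1 MAC = 2 FLOPs

print(macs / 1e9)   # ~0.714 GMac, matching ptflops' 0.716 GMac
print(flops / 1e9)  # ~1.428 GFLOPs
```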
🌲3.pytorch-model-summary
- Step 1: install the package
pip install pytorch-model-summary
- Step 2: compute
import torch
import torchvision
from pytorch_model_summary import summary
# Model
print('==> Building model..')
model = torchvision.models.alexnet(pretrained=False)
dummy_input = torch.randn(1, 3, 224, 224)
print(summary(model, dummy_input, show_input=False, show_hierarchical=False))
Note: this example uses the pytorch_model_summary package. The similarly named torchsummary package provides its own summary function, imported instead as:
from torchsummary import summary
📌Result
==> Building model..
-----------------------------------------------------------------------------
Layer (type) Output Shape Param # Tr. Param #
=============================================================================
Conv2d-1 [1, 64, 55, 55] 23,296 23,296
ReLU-2 [1, 64, 55, 55] 0 0
MaxPool2d-3 [1, 64, 27, 27] 0 0
Conv2d-4 [1, 192, 27, 27] 307,392 307,392
ReLU-5 [1, 192, 27, 27] 0 0
MaxPool2d-6 [1, 192, 13, 13] 0 0
Conv2d-7 [1, 384, 13, 13] 663,936 663,936
ReLU-8 [1, 384, 13, 13] 0 0
Conv2d-9 [1, 256, 13, 13] 884,992 884,992
ReLU-10 [1, 256, 13, 13] 0 0
Conv2d-11 [1, 256, 13, 13] 590,080 590,080
ReLU-12 [1, 256, 13, 13] 0 0
MaxPool2d-13 [1, 256, 6, 6] 0 0
AdaptiveAvgPool2d-14 [1, 256, 6, 6] 0 0
Dropout-15 [1, 9216] 0 0
Linear-16 [1, 4096] 37,752,832 37,752,832
ReLU-17 [1, 4096] 0 0
Dropout-18 [1, 4096] 0 0
Linear-19 [1, 4096] 16,781,312 16,781,312
ReLU-20 [1, 4096] 0 0
Linear-21 [1, 1000] 4,097,000 4,097,000
=============================================================================
Total params: 61,100,840
Trainable params: 61,100,840
Non-trainable params: 0
-----------------------------------------------------------------------------
🌲4.Counting total and trainable parameters with plain PyTorch
import torch
import torchvision
# Model
print('==> Building model..')
model = torchvision.models.alexnet(pretrained=False)
pytorch_total_params = sum(p.numel() for p in model.parameters())
trainable_pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('Total - ', pytorch_total_params)
print('Trainable - ', trainable_pytorch_total_params)
📌Result
==> Building model..
Total - 61100840
Trainable - 61100840
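The same counting idiom works for any nn.Module, and named_parameters() gives the per-tensor breakdown. A small self-contained sketch (the toy model here is illustrative only, chosen so the numbers are easy to check by hand):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Linear(4, 8): 4*8 weights + 8 biases = 40; Linear(8, 2): 8*2 + 2 = 18.
total = sum(p.numel() for p in model.parameters())
print(total)  # 58

# Per-tensor breakdown, analogous to the per-layer table above.
for name, p in model.named_parameters():
    print(name, tuple(p.shape), p.numel())
```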
Source: https://blog.csdn.net/qq_38308388/article/details/135099281