Translation (47) - Model Summary in PyTorch

Stack Overflow hot questions index

If you spot any problems with the translation, please point them out in the comments. Thank you.

Model summary in PyTorch

  • Wasi Ahmad asked:

    • How do I print the summary of a model in PyTorch like the model.summary() method does in Keras?

    • Model Summary:
      ____________________________________________________________________________________________________
      Layer (type)                     Output Shape          Param #     Connected to                     
      ====================================================================================================
      input_1 (InputLayer)             (None, 1, 15, 27)     0                                            
      ____________________________________________________________________________________________________
      convolution2d_1 (Convolution2D)  (None, 8, 15, 27)     872         input_1[0][0]                    
      ____________________________________________________________________________________________________
      maxpooling2d_1 (MaxPooling2D)    (None, 8, 7, 27)      0           convolution2d_1[0][0]            
      ____________________________________________________________________________________________________
      flatten_1 (Flatten)              (None, 1512)          0           maxpooling2d_1[0][0]             
      ____________________________________________________________________________________________________
      dense_1 (Dense)                  (None, 1)             1513        flatten_1[0][0]                  
      ====================================================================================================
      Total params: 2,385
      Trainable params: 2,385
      Non-trainable params: 0
  • Answers:

    • Shubham Chandel - vote: 213

    • Yes, you can get an exact Keras-style representation using the pytorch-summary package (see the note after the example output below).

    • Example for VGG16:

    • from torchvision import models
      from torchsummary import summary
      #
      vgg = models.vgg16()
      summary(vgg, (3, 224, 224))
      #
      ----------------------------------------------------------------
            Layer (type)               Output Shape         Param #
      ================================================================
                Conv2d-1         [-1, 64, 224, 224]           1,792
                  ReLU-2         [-1, 64, 224, 224]               0
                Conv2d-3         [-1, 64, 224, 224]          36,928
                  ReLU-4         [-1, 64, 224, 224]               0
             MaxPool2d-5         [-1, 64, 112, 112]               0
                Conv2d-6        [-1, 128, 112, 112]          73,856
                  ReLU-7        [-1, 128, 112, 112]               0
                Conv2d-8        [-1, 128, 112, 112]         147,584
                  ReLU-9        [-1, 128, 112, 112]               0
            MaxPool2d-10          [-1, 128, 56, 56]               0
               Conv2d-11          [-1, 256, 56, 56]         295,168
                 ReLU-12          [-1, 256, 56, 56]               0
               Conv2d-13          [-1, 256, 56, 56]         590,080
                 ReLU-14          [-1, 256, 56, 56]               0
               Conv2d-15          [-1, 256, 56, 56]         590,080
                 ReLU-16          [-1, 256, 56, 56]               0
            MaxPool2d-17          [-1, 256, 28, 28]               0
               Conv2d-18          [-1, 512, 28, 28]       1,180,160
                 ReLU-19          [-1, 512, 28, 28]               0
               Conv2d-20          [-1, 512, 28, 28]       2,359,808
                 ReLU-21          [-1, 512, 28, 28]               0
               Conv2d-22          [-1, 512, 28, 28]       2,359,808
                 ReLU-23          [-1, 512, 28, 28]               0
            MaxPool2d-24          [-1, 512, 14, 14]               0
               Conv2d-25          [-1, 512, 14, 14]       2,359,808
                 ReLU-26          [-1, 512, 14, 14]               0
               Conv2d-27          [-1, 512, 14, 14]       2,359,808
                 ReLU-28          [-1, 512, 14, 14]               0
               Conv2d-29          [-1, 512, 14, 14]       2,359,808
                 ReLU-30          [-1, 512, 14, 14]               0
            MaxPool2d-31            [-1, 512, 7, 7]               0
               Linear-32                 [-1, 4096]     102,764,544
                 ReLU-33                 [-1, 4096]               0
              Dropout-34                 [-1, 4096]               0
               Linear-35                 [-1, 4096]      16,781,312
                 ReLU-36                 [-1, 4096]               0
              Dropout-37                 [-1, 4096]               0
               Linear-38                 [-1, 1000]       4,097,000
      ================================================================
      Total params: 138,357,544
      Trainable params: 138,357,544
      Non-trainable params: 0
      ----------------------------------------------------------------
      Input size (MB): 0.57
      Forward/backward pass size (MB): 218.59
      Params size (MB): 527.79
      Estimated Total Size (MB): 746.96
      ----------------------------------------------------------------
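    • Note: a maintained successor to pytorch-summary is published under the name torchinfo. Assuming that newer package is installed (pip install torchinfo), a minimal equivalent sketch looks like this:

    • # Assumes the torchinfo package, a maintained successor to pytorch-summary.
      from torchinfo import summary
      from torchvision import models
      #
      vgg = models.vgg16()
      # Unlike torchsummary, torchinfo takes the full input shape, batch dimension included.
      summary(vgg, input_size=(1, 3, 224, 224))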
    • SpiderWasp42 - vote: 185

    • While you will not get as detailed information about the model as in Keras' model.summary(), simply printing the model will give you some idea of the different layers involved and their specifications.

    • For instance:

    • from torchvision import models
      model = models.vgg16()
      print(model)
    • The output in this case would be something as follows:

    • VGG (
      (features): Sequential (
        (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU (inplace)
        (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU (inplace)
        (4): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
        (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (6): ReLU (inplace)
        (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (8): ReLU (inplace)
        (9): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
        (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (11): ReLU (inplace)
        (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (13): ReLU (inplace)
        (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (15): ReLU (inplace)
        (16): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
        (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (18): ReLU (inplace)
        (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (20): ReLU (inplace)
        (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (22): ReLU (inplace)
        (23): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
        (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (25): ReLU (inplace)
        (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (27): ReLU (inplace)
        (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (29): ReLU (inplace)
        (30): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
      )
      (classifier): Sequential (
        (0): Dropout (p = 0.5)
        (1): Linear (25088 -> 4096)
        (2): ReLU (inplace)
        (3): Dropout (p = 0.5)
        (4): Linear (4096 -> 4096)
        (5): ReLU (inplace)
        (6): Linear (4096 -> 1000)
      )
      )
    • Now you could, as Kashyap mentioned, use the state_dict method to get the weights of the different layers. But this listing of the layers perhaps provides more direction for creating a helper function that produces a Keras-like model summary (a minimal sketch of such a helper follows below). Hope this helps!
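
    • A minimal sketch of such a helper, assuming only torchvision for the demo model (the function name keras_like_summary is made up for illustration):

    • from torchvision import models
      #
      def keras_like_summary(model):
          # Walk every parameter tensor, print its name, shape, and element
          # count, then print Keras-style totals.
          total, trainable = 0, 0
          print(f"{'Layer (param)':<45}{'Shape':<25}{'Param #':>12}")
          print("=" * 82)
          for name, p in model.named_parameters():
              n = p.numel()
              total += n
              if p.requires_grad:
                  trainable += n
              print(f"{name:<45}{str(tuple(p.shape)):<25}{n:>12,}")
          print("=" * 82)
          print(f"Total params: {total:,}")
          print(f"Trainable params: {trainable:,}")
          print(f"Non-trainable params: {total - trainable:,}")
      #
      keras_like_summary(models.vgg16())  # totals match torchsummary: 138,357,544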

    • prosti - vote: 45

    • To use torchsummary, import it:

    • from torchsummary import summary
    • Install it first if you don't have it:

    • pip install torchsummary 
    • Then you can try it, but note that for some reason it does not work unless the model is first moved to CUDA with alexnet.cuda() (a CPU-only variant is sketched after the output below):

    • from torchsummary import summary
      help(summary)
      import torchvision.models as models
      alexnet = models.alexnet(pretrained=False)
      alexnet.cuda()
      summary(alexnet, (3, 224, 224))
      print(alexnet)
    • summary takes the input size, and batch_size is set to -1 by default, meaning any batch size we provide.

    • If we set summary(alexnet, (3, 224, 224), 32), this means use batch_size=32.

    • summary(model, input_size, batch_size=-1, device='cuda')

    • Output:

    • Help on function summary in module torchsummary.torchsummary:
      #
      summary(model, input_size, batch_size=-1, device='cuda')
      #
      ----------------------------------------------------------------
            Layer (type)               Output Shape         Param #
      ================================================================
                Conv2d-1           [32, 64, 55, 55]          23,296
                  ReLU-2           [32, 64, 55, 55]               0
             MaxPool2d-3           [32, 64, 27, 27]               0
                Conv2d-4          [32, 192, 27, 27]         307,392
                  ReLU-5          [32, 192, 27, 27]               0
             MaxPool2d-6          [32, 192, 13, 13]               0
                Conv2d-7          [32, 384, 13, 13]         663,936
                  ReLU-8          [32, 384, 13, 13]               0
                Conv2d-9          [32, 256, 13, 13]         884,992
                 ReLU-10          [32, 256, 13, 13]               0
               Conv2d-11          [32, 256, 13, 13]         590,080
                 ReLU-12          [32, 256, 13, 13]               0
            MaxPool2d-13            [32, 256, 6, 6]               0
      AdaptiveAvgPool2d-14            [32, 256, 6, 6]               0
              Dropout-15                 [32, 9216]               0
               Linear-16                 [32, 4096]      37,752,832
                 ReLU-17                 [32, 4096]               0
              Dropout-18                 [32, 4096]               0
               Linear-19                 [32, 4096]      16,781,312
                 ReLU-20                 [32, 4096]               0
               Linear-21                 [32, 1000]       4,097,000
      ================================================================
      Total params: 61,100,840
      Trainable params: 61,100,840
      Non-trainable params: 0
      ----------------------------------------------------------------
      Input size (MB): 18.38
      Forward/backward pass size (MB): 268.12
      Params size (MB): 233.08
      Estimated Total Size (MB): 519.58
      ----------------------------------------------------------------
      AlexNet(
      (features): Sequential(
        (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
        (1): ReLU(inplace)
        (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
        (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (4): ReLU(inplace)
        (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
        (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (7): ReLU(inplace)
        (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (9): ReLU(inplace)
        (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (11): ReLU(inplace)
        (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
      (classifier): Sequential(
        (0): Dropout(p=0.5)
        (1): Linear(in_features=9216, out_features=4096, bias=True)
        (2): ReLU(inplace)
        (3): Dropout(p=0.5)
        (4): Linear(in_features=4096, out_features=4096, bias=True)
        (5): ReLU(inplace)
        (6): Linear(in_features=4096, out_features=1000, bias=True)
      )
      )
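
    • Note: since the help text above shows a device parameter, the .cuda() call can presumably be skipped on a CPU-only machine by passing device='cpu'. A minimal sketch:

    • import torchvision.models as models
      from torchsummary import summary
      #
      # No alexnet.cuda() here; model and shape inference both stay on the CPU.
      alexnet = models.alexnet(pretrained=False)
      summary(alexnet, (3, 224, 224), device='cpu')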


Copyright notice:
Author: MWHLS
Link: https://panwj.top/3626.html
Source: 无镣之涯
The copyright of this article belongs to the author. Please do not reproduce it without permission.
