
Detailed process of calculating Flops and params in pytorch and tensorflow

Calculating flops and params in pytorch and tensorflow

1. Calculate only params

    net = model()  # the network model you have defined
    total = sum([param.nelement() for param in net.parameters()])
    print("Number of parameter: %.2fM" % (total / 1e6))

This is the most common approach you will see online. It relies only on PyTorch's built-in attributes to count parameters, so there is basically nothing that can go wrong, and its biggest advantage is its simplicity.
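
As a concrete illustration (this example is mine, not part of the original snippet), you can run the same counting code on torchvision's resnet18:

from torchvision.models import resnet18  # any off-the-shelf model works as an example

net = resnet18()
total = sum(param.nelement() for param in net.parameters())
print("Number of parameter: %.2fM" % (total / 1e6))  # about 11.69M for resnet18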

2. Calculate flops and params

PyTorch has no built-in method for counting flops, so you basically have to install a third-party library.
Here we install the thop library.

pip install thop  # install the thop library
import torch
from thop import profile
net = model()  # the network model you have defined
img1 = torch.randn(1, 3, 512, 512)
img2 = torch.randn(1, 3, 512, 512)
img3 = torch.randn(1, 3, 512, 512)
macs, params = profile(net, inputs=(img1, img2, img3))
print('flops: ', 2 * macs, 'params: ', params)

The difference between this and other online tutorials is that most of them do not distinguish between MACs and FLOPs. MACs stands for multiply-accumulate operations: one multiplication plus one addition counts as one MAC. FLOPs stands for the number of floating-point operations: every addition, subtraction, multiplication, or division counts as one FLOP. Numerically, therefore, 1 MAC = 2 FLOPs, which is why the code above prints 2*macs as the flops figure. Also, (img1, img2, img3) means the model takes three inputs; if your model accepts multiple inputs, pass them to profile like this.
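
As a quick sanity check of this bookkeeping (my own sketch, not part of the original tutorial), you can compare thop's count for a single 3x3 convolution with the value computed by hand; the numbers assume thop's standard Conv2d counter with bias disabled and may differ slightly across thop versions:

import torch
import torch.nn as nn
from thop import profile

# One 3x3 convolution, bias disabled to keep the arithmetic simple
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1, bias=False)
x = torch.randn(1, 3, 32, 32)
macs, params = profile(conv, inputs=(x,))

# Every output element needs kernel_h * kernel_w * C_in multiply-accumulates:
# 16 * 32 * 32 output elements, each needing 3 * 3 * 3 = 27 MACs
expected_macs = 16 * 32 * 32 * (3 * 3 * 3)   # 442368
expected_params = 16 * 3 * 3 * 3             # 432 weights, no bias

print(macs, expected_macs)       # should agree, up to thop's counting conventions
print(params, expected_params)
print('flops:', 2 * macs)        # 1 MAC = 1 multiply + 1 add = 2 FLOPs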

Also note that params depends only on the number of model parameters, not on the size of the input tensor, whereas flops does depend on the input image size.
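
To see both points at once, here is a small sketch of mine (using torchvision's resnet18 purely as an example) that profiles the same architecture at two input sizes:

import torch
from thop import profile
from torchvision.models import resnet18  # example model only

for size in (224, 512):
    net = resnet18()          # rebuild the model so thop's internal buffers start fresh
    x = torch.randn(1, 3, size, size)
    macs, params = profile(net, inputs=(x,))
    print('input %dx%d  flops: %.3e  params: %.3e' % (size, size, 2 * macs, params))

The params value is identical in both runs, while flops grows roughly with the number of input pixels, about (512/224)^2 ≈ 5.2x here.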

Calculate params and flops in tensorflow

Here are some methods I found for calculating params and flops in tensorflow. They are for reference only, and I cannot guarantee the results.

import numpy as np

def get_flops_params():
    sess = tf.Session()
    graph = sess.graph
    flops = tf.profiler.profile(graph, options=tf.profiler.ProfileOptionBuilder.float_operation())
    params = tf.profiler.profile(graph,
                                 options=tf.profiler.ProfileOptionBuilder.trainable_variables_parameter())
    print('FLOPs: {};    Trainable params: {}'.format(flops.total_float_ops, params.total_parameters))
def count2():
    # total params = sum over all trainable variables of the product of their shape dimensions
    print(np.sum([np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()]))
def get_nb_params_shape(shape):
    '''
    Computes the total number of params for a given shape.
    Works for any number of dimensions, e.g. [D,F] or [W,H,C], computing D*F or W*H*C.
    '''
    nb_params = 1
    for dim in shape:
        nb_params = nb_params * int(dim)
    return nb_params
def count3():
    tot_nb_params = 0
    for trainable_variable in tf.trainable_variables():
        shape = trainable_variable.get_shape()  #  [D,F] or [W,H,C]
        current_nb_params = get_nb_params_shape(shape)
        tot_nb_params = tot_nb_params + current_nb_params
    print(tot_nb_params)
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
from model import Model  # the author's own model definition
from tensorflow.compat.v1.keras import backend as K
def get_flops(model):
    run_meta = tf.RunMetadata()
    opts = tf.profiler.ProfileOptionBuilder.float_operation()
    # We use the Keras session graph in the call to the profiler.
    flops = tf.profiler.profile(graph=K.get_session().graph,
                                run_meta=run_meta, cmd='op', options=opts)
    return flops.total_float_ops  # total "flops" of the model
# .... Define your model here ....
M = Model(BATCH_SIZE=1, INPUT_H=268, INPUT_W=360, is_training=False)
print(get_flops(M))
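
If you do not have the model module used above, here is a minimal usage sketch (an assumption on my part, not from the original article) that builds a small tf.keras Sequential network and passes it to get_flops(). Note that the profiler counts every op in the current default graph, so run it in a fresh program rather than after building other models:

# assumes the imports and get_flops() definition above
simple = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
print(get_flops(simple))  # total_float_ops reported by the TF1-style profiler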

That concludes this article on calculating Flops and params in pytorch and tensorflow. For more related content, please search my earlier articles or continue browsing the related articles below. I hope you will continue to support me in the future!