
The two ways TensorFlow trains networks, explained in detail

TensorFlow trains networks in two ways: one is based on tensors (arrays), the other on an iterator.

The difference between the two is:

  • The first packs the data into a batched iterator (a tf.data.Dataset), then traverses the iterator and trains on each batch separately
  • The second loads all the data into one tensor, then calls model.fit() and passes the batch_size parameter so that Keras splits the data into batches internally (a minimal sketch contrasting the two follows this list)
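
Before the full examples, here is a minimal sketch of the two call patterns on dummy data. The tiny Dense model and the random tensors are placeholders for illustration only, not the LeNet5 used in the rest of the article.

import tensorflow as tf

# Dummy data standing in for a real dataset
x = tf.random.normal([100, 4])
y = tf.random.uniform([100], maxval=2, dtype=tf.int32)

# A toy model; the article's LeNet5 would take its place
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Way 1: batch the data yourself with tf.data and loop over the iterator
loader = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
for xb, yb in loader:
    model.fit(xb, yb, verbose=0)

# Way 2: hand the whole tensor to fit() and let Keras batch it
model.fit(x, y, batch_size=32, verbose=0)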

Way 1: via iterator

import tensorflow as tf
from tensorflow.keras import Input
# LeNet5 is the author's own model definition; the module name here is assumed
from lenet5 import LeNet5

# Number of samples to keep from each split
IMAGE_SIZE = 1000

# step1: load the dataset (MNIST)
(train_images, train_labels), (val_images, val_labels) = tf.keras.datasets.mnist.load_data()

# step2: Normalize the image
train_images, val_images = train_images / 255.0, val_images / 255.0

# step3: keep only the first IMAGE_SIZE samples of each split
train_images = train_images[:IMAGE_SIZE]
val_images = val_images[:IMAGE_SIZE]
train_labels = train_labels[:IMAGE_SIZE]
val_labels = val_labels[:IMAGE_SIZE]

# step4: change the dimension of the image to (IMAGE_SIZE,28,28,1)
train_images = tf.expand_dims(train_images, axis=3)
val_images = tf.expand_dims(val_images, axis=3)

# step5: resize the images to (32,32)
train_images = tf.image.resize(train_images, [32, 32])
val_images = tf.image.resize(val_images, [32, 32])

# step6: turn the data into batched iterators
train_loader = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).batch(32)
val_loader = tf.data.Dataset.from_tensor_slices((val_images, val_labels)).batch(IMAGE_SIZE)

# step7: instantiate the model
model = LeNet5()

# Let the model know the shape of its input data
model.build(input_shape=(1, 32, 32, 1))

# Without this call, the Output Shape column of summary() would show "multiple"
model.call(Input(shape=(32, 32, 1)))

# step8: compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Path where the weights are stored
checkpoint_path = "./weight/"

# Callback used to save the weights
save_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                   save_best_only=True,
                                                   save_weights_only=True,
                                                   monitor='val_loss',
                                                   verbose=0)

EPOCHS = 11

for epoch in range(1, EPOCHS):
    # Average training loss over the epoch
    train_epoch_loss_avg = tf.keras.metrics.Mean()
    # Training accuracy over the epoch
    train_epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    # Average validation loss over the epoch
    val_epoch_loss_avg = tf.keras.metrics.Mean()
    # Validation accuracy over the epoch
    val_epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

    for x, y in train_loader:
        history = model.fit(x,
                            y,
                            validation_data=val_loader,
                            callbacks=[save_callback],
                            verbose=0)

        # Accumulate this batch's training loss (fit ran one epoch, so take element 0)
        train_epoch_loss_avg.update_state(history.history['loss'][0])
        # Accumulate this batch's training accuracy
        train_epoch_accuracy.update_state(y, model(x, training=True))

        # Accumulate this batch's validation loss and accuracy
        val_epoch_loss_avg.update_state(history.history['val_loss'][0])
        val_epoch_accuracy.update_state(next(iter(val_loader))[1],
                                        model(next(iter(val_loader))[0], training=True))

    # Read out the accumulated loss and accuracy for the epoch with .result()
    print("Epoch {:d}: trainLoss: {:.3f}, trainAccuracy: {:.3%} valLoss: {:.3f}, valAccuracy: {:.3%}".format(epoch,
                                                                                                             train_epoch_loss_avg.result(),
                                                                                                             train_epoch_accuracy.result(),
                                                                                                             val_epoch_loss_avg.result(),
                                                                                                             val_epoch_accuracy.result()))
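
Because the ModelCheckpoint callback was configured with save_weights_only=True, the best weights written under checkpoint_path can be restored later. A minimal sketch, assuming the same LeNet5 class, checkpoint path, and TensorFlow's default checkpoint format as above:

# Rebuild the model and restore the best weights saved by the callback
model = LeNet5()
model.build(input_shape=(1, 32, 32, 1))
model.load_weights(checkpoint_path)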

Way 2: batch training with model.fit()

import tensorflow as tf
from tensorflow.keras import Input
# model_sequential is the author's own module containing the LeNet definition
import model_sequential

# step1: load the dataset (MNIST)
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

# step2: Normalize the image
train_images, test_images = train_images / 255.0, test_images / 255.0

# step3: change the dimension of the images to (60000,28,28,1)
train_images = tf.expand_dims(train_images, axis=3)
test_images = tf.expand_dims(test_images, axis=3)

# step4: resize the images to (60000,32,32,1)
train_images = tf.image.resize(train_images, [32, 32])
test_images = tf.image.resize(test_images, [32, 32])

# step5: instantiate the model
# model = LeNet5()
model = model_sequential.LeNet()

# Let the model know the shape of its input data
model.build(input_shape=(1, 32, 32, 1))
# model.build(tf.TensorShape([1, 32, 32, 1]))  # equivalent build call

# Without this call, the Output Shape column of summary() would show "multiple"
model.call(Input(shape=(32, 32, 1)))
model.summary()

# step6: compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Path where the weights are stored
checkpoint_path = "./weight/"

# Callback used to save the weights
save_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                   save_best_only=True,
                                                   save_weights_only=True,
                                                   monitor='val_loss',
                                                   verbose=1)
# step7: train the model
history = model.fit(train_images,
                    train_labels,
                    epochs=10,
                    batch_size=32,
                    validation_data=(test_images, test_labels),
                    callbacks=[save_callback])
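
The History object returned by model.fit() records each compiled metric once per epoch, so the training curves can be read back from history.history (key names follow the compile settings above):

# Per-epoch metric curves collected by fit()
print(history.history['loss'])          # training loss for each of the 10 epochs
print(history.history['val_accuracy'])  # validation accuracy for each epoch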

This concludes the detailed look at the two ways TensorFlow trains networks. For more on training networks with TensorFlow, please search my previous articles or continue browsing the related articles below. I hope you will continue to support me!