Mixing TensorFlow and Keras is usually fine, but here load_model reported an error.
The error: TypeError: tuple indices must be integers or slices, not list
Searching Baidu turned up nothing, but Google led to a similar question. At first it was in a language I could not read at all (it turned out to be Russian after translation), and Google Translate eventually revealed the solution. I am therefore posting the original problem and the fix here as a warning!
Original training code
# Import paths were stripped in the original post; per the fix described below,
# the training script pulled its layers from TensorFlow's bundled Keras.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense

# Directory with training data
train_dir = 'train'
# Directory with validation data
val_dir = 'val'
# Directory with test data
test_dir = 'val'
# Image dimensions
img_width, img_height = 800, 800
# Shape of the input tensor for the network
# (TensorFlow backend, channels_last)
input_shape = (img_width, img_height, 3)
# Number of epochs
epochs = 1
# Mini-batch size
batch_size = 4
# Number of training images
nb_train_samples = 300
# Number of validation images
nb_validation_samples = 25
# Number of test images
nb_test_samples = 25

model = Sequential()
model.add(Conv2D(32, (7, 7), padding="same", input_shape=input_shape))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(10, 10)))
model.add(Conv2D(64, (5, 5), padding="same"))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(10, 10)))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer="Nadam", metrics=['accuracy'])
print(model.summary())

datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = datagen.flow_from_directory(
    train_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
val_generator = datagen.flow_from_directory(
    val_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
test_generator = datagen.flow_from_directory(
    test_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=val_generator,
    validation_steps=nb_validation_samples // batch_size)

print('Сохраняем сеть')  # "Saving the network"
model.save("grib.h5")
print("Сохранение завершено!")  # "Saving finished!"
Model loading
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.models import load_model

print("Загрузка сети")  # "Loading the network"
model = load_model("grib.h5")
print("Загрузка завершена!")  # "Loading finished!"
The error output:
/usr/bin/python3.5 /home/disk2/py/neroset/
/home/mama/.local/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Загрузка сети
Traceback (most recent call last):
File "/home/disk2/py/neroset/", line 13, in <module>
model = load_model("grib.h5")
File "/usr/local/lib/python3.5/dist-packages/keras/", line 243, in load_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/usr/local/lib/python3.5/dist-packages/keras/", line 317, in model_from_config
return layer_module.deserialize(config, custom_objects=custom_objects)
File "/usr/local/lib/python3.5/dist-packages/keras/layers/__init__.py", line 55, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 144, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python3.5/dist-packages/keras/", line 1350, in from_config
model.add(layer)
File "/usr/local/lib/python3.5/dist-packages/keras/", line 492, in add
output_tensor = layer(self.outputs[0])
File "/usr/local/lib/python3.5/dist-packages/keras/engine/", line 590, in __call__
self.build(input_shapes[0])
File "/usr/local/lib/python3.5/dist-packages/keras/layers/", line 92, in build
dim = input_shape[self.axis]
TypeError: tuple indices must be integers or slices, not list
Process finished with exit code 1
Explanation from the original answer
I removed BatchNormalization and everything worked fine. It turned out that saving the model with standalone Keras and TensorFlow's BatchNormalization do not work together; it is enough to change the import lines.
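The list-valued index can be seen directly in the saved file. As a minimal sketch (assuming the model was saved as grib.h5 as above), one can read the serialized config with h5py and check how BatchNormalization's axis was stored:

import json
import h5py

# A model saved by TensorFlow's bundled Keras stores BatchNormalization's
# "axis" as a list (e.g. [3]); standalone Keras then evaluates
# input_shape[axis] on a tuple and fails with the TypeError above.
with h5py.File("grib.h5", "r") as f:
    raw = f.attrs["model_config"]
config = json.loads(raw.decode("utf-8") if isinstance(raw, bytes) else raw)

layers = config["config"]
if isinstance(layers, dict):          # newer serialization format nests the list
    layers = layers.get("layers", [])
for layer in layers:
    if layer["class_name"] == "BatchNormalization":
        axis = layer["config"]["axis"]
        print("BatchNormalization axis:", axis, type(axis))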
The key change is in the import lines:
# The fix: take every layer, including BatchNormalization, from the same keras package.
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.layers import Activation, Dropout, Flatten, Dense
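As a sanity check, a tiny model containing BatchNormalization that is saved and reloaded with these same keras imports round-trips without the error. This is only a sketch; the file name bn_check.h5 is made up for the test:

from keras.models import Sequential, load_model
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.layers import Activation, Flatten, Dense

# Build a small model that contains BatchNormalization, using only keras.* imports.
model = Sequential()
model.add(Conv2D(8, (3, 3), padding="same", input_shape=(32, 32, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')

# Save and reload with the same package: no "tuple indices" TypeError.
model.save("bn_check.h5")
reloaded = load_model("bn_check.h5")
reloaded.summary()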
Perfect solution!
Link to the original article:
/questions/keras-batchnormalization
Addendum: be careful when loading Keras and TensorFlow models at the same time
In one project, a Keras model was loaded only to get the model's input size, and then the .pb model (converted from Keras to TensorFlow) was loaded for prediction.
This reported an error:
Attempting to use uninitialized value batch_normalization_14/moving_mean
Browsing the forums, the common suggestion is to add variable initialization:
sess.run(tf.global_variables_initializer())
However, that re-initializes all of the model parameters, so the trained weights can no longer be used for prediction.
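A tiny TF1-style sketch of why that suggestion does not help (the variable w here is hypothetical, standing in for a trained weight): running the initializer overwrites whatever values the variables held.

import tensorflow as tf

# w plays the role of a trained weight.
w = tf.Variable(tf.zeros([1]), name="w")
set_trained = tf.assign(w, [0.75])  # pretend 0.75 came from training

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(set_trained)
    print(sess.run(w))                           # [0.75] -> the "trained" value
    sess.run(tf.global_variables_initializer())  # the suggested "fix"...
    print(sess.run(w))                           # [0.]   -> back to the initial value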
In the end it turned out that simply removing the loading of the Keras model was enough.
My guess as to why: loading a Keras model and a TensorFlow model in the same session has pitfalls.
import cv2
import numpy as np
from keras.models import load_model
# get_labels / preprocess_input are project-specific helpers; their module
# paths were lost in the original post (the paths shown here are placeholders).
from utils.datasets import get_labels
from utils.preprocessor import preprocess_input
import time
import os
import tensorflow as tf
from tensorflow.python.platform import gfile

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

emotion_labels = get_labels('fer2013')
emotion_target_size = (64, 64)
# emotion_model_path = './models/emotion_model.hdf5'
# emotion_classifier = load_model(emotion_model_path)          # removing this Keras load fixed the error
# emotion_target_size = emotion_classifier.input_shape[1:3]

path = '/mnt/nas/cv_data/emotion/test'
filelist = os.listdir(path)
total_num = len(filelist)
timeall = 0
n = 0

sess = tf.Session()
# sess.run(tf.global_variables_initializer())                  # would wipe the trained weights
with gfile.FastGFile("./trans_model/emotion_mode.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    sess.graph.as_default()
    tf.import_graph_def(graph_def, name='')

pred = sess.graph.get_tensor_by_name("predictions/Softmax:0")

###################### img ##########################
for item in filelist:
    if (item == '.DS_Store') | (item == ''):
        continue
    src = os.path.join(path, item)
    bgr_image = cv2.imread(src)
    gray_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray_face = gray_image
    try:
        gray_face = cv2.resize(gray_face, emotion_target_size)
    except:
        continue
    gray_face = preprocess_input(gray_face, True)
    gray_face = np.expand_dims(gray_face, 0)
    gray_face = np.expand_dims(gray_face, -1)
    input = sess.graph.get_tensor_by_name('input_1:0')
    res = sess.run(pred, {input: gray_face})
    print("src:", src)
    emotion_probability = np.max(res[0])
    emotion_label_arg = np.argmax(res[0])
    emotion_text = emotion_labels[emotion_label_arg]
    print("predict:", res[0], ",prob:", emotion_probability, ",label:", emotion_label_arg,
          ",text:", emotion_text)
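A workaround sketch that avoids loading the Keras model at all: give the frozen graph its own tf.Graph and tf.Session so it cannot collide with anything Keras puts into the default graph, and read the input size straight from the graph's input tensor. The .pb path and tensor names are the ones used in the script above; everything else is an assumption.

import tensorflow as tf
from tensorflow.python.platform import gfile

# Load the frozen graph into a dedicated Graph.
pb_graph = tf.Graph()
with pb_graph.as_default():
    graph_def = tf.GraphDef()
    with gfile.FastGFile("./trans_model/emotion_mode.pb", 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=pb_graph)
input_tensor = pb_graph.get_tensor_by_name('input_1:0')
pred = pb_graph.get_tensor_by_name('predictions/Softmax:0')

# The expected input size comes from the graph itself instead of the
# Keras .hdf5 model, e.g. shape (?, 64, 64, 1) -> target size (64, 64).
emotion_target_size = tuple(input_tensor.shape.as_list()[1:3])
print("target size:", emotion_target_size)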
The above is my personal experience. I hope it gives you a useful reference, and I hope you will keep supporting me.