The CNN I have been working on recently has been giving less-than-ideal results. Its goal is to detect a set of keypoints in an image — in other words, a facial keypoint detection algorithm.
However, since I built the dataset myself, its quality is debatable, and the poor training results may well be because the dataset was not annotated properly (labeling all those points was just too tiring).
So I wanted to see what my data actually looks like as it passes through the network's intermediate hidden layers.
Today's main focus is to try this out ahead of time on an already trained MNIST model, which reaches about 98% accuracy here.
One of the simpler models used:
def simple_cnn():
    input_data = Input(shape=(28, 28, 1))
    x = Conv2D(64, kernel_size=3, padding='same', activation='relu', name='conv1')(input_data)
    x = MaxPooling2D(pool_size=2, strides=2, name='maxpool1')(x)
    x = Conv2D(32, kernel_size=3, padding='same', activation='relu', name='conv2')(x)
    x = MaxPooling2D(pool_size=2, strides=2, name='maxpool2')(x)
    x = Dropout(0.25)(x)
    # Take the output of the last convolutional block
    # and add the fully connected head
    x = Flatten(name='flatten')(x)
    x = Dense(128, activation='relu', name='fc1')(x)
    x = Dropout(0.25)(x)
    x = Dense(10, activation='softmax', name='fc2')(x)
    model = Model(inputs=input_data, outputs=x)
    return model
This model was trained for 10 epochs with a validation split of 0.33.
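For reference, here is a minimal sketch of what that compile-and-fit step might look like. The optimizer, loss, and batch size are my assumptions (the post only states 10 epochs and a 0.33 validation split), and random stand-in data replaces the real MNIST arrays so the snippet runs on its own:

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.models import Model

def simple_cnn():
    # Same architecture as defined above
    input_data = Input(shape=(28, 28, 1))
    x = Conv2D(64, kernel_size=3, padding='same', activation='relu', name='conv1')(input_data)
    x = MaxPooling2D(pool_size=2, strides=2, name='maxpool1')(x)
    x = Conv2D(32, kernel_size=3, padding='same', activation='relu', name='conv2')(x)
    x = MaxPooling2D(pool_size=2, strides=2, name='maxpool2')(x)
    x = Dropout(0.25)(x)
    x = Flatten(name='flatten')(x)
    x = Dense(128, activation='relu', name='fc1')(x)
    x = Dropout(0.25)(x)
    x = Dense(10, activation='softmax', name='fc2')(x)
    return Model(inputs=input_data, outputs=x)

model = simple_cnn()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Random stand-in data; swap in the real MNIST training arrays here
x_train = np.random.rand(60, 28, 28, 1).astype('float32')
y_train = np.eye(10)[np.random.randint(0, 10, 60)].astype('float32')

history = model.fit(x_train, y_train, epochs=10, validation_split=0.33, verbose=0)
```

With real data, the trained weights would then be saved with `model.save_weights(...)` so they can be reloaded for the visualization step below.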
It still works well here, ┓( ´∀` )┏
Here's a handwritten digit taken from the internet.
Using the network for prediction, let's start with how to visualize the output of the first convolutional layer, wahaha.
Code:
import cv2
import numpy as np
from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.models import Model
from keras.preprocessing.image import load_img

# Rebuild the same architecture, then load the trained weights
input_data = Input(shape=(28, 28, 1))
x = Conv2D(64, kernel_size=3, padding='same', activation='relu', name='conv1')(input_data)
x = MaxPooling2D(pool_size=2, strides=2, name='maxpool1')(x)
x = Conv2D(32, kernel_size=3, padding='same', activation='relu', name='conv2')(x)
x = MaxPooling2D(pool_size=2, strides=2, name='maxpool2')(x)
x = Dropout(0.25)(x)
x = Flatten(name='flatten')(x)
x = Dense(128, activation='relu', name='fc1')(x)
x = Dropout(0.25)(x)
x = Dense(10, activation='softmax', name='fc2')(x)
model = Model(inputs=input_data, outputs=x)
model.load_weights('final_model_mnist_2019_1_28.h5')

# Load the test image (the image path was elided in the source)
raw_img = cv2.imread('')
test_img = load_img('', color_mode='grayscale', target_size=(28, 28))
test_img = np.array(test_img)
test_img = np.expand_dims(test_img, axis=0)
test_img = np.expand_dims(test_img, axis=3)

# Sub-model ending at conv1 (index 0 is the input layer, index 1 is conv1)
conv1_layer = Model(inputs=input_data, outputs=model.get_layer(index=1).output)
conv1_output = conv1_layer.predict(test_img)

# Display each of conv1's 64 feature maps in turn
for i in range(64):
    show_img = conv1_output[:, :, :, i]
    print(show_img.shape)
    show_img.shape = [28, 28]
    cv2.imshow('img', show_img)
    cv2.waitKey(0)
The core method is: load the trained model, then create a new Model whose output is replaced by the layer you want to inspect — get_layer() accepts either a name or an index parameter. Finally, iterate over all the feature maps of that convolutional layer and display each one. And that's it.
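The same trick works with the name parameter instead of index. A minimal self-contained sketch (random input stands in for a real image, and the small two-conv model here is just for illustration):

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D
from tensorflow.keras.models import Model

# A toy model with named layers
input_data = Input(shape=(28, 28, 1))
x = Conv2D(64, kernel_size=3, padding='same', activation='relu', name='conv1')(input_data)
x = MaxPooling2D(pool_size=2, strides=2, name='maxpool1')(x)
x = Conv2D(32, kernel_size=3, padding='same', activation='relu', name='conv2')(x)
model = Model(inputs=input_data, outputs=x)

# Sub-model ending at 'conv2' -- get_layer accepts name= or index=
conv2_layer = Model(inputs=model.input, outputs=model.get_layer('conv2').output)

test_img = np.random.rand(1, 28, 28, 1).astype('float32')
conv2_output = conv2_layer.predict(test_img)
print(conv2_output.shape)  # (1, 14, 14, 32): 28x28 halved by pooling, 32 filters
```

From here, each `conv2_output[0, :, :, i]` is one 14x14 feature map that can be displayed exactly as in the loop above.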
That's all I have to share on this Keras feature-map visualization example (intermediate layers). I hope it gives you a useful reference, and I hope for your continued support.