
Two implementations of outputting intermediate layer results in Keras

1. Using the functional model API: create a new model whose inputs are the original model's inputs and whose outputs are the output of the desired layer, then call predict on that new model.

#coding=utf-8
import seaborn as sbn  # not needed for this example
import pylab as plt    # not needed for this example
import theano
from keras.models import Sequential
from keras.layers import Dense, Activation
 
from keras.models import Model
 
# Build a small fully-connected model; name the layers we want to inspect later
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(16, activation='relu', name="Dense_1"))
model.add(Dense(1, activation='sigmoid', name="Dense_2"))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
 
# Generate dummy data
import numpy as np
# Assume that training and testing use the same set of data
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))
 
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
# For an existing model, do the same after loading its weights
# Take a layer's output as the output of a new model, using the functional API
dense1_layer_model = Model(inputs=model.input,
                           outputs=model.get_layer('Dense_1').output)
# The predictions of this new model are the intermediate layer's activations
dense1_output = dense1_layer_model.predict(data)
 
print(dense1_output.shape)  # (1000, 16): one 16-dimensional vector per sample
print(dense1_output[0])
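
Note that dense1_layer_model shares its layers, and therefore its trained weights, with the original model; no retraining is involved. Calling predict on it simply runs a forward pass that stops at Dense_1 and returns that layer's activations.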

2. Since I am using Theano as the backend, I can also use a Theano function directly:

# dense1 is a Theano function from the model's input to the output of layers[1] (Dense_1)
dense1 = theano.function([model.layers[0].input], model.layers[1].output,
                         allow_input_downcast=True)
dense1_output = dense1(data)  # the FC-layer features of these samples
print(dense1_output[0])

Both approaches should produce the same result.
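
For reference, Keras also exposes a backend-agnostic wrapper, keras.backend.function, that expresses the same idea without calling Theano directly. The following is only a minimal sketch, assuming the model and data from the first example are already built and trained:

from keras import backend as K

# K.function compiles a backend function from a list of input tensors
# to a list of output tensors, regardless of which backend is in use.
get_dense1_output = K.function([model.layers[0].input],
                               [model.get_layer('Dense_1').output])

# The compiled function takes a list of arrays and returns a list of arrays.
dense1_output = get_dense1_output([data])[0]
print(dense1_output[0])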

That is all for the two implementations of outputting intermediate layer results in Keras. I hope it gives you a useful reference, and I hope you will continue to support this site.