Parameters and their roles
tf.layers.dense is used to add a fully connected layer. The function signature is as follows:
tf.layers.dense(
    inputs,                                   # layer inputs
    units,                                    # output dimension of the layer
    activation=None,                          # activation function
    use_bias=True,                            # whether to use a bias term
    kernel_initializer=None,                  # initializer for the weight (kernel) matrix
    bias_initializer=tf.zeros_initializer(),  # initializer for the bias term
    kernel_regularizer=None,                  # regularizer for the weight matrix
    bias_regularizer=None,                    # regularizer for the bias term
    activity_regularizer=None,                # regularizer for the layer output
    kernel_constraint=None,
    bias_constraint=None,
    trainable=True,                           # whether the layer's variables are trained
    name=None,                                # name of the layer
    reuse=None                                # whether to reuse the layer's variables
)
Explanation of the main parameters:
inputs: the input tensor to this layer.
units: the output dimension of the layer, i.e. the number of neurons.
activation: the activation function; None (the default) gives a linear output.
use_bias: whether to add a bias term.
trainable: whether the layer's parameters take part in training; defaults to True.
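A minimal sketch of a call (TensorFlow 1.x assumed; the placeholder shape and the layer name 'demo_fc' are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])  # batch of flattened 28x28 images
h = tf.layers.dense(x, units=200, activation=tf.nn.relu, name='demo_fc')

print(h.shape)                   # (?, 200): units fixes the output dimension
print(tf.trainable_variables())  # demo_fc/kernel (784x200) and demo_fc/bias (200,), created because use_bias=True

Setting trainable=False would still create the kernel and bias variables, but they would be excluded from tf.trainable_variables(), so an optimizer would leave them untouched.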
Typical example
A handwritten-digit (MNIST) recognition example: two dense layers form a network with a single hidden layer, and in the example below that hidden layer has 200 neurons.
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

def compute_accuracy(x_data, y_data):
    global dense2
    y_pre = sess.run(dense2, feed_dict={xs: x_data})
    correct_prediction = tf.equal(tf.argmax(y_data, 1), tf.argmax(y_pre, 1))  # check whether prediction and label agree
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))        # cast to float32 and take the mean
    result = sess.run(accuracy, feed_dict={xs: x_data, ys: y_data})           # evaluate
    return result

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

xs = tf.placeholder(tf.float32, [None, 784])
ys = tf.placeholder(tf.float32, [None, 10])

dense1 = tf.layers.dense(
    xs,
    200,
    activation=tf.nn.relu,  # the activation was lost in the original listing; ReLU is an assumed, common choice
    kernel_initializer=tf.random_normal_initializer(mean=0, stddev=0.3),
    bias_initializer=tf.constant_initializer(0.1),
    name='fc1'
)
dense2 = tf.layers.dense(
    dense1,
    10,
    activation=None,  # leave the outputs as logits; softmax_cross_entropy_with_logits applies softmax itself
    kernel_initializer=tf.random_normal_initializer(mean=0, stddev=0.3),
    bias_initializer=tf.constant_initializer(0.1),
    name='fc2'
)

# labels are the true labels, logits the predicted values; the loss is their cross-entropy
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=dense2, labels=ys), name='loss')
train = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(5001):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train, feed_dict={xs: batch_xs, ys: batch_ys})
        if i % 1000 == 0:
            print("The recognition rate for training %d times is: %f." % (i + 1, compute_accuracy(mnist.test.images, mnist.test.labels)))
The results of the experiment were:
The recognition rate for 1 training session is: 0.107400.
The recognition rate for 1001 training sessions is: 0.805200.
The recognition rate for 2001 training sessions is: 0.822800.
The recognition rate for 3001 training sessions is: 0.829400.
The recognition rate for 4001 training sessions is: 0.833100.
The recognition rate for 5001 training sessions is: 0.835300.
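The reuse parameter from the signature above is not exercised in the example; here is a small sketch of how it shares one set of weights between two inputs (TensorFlow 1.x assumed; the placeholder shapes and the layer name 'shared_fc' are illustrative):

import tensorflow as tf

a = tf.placeholder(tf.float32, [None, 784])
b = tf.placeholder(tf.float32, [None, 784])

out_a = tf.layers.dense(a, 200, name='shared_fc')              # creates shared_fc/kernel and shared_fc/bias
out_b = tf.layers.dense(b, 200, name='shared_fc', reuse=True)  # reuses those same variables instead of creating new ones

print(len(tf.trainable_variables()))  # 2: one kernel and one bias serve both branches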