There are several ways to define input nodes in TensorFlow:
Definition via placeholders: the most commonly used approach.
Definition via a dictionary of placeholders: typically used when the model has many inputs.
Direct definition: rarely used in practice.
I. Definition via placeholders
1 Example
Using the tf.placeholder function, the input nodes are defined as follows:
X = tf.placeholder("float")
Y = tf.placeholder("float")
II. Definition via a dictionary
1 Example
Define the input nodes as a dictionary of placeholders.
2 Key Code
# Create the model
# Placeholders, collected in a dictionary
inputdict = {
    'x': tf.placeholder("float"),
    'y': tf.placeholder("float")
}
3 Interpretation
Defining inputs via a dictionary is essentially the same as the placeholder approach; the only difference is that the placeholders are collected in a single dictionary, which keeps the code manageable when a model has many inputs.
4 All codes
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

plotdata = {"batchsize": [], "loss": []}

def moving_average(a, w=10):
    if len(a) < w:
        return a[:]
    return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]

# Generate simulated data
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3  # y=2x, plus noise

# Graphic display
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.legend()
plt.show()

# Create the model
# Placeholders, collected in a dictionary
inputdict = {
    'x': tf.placeholder("float"),
    'y': tf.placeholder("float")
}

# Model parameters
W = tf.Variable(tf.random_normal([1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

# Forward structure
z = tf.multiply(inputdict['x'], W) + b

# Backward optimization
cost = tf.reduce_mean(tf.square(inputdict['y'] - z))
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)  # Gradient descent

# Initialize variables
init = tf.global_variables_initializer()

# Training parameters
training_epochs = 20
display_step = 2

# Start the session
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={inputdict['x']: x, inputdict['y']: y})

        # Show details of training in progress
        if epoch % display_step == 0:
            loss = sess.run(cost, feed_dict={inputdict['x']: train_X, inputdict['y']: train_Y})
            print("Epoch:", epoch + 1, "cost=", loss, "W=", sess.run(W), "b=", sess.run(b))
            if not (loss == "NA"):
                plotdata["batchsize"].append(epoch)
                plotdata["loss"].append(loss)

    print("Finished!")
    print("cost=", sess.run(cost, feed_dict={inputdict['x']: train_X, inputdict['y']: train_Y}),
          "W=", sess.run(W), "b=", sess.run(b))

    # Graphic display
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()

    plotdata["avgloss"] = moving_average(plotdata["loss"])
    plt.figure(1)
    plt.subplot(211)
    plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
    plt.xlabel('Minibatch number')
    plt.ylabel('Loss')
    plt.title('Minibatch run vs. Training loss')
    plt.show()

    print("x=0.2, z=", sess.run(z, feed_dict={inputdict['x']: 0.2}))
5 Run results
III. Direct definition
1 Example
Direct definition of input nodes.
2 Interpretation
Direct definition: Python variables holding the simulated data are placed directly into the OP nodes as inputs, so the data participates in training without placeholders or a feed_dict.
3 Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Generate simulated data
train_X = np.float32(np.linspace(-1, 1, 100))
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3  # y=2x, plus noise

# Graphic display
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.legend()
plt.show()

# Create the model
# Model parameters
W = tf.Variable(tf.random_normal([1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

# Forward structure: the training data is used directly, no placeholders
z = tf.multiply(W, train_X) + b

# Backward optimization
cost = tf.reduce_mean(tf.square(train_Y - z))
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)  # Gradient descent

# Initialize variables
init = tf.global_variables_initializer()

# Training parameters
training_epochs = 20
display_step = 2

# Start the session
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer)

        # Show details of training in progress
        if epoch % display_step == 0:
            loss = sess.run(cost)
            print("Epoch:", epoch + 1, "cost=", loss, "W=", sess.run(W), "b=", sess.run(b))

    print("Finished!")
    print("cost=", sess.run(cost), "W=", sess.run(W), "b=", sess.run(b))
4 Run results
That concludes this overview of the ways to define input nodes in TensorFlow.