How to duplicate operations & placeholders in TensorFlow
Suppose I have 2 neural network models defined, each with 1 input placeholder and 1 output tensor. From these 2 outputs I need 3 separate values:
inputs: i1, i2
outputs: o1, o2

a = 1
b = 2
v1 = session.run(o1, feed_dict={i1: a})
v2 = session.run(o1, feed_dict={i1: b})
v3 = session.run(o2, feed_dict={i2: a})
The problem is that I need to feed all 3 of these values into one loss function, so I can't run them separately as above. I would need to do something like
loss = session.run(l, feed_dict={i1: a, i1: b, i2: a})
but I don't think this can work, since there is still ambiguity in later operations: o1 evaluated with i1 = a is used differently from o1 evaluated with i1 = b.
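One way to see the problem concretely: feed_dict is an ordinary Python dict, and a dict literal with a repeated key silently keeps only the last value, so the same placeholder can never carry two values in one run. A minimal pure-Python sketch (the string "i1" stands in for the placeholder object; no TensorFlow needed):

```python
i1 = "i1"  # stand-in for the placeholder object, used only as a dict key
a, b = 1, 2

# Writing feed_dict={i1: a, i1: b} builds this dict literal:
feed_dict = {i1: a, i1: b}

print(feed_dict)  # the key i1 appears once; the binding to a was silently overwritten
```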
I think this could be solved by having 2 input placeholders and 2 outputs in the first neural network. Given that I already have the model, is there a way to restructure the inputs and outputs so that it can accommodate this?
Visually, I want to turn
i1 ---- (model) ----- o1
into
i1a             o1a
   \           /
    \         /
     x ----- (model) ----- x
    /         \
   /           \
i1b             o1b
Your intuition is right: you have to create 2 different placeholders i1a and i1b for network 1, with 2 outputs o1a and o1b. Visuals are great here, so here is my proposition:
i1a ----- (model) ----- o1a
              |
       shared weights
              |
i1b ----- (model) ----- o1b
The proper way to duplicate a network is to create every variable with tf.get_variable(), and then build each copy of the network inside the same variable scope with reuse=True:
import tensorflow as tf

def create_variables():
    with tf.variable_scope('model'):
        w1 = tf.get_variable('w1', [1, 2])
        b1 = tf.get_variable('b1', [2])

def inference(input):
    with tf.variable_scope('model', reuse=True):
        w1 = tf.get_variable('w1')
        b1 = tf.get_variable('b1')
        output = tf.matmul(input, w1) + b1
    return output

create_variables()

i1a = tf.placeholder(tf.float32, [3, 1])
o1a = inference(i1a)

i1b = tf.placeholder(tf.float32, [3, 1])
o1b = inference(i1b)

loss = tf.reduce_mean(o1a - o1b)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(loss, feed_dict={i1a: [[0.], [1.], [2.]],
                              i1b: [[0.5], [1.5], [2.5]]})
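The weight sharing above can also be sketched without TensorFlow, which makes the idea easier to test: one weight matrix w1 and one bias b1 are applied to both input batches by the same inference function, just as reuse=True makes both calls to inference() resolve to the same variables. A NumPy sketch (the fixed seed and values here are illustrative assumptions, not part of the original answer):

```python
import numpy as np

# One set of parameters, created once -- analogous to create_variables().
rng = np.random.default_rng(0)
w1 = rng.standard_normal((1, 2))  # plays the role of tf.get_variable('w1', [1, 2])
b1 = rng.standard_normal(2)       # plays the role of tf.get_variable('b1', [2])

def inference(x):
    # Both calls use the same w1 and b1 -> "shared weights".
    return x @ w1 + b1

i1a = np.array([[0.0], [1.0], [2.0]])
i1b = np.array([[0.5], [1.5], [2.5]])

o1a = inference(i1a)
o1b = inference(i1b)

loss = np.mean(o1a - o1b)  # mirrors tf.reduce_mean(o1a - o1b)
print(loss)
```

Because the bias cancels in the difference, o1a - o1b reduces to (i1a - i1b) @ w1, where every row of i1a - i1b is -0.5; this is a handy sanity check that the two branches really share w1.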