Why is my TensorFlow segmentation network returning empty data when the session is run with is_training set to False for the batch-norm layers?
I'm working with a neural network for image segmentation in TensorFlow.
Training and inference both work fine as long as the is_training parameter of the slim.batch_norm layers is set to True.
But when I run the session with is_training set to False, which as I understand it should simply forward data through the network using the stored statistics, the resulting segmentation image comes out empty.
I believe it has to do with the batch-norm layers, but I've already lost my mind over it and just can't make it work.
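For reference, here is a minimal, standalone sketch of the batch-norm pattern I think I am supposed to follow (a toy example, not my actual network; the layer sizes and names are purely illustrative):

import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim

x = tf.placeholder(tf.float32, [None, 8, 8, 3], name="x")
phase_train = tf.placeholder(tf.bool, name="phase_train")

net = slim.conv2d(x, 4, [3, 3], activation_fn=None)
# is_training=True  -> normalize with batch statistics and queue moving-average updates
# is_training=False -> normalize with the stored moving_mean / moving_variance
net = slim.batch_norm(net, is_training=phase_train)
net = tf.nn.relu(net)
loss = tf.reduce_mean(tf.square(net))

# The moving averages are only updated if the UPDATE_OPS actually get run
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.RMSPropOptimizer(1e-4).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(2, 8, 8, 3).astype(np.float32)
    for _ in range(10):
        sess.run(train_op, feed_dict={x: batch, phase_train: True})
    # Inference pass that should use the moving statistics
    out = sess.run(net, feed_dict={x: batch, phase_train: False})

My understanding is that my real code, shown below, follows the same pattern.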
My code is based on the Semantic Segmentation Suite in TensorFlow.
Below is a simplified version of what works and what fails.
.....
def ConvBlock(inputs, n_filters, kernel_size=[3, 3], is_training=True):
    net = slim.conv2d(inputs, n_filters, kernel_size=[1, 1], activation_fn=None)
    net = slim.batch_norm(net, fused=True, is_training=is_training)
    net = tf.nn.relu(net)
    return net

def DepthwiseSeparableConvBlock(inputs, n_filters, kernel_size=[3, 3], is_training=True):
    net = slim.separable_convolution2d(inputs, num_outputs=None, depth_multiplier=1, kernel_size=[3, 3], activation_fn=None)
    net = slim.batch_norm(net, fused=True, is_training=is_training)
    net = tf.nn.relu(net)
    ....
    return net

def ConvTransposeBlock(inputs, n_filters, kernel_size=[3, 3], is_training=True):
    net = slim.conv2d_transpose(inputs, n_filters, kernel_size=[3, 3], stride=[2, 2], activation_fn=None)
    net = slim.batch_norm(net, is_training=is_training)
    net = tf.nn.relu(net)
    return net

def build_mobile_unet(inputs, ...., is_training=True):
    net = ConvBlock(inputs, 64, is_training=is_training)
    net = DepthwiseSeparableConvBlock(net, 64, is_training=is_training)
    net = slim.pool(net, [2, 2], stride=[2, 2], pooling_type='MAX')
    ....
    net = ConvTransposeBlock(net, 64, is_training=is_training)
    net = DepthwiseSeparableConvBlock(net, 64, is_training=is_training)
    net = DepthwiseSeparableConvBlock(net, 64, is_training=is_training)
    net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None, scope='logits')
    return net
# Define the param placeholders
net_input_image = tf.placeholder(tf.float32, shape=[None, None, None, 3], name="input")
net_input_label = tf.placeholder(tf.int32, [None, None, None])

# Training phase placeholder
net_training = tf.placeholder(tf.bool, name='phase_train')

model, _ = build_mobile_unet(
    net_input=net_input_image,
    ....
    is_training=net_training)
model = tf.nn.softmax(model, name="softmax_output")

with tf.name_scope('loss'):
    cross_entropy = tf.losses.sparse_softmax_cross_entropy(logits=model, labels=net_input_label)
    cross_entropy = tf.reduce_mean(cross_entropy)

# use RMSProp to optimize
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    optimizer = tf.train.RMSPropOptimizer(learning_rate=0.0001, decay=0.995)
    train_step = optimizer.minimize(cross_entropy)

# create train OP
total_loss = tf.losses.get_total_loss()
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Do the training here
for epoch in range(args.epoch_start_i, args.num_epochs):
    input_image_batch = ...
    label_image_batch = ...

    # Do the training
    train_dict = {
        net_input_image: input_image_batch,
        net_input_label: label_image_batch,
        net_training: True
    }
    train_loss = sess.run(train_op, feed_dict=train_dict)
    # Do the validation on a small set of validation images
    for ind in val_indices:
        input_image = np.expand_dims(np.float32(utils.load_image(val_input_names[ind])[:args.crop_height, :args.crop_width]), axis=0) / 255.0
        gt = utils.load_image(val_output_names[ind])[:args.crop_height, :args.crop_width]
        gt = helpers.reverse_one_hot(helpers.one_hot_it(gt, label_values))

        # THIS WORKS: image segmentation result is OK
        output_image = sess.run(
            model,
            feed_dict={
                net_input_image: input_image,
                net_training: True
            })

        # THIS FAILS: image segmentation result is all zeros....
        output_image = sess.run(
            model,
            feed_dict={
                net_input_image: input_image,
                net_training: False
            })
The training works well and the net converges.
If I always keep the placeholder net_training as True, everything is fine.
But if I invoke sess.run(model, ... net_training: False), as you can see in the code above, the output for the test images comes out empty (all zeros).
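One thing I can check is whether the moving statistics ever get updated during training; something like this (the name filter is just an assumption about how slim names these variables in my graph):

bn_stats = [v for v in tf.global_variables()
            if 'moving_mean' in v.name or 'moving_variance' in v.name]
for v in bn_stats:
    val = sess.run(v)
    print(v.name, 'mean=%.4f' % val.mean(), 'std=%.4f' % val.std())
# If moving_mean is still ~0 and moving_variance still ~1 after training,
# the UPDATE_OPS presumably never ran as part of my train op.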
What am I doing wrong?
Any help would be highly appreciated.
Thank you for your time.
tensorflow image-segmentation training-data inference tensorflow-slim
asked Mar 24 at 22:41 by user2880065, edited Mar 24 at 22:51