TensorFlow: gradients are zero
I'm trying to write my own implementation of word2vec based on the TensorFlow example.
In my data I have sessions with positive and negative examples, so I want to use the loss function described in this article (a sum of log-sigmoid terms for the positive examples and log(1 − sigmoid) terms for the negative ones).
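Written out, my reading of that loss for a single session, using the identity 1 − σ(x) = σ(−x) (which is what the code below actually computes; v_j denotes the normalized embedding of item j):

    L = -\sum_{k \in \text{positives}} \log \sigma(v_{\text{inp}} \cdot v_k) \;-\; \sum_{n \in \text{negatives}} \log \sigma(-v_{\text{inp}} \cdot v_n)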
I wrote my implementation of it:
def loss_fn(batch_size, batch_inputs, batch_labels, batch_negative, embeddings):
    # L2-normalize each embedding row, so the matmuls below are cosine similarities.
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
    normalized_embeddings = embeddings / norm
    log.info("loss_fn init")
    res_lst = []
    for i in range(batch_size):  # range instead of xrange, so this also runs on Python 3
        inp = batch_inputs[i]
        lbl = batch_labels[i]
        ng = batch_negative[i]
        # Dot product of the input embedding with each positive label embedding.
        m = tf.map_fn(lambda k: tf.matmul(tf.gather(normalized_embeddings, [inp]),
                                          tf.gather(normalized_embeddings, [k]),
                                          transpose_b=True),
                      lbl,
                      dtype=tf.float32)
        # Dot product of the input embedding with each negative sample embedding.
        nm = tf.map_fn(lambda n: tf.matmul(tf.gather(normalized_embeddings, [inp]),
                                           tf.gather(normalized_embeddings, [n]),
                                           transpose_b=True),
                      ng,
                      dtype=tf.float32)
        # log sigma(x) for positives, log sigma(-x) (i.e. log(1 - sigma(x))) for negatives.
        s = tf.map_fn(lambda x: tf.log(tf.math.sigmoid(x)), m)
        ns = tf.map_fn(lambda x: tf.log(tf.math.sigmoid(-x)), nm)
        # Negative log-likelihood for this example.
        res = -(tf.math.reduce_sum(ns) + tf.math.reduce_sum(s))
        res_lst.append(res)
    return tf.stack(res_lst)
It returns a loss for each example in the batch, as expected. Next, I pass it to the optimizer:
with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        loss_fn(batch_size=batch_size,
                batch_inputs=train_inputs,
                batch_labels=train_labels,
                batch_negative=negative_samples,
                embeddings=embeddings))

with tf.name_scope('optimizer'):
    optimizer = tf.train.AdagradOptimizer(learning_rate).minimize(loss)
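I drive the graph with a standard TF 1.x loop, roughly like this (a sketch; the feed_dict construction is elided, and num_steps is a placeholder name):

    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        for step in range(num_steps):
            # Run one optimizer step and fetch the scalar loss.
            _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)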
I noticed that the gradients are all zero even on the first step, judging by the output of:

print(session.run(tf.train.AdagradOptimizer(learning_rate).compute_gradients(loss),
                  feed_dict=feed_dict))
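An equivalent check that avoids constructing a second AdagradOptimizer (a minimal sketch, using the same session, loss, embeddings, and feed_dict as above):

    # d(loss)/d(embeddings); tf.gradients returns one tensor per variable passed in.
    grads = tf.gradients(loss, [embeddings])
    print(session.run(grads, feed_dict=feed_dict))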
I can't understand what is wrong with my implementation.
python tensorflow
asked Mar 27 at 15:16 by Nokin