Display loss in a Tensorflow DQN without leaving tf.Session()
I have a DQN all set up and working, but I can't figure out how to display the loss without leaving the TensorFlow session.
I first thought it involved creating a new function or class, but I'm not sure where it would go in the code, or what specifically it should contain.
observations = tf.placeholder(tf.float32, shape=[None, num_stops], name='observations')
actions = tf.placeholder(tf.int32,shape=[None], name='actions')
rewards = tf.placeholder(tf.float32,shape=[None], name='rewards')
# Model
Y = tf.layers.dense(observations, 200, activation=tf.nn.relu)
Ylogits = tf.layers.dense(Y, num_stops)
# sample an action from predicted probabilities
sample_op = tf.random.categorical(logits=Ylogits, num_samples=1)
# loss
cross_entropies = tf.losses.softmax_cross_entropy(onehot_labels=tf.one_hot(actions,num_stops), logits=Ylogits)
loss = tf.reduce_sum(rewards * cross_entropies)
# training operation
optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001, decay=.99)
train_op = optimizer.minimize(loss)
I then run the network, which works without error.
with tf.Session() as sess:
'''etc. The network is run'''
sess.run(train_op, feed_dict={observations: observations_list,
                              actions: actions_list,
                              rewards: rewards_list})
I want to have the loss from train_op displayed to the user.
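For intuition, the loss above is a REINFORCE-style reward-weighted cross-entropy. A minimal NumPy sketch of the per-example version follows; the toy logits, actions, and rewards are made up for illustration. (Note that `tf.losses.softmax_cross_entropy` reduces to a scalar by default, so the per-example TF equivalent would be `tf.nn.softmax_cross_entropy_with_logits_v2`.)

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy batch: 3 observations, num_stops = 4 actions (illustrative values only)
logits = np.array([[2.0, 1.0, 0.1, 0.5],
                   [0.3, 2.5, 0.2, 0.1],
                   [1.0, 1.0, 1.0, 1.0]])
actions = np.array([0, 1, 3])
rewards = np.array([1.0, -0.5, 2.0])

probs = softmax(logits)
# per-example cross-entropy of the action actually taken
cross_entropies = -np.log(probs[np.arange(len(actions)), actions])
# reward-weighted sum, mirroring loss = tf.reduce_sum(rewards * cross_entropies)
loss = np.sum(rewards * cross_entropies)
```

Actions taken under high reward are pushed toward higher probability; a negative reward pushes the taken action's probability down.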
python tensorflow q-learning cross-entropy
asked Mar 25 at 1:33 by Rayna Levy
1 Answer
Try this:
# fetch the loss in the same call that runs the training step;
# unpacking into loss_value (not loss) keeps the loss tensor intact
loss_value, _ = sess.run([loss, train_op],
                         feed_dict={observations: observations_list,
                                    actions: actions_list,
                                    rewards: rewards_list})
print(loss_value)
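One Python subtlety when fetching this way: unpacking into the same name as the tensor (`loss, _ = sess.run([loss, train_op], ...)`) rebinds `loss` to the fetched float, so a second call inside a training loop would pass a number rather than a tensor as the fetch. The pitfall is plain Python name rebinding; here is a minimal sketch with a hypothetical `fake_sess_run` standing in for `sess.run` (no TensorFlow required):

```python
class Tensor:
    """Stand-in for a TF tensor; the fake session fetches a number for it."""
    pass

def fake_sess_run(fetches):
    # Mimics sess.run: returns a float for each Tensor fetch,
    # and passes anything else through unchanged.
    return [0.5 if isinstance(f, Tensor) else f for f in fetches]

loss = Tensor()      # the graph node, like the TF loss tensor
train_op = Tensor()

loss, _ = fake_sess_run([loss, train_op])   # rebinds `loss` to the float 0.5
# On a "second iteration", `loss` is no longer a Tensor,
# so the fetch list contains a plain number:
second = fake_sess_run([loss, train_op])
```

Fetching into a distinct name such as `loss_value` sidesteps this.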
answered Mar 25 at 2:04 by user1779012