Is the calculated loss associated with all samples or not?
I am new to deep learning and TensorFlow, and I have some basic questions about the sample code below:

    for j in range(no_of_training_sample):
        ...
        ...
        _, _loss = sess.run([train_step, loss], {X: x, Y: y})

1. For the value of _loss returned, is it the loss for each data sample, or the sum over data samples 0 up to data sample j?
2. When are the parameters w, h and b updated? After each sess.run()?
3. How can I change the code to use mini-batches?

I tried searching the internet, but I could not find a quick answer.

deep-learning tensorflow
asked Mar 28 at 13:14 by Mlui
edited Mar 28 at 21:17 by nbro
migrated from ai.stackexchange.com Mar 28 at 20:55
1 Answer
_loss is the value returned for loss (the second element of the list that you pass as the first argument to the run function), computed during one execution of the computation graph (TF 1.x is based on the "static" execution of computation graphs, which represent the operations that need to be run).
The loss can be defined in different ways (e.g. you can define it as the cross-entropy between the predicted values and the target values), so the result you obtain in _loss depends not only on the data you pass to the session as the second argument (in your case {X: x, Y: y}), but also on the way you compute the loss. If the loss tensor reduces the per-sample losses with a mean or a sum, then _loss is that mean or sum over the samples fed in that particular call; TensorFlow does not accumulate it across sess.run calls, so it is not a running sum from sample 0 up to sample j.
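As an illustration, here is a minimal TF 1.x sketch (the layer sizes, class count and names are made up, not taken from your code); the only thing that decides whether _loss is a per-batch mean or a per-batch sum is the reduction used when defining the loss tensor:

    import tensorflow as tf

    X = tf.placeholder(tf.float32, [None, 10])   # a batch of 10-dimensional inputs
    Y = tf.placeholder(tf.float32, [None, 3])    # one-hot targets for 3 classes
    logits = tf.layers.dense(X, 3)               # toy model standing in for your w, h, b
    per_sample = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=logits)

    loss_mean = tf.reduce_mean(per_sample)   # average over the samples fed in this run
    loss_sum  = tf.reduce_sum(per_sample)    # sum over the samples fed in this run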
The parameters of your network are updated whenever the training op is executed, i.e. the op returned by, e.g., optimizer.minimize(loss). In your code that op is train_step, so one update is applied on every sess.run([train_step, ...]) call.
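For example, this is one possible way train_step could have been built (GradientDescentOptimizer and the learning rate are assumptions, not from your code; it continues the sketch above):

    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    train_step = optimizer.minimize(loss_mean)   # builds the gradient and variable-update ops

    # _, _loss = sess.run([train_step, loss_mean], {X: x, Y: y})
    # -> one parameter update per call; _loss is typically the loss computed
    #    from the parameter values before this update is applied.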
To change the code so that it uses mini-batches (rather than one sample, or the full dataset at once): instead of passing x and y directly to the placeholders X and Y in the feed dict {X: x, Y: y} (when you call _, _loss = sess.run([train_step, loss], {X: x, Y: y})), you will have to create an "iterator" that gives you a subset of x (and the corresponding subset of y) at each step, which you then feed to X (and Y respectively). You can also use one of the newer TF APIs, such as tf.data, which facilitate this task (instead of creating the iterator from scratch); a minimal sketch of both approaches follows.
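Here is a sketch of such a loop, assuming x_train and y_train are NumPy arrays holding the whole training set and X, Y, train_step, loss are the graph nodes from your code (batch_size, num_epochs and the shuffling are illustrative choices):

    import numpy as np

    batch_size = 32
    n = x_train.shape[0]

    for epoch in range(num_epochs):
        perm = np.random.permutation(n)               # reshuffle every epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            _, _loss = sess.run([train_step, loss],
                                {X: x_train[idx], Y: y_train[idx]})

    # The tf.data API can build the iterator for you instead (TF 1.x style):
    # dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
    #            .shuffle(n).batch(batch_size).repeat(num_epochs))
    # next_x, next_y = dataset.make_one_shot_iterator().get_next()

With this change, _loss becomes the (mean or summed) loss of the current mini-batch only.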
(If you had just asked one question, my answer could have been more detailed).
answered Mar 28 at 21:39 by nbro