Is the calculated loss associated with all samples or not?


I am new to deep learning and TensorFlow, and I have some basic questions about the sample code below:

for j in range(no_of_training_sample):
    ...
    ...
    _, _loss = sess.run([train_step, loss], feed_dict={X: x, Y: y})

1. Is the value of _loss returned the loss for each data sample, or the sum over data samples 0 up to j?

2. When are the parameters w, h and b updated? After each sess.run()?

3. How can I change the code to use mini-batches?

I searched the internet, but I could not find quick answers.










migrated from ai.stackexchange.com Mar 28 at 20:55


      deep-learning tensorflow






asked Mar 28 at 13:14 by Mlui
edited Mar 28 at 21:17 by nbro





1 Answer
_loss is the returned value of loss (the second element of the list that you pass as the first argument to the run function), after a step of the computation graph has been executed (TF 1.x is based on "static" computation graphs that represent the operations to be run).



The loss can be defined in different ways (e.g. as the cross entropy between the predicted and the target values), so the result you obtain in _loss depends not only on the data you feed to the session as the second argument (in your case, feed_dict={X: x, Y: y}), but also on how you compute the loss (e.g. whether you sum or average the per-sample errors).
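The sum-vs-mean distinction matters for interpreting _loss: a summed loss scales with how many samples you feed, while a mean loss does not. A minimal NumPy sketch (not TensorFlow code; the squared-error loss and the values are illustrative only):

```python
import numpy as np

# Per-sample squared errors for a small batch of predictions.
preds = np.array([1.0, 2.0, 3.0])
targets = np.array([1.5, 2.0, 2.0])
per_sample = (preds - targets) ** 2  # [0.25, 0.0, 1.0]

loss_sum = per_sample.sum()    # 1.25, grows with batch size
loss_mean = per_sample.mean()  # ~0.4167, comparable across batch sizes
```

With a mean reduction, _loss for a batch of 1 and a batch of 100 are directly comparable; with a sum reduction, they are not.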



The parameters of your network are updated when the optimizer's minimize operation (in your code, train_step) is executed in the computation graph, i.e. on each sess.run that includes it.



To make the code use mini-batches (rather than one sample at a time, or the full dataset at once), instead of feeding x and y directly to the placeholders X and Y via feed_dict={X: x, Y: y} (when you call _, _loss = sess.run([train_step, loss], feed_dict={X: x, Y: y})), create an "iterator" that yields a subset of x (and the corresponding subset of y) to feed to X (and Y, respectively). You can also use the tf.data API, which facilitates this task, instead of writing the iterator from scratch.
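The hand-written iterator mentioned above can be sketched in NumPy as follows. This is a minimal illustration, not the tf.data API; the function name and the batch size are hypothetical choices:

```python
import numpy as np

def iterate_minibatches(x, y, batch_size, shuffle=True):
    """Yield (x_batch, y_batch) pairs that together cover the whole dataset."""
    indices = np.arange(len(x))
    if shuffle:
        np.random.shuffle(indices)  # reshuffle each epoch for better SGD behavior
    for start in range(0, len(x), batch_size):
        batch_idx = indices[start:start + batch_size]
        yield x[batch_idx], y[batch_idx]

# Inside the training loop you would then write something like:
# for x_batch, y_batch in iterate_minibatches(x, y, batch_size=32):
#     _, _loss = sess.run([train_step, loss], feed_dict={X: x_batch, Y: y_batch})
```

Note that the last batch may be smaller than batch_size when the dataset size is not a multiple of it, which is usually fine as long as the placeholders have a None batch dimension.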



          (If you had just asked one question, my answer could have been more detailed).






answered Mar 28 at 21:39 by nbro































