How to prevent TensorFlow from allocating the totality of a GPU's memory?


I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.

For small to moderately sized models, the 12 GB of the Titan X are usually enough for 2-3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the Titan X, this can actually result in a speedup compared with running one training process after the other. Even in cases where concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having several users running things on the GPUs at once.

The problem with TensorFlow is that, by default, it allocates the full amount of available memory on the GPU when it is launched. Even for a small two-layer neural network, I see that the 12 GB of the Titan X are used up.

Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that that amount is enough for a given model?










      python tensorflow nvidia-titan






asked Dec 10 '15 at 10:19 by Fabien C. · edited Dec 26 '17 at 23:13 by Misha Brukman

























10 Answers
259 votes · answered Dec 10 '15 at 11:00 by mrry















You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:

import tensorflow as tf

# Assume that you have 12 GB of GPU memory and want to allocate ~4 GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.




























• Thank you very much. This info is quite hidden in the current doc; I would never have found it by myself :-) If you can answer, I would like to ask for two additional pieces of information: 1. Does this limit the amount of memory ever used, or just the memory initially allocated (i.e., will it still allocate more memory if the computation graph needs it)? 2. Is there a way to set this on a per-GPU basis? – Fabien C., Dec 11 '15 at 1:29






• Related note: setting CUDA_VISIBLE_DEVICES to limit TensorFlow to a single GPU works for me; there is a short sketch of this after these comments. See acceleware.com/blog/cudavisibledevices-masking-gpus – rd11, Jan 12 '16 at 15:54






• It seems that the memory allocation goes a bit over the request, e.g. I requested per_process_gpu_memory_fraction=0.0909 on a 24443 MiB GPU and got processes taking 2627 MiB. – jeremy_rutman, Sep 23 '17 at 17:15







• I can't seem to get this to work in a MonitoredTrainingSession. – Anjum Sayed, Oct 13 '17 at 5:34






• @jeremy_rutman I believe this is due to cuDNN and cuBLAS context initialization. That is only relevant if you are executing kernels that use those libs, though. – Daniel, Feb 20 at 23:26
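Following up on the CUDA_VISIBLE_DEVICES suggestion in the comments above, a minimal sketch of masking GPUs at the process level; the device index 0 is just an illustrative choice:

import os

# Must be set before TensorFlow initializes CUDA, i.e. before importing tensorflow in most setups.
# Exposing only one GPU means this process cannot grab memory on the other cards.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # use "" to hide all GPUs from this process

import tensorflow as tf
# TensorFlow now only sees (and allocates memory on) GPU 0.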



















159 votes · answered May 26 '16 at 7:43 by Sergey Demyanov















import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True   # allocate GPU memory on demand instead of grabbing it all up front
sess = tf.Session(config=config)

https://github.com/tensorflow/tensorflow/issues/1578


























• This one is exactly what I want because in a multi-user environment, it is very inconvenient to specify the exact amount of GPU memory to reserve in the code itself. – xuancong84, Oct 3 '16 at 1:07






• Also, if you're using Keras with a TF backend, you can use this and run from keras import backend as K and K.set_session(sess) to avoid memory limitations (see the sketch below). – Tobsta, Jul 1 at 4:52
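A minimal sketch of the Keras setup mentioned in the comment above, assuming standalone Keras 2.x running on the TensorFlow 1.x backend:

import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True         # grow GPU memory on demand instead of grabbing it all
K.set_session(tf.Session(config=config))       # make Keras use this session for all of its models

# ... define, compile and fit your Keras model as usual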


















39 votes · answered Jan 11 '18 at 18:57 by user1767754















Here is an excerpt from the book Deep Learning with TensorFlow:

In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as it is needed by the process. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory as needed based on runtime allocations: it starts out allocating very little memory, and as sessions get run and more GPU memory is needed, the GPU memory region used by the TensorFlow process is extended.

1) Allow growth (more flexible):

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)

The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. Note: memory is never released once allocated, since releasing it can lead to even worse memory fragmentation.

2) Allocate fixed memory:

To allocate only 40% of the total memory of each GPU:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)

Note: this is only useful if you truly want to bound the amount of GPU memory available to the TensorFlow process.
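As a worked example tied to the original question (a roughly 4 GB budget on a 12 GB Titan X), a small sketch; the 12 GB figure comes from the question, TensorFlow does not report it here:

import tensorflow as tf

total_memory_gb = 12.0    # Titan X capacity, taken from the question
budget_gb = 4.0           # how much this process should be allowed to use

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = budget_gb / total_memory_gb  # ~0.333
sess = tf.Session(config=config)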






































13 votes · answered Feb 8 '18 at 3:25 by Urs















All the answers above assume execution with a sess.run() call, which is becoming the exception rather than the rule in recent versions of TensorFlow.

When using the tf.Estimator framework (TensorFlow 1.4 and above), the way to pass the fraction along to the implicitly created MonitoredTrainingSession is:

opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
conf = tf.ConfigProto(gpu_options=opts)
trainingConfig = tf.estimator.RunConfig(session_config=conf, ...)
tf.estimator.Estimator(model_fn=...,
                       config=trainingConfig)

Similarly in Eager mode (TensorFlow 1.5 and above):

import tensorflow.contrib.eager as tfe  # the `tfe` alias used below

opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
conf = tf.ConfigProto(gpu_options=opts)
tfe.enable_eager_execution(config=conf)

Edit (11-04-2018): As an example, if you use tf.contrib.gan.train, then you can use something similar to the following:

tf.contrib.gan.gan_train(........, config=conf)







































11 votes · answered Apr 5 at 18:26 by Theo















Updated for TensorFlow 2.0 Alpha and beyond

From the 2.0 Alpha docs, the answer is now just one line before you do anything with TensorFlow:

import tensorflow as tf
tf.config.gpu.set_per_process_memory_growth(True)
































• Does this work for Tensorflow 1.13? – DollarAkshay, May 1 at 3:26






• @AkshayLAradhya No, this is only for TF 2.0 and above. The other answers here will work fine for 1.13 and earlier. – Theo, May 1 at 21:19


















3 votes















Shameless plug: if you install the GPU-supported TensorFlow, the session will first allocate all of the GPU memory whether you set it to use only the CPU or the GPU. I may add my tip that even if you set the graph to use the CPU only, you should set the same configuration (as answered above) to prevent unwanted GPU occupation.

And in an interactive interface like IPython you should also set this configuration, otherwise it will allocate all the memory and leave almost none for others. This is sometimes hard to notice.
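For illustration, a minimal sketch of a CPU-only setup that still passes the protective configuration; the device_count option and the explicit /cpu:0 placement are assumptions of this sketch, not part of the answer above:

import tensorflow as tf

config = tf.ConfigProto(device_count={'GPU': 0})   # ask this process to create no GPU devices
config.gpu_options.allow_growth = True             # harmless here, and protects you if a GPU slips in

with tf.device('/cpu:0'):
    x = tf.constant([1.0, 2.0, 3.0])
    y = x * 2.0

with tf.Session(config=config) as sess:
    print(sess.run(y))    # [2. 4. 6.], computed without touching GPU memory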








































1 vote















TensorFlow 2.0 Beta and (probably) beyond

The API changed again. It can now be found in:

tf.config.experimental.set_memory_growth(
    device,
    enable
)

Aliases:

• tf.compat.v1.config.experimental.set_memory_growth

• tf.compat.v2.config.experimental.set_memory_growth

• tf.config.experimental.set_memory_growth

https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/config/experimental/set_memory_growth
https://www.tensorflow.org/beta/guide/using_gpu#limiting_gpu_memory_growth
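A minimal usage sketch based on the linked guide; note that memory growth must be configured before any GPU has been initialized, otherwise a RuntimeError is raised:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    # Enable growth on every visible GPU so the process only grabs memory as it actually needs it.
    tf.config.experimental.set_memory_growth(gpu, True)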








































1 vote















You can use

TF_FORCE_GPU_ALLOW_GROWTH=true

in your environment variables.

In the TensorFlow source code (excerpt):

bool GPUBFCAllocator::GetAllowGrowthValue(const GPUOptions& gpu_options) {
  const char* force_allow_growth_string =
      std::getenv("TF_FORCE_GPU_ALLOW_GROWTH");
  if (force_allow_growth_string == nullptr)
    return gpu_options.allow_growth();
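A small sketch of setting the variable from Python before TensorFlow touches the GPU (setting export TF_FORCE_GPU_ALLOW_GROWTH=true in the shell works the same way):

import os

# Must be set before TensorFlow creates its GPU device, i.e. before the first session or GPU op.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

import tensorflow as tf
# ... build and run the model as usual; GPU memory now grows on demand instead of being
# allocated all at once.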








































0 votes















I tried to train U-Net on the VOC dataset, but because of the huge image size, memory runs out. I tried all the above tips, even a batch size of 1, yet with no improvement. Sometimes the TensorFlow version also causes memory issues. Try using

pip install tensorflow-gpu==1.8.0







































0 votes















Well, I am new to TensorFlow. I have a GeForce 740M (or similar) GPU with 2 GB of RAM, and I was running an MNIST-style handwritten-character example for a native language, with training data containing 38,700 images and 4,300 testing images, and was trying to get precision, recall and F1 using the following code, since sklearn was not giving me precise results. Once I added this to my existing code, I started getting GPU errors.

# predicted and actual are expected to be 0/1 tensors of the same shape
TP = tf.count_nonzero(predicted * actual)
TN = tf.count_nonzero((predicted - 1) * (actual - 1))
FP = tf.count_nonzero(predicted * (actual - 1))
FN = tf.count_nonzero((predicted - 1) * actual)

prec = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * prec * recall / (prec + recall)

Plus, my model was heavy, I guess; I was getting a memory error after 147 or 148 epochs. Then I thought, why not create functions for the tasks. I don't know whether it works this way in TensorFlow, but I thought that if a local variable is used and goes out of scope it may release memory, so I defined the above elements for training and testing in modules. I was able to reach 10,000 epochs without any issues. I hope this will help.































• I am amazed at TF's utility but also by its memory use. On the CPU, Python allocates 30 GB or so for a training job on the flowers dataset used in many TF examples. Insane. – Eric M, Apr 12 at 15:30











                      protected by Sheldore Jul 12 at 8:29



                      Thank you for your interest in this question.
                      Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).



                      Would you like to answer one of these unanswered questions instead?














                      10 Answers
                      10






                      active

                      oldest

                      votes








                      10 Answers
                      10






                      active

                      oldest

                      votes









                      active

                      oldest

                      votes






                      active

                      oldest

                      votes









                      259















                      You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:



                      # Assume that you have 12GB of GPU memory and want to allocate ~4GB:
                      gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

                      sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))


                      The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.






                      share|improve this answer






















                      • 3





                        Thank you very much. This info is quite hidden in the current doc. I would never have found it by myself :-) If you can answer, I would like to ask for two additional infos: 1- Does this limit the amount of memory ever used, or just the memory initially allocated? (ie. will it still allocate more memory if there is a need for it by the computation graph) 2- Is there a way to set this on a per-GPU basis?

                        – Fabien C.
                        Dec 11 '15 at 1:29






                      • 13





                        Related note: setting CUDA_VISIBLE_DEVICES to limit TensorFlow to a single GPU works for me. See acceleware.com/blog/cudavisibledevices-masking-gpus

                        – rd11
                        Jan 12 '16 at 15:54






                      • 1





                        it seems that the memory allocation goes a bit over the request, e..g I requested per_process_gpu_memory_fraction=0.0909 on a 24443MiB gpu and got processes taking 2627MiB

                        – jeremy_rutman
                        Sep 23 '17 at 17:15







                      • 2





                        I can't seem to get this to work in a MonitoredTrainingSession

                        – Anjum Sayed
                        Oct 13 '17 at 5:34






                      • 2





                        @jeremy_rutman I believe this is due to cudnn and cublas context initialization. That is only relevant if you are executing kernels that use those libs though.

                        – Daniel
                        Feb 20 at 23:26
















                      259















                      You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:



                      # Assume that you have 12GB of GPU memory and want to allocate ~4GB:
                      gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

                      sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))


                      The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.






                      share|improve this answer






















                      • 3





                        Thank you very much. This info is quite hidden in the current doc. I would never have found it by myself :-) If you can answer, I would like to ask for two additional infos: 1- Does this limit the amount of memory ever used, or just the memory initially allocated? (ie. will it still allocate more memory if there is a need for it by the computation graph) 2- Is there a way to set this on a per-GPU basis?

                        – Fabien C.
                        Dec 11 '15 at 1:29






                      • 13





                        Related note: setting CUDA_VISIBLE_DEVICES to limit TensorFlow to a single GPU works for me. See acceleware.com/blog/cudavisibledevices-masking-gpus

                        – rd11
                        Jan 12 '16 at 15:54






                      • 1





                        it seems that the memory allocation goes a bit over the request, e..g I requested per_process_gpu_memory_fraction=0.0909 on a 24443MiB gpu and got processes taking 2627MiB

                        – jeremy_rutman
                        Sep 23 '17 at 17:15







                      • 2





                        I can't seem to get this to work in a MonitoredTrainingSession

                        – Anjum Sayed
                        Oct 13 '17 at 5:34






                      • 2





                        @jeremy_rutman I believe this is due to cudnn and cublas context initialization. That is only relevant if you are executing kernels that use those libs though.

                        – Daniel
                        Feb 20 at 23:26














                      259














                      259










                      259









                      You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:



                      # Assume that you have 12GB of GPU memory and want to allocate ~4GB:
                      gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

                      sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))


                      The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.






                      share|improve this answer















                      You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:



                      # Assume that you have 12GB of GPU memory and want to allocate ~4GB:
                      gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

                      sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))


                      The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.







                      share|improve this answer














                      share|improve this answer



                      share|improve this answer








                      edited Jun 8 '17 at 15:00

























                      answered Dec 10 '15 at 11:00









                      mrrymrry

                      106k15 gold badges317 silver badges362 bronze badges




                      106k15 gold badges317 silver badges362 bronze badges










                      • 3





                        Thank you very much. This info is quite hidden in the current doc. I would never have found it by myself :-) If you can answer, I would like to ask for two additional infos: 1- Does this limit the amount of memory ever used, or just the memory initially allocated? (ie. will it still allocate more memory if there is a need for it by the computation graph) 2- Is there a way to set this on a per-GPU basis?

                        – Fabien C.
                        Dec 11 '15 at 1:29






                      • 13





                        Related note: setting CUDA_VISIBLE_DEVICES to limit TensorFlow to a single GPU works for me. See acceleware.com/blog/cudavisibledevices-masking-gpus

                        – rd11
                        Jan 12 '16 at 15:54






                      • 1





                        it seems that the memory allocation goes a bit over the request, e..g I requested per_process_gpu_memory_fraction=0.0909 on a 24443MiB gpu and got processes taking 2627MiB

                        – jeremy_rutman
                        Sep 23 '17 at 17:15







                      • 2





                        I can't seem to get this to work in a MonitoredTrainingSession

                        – Anjum Sayed
                        Oct 13 '17 at 5:34






                      • 2





                        @jeremy_rutman I believe this is due to cudnn and cublas context initialization. That is only relevant if you are executing kernels that use those libs though.

                        – Daniel
                        Feb 20 at 23:26













                      • 3





                        Thank you very much. This info is quite hidden in the current doc. I would never have found it by myself :-) If you can answer, I would like to ask for two additional infos: 1- Does this limit the amount of memory ever used, or just the memory initially allocated? (ie. will it still allocate more memory if there is a need for it by the computation graph) 2- Is there a way to set this on a per-GPU basis?

                        – Fabien C.
                        Dec 11 '15 at 1:29






                      • 13





                        Related note: setting CUDA_VISIBLE_DEVICES to limit TensorFlow to a single GPU works for me. See acceleware.com/blog/cudavisibledevices-masking-gpus

                        – rd11
                        Jan 12 '16 at 15:54






                      • 1





                        it seems that the memory allocation goes a bit over the request, e..g I requested per_process_gpu_memory_fraction=0.0909 on a 24443MiB gpu and got processes taking 2627MiB

                        – jeremy_rutman
                        Sep 23 '17 at 17:15







                      • 2





                        I can't seem to get this to work in a MonitoredTrainingSession

                        – Anjum Sayed
                        Oct 13 '17 at 5:34






                      • 2





                        @jeremy_rutman I believe this is due to cudnn and cublas context initialization. That is only relevant if you are executing kernels that use those libs though.

                        – Daniel
                        Feb 20 at 23:26








                      3




                      3





                      Thank you very much. This info is quite hidden in the current doc. I would never have found it by myself :-) If you can answer, I would like to ask for two additional infos: 1- Does this limit the amount of memory ever used, or just the memory initially allocated? (ie. will it still allocate more memory if there is a need for it by the computation graph) 2- Is there a way to set this on a per-GPU basis?

                      – Fabien C.
                      Dec 11 '15 at 1:29





                      Thank you very much. This info is quite hidden in the current doc. I would never have found it by myself :-) If you can answer, I would like to ask for two additional infos: 1- Does this limit the amount of memory ever used, or just the memory initially allocated? (ie. will it still allocate more memory if there is a need for it by the computation graph) 2- Is there a way to set this on a per-GPU basis?

                      – Fabien C.
                      Dec 11 '15 at 1:29




                      13




                      13





                      Related note: setting CUDA_VISIBLE_DEVICES to limit TensorFlow to a single GPU works for me. See acceleware.com/blog/cudavisibledevices-masking-gpus

                      – rd11
                      Jan 12 '16 at 15:54





                      Related note: setting CUDA_VISIBLE_DEVICES to limit TensorFlow to a single GPU works for me. See acceleware.com/blog/cudavisibledevices-masking-gpus

                      – rd11
                      Jan 12 '16 at 15:54




                      1




                      1





                      it seems that the memory allocation goes a bit over the request, e..g I requested per_process_gpu_memory_fraction=0.0909 on a 24443MiB gpu and got processes taking 2627MiB

                      – jeremy_rutman
                      Sep 23 '17 at 17:15






                      it seems that the memory allocation goes a bit over the request, e..g I requested per_process_gpu_memory_fraction=0.0909 on a 24443MiB gpu and got processes taking 2627MiB

                      – jeremy_rutman
                      Sep 23 '17 at 17:15





                      2




                      2





                      I can't seem to get this to work in a MonitoredTrainingSession

                      – Anjum Sayed
                      Oct 13 '17 at 5:34





                      I can't seem to get this to work in a MonitoredTrainingSession

                      – Anjum Sayed
                      Oct 13 '17 at 5:34




                      2




                      2





                      @jeremy_rutman I believe this is due to cudnn and cublas context initialization. That is only relevant if you are executing kernels that use those libs though.

                      – Daniel
                      Feb 20 at 23:26






                      @jeremy_rutman I believe this is due to cudnn and cublas context initialization. That is only relevant if you are executing kernels that use those libs though.

                      – Daniel
                      Feb 20 at 23:26














                      159















                      config = tf.ConfigProto()
                      config.gpu_options.allow_growth=True
                      sess = tf.Session(config=config)


                      https://github.com/tensorflow/tensorflow/issues/1578






                      share|improve this answer




















                      • 12





                        This one is exactly what I want because in a multi-user environment, it is very inconvenient to specify the exact amount of GPU memory to reserve in the code itself.

                        – xuancong84
                        Oct 3 '16 at 1:07






                      • 1





                        Also, if you're using Keras with a TF backend, you can use this and run from keras import backend as K and K.set_session(sess) to avoid memory limitations

                        – Tobsta
                        Jul 1 at 4:52















                      159















                      config = tf.ConfigProto()
                      config.gpu_options.allow_growth=True
                      sess = tf.Session(config=config)


                      https://github.com/tensorflow/tensorflow/issues/1578






                      share|improve this answer




















                      • 12





                        This one is exactly what I want because in a multi-user environment, it is very inconvenient to specify the exact amount of GPU memory to reserve in the code itself.

                        – xuancong84
                        Oct 3 '16 at 1:07






                      • 1





                        Also, if you're using Keras with a TF backend, you can use this and run from keras import backend as K and K.set_session(sess) to avoid memory limitations

                        – Tobsta
                        Jul 1 at 4:52













                      159














                      159










                      159









                      config = tf.ConfigProto()
                      config.gpu_options.allow_growth=True
                      sess = tf.Session(config=config)


                      https://github.com/tensorflow/tensorflow/issues/1578






                      share|improve this answer













                      config = tf.ConfigProto()
                      config.gpu_options.allow_growth=True
                      sess = tf.Session(config=config)


                      https://github.com/tensorflow/tensorflow/issues/1578







                      share|improve this answer












                      share|improve this answer



                      share|improve this answer










                      answered May 26 '16 at 7:43









                      Sergey DemyanovSergey Demyanov

                      1,9801 gold badge12 silver badges9 bronze badges




                      1,9801 gold badge12 silver badges9 bronze badges










                      • 12





                        This one is exactly what I want because in a multi-user environment, it is very inconvenient to specify the exact amount of GPU memory to reserve in the code itself.

                        – xuancong84
                        Oct 3 '16 at 1:07






                      • 1





                        Also, if you're using Keras with a TF backend, you can use this and run from keras import backend as K and K.set_session(sess) to avoid memory limitations

                        – Tobsta
                        Jul 1 at 4:52












                      • 12





                        This one is exactly what I want because in a multi-user environment, it is very inconvenient to specify the exact amount of GPU memory to reserve in the code itself.

                        – xuancong84
                        Oct 3 '16 at 1:07






                      • 1





                        Also, if you're using Keras with a TF backend, you can use this and run from keras import backend as K and K.set_session(sess) to avoid memory limitations

                        – Tobsta
                        Jul 1 at 4:52







                      12




                      12





                      This one is exactly what I want because in a multi-user environment, it is very inconvenient to specify the exact amount of GPU memory to reserve in the code itself.

                      – xuancong84
                      Oct 3 '16 at 1:07





                      This one is exactly what I want because in a multi-user environment, it is very inconvenient to specify the exact amount of GPU memory to reserve in the code itself.

                      – xuancong84
                      Oct 3 '16 at 1:07




                      1




                      1





                      Also, if you're using Keras with a TF backend, you can use this and run from keras import backend as K and K.set_session(sess) to avoid memory limitations

                      – Tobsta
                      Jul 1 at 4:52





                      Also, if you're using Keras with a TF backend, you can use this and run from keras import backend as K and K.set_session(sess) to avoid memory limitations

                      – Tobsta
                      Jul 1 at 4:52











                      39















                      Here is an excerpt from the Book Deep Learning with TensorFlow




                      In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as it is needed by the process. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations, it starts out allocating very little memory, and as sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process.




                      1) Allow growth: (more flexible)



                      config = tf.ConfigProto()
                      config.gpu_options.allow_growth = True
                      session = tf.Session(config=config, ...)


                      The second method is per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. Note: No release of memory needed, it can even worsen memory fragmentation when done.



                      2) Allocate fixed memory:



                      To only allocate 40% of the total memory of each GPU by:



                      config = tf.ConfigProto()
                      config.gpu_options.per_process_gpu_memory_fraction = 0.4
                      session = tf.Session(config=config, ...)


                      Note:
                      That's only useful though if you truly want to bind the amount of GPU memory available on the TensorFlow process.






                      share|improve this answer





























                        39















                        Here is an excerpt from the Book Deep Learning with TensorFlow




                        In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as it is needed by the process. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations, it starts out allocating very little memory, and as sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process.




                        1) Allow growth: (more flexible)



                        config = tf.ConfigProto()
                        config.gpu_options.allow_growth = True
                        session = tf.Session(config=config, ...)


                        The second method is per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. Note: No release of memory needed, it can even worsen memory fragmentation when done.



                        2) Allocate fixed memory:



                        To only allocate 40% of the total memory of each GPU by:



                        config = tf.ConfigProto()
                        config.gpu_options.per_process_gpu_memory_fraction = 0.4
                        session = tf.Session(config=config, ...)


                        Note:
                        That's only useful though if you truly want to bind the amount of GPU memory available on the TensorFlow process.






                        share|improve this answer



























                          39














                          39










                          39









                          Here is an excerpt from the Book Deep Learning with TensorFlow




                          In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as it is needed by the process. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations, it starts out allocating very little memory, and as sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process.




                          1) Allow growth: (more flexible)



                          config = tf.ConfigProto()
                          config.gpu_options.allow_growth = True
                          session = tf.Session(config=config, ...)


                          The second method is per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. Note: No release of memory needed, it can even worsen memory fragmentation when done.



                          2) Allocate fixed memory:



                          To only allocate 40% of the total memory of each GPU by:



                          config = tf.ConfigProto()
                          config.gpu_options.per_process_gpu_memory_fraction = 0.4
                          session = tf.Session(config=config, ...)


                          Note:
                          That's only useful though if you truly want to bind the amount of GPU memory available on the TensorFlow process.






                          share|improve this answer













                          Here is an excerpt from the Book Deep Learning with TensorFlow




                          In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as it is needed by the process. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations, it starts out allocating very little memory, and as sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process.




                          1) Allow growth: (more flexible)



                          config = tf.ConfigProto()
                          config.gpu_options.allow_growth = True
                          session = tf.Session(config=config, ...)


                          The second method is per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. Note: No release of memory needed, it can even worsen memory fragmentation when done.



                          2) Allocate fixed memory:



                          To only allocate 40% of the total memory of each GPU by:



                          config = tf.ConfigProto()
                          config.gpu_options.per_process_gpu_memory_fraction = 0.4
                          session = tf.Session(config=config, ...)


                          Note:
                          That's only useful though if you truly want to bind the amount of GPU memory available on the TensorFlow process.







                          share|improve this answer












                          share|improve this answer



                          share|improve this answer










                          answered Jan 11 '18 at 18:57









                          user1767754user1767754

                          11.5k5 gold badges79 silver badges95 bronze badges




                          11.5k5 gold badges79 silver badges95 bronze badges
























                              13















                              All the answers above assume execution with a sess.run() call, which is becoming the exception rather than the rule in recent versions of TensorFlow.



                              When using the tf.Estimator framework (TensorFlow 1.4 and above) the way to pass the fraction along to the implicitly created MonitoredTrainingSession is,



                              opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
                              conf = tf.ConfigProto(gpu_options=opts)
                              trainingConfig = tf.estimator.RunConfig(session_config=conf, ...)
                              tf.estimator.Estimator(model_fn=...,
                              config=trainingConfig)


                              Similarly in Eager mode (TensorFlow 1.5 and above),



                              opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
                              conf = tf.ConfigProto(gpu_options=opts)
                              tfe.enable_eager_execution(config=conf)


                              Edit: 11-04-2018
                              As an example, if you are to use tf.contrib.gan.train, then you can use something similar to bellow:



                              tf.contrib.gan.gan_train(........, config=conf)





                              share|improve this answer































                                13















                                All the answers above assume execution with a sess.run() call, which is becoming the exception rather than the rule in recent versions of TensorFlow.



                                When using the tf.Estimator framework (TensorFlow 1.4 and above) the way to pass the fraction along to the implicitly created MonitoredTrainingSession is,



                                opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
                                conf = tf.ConfigProto(gpu_options=opts)
                                trainingConfig = tf.estimator.RunConfig(session_config=conf, ...)
                                tf.estimator.Estimator(model_fn=...,
                                config=trainingConfig)


                                Similarly in Eager mode (TensorFlow 1.5 and above),



                                opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
                                conf = tf.ConfigProto(gpu_options=opts)
                                tfe.enable_eager_execution(config=conf)


                                Edit: 11-04-2018
                                As an example, if you are to use tf.contrib.gan.train, then you can use something similar to bellow:



                                tf.contrib.gan.gan_train(........, config=conf)





                                share|improve this answer





























                                  13














                                  13










                                  13









                                  All the answers above assume execution with a sess.run() call, which is becoming the exception rather than the rule in recent versions of TensorFlow.



                                  When using the tf.Estimator framework (TensorFlow 1.4 and above) the way to pass the fraction along to the implicitly created MonitoredTrainingSession is,



                                  opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
                                  conf = tf.ConfigProto(gpu_options=opts)
                                  trainingConfig = tf.estimator.RunConfig(session_config=conf, ...)
                                  tf.estimator.Estimator(model_fn=...,
                                  config=trainingConfig)


                                  Similarly in Eager mode (TensorFlow 1.5 and above),



                                  opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
                                  conf = tf.ConfigProto(gpu_options=opts)
                                  tfe.enable_eager_execution(config=conf)


                                  Edit: 11-04-2018
                                  As an example, if you are to use tf.contrib.gan.train, then you can use something similar to bellow:



                                  tf.contrib.gan.gan_train(........, config=conf)





                                  share|improve this answer















                                  All the answers above assume execution with a sess.run() call, which is becoming the exception rather than the rule in recent versions of TensorFlow.



                                  When using the tf.Estimator framework (TensorFlow 1.4 and above) the way to pass the fraction along to the implicitly created MonitoredTrainingSession is,



                                  opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
                                  conf = tf.ConfigProto(gpu_options=opts)
                                  trainingConfig = tf.estimator.RunConfig(session_config=conf, ...)
                                  tf.estimator.Estimator(model_fn=...,
                                  config=trainingConfig)


                                  Similarly in Eager mode (TensorFlow 1.5 and above),



                                  opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
                                  conf = tf.ConfigProto(gpu_options=opts)
                                  tfe.enable_eager_execution(config=conf)


                                  Edit: 11-04-2018
                                  As an example, if you are to use tf.contrib.gan.train, then you can use something similar to bellow:



                                  tf.contrib.gan.gan_train(........, config=conf)






                                  share|improve this answer














                                  share|improve this answer



                                  share|improve this answer








                                  edited Apr 11 '18 at 5:56









                                  GPrathap

                                  3,4095 gold badges36 silver badges57 bronze badges




                                  3,4095 gold badges36 silver badges57 bronze badges










                                  answered Feb 8 '18 at 3:25









                                  UrsUrs

                                  5054 silver badges9 bronze badges




                                  5054 silver badges9 bronze badges
























                                      11















                                      Updated for TensorFlow 2.0 Alpha and beyond



                                      From the 2.0 Alpha docs, the answer is now just one line before you do anything with TensorFlow:



                                      import tensorflow as tf
                                      tf.config.gpu.set_per_process_memory_growth(True)





                                      share|improve this answer



























                                      • Does this work for Tensorflow 1.13 ?

                                        – DollarAkshay
                                        May 1 at 3:26






                                      • 1





                                        @AkshayLAradhya no this is only for TF 2.0 and above. The other answers here will work fine for 1.13 and earlier.

                                        – Theo
                                        May 1 at 21:19















                                      Shameless plug: if you install the GPU-supported TensorFlow, the session will first allocate all of the GPU memory, whether you set it to use only the CPU or the GPU. My added tip: even if you set the graph to use the CPU only, you should set the same configuration (as answered above) to prevent unwanted GPU occupation.

                                      And in an interactive interface like IPython you should also set that configuration; otherwise it will allocate all the memory and leave almost none for other processes. This is sometimes hard to notice.
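
                                      A minimal sketch of that advice (TF 1.x; the tensors here are just a toy example): pass the same restrained GPU options even when the graph is pinned to the CPU, e.g. inside an IPython session.

                                      import tensorflow as tf

                                      # Even for CPU-only work, create the session with restrained GPU options
                                      # so the process does not grab the whole GPU.
                                      config = tf.ConfigProto()
                                      config.gpu_options.allow_growth = True
                                      # config.device_count['GPU'] = 0   # optionally hide the GPU entirely

                                      with tf.Session(config=config) as sess:
                                          with tf.device('/cpu:0'):
                                              total = tf.add(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))
                                          print(sess.run(total))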






                                      – answered May 23 '17 at 7:52 by Lerner Zhang (edited Jan 15 '18 at 11:45)

                                              Tensorflow 2.0 Beta and (probably) beyond



                                              The API changed again. It can now be found at:



                                              tf.config.experimental.set_memory_growth(device, enable)


                                              Aliases:



                                              • tf.compat.v1.config.experimental.set_memory_growth

                                              • tf.compat.v2.config.experimental.set_memory_growth

                                              • tf.config.experimental.set_memory_growth

                                              https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/config/experimental/set_memory_growth
                                              https://www.tensorflow.org/beta/guide/using_gpu#limiting_gpu_memory_growth
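
                                              A minimal sketch using this API (it must run before any GPU has been initialized; device listing via tf.config.experimental.list_physical_devices is assumed to be available in the same release):

                                              import tensorflow as tf

                                              # Enable on-demand memory growth for every visible GPU.
                                              # Must run before the GPUs are initialized, otherwise it raises RuntimeError.
                                              gpus = tf.config.experimental.list_physical_devices('GPU')
                                              for gpu in gpus:
                                                  tf.config.experimental.set_memory_growth(gpu, True)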






                                              – answered Jun 17 at 13:08 by mx_muc (edited Jun 17 at 13:22)

                                                      You can set

                                                      TF_FORCE_GPU_ALLOW_GROWTH=true

                                                      in your environment variables.

                                                      In the TensorFlow source code, the corresponding check looks like this (excerpt):

                                                      bool GPUBFCAllocator::GetAllowGrowthValue(const GPUOptions& gpu_options) {
                                                        const char* force_allow_growth_string =
                                                            std::getenv("TF_FORCE_GPU_ALLOW_GROWTH");
                                                        if (force_allow_growth_string == nullptr)
                                                          return gpu_options.allow_growth();
                                                        // ...
                                                      }
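
                                                      Since the allocator reads the variable when the GPU is first initialised, it has to be set before TensorFlow starts up. A minimal sketch is to export it in the shell before launching your script, or to set it from Python before the first import:

                                                      import os

                                                      # Must be set before `import tensorflow`, because the allocator reads it
                                                      # when the GPU device is first initialised.
                                                      os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'

                                                      import tensorflow as tf  # imported after setting the variable on purpose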






                                                      – answered Jun 2 at 17:15 by Mey Khalili (edited Jun 17 at 16:44)

                                                              I tried to train a U-Net on the VOC data set, but because of the huge image size the memory runs out. I tried all the above tips, and even a batch size of 1, with no improvement. Sometimes the TensorFlow version also causes the memory issues. Try using

                                                              pip install tensorflow-gpu==1.8.0







                                                              – answered Oct 16 '18 at 6:05 by Khan

                                                                      Well, I am new to TensorFlow. I have a GeForce 740M (or similar) GPU with 2 GB of RAM, and I was running an MNIST-style handwritten example for a native language, with training data of 38,700 images and 4,300 test images, trying to get precision, recall and F1 using the following code, as sklearn was not giving me precise results. Once I added this to my existing code, I started getting GPU errors.

                                                                      TP = tf.count_nonzero(predicted * actual)
                                                                      TN = tf.count_nonzero((predicted - 1) * (actual - 1))
                                                                      FP = tf.count_nonzero(predicted * (actual - 1))
                                                                      FN = tf.count_nonzero((predicted - 1) * actual)

                                                                      prec = TP / (TP + FP)
                                                                      recall = TP / (TP + FN)
                                                                      f1 = 2 * prec * recall / (prec + recall)

                                                                      Plus my model was heavy, I guess; I was getting the memory error after 147 or 148 epochs. Then I thought, why not create functions for these tasks? I don't know whether it works this way in TensorFlow, but I thought that if a local variable is used, it may release memory once it goes out of scope. So I defined the above elements in functions for training and testing, and I was able to run 10,000 epochs without any issues. I hope this helps.
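
                                                                      A hypothetical sketch of that workaround, wrapping the metric ops in a function so they fall out of scope between uses (the name compute_f1 and the function boundary are illustrative, not from the original code):

                                                                      import tensorflow as tf

                                                                      def compute_f1(predicted, actual):
                                                                          # predicted/actual are 0-1 tensors of the same shape.
                                                                          TP = tf.count_nonzero(predicted * actual)
                                                                          TN = tf.count_nonzero((predicted - 1) * (actual - 1))  # unused for F1, kept to mirror the answer
                                                                          FP = tf.count_nonzero(predicted * (actual - 1))
                                                                          FN = tf.count_nonzero((predicted - 1) * actual)
                                                                          prec = TP / (TP + FP)
                                                                          recall = TP / (TP + FN)
                                                                          return 2 * prec * recall / (prec + recall)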






                                                                      – answered Jan 21 at 17:26 by Imran Ud Din

                                                                      • I am amazed at TF's utility but also by its memory use. On the CPU, Python allocates 30 GB or so for a training job on the flowers dataset used in many TF examples. Insane. – Eric M, Apr 12 at 15:30














