
How exactly does k8s reserve resources for a namespace?


I have the following questions regarding request/limit quotas for a namespace (ns).

Consider the following namespace resource quota:
- request: 1 core / 1 GiB
- limit: 2 cores / 2 GiB
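A quota like this is typically declared as a ResourceQuota object in the namespace; a minimal sketch (the metadata names here are illustrative, not from the question):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota      # illustrative name
  namespace: my-app    # illustrative namespace
spec:
  hard:
    requests.cpu: "1"        # sum of all pod CPU requests in the ns
    requests.memory: 1Gi     # sum of all pod memory requests in the ns
    limits.cpu: "2"          # sum of all pod CPU limits in the ns
    limits.memory: 2Gi       # sum of all pod memory limits in the ns
```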



  1. Does this mean the namespace is guaranteed 1 core / 1 GiB? How is that achieved physically on the cluster nodes? Does k8s somehow strictly reserve these amounts for the namespace (at the time it is created)? At which point in time does the reservation take place?

  2. The limit of 2 cores / 2 GiB: does this mean it is not guaranteed for the namespace and depends on the cluster's current state? For example, if the cluster currently has only 100 MiB of free RAM, but at runtime a pod needs 200 MiB above its resource request, will the pod be restarted? Where does k8s take this resource from if a pod needs to go above its request?

  3. Regarding namespace granularity and k8s horizontal autoscaling: suppose we have 2 applications and 2 namespaces, one namespace per app. We set both namespace quotas so that there is a free buffer for 2 extra pods, and configure horizontal autoscaling up to 2 extra pods at a certain CPU threshold. Is there really a point in such a setup? My concern is that if a namespace reserves its resources and no other namespace can utilize them, we could just run 2 extra pods in each namespace's replica set permanently, with no autoscaling at all. I can see a point in autoscaling if we had more than one application in a namespace, so that those apps could share the same resource buffer for scaling. Is this assumption correct?

  4. Do you think it is good practice to have one namespace per app? Why?

P.S. I know what resource requests/limits are and the difference between them; most sources give only a very high-level explanation of the concept.

Thanks in advance.










      kubernetes






      asked Mar 27 at 8:51









Jan Lobau


























1 Answer






The docs clearly state the following:

    In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces, there may be contention for resources. This is handled on a first-come-first-served basis.

and

    ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.

and

    resource quota divides up aggregate cluster resources, but it creates no restrictions around nodes: pods from several namespaces may run on the same node

A ResourceQuota is a constraint set on a namespace; it does not reserve capacity, it just caps the resources that can be consumed by that namespace.

To effectively "reserve" capacity, you have to set such restrictions on all namespaces, so that the other namespaces cannot use more resources than your cluster can provide. This way you have a stronger guarantee that a namespace will have capacity available to run its load.
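One practical consequence worth noting: once a ResourceQuota restricts compute resources in a namespace, the API server rejects pods that do not specify requests/limits for those resources. A LimitRange can supply defaults so such pods are still admitted; a sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits   # illustrative name
  namespace: my-app      # illustrative namespace
spec:
  limits:
  - type: Container
    defaultRequest:      # applied when a container omits requests
      cpu: 250m
      memory: 128Mi
    default:             # applied when a container omits limits
      cpu: 500m
      memory: 256Mi
```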



The docs suggest:

• Proportionally divide total cluster resources among several teams (namespaces).

• Allow each team to grow resource usage as needed, but have a generous limit to prevent accidental resource exhaustion.

• Detect demand from one namespace, add nodes, and increase quota.

Given that, the answers to your questions are:

1. No, it is not reserved capacity; the reservation happens at resource (pod) creation time.

2. Resources that are already running are not affected by the quota. New resources are rejected if creating them would overcommit the quota (limits).

3. As stated in the docs, if the quotas add up to more than the cluster's capacity, reservation happens on a first-come-first-served basis.

4. That could be its own question on SO; in short, it is done for resource isolation and management.
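As for the autoscaling setup in question 3, the per-namespace HPA with a buffer of 2 extra pods would look roughly like this (autoscaling/v1; names and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa       # illustrative name
  namespace: my-app      # illustrative namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app         # illustrative target deployment
  minReplicas: 2
  maxReplicas: 4                       # leaves room for 2 extra pods
  targetCPUUtilizationPercentage: 80   # scale up past this CPU threshold
```

For the buffer to be usable, the namespace quota must have headroom for the requests of those 2 extra pods; otherwise the scale-up is rejected by the quota, not scheduled.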






                answered Mar 27 at 10:27









Diego Mendes





















