How to fix Kubernetes Ingress Controller cutting off nodes from cluster


I'm having some trouble installing an Ingress Controller in my on-prem cluster (created with Kubespray, running MetalLB to provide LoadBalancer services).



I tried nginx, Traefik, and Kong, but they all gave the same result.



I'm installing the nginx Helm chart using the following values.yaml:



controller:
  kind: DaemonSet
  nodeSelector:
    node-role.kubernetes.io/master: ""
  image:
    tag: 0.23.0
rbac:
  create: true


With the command:



helm install --name nginx stable/nginx-ingress --values values.yaml --namespace ingress-nginx


When I deploy the ingress controller in the cluster, a service is created (e.g. nginx-ingress-controller for nginx). This service is of type LoadBalancer and gets an external IP.



When this external IP is assigned, the node that's linked to it is lost (status Not Ready). However, when I check that node, it's still running; it's just cut off from the other nodes and can't even ping them ("No route found"). When I remove the service (but keep the rest of the nginx Helm chart), everything recovers and the Ingress still works. I also tried installing nginx/Traefik/Kong without a LoadBalancer, using NodePorts or external IPs on the service, but I get the same result.
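For reference, this is roughly how I observe the symptom (the namespace matches the helm install above; the placeholder IP is any other node in the cluster):

kubectl get nodes                    # the node that holds the external IP goes NotReady
kubectl get svc -n ingress-nginx     # the controller service is type LoadBalancer with an external IP
ping <other-node-ip>                 # run on the affected node: "Destination Host Unreachable"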



Does anyone recognize this behaviour?
Why does the ingress still work, even when I remove the nginx-ingress-controller service?

nginx kubernetes traefik kubernetes-ingress kong






asked Mar 27 at 8:32









Nils Lamot

  • Can you please elaborate on this: "node that's linked to this external IP is lost". Do you mean that the node and the ingress service are attempting to claim the same public IP?

    – A_Suh
    Apr 2 at 14:41

  • Hi @A_Suh, thanks for your response! The external IP for the service is the IP of one of the 5 nodes in my cluster. Let's call that node X. When the service is created and gets an external IP, X gets status "Not Ready". However, X is not down, since I can still log in to it and kubelet is still running. The moment the service is installed in my cluster, X can't reach the master node anymore, so its health pings no longer arrive. When I ping the master node (or any other node) from X, I get "Destination Host Unreachable".

    – Nils Lamot
    Apr 2 at 14:53

  • This is weird: your DHCP server is assigning an IP to the service which has already been assigned to the node. Would you try to manually set a static IP address on your ingress service? i.e. apiVersion: v1 kind: Service spec: type: LoadBalancer loadBalancerIP: 10.10.10.10 (see the expanded manifest after these comments)

    – A_Suh
    Apr 2 at 15:05

  • Aha, so it seems I have misconfigured something. Since the load balancer is running inside the cluster, shouldn't the load balancer IP be the IP of one of the nodes inside the cluster? I configured MetalLB to provide a range of IP addresses to the load balancers, and these IPs are the IPs of the nodes in my cluster. I'm sorry, I'm new to Kubernetes and I think I'm missing something.

    – Nils Lamot
    Apr 2 at 15:12

  • No, it shouldn't be the same. You are exposing access to the service as a separate object. So please amend the range of IPs for the LB so that it does not intersect with the nodes' IPs.

    – A_Suh
    Apr 2 at 15:21
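For clarity, A_Suh's inline snippet expanded into a full manifest might look like the following sketch; the metadata name and selector are illustrative, and 10.10.10.10 stands for any address from the MetalLB pool:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller    # illustrative name
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10       # static IP requested from MetalLB
  selector:
    app: nginx-ingress              # illustrative selector for the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80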

















  • Can you please elaborate this - "node that's linked to this external IP is lost " Do you mean, that node and ingress service attempting to assign the same public IP?

    – A_Suh
    Apr 2 at 14:41











  • Hi @A_Suh, thanks for your response! The external IP for the service is the IP of one of the 5 nodes in my cluster. Let's call that node X. When the service is created and gets an external IP, X gets status "Not Ready". However, X is not down, since I can still log in to it and kubelet is still running. The moment the service is installed in my cluster, X can't access the master node anymore so its health pings can't reach master anymore. When I ping to the master node (or any other node) from X, I get "Destination Host Unreachable".

    – Nils Lamot
    Apr 2 at 14:53











  • this is weird your DHCP server is assigning IP to the service, which has been already assigned to the node. Would you try to manually set a static IP address to your ingress service? i.e. apiVersion: v1 kind: Service spec: type: LoadBalancer loadBalancerIP: 10.10.10.10

    – A_Suh
    Apr 2 at 15:05











  • Aha, so it seems I have misconfigured something. Since the load balancer is running inside the cluster, shouldn't the load balancer IP be the IP of one of the nodes inside the cluster? I configured MetalLB to provide a range of IP addresses to the load balancers and these IP's are the IP's of the nodes in my cluster. I'm sorry, I'm new to Kubernetes and I think I'm missing something?

    – Nils Lamot
    Apr 2 at 15:12











  • No it shouldn't be the same. You are exposing access to the service as a separate object. So, please amend the range of the IPs for the LB not to intersect with node's IPs

    – A_Suh
    Apr 2 at 15:21
















Can you please elaborate this - "node that's linked to this external IP is lost " Do you mean, that node and ingress service attempting to assign the same public IP?

– A_Suh
Apr 2 at 14:41





Can you please elaborate this - "node that's linked to this external IP is lost " Do you mean, that node and ingress service attempting to assign the same public IP?

– A_Suh
Apr 2 at 14:41













Hi @A_Suh, thanks for your response! The external IP for the service is the IP of one of the 5 nodes in my cluster. Let's call that node X. When the service is created and gets an external IP, X gets status "Not Ready". However, X is not down, since I can still log in to it and kubelet is still running. The moment the service is installed in my cluster, X can't access the master node anymore so its health pings can't reach master anymore. When I ping to the master node (or any other node) from X, I get "Destination Host Unreachable".

– Nils Lamot
Apr 2 at 14:53





Hi @A_Suh, thanks for your response! The external IP for the service is the IP of one of the 5 nodes in my cluster. Let's call that node X. When the service is created and gets an external IP, X gets status "Not Ready". However, X is not down, since I can still log in to it and kubelet is still running. The moment the service is installed in my cluster, X can't access the master node anymore so its health pings can't reach master anymore. When I ping to the master node (or any other node) from X, I get "Destination Host Unreachable".

– Nils Lamot
Apr 2 at 14:53













this is weird your DHCP server is assigning IP to the service, which has been already assigned to the node. Would you try to manually set a static IP address to your ingress service? i.e. apiVersion: v1 kind: Service spec: type: LoadBalancer loadBalancerIP: 10.10.10.10

– A_Suh
Apr 2 at 15:05





this is weird your DHCP server is assigning IP to the service, which has been already assigned to the node. Would you try to manually set a static IP address to your ingress service? i.e. apiVersion: v1 kind: Service spec: type: LoadBalancer loadBalancerIP: 10.10.10.10

– A_Suh
Apr 2 at 15:05













Aha, so it seems I have misconfigured something. Since the load balancer is running inside the cluster, shouldn't the load balancer IP be the IP of one of the nodes inside the cluster? I configured MetalLB to provide a range of IP addresses to the load balancers and these IP's are the IP's of the nodes in my cluster. I'm sorry, I'm new to Kubernetes and I think I'm missing something?

– Nils Lamot
Apr 2 at 15:12





Aha, so it seems I have misconfigured something. Since the load balancer is running inside the cluster, shouldn't the load balancer IP be the IP of one of the nodes inside the cluster? I configured MetalLB to provide a range of IP addresses to the load balancers and these IP's are the IP's of the nodes in my cluster. I'm sorry, I'm new to Kubernetes and I think I'm missing something?

– Nils Lamot
Apr 2 at 15:12













No it shouldn't be the same. You are exposing access to the service as a separate object. So, please amend the range of the IPs for the LB not to intersect with node's IPs

– A_Suh
Apr 2 at 15:21





No it shouldn't be the same. You are exposing access to the service as a separate object. So, please amend the range of the IPs for the LB not to intersect with node's IPs

– A_Suh
Apr 2 at 15:21












1 Answer
After a long search, we finally found a working solution for this problem.



As mentioned by @A_Suh, the pool of IPs that MetalLB uses should contain IPs that are not currently in use by any of the nodes in the cluster. By adding a new IP range that is also configured in the DHCP server, MetalLB can use ARP to link one of the IPs to one of the nodes.
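Concretely, the MetalLB configuration ends up looking roughly like this (layer 2 mode; the ConfigMap name and namespace are MetalLB's defaults, and the address range is the one from the example below):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.4.5.200/31    # reserved for LoadBalancer services; does not overlap any node IP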



For example, in my 5-node cluster (kube11-15): when MetalLB gets the range 10.4.5.200/31 and allocates 10.4.5.200 for my nginx-ingress-controller, 10.4.5.200 is linked to kube12. On ARP requests for 10.4.5.200, all 5 nodes respond with kube12, and traffic is routed to that node.
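To check which node answers for the address, something like this can be run from another host on the same L2 segment (the interface name is an assumption):

arping -I eth0 -c 3 10.4.5.200     # replies should carry the MAC of the node MetalLB elected (kube12 here)
kubectl get svc -n ingress-nginx   # EXTERNAL-IP should show 10.4.5.200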






        answered Apr 16 at 11:34









Nils Lamot




















