Using a K8s Ingress controller on VMware TKGI:
- For non-HA deployments, Load Balancers are not required. Access to TKGI K8s deployments can be provided with a K8s Ingress Controller.
- Ingress: a K8s API object (YAML) that configures external access to an internal application (deployment) within the K8s cluster for HTTP/HTTPS workloads:
- - Provide external access to service(s) with resolvable URLs
- - Terminate TLS/SSL
- - Provide name-based virtual hosting
- Ingress Controller Types:
- - nginx / Ingress-NGINX (standard)
- - AWS (AWS cloud)
- - GCE (GCE cloud)
- - Many others exist outside of the three Kubernetes-project-maintained controllers above.
IMPORTANT:
- If you are exposing a non-HTTP/HTTPS port, then you are not doing an Ingress, but either:
- - Service.Type=NodePort
- - Service.Type=LoadBalancer
- An Ingress resource without an ingress controller will not route traffic - both are required.
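For reference, a minimal Ingress resource looks like the following sketch (the hostname and service name are illustrative, matching the demoapp example used later in this note):

```yaml
# Minimal Ingress resource (illustrative names). Routing only happens once an
# ingress controller such as ingress-nginx is also running in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp
spec:
  ingressClassName: nginx
  rules:
  - host: demoapp.tkgi.mindwatering.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demoapp
            port:
              number: 80
```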
Setup without Using NSX - High-level Steps:
A. Deploy a K8s Ingress Controller (e.g. NGINX)
B. Configure DNS
C. Configure TLS (not strictly required, but it really should be these days)
D. Deploy an APP using the Service that specifies the Ingress Controller
Prerequisites:
- TKGI-deployed cluster must have its own LoadBalancer already set up
- A wildcard DNS record that points to the cluster Load Balancer
- CoreDNS is enabled
If CoreDNS is not enabled and running, the cluster will need to be recreated:
- Tanzu Operations Manager --> click the BOSH Director tile --> click the Director Config pane.
- Enable Post Deploy Scripts: <checked> --> click Review Pending Changes (button), click Apply Changes (button)
- Delete the cluster, and then re-create the cluster.
A. Deploy a K8s NGINX Ingress Controller:
1. Login:
$ tkgi login -a api.tkgi.mindwatering.net -u myadminid -k
$ tkgi get-credentials my-cluster
<confirm context set for cluster my-cluster>
2. Confirm the cluster CoreDNS is enabled:
$ kubectl cluster-info
<view output and confirm CoreDNS line>
e.g.
Kubernetes master is running at https://10.10.5.123
elasticsearch-logging is running at https://10.10.5.123/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
kibana-logging is running at https://10.10.5.123/api/v1/namespaces/kube-system/services/kibana-logging/proxy
CoreDNS is running at https://10.10.5.123/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
grafana is running at https://10.10.5.123/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
3a. Install the Ingress Controller (NGINX Cloud version):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.nn.nn/deploy/static/provider/cloud/deploy.yaml
(replace v1.nn.nn with the desired controller release; the bare-metal example below uses v1.13.3)
- or -
3b. Install the Ingress Controller (NGINX Bare-metal version - NodePort):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.3/deploy/static/provider/baremetal/deploy.yaml
Note: The bare-metal option will end up with high port numbers assigned for the HTTP and HTTPS ports. The MetalLB project, other Layer 2 software options, or a load balancer/firewall external to the cluster can be used to translate the high port numbers to a public IP with port 80/443. See kubernetes.github.io/ingress-nginx/deploy/baremetal for more information.
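If MetalLB is chosen for the bare-metal case, its Layer 2 mode is configured with two small custom resources. The sketch below assumes MetalLB is already installed in the metallb-system namespace; the pool name and the address range are placeholders and must match IPs reserved on the node network:

```yaml
# Hypothetical MetalLB Layer-2 config (names and address range are assumptions).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.10.5.240-10.10.5.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - ingress-pool
```

With a pool in place, a Service of type LoadBalancer (such as the ingress-nginx controller service) gets an External-IP from the range instead of staying <pending>.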
4. Check the cluster firewall configuration:
$ kubectl -n ingress-nginx get pod -o yaml
<view output>
Note:
- Confirm port 8443 (ingress-nginx admission controller) is listed/open between all K8s hosts/nodes
- Confirm ports 80 and 443 are listed/open to the external public on the K8s nodes
5. Confirm the pods started and are running for the NGINX Ingress Controller:
$ kubectl get pods --namespace=ingress-nginx
<confirm state is running, if not yet all running, wait a moment, and repeat>
B. Configure DNS
1. Confirm the port number NGINX is using:
$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)
ingress-nginx-controller   LoadBalancer   10.10.5.125   80.90.100.120   80:30825/TCP,443:31843/TCP
- or -
ingress-nginx-controller   LoadBalancer   10.10.5.125   <pending>       80:30825/TCP,443:31843/TCP
Notes:
- If the External-IP says <pending>, then the cluster doesn't have a spare external IP to allocate to the ingress-nginx controller, or the cluster doesn't support service of type LoadBalancer.
- Note that the bare-metal NodePort NGINX type of ingress has high ports that map to the internal app's ports 80 and 443. In this example the HTTP node port is 30825.
2. Get the internal-cluster IP of the deployment VMs running on the nodes:
$ kubectl -o jsonpath='{.items[*].status.addresses[0].address}' get nodes
<review the VM cluster-internal IPs and note the IP for the internal application>
3. Set up external DNS for the application, pointing to either the load balancer's public IP or (for NodePort) a node IP:
e.g. "A" record: demoapp.tkgi.mindwatering.net, 80.90.100.120
C. Configure TLS
1a. For a corporate internal cluster/certificate, create the self-signed certificate:
(key = mwkeyint.pem, cert=mwcertint.pem, for 1 year)
$ cd ~/mw-cert-int/
$ openssl req -x509 \
-nodes -newkey rsa:4096 \
-keyout mwkeyint.pem \
-out mwcertint.pem \
-days 365 \
-subj "/CN=*.tkgi.mindwatering.net"
- or -
1b. Obtain a commercial certificate and key, purchased from a registrar/CA, in PEM format:
(key = mwkeyext.pem, cert=mwcertext.pem, for 1 year)
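Whichever path is taken, the PEM pair can be sanity-checked before uploading. The snippet below regenerates the step-1a self-signed pair (same names and options) so it stands alone, then inspects the certificate and confirms the key and cert match:

```shell
# Re-create the self-signed pair from step 1a (same names/options)
openssl req -x509 -nodes -newkey rsa:4096 \
  -keyout mwkeyint.pem -out mwcertint.pem \
  -days 365 -subj "/CN=*.tkgi.mindwatering.net"
# The subject should show the wildcard CN, and the dates a one-year window:
openssl x509 -in mwcertint.pem -noout -subject -dates
# The key and certificate match when these two digests are identical:
openssl x509 -in mwcertint.pem -noout -pubkey | openssl sha256
openssl pkey -in mwkeyint.pem -pubout | openssl sha256
```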
2a. Upload the internal certificate and private key to the ingress controller:
(mw-cert-int = name for the secret that will be created, that holds the key and cert.)
$ kubectl -n ingress-nginx create secret tls mw-cert-int --key mwkeyint.pem --cert mwcertint.pem
- or -
2b. Upload the purchased public certificate and its private key to the ingress controller:
(mw-cert-ext = name for the secret that will be created, that holds the key and cert.)
$ kubectl -n ingress-nginx create secret tls mw-cert-ext --key mwkeyext.pem --cert mwcertext.pem
D. Deploy an APP using the Service that specifies the Ingress Controller
1. Create a demo application:
$ kubectl create deployment demoapp --image=httpd --port=80
$ kubectl expose deployment demoapp
2a. Create a local node app ingress:
$ kubectl create ingress demoapp --class=nginx --rule="demoapp.tkgi.mindwatering.net/*=demoapp:80"
- or -
$ kubectl create ingress demoapp --class=nginx --rule=demoapp.tkgi.mindwatering.net/=demoapp:80
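The imperative command above generates a manifest roughly equivalent to the following, here extended with a tls block referencing the internal secret from section C. One caveat (an assumption worth verifying): an Ingress can only reference TLS secrets in its own namespace, so for this default-namespace Ingress the secret may need to be created in default rather than ingress-nginx:

```yaml
# Sketch of the demoapp Ingress with TLS added (names follow this note's examples).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - demoapp.tkgi.mindwatering.net
    secretName: mw-cert-int    # must be in the same namespace as this Ingress
  rules:
  - host: demoapp.tkgi.mindwatering.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demoapp
            port:
              number: 80
```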
2b. Create the port-forward for the ingress:
$ kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
3. Open a browser and see if the domain now loads the cluster app:
Browser --> http://demoapp.tkgi.mindwatering.net
---
Other Information:
Glossary:
- Node: A worker machine (VM or physical host) in Kubernetes, part of a cluster.
- Cluster: A set of Nodes that run containerized applications managed by Kubernetes.
- - In most common K8s deployments, nodes in the cluster are not part of the public internet; they use internal networks that have an "ingress" point to be consumed externally.
- Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
- Cluster network: A set of links, logical and/or physical (hardware), that facilitate communication within a cluster according to the Kubernetes networking model.
- Service: Identifies a set of Pods using label selectors. Unless specified, services are assumed to have virtual IPs only routable within the cluster network.
Service Types:
- ClusterIP: Exposes the service on a cluster-internal IP (the default type). Well suited for back-end deployment components (e.g. MySQL or MongoDB). ClusterIP services that need external access require a front-end Ingress or Gateway (L4 - L7 routing, which includes the K8s next-generation load balancing API).
- NodePort: Exposes the service on each (physical worker) Node's IP at a static port in the 30000-32767 range; every node proxies that same port to the service. With round-robin DNS, this can spread load across the Nodes, or a more intelligent load balancer (IAM/WebSEAL, HAProxy, F5, etc.) can also translate/map the high port back down to a standard "low number" port. NodePort supports TCP, UDP, or SCTP, whichever matches the service's protocol.
- Ingress: Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic is controlled by rules defined on the Ingress resource manifest (YAML).
- LoadBalancer: Sets up configuration with an external load balancer, on-premises or cloud-provided; K8s itself does not provide one.
- ExternalName: Maps the service to the DNS name given in the externalName field. Configuring an ExternalName creates a CNAME record in the cluster DNS service pointing at that external hostname. No proxy is configured.
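The service types above can be sketched as minimal manifests (all names, selectors, and the external hostname are illustrative; ClusterIP is simply what you get when no type is specified):

```yaml
# Illustrative Service manifests for the types above (names are assumptions).
apiVersion: v1
kind: Service
metadata:
  name: demoapp-nodeport
spec:
  type: NodePort
  selector:
    app: demoapp
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30825          # must fall in the 30000-32767 range
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp-lb
spec:
  type: LoadBalancer         # the LB itself comes from the platform, not K8s
  selector:
    app: demoapp
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  type: ExternalName         # cluster DNS answers with a CNAME; no proxying
  externalName: db.mindwatering.net
```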