# [[Kubernetes – Services]]

```ad-warning
Draft
```

Services are roughly analogous to L4 load balancing. ^[[Kubernetes Networking Intro and Deep-Dive - Bowei Du & Tim Hockin, Google - YouTube](https://www.youtube.com/watch?v=tq9ng_Nz9j8)]

Services are an abstraction of how a client reaches `1..*` instances of a pod running an application.

## Best practices

- Use a named port for a `Service` `targetPort` instead of a hard-coded port value. This requires ports to be named in the `Pod` spec, but it lets pods change the port they use over time without breaking the service.

## Service Types

`LoadBalancer` -> `NodePort` -> `ClusterIP`

**`ClusterIP`** makes a `Service` routable ***within*** a cluster via a [[Virtual IP | VIP]] for the service. The `kube-proxy` keeps each `Endpoint` up to date locally on each node. `kube-proxy` can run in a few different modes; a common one updates a node's `iptables` rules with mappings from the `VIP` to the appropriate healthy backends.

You *can* still access a `ClusterIP` service from outside the cluster for **debugging** purposes via the `k8s` `apiserver` proxy. ^[This should be for debugging only, since you're using your `k8s` cluster credentials to authenticate to the apiserver proxy.] ^[[So Many Proxies](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#so-many-proxies)]

```
kubectl proxy --port=8088
curl http://localhost:8088/api/v1/namespaces/<NAMESPACE>/services/<SERVICE_NAME>:<SERVICE_PORT>/proxy
```

^[[Manually constructing apiserver proxy URLs - Accessing Clusters - Kubernetes](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)]

**`NodePort`** is the most primitive way to expose a service outside of the cluster: it opens the same port on every node in the cluster. You *can* specify `spec.ports[].nodePort` on the `Service`, but you need to be sure that port is available on all nodes.
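As a sketch, a `NodePort` service with an explicitly pinned port might look like the following (the names, labels, and port numbers here are all hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app           # must match the labels on the backing pods
  ports:
    - port: 80            # the service's cluster-internal port
      targetPort: 8080    # port the pod's container listens on
      nodePort: 30080     # explicitly pinned; must be free on every node (30000-32767)
```

With this manifest applied, the service is reachable at `<any-node-ip>:30080`.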
It's often easier to leave it unspecified and let `k8s` pick a random port. Anyone who can reach a node (e.g. another VM in the same VPC / subnet) can connect to the service. `kube-proxy` configures the node to forward traffic arriving on the node port to the targeted `Endpoint` pod IP/port.

There are several downsides that make this impractical at scale:

- Only one service per port
- Only ports 30000–32767 are available
- You still need to figure out the IP of the node(s) in the cluster and pick one

In practice, `NodePort` is useful for examples and as a building block for exposing services via a `LoadBalancer`.

**`LoadBalancer`** exposes an L4 load balancer at a single IP address and uses the targeted pod `Endpoints` as backends. It usually leverages an infrastructure provider's load-balancer resource (i.e. not a resource provided natively by `k8s`). Since you need a load balancer per service, and each load balancer needs its own IP (it's an L4 load balancer, after all), this setup can get expensive.

## Ingress

An `Ingress` isn't a `Service`. Instead, it models an L7 application load balancer that supports subdomain-based and path-based virtual routing. An `Ingress` resource's config targets services.

## Resources

- [Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what? | by Sandeep Dinesh | Google Cloud - Community | Medium](https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0)

---

- Links: [[Kubernetes|k8s]] [[Kubernetes networking]] [[Kubernetes – Ingress]]
- Created at: [[2021-04-06]]