Service Types in Kubernetes


In Kubernetes, a Service enables network access to a Pod or a set of Pods.
A Service selects Pods based on their labels. When a network request is made to the Service, it selects all Pods in the cluster matching its selector, chooses one of them, and forwards the request to it.


Source: https://matthewpalmer.net/ (thanks to Matthew Palmer)

Kubernetes Service vs Deployment

How can we differentiate a Deployment and a Service in K8s?

A Deployment is responsible for keeping a set of Pods running in a cluster.

A Service is responsible for enabling network access to a set of Pods in a cluster.

We can use a Deployment without a Service, which lets us keep a set of identical Pods running in the K8s cluster.

The Deployment can be scaled up and down, and its Pods are replicated.
Each Pod can be reached directly by individual network requests, but because Pods are created and destroyed over time, tracking individual Pods is difficult.

We can also use a Service without a Deployment. In that case we create each Pod individually rather than all at once as a Deployment does. The Service can still route network requests to those Pods by selecting them based on the labels allocated to them, as the sketch below shows.
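As a minimal sketch of this pattern (the names, labels, and ports below are illustrative assumptions, not taken from the article's example), a standalone Pod carrying a label can be paired with a Service whose selector matches that label:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod              # hypothetical Pod name
  labels:
    app: hello                 # the Service below selects on this label
spec:
  containers:
  - name: hello
    image: gcr.io/google-samples/node-hello:1.0   # sample image, assumed for illustration
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-pod-service      # hypothetical Service name
spec:
  selector:
    app: hello                 # routes traffic to any Pod carrying this label
  ports:
  - port: 8080
    targetPort: 8080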

How can we discover a Kubernetes Service?

In Kubernetes, there are two ways to discover a service:

  • DNS. A DNS server is added to the cluster and watches the Kubernetes API, creating a DNS record set for each new Service. When DNS is enabled across the cluster, all Pods can automatically resolve Service names.
  • Environment variables. When a Pod runs on a node, the kubelet adds environment variables for each active Service (see the example below).
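For example, assuming a Service named example-service in the default namespace (the names and values here are illustrative), cluster DNS lets Pods resolve the Service by name, and the kubelet injects variables like these into Pods started after the Service exists:

# DNS name resolvable from any Pod (cluster.local is the default cluster domain)
example-service.default.svc.cluster.local

# Environment variables injected by the kubelet
EXAMPLE_SERVICE_SERVICE_HOST=10.0.0.11    # the Service's cluster IP (value illustrative)
EXAMPLE_SERVICE_SERVICE_PORT=8080         # the Service's port (value illustrative)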

How to create a service

To understand this better, let's walk through a simple example: a "Hello World" app created with a Deployment.

Once the app is deployed and running, we will create a ClusterIP Service for accessing our application in Kubernetes.

Now, let's create the Deployment:

 
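One way to do this is shown below; it is a sketch that assumes the standard Kubernetes "Hello World" sample image and a reasonably recent kubectl that supports the --replicas and --port flags:

$ kubectl create deployment hello-world \
    --image=gcr.io/google-samples/node-hello:1.0 \
    --replicas=2 \
    --port=8080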

This command creates a Deployment with two replicas of our application in Kubernetes.

Next, check that the Deployment is up and running.

 
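A couple of commands along these lines will show it (a sketch, assuming the Deployment created above):

$ kubectl get deployment hello-world       # READY should show 2/2
$ kubectl get pods -l app=hello-world      # kubectl create deployment adds the app=hello-world label automatically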

Now that the app is running, to access the freshly created application we need to create a ClusterIP type of Service. There are two options:

  • Create a YAML manifest for the service and apply it, or
  • Use the “kubectl expose” command, which is the easier option. This expose command creates a service without creating a YAML file.
$ kubectl expose deployment hello-world --type=ClusterIP --name=example-service
service "example-service" exposed

Here, we’ll create a service called example-service with type ClusterIP.
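If you prefer the YAML-manifest option from the first bullet instead, an equivalent Service definition would look roughly like this (the port numbers are assumptions based on the sample app listening on 8080):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: ClusterIP              # the default type, shown explicitly for clarity
  selector:
    app: hello-world           # matches the label added by kubectl create deployment
  ports:
  - port: 8080                 # port the Service listens on
    targetPort: 8080           # port the Pods' containers listen on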

Now we can access our application:

Run "kubectl get service example-service" to get our port number.

Then, we need to run the kubectl port-forward command. Because our Service type is ClusterIP, which can only be accessed from within the cluster, we must reach the application by forwarding the Service's port to a local port.

We could instead use other types, like LoadBalancer, which creates a load balancer in AWS or GCP; we could then access the app using the DNS address assigned to the load balancer together with our port number.

 
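A sketch of the port-forward command for our ClusterIP Service, assuming both the local port and the Service port are 8080:

$ kubectl port-forward service/example-service 8080:8080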

Now we can browse http://localhost:8080 from our workstation and we should see:

Hello Kubernetes!

Kubernetes Service NodePort Example YAML

This example YAML creates a Service that accepts external network requests. Because we specify a nodePort value, the Service is mapped to that port on every node in the cluster.


Here is an example of the YAML showing how to use a NodePort Service type in Kubernetes.

 
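A representative manifest might look like this (the names, labels, and port numbers are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: hello-world-nodeport
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 8080            # port exposed inside the cluster
    targetPort: 8080      # container port on the Pods
    nodePort: 30080       # static port opened on every node (must fall in 30000-32767 by default)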

What do ClusterIP, NodePort, and LoadBalancer mean?

The type property in the Service's spec determines how the Service is exposed to the network. The possible values are ClusterIP, NodePort, and LoadBalancer:

  • ClusterIP – The default value. The Service is only accessible from within the Kubernetes cluster.
  • NodePort – This makes the service accessible on a static port on each Node in the cluster.
  • LoadBalancer – The service becomes accessible externally through a cloud provider's load balancer functionality. GCP, AWS, Azure, and OpenStack offer this functionality.

 
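For completeness, exposing the same Deployment through a cloud load balancer only requires changing the type (a sketch; the Service name here is an assumption):

$ kubectl expose deployment hello-world --type=LoadBalancer --name=hello-world-lb
$ kubectl get service hello-world-lb      # an external IP or DNS name appears once the cloud provider provisions the LB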

PS: https://mp.weixin.qq.com/s/4LTSt0e_6VQu3RIlGJTumw
