Suppose you have migrated your services to Kubernetes and one of them is a legacy application that lists products in JSON format on a hardcoded port 8775, while the requirement declares that the standard port is 80. To address this problem, you can use the ambassador design pattern to expose access to the service on port 80.
First of all, create a ConfigMap definition that holds the HAProxy configuration under the haproxy.cfg key, and save it as my-service-ambassador-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-service-ambassador-config
data:
  haproxy.cfg: |-
    global
      daemon
      maxconn 256

    defaults
      mode http
      timeout connect 5000ms
      timeout client 50000ms
      timeout server 50000ms

    listen http-in
      bind *:80
      server server1 127.0.0.1:8775 maxconn 32
kubectl apply -f my-service-ambassador-config.yml
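You can optionally verify that the ConfigMap was created and that it contains the haproxy.cfg key:
kubectl describe configmap my-service-ambassador-config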
Next, create a pod definition called my-service.yml. As you can see, there is an ambassador container running the haproxy:1.7 image that proxies incoming traffic on port 80 to the legacy service on port 8775.
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
  - name: legacy-service
    image: legacy-service:1
  - name: haproxy-ambassador
    image: haproxy:1.7
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /usr/local/etc/haproxy
  volumes:
  - name: config-volume
    configMap:
      name: my-service-ambassador-config
kubectl apply -f my-service.yml
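Once the pod is up, kubectl get pod my-service should report 2/2 in the READY column, one for the legacy container and one for the HAProxy ambassador:
kubectl get pod my-service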
Let's test it by creating a busybox pod
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: myapp-container
    image: radial/busyboxplus:curl
    command: ['sh', '-c', 'while true; do sleep 3600; done']
kubectl apply -f busybox.yml
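Before running the test, you may want to wait until the busybox pod is ready, for example:
kubectl wait --for=condition=Ready pod/busybox --timeout=60s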
The command substitution $(kubectl get pod my-service -o=custom-columns=IP:.status.podIP --no-headers) is used to get the IP address of my-service.
kubectl exec busybox -- curl $(kubectl get pod my-service -o=custom-columns=IP:.status.podIP --no-headers):80
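If everything is wired up correctly, the response on port 80 should be the same JSON product list that the legacy service serves on its hardcoded port. Assuming the legacy service binds to all interfaces (not only to localhost inside the pod), you can compare by hitting port 8775 directly:
kubectl exec busybox -- curl $(kubectl get pod my-service -o=custom-columns=IP:.status.podIP --no-headers):8775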
For more about the ambassador pattern, please check out Design patterns for container-based distributed systems.