log sidecar log external IP? #36
Hi, any step forward on this? Note that an L4+ load balancer will always change the source IP, so you should see the LB's IP instead of the pod's. X-Forwarded-For isn't used for logging the source IP; you'd need to customize the log format to use the header value instead. If the AWS load balancer supports proxy protocol (I think it does), you can use it to propagate the client IP to haproxy, so configure it accordingly. Unfortunately I have no useful idea if you don't have an LB in front of your ingress terminating the client connection and opening a new one to haproxy. Maybe there is some sort of NAT on your node, but it's not clear to me how that could happen or what to suggest.
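To make the proxy-protocol suggestion concrete, here is a hedged sketch of what enabling it end to end might look like, assuming an AWS load balancer provisioned by the in-tree cloud provider and the haproxy-ingress `use-proxy-protocol` ConfigMap key (verify both names against your controller and Kubernetes versions):

```yaml
# Sketch only: enable proxy protocol on both sides of the hop.
# 1. Ask the AWS load balancer to send proxy protocol to all targets:
apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: haproxy-ingress   # hypothetical selector; match your deployment's labels
  ports:
    - name: https
      port: 443
      targetPort: 443
---
# 2. Tell haproxy-ingress to expect proxy protocol on inbound connections:
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
data:
  use-proxy-protocol: "true"
```

With both halves in place, haproxy learns the original client IP from the proxy protocol header and can log it, regardless of the SNAT the load balancer performs.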
Hi. For AWS I can confirm: I see the IP address of the load balancer, so you are right, I will need to log the X-Forwarded-For header. Apologies, I mistook the IP of the load balancer for the pod's (similar-looking subnets). However, in Hetzner Cloud the node is directly on the internet. We are running k3s clusters, so I did a bit of digging; it seems the k3s traffic manager (Klipper) is involved. What I see is: if I connect (from the internet) to https://myserver:32192, haproxy sets the correct X-Forwarded-For header, but if I connect to https://myserver:443, haproxy sets the pod IP as the header value. I don't know enough about the Kubernetes overlay network to explain what is happening here or how to see the real IP. Any suggestions?
I don't know k3s well enough to provide a proper suggestion, and in a quick test I can apparently see the source IP correctly. As a simple test you can try removing the haproxy-ingress service (the type: LoadBalancer one); Klipper uses this service to provision a pod that routes ports 80/443 to the controller pod. Then configure the controller to bind to the host network.
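A minimal sketch of that test, assuming the haproxy-ingress Helm chart's value names (check them against your chart version before applying):

```yaml
# values.yaml sketch (hypothetical key names; verify against the chart):
controller:
  hostNetwork: true                    # bind haproxy directly to the node's interfaces
  dnsPolicy: ClusterFirstWithHostNet   # keep in-cluster DNS resolution working with hostNetwork
```

Then delete the Klipper-managed service so no servicelb pod sits in front of the controller, e.g. `kubectl delete svc haproxy-ingress -n <namespace>` (service name and namespace depend on your install).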
The test as you suggested was successful: I removed the service and configured the controller to bind to the host network. But I'm not sure about running this as a final configuration. For one thing, I believe the ingress controller would then need to run on every node (fine for a small install, but a 50-node cluster doesn't need 50 haproxy instances), which is IMHO why a service is better; still, I can live with this setup (any other ports are blocked by firewalld anyway). The other "problem" is that the service will always come back if the Helm chart is reconfigured or upgraded, since there isn't an option to turn it off. Should I create an issue there to make this optional?
Hi, unfortunately I cannot help that much on why Klipper is changing your source IP; maybe a small reproducer with echoserver in the Rancher forums or issue tracker might help. Regarding the deployment, you can configure controller.service.type as ClusterIP, although a None type would be very welcome so the whole service manifest could be removed; I'll take care of this. Regarding hostNetwork, you can use a DaemonSet with a node selector and label the nodes where the controller should run, so you'll know which IPs to configure in your DNS, for example.
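The DaemonSet-plus-node-selector layout described above might be sketched like this in Helm values; the key names are assumptions based on common ingress chart conventions and should be checked against the actual chart:

```yaml
# Hypothetical values.yaml sketch (verify key names per chart version):
controller:
  kind: DaemonSet          # one controller pod per selected node
  hostNetwork: true        # haproxy binds to the node's own IP
  service:
    type: ClusterIP        # avoid provisioning a Klipper LoadBalancer pod
  nodeSelector:
    ingress-ready: "true"  # only nodes carrying this label run the controller
```

You would then label the chosen nodes, e.g. `kubectl label node my-node ingress-ready=true`, and point DNS at those nodes' IPs.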
Hi Guys
I've enabled sidecar logging
controller.logs.enabled: true
but in the request logs I see the IP address of the haproxy-ingress pod (10.42.3.133 in my case) and not the external IP. I run the ingress on two different types of clusters. I tried setting this configuration, but nothing changed:
Any ideas?
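For readers landing here with the same symptom: the configuration tried above is not shown, but the standard Kubernetes knob for preserving the client source IP through a Service is `externalTrafficPolicy: Local`, which skips the SNAT that the default `Cluster` policy performs (at the cost of only routing to nodes that host a controller pod). A hedged sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP; no cross-node SNAT hop
  selector:
    app: haproxy-ingress         # hypothetical selector; match your controller's labels
  ports:
    - name: https
      port: 443
      targetPort: 443
```

Whether this helps with k3s's Klipper load balancer specifically is exactly what the thread above is probing.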