
log sidecar log external IP? #36

Open
mogoman opened this issue Oct 18, 2021 · 5 comments
Labels: question (Further information is requested)

Comments

mogoman commented Oct 18, 2021

Hi Guys

I've enabled sidecar logging (controller.logs.enabled: true), but in the request logs I see the IP address of the haproxy-ingress pod (10.42.3.133 in my case) rather than the external client IP. I run the ingress on two different types of clusters:

  1. Behind AWS application load balancer (which sets X-Forwarded-For)
  2. On bare metal using a floating IP (Hetzner Cloud) with no load balancer

I tried setting this configuration, but nothing changed:

controller:
  config:
    forwardfor: ifmissing

Any ideas?

jcmoraisjr (Member) commented

Hi, any step forward on this? Note that an L4+ load balancer will always change the source IP, but you should then see the LB's IP rather than the pod's. x-forwarded-for isn't used for logging the source IP; you'd need to customize the log format to use the header value instead. If the AWS LB supports proxy protocol (I think it does), you can use it to propagate the client IP to haproxy, so configure use-proxy-protocol, doc here.
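
For example, mirroring the controller.config block from your first post (this is a global ConfigMap key, so it should be all that's needed on the haproxy side - the LB has to be configured to send proxy protocol too, otherwise every connection will fail):

controller:
  config:
    use-proxy-protocol: "true"  # haproxy now expects PROXY protocol on all incoming connections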

Unfortunately I have no useful idea if you don't have a LB in front of your ingress terminating the client connection and opening a new one to haproxy. Maybe you have some sort of NAT on your node, but it's not clear to me how that could happen or what to suggest.

jcmoraisjr added the question label Nov 2, 2021

mogoman (Author) commented Nov 5, 2021

Hi.

For AWS I can confirm I see the IP address of the load balancer, so you are right: I will need to log the X-Forwarded-For header. Apologies, I mistook the IP of the load balancer for the pod's (similar-looking subnets).
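
(If I read the docs right, something like the snippet below should surface the header in the request log: the config-frontend key takes a raw haproxy snippet, and with the httplog format captured request headers appear between braces in each log line.)

controller:
  config:
    config-frontend: |
      # capture X-Forwarded-For so it shows up in the per-request log line
      capture request header X-Forwarded-For len 48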

However, in Hetzner Cloud the node is directly on the internet. We are running k3s clusters, so I did a bit of digging: it seems that the k3s service load balancer, Klipper, actually manages node ports 80 and 443 using iptables rules, while haproxy is bound to higher ports. In my case the haproxy service's HTTPS port is on NodePort 32192.

What I see is: if I connect (from the internet) to https://myserver:32192, haproxy sets the correct X-Forwarded-For header, but if I connect to https://myserver:443, haproxy puts the pod IP in the header.

I don't know enough about the Kubernetes overlay network to explain what is happening here or how to recover the real IP. Any suggestions?

jcmoraisjr (Member) commented

I don't know k3s well enough to provide a proper suggestion, and in a quick test I could apparently see the source IP correctly. As a simple test you can remove the haproxy-ingress service (the type: LoadBalancer one) - Klipper uses this service to provision a pod that routes ports 80/443 to the controller pod - and configure the controller to bind to the host network: controller.hostNetwork in the Helm chart. This should either solve the problem or give you evidence that the problem is elsewhere in the topology. Note that on the host network the controller binds some ports you will not want to be public, so configure a proper security group if you choose to keep this as the final configuration.
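
In Helm values that is roughly (the LoadBalancer service itself is removed separately, e.g. with kubectl delete svc):

controller:
  hostNetwork: true  # haproxy binds :80/:443 (and the controller's internal ports) directly on the node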

mogoman (Author) commented Nov 10, 2021

The test you suggested was successful: I removed the service, changed controller.hostNetwork to true, and now see the correct IP address (at the very least correctly in X-Forwarded-For, which is fine).

But I'm not sure about running this as the final configuration. For one thing, I believe the ingress controller would then need to run on every node (fine for a small install, but a 50-node cluster doesn't need 50 haproxy instances), which is IMHO why a service is better. That said, I can live with this setup (any other ports are blocked by firewalld anyway).

The other "problem", however, is that the service always comes back when the Helm chart is reconfigured or upgraded - there isn't an option to turn it off. Should I create an issue there to make this optional?

jcmoraisjr (Member) commented

Hi, unfortunately I cannot help much with why Klipper is changing your source IP; a small reproducer with echoserver in the Rancher forums or issue tracker might help.

Regarding the deployment, you can configure controller.service.type as ClusterIP - although a type None that removes the whole service manifest would be very welcome. I'll take care of this.
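
i.e. something like:

controller:
  service:
    type: ClusterIP  # Klipper only provisions its svclb pods for type LoadBalancer, so this stops the 80/443 takeover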

Regarding hostNetwork, you can use a DaemonSet with a node selector and label the nodes the controller should run on, so you know which IPs to configure in your DNS, for example.
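
Roughly, assuming the chart's controller.kind toggle (the ingress-ready label is only an example - any label/selector pair works):

controller:
  kind: DaemonSet           # one controller pod per selected node
  hostNetwork: true
  nodeSelector:
    ingress-ready: "true"   # e.g. kubectl label node <node> ingress-ready=true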
