How do I configure Docker to work with my ens34 network interface (instead of eth0)?


Does anyone know how Docker decides which NIC will work with the docker0 network? I have a node with two interfaces (eth0 and ens34); however, only requests that go in through eth0 are forwarded to the container.

When the VM was provisioned and Docker was installed, I started with a silly test: I created a CentOS VM, installed netcat on it, and committed the image. Then I started a daemon container listening on port 8080. I used:

docker run -it -p 8080:8080 --name nc-server nc-server nc -vv -l 8080 
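
For reference, the nc-server image itself was produced along the lines below (a rough sketch of the "install netcat and commit" step described above; the exact commands are illustrative):

docker run -it --name nc-base centos /bin/bash   # start from the stock centos image
yum install -y nc                                # inside the container: install netcat
exit
docker commit nc-base nc-server                  # freeze the container as the nc-server image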

Then I tried to connect to the container listening on port 8080 from another node on the same network (on the same subnet as the ens34 interface). It did not work.

Whereas when I sent a request to the machine's IP address on eth0, I saw a reaction in the container (the communication worked). I was "tailing" its output with:

docker logs -ft nc-server 
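
For reference, the two client-side tests were essentially the following (a reconstruction; the IPs match the ifconfig output below):

nc -vv 10.1.21.18 8080    # via ens34: the container never sees the connection
nc -vv 9.32.145.99 8080   # via eth0: shows up in the container logs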

My conclusion from the experiment: there's some mysterious relationship between eth0 (the primary NIC) and docker0, and requests sent to the ens34 (10.*) interface are never forwarded to the veth / docker0 interfaces, while requests that come in through eth0 (9.*) are. Why is that?

Also, I know I can make it work if I use --net=host, but I don't want to use that... it doesn't feel right somehow. Is it standard practice to use host mode in Docker containers? Any caveats to that?

--

UPDATE: I managed to make it work after disabling iptables:

service iptables stop 

However, I still don't get what's going on. The info below should be relevant to understanding what's happening:

ifconfig

[root@mydockervm2 myuser]# ifconfig | grep -A 1 flags
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
--
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.21.18  netmask 255.255.255.0  broadcast 10.1.21.255
--
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 9.32.145.99  netmask 255.255.255.0  broadcast 9.32.148.255
--
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
--
veth8dbab2f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::3815:67ff:fe9b:88e9  prefixlen 64  scopeid 0x20<link>
--
virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255

netstat

[root@mydockervm2 myuser]# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         9.32.145.1      0.0.0.0         UG        0 0          0 eth0
9.32.145.0      0.0.0.0         255.255.255.0   U         0 0          0 eth0
10.1.21.0       0.0.0.0         255.255.255.0   U         0 0          0 ens34
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 ens34
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.122.0   0.0.0.0         255.255.255.0   U         0 0          0 virbr0

filters

[root@mydockervm2 myuser]# iptables -t filter -vS
-P INPUT ACCEPT -c 169 106311
-P FORWARD ACCEPT -c 0 0
-P OUTPUT ACCEPT -c 110 13426
-N DOCKER
-N DOCKER-ISOLATION
-A FORWARD -c 0 0 -j DOCKER-ISOLATION
-A FORWARD -o docker0 -c 0 0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -c 0 0 -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -c 0 0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -c 0 0 -j ACCEPT
-A FORWARD -m physdev --physdev-is-bridged -c 0 0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8080 -c 0 0 -j ACCEPT
-A DOCKER-ISOLATION -c 0 0 -j RETURN

nat

[root@mydockervm2 myuser]# iptables -t nat -vS
-P PREROUTING ACCEPT -c 28 4818
-P INPUT ACCEPT -c 28 4818
-P OUTPUT ACCEPT -c 8 572
-P POSTROUTING ACCEPT -c 8 572
-N DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -c 2 98 -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -c 0 0 -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -c 0 0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 8080 -c 0 0 -j MASQUERADE
-A DOCKER -i docker0 -c 0 0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -c 0 0 -j DNAT --to-destination 172.17.0.2:8080

Thoughts?

--

First, rule out the obvious and make sure hosts on other networks know how to route to your machine to reach the container network. For that, check

netstat -nr 

on the source host and make sure the Docker subnet is listed with your Docker host as the gateway, or that the default router handling the traffic upstream knows about your host.
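
If the route is missing on the source host, it can be added manually; a sketch, assuming the Docker host is reached via its 10.1.21.18 (ens34) address from above:

ip route add 172.17.0.0/16 via 10.1.21.18    # route the docker subnet through the docker host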

If the traffic is getting routed but then blocked, you're getting into forwarding and iptables. For forwarding, the following should show a 1:

cat /proc/sys/net/ipv4/ip_forward 
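
If it shows 0, forwarding can be turned on with standard sysctl commands (the second line makes the setting persistent across reboots):

sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf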

Make sure your local host shows a route for the bridges to the container networks with the same netstat command; there should be a line for the docker0 interface with the Docker subnet as the destination:

netstat -nr 
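
On the host above, that line is present (copied from the netstat output earlier):

172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0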

For iptables, check to see if there are any interface-specific NAT or filter rules that need to be adjusted:

iptables -t filter -vS
iptables -t nat -vS

If your FORWARD rule defaults to DROP instead of ACCEPT, you may want to add some logging, or change the default to ACCEPT if you believe the traffic can be trusted (e.g., the host is behind another firewall).
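
A sketch of both options (the LOG rule is temporary diagnostics and the prefix is arbitrary; only switch the policy to ACCEPT if the environment really is trusted):

iptables -I FORWARD -j LOG --log-prefix "fwd: "   # temporarily log every packet hitting FORWARD
iptables -P FORWARD ACCEPT                        # or: default-accept forwarded traffic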

That being said, advertising ports directly on the host is a common practice with containers. For the private stuff, you can set up multiple containers that are isolated on an internal network and can talk to each other, but to no other containers, and only expose the ports that are open to the rest of the world on the host with the -p flag to run (or the ports option in docker-compose).
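
A minimal sketch of that pattern (the network and image names here are illustrative):

docker network create private-net                                       # user-defined network, isolated from other bridges
docker run -d --net private-net --name db my-db-image                   # reachable only from containers on private-net
docker run -d --net private-net -p 8080:8080 --name web my-web-image    # only 8080 is published on the host

Note that -p can also bind to a single host address, e.g. -p 10.1.21.18:8080:8080 to publish only on the ens34 interface.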

