[lvs-users] Keepalived, LVS (dr) and ProFTPd

Mark Scholten mark at streamservice.nl
Sun Nov 18 02:43:29 GMT 2012


Hello,

LVS is relatively new to me; I have been using keepalived since early 2010
and now combine it with LVS using direct routing. For http/imap/smtp/pop3
this works perfectly. However, I can't get FTP to work (ProFTPd is the
server that matters for us).

I want to use direct routing where possible (I don't really like NAT).

In the data below, usernames and passwords have been replaced; replaced
content is shown between [ and ].

Our setup is like this:
Router1-----------------------Router2
                           |
                           |
LB01-------------------------------LB02
                           |
                           |
FTP1-------------


Router1/Router2: running keepalived so we can keep internal traffic
internal. These are also firewalls and only contain static routes.
LB01/LB02: keepalived+LVS
FTP1: a single system running ProFTPd; we may add another FTP server in
the future.

In this email I will provide information regarding LB01/LB02.

OS: Debian Wheezy (64bit)
Keepalived version: 1.2.2
Ipvsadm version: ipvsadm v1.26 2008/5/15 (compiled with popt and IPVS
v1.2.1)

As you can see in our keepalived.conf below, we also run some other
services on this FTP server, and we have separate caching nodes for http.
Soon I will add https to the caching nodes and want to support that as
well. But first things first: getting FTP to work (active and passive if
possible).
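From what I have read, the ip_vs_ftp helper only rewrites the FTP payload
for NAT, so with direct routing the director has no way to learn the
passive data port from the PASV reply. My plan is therefore to pin ProFTPd
to a fixed passive port range that I can also tell LVS about. A sketch
(untested here; the 50000-50100 range is just an example I picked):

```
# /etc/proftpd/proftpd.conf (sketch)
# Restrict passive-mode data connections to a known range so the
# director can be configured to forward those ports as well.
PassivePorts 50000 50100
```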

I tried FTP in both active and passive mode (FileZilla is the FTP client
used). The client can connect and log in, but the data connection fails
afterwards; the details are below:
Response:	220 ProFTPD 1.3.3d Server ready.
Command:	USER [username]
Response:	331 Password required for [username]
Command:	PASS [password]
Response:	230 User [username] logged in
Command:	OPTS UTF8 ON
Response:	200 UTF8 set to on
Status:	Connected
Status:	Retrieving directory listing...
Command:	PWD
Response:	257 "/" is the current directory
Command:	TYPE I
Response:	200 Type set to I
Command:	PASV
Response:	227 Entering Passive Mode (91,199,167,39,138,90).
Command:	MLSD
Error:	Connection timed out
Error:	Failed to retrieve directory listing
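If I decode that PASV reply, the server is telling the client to open a
data connection to the VIP on port 138*256+90 = 35418, and no virtual
service forwards that port, which would explain the timeout (and, as I
understand it, ip_vs_ftp cannot fix this up in DR mode since it only works
for NAT). A quick sketch of the decoding, just to illustrate:

```python
import re

def parse_pasv(reply):
    """Return the (ip, port) advertised in a 227 PASV response.
    The six numbers are (h1,h2,h3,h4,p1,p2): IP h1.h2.h3.h4,
    port p1*256 + p2."""
    nums = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not nums:
        raise ValueError("not a PASV reply")
    h1, h2, h3, h4, p1, p2 = map(int, nums.groups())
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

print(parse_pasv("227 Entering Passive Mode (91,199,167,39,138,90)."))
# -> ('91.199.167.39', 35418)
```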

root at LB01:~# lsmod | grep ip_vs
ip_vs_wrr              12613  0
ip_vs_wlc              12437  0
ip_vs_sh               12642  0
ip_vs_sed              12437  0
ip_vs_rr               12521  22
ip_vs_nq               12435  0
ip_vs_lc               12435  0
ip_vs_lblcr            13046  0
ip_vs_lblc             12894  0
ip_vs_ftp              12783  0
nf_nat                 18242  3 ip_vs_ftp,iptable_nat,ipt_MASQUERADE
ip_vs_dh               12642  0
ip_vs                 103967  44 ip_vs_dh,ip_vs_ftp,ip_vs_lblc,ip_vs_lblcr,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sed,ip_vs_sh,ip_vs_wlc,ip_vs_wrr
nf_conntrack           52720  5 ip_vs,nf_conntrack_ipv4,nf_nat,iptable_nat,ipt_MASQUERADE
libcrc32c              12426  1 ip_vs

root at LB01:~# ipvsadm -l -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  91.199.167.39:20 rr persistent 300
  -> 91.199.167.36:20             Route   100    0          0
TCP  91.199.167.39:21 rr persistent 300
  -> 91.199.167.36:21             Route   100    0          0
TCP  91.199.167.39:25 rr
  -> 91.199.167.36:25             Route   100    0          0
TCP  91.199.167.39:80 rr persistent 3600
  -> 91.199.167.37:80             Route   100    0          147
  -> 91.199.167.38:80             Route   100    3          120
TCP  91.199.167.39:110 rr
  -> 91.199.167.36:110            Route   100    0          0
TCP  91.199.167.39:143 rr
  -> 91.199.167.36:143            Route   100    1          0
TCP  91.199.167.39:587 rr
  -> 91.199.167.36:25             Route   100    0          0
TCP  91.199.167.39:993 rr
  -> 91.199.167.36:993            Route   100    0          0
TCP  91.199.167.39:995 rr
  -> 91.199.167.36:995            Route   100    0          0
TCP  91.199.167.39:2222 rr
  -> 91.199.167.36:2222           Route   100    0          0
TCP  91.199.167.39:10587 rr
  -> 91.199.167.36:25             Route   100    0          0
TCP  91.199.167.40:80 rr persistent 3600
  -> 91.199.167.37:80             Route   100    0          0
  -> 91.199.167.38:80             Route   100    0          13
TCP  213.189.17.39:21 rr
  -> 91.199.167.36:21             Route   100    0          0
TCP  213.189.17.39:25 rr
  -> 91.199.167.36:25             Route   100    0          0
TCP  213.189.17.39:80 rr persistent 3600
  -> 91.199.167.37:80             Route   100    0          0
  -> 91.199.167.38:80             Route   100    0          0
TCP  213.189.17.39:110 rr
  -> 91.199.167.36:110            Route   100    0          0
TCP  213.189.17.39:143 rr
  -> 91.199.167.36:143            Route   100    6          0
TCP  213.189.17.39:587 rr
  -> 91.199.167.36:25             Route   100    0          0
TCP  213.189.17.39:993 rr
  -> 91.199.167.36:993            Route   100    2          0
TCP  213.189.17.39:995 rr
  -> 91.199.167.36:995            Route   100    0          0
TCP  213.189.17.39:2222 rr
  -> 91.199.167.36:2222           Route   100    0          0
TCP  213.189.17.39:10587 rr
  -> 91.199.167.36:25             Route   100    0          0

Below you can find my /etc/keepalived/keepalived.conf.
global_defs {
        lvs_id LVS_CLUSTER_01
}

vrrp_sync_group VG1 {
        group {
                CL01_VI_WAN
                CL01_VI_VAR
                CL01_VI_LAN
        }
}

# Cluster IPs WAN
vrrp_instance CL01_VI_WAN {
        state MASTER
        interface eth0
        virtual_router_id 101
        priority 100
        advert_int 1
        authentication {
                auth_type PASS
                auth_pass [password]
        }
        virtual_ipaddress {
                91.199.167.39
                91.199.167.40
                213.189.17.39
        }
}

# Cluster IPv6 IPs WAN
vrrp_instance CL01_VI_WAN6 {
        state MASTER
        interface eth0
        virtual_router_id 111
        priority 100
        advert_int 1
        authentication {
                auth_type PASS
                auth_pass [password]
        }
        virtual_ipaddress {
                2a00:0f10:0111:1::80:100/48 dev eth0
        }
}

# Cluster IP LAN
vrrp_instance CL01_VI_LAN {
        state MASTER
        interface eth1
        virtual_router_id 102
        priority 100
        advert_int 1
        authentication {
                auth_type PASS
                auth_pass [password]
        }
        virtual_ipaddress {
                10.125.0.1
        }
}

# Cluster IP Varnish
vrrp_instance CL01_VI_VAR {
        state MASTER
        interface eth2
        virtual_router_id 103
        priority 100
        advert_int 1
        authentication {
                auth_type PASS
                auth_pass [password]
        }
        virtual_ipaddress {
                10.125.2.1
                10.125.2.5
        }
}

virtual_server 91.199.167.39 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        persistence_timeout 3600
        protocol TCP

#       real_server 10.125.2.11 80 {
        real_server 91.199.167.37 80 {
                weight 100
                TCP_CHECK {
                        connect_timeout 3
                        connect_port 80
                }
        }
        real_server 91.199.167.38 80 {
                weight 100
                TCP_CHECK {
                        connect_timeout 3
                        connect_port 80
                }
        }
}

virtual_server 91.199.167.40 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        persistence_timeout 3600
        protocol TCP

#       real_server 10.125.2.11 80 {
        real_server 91.199.167.37 80 {
                weight 100
                TCP_CHECK {
                        connect_timeout 3
                        connect_port 80
                }
        }
        real_server 91.199.167.38 80 {
                weight 100
                TCP_CHECK {
                        connect_timeout 3
                        connect_port 80
                }
        }
}

virtual_server 213.189.17.39 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        persistence_timeout 3600
        protocol TCP

#       real_server 10.125.2.11 80 {
        real_server 91.199.167.37 80 {
                weight 100
                TCP_CHECK {
                        connect_timeout 3
                        connect_port 80
                }
        }
        real_server 91.199.167.38 80 {
                weight 100
                TCP_CHECK {
                        connect_timeout 3
                        connect_port 80
                }
        }
}

virtual_server 91.199.167.39 20 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP
        persistence_timeout 300

#       real_server 10.125.0.200 21 {
        real_server 91.199.167.36 20 {
                weight 100
        }
}

virtual_server 91.199.167.39 21 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP
        persistence_timeout 300

#       real_server 10.125.0.200 21 {
        real_server 91.199.167.36 21 {
                weight 100
        }
}
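One approach I am considering for the data connections (a sketch only, not
tested; the 50000:50100 range assumes ProFTPd is restricted to those
passive ports): mark the control port and the passive range with iptables
on the directors and balance the mark with a single persistent fwmark
virtual server, so control and data connections from one client land on
the same real server:

```
# On LB01/LB02: give FTP control and the passive range the same mark
iptables -t mangle -A PREROUTING -d 91.199.167.39 -p tcp --dport 21 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -d 91.199.167.39 -p tcp --dport 50000:50100 -j MARK --set-mark 1

# keepalived.conf: balance the mark instead of the individual ports
virtual_server fwmark 1 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        persistence_timeout 300
        protocol TCP

        real_server 91.199.167.36 0 {
                weight 100
                TCP_CHECK {
                        connect_timeout 3
                        connect_port 21
                }
        }
}
```

With a single real server this mostly matters for when we add a second FTP
server, but it would also give the passive ports a forwarding rule at all,
which they currently lack.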

virtual_server 91.199.167.39 25 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 25 {
        real_server 91.199.167.36 25 {
                weight 100
        }
}

virtual_server 91.199.167.39 587 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 25 {
        real_server 91.199.167.36 25 {
                weight 100
        }
}

virtual_server 91.199.167.39 10587 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 25 {
        real_server 91.199.167.36 25 {
                weight 100
        }
}

virtual_server 91.199.167.39 110 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 110 {
        real_server 91.199.167.36 110 {
                weight 100
        }
}

virtual_server 91.199.167.39 143 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 143 {
        real_server 91.199.167.36 143 {
                weight 100
        }
}

virtual_server 91.199.167.39 993 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP
#       real_server 10.125.0.200 993 {
        real_server 91.199.167.36 993 {
                weight 100
        }
}

virtual_server 91.199.167.39 995 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 995 {
        real_server 91.199.167.36 995 {
                weight 100
        }
}

virtual_server 91.199.167.39 2222 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP
#       real_server 10.125.0.200 2222 {
        real_server 91.199.167.36 2222 {
                weight 100
        }
}

virtual_server 213.189.17.39 21 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 21 {
        real_server 91.199.167.36 21 {
                weight 100
        }
}

virtual_server 213.189.17.39 25 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 25 {
        real_server 91.199.167.36 25 {
                weight 100
        }
}

virtual_server 213.189.17.39 587 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 25 {
        real_server 91.199.167.36 25 {
                weight 100
        }
}

virtual_server 213.189.17.39 10587 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 25 {
        real_server 91.199.167.36 25 {
                weight 100
        }
}

virtual_server 213.189.17.39 110 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 110 {
        real_server 91.199.167.36 110 {
                weight 100
        }
}

virtual_server 213.189.17.39 143 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 143 {
        real_server 91.199.167.36 143 {
                weight 100
        }
}

virtual_server 213.189.17.39 993 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP
#       real_server 10.125.0.200 993 {
        real_server 91.199.167.36 993 {
                weight 100
        }
}

virtual_server 213.189.17.39 995 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP

#       real_server 10.125.0.200 995 {
        real_server 91.199.167.36 995 {
                weight 100
        }
}

virtual_server 213.189.17.39 2222 {
        delay_loop 6
        lb_algo rr
#       lb_kind NAT
        lb_kind DR
#       nat_mask 255.255.255.0
        protocol TCP
#       real_server 10.125.0.200 2222 {
        real_server 91.199.167.36 2222 {
                weight 100
        }
}

You can see an IPv6 address in the file; I will soon look into making
everything work over IPv6 as well.

Kind regards,

Mark Scholten
