[lvs-users] using ipvsadm, but there is no output of real server

hanool para lvs.para at gmail.com
Sat Nov 17 02:17:40 GMT 2012


Hi all,
I am a newbie to LVS and keepalived, and I ran into a problem while
building an LVS + keepalived environment.

After installing LVS and keepalived, I checked that both are installed
correctly:
[root at p-lvs-1 ~]# ipvsadm -V
Try `ipvsadm -h' or 'ipvsadm --help' for more information.
[root at p-lvs-1 ~]# ipvsadm -v
ipvsadm v1.25 2008/5/15 (compiled with popt and IPVS v1.2.1)
[root at p-lvs-1 ~]# keepalived -v
Keepalived v1.2.2 (11/16,2012)
[root at p-lvs-1 ~]# lsmod | grep ip_vs
ip_vs_wlc               1241  0
ip_vs_rr                1420  1
ip_vs                 115490  5 ip_vs_wlc,ip_vs_rr
libcrc32c               1246  1 ip_vs
ipv6                  322541  32
ip_vs,ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6

So I believe ipvsadm and keepalived are installed correctly. Is that right?

The following is the keepalived.conf of the MASTER node:

[root at p-lvs-1 ~]# cat /etc/keepalived/keepalived.conf
   global_defs {
      lvs_id LVS_DEVEL_2
    }
    vrrp_sync_group VGM {
       group {
           VI_GW
        }
   }

   vrrp_instance VI_GW {
       state BACKUP
       interface eth0
       lvs_sync_daemon_interface eth0
       virtual_router_id 216
       priority 90
       advert_int 5
       authentication {
           auth_type PASS
           auth_pass 1111
       }
       virtual_ipaddress {
           192.168.75.253 dev eth0 label eth0:0
       }
   }

   virtual_server 192.168.75.253 80 {
      delay_loop 6
      lb_algo rr
      lb_kind DR
      persistence_timeout 10
      protocol TCP

       real_server 192.168.75.142 80 {
           weight 100
           TCP_CHECK {
               connect_timeout 5
               nb_get_retry 3
               delay_before_retry 3
               connect_port 80
           }
       }
   }

After running "service keepalived start", I found the following:

[root at p-lvs-1 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.75.253:http rr persistent 10

No real server IP is listed here, which confuses me. I think that when
using keepalived I should not have to add the VIP or the real servers
manually. Is that right?

And I got the following trace in the log:

Nov 17 09:40:43 p-lvs-1 Keepalived[27838]: Starting Keepalived v1.2.4
(11/15,2012)
Nov 17 09:40:43 p-lvs-1 Keepalived[27839]: Starting Healthcheck child
process, pid=27840
Nov 17 09:40:43 p-lvs-1 Keepalived[27839]: Starting VRRP child process,
pid=27841
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Interface queue is empty
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: No such interface, virbr0
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: No such interface,
virbr0-nic
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Netlink reflector reports
IP 192.168.75.135 added
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Netlink reflector reports
IP 192.168.122.1 added
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Netlink reflector reports
IP fe80::44d5:d7ff:fed8:ba5a added
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Registering Kernel netlink
reflector
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Registering Kernel netlink
command channel
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Registering gratuitous ARP
shared channel
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Interface queue
is empty
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: No such
interface, virbr0
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: No such
interface, virbr0-nic
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Netlink reflector
reports IP 192.168.75.135 added
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Netlink reflector
reports IP 192.168.122.1 added
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Netlink reflector
reports IP fe80::44d5:d7ff:fed8:ba5a added
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Registering
Kernel netlink reflector
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Registering
Kernel netlink command channel
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Opening file
'/etc/keepalived/keepalived.conf'.
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Configuration is using :
63907 Bytes
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: Using LinkWatch kernel
netlink reflector...
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: VRRP_Instance(VI_GW)
Entering BACKUP STATE
Nov 17 09:40:43 p-lvs-1 Keepalived_vrrp[27841]: VRRP sockpool: [ifindex(2),
proto(112), fd(10,11)]
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Opening file
'/etc/keepalived/keepalived.conf'.
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Configuration is
using : 9995 Bytes
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Using LinkWatch
kernel netlink reflector...
Nov 17 09:40:43 p-lvs-1 Keepalived_healthcheckers[27840]: Activating
healthchecker for service [192.168.75.142]:80
Nov 17 09:40:44 p-lvs-1 Keepalived_healthcheckers[27840]: TCP connection to
[192.168.75.142]:80 failed !!!
Nov 17 09:40:44 p-lvs-1 Keepalived_healthcheckers[27840]: Removing service
[192.168.75.142]:80 from VS [192.168.75.253]:80
Nov 17 09:40:44 p-lvs-1 Keepalived_healthcheckers[27840]: Lost quorum 1-0=1
> 0 for VS [192.168.75.253]:80
Nov 17 09:40:47 p-lvs-1 Keepalived_vrrp[27841]: ip address associated with
VRID not present in received packet : 192.168.75.253
Nov 17 09:40:47 p-lvs-1 Keepalived_vrrp[27841]: one or more VIP associated
with VRID mismatch actual MASTER advert
Nov 17 09:40:47 p-lvs-1 Keepalived_vrrp[27841]: bogus VRRP packet received
on eth0 !!!
Nov 17 09:40:47 p-lvs-1 Keepalived_vrrp[27841]: VRRP_Instance(VI_GW)
ignoring received advertisment...
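If I understand the log right, the "Lost quorum 1-0=1 > 0" line is
keepalived comparing quorum (default 1) minus hysteresis (default 0)
against the summed weight of the real servers still passing their health
checks, which dropped to 0 once 192.168.75.142 was removed. A rough sketch
of that logic in Python (my approximation, not keepalived's actual code):

```python
def quorum_lost(quorum, hysteresis, alive_weights):
    """Quorum is lost when quorum - hysteresis exceeds the summed
    weight of the real servers still passing their checks."""
    return quorum - hysteresis > sum(alive_weights)

# My only real server (weight 100) failed its TCP_CHECK, so:
print(quorum_lost(1, 0, []))     # True, matching "Lost quorum 1-0=1 > 0"
print(quorum_lost(1, 0, [100]))  # False: quorum kept if the check passed
```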

I don't know what caused the "TCP connection to [192.168.75.142]:80 failed
!!!" message.
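As far as I understand, TCP_CHECK simply tries to complete a TCP handshake
to the real server's port within connect_timeout, roughly like this Python
sketch (my approximation, not keepalived's code):

```python
import socket

def tcp_check(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within
    'timeout' seconds, roughly what TCP_CHECK with connect_timeout 5 does."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The log shows this connect is failing from the director, i.e.
# tcp_check("192.168.75.142", 80) would return False there.
```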

On the real server 192.168.75.142, I configured the following:

 ifconfig lo:0  192.168.75.253  broadcast 192.168.75.253 netmask
255.255.255.255 up
 route add -host  192.168.75.253  dev lo:0
 echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
 echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
 echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
 echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
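For completeness, the same ARP settings in /etc/sysctl.conf form (I only
set them with echo for now, this is just the persistent equivalent):

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```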

And checked that the settings are applied:

[root at p-rs-1 home]# ifconfig
eth0      Link encap:Ethernet  HWaddr 2E:F0:45:48:F1:C6
          inet addr:192.168.75.142  Bcast:192.168.75.255  Mask:255.255.255.0
          inet6 addr: fe80::2cf0:45ff:fe48:f1c6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:142355 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17953 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8113255 (7.7 MiB)  TX bytes:1209010 (1.1 MiB)
          Interrupt:10 Base address:0xe000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:68 errors:0 dropped:0 overruns:0 frame:0
          TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4782 (4.6 KiB)  TX bytes:4782 (4.6 KiB)

lo:0      Link encap:Local Loopback
          inet addr:192.168.75.253  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1

virbr0    Link encap:Ethernet  HWaddr 52:54:00:D2:7A:6C
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:509 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:26642 (26.0 KiB)

[root at p-rs-1 home]# cat /proc/sys/net/ipv4/conf/lo/arp_ignore
1
[root at p-rs-1 home]# cat /proc/sys/net/ipv4/conf/lo/arp_announce
2
[root at p-rs-1 home]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
1
[root at p-rs-1 home]# cat /proc/sys/net/ipv4/conf/all/arp_announce
2

[root at p-rs-1 home]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use
Iface
192.168.75.253  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.75.0    0.0.0.0         255.255.255.0   U     1      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0
virbr0
0.0.0.0         192.168.75.1    0.0.0.0         UG    0      0        0 eth0

Are there any problems with the real server settings?
I think the LVS node cannot connect to the real server, but I don't know
what caused that, and I have no idea how to track it down.
I would be very appreciative of any advice you can share.

cheers,
para
