[lvs-users] LVS working, just not load balancing.

Ray W. Johnson raywjohnson at gmail.com
Tue Jun 14 05:25:24 BST 2011


Greetings All,

I have a working setup of one director and two real servers. All servers 
are running and the pulse daemon starts fine. I can HTTP, FTP, and SSH to 
all servers. The "virtual server" sits behind a router/firewall that 
maps the real/public IPs to the local ones.

I suspect I have something backwards somewhere. Here is the setup/config.

D01 (director): http://208.90.226.39/
(no backup yet)
P01 (real # 1): http://208.90.226.30/
P02 (real # 2): http://208.90.226.31/

------------------------------------------------------------------------
Topology:

             ----------
            | internet |
            |          |
             ----------
                 |
                 |
             ----------
            |  router  | 208.90.226.39 -> 198.168.10.39 (director)
            |          |
            |          | 208.90.226.30 -> 198.168.10.30 (real server #1)
            |          | 208.90.226.31 -> 198.168.10.31 (real server #2)
             ----------
                 |
                 |
             ----------
            | director | -VIP:198.168.11.39
            |          | -RIP:198.168.10.39 (eth0)
             ----------
                 |
                 |
                 |
     --------------------------
     |                        |
     |                        |
   ----------          ----------
  | real     |        | real     |
   ----------          ----------
  192.168.10.30         192.168.10.31
  RIP/eth0              RIP/eth0


BOTH REAL SERVERS: (VIP:198.168.11.39)

   ip route add 198.168.11.39 dev eth0

   arptables -A IN -d 198.168.11.39 -j DROP
   arptables -A OUT -s 198.168.11.39 -j mangle --mangle-ip-s 198.168.10.39

   service arptables_jf restart
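
For reference, the per-real-server steps above collected into one script. This is only a sketch restating the commands as given; the interface name eth0 and the 198.168.* addresses are copied verbatim from the setup above, and whether "service arptables_jf save" is available depends on the distribution's init script:

```shell
#!/bin/sh
# Direct-routing real-server setup (sketch, addresses taken from the post).
VIP=198.168.11.39          # virtual IP advertised by the director
DIP=198.168.10.39          # director's IP, used as the mangled ARP source

# Accept traffic addressed to the VIP locally on this real server.
ip route add ${VIP} dev eth0

# Never answer ARP for the VIP, and rewrite the source of our outgoing
# ARP so the router keeps mapping the VIP to the director, not to us.
arptables -A IN  -d ${VIP} -j DROP
arptables -A OUT -s ${VIP} -j mangle --mangle-ip-s ${DIP}

# Persist and (re)load the rules via the Red Hat arptables_jf service.
service arptables_jf save
service arptables_jf restart
```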

service pulse start
   pulse[19058]: STARTING PULSE AS MASTER
   pulse[19058]: partner dead: activating lvs
   avahi-daemon[3167]: Registering new address record for 198.168.11.39 on eth0.
   lvs[19068]: starting virtual service Director-80 active: 80
   kernel: IPVS: [wlc] scheduler registered.
   lvs[19068]: create_monitor for Director-80/Proxy-1 running as pid 19084
   lvs[19068]: create_monitor for Director-80/Proxy-2 running as pid 19085
   nanny[19085]: starting LVS client monitor for 198.168.11.39:80 -> 192.168.10.31:80
   nanny[19084]: starting LVS client monitor for 198.168.11.39:80 -> 192.168.10.30:80
   nanny[19084]: [ active ] making 192.168.10.30:80 available
   nanny[19085]: [ active ] making 192.168.10.31:80 available
   pulse[19070]: gratuitous lvs arps finished

ipvsadm -ln
   IP Virtual Server version 1.2.1 (size=4096)
   Prot LocalAddress:Port Scheduler Flags
     -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
   TCP  198.168.11.39:80 wlc persistent 14400
     -> 192.168.10.30:80             Route   1      0          0
     -> 192.168.10.31:80             Route   1      0          0
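
While testing, ipvsadm's counters and connection table show whether requests are actually reaching the virtual service at all (a diagnostic sketch; run on the director as root):

```shell
# Per-service / per-real-server packet and byte counters, refreshed
# every second -- if these stay at zero, traffic never hits IPVS.
watch -n1 ipvsadm -L -n --stats

# Current connection entries, including persistence templates, which
# show which real server each client address is pinned to.
ipvsadm -L -n -c
```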

BUT... no load balancing.

Surf to http://208.90.226.39/index.html and you always get "D01".
It should alternate between "P01" and "P02", right??

( http://208.90.226.30/index.html == "P01" / 
http://208.90.226.31/index.html == "P02" )

--RayJ

PS: /etc/sysconfig/ha/lvs.cf

serial_no = 302
primary = 192.168.10.39
service = lvs
backup = 0.0.0.0
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
virtual Director-80 {
      active = 1
      address = 198.168.11.39 eth0:1
      vip_nmask = 255.255.255.0
      port = 80
      persistent = 14400
      pmask = 255.255.255.255
      send = "GET / HTTP/1.1\r\n\r\n"
      expect = "HTTP"
      use_regex = 0
      load_monitor = none
      scheduler = wlc
      protocol = tcp
      timeout = 30
      reentry = 15
      quiesce_server = 1
      server Proxy-1 {
          address = 192.168.10.30
          active = 1
          weight = 1
      }
      server Proxy-2 {
          address = 192.168.10.31
          active = 1
          weight = 1
      }
}
