[lvs-users] Possibly beating a dead horse - IPVS-NAT - 1 Director, 2 Realservers

Memblin memblin at discursive.org
Sat May 31 15:14:01 BST 2008

----- Original Message ----- 
From: "Thomas Pedoussaut" <thomas at pedoussaut.com>
To: "LinuxVirtualServer.org users mailing list." 
<lvs-users at linuxvirtualserver.org>
Sent: Saturday, May 31, 2008 3:00 AM
Subject: Re: [lvs-users] Possibly beating a dead horse - IPVS-NAT - 1 
Director, 2 Realservers

> Memblin wrote:
>>  I can get any TCP application to work that I want
>> but what we're looking for is a way to load balance
>> traffic to our DNS resolvers / caching servers.
>>  I've seen all over the list where people have had problems
>> with DNS and other UDP based applications and LVS.
>>  The only problem I have is that all DNS queries that
>> come in always go to the same real server. I do get an
>> answer back at the client but it's not load balanced of course.
> Just a few questions:
> - what's your LB method ? RR or WRR are the only ones working for UDP (no
> sessions)

I've tried RR and WRR, and mixing up the weights.

> - Can you give us the output of an ipvsadm -ln to check that you
> effectively have both weights > 0
> (and is 1 realserver located on the same host as the director ?)

bal01:/home/crow# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
UDP  <Public VIP>:53 rr
  ->               Masq    1      0          0
  ->               Masq    1      0          0
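One thing worth checking here: even though UDP has no sessions, ipvs still keeps a per-flow entry keyed on client IP and source port, and re-uses it until the UDP timeout expires. A resolver that always queries from the same source port will therefore stick to one realserver. A minimal sketch of how to inspect and shorten that timeout with ipvsadm (values here are illustrative, not a recommendation):

```shell
# Show the current connection-table timeouts (tcp / tcpfin / udp).
ipvsadm -L --timeout

# Shorten the UDP timeout (here to 1 second) so a client that reuses
# the same source port gets re-scheduled quickly.  The three arguments
# are tcp, tcpfin and udp; a 0 leaves that value unchanged.
ipvsadm --set 0 0 1

# Watch the connection table to see which realserver each
# client:port flow is currently pinned to.
ipvsadm -Lcn
```

Later ipvsadm/kernel versions also offer one-packet scheduling (`--ops` on the virtual service), which schedules every UDP datagram independently; whether that is available depends on your kernel.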

The director and both realservers are separate hardware:

 Director:
  eth1 - <Public VIP> -> Internet
  eth0 - -> Switch1

 Realserver 1:
  eth0 - -> Switch1

 Realserver 2:
  eth0 - -> Switch1

The director can ping both realservers; the realservers can both
ping the director and reach the world through it.

> - Are you performing the tests from different clients on different
> networks ? There is server affinity in ipvs to "stick" clients to
> servers. In the same vein, can you check that there is effectively NO
> traffic to the second resolver.

 The clients I am testing with are a machine in a separate VLAN from
the VIP, so it goes through a router to reach <Public VIP>, and another
one from out of state.

 Both tcpdump and dnstop show the same thing when I fire up the
performance test or run a manual 'dig @<Public VIP> <domain>'.

 If I remove the realserver that is currently getting all the
traffic (say number 1) from the ipvs config, the traffic does go to the
other realserver (say number 2). Then, when I put number 1 back into the
ipvs configuration, the traffic stays with number 2.  If I do nslookups
from a Windows box, it looks like Windows actually sends something like
8 queries per single request, and those get spread out between both
servers normally.
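The Windows behaviour above is consistent with ipvs keying each UDP flow on client IP plus source port: every new source port counts as a new "connection" and gets scheduled independently. A hedged way to confirm this from a Unix client is to force a different source port per query with dig's -b option (substitute your real VIP and a real name; the ports below are arbitrary):

```shell
# Send five queries, each from a different source port.  If scheduling
# is per-flow, the flows should spread across both realservers
# (visible in `ipvsadm -Lcn` or tcpdump on the realservers).
VIP="<Public VIP>"   # substitute the actual virtual IP
for port in 20001 20002 20003 20004 20005; do
    dig -b 0.0.0.0#${port} @"${VIP}" example.com +short
done
```

If all five queries still land on the same realserver with distinct source ports, the stickiness is coming from somewhere other than the per-flow UDP entry (e.g. persistence configured on the service).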

> PS: I know some ISP use anycast for deploying farms of DNS resolvers

 Some of our networking folks are rather anycast-resistant, which is
why I was looking for a way to load balance this with a server of sorts. I
think I may go back and tell them they're going to have to do the anycast
for us.
