Graeme Fowler graeme at
Mon Aug 8 21:35:12 BST 2005


On Mon, 2005-08-08 at 12:36 -0700, Joseph Mack NA3T wrote:
> just wanted to make sure. The situation isn't clear AFAIK 
> either. It's not like it comes up a lot and we've got it 
> down pat. Horms probably has the clearest point on the 
> matter which is not to have a separate SSL engine but to 
> have each realserver do its own decrypting/encrypting.

I've built a similar system at work (can't go into too much detail about
certain parts of it, sadly) but the essence is as follows:

director + failover director (keepalived/LVS)
squid 1 ... squid 2 ... squid N
realserver 1 ... realserver 2 ... realserver N
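
The director pair above is the standard keepalived VRRP arrangement; a
minimal sketch of the relevant keepalived.conf stanza follows, with every
address, interface name and password being hypothetical placeholders:

    vrrp_instance VI_WEB {
        state MASTER              # BACKUP on the failover director
        interface eth0
        virtual_router_id 51
        priority 100              # set lower on the failover box
        authentication {
            auth_type PASS
            auth_pass secret
        }
        virtual_ipaddress {
            192.0.2.10            # the VIP the Squids are balanced behind
        }
    }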

The Squids are acting in reverse proxy mode and I terminate the SSL
connections on them via the frontend LVS, so they're all load balanced.
Behind the scenes it's a bit more complicated: certain vhosts are only
present on certain groups of servers within the "cluster", and the Squids
aren't necessarily aware of where those might be, so they use a
custom-written redirector to do lookups against an appropriate directory
and redirect their requests accordingly.
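
For anyone who hasn't written one: Squid's classic redirector protocol is
just line-oriented stdin/stdout ("URL client/fqdn ident method" in, a
rewritten URL out). A minimal sketch of such a redirector is below; the
vhost-to-backend table is a hypothetical stand-in for the directory lookup
described above, and all hostnames are made up.

```python
#!/usr/bin/env python
# Sketch of a Squid redirector mapping vhosts to backend server groups.
import sys
from urllib.parse import urlsplit

# Hypothetical mapping; a real deployment would query a directory service.
VHOST_BACKENDS = {
    "www.example.com": "web-group-a.internal",
    "shop.example.com": "web-group-b.internal",
}

def rewrite(url):
    """Return the rewritten URL, or the original if the vhost is unknown."""
    parts = urlsplit(url)
    backend = VHOST_BACKENDS.get(parts.hostname)
    if backend is None:
        return url
    # Swap only the host part, keeping scheme, path and query intact.
    return url.replace(parts.netloc, backend, 1)

def main():
    # One request per line on stdin, one rewritten URL per line on stdout;
    # Squid blocks on the answer, so flush after every line.
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sys.stdout.write(rewrite(fields[0]) + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

You'd point Squid at it with redirect_program (or url_rewrite_program on
later versions) and bump the number of redirector children to taste.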

In a position where you have a 1:1 map of squids to realservers you could
in theory park a single server behind a single squid, but that doesn't
give you much scalability. It does, however, mean that if a webserver
fails, the failure cascades up into the cluster more easily. Then again,
if you're shrewd with your healthchecks you can combine tests for your
SSL IP addresses and take them down if all the webservers fail.
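
One way to do that combining in keepalived is a MISC_CHECK, where an
external script probes the webservers sitting behind a given squid and
exits non-zero when none of them respond, pulling that squid out of the
SSL pool. A sketch, with hypothetical addresses and script path:

    virtual_server 192.0.2.10 443 {
        delay_loop 10
        lb_algo wlc
        lb_kind DR
        protocol TCP

        real_server 10.0.0.1 443 {    # squid 1
            weight 1
            MISC_CHECK {
                # hypothetical script: fails when all webservers
                # behind this squid are down
                misc_path "/usr/local/bin/check_backends.sh 10.0.0.1"
                misc_timeout 5
            }
        }
    }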

Also, don't overallocate IP addresses on the Squids. If I say "think TCP
ports", you only need a single IP on your frontend NIC... but I'll leave
you to work that one out!
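
(To spell the hint out a little without giving the whole game away: Squid
will happily run its plain and SSL reverse-proxy listeners on the same
address, differing only in TCP port. Roughly, in Squid 2.6-style syntax
and with a hypothetical address and cert path:

    http_port 10.0.0.1:80 accel vhost
    https_port 10.0.0.1:443 cert=/etc/squid/certs/site.pem accel vhost

The LVS just forwards VIP:80 and VIP:443 to the matching ports.)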

Remember that the LVS is effectively just a clever router; it isn't
application-aware at all. That's what L7 kit is for :)
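
You can see that in the ipvsadm rules themselves: all the director ever
matches on is protocol, address and port. A sketch with hypothetical
addresses, using direct routing:

    # add a virtual TCP service on the VIP, weighted-least-connection
    ipvsadm -A -t 192.0.2.10:443 -s wlc
    # add a realserver (a squid) to it, gatewaying/DR mode
    ipvsadm -a -t 192.0.2.10:443 -r 10.0.0.1:443 -g

Nothing in there knows about HTTP, Host: headers or certificates.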


More information about the lvs-users mailing list