LVS HTTPS SSL and Squid
horms at verge.net.au
Thu Aug 11 09:20:12 BST 2005
On Tue, Aug 09, 2005 at 02:18:20PM +0100, Graeme Fowler wrote:
> On Tue 09 Aug 2005 02:29:11 BST , Horms <horms at verge.net.au> wrote:
> >Here is a description I wrote a while ago about SSL/LVS
> >In a nutshell, you probably want to use persistence
> >and have the real-servers handle the SSL decryption.
> ...this is OK if you have a set of servers onto which you can install
> multiple SSL Certs and undergo the pain of potentially having an IP
> management nightmare:
> 1 realserver, 1 site -> 1 "cluster" IP
> 2 realservers, 1 site -> 2 "cluster" IPs
> 10 realservers, 10 sites -> 100 "cluster" IPs
> I think you can see where this is going! Very rapidly the IP address
> management becomes unwieldy.
> You can of course get around this by assigning different *ports* to
> each site on each server, but if you have the cert installed on all
> servers in the cluster this soon becomes difficult to manage too.
> Offloading onto some sort of SSL "proxy", accelerator, engine (call it
> what you will) means that you can then simply utilise the processing
> power of that (those) system(s) to do the SSL overhead and keep your
> webservers doing just that, serving pages. If each VIP:443 points to a
> different port on the "engine" you greatly simplify your address
> management too.
I do not follow how having traffic arrive at the real-servers in
plain text rather than SSL changes the IP address situation you
describe above.
> Also, most commercial SSL certification authorities will charge you an
> additional fee to deploy a cert on more than one machine, so if you can
> reduce the number of "engine" servers you reduce your costs by quite
> some margin.
In my opinion this is the sole reason to use an SSL accelerator.
SSL is likely to be the most expensive operation your cluster
performs, and in many cases it is cheaper to buy some extra
servers and offload this processing to an LVS cluster than to
buy an SSL accelerator card.
> You do however still need to use persistence, and potentially deploy
> some sort of "pseudo-persistence" in your engines too to ensure that
> they are utilising the same backend server. If you don't, you'll get
> all sorts of application-based session oddness occurring (unless you
> can share session states across the cluster).
If you have applications that need persistence, then of course
you need persistence whether SSL is involved or not. But if you are
delivering SSL to the real servers then you probably want persistence
regardless of your applications, so that SSL SessionID reuse works
and thus lowers the SSL processing cost.
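As a rough sketch, persistence for SSL on the director can be
configured with ipvsadm along these lines. The addresses, the
round-robin scheduler and the 300 second timeout are illustrative
assumptions, not taken from the discussion above:

```shell
# Virtual service on the VIP, port 443, round-robin scheduling.
# -p 300 makes connections from the same client stick to the same
# real server for 300 seconds, so SSL SessionID resumption works
# when the real servers terminate SSL themselves.
ipvsadm -A -t 192.168.0.100:443 -s rr -p 300

# Two real servers terminating SSL, using masquerading (NAT).
ipvsadm -a -t 192.168.0.100:443 -r 192.168.0.11:443 -m
ipvsadm -a -t 192.168.0.100:443 -r 192.168.0.12:443 -m
```

With this in place a returning client resumes its existing SSL
session instead of performing a full handshake, which is where
most of the SSL cost lies.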