Load Balancers

Flow

Typically, your environment will use TLS certificates, and users will log in through the ASI load balancer via port 443.
The load balancer will then direct traffic to port 52000 or 50443 depending on the request URL.

| Request URL | Port | Redirect URL |
| --- | --- | --- |
| https://asi-load-balancer.com | 443 | https://asi-member.com:52000 |
| https://asi-load-balancer.com/auth | 443 | https://asi-member.com:50443/auth |

Other ports (for BES, DataHub, Integration Hub, etc.) should pass through the load balancer on the same port they were received on.
The sections below show typical health checks and traffic flows for each product.


Health Rules

  • Configure one health check per cluster member per product.
  • Health checks should call the product’s status or login URL.
  • Treat any HTTP 200–399 response code as healthy.
  • In standalone environments, a load balancer is usually not required; if one is used, configure it for the single instance.

Configuration Overview

| Product | Health Check URL | Healthy Codes | Mode | Sticky Sessions | Inbound Ports | Backend Ports |
| --- | --- | --- | --- | --- | --- | --- |
| ASI | https://asi-member.com:52000/status | 200–399 | round-robin | yes | 443 | 52000 |
| BES | https://bes-member.com:50819/escapex/login.jsp | 200–399 | round-robin | no | 50819, 5432 | 50819, 5432 |
| DataHub | https://datahub-member.com:50005/status | 200–399 | round-robin | no | 50000, 50005 | 50000, 50005 |
| Integration Hub | https://integration-hub-member.com:58080/status | 200–399 | round-robin | no | 58080 | 58080 |
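
To spot-check these endpoints before putting a load balancer in front of them, you can apply the same 200–399 rule by hand. The following is a minimal sketch in Python, assuming the requests library and the placeholder hostnames from the table above; verify=False is only appropriate in lab environments with self-signed certificates.

# health_probe.py — applies the 200–399 healthy rule to each product endpoint.
import requests

# Placeholder hostnames from the Configuration Overview table; adjust to your environment.
HEALTH_URLS = {
    "ASI": "https://asi-member.com:52000/status",
    "BES": "https://bes-member.com:50819/escapex/login.jsp",
    "DataHub": "https://datahub-member.com:50005/status",
    "Integration Hub": "https://integration-hub-member.com:58080/status",
}

def is_healthy(url: str) -> bool:
    try:
        # allow_redirects=False so a 3xx response is counted directly,
        # mirroring load balancers that treat 200-399 as healthy.
        resp = requests.get(url, timeout=5, verify=False, allow_redirects=False)
        return 200 <= resp.status_code < 400
    except requests.RequestException:
        return False

for product, url in HEALTH_URLS.items():
    print(f"{product}: {'healthy' if is_healthy(url) else 'UNHEALTHY'} ({url})")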

In addition to the standard health check and port mapping, the ASI load balancer must support path-based routing: the load balancer inspects the HTTP path and routes traffic to different backend ports depending on whether the request begins with /auth.

Routing Rules Summary

| Incoming Request URL | Path Match | Backend Target |
| --- | --- | --- |
| https://asi-lb.com | anything not /auth | https://asi-member.com:52000 |
| https://asi-lb.com/auth | /auth or /auth/* | https://asi-member.com:50443/auth |

Most load balancers (including F5 BIG-IP, HAProxy, NGINX, and AWS ALB) support this pattern through L7 policies or path-based routing rules.
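
The routing decision itself reduces to a prefix test on the request path. As a language-neutral illustration (a Python sketch; asi_backend_port is a hypothetical name, not part of any product API):

# The prefix test that every ASI routing rule in this guide implements.
def asi_backend_port(path: str) -> int:
    """Return the ASI backend port for an incoming request path."""
    return 50443 if path.startswith("/auth") else 52000

assert asi_backend_port("/auth/login") == 50443
assert asi_backend_port("/dashboard") == 52000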


Load Balancer Configuration Examples

The following examples show basic configurations for:

  • F5 BIG-IP (LTM)
  • NGINX
  • HAProxy
  • AWS Application Load Balancer (ALB)

Assumptions:

  • ASI: https://<asi-node>:52000, health GET /status
  • BES: https://<bes-node>:50819, health GET /escapex/login.jsp
  • DataHub: https://<datahub-node>:50005, health GET /status
  • Integration Hub: https://<ihub-node>:58080, health GET /status
  • HTTP 200–399 is considered healthy

Adjust hostnames, IPs, and ports to match your environment.


F5 BIG-IP (LTM)

F5 BIG-IP (LTM) provides advanced L4–L7 load balancing.
Each example below shows:

  • An HTTPS health monitor
  • A pool of backend nodes
  • A virtual server that receives client traffic

Health checks always call the product’s /status or login endpoint.

ASI

This configuration creates:

  • An HTTPS health check on /status
  • A pool on port 52000 for standard ASI traffic
  • A pool on port 50443 for /auth traffic
  • A virtual server on 443 that routes:
    • /auth → port 50443
    • everything else → port 52000

Monitor send string:

GET /status HTTP/1.1
Host: asi.example.com
Connection: close

Monitor, Pools, iRule & Virtual Server:

create ltm monitor https asi_status_monitor {
    send "GET /status HTTP/1.1\r\nHost: asi.example.com\r\nConnection: close\r\n\r\n"
    interval 5
    timeout 16
}

create ltm pool asi_pool_52000 {
    load-balancing-mode round-robin
    monitor asi_status_monitor
    members {
        10.0.1.11:52000 {}
        10.0.1.12:52000 {}
        10.0.1.13:52000 {}
    }
}

create ltm pool asi_auth_pool_50443 {
    load-balancing-mode round-robin
    monitor asi_status_monitor
    members {
        10.0.1.11:50443 {}
        10.0.1.12:50443 {}
        10.0.1.13:50443 {}
    }
}

create ltm rule asi_path_routing_irule {
    when HTTP_REQUEST {
        if { [HTTP::path] starts_with "/auth" } {
            pool asi_auth_pool_50443
        } else {
            pool asi_pool_52000
        }
    }
}

create ltm virtual asi_vs {
    destination 0.0.0.0:443
    ip-protocol tcp
    pool asi_pool_52000
    profiles {
        tcp {}
        clientssl {}
        http {}
    }
    rules {
        asi_path_routing_irule
    }
}

BES

This setup uses a single health check on port 50819 and ensures that both the 50819 (web) and 5432 (database) pools send traffic only to nodes where the BES login page is healthy.

BES requires load balancing on both 50819 (web/app) and 5432 (database), but health checks must always be performed on port 50819.
If port 50819 is unhealthy, the node should be removed from both pools.

Health Check (HTTPS on 50819)

Monitor send string:

GET /escapex/login.jsp HTTP/1.1
Host: bes.example.com
Connection: close


Create the monitor:

create ltm monitor https bes_status_monitor {
    send "GET /escapex/login.jsp HTTP/1.1\r\nHost: bes.example.com\r\nConnection: close\r\n\r\n"
    destination *:50819
    interval 5
    timeout 16
}

The destination *:50819 forces F5 to perform the health check against port 50819, even when this monitor is used in a pool containing :5432 members.

Pool & Virtual Server – Port 50819

create ltm pool bes_pool_50819 {
    load-balancing-mode round-robin
    monitor bes_status_monitor
    members {
        10.0.2.11:50819 {}
        10.0.2.12:50819 {}
        10.0.2.13:50819 {}
    }
}

create ltm virtual bes_vs_50819 {
    destination 0.0.0.0:50819
    ip-protocol tcp
    pool bes_pool_50819
    profiles {
        tcp {}
        clientssl {}
        http {}
    }
}

Pool & Virtual Server – Port 5432 (Database)

This pool uses the same health monitor, so database traffic is sent only to nodes with a healthy BES web tier.

create ltm pool bes_pool_5432 {
    load-balancing-mode round-robin
    monitor bes_status_monitor
    members {
        10.0.2.11:5432 {}
        10.0.2.12:5432 {}
        10.0.2.13:5432 {}
    }
}

create ltm virtual bes_vs_5432 {
    destination 0.0.0.0:5432
    ip-protocol tcp
    pool bes_pool_5432
    profiles {
        tcp {}
    }
}

This ensures:

  • One health check (50819) controls node availability
  • 50819 and 5432 both follow the same health state
  • Nodes failing the BES login page health check are fully removed from BES traffic handling
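
The design choice, in other words, is a single health signal gating several pools. The sketch below illustrates the intended behavior in Python (illustrative only; the node addresses are the placeholders used throughout this section):

# One web-tier check (port 50819) decides membership for BOTH pools.
BES_NODES = ["10.0.2.11", "10.0.2.12", "10.0.2.13"]

def pool_members(web_check_ok: dict, port: int) -> list:
    """Derive a pool's member list from the shared web-tier check results."""
    return [f"{node}:{port}" for node in BES_NODES if web_check_ok.get(node)]

checks = {"10.0.2.11": True, "10.0.2.12": False, "10.0.2.13": True}
print(pool_members(checks, 50819))  # ['10.0.2.11:50819', '10.0.2.13:50819']
print(pool_members(checks, 5432))   # ['10.0.2.11:5432', '10.0.2.13:5432']
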
DataHub

This configuration performs health checks on 50005 and routes both 50005 and 50000 traffic only to nodes where the /status endpoint is healthy.

DataHub requires load balancing on 50005 (API/status) and 50000 (data/query), but health checks should always be performed on port 50005.
If port 50005 is unhealthy, the node should be removed from both pools.

Health Check (HTTPS on 50005)

Monitor send string:

GET /status HTTP/1.1
Host: datahub.example.com
Connection: close


Create the monitor:

create ltm monitor https datahub_status_monitor {
    send "GET /status HTTP/1.1\r\nHost: datahub.example.com\r\nConnection: close\r\n\r\n"
    destination *:50005
    interval 5
    timeout 16
}

The destination *:50005 forces F5 to perform the health check against port 50005, even when this monitor is used in a pool containing :50000 members.

Pool & Virtual Server – Port 50005

This pool handles API/status traffic on port 50005.

create ltm pool datahub_pool_50005 {
    load-balancing-mode round-robin
    monitor datahub_status_monitor
    members {
        10.0.3.11:50005 {}
        10.0.3.12:50005 {}
        10.0.3.13:50005 {}
    }
}

create ltm virtual datahub_vs_50005 {
    destination 0.0.0.0:50005
    ip-protocol tcp
    pool datahub_pool_50005
    profiles {
        tcp {}
        clientssl {}
        http {}
    }
}

Pool & Virtual Server – Port 50000

This pool handles DataHub query/ingest traffic on port 50000, but uses the same health monitor on 50005, so only nodes with a healthy API/status endpoint receive 50000 traffic.

create ltm pool datahub_pool_50000 {
    load-balancing-mode round-robin
    monitor datahub_status_monitor
    members {
        10.0.3.11:50000 {}
        10.0.3.12:50000 {}
        10.0.3.13:50000 {}
    }
}

create ltm virtual datahub_vs_50000 {
    destination 0.0.0.0:50000
    ip-protocol tcp
    pool datahub_pool_50000
    profiles {
        tcp {}
        clientssl {}
        http {}
    }
}

This ensures:

  • One health check (/status on 50005) controls node availability
  • 50000 and 50005 both follow the same health state
  • Nodes failing the DataHub health check are completely removed from DataHub traffic (all ports)

Integration Hub

This config checks /status on port 58080 and forwards traffic only to healthy Integration Hub nodes.

Monitor send string:

GET /status HTTP/1.1
Host: integration-hub.example.com
Connection: close


Monitor, Pool & Virtual Server:

create ltm monitor https ihub_status_monitor {
    send "GET /status HTTP/1.1\r\nHost: integration-hub.example.com\r\nConnection: close\r\n\r\n"
    destination *:58080
    interval 5
    timeout 16
}

create ltm pool ihub_pool {
    load-balancing-mode round-robin
    monitor ihub_status_monitor
    members {
        10.0.4.11:58080 {}
        10.0.4.12:58080 {}
        10.0.4.13:58080 {}
    }
}

create ltm virtual ihub_vs {
    destination 0.0.0.0:58080
    ip-protocol tcp
    pool ihub_pool
    profiles {
        tcp {}
        clientssl {}
        http {}
    }
}

NGINX

NGINX acts as an HTTPS reverse proxy.
Each configuration defines:

  • An upstream block with backend nodes
  • A server block listening on the load-balancer port
  • Proxy rules that forward traffic to the correct product backend

With open-source NGINX, health is determined passively: a backend server is marked failed after max_fails consecutive errors and skipped for fail_timeout, then retried. Active health checks (the health_check directive) require NGINX Plus. The /status and login locations in the examples below expose each product's health URL through the proxy.

ASI

This configuration defines two upstreams:

  • asi_backend_52000 for standard ASI traffic
  • asi_backend_50443 for /auth traffic

The NGINX server listens on 443 and routes requests based on the path:

  • /auth → port 50443
  • all other paths → port 52000

upstream asi_backend_52000 {
    server 10.0.1.11:52000 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:52000 max_fails=3 fail_timeout=30s;
    server 10.0.1.13:52000 max_fails=3 fail_timeout=30s;
}

upstream asi_backend_50443 {
    server 10.0.1.11:50443 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:50443 max_fails=3 fail_timeout=30s;
    server 10.0.1.13:50443 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;
    server_name asi-load-balancer.example.com;

    ssl_certificate /etc/nginx/certs/asi.crt;
    ssl_certificate_key /etc/nginx/certs/asi.key;

    # Default route → 52000
    location / {
        proxy_pass https://asi_backend_52000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    # Authentication route → 50443
    location /auth {
        proxy_pass https://asi_backend_50443;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    # Health check
    location /status {
        proxy_pass https://asi_backend_52000/status;
        proxy_set_header Host $host;
    }
}

BES

This configuration reverse-proxies BES on port 50819 and uses the /escapex/login.jsp endpoint as a logical health URL.

upstream bes_backend {
    server 10.0.2.11:50819 max_fails=3 fail_timeout=30s;
    server 10.0.2.12:50819 max_fails=3 fail_timeout=30s;
    server 10.0.2.13:50819 max_fails=3 fail_timeout=30s;
}

server {
    listen 50819 ssl;
    server_name bes-load-balancer.example.com;

    ssl_certificate /etc/nginx/certs/bes.crt;
    ssl_certificate_key /etc/nginx/certs/bes.key;

    location / {
        proxy_pass https://bes_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    location /escapex/login.jsp {
        proxy_pass https://bes_backend/escapex/login.jsp;
        proxy_set_header Host $host;
    }
}

DataHub

This configuration routes DataHub traffic via port 50005, using the /status endpoint to verify backend health.

upstream datahub_backend {
    server 10.0.3.11:50005 max_fails=3 fail_timeout=30s;
    server 10.0.3.12:50005 max_fails=3 fail_timeout=30s;
    server 10.0.3.13:50005 max_fails=3 fail_timeout=30s;
}

server {
    listen 50005 ssl;
    server_name datahub-load-balancer.example.com;

    ssl_certificate /etc/nginx/certs/datahub.crt;
    ssl_certificate_key /etc/nginx/certs/datahub.key;

    location / {
        proxy_pass https://datahub_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    location /status {
        proxy_pass https://datahub_backend/status;
        proxy_set_header Host $host;
    }
}

Integration Hub

This configuration proxies Integration Hub traffic on port 58080 and checks /status to confirm node health.

upstream ihub_backend {
    server 10.0.4.11:58080 max_fails=3 fail_timeout=30s;
    server 10.0.4.12:58080 max_fails=3 fail_timeout=30s;
    server 10.0.4.13:58080 max_fails=3 fail_timeout=30s;
}

server {
    listen 58080 ssl;
    server_name integration-hub-load-balancer.example.com;

    ssl_certificate /etc/nginx/certs/ihub.crt;
    ssl_certificate_key /etc/nginx/certs/ihub.key;

    location / {
        proxy_pass https://ihub_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    location /status {
        proxy_pass https://ihub_backend/status;
        proxy_set_header Host $host;
    }
}

HAProxy

HAProxy provides Layer 7 load balancing with native HTTP health checks.
Each configuration includes:

  • A frontend (public listener)
  • A backend that distributes traffic to cluster nodes
  • An HTTP health check validating 200–399 responses

Each product has its own frontend/backend pair.
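
The http-check expect rstatus expression used in each backend is a regular expression over the returned status code, written to accept exactly the 200–399 range. A quick standalone sanity check of that pattern (Python, not part of any HAProxy config):

# Confirms that HAProxy's rstatus pattern accepts exactly HTTP 200-399.
import re

PATTERN = re.compile(r"(2[0-9][0-9]|3[0-9][0-9])")

assert all(PATTERN.fullmatch(str(code)) for code in range(200, 400))
assert not any(PATTERN.fullmatch(str(code)) for code in (100, 199, 400, 500, 503))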

ASI

This configuration checks /status for health and routes traffic based on path:

  • /auth → ASI nodes on 50443
  • all other paths → ASI nodes on 52000

frontend asi_frontend
    bind *:443 ssl crt /etc/haproxy/certs/asi.pem

    # ACLs for path matching
    acl is_auth_path path_beg /auth

    # Route /auth → 50443
    use_backend asi_backend_50443 if is_auth_path

    # Default route → 52000
    default_backend asi_backend_52000

backend asi_backend_52000
    balance roundrobin
    option httpchk GET /status HTTP/1.1\r\nHost:\ asi.example.com\r\nConnection:\ close\r\n
    http-check expect rstatus (2[0-9][0-9]|3[0-9][0-9])

    server asi1 10.0.1.11:52000 check ssl verify none
    server asi2 10.0.1.12:52000 check ssl verify none
    server asi3 10.0.1.13:52000 check ssl verify none

backend asi_backend_50443
    balance roundrobin
    option httpchk GET /status HTTP/1.1\r\nHost:\ asi.example.com\r\nConnection:\ close\r\n
    http-check expect rstatus (2[0-9][0-9]|3[0-9][0-9])

    server asi1 10.0.1.11:50443 check ssl verify none
    server asi2 10.0.1.12:50443 check ssl verify none
    server asi3 10.0.1.13:50443 check ssl verify none

BES

This configuration uses /escapex/login.jsp as the health endpoint and balances HTTPS traffic on 50819 across BES nodes.

frontend bes_frontend
    bind *:50819 ssl crt /etc/haproxy/certs/bes.pem
    default_backend bes_backend

backend bes_backend
    balance roundrobin
    option httpchk GET /escapex/login.jsp HTTP/1.1\r\nHost:\ bes.example.com\r\nConnection:\ close\r\n
    http-check expect rstatus (2[0-9][0-9]|3[0-9][0-9])

    server bes1 10.0.2.11:50819 check ssl verify none
    server bes2 10.0.2.12:50819 check ssl verify none
    server bes3 10.0.2.13:50819 check ssl verify none

DataHub

This configuration checks /status on 50005 and routes HTTPS traffic on the same port to DataHub nodes.

frontend datahub_frontend
    bind *:50005 ssl crt /etc/haproxy/certs/datahub.pem
    default_backend datahub_backend

backend datahub_backend
    balance roundrobin
    option httpchk GET /status HTTP/1.1\r\nHost:\ datahub.example.com\r\nConnection:\ close\r\n
    http-check expect rstatus (2[0-9][0-9]|3[0-9][0-9])

    server datahub1 10.0.3.11:50005 check ssl verify none
    server datahub2 10.0.3.12:50005 check ssl verify none
    server datahub3 10.0.3.13:50005 check ssl verify none

Integration Hub

This configuration validates /status on 58080 and balances Integration Hub traffic across nodes.

frontend ihub_frontend
    bind *:58080 ssl crt /etc/haproxy/certs/ihub.pem
    default_backend ihub_backend

backend ihub_backend
    balance roundrobin
    option httpchk GET /status HTTP/1.1\r\nHost:\ integration-hub.example.com\r\nConnection:\ close\r\n
    http-check expect rstatus (2[0-9][0-9]|3[0-9][0-9])

    server ihub1 10.0.4.11:58080 check ssl verify none
    server ihub2 10.0.4.12:58080 check ssl verify none
    server ihub3 10.0.4.13:58080 check ssl verify none

AWS Application Load Balancer (ALB)

AWS Application Load Balancer (ALB) uses HTTPS listeners and Target Groups.
Each Target Group below defines:

  • A backend port
  • A health check path (status or login)
  • A matcher that treats 200–399 as healthy

Create one Target Group per product.
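
The examples below are written in Terraform. If you provision with scripts instead, the same Target Group can be created through the AWS API; here is a minimal boto3 sketch for the ASI group (the VPC ID is a placeholder, and the name, port, and health check path change per product):

# Minimal boto3 equivalent of the "asi-tg-52000" Terraform resource below.
import boto3

elbv2 = boto3.client("elbv2")

response = elbv2.create_target_group(
    Name="asi-tg-52000",
    Protocol="HTTPS",
    Port=52000,
    VpcId="vpc-0123456789abcdef0",  # placeholder
    HealthCheckProtocol="HTTPS",
    HealthCheckPort="52000",
    HealthCheckPath="/status",
    HealthCheckIntervalSeconds=30,
    HealthCheckTimeoutSeconds=5,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=3,
    Matcher={"HttpCode": "200-399"},
)
print(response["TargetGroups"][0]["TargetGroupArn"])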

ASI Target Groups & Listener Rules

This configuration defines two Target Groups:

  • asi-tg-52000 → standard ASI traffic
  • asi-tg-50443 → /auth traffic

The ALB listener on port 443 routes based on the request path:

  • /auth → port 50443
  • everything else → port 52000

# -----------------------------
# Target Group: ASI (port 52000)
# -----------------------------
resource "aws_lb_target_group" "asi_52000" {
  name     = "asi-tg-52000"
  port     = 52000
  protocol = "HTTPS"
  vpc_id   = aws_vpc.main.id

  health_check {
    enabled             = true
    interval            = 30
    path                = "/status"
    protocol            = "HTTPS"
    port                = "52000"
    healthy_threshold   = 3
    unhealthy_threshold = 3
    timeout             = 5
    matcher             = "200-399"
  }
}

# -----------------------------
# Target Group: ASI Auth (port 50443)
# -----------------------------
resource "aws_lb_target_group" "asi_50443" {
  name     = "asi-tg-50443"
  port     = 50443
  protocol = "HTTPS"
  vpc_id   = aws_vpc.main.id

  health_check {
    enabled             = true
    interval            = 30
    path                = "/status"
    protocol            = "HTTPS"
    port                = "50443"
    healthy_threshold   = 3
    unhealthy_threshold = 3
    timeout             = 5
    matcher             = "200-399"
  }
}

# -----------------------------
# Listener on 443
# -----------------------------
resource "aws_lb_listener" "asi_listener" {
  load_balancer_arn = aws_lb.asi_lb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = aws_acm_certificate.asi_cert.arn

  # Default → 52000
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.asi_52000.arn
  }
}

# -----------------------------
# Listener Rule for /auth path
# -----------------------------
resource "aws_lb_listener_rule" "asi_auth_rule" {
  listener_arn = aws_lb_listener.asi_listener.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.asi_50443.arn
  }

  condition {
    path_pattern {
      values = ["/auth*", "/auth/*"]
    }
  }
}

BES Target Group

This Target Group routes HTTPS traffic on 50819 and uses /escapex/login.jsp as the health endpoint for BES.

resource "aws_lb_target_group" "bes" {
name = "bes-tg"
port = 50819
protocol = "HTTPS"
vpc_id = aws_vpc.main.id

health_check {
enabled = true
interval = 30
path = "/escapex/login.jsp"
protocol = "HTTPS"
port = "50819"
healthy_threshold = 3
unhealthy_threshold = 3
timeout = 5
matcher = "200-399"
}
}
DataHub Target Group

This Target Group targets DataHub on port 50005, using /status for health checks.

resource "aws_lb_target_group" "datahub" {
name = "datahub-tg"
port = 50005
protocol = "HTTPS"
vpc_id = aws_vpc.main.id

health_check {
enabled = true
interval = 30
path = "/status"
protocol = "HTTPS"
port = "50005"
healthy_threshold = 3
unhealthy_threshold = 3
timeout = 5
matcher = "200-399"
}
}
Integration Hub Target Group

This Target Group routes traffic on port 58080 to Integration Hub nodes and checks /status for health.

resource "aws_lb_target_group" "ihub" {
name = "integration-hub-tg"
port = 58080
protocol = "HTTPS"
vpc_id = aws_vpc.main.id

health_check {
enabled = true
interval = 30
path = "/status"
protocol = "HTTPS"
port = "58080"
healthy_threshold = 3
unhealthy_threshold = 3
timeout = 5
matcher = "200-399"
}
}