Vaadin is a full-stack Java framework for writing web applications.
One of its interesting aspects is that you can write the UI completely in Java, so you don't have to juggle different technologies and languages.
On the other hand, it is easy to include web components, or to write your own and connect them to your Java application.
Vaadin also has a powerful integrated push system, which allows you to push UI updates/notifications to the client as soon as they are "ready". It is based on Atmosphere and can benefit from WebSockets, but it also falls back to standard HTTP long polling if nothing else works.
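To illustrate the push mechanism, here is a minimal sketch in Vaadin Flow. The @Push annotation, AppShellConfigurator, and UI.access are the actual Flow APIs; the class names, the view, and the background task are hypothetical and only stand in for "work finishing on the server":

```java
import com.vaadin.flow.component.UI;
import com.vaadin.flow.component.html.Span;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.page.AppShellConfigurator;
import com.vaadin.flow.component.page.Push;
import com.vaadin.flow.router.Route;

// Enables the Atmosphere-based push channel for the whole application.
@Push
public class AppShell implements AppShellConfigurator {
}

@Route("status")
class StatusView extends VerticalLayout {
    private final Span status = new Span("waiting...");

    StatusView() {
        add(status);
        UI ui = UI.getCurrent();
        // A hypothetical background task; when the result is ready,
        // UI.access() safely locks the session and pushes the change.
        new Thread(() -> {
            // ... long-running work ...
            ui.access(() -> status.setText("done"));
        }).start();
    }
}
```

Because the server initiates the update, the stickiness discussed below matters even more: the push connection must reach the same backend that owns the session.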
If you are using a single Tomcat / servlet container instance, you either expose it directly to your customers or put a proxy in front of it. This is pretty standard, and you can find plenty of examples of how to do it with nginx or Apache. You can use a plain HTTP + WebSocket proxy, or (in the case of Apache) the AJP connector to the backend service.
But if you run multiple backend Tomcat / servlet containers for redundancy/scaling/..., things get more complicated.
The Vaadin framework is usually stateful, so you can't simply route each new request to an arbitrary backend server. All requests of a session must always go to the same servlet engine.
This is called a "sticky session", because the session sticks to the same backend server.
When you use nginx as the frontend/load balancer service, you also get sticky sessions, but only based on the IP address (and port) of the client: https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/
This works fine as long as your users are spread around the world (or rather, use different IP addresses): the load gets distributed among the backend servers.
But for a company application that is used, for example, at two locations with 50 users each, the request distribution can end up routing everything to exactly one of the backend servers while the others sit idle.
This is because nginx hashes the client's public IP address, so all users behind the same IP are routed to the same backend, which is not what we intend.
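For reference, this is roughly what IP-based stickiness looks like in the free nginx (the addresses match the backends used later in this post; the server block is a minimal placeholder):

```nginx
upstream backend {
    ip_hash;  # sticky sessions based on the client IP address only
    server 192.168.1.50:8080;
    server 192.168.1.51:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

With ip_hash, both offices in the example above would each resolve to a single hash value, so each office lands entirely on one backend.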
Nginx also has the ability to route requests based on a cookie you define (which in the case of servlet engines is called JSESSIONID). But that feature is not available in the free nginx version; it is a premium feature requiring a pricey subscription: https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/#enabling-session-persistence
So if you want the load balancer to stay free of charge, you have to choose something else.
There are dedicated HA proxies for this, but we chose to use the Apache web server.
The basic configuration for an Apache reverse proxy is simple; it gets a bit more complex with load balancing, and is even less documented for load balancing with WebSockets.
That is why this post exists: I searched (most of?) the internet for information on how to achieve this, but only found fragments of the solution, often with errors in them.
Our solution correctly proxies HTTP/HTTP2/WebSocket requests to the backend servers in a sticky way. The SSL/HTTPS configuration is not part of this post; you can use the standard approaches for that.
This setup is valid for Vaadin Flow 23.0, 23.1, and 23.2 setups. For Vaadin 23.3 and later, the push endpoints have been simplified for an easier load balancer configuration: https://github.com/vaadin/flow/issues/14641
So here is what our config looks like; we will explain the individual parts below:
#1
ProxyRequests Off
#2
ProxyPass /images/ !
ProxyPass /.well-known/ !
#3
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/ [NC]
RewriteCond %{QUERY_STRING} transport=websocket [NC]
RewriteRule /(.*) balancer://backend-ws/$1 [P,L]
#4
ProxyPass / balancer://backend/
ProxyPassReverse / balancer://backend/
#5
<Proxy balancer://backend>
BalancerMember http://192.168.1.50:8080 route=backend1
BalancerMember http://192.168.1.51:8080 route=backend2
ProxySet stickysession=JSESSIONID
</Proxy>
#6
<Proxy balancer://backend-ws>
BalancerMember ws://192.168.1.50:8080 route=backend1
BalancerMember ws://192.168.1.51:8080 route=backend2
ProxySet stickysession=JSESSIONID
</Proxy>
And in the Tomcat server.xml on the backend servers (with a unique jvmRoute per server):
<Engine name="Catalina" defaultHost="backend.service.ch" jvmRoute="backend1">
...
</Engine>
So let's explain the parts:
1. General proxy config
ProxyRequests Off
Make sure to have this in your config; otherwise your web server can be misused to proxy arbitrary requests to the internet, turning your server into an open proxy.
#2
ProxyPass /images/ !
ProxyPass /.well-known/ !
With these exclusions you can serve static content directly from your Apache web server (as long as it has access to that content).
The /.well-known/ exclusion is usually needed when you use Let's Encrypt certificates for HTTPS.
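For completeness, here is a sketch of how the excluded paths could be served locally. The filesystem paths are just examples and must be adapted to where your static files actually live:

```apache
# Hypothetical locations; adjust to your setup.
Alias /images/ /var/www/static/images/
Alias /.well-known/ /var/www/letsencrypt/.well-known/

<Directory /var/www/static/images/>
    Require all granted
</Directory>
<Directory /var/www/letsencrypt/.well-known/>
    Require all granted
</Directory>
```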
#3
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/ [NC]
RewriteCond %{QUERY_STRING} transport=websocket [NC]
RewriteRule /(.*) balancer://backend-ws/$1 [P,L]
The rules above make sure that HTTP-to-WebSocket upgrade requests are handled correctly and sent to the WebSocket balancer.
Depending on your application/backend you may need to tune the rewrites, but these work for a Vaadin application.
A rewrite rule is admittedly not optimal from a performance point of view, but so far I know of no other solution, until Vaadin 24 hopefully uses a dedicated push/websocket endpoint.
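To check that the rewrite actually sends upgrade requests to the WebSocket balancer, you can simulate one with curl. The hostname is a placeholder, and the query string is simplified; the real URL Vaadin's push client opens carries more parameters, but it contains transport=websocket, which is what the RewriteCond matches:

```shell
# A successful upgrade is answered with "101 Switching Protocols"
# by one of the backends.
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  "http://lb.example.com/?transport=websocket"
```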
#4
ProxyPass / balancer://backend/
ProxyPassReverse / balancer://backend/
Here we route the normal HTTP and HTTP/2 requests to the HTTP balancer. Take care to include the trailing / after the backend, otherwise you will get strange errors like "No protocol handler was valid for the URL /home (scheme 'balancer')" in your server error log.
#5
<Proxy balancer://backend>
BalancerMember http://192.168.1.50:8080 route=backend1
BalancerMember http://192.168.1.51:8080 route=backend2
ProxySet stickysession=JSESSIONID
</Proxy>
This is the load balancer that routes the requests to the two backend servers; you can add more members if you have more servers.
The stickysession directive tells Apache to use the JSESSIONID cookie to match requests to the correct backend. The name in each route attribute must match the jvmRoute entry in the server.xml of that backend.
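This works because Tomcat, with jvmRoute set, appends the route name to the session ID, and mod_proxy_balancer uses that suffix to pick the matching member. A session cookie then looks like this (the session ID value is made up):

```
Set-Cookie: JSESSIONID=7AF17BE83C61F9C2F12B4D5E.backend1; Path=/; HttpOnly
```

The ".backend1" suffix is what ties every later request of this session to the BalancerMember with route=backend1.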
#6
<Proxy balancer://backend-ws>
BalancerMember ws://192.168.1.50:8080 route=backend1
BalancerMember ws://192.168.1.51:8080 route=backend2
ProxySet stickysession=JSESSIONID
</Proxy>
Same as #5, but for the websocket requests.
As you can see in the balancer definitions, the backend servers are reached via unencrypted http/ws. If you need to use https/wss towards the backend servers too, you can simply change the member URLs to https/wss. But then, of course, you also have to handle the certificates on the Tomcat side.
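Such an encrypted variant could look like the sketch below. SSLProxyEngine is the standard mod_ssl directive for proxying to TLS backends; the port 8443 is an assumption about your Tomcat connector configuration:

```apache
SSLProxyEngine On

<Proxy balancer://backend>
    BalancerMember https://192.168.1.50:8443 route=backend1
    BalancerMember https://192.168.1.51:8443 route=backend2
    ProxySet stickysession=JSESSIONID
</Proxy>

<Proxy balancer://backend-ws>
    BalancerMember wss://192.168.1.50:8443 route=backend1
    BalancerMember wss://192.168.1.51:8443 route=backend2
    ProxySet stickysession=JSESSIONID
</Proxy>
```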
Required Apache modules for this to work:
Apache should use the event MPM if possible, for better handling of HTTP/2 and WebSockets. Other MPMs might work, but I have not tested them.
As for the modules themselves, enable these:
http2 -> For http/2 of course
proxy -> General basic proxy functionality
proxy_balancer -> To use balancers towards the backends
proxy_http -> For HTTP 1.x proxy requests
proxy_http2 -> For HTTP/2 proxy requests
proxy_wstunnel -> For proxying WebSocket connections
rewrite -> To identify WebSocket requests and redirect them to the ws balancer
lbmethod_byrequests -> The load-balancing method to use (by request count)
On Debian you can simply run a2enmod <module_name> to enable them; on other distributions the commands vary, but in the end these modules must be active to get a full HTTP/HTTP2/WS load balancer for Vaadin.
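On Debian/Ubuntu the whole set can be enabled in one go, since a2enmod accepts multiple module names:

```shell
a2enmod http2 proxy proxy_balancer proxy_http proxy_http2 proxy_wstunnel rewrite lbmethod_byrequests
systemctl restart apache2
```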
This setup works for Vaadin 23.0 through 23.2. For Vaadin 24 there is an ongoing discussion about having a dedicated endpoint for push/websockets: https://github.com/vaadin/flow/issues/14641#issuecomment-1266519119
There is also documentation (still to be written) about Vaadin and reverse proxy setups:
https://github.com/vaadin/docs/issues/1776#issuecomment-1272384234