SignalR poses some small challenges when running in a load-balanced environment. When you have multiple servers, some users will have SignalR connections open to one server while other users are connected to another. The usual solution is a SignalR backplane, which lets every server in your cluster see every SignalR message that was sent and forward each message to the appropriate users connected to that server. This way users connected to Server B can see SignalR messages sent from Server A. You then put a load balancer in front of all the SignalR servers so you can distribute traffic evenly across them. The only problem is that, traditionally, you have to enable sticky sessions on your load balancer in order to make SignalR work.

What are sticky sessions, you may ask? Sticky sessions are a feature you can enable on your load balancer that routes all requests from a given user to the same server. The problem here is that it's additional configuration that must be done in your environment, and if you work for an ISV whose software is hosted on-premise, that means you need to understand multiple load balancers in order to support your customers. It's easier to support something that is closer to the default configuration for the load balancer.
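To make the difference concrete, here is a small sketch of how a load balancer might pick a server under each strategy. The server names and the hash scheme are made up for the illustration; real load balancers typically hash a session cookie or the client IP:

```typescript
// Illustration only: how a load balancer might pick a server under each
// strategy. Server names and the hash function are made up for the example.
const servers = ['server-a', 'server-b', 'server-c'];

// Round robin: each request simply goes to the next server in turn,
// regardless of which user sent it.
let next = 0;
function roundRobin(): string {
  const server = servers[next];
  next = (next + 1) % servers.length;
  return server;
}

// Sticky sessions: the same client identifier always hashes to the same
// server, so every request from that user lands on one machine.
function sticky(sessionId: string): string {
  let hash = 0;
  for (const ch of sessionId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return servers[hash % servers.length];
}
```

With round robin, two consecutive requests from the same user can land on different servers, which is exactly what breaks SignalR's default handshake.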

Why Are Sticky Sessions Required?

This is due to the way that SignalR works by default. When we establish a new SignalR connection, there are usually two phases. The first is the negotiation phase, where the browser makes a request to the server and gets back a unique connection id along with a list of supported transports.

// POST /hubs/myhub/negotiate
{
   "connectionId":"r6KAGrlP0sW85ytVncuQJQ",
   "availableTransports":[
      {
         "transport":"WebSockets",
         "transferFormats":[
            "Text",
            "Binary"
         ]
      },
      {
         "transport":"ServerSentEvents",
         "transferFormats":[
            "Text"
         ]
      },
      {
         "transport":"LongPolling",
         "transferFormats":[
            "Text",
            "Binary"
         ]
      }
   ]
}

Once we are past the negotiation phase, it's time to open a SignalR connection using one of the available transports. If we choose the ServerSentEvents transport, SignalR will make an HTTP GET request to /hubs/myhubname?id=r6KAGrlP0sW85ytVncuQJQ, where id is the connection id returned from the negotiation phase. The problem is that the connection id is tied to a specific server: if the negotiation phase hits Server A but we hit Server B when it's time to establish the SignalR connection, we'll end up getting a 404 response back from SignalR. Using sticky sessions with our load balancer prevents this.
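The way the connection id ends up in the transport URL can be sketched like this. `buildTransportUrl` is a hypothetical helper for illustration, not part of the SignalR client API:

```typescript
// Sketch of how the client forms the transport URL from the negotiate
// response. buildTransportUrl is a hypothetical helper, not SignalR's API.
interface NegotiateResponse {
  connectionId: string;
  availableTransports: { transport: string; transferFormats: string[] }[];
}

function buildTransportUrl(hubUrl: string, negotiate: NegotiateResponse): string {
  // The connection id issued during negotiation is appended as the `id`
  // query parameter. Only the server that issued the id recognizes it,
  // which is why landing on a different server produces a 404.
  return `${hubUrl}?id=${encodeURIComponent(negotiate.connectionId)}`;
}
```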

But as it turns out, the negotiation phase is entirely optional for SignalR connections that use the WebSocket transport. We just have to configure the browser to always use the WebSocket transport for SignalR connections and we should be golden.

Just Show Me The Code!

On the frontend, we'll always build hub connections that use the WebSocket transport and skip the negotiation phase. We use TypeScript where I work, so this will require some massaging if you are using JavaScript instead:

import * as signalR from '@aspnet/signalr';

// Build a hub connection that skips the negotiation phase and always uses
// the WebSocket transport, so any server behind the load balancer can
// accept the connection.
export function createHubConnection(url: string): signalR.HubConnection {
  return new signalR.HubConnectionBuilder()
    .withUrl(url, {
      skipNegotiation: true,
      transport: signalR.HttpTransportType.WebSockets
    })
    .build();
}
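One thing to keep in mind: with `skipNegotiation` and a single forced transport, there is no fallback to ServerSentEvents or long polling when the WebSocket connection fails, so it may be worth wrapping `start()` in some retry logic. `startWithRetry` below is a hypothetical helper, typed against the minimal shape it needs rather than the SignalR types:

```typescript
// Hypothetical retry helper: with skipNegotiation there is no transport
// fallback, so a failed WebSocket connect simply fails. Retry a few times
// with a delay before giving up. Typed against the minimal shape we need,
// so it works with any object exposing start().
async function startWithRetry(
  connection: { start(): Promise<void> },
  attempts = 3,
  delayMs = 2000
): Promise<void> {
  for (let i = 1; i <= attempts; i++) {
    try {
      await connection.start();
      return; // connected successfully
    } catch (err) {
      if (i === attempts) throw err; // out of retries, surface the error
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```

Usage would look something like `await startWithRetry(createHubConnection('/hubs/myhub'))`.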

That's the vast majority of what needs to be done to get SignalR working with round-robin routing. On the backend you'll still need some kind of backplane to deliver SignalR messages to all of your SignalR instances. Microsoft recommends their Redis backplane, but we have our own internal MongoDB backplane that we use:

public class Startup
{
  public virtual IServiceProvider ConfigureServices(IServiceCollection services)
  {
    // AddMongoDb is our internal MongoDB backplane extension; config is
    // our application configuration object (its setup isn't shown here).
    services.AddSignalR().AddMongoDb(config.Data.Mongo.SwimlaneConnectionString);

    // Rest of service configuration...
  }
}

And that’s it! You should now be able to use SignalR with a load balancer that doesn’t use sticky sessions.