This pattern is part of Patterns of Distributed Systems
HeartBeat
Show a server is available by periodically sending a message to all the other servers.
04 August 2020
Problem
When multiple servers form a cluster, the servers are responsible for storing some portion of the data, based on the partitioning and replication schemes used. Timely detection of server failures is important to make sure corrective actions can be taken by making some other server responsible for handling requests for the data on failed servers.
Solution

Figure 1: Heartbeat
Periodically send a request to all the other servers indicating the liveness of the sending server. Select the request interval to be more than the network round trip time between the servers. All the servers wait for a timeout interval, which is a multiple of the request interval, to check for heartbeats. In general,
Timeout Interval > Request Interval > Network round trip time between the servers.
e.g. if the network round trip time between the servers is 20ms, heartbeats can be sent every 100ms, and servers check after 1 second, giving enough time for multiple heartbeats to be sent without causing false failure detections. If no heartbeat is received in this interval, the receiving server declares the sending server as failed.
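As a concrete illustration of this relationship (the constant names and multipliers below are hypothetical, not taken from the original code):

class HeartbeatConfig {
    // Expected network round trip time between servers.
    static final long NETWORK_ROUND_TRIP_MS = 20;
    // Heartbeats are sent several round trips apart.
    static final long HEARTBEAT_INTERVAL_MS = 5 * NETWORK_ROUND_TRIP_MS;  // 100ms
    // Failures are checked only after several heartbeat intervals have passed,
    // so a single delayed or dropped heartbeat does not cause a false detection.
    static final long TIMEOUT_INTERVAL_MS = 10 * HEARTBEAT_INTERVAL_MS;   // 1 second
}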
Both the servers, the one sending the heartbeat and the one receiving it, have a scheduler defined as follows. The scheduler is given a method to be executed at a regular interval. When started, the task is scheduled to execute the given method.
class HeartBeatScheduler…
public class HeartBeatScheduler implements Logging {
    private ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);

    private Runnable action;
    private Long heartBeatInterval;

    public HeartBeatScheduler(Runnable action, Long heartBeatIntervalMs) {
        this.action = action;
        this.heartBeatInterval = heartBeatIntervalMs;
    }

    private ScheduledFuture<?> scheduledTask;

    public void start() {
        scheduledTask = executor.scheduleWithFixedDelay(new HeartBeatTask(action),
                heartBeatInterval, heartBeatInterval, TimeUnit.MILLISECONDS);
    }
}
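The HeartBeatTask wrapper used above is not shown in the original listing. A minimal sketch, assuming its only job is to run the action and guard against exceptions (a periodic task on ScheduledThreadPoolExecutor stops being rescheduled once an execution throws), could look like this:

class HeartBeatTask implements Runnable {
    private final Runnable action;

    HeartBeatTask(Runnable action) {
        this.action = action;
    }

    @Override
    public void run() {
        try {
            action.run();
        } catch (Exception e) {
            // Log and continue; a failed heartbeat attempt should not cancel the schedule.
            e.printStackTrace();
        }
    }
}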
On the sending server side, the scheduler executes a method to send heartbeat messages.
class SendingServer…
private void sendHeartbeat() throws IOException {
    socketChannel.blockingSend(newHeartbeatRequest(serverId));
}
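Wiring the scheduler to this method on the sending side might look like the following sketch. The startHeartbeats method and the exception handling are assumptions for illustration; sendHeartbeat is the method shown above, and its checked IOException has to be handled to fit the Runnable signature.

private HeartBeatScheduler heartbeatScheduler;

public void startHeartbeats(long heartbeatIntervalMs) {
    // Wrap sendHeartbeat so that a failed send is logged and does not stop the schedule.
    heartbeatScheduler = new HeartBeatScheduler(() -> {
        try {
            sendHeartbeat();
        } catch (IOException e) {
            // Log and rely on the next scheduled heartbeat.
        }
    }, heartbeatIntervalMs);
    heartbeatScheduler.start();
}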
On the receiving server, the failure detection mechanism starts a similar scheduler. At regular intervals, it checks whether heartbeats were received.
class AbstractFailureDetector…
private HeartBeatScheduler heartbeatScheduler = new HeartBeatScheduler(this::heartBeatCheck, 100l);

abstract void heartBeatCheck();

abstract void heartBeatReceived(T serverId);
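The surrounding abstract class is only partially shown. A sketch of what it might look like, including the markUp and markDown state tracking used later, follows; the serverStates map and the method bodies are assumptions, not the original implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public abstract class AbstractFailureDetector<T> {
    private final HeartBeatScheduler heartbeatScheduler = new HeartBeatScheduler(this::heartBeatCheck, 100l);

    // Last known state of each monitored server (assumed representation).
    private final Map<T, ServerState> serverStates = new ConcurrentHashMap<>();

    public void start() {
        heartbeatScheduler.start();
    }

    // Called periodically by the scheduler to look for missing heartbeats.
    abstract void heartBeatCheck();

    // Called whenever a heartbeat from serverId is received.
    abstract void heartBeatReceived(T serverId);

    protected void markUp(T serverId) {
        serverStates.put(serverId, ServerState.UP);
    }

    protected void markDown(T serverId) {
        serverStates.put(serverId, ServerState.DOWN);
    }

    public boolean isAlive(T serverId) {
        return serverStates.get(serverId) == ServerState.UP;
    }

    enum ServerState { UP, DOWN }
}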
The failure detector needs to have two methods:
- a method to be called whenever the receiving server receives a heartbeat, to tell the failure detector that a heartbeat was received
- a method to periodically check the heartbeat status and detect possible failures
When a heartbeat request arrives, the receiving server notifies the failure detector:
class ReceivingServer…
private void handleRequest(Message<RequestOrResponse> request) {
    RequestOrResponse clientRequest = request.getRequest();
    if (isHeartbeatRequest(clientRequest)) {
        HeartbeatRequest heartbeatRequest = JsonSerDes.deserialize(clientRequest.getMessageBodyJson(), HeartbeatRequest.class);
        failureDetector.heartBeatReceived(heartbeatRequest.getServerId());
        sendResponse(request);
    } else {
        //processes other requests
    }
}
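The heartbeat message itself only needs to carry the sender's id. A minimal sketch of the HeartbeatRequest payload, assuming an integer server id and the JSON serialization used above:

public class HeartbeatRequest {
    private Integer serverId;

    // No-arg constructor for JSON deserialization.
    public HeartbeatRequest() {
    }

    public HeartbeatRequest(Integer serverId) {
        this.serverId = serverId;
    }

    public Integer getServerId() {
        return serverId;
    }
}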
When to mark a server as failed depends on various criteria, and there are different trade-offs. In general, the smaller the heartbeat interval, the quicker failures are detected, but the higher the probability of false failure detections. So the heartbeat intervals and the interpretation of missing heartbeats are implemented as per the requirements of the cluster. In general, there are the following two broad categories.
Small Clusters - e.g. Consensus Based Systems like RAFT, Zookeeper
In all the consensus implementations, heartbeats are sent from the leader server to all follower servers. Every time a heartbeat is received, the timestamp of its arrival is recorded.
class TimeoutBasedFailureDetector…
@Override
public void heartBeatReceived(T serverId) {
    Long currentTime = System.nanoTime();
    heartbeatReceivedTimes.put(serverId, currentTime);
    markUp(serverId);
}
If no heartbeat is received in a fixed time window, the leader is considered crashed, and a new server is elected as leader. There is a chance of false failure detection because of slow processes or networks, so a Generation Clock needs to be used to detect the stale leader. This provides better availability of the system, as crashes are detected within a small time period. This is suitable for smaller clusters, typically three to five node setups, as observed in most consensus implementations like Zookeeper or Raft.
class TimeoutBasedFailureDetector…
@Override
void heartBeatCheck() {
    Long now = System.nanoTime();
    Set<T> serverIds = heartbeatReceivedTimes.keySet();
    for (T serverId : serverIds) {
        Long lastHeartbeatReceivedTime = heartbeatReceivedTimes.get(serverId);
        Long timeSinceLastHeartbeat = now - lastHeartbeatReceivedTime;
        if (timeSinceLastHeartbeat >= timeoutNanos) {
            markDown(serverId);
        }
    }
}
Technical Considerations
When a Single Socket Channel is used to communicate between servers, care must be taken to make sure that head-of-line blocking does not prevent heartbeat messages from being processed. Otherwise it can cause delays long enough to falsely detect the sending server as down, even when it was sending heartbeats at regular intervals. Request Pipeline can be used to make sure servers do not wait for responses to previous requests before sending heartbeats. Sometimes, when using a Singular Update Queue, some tasks, like disk writes, can cause delays which might delay the processing of timing interrupts and delay sending heartbeats.
This can be solved by using a separate thread for sending heartbeats asynchronously. Frameworks like Consul and Akka send heartbeats asynchronously. This can be an issue on receiving servers as well. A receiving server doing a disk write can check the heartbeat only after the write is complete, causing false failure detection. So a receiving server using a Singular Update Queue can reset its heartbeat-checking mechanism to incorporate those delays. The reference implementation of Raft, LogCabin, does this.
Sometimes, a local pause because of a runtime-specific event like garbage collection can delay the processing of heartbeats. There needs to be a mechanism to check whether the processing is happening after a possible local pause. A simple mechanism is to check if the processing is happening after a long enough time window, e.g. 5 seconds. In that case nothing is marked as failed based on that time window, and the check is deferred to the next cycle. The implementation in Cassandra is a good example of this.
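A sketch of such a guard inside the periodic check, assuming the detector records when the check last ran; the field name and the 5 second threshold are illustrative:

private long lastCheckTimeNanos = System.nanoTime();
private static final long MAX_PAUSE_NANOS = TimeUnit.SECONDS.toNanos(5);

void heartBeatCheck() {
    long now = System.nanoTime();
    long sinceLastCheck = now - lastCheckTimeNanos;
    lastCheckTimeNanos = now;
    if (sinceLastCheck > MAX_PAUSE_NANOS) {
        // The process itself was probably paused (e.g. by GC); heartbeats may have
        // arrived but not yet been processed, so defer failure detection to the next cycle.
        return;
    }
    // ...otherwise run the normal timeout-based check shown earlier...
}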
Large Clusters. Gossip Based Protocols
Heartbeating, as described in the previous section, does not scale to larger clusters with a few hundred to a thousand servers spanning wide area networks. In large clusters, two things need to be considered:
- A fixed limit on the number of messages generated per server
- The total bandwidth consumed by the heartbeat messages. It should not consume a lot of network bandwidth; there should be an upper bound of a few hundred kilobytes, making sure that too many heartbeat messages do not affect actual data transfer across the cluster.
For these reasons, all-to-all heartbeating is avoided. Failure detectors, along with Gossip protocols for propagating failure information across the cluster, are typically used in these situations. These clusters typically take actions like moving data across nodes in case of failures, so they prefer correct detections and tolerate some more delay (although bounded). The main challenge is not to incorrectly detect a node as faulty because of network delays or slow processes. A common mechanism is to assign each process a suspicion number, which increments if there is no gossip including that process within a bounded time. It is calculated based on past statistics, and only after this suspicion number reaches a configured upper limit is the process marked as failed.
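A simplified sketch of such a suspicion-based check follows; it illustrates the idea of accumulating suspicion and marking failure only above a threshold, and is not the actual Phi Accrual or SWIM algorithm. All names and the threshold are assumptions.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SuspicionBasedDetector<T> {
    private final Map<T, Integer> suspicionLevels = new ConcurrentHashMap<>();
    private final int suspicionThreshold;

    SuspicionBasedDetector(int suspicionThreshold) {
        this.suspicionThreshold = suspicionThreshold;
    }

    // Called when gossip or a heartbeat mentioning the server is received.
    void aliveMessageReceived(T serverId) {
        suspicionLevels.put(serverId, 0);
    }

    // Called periodically for each server for which no message was seen within the bounded time.
    void noMessageWithinBound(T serverId) {
        int level = suspicionLevels.merge(serverId, 1, Integer::sum);
        if (level >= suspicionThreshold) {
            markDown(serverId);
        }
    }

    private void markDown(T serverId) {
        // Propagate the failure via gossip and trigger corrective action.
    }
}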
There are two mainstream implementations: 1) the Phi Accrual failure detector (used in Akka and Cassandra) and 2) SWIM with the Lifeguard enhancement (used in Hashicorp Consul and memberlist). These implementations scale over a wide area network with thousands of machines. Akka is known to have been tried with 2400 servers. Hashicorp Consul is routinely deployed with several thousand Consul servers in a group. Having a reliable failure detector, which works efficiently for large cluster deployments and at the same time provides some consistency guarantees, remains an area of active development. Some recent developments in frameworks like Rapid look promising.
Examples
- Consensus implementations like ZAB or RAFT, which work with a small three to five node cluster, implement a fixed time window based failure detection.
- Akka Actors and Cassandra use the Phi Accrual failure detector.
- Hashicorp Consul uses the gossip-based failure detector SWIM.
Significant Revisions
04 August 2020: Initial publication