Redis Cache in Spring Boot applications

When developing applications with Spring Boot, you have to consider an important aspect of persistent, disk-based databases such as PostgreSQL: performance. Depending on the amount and type of data, and on the access times of the storage devices, you may find that in a production environment, access to frequently needed data becomes a significant bottleneck for the entire application.

Redis is an open source (BSD licensed), in-memory data store. It can be used as a database, cache, message broker or streaming engine. For Spring Boot applications, Redis Cache implements the Spring Framework’s caching abstraction, which makes caching quick and simple to set up and use.

Why use Redis as a cache in Spring?

As mentioned above, the main reason to use caching is performance. Redis works as an in-memory cache, meaning that all cached data is stored in RAM. Modern DDR4 RAM offers peak transfer speeds of 35 GB/s or more and, more importantly, far more data transfers per second than persistent storage. In practice, this means RAM lets you move more data per second and access it with much lower latency than traditional Hard Disk Drives (HDDs) or newer Solid State Drives (SSDs).

Here is an example of the difference a cache makes: the first GET request to a Spring Boot application, whose result is not yet in the cache, might take around 50 ms, while a second GET request to the same URI could take only 5-10 ms.

For the end user of your application, this means a faster and more responsive user experience. There is a slight trade-off for the increased performance: you need to set aside additional resources for the in-memory cache. However, since you will have significantly fewer hits on your actual database, you will most likely end up saving resources when handling many frequent requests to the same URI.

In a nutshell, Redis Cache minimizes the number of calls made to your database and improves latency, which in turn improves the overall performance of your system architecture.

Configuring Redis Cache in Spring Boot

To use Redis Cache in Spring Boot, you first need to add the spring-boot-starter-cache and spring-boot-starter-data-redis dependencies to your project. We will use Maven as our project management tool for this example:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

These dependencies add Spring’s caching support as well as Redis’ implementation of it to your project.
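
By default, Spring Boot’s auto-configuration expects a Redis server at localhost:6379. If your instance runs elsewhere, you can point the application at it in application.properties. Here is a minimal sketch; note that Spring Boot 3 uses the spring.data.redis prefix, while Boot 2 uses spring.redis:

# connection settings for the Redis server (Spring Boot 3.x prefix)
spring.data.redis.host=localhost
spring.data.redis.port=6379
# optional: select Redis explicitly as the cache provider
# (auto-detected when spring-boot-starter-data-redis is on the classpath)
spring.cache.type=redis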

Next, add the @EnableCaching annotation to your Spring Boot app:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching // switches on Spring's caching infrastructure
public class MyApp {
    public static void main(String[] args) {
        SpringApplication.run(MyApp.class, args);
    }
}

Spring Boot will now auto-configure the cache according to the default RedisCacheConfiguration. You can also supply your own settings by defining a custom RedisCacheConfiguration bean:

import java.time.Duration;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;

@Bean
public RedisCacheConfiguration cacheConfiguration() {
    return RedisCacheConfiguration
        .defaultCacheConfig()
        // expire each entry 60 minutes after it was written
        .entryTtl(Duration.ofMinutes(60))
        // store values as JSON instead of JDK-serialized bytes
        .serializeValuesWith(RedisSerializationContext
            .SerializationPair
            .fromSerializer(new GenericJackson2JsonRedisSerializer()));
}

In this case, we set the default time-to-live (TTL) of each cache entry to 60 minutes and define a JSON serializer for our values.

You can also register a RedisCacheManagerBuilderCustomizer bean to define individual configurations for different caches in your application:

import org.springframework.boot.autoconfigure.cache.RedisCacheManagerBuilderCustomizer;

@Bean
public RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer() {
    return (builder) -> builder
      // cached users may live for 20 minutes...
      .withCacheConfiguration("userCache",
        RedisCacheConfiguration
            .defaultCacheConfig()
            .entryTtl(Duration.ofMinutes(20)))
      // ...while more volatile data expires after 5 minutes
      .withCacheConfiguration("dataCache",
        RedisCacheConfiguration
            .defaultCacheConfig()
            .entryTtl(Duration.ofMinutes(5)));
}

Here, we have defined two separate cache configurations: a "userCache" with a TTL of 20 minutes, and a "dataCache" with a TTL of only 5 minutes. This lets you apply appropriate cache configurations depending on the data you want to cache and how often it needs to be updated or accessed.
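
Note that a configuration is selected by cache name: the name passed to withCacheConfiguration has to match the name used in your caching annotations. For instance, a method cached with the @Cacheable annotation (introduced below) could opt into the "userCache" settings like this; the method and repository shown here are illustrative assumptions:

// entries end up in "userCache" and use its 20-minute TTL; caches that
// are not configured explicitly fall back to the default configuration
@Cacheable(value = "userCache", key = "#id")
public User findUserById(Long id) {
    return userRepository.findById(id).orElseThrow();
}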

Next, you can start using the annotations provided by the Spring Framework’s caching support. They can be used anywhere in your code where the objects involved are serializable. In this case, the User class implements Serializable, as sketched below.
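
A minimal sketch of such a User class could look like this (the fields shown are illustrative assumptions):

import java.io.Serializable;

// Cached objects should be serializable; with Redis' default JDK
// serializer this means implementing java.io.Serializable.
public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    private String username;
    private String email;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    // further accessors omitted for brevity
}

With the entity in place, we can use the annotations in our service class: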

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // cache the result under its username; later calls are served from Redis
    @Cacheable(value = "user", key = "#username")
    public User getUser(String username) {
        return userRepository.getUserByUsername(username);
    }

    // evict the entry on save, so the next read fetches the updated user
    @CacheEvict(value = "user", key = "#user.username")
    public User saveUser(User user) {
        return userRepository.save(user);
    }
}
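
As a variation: if you would rather keep the cache warm on writes instead of evicting, Spring’s @CachePut annotation always executes the method and replaces the cached entry with the return value. A sketch of saveUser using this approach:

// alternative to @CacheEvict: refresh the cached entry on save,
// so the next read is served from the cache immediately
@CachePut(value = "user", key = "#user.username")
public User saveUser(User user) {
    return userRepository.save(user);
}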

Spring Cache synchronization

There is one special case that needs consideration. When the first request reaches the application, the cache is not filled yet, so the application fetches the data from the database. As previously mentioned, this takes more time than a request served from the cache.

What happens when a second request hits the same resource at the same time, before the cache has been filled? The second request will also query the database and try to fill the cache. We have effectively made two database calls for a resource when one would have sufficed. Luckily, Spring Cache offers an optional setting in the @Cacheable annotation for this special case:

@Cacheable(value = "user", key = "#username", sync = true)
...

Setting sync to true enables Spring Framework’s native cache synchronization. Concurrent hits on our application that arrive before the cache has been populated will wait for the cache to actually be populated, instead of performing another request to the database.

With a simple example like this, little is lost in terms of performance even without cache synchronization. However, depending on how complex your application is, how large the accessed data is, how much it needs to be processed, and how many concurrent requests occur, cache synchronization may save you valuable resources in the long run.

Limitations to Spring’s native cache synchronization

The above example was fairly simplistic: only one instance of our Spring Boot application was running. In today’s world of microservices, it is entirely normal to deploy multiple instances of an application to handle increasing demand, for example via container orchestration with Kubernetes.

What happens if we want to use cache synchronization with multiple instances of our application?

To demonstrate this further, we developed a simple Angular app that lets us send multiple parallel requests to multiple instances of our Spring Boot application. Here is how it looks:

Example application using no caching

As you can see, there are 5 server instances deployed locally, on ports 8080 through 8084. A GET request is constructed from each box’s HOSTPORT and URI values. The user JohnDoe is also created from scratch before the first request, to evict any previous cache entries. I have also purposely put a 2.5-second sleep into the service method that resolves the GET requests for users, so that we can see what this means in its most extreme case. It is safe to say most applications will never have access times this long. Finally, the servers are hit at 500-millisecond intervals to further demonstrate the effects of the different caching configurations.

I am also tracking the response times of each request and aggregating them. Without any caching implemented, the total response time is at its maximum: with six requests per server, each taking about 2.5 seconds, the aggregated response time per server should be about $6 * 2.5 = 15$ seconds, and it is. Not great!

Here is what it looks like when we try using just Spring caching, without synchronization:

Example with Spring Caching – no synchronization

Just by enabling caching, the total response time is already halved. Every request that arrives after the first one has completed (roughly 2.5 seconds later) returns instantly, as the cache has already been populated at that point. However, the earlier requests still ran against the database, since they did not wait for the cache to be populated.

Let’s enable cache synchronization and have a look at the changes:

Example with Spring Caching and synchronization

We can now see that all concurrent requests on the same server instance finish at the same time, because they all waited for the cache to be populated. That is a lot better than before. In our single-instance application example, we would already be done.

However, one issue remains: the cache is not synchronized across multiple server instances. With Spring’s native cache synchronization, we still get one additional database request for every server instance we add.

In another article, we will address the solution: Distributed Caching using Redis in Spring Boot applications.
