Caching in .NET: What you should and shouldn't be using

Caching is a powerful technique for optimizing the performance of an application. In .NET, we already have a few ways to apply caching, and .NET 9 introduces another one: the HybridCache. On top of that, there are some third-party options as well. So which ones should you use? In this blog post, I’ll walk through some of the caching options we have, and which ones you should and shouldn’t be using.

To show off the different caching implementations, I have made a simple demo application that guesses your age based on your name. It does this by calling an external age-guesser API. You can find the application code here.

We probably don’t want to call this external API every time: if you call it with the same name, you expect the same outcome. So that is what we are going to cache!

IMemoryCache

The IMemoryCache gives us the option to cache something locally, in the application instance itself. The first step in using it is registering the service:

builder.Services.AddMemoryCache();

Using the memory cache is very straightforward; here I use it when calling the API:

var cacheKey = $"guessedAge:{name}";
var ageGuessResponse = await memoryCache.GetOrCreateAsync<AgeGuessResponse>(cacheKey, async _ =>
    await GetGuessedAge(name));

We will either call the API and cache the result, or retrieve an existing result from the cache using the corresponding cache key.

However, there are a few downsides to using the IMemoryCache. Let’s say we invalidate the cache after a certain amount of time, and then a hundred calls for the same name arrive at once. Ideally, only the first call would hit the API, with the other 99 using the cached result. But that’s not what happens: GetOrCreateAsync offers no protection against concurrent cache misses, so many of those calls will bypass the cache and hit the real API. This creates a stampede of requests all reaching the external API at once, which is exactly what we want to avoid.
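For completeness, the time-based invalidation itself is easy to express: the factory receives the ICacheEntry, on which you can set an expiration. A minimal, self-contained sketch (assuming the Microsoft.Extensions.Caching.Memory package; the stubbed API call and the five-minute window are just illustration values):

```csharp
using Microsoft.Extensions.Caching.Memory;

var memoryCache = new MemoryCache(new MemoryCacheOptions());

// Stubbed stand-in for the demo's external GetGuessedAge API call.
Task<int> GetGuessedAge(string name) => Task.FromResult(42);

var name = "alice";
var cacheKey = $"guessedAge:{name}";

var guessedAge = await memoryCache.GetOrCreateAsync(cacheKey, async entry =>
{
    // Evict this entry five minutes after creation (example value).
    entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
    return await GetGuessedAge(name);
});

Console.WriteLine(guessedAge); // prints 42
```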

You can try this yourself actually 🧪:

  1. Spin up the AgeGuesser demo service
  2. Set a breakpoint in the GetGuessedAge method
  3. Call the API twice at the same time
  4. Observe that it performs the external API call twice instead of serving the second request from the cache.
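The same race can be demonstrated without breakpoints. Here is a self-contained sketch (again assuming the Microsoft.Extensions.Caching.Memory package) that counts how often the factory runs when two callers miss the cache at the same time:

```csharp
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions());
var factoryRuns = 0;

async Task<int?> GetValue() =>
    await cache.GetOrCreateAsync("guessedAge:alice", async _ =>
    {
        Interlocked.Increment(ref factoryRuns); // stands in for the API call
        await Task.Delay(200);                  // simulate API latency
        return 42;
    });

// Two "simultaneous" callers: both miss the cache, so both run the factory.
await Task.WhenAll(GetValue(), GetValue());
Console.WriteLine(factoryRuns); // prints 2 — the cache didn't help
```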

There is also an inherent downside to this cache: it’s not distributed. If you are running multiple instances of your application, the cache is not shared across those instances, so each instance makes its own unnecessary external calls for data another instance has already cached.

That last downside is something the next caching technique will solve.

IDistributedCache

The IDistributedCache solution works exactly like the name implies: it gives you one central cache that can be used in a distributed environment, where you could have multiple copies of the same application running.

Some of you might already be familiar with a caching solution like Redis. This is the implementation that we will be using for our demo.

To start using this caching mechanism, we need to add the Redis implementation for the distributed cache:

> dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis

To make this work locally, the easiest way is to spin up a Docker container with a Redis image ☝️.
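If you don’t have Redis running yet, that could look like this (assuming Docker is installed; the container name is just an example):

```shell
# Start a throwaway Redis container listening on the default port 6379
docker run -d --name redis-demo -p 6379:6379 redis
```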

You can register the Redis cache as shown below:

builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379";
});

The implementation is a little different than we had with the in-memory variant. We don’t have a combined method for either getting the cached result or creating a new cached entry. Instead, we have separate get and set methods:

var cacheKey = $"guessedAge:{name}";

var ageGuessResponseJson = await distributedCache.GetStringAsync(cacheKey);
if (ageGuessResponseJson is null)
{
    var ageGuesserResponse = await GetGuessedAge(name);
    await distributedCache.SetStringAsync(cacheKey, JsonSerializer.Serialize(ageGuesserResponse));
    return ageGuesserResponse;
}

return JsonSerializer.Deserialize<AgeGuessResponse>(ageGuessResponseJson)!;
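As with the in-memory variant, you will usually want an expiration; with IDistributedCache that goes on the set call via DistributedCacheEntryOptions. A self-contained sketch, using the in-process MemoryDistributedCache so it runs without Redis (the anonymous object stands in for the demo’s AgeGuessResponse, and the five-minute value is arbitrary):

```csharp
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;

// In-process IDistributedCache stand-in, so the sketch runs without Redis.
IDistributedCache distributedCache =
    new MemoryDistributedCache(Options.Create(new MemoryDistributedCacheOptions()));

var cacheKey = "guessedAge:alice";
// Stand-in for the demo's AgeGuessResponse.
var ageGuesserResponse = new { Name = "alice", Age = 42 };

// Expire the cached JSON five minutes after it is written (example value).
await distributedCache.SetStringAsync(
    cacheKey,
    JsonSerializer.Serialize(ageGuesserResponse),
    new DistributedCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
    });

var cachedJson = await distributedCache.GetStringAsync(cacheKey);
Console.WriteLine(cachedJson);
```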

Okay, this already solves one of the issues. However, even in this setup, if the cache entry expires and multiple API calls arrive simultaneously, there’s still no guarantee that only one of them will perform the actual API request while the others rely on the newly cached result.

People have tried to tackle this problem in the past by writing custom implementations on top of the IDistributedCache to prevent this scenario from happening. Honestly, it’s a bit surprising that Microsoft never offered a built-in solution for this 🤷‍♂️. Until .NET 9 introduced yet another cache that promises to be the silver bullet…
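A typical home-grown fix looks roughly like this: put a lock in front of cache misses, so only one caller runs the expensive factory while the rest wait and then read the freshly cached value. A simplified, self-contained sketch (not the demo’s code, and not production-ready — see the caveats in the comments):

```csharp
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;

// In-process IDistributedCache stand-in, so the sketch runs without Redis.
IDistributedCache cache =
    new MemoryDistributedCache(Options.Create(new MemoryDistributedCacheOptions()));

// One gate for ALL keys, to keep the sketch short. Real implementations
// use a gate per key, and a process-local semaphore only dedupes calls
// within a single application instance, not across a whole cluster.
var gate = new SemaphoreSlim(1, 1);

async Task<T> GetOrCreateAsync<T>(string key, Func<Task<T>> factory)
{
    var json = await cache.GetStringAsync(key);
    if (json is not null)
        return JsonSerializer.Deserialize<T>(json)!;

    await gate.WaitAsync();
    try
    {
        // Re-check: another caller may have filled the cache
        // while we were waiting at the gate.
        json = await cache.GetStringAsync(key);
        if (json is not null)
            return JsonSerializer.Deserialize<T>(json)!;

        var value = await factory();
        await cache.SetStringAsync(key, JsonSerializer.Serialize(value));
        return value;
    }
    finally
    {
        gate.Release();
    }
}

var factoryRuns = 0;
async Task<int> SlowApiCall()
{
    Interlocked.Increment(ref factoryRuns); // stands in for the external API
    await Task.Delay(200);
    return 42;
}

// Two concurrent misses for the same key: only one caller runs the factory.
await Task.WhenAll(
    GetOrCreateAsync("guessedAge:alice", SlowApiCall),
    GetOrCreateAsync("guessedAge:alice", SlowApiCall));
Console.WriteLine(factoryRuns); // prints 1
```

The double-check after acquiring the gate is the important part: without it, every waiting caller would still run the factory one after another.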

HybridCache

At the time of writing this blog, the HybridCache is the newest addition to the caching family. It promises to solve the underlying issues we’ve had with the other caching mechanisms. To be fair, it’s not really ’new’: Microsoft took a lot of inspiration from a third-party package named FusionCache, and even the name itself was suggested by its author. So they definitely deserve some credit. 🥇

There is also a case to be made for using the FusionCache package instead of Microsoft’s HybridCache implementation. However, that falls outside the scope of this blog post.

The great thing is that the FusionCache package can hook into the HybridCache. Meaning that you can still use the HybridCache as an abstraction layer, so the only thing you really need to change then is how the caching is configured. 💡
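For illustration, at the time of writing FusionCache (the ZiggyCreatures.FusionCache package) ships an adapter for exactly this; the registration looks roughly like the following, though you should double-check the exact call against the FusionCache docs:

```csharp
// Registers FusionCache and also exposes it as the HybridCache
// abstraction, so consuming code keeps injecting HybridCache.
builder.Services.AddFusionCache()
    .AsHybridCache();
```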

One of the biggest problems we still had was that when the cache is invalidated, we can get a flood of simultaneous requests that all bypass the cache. That will never happen here: the HybridCache has built-in stampede protection. 🛡️

To install the hybrid cache, you can use the following command:

> dotnet add package Microsoft.Extensions.Caching.Hybrid

To use the HybridCache we need to add it to our dependency registration, like we did with the other ones:

builder.Services.AddHybridCache();
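You can also configure defaults during registration. For example (both values below are arbitrary example values), an overall expiration plus a shorter lifetime for the local in-memory copy:

```csharp
builder.Services.AddHybridCache(options =>
{
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        // Overall lifetime of a cached entry (example value).
        Expiration = TimeSpan.FromMinutes(5),
        // Shorter lifetime for the local in-memory copy (example value).
        LocalCacheExpiration = TimeSpan.FromMinutes(1)
    };
});
```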

The usage will actually look identical to the IMemoryCache implementation:

var cacheKey = $"guessedAge:{name}";
var ageGuessResponse = await hybridCache.GetOrCreateAsync<AgeGuessResponse>(cacheKey, async _ =>
    await GetGuessedAge(name));

return ageGuessResponse;

If you’ve been following along with the code itself, you’ll notice that the HybridCache is not an interface, but an abstract class. That’s a bit of an odd choice from Microsoft: the others do have an interface, so why not keep that consistent? Oh well… being an abstract class still means we can mock it out during testing. 🧪

Using the HybridCache comes with a caveat. You see, the HybridCache can act as an in-memory cache, as a distributed cache, or as both combined.

To activate the in-memory mode, deploy the most powerful developer technique known to mankind: do absolutely nothing. It’s the default.

If you want to use a distributed cache as well, you just register the Redis cache like we did before, and the HybridCache will automatically use it as its second level:

builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379";
});

Now, this of course means that you apply either the in-memory-only cache or the distributed setup everywhere at once. But what if you still want to make an exception when caching certain data? You can!

Let’s say we enabled the distributed cache in the dependency registration; we can do the following to fall back to in-memory-only caching for a specific case:

var ageGuessResponse = await hybridCache.GetOrCreateAsync<AgeGuessResponse>(cacheKey,
    async _ => await GetGuessedAge(name),
    new HybridCacheEntryOptions
    {
        Flags = HybridCacheEntryFlags.DisableDistributedCache
    });

That’s it! 🙌 You now have a basic understanding of the different options for caching within .NET. So, which one is the best? If you’re just looking for a good basic caching option, I would use the HybridCache. It’s a solid choice out of the box, and as a nice bonus, it doesn’t require pulling in an extra third-party dependency.

If you need more advanced options for your cache, you might want to take a look at FusionCache instead. It offers features like fail-safe mechanisms, soft/hard timeouts, and more.

I hope this gives you a good overview of all of the caching options. Now go forth and cache responsibly! Your users, servers and CTO will thank you.