
Caching Using ElastiCache

You should consider caching your database in the following situations:

Response Time Concerns

You are concerned about response times for your customers. You might have latency-sensitive workloads that you want to speed up.

High Volume Requests

You have a high volume of requests that overburden your database. You might have a large amount of traffic and need increased throughput.

Cost Reduction

You would like to reduce your database costs. Scaling for high reads can be costly, and a single in-memory cache node can deliver more requests per second than multiple database read replicas.

Time-consuming and complex database queries can create bottlenecks in applications. A database cache supplements your primary database by relieving it of unnecessary pressure, typically from frequently accessed read data.

ElastiCache is a fully managed, key-value, in-memory data store with sub-millisecond latency that sits between an application and an origin data store.

  • Decreases access latency and eases the load of databases and applications
  • Provides high performance and cost-effective in-memory cache
  • Is fully compatible with open source Redis and Memcached engines

ElastiCache manages the work involved in setting up a distributed in-memory environment, from provisioning server resources to installing software. The service automates common administrative tasks such as failure detection, recovery, and software patching.

ElastiCache can scale out, scale in, and scale up to meet fluctuating application demands.

ElastiCache works as an in-memory data store and cache to support the most demanding applications that require sub-millisecond response times.

Most data stores have areas of data that are frequently accessed but seldom updated. By caching query results, you pay the price of the query one time, then quickly retrieve the data multiple times without having to rerun the query.

Developers can continue to use the same Redis and Memcached application code, drivers, and tools to run, manage, and scale their workloads on ElastiCache.

Choose an ElastiCache Engine Based on What You Need


Choose Memcached if you need:

  • The most basic model
  • The ability to run large nodes with multiple cores or threads
  • The ability to scale horizontally with Auto Discovery

Features:

  • Low maintenance: Requires less maintenance than Redis
  • Multithreading: Can use multiple processing cores for better performance
  • Horizontal scalability: Auto Discovery feature automatically discovers changes to node membership
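
For illustration, here is a minimal sketch of talking to a Memcached cluster with the open source pymemcache library. The node endpoints are placeholders, and full Auto Discovery requires a cluster-aware client (such as the ElastiCache Cluster Client); this sketch simply distributes keys across known nodes.

```python
# A minimal sketch, assuming a hypothetical Memcached cluster whose node
# endpoints you already know.
from pymemcache.client.hash import HashClient

# Placeholder node endpoints; in practice you would obtain them from the
# cluster's configuration endpoint via Auto Discovery.
client = HashClient([
    ("my-cluster.xxxxxx.0001.use1.cache.amazonaws.com", 11211),
    ("my-cluster.xxxxxx.0002.use1.cache.amazonaws.com", 11211),
])

client.set("greeting", "hello")  # key is hashed to one of the nodes
print(client.get("greeting"))    # b'hello'
```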

A cache node is the smallest building block of an ElastiCache deployment. It is a fixed-size chunk of secure, network-attached RAM. Each node runs the engine that was chosen when the cluster was created.

An ElastiCache cluster is a logical grouping of one or more nodes. Your application connects to an ElastiCache node or cluster by using a unique address called an endpoint.
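
As a sketch of what connecting looks like, assuming a Redis-compatible cluster and the open source redis-py client (the endpoint below is a placeholder for your cluster's endpoint):

```python
# A minimal connection sketch; the host is a placeholder endpoint.
import redis

cache = redis.Redis(
    host="my-cache.xxxxxx.0001.use1.cache.amazonaws.com",
    port=6379,
    decode_responses=True,  # return str instead of bytes
)

cache.set("user:42:name", "alice")
print(cache.get("user:42:name"))  # "alice"
```

Because the service is protocol-compatible, this is the same code you would write against a self-managed Redis server; only the endpoint changes.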

A TTL value is added to each write. The TTL specifies the number of seconds or milliseconds until the key expires. When an application attempts to read an expired key, the read is treated as though the data isn’t found in the cache.

As a result, the database is queried and the cache is updated. This way, the data doesn’t get too stale, and values in the cache are occasionally refreshed from the database.
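
A short sketch of TTL behavior, reusing the `cache` client from the connection example above; the key name and 5-second TTL are arbitrary:

```python
# Write a key with a 5-second TTL, then observe it expire.
import time

cache.set("session:abc", "active", ex=5)  # ex= sets the TTL in seconds
print(cache.ttl("session:abc"))           # remaining lifetime, e.g. 5

time.sleep(6)
print(cache.get("session:abc"))  # None: treated as a cache miss, so the
                                 # application falls back to the database
```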

Lazy Loading

  • Caching Pattern: Updates the cache only after the data is requested
  • Advantage: The cache contains only data that the application actually requests
  • Disadvantage: Requires a programmatic strategy to keep the cache up to date

Use lazy loading for data that is read often but written infrequently, such as user profiles that rarely change but are accessed frequently.
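
A minimal lazy-loading sketch, reusing the `cache` client from above; `query_database` is a hypothetical helper standing in for a read against your primary database:

```python
# Lazy loading (cache-aside): check the cache first, fall back to the
# database on a miss, then populate the cache for future reads.
import json

def get_user_profile(user_id: int) -> dict:
    key = f"user:{user_id}:profile"

    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely

    profile = query_database(user_id)  # cache miss: query the primary database
    cache.set(key, json.dumps(profile), ex=300)  # cache with a TTL so the
    return profile                               # value is refreshed over time
```

The TTL on the write is what keeps lazily loaded values from growing too stale, as described above.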

Write-Through

  • Caching Pattern: Updates the cache immediately after updating the primary database
  • Advantage: The cache stays up to date with the primary database, so requested data is most likely found in the cache
  • Disadvantage: Results in increased cost from storing in-memory data that you might never use

Use write-through caching when you have data that must be updated in real time.
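
A matching write-through sketch, again reusing the `cache` client; `update_database` is a hypothetical helper that persists the change to the primary database:

```python
# Write-through: every database write is immediately mirrored to the cache,
# so subsequent reads find fresh data.
import json

def update_user_profile(user_id: int, profile: dict) -> None:
    update_database(user_id, profile)  # 1. write to the primary database

    # 2. immediately update the cache with the same data
    cache.set(f"user:{user_id}:profile", json.dumps(profile))
```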

How Lazy Loading Works

  1. A user makes a request to access content.

  2. Cache hit: The application code reads the data from the cache. If the cache contains the data, ElastiCache returns it to the user.

  3. Cache miss: If the data is not found in the cache, the application queries the primary database.

  4. The database returns the requested data, and the application sends it to the user and adds it to the cache for future requests.

Advantages:

  • Lower latency response to the requester
  • Reduced load on the primary database
  • Only requested data is cached

Disadvantages:

  • A cache miss incurs a penalty of three trips: the initial cache read, the database query, and the write back to the cache
  • Data in the cache can become stale

How Write-Through Works

  1. Any change or update in the application writes the data to the database.

  2. Immediately after, the cache is updated with the same data.

Advantages:

  • Data in the cache is always up to date and never stale
  • Increases the likelihood that the application finds the value it needs in the cache

Disadvantages:

  • Potentially caching data that you don’t need, causing added costs

ElastiCache provides a powerful solution for database caching that can significantly improve application performance while reducing database load and costs. Choosing the right engine and caching strategy is crucial for optimal results.