

The Golden Rule for Django Developers is "Don't prematurely optimize code." But when a website goes live, user demand can outstrip site functionality, meaning slower page loads and a degraded user experience.

Code optimization is no longer a waste of time; it's required. There are several ways to optimize, but scaling out horizontally to deal with growing demand gets expensive. Luckily, there's still a way to salvage your code without breaking the bank: you can cache.

But what is a cache? Caching is a fancy word for being lazy or, more accurately, for letting your servers be lazy. The concept is that we can reuse what we've done before so that we don't have to needlessly redo it. A cache, therefore, is an ephemeral memory bank that provides quick access to previously computed data where you need it, when you need it.

To explain how caching fits into a computing system, we can look at a standard web infrastructure. An Elastic Load Balancer (a device that spreads traffic across many servers) routes requests into a cluster of EC2 Servers (which allow for scaled computing). The EC2 Servers dump some static content into S3 Buckets (static file storage). The EC2 Servers then talk to an RDS backend (a durable database) to get things done.

Now, let's look at that same system with the addition of caching. A downstream cache is added in front of the Elastic Load Balancer and the S3 bucket. The EC2 servers then talk to both the RDS backend and an upstream ElastiCache (Amazon's managed in-memory cache service).

We added the downstream cache to deliver pages faster with fewer server resources by re-serving a page generated for one request to future requests for the same page. The upstream cache, on the other hand, delivers pages faster with fewer server resources by reusing calculated values and content fragments generated during one page view for future page views. Both can transform the speed of your system.

The bullets below summarize the two options in terms of placement and purpose:

- Upstream caches are services your servers talk to that live further upstream, away from the user. They can hold computed data, content fragments, and syndicated content.
- Downstream caches are services that users talk to that live further downstream from your servers, toward the user. They operate at the page level of a website. Examples: CloudFront and other CDNs, or a Squid or Varnish proxy server.

So the upstream cache's purpose is to save the results of expensive calculations so that they don't need to be done again only to produce the same result. To be clear, this isn't denormalization but rather an entirely temporary store of data; upstream caching depends on a definition of how long each saved result should live.
