How LinkedIn Scaled Its Profile Data Store While Reducing Costs

LinkedIn serves around 4.8 million member profiles per second, a testament to its massive scale. The journey from Oracle to its homegrown document store, Espresso, marked a significant shift, enabling horizontal scaling and cost-effective growth. But the challenges didn't stop there.

The Scaling Challenge

With the platform's scale doubling every year and a heavily read-dominated workload, a sustainable scaling solution was needed. Enter Couchbase as a cache in the centralized storage tier, which achieved:

  • A 99% hit rate
  • 60% reduction in tail latencies
  • 10% decrease in annual costs
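
At a high level, these results come from keeping reads in the storage-tier cache and only touching the backing store on a miss. The Java sketch below illustrates one common shape of such a read path, a read-through cache, under the assumption that misses populate the cache; `CacheClient` and `SourceOfTruth` are hypothetical interfaces for illustration, not the real Couchbase or Espresso client APIs.

```java
import java.util.Optional;

// Hypothetical interfaces standing in for the cache and the source of truth;
// they are NOT the real Couchbase or Espresso client APIs.
interface CacheClient {
    Optional<String> get(String memberId);
    void put(String memberId, String profile); // stored with no expiry ("infinite TTL")
}

interface SourceOfTruth {
    String read(String memberId); // e.g. the backing profile table
}

// Minimal read-through cache: serve hits from the cache; on a miss, read the
// source of truth and populate the cache so subsequent reads are hits.
class ReadThroughProfileStore {
    private final CacheClient cache;
    private final SourceOfTruth sourceOfTruth;

    ReadThroughProfileStore(CacheClient cache, SourceOfTruth sourceOfTruth) {
        this.cache = cache;
        this.sourceOfTruth = sourceOfTruth;
    }

    String getProfile(String memberId) {
        Optional<String> hit = cache.get(memberId);
        if (hit.isPresent()) {
            return hit.get();                          // ~99% of reads end here
        }
        String profile = sourceOfTruth.read(memberId); // cache miss
        cache.put(memberId, profile);                  // write back for the next read
        return profile;
    }
}
```

With a hit rate near 99%, almost every call returns from the cache branch, which is what keeps read load off the backing Espresso cluster.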

Overcoming Legacy Challenges

  • From Oracle to Memcached: Initial struggles with maintaining a Memcached infrastructure during cache expansions and node replacements.
  • Transition to Espresso: Espresso's impressive scalability reduced reliance on additional caching but eventually hit an upper limit.

Making Caching Work

Strategies for effective caching included:

  1. Resiliency Against Couchbase Failures: Health monitors, operational retries, and tripling node replicas (see the sketch after this list).
  2. Ensuring Data Availability: Keeping profile data cached in every data center with an infinite TTL, and periodically bootstrapping Couchbase from the source of truth.
  3. Strict SLOs: Maintaining minimal data divergence between the source of truth and the cache.
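
As a deliberately simplified sketch of the first point, the snippet below reuses the hypothetical `CacheClient` and `SourceOfTruth` interfaces from the earlier example: the cache read is retried a bounded number of times, and if the cache stays unavailable the request falls back to the source of truth. The retry budget and overall shape are assumptions for illustration, not LinkedIn's actual implementation.

```java
import java.util.Optional;

// Reuses the hypothetical CacheClient and SourceOfTruth interfaces from the
// sketch above; the retry budget and shape here are illustrative assumptions.
class ResilientProfileReader {
    private static final int MAX_CACHE_ATTEMPTS = 3; // assumed retry budget

    private final CacheClient cache;
    private final SourceOfTruth sourceOfTruth;

    ResilientProfileReader(CacheClient cache, SourceOfTruth sourceOfTruth) {
        this.cache = cache;
        this.sourceOfTruth = sourceOfTruth;
    }

    String getProfile(String memberId) {
        for (int attempt = 1; attempt <= MAX_CACHE_ATTEMPTS; attempt++) {
            try {
                Optional<String> hit = cache.get(memberId);
                if (hit.isPresent()) {
                    return hit.get();      // normal path: served from the cache
                }
                break;                     // genuine miss: stop retrying the cache
            } catch (RuntimeException cacheFailure) {
                // Cache node unreachable or timing out: retry, then fall through.
            }
        }
        return sourceOfTruth.read(memberId); // miss or cache unavailable
    }
}
```

Health monitoring and tripled replicas, as described above, reduce how often this fallback path is exercised; the fallback itself is what would keep a cache problem from blocking reads entirely.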

The Reality of Scaling

Scaling isn't just about adding a cache or more nodes. It requires deep software engineering expertise, a keen understanding of systems, and the ability to navigate challenges and bottlenecks.

Further Reading: Dive into more details in the full blog post.

If you found this digest helpful, like, share, and follow for more technical insights.

#distributedsystems #scalability #caching