What the research is:
The continued emergence of large social network applications has introduced a scale of data and query volume that tests the limits of existing data stores. However, few benchmarks accurately simulate these request patterns, leaving researchers with few tools to evaluate and improve these systems.
To address this issue, we are open-sourcing TAOBench, a new benchmark that captures the social graph workload at Meta. We’re making workload configurations available, as well as a benchmarking framework that leverages these request features to accurately model production workloads and generate emergent application behavior. We’re ensuring the integrity of TAOBench’s workloads by validating them against their production counterparts. Furthermore, we’re describing several benchmark use cases at Meta and reporting results for five popular distributed database systems to demonstrate the benefits of using TAOBench to evaluate system trade-offs and to identify and address performance issues. Our benchmark fills a gap in the available tools and data that researchers and developers have to inform system design decisions.
How it works:
Since benchmarks are only as useful as the workloads they are derived from, we have identified five properties that should be captured by their request patterns. A comprehensive social network benchmark should:
Accurately emulate social network requests
Capture any transactional requirements
Express data colocation preferences and constraints
Model request distributions without prescriptive query types
Exhibit multitenant behavior on shared data
To satisfy these properties, we profile requests served by TAO, an online graph data store at Meta.
TAO is a read-optimized, geographically distributed data store that provides access to the social graph for diverse products and back-end systems. In aggregate, TAO serves over 10 billion requests per second on a changing dataset of many petabytes. Its workload contains a variety of notable attributes. For example, read and write skew often manifests on different keys: Over 99 percent of data items that are frequently written to are, on average, read less than once per day.
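One way to picture this property is to draw reads and writes from skewed distributions whose hot sets differ. The sketch below is purely illustrative (the key names, Zipf exponent, and sample counts are assumptions, not TAOBench parameters): reads and writes are both heavily skewed, but toward opposite ends of the key space, so the keys that absorb most writes see almost no reads.

```python
import random

def zipf_sample(n, s, rng):
    # Sample a rank in [0, n) with Zipfian weight (rank + 1) ** -s.
    weights = [1.0 / (k + 1) ** s for k in range(n)]
    return rng.choices(range(n), weights=weights, k=1)[0]

rng = random.Random(42)
keys = [f"obj:{i}" for i in range(1000)]

# Reads favor the front of the key list; writes favor the reverse order,
# so the read-hot and write-hot key sets are (mostly) disjoint.
read_order = keys
write_order = list(reversed(keys))

reads = [read_order[zipf_sample(1000, 1.2, rng)] for _ in range(10_000)]
writes = [write_order[zipf_sample(1000, 1.2, rng)] for _ in range(1_000)]

# The most frequently written keys receive far fewer reads than the
# most frequently read keys.
reads_on_write_hot = sum(reads.count(k) for k in write_order[:10])
reads_on_read_hot = sum(reads.count(k) for k in read_order[:10])
```

Under this toy model, `reads_on_write_hot` is orders of magnitude smaller than `reads_on_read_hot`, mirroring the observation that write-hot items are read less than once per day.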
To accurately generate TAO’s workloads at a flexible scale, we characterize these request patterns and identify a small set of parameters, including transaction size, key-to-shard mapping, and frequency of operation types, that are sufficient to replicate production workloads. We then leverage these features in TAOBench to both accurately downscale Meta’s social network workload and model emergent application behavior. Our parametrized framework is open source and extensible, allowing it to simulate a range of different request patterns.
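To make the idea of a parametrized workload concrete, here is a minimal sketch of a generator driven by the kinds of parameters named above. The configuration keys, values, and helper names are hypothetical stand-ins, not TAOBench’s actual configuration format:

```python
import random

# Hypothetical workload parameters: operation mix, transaction size
# distribution, and shard count. Values are illustrative only.
workload = {
    "operation_mix": {"read": 0.95, "write": 0.04,
                      "read_txn": 0.005, "write_txn": 0.005},
    "txn_size_dist": [1, 2, 2, 3, 5],  # sampled transaction sizes
    "num_shards": 8,
    "num_keys": 10_000,
}

def key_to_shard(key, num_shards):
    # Deterministic hash-based key-to-shard mapping (for illustration).
    return hash(key) % num_shards

def next_operation(cfg, rng):
    # Draw an operation type according to the configured mix, then pick
    # keys and map each to its shard.
    ops, probs = zip(*cfg["operation_mix"].items())
    op = rng.choices(ops, weights=probs, k=1)[0]
    size = rng.choice(cfg["txn_size_dist"]) if op.endswith("_txn") else 1
    keys = [f"obj:{rng.randrange(cfg['num_keys'])}" for _ in range(size)]
    shards = [key_to_shard(k, cfg["num_shards"]) for k in keys]
    return op, keys, shards

rng = random.Random(7)
op, keys, shards = next_operation(workload, rng)
```

Because the generator is driven entirely by these parameters, the same machinery can be pointed at a downscaled production profile or at a speculative workload simply by swapping the configuration.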
To illustrate TAOBench’s applicability, we report on how Meta uses this tool to test new features, optimizations, and reliability (e.g., hotspots, worst-case scenarios) as well as experiment with speculative workloads that would otherwise be difficult or infeasible to assess in production.
We provide four examples:
Analyzing new transaction use cases
Assessing contention under longer lock hold times
Evaluating new APIs
Quantifying the performance of high fan-out transactions
Furthermore, we provide the results for TAOBench on five widely used distributed databases (Cloud Spanner, CockroachDB, PlanetScale, TiDB, YugabyteDB) to demonstrate how our benchmark can be used to study performance trade-offs and identify optimization opportunities.
Why it matters:
Despite the ubiquity of social networks, there is a lack of publicly available, realistic workloads to guide research on their underlying database infrastructure. In academia, this scarcity makes it difficult to probe the limits of existing systems and develop novel mechanisms to overcome them. In industry, it is challenging for practitioners to evaluate new features and resolve issues without a way to reproduce these request patterns. To address the gap in representative workloads, we present TAOBench, the first open source benchmark that generates end-to-end, transactional request patterns derived from a large-scale social network. With our benchmark, we make Meta’s social graph workload accessible to the database community and provide visibility into the real-world challenges of supporting such workloads.
Read the full paper:
The post Open-sourcing TAOBench: An end-to-end social network benchmark appeared first on Engineering at Meta.