Scalable Task Architecture for In-Memory Databases

15-418 Final Project


  • Title Scalable Task Architecture for In-Memory Databases
  • Team Lilia Tang, Wan Shen Lim
  • URL
  • Summary We will integrate the Intel TBB task scheduler [0] into the CMU DBMS terrier [1]. This provides a mechanism for submitting tasks for query execution, which we hope to unify with the system’s existing ad-hoc threading architecture by tuning and using TBB task arenas. Separately, we aim to write microbenchmarks to gain a deep understanding of how to use Intel TBB efficiently across different hardware configurations.

  • [0]
  • [1]


The existing codebase in terrier has a dedicated thread registry [0] that currently supports the metrics manager [1], write-ahead logging [2], and network-layer connection handling [3,4]. We can parse SQL commands from the network layer, through the parser, into the binder, and soon into the optimizer. However, we are missing the link that lets the optimizer hand off physical query plans to the execution engine: a work queue that can accept arbitrary tasks to be executed later. A cursory survey of our options led us to settle on Intel TBB. The goal of this project is to explore the limitations and scalability of Intel TBB for the workload of a DBMS.

  • [0]
  • [1]
  • [2]
  • [3]
  • [4]


The difficulty in this project is understanding the constraints of the varied workloads in a DBMS. As mentioned above, the threading infrastructure handles metrics collection, logging to stable storage, and managing connections. The tasks supported have diverse profiles: some are long-lived and unimportant, some are long-lived but fairly important, some are generated in response to other tasks, and so on. Our goal is to first understand the characteristics of TBB and of typical workloads in order to handle issues such as unfairness, preemption, and starvation, and to determine appropriate metrics modeling desirable characteristics of fairness and prioritization to compare our approaches against. We will likely begin by running experiments on TBB in isolation, i.e. microbenchmarks, and then proceed one subsystem at a time. The ultimate project goal would be to provide a simplified wrapper for tasks of varying lifetimes and priorities. On the learning front, we hope to gain a deep understanding of using a cutting-edge threading library for diverse workloads and hardware configurations. A cursory literature search indicates that most architectural decisions in tasking are not well justified. For example, given a distinct transaction that generates query execution, logging, and garbage collection tasks, it is an open question whether it would be more performant to have a dedicated task arena for each subsystem or a single task arena shared by subsystems of similar priority.
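As a concrete illustration of that open question, the two alternatives might be sketched as follows (this assumes oneTBB’s task_arena API; the concurrency limits and subsystem hooks are purely illustrative, not a design commitment):

```cpp
#include <functional>
#include <utility>

#include <tbb/task_arena.h>

// Option A: a dedicated arena per subsystem, each with its own
// worker-thread budget, so subsystems cannot starve each other.
tbb::task_arena execution_arena(/*max_concurrency=*/6);
tbb::task_arena logging_arena(/*max_concurrency=*/2);

void SubmitQueryTask(std::function<void()> task) {
  // Fire-and-forget submission into the execution arena.
  execution_arena.enqueue(std::move(task));
}

void SubmitLogFlush(std::function<void()> task) {
  logging_arena.enqueue(std::move(task));
}

// Option B: one arena shared by subsystems of similar priority;
// their tasks compete for the same pool of worker threads.
tbb::task_arena shared_arena(/*max_concurrency=*/8);

void SubmitShared(std::function<void()> task) {
  shared_arena.enqueue(std::move(task));
}
```

Our microbenchmarks are intended to tell us which of these shapes wins, and under which workload mixes.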


There has been a previous attempt to characterize the performance of Intel’s Threading Building Blocks; see [0] for a general study on typical parallel workloads such as fluid simulation and n-body problems. One of our goals is to extend this analysis to the workload of a DBMS, for which we will start with the terrier code base and independent microbenchmarks. We will also attempt to obtain and use Intel’s Parallel Studio [1] for profiling; it is our understanding that a free educational version exists. We would welcome AWS credit if you’re offering any, or more generally access to machines with very high core counts, for the purpose of studying TBB behavior across different configurations. We do not have prior experience with TBB, and the current extent of TBB in terrier is just a thin wrapper around TBB concurrent data structures.

  • [0]
  • [1]
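Our microbenchmark skeleton could take roughly the following shape, shown here with plain std::thread so it stands alone (the TBB variants would swap out the dispatch mechanism; function names and the toy reduction workload are our own illustration):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <thread>
#include <vector>

// Time a callable and return elapsed wall-clock milliseconds.
template <typename F>
double MeasureMs(F &&fn) {
  auto start = std::chrono::steady_clock::now();
  fn();
  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(end - start).count();
}

// A trivially parallel workload: sum a chunk of the input.
std::uint64_t PartialSum(const std::vector<std::uint64_t> &data,
                         std::size_t begin, std::size_t end) {
  return std::accumulate(data.begin() + static_cast<std::ptrdiff_t>(begin),
                         data.begin() + static_cast<std::ptrdiff_t>(end),
                         std::uint64_t{0});
}

// Run the reduction at a given thread count; sweeping num_threads and
// plotting MeasureMs results gives the speedup curves we plan to show.
std::uint64_t ParallelSum(const std::vector<std::uint64_t> &data,
                          std::size_t num_threads) {
  std::vector<std::uint64_t> partials(num_threads, 0);
  std::vector<std::thread> threads;
  std::size_t chunk = data.size() / num_threads;
  for (std::size_t t = 0; t < num_threads; ++t) {
    std::size_t begin = t * chunk;
    std::size_t end = (t + 1 == num_threads) ? data.size() : begin + chunk;
    threads.emplace_back(
        [&, t, begin, end] { partials[t] = PartialSum(data, begin, end); });
  }
  for (auto &th : threads) th.join();
  return std::accumulate(partials.begin(), partials.end(), std::uint64_t{0});
}
```

For demo day, the same harness would sweep thread counts and arena configurations and record the measured speedups.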


Plan to achieve

  • Microbenchmarks showing the performance and relative benefits of various TBB task arena configurations.
  • The DBMS should be capable of executing the workloads of at least one subsystem with TBB tasking. We expect to see a net performance benefit over the existing “dedicated thread” architecture linked above.
  • Demo day should include speedup graphs for various TBB architectures across varying core counts.

Hope to achieve

  • Complete integration of TBB tasking as the default model of task execution in the terrier DBMS, with a wrapper for the various types of tasks that a DBMS generates.
  • Visualization of the TBB overhead, with arguments for why it is unavoidable or future directions for reducing the overhead.


Databases are a rich source of parallelism problems. Executing queries in parallel wherever possible is the obvious win, but that alone is a very limited view of the opportunities for parallelism in a database; by applying Intel’s TBB tasking framework across the other subsystems as well, we may be able to achieve good parallelism in several different areas of the database.


  • 11/05 (Run microbenchmarks for various TBB configurations) + (Integrate TBB’s task scheduler into terrier in the form of a wrapper)
  • 11/12 Migrate at least one existing dedicated subcomponent of terrier to use the task scheduler, modifying existing benchmarks such as TPC-C to see whether an improvement is observed over the existing threading architecture. Investigate other modifications to the tasking architecture, scheduling, or general optimizations, and evaluate against existing benchmarks.
  • 11/19 (Intermediate Checkpoint) Continue migrating subcomponents of terrier to use the task scheduler. Now is a good time to re-evaluate the rest of the project. If the scope needs to be restricted, subsystems can be dropped. If the scope can be broadened, we can look into implementing parallel scans for the execution engine. Investigate other modifications to tasking architecture, scheduling, or general optimizations and evaluate against existing benchmarks.
  • 11/26, 12/03 Work depends on the project-scope conclusion reached at the intermediate checkpoint. Evaluate against existing benchmarks.
  • 12/09 Collect final results and write up the report.