Data Processing on Modern Hardware

Type: Lecture + Exercise/Tutorial
Semester hours per week (SWS): 3
Credit Points: 4

Course Description / Comments

The roots of many of today's production database systems date back thirty years or more. Prominent systems such as IBM's System R or the then-research prototype Ingres were first developed in the 1970s and were designed for the hardware landscape of the time: disks or even tapes were the only media that could hold reasonable amounts of data; main memory could be considered truly random access; and the major cost factor in database processing was I/O.

Since that time, computer architectures have changed significantly. RAM chips have become cheap enough to make in-memory processing feasible; caches and other architectural details lead to non-uniform memory access cost (an increasingly relevant performance factor); and the omnipresence of multi-core systems adds a whole new class of complexity to the problem.

In this course we look at how architectural changes affect database systems. Rather than suffering from the widening latency gap between the CPU and main memory, for instance, we can use the available CPU caches to our advantage. A cache-aware design can improve the performance of a database operation by orders of magnitude. Likewise, modern CPU features (such as vector instructions) or specialized processors (like IBM's Cell processor or the NVIDIA CUDA architecture) can accelerate database tasks if the respective implementation has been designed carefully.
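To make the cache argument concrete, the following minimal sketch (not taken from the course material; the matrix size and timing method are illustrative choices) sums the same in-memory matrix twice: once in row-major order, which reads consecutive addresses and exploits whole cache lines, and once in column-major order, which jumps by a full row on every access. Both loops compute the same result, but on typical commodity hardware the cache-friendly one runs several times faster.

/* cache_demo.c -- illustrative sketch: cache-friendly vs. cache-hostile
 * traversal of a 4096 x 4096 int matrix (64 MiB, larger than typical caches).
 */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int *m = malloc((size_t)N * N * sizeof *m);
    if (!m) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++)
        m[i] = 1;

    /* Row-major traversal: consecutive addresses, roughly one cache
     * miss per cache line. */
    double t0 = seconds();
    long sum1 = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum1 += m[i * N + j];
    double t1 = seconds();

    /* Column-major traversal: stride of N ints, so nearly every
     * access touches a new cache line. */
    long sum2 = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum2 += m[i * N + j];
    double t2 = seconds();

    printf("row-major:    sum=%ld  %.3f s\n", sum1, t1 - t0);
    printf("column-major: sum=%ld  %.3f s\n", sum2, t2 - t1);
    free(m);
    return 0;
}

Compile with optimizations (e.g., cc -O2 cache_demo.c) and run; the observed gap between the two timings is one of the effects the course examines in detail.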

We will complement this course with many hands-on exercises, in which we verify some of the presented techniques on commodity hardware.

For more information, see the course website: http://www.systems.ethz.ch/education/courses/hs10/mmdbms