Craig Ulmer

High-Performance Data-Intensive Computing

2008-08-14 io pub

Maya Gokhale's Storage Intensive Super Computer (SISC) project at LLNL explored a number of new technologies for improving the performance of data-intensive applications. In addition to experimenting with commercial database appliances (e.g., Netezza, LexisNexis, and XtremeData), we also looked at hardware accelerators (GPUs and FPGAs) and new flash-memory devices from Fusion-io. We wrote a paper for IEEE Computer describing our experiences with these technologies.

Fun Sidenote: The "ioMemory" device listed in the paper was an early prototype of Fusion-io's ioDimm product. They asked us to change the name a day before publication so we wouldn't spoil their official product release.


Data-intensive problems challenge conventional computing architectures with demanding CPU, memory, and I/O requirements. Experiments with three benchmarks suggest that emerging hardware technologies can significantly boost performance of a wide range of applications by increasing compute cycles and bandwidth and reducing latency.


  • IEEE Computer Paper: Maya Gokhale, Jonathan Cohen, Andy Yoo, W. Marcus Miller, Arpith Jacob, Craig Ulmer, and Roger Pearce, "Hardware Technologies for High-Performance Data-Intensive Computing," IEEE Computer, vol. 41, no. 4, April 2008.