Fourth Workshop on Big Data Benchmarking
Workshop: October 9-10, 2013
Venue: Brocade IMC Theatre, Building 3, 130 Holger Way, San Jose, CA.
To be successful, a benchmark should be:
- Simple to implement and execute;
- Cost effective, so that the benefits of executing the benchmark justify its expense;
- Timely, with benchmark versions keeping pace with rapid changes in the marketplace; and
- Verifiable so that results of the benchmark can be validated via independent means.
Based on discussions at the previous big data benchmarking workshops, two benchmark proposals are currently under consideration. The first, called BigBench (to appear at the ACM SIGMOD 2013 conference), is based on extending the Transaction Processing Performance Council's Decision Support benchmark (TPC-DS) with semi-structured and unstructured data and new queries targeted at those data. The second is based on a Deep Analytics Pipeline for event processing.
To make progress towards a big data benchmarking standard, the workshop will explore a range of issues including:
- Data features: New feature sets of data, including high-dimensional data, sparse data, event-based data, and enormous data sizes.
- System characteristics: System-level issues, including large-scale and evolving system configurations, shifting loads, and heterogeneous technologies for big data and cloud platforms.
- Implementation options: Different implementation options, such as SQL, NoSQL, the Hadoop software ecosystem, and different implementations of HDFS.
- Workload: Representative big data business problems and corresponding benchmark implementations. Specification of benchmark applications that represent the different modalities of big data, including graphs, streams, scientific data, and document collections.
- Hardware options: Evaluation of new hardware options, including different types of HDD, SSD, and main memory, and large-memory systems, as well as new platform options such as dedicated commodity clusters and cloud platforms.
- Synthetic data generation: Models and procedures for generating large-scale synthetic data with requisite properties.
- Benchmark execution rules: For example, data scale factors, benchmark versioning to account for rapidly evolving workloads and system configurations, and benchmark metrics.
- Metrics for efficiency: Measuring the efficiency of the solution, e.g. based on costs of acquisition, ownership, energy, and/or other factors, while encouraging innovation and avoiding benchmark escalations that favor large inefficient configurations over small efficient configurations.
- Evaluation frameworks: Tool chains, suites and frameworks for evaluating big data systems.
- Early implementations: Experiences with early implementations of the Deep Analytics Pipeline or BigBench, and lessons learned in benchmarking big data applications.
- Enhancements: Proposals to augment these benchmarks, such as by adding more data genres (e.g. graphs) or incorporating a range of machine learning and other algorithms, are welcomed and encouraged.
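As a concrete illustration of the synthetic data generation and scale-factor topics above, here is a minimal sketch of a scale-factor-driven generator that produces an event log with a skewed (Zipf-like) access distribution, a property often required of realistic big data test sets. The function name `generate_events` and the specific parameters (1,000 base rows per scale unit, 100 users) are illustrative assumptions, not part of any proposed benchmark.

```python
import random

def generate_events(scale_factor, seed=0):
    """Generate a synthetic event log whose row count grows linearly
    with the scale factor and whose user IDs follow a skewed,
    Zipf-like distribution (illustrative parameters, not a spec)."""
    rng = random.Random(seed)       # fixed seed for repeatable runs
    base_rows = 1000                # rows per unit of scale factor (assumed)
    num_users = 100                 # size of the user population (assumed)
    # Zipf-like weights: user i is roughly 1/(i+1) as active as user 0.
    weights = [1.0 / (i + 1) for i in range(num_users)]
    events = []
    for t in range(base_rows * scale_factor):
        user = rng.choices(range(num_users), weights=weights)[0]
        events.append((t, user))    # (timestamp, user_id) event record
    return events

events = generate_events(scale_factor=2)
print(len(events))  # 2000 rows at scale factor 2
```

Tying data volume to a single scale factor mirrors how TPC-style benchmarks parameterize dataset size, while the fixed seed keeps results verifiable across independent runs.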
Read more at the Fourth Workshop on Big Data Benchmarking website.