
Overview of SPEC 2016 Asian Summit and the First SPEC Cloud IaaS 2016 Benchmark

The article summarizes the SPEC 2016 Asian Summit, explains SPEC's role in defining server and cloud performance benchmarks—including CPU, web, and file‑system tests—and highlights the introduction of the SPEC Cloud IaaS 2016 benchmark for evaluating cloud infrastructure performance.

Architects' Tech Alliance

The SPEC 2016 Asian Summit concluded with experts from around the world gathering in China, underscoring the growing global attention on China's computing capabilities. Detailed coverage is available on the ZD Top website.

SPEC (Standard Performance Evaluation Corporation) is a worldwide, authoritative third‑party organization that establishes, modifies, and validates performance evaluation standards for server applications. Its benchmarks are widely used in finance, telecommunications, securities, and other critical industries as a trusted metric for IT system selection.

Founded in 1988 by several computer hardware vendors, SPEC now includes over 60 prominent companies such as Intel, AMD, IBM, and HP. It provides benchmark suites for mail servers, web servers, file servers, supercomputers, clusters, CPUs, and professional graphics applications.

SPEC defines ten major server‑application benchmark suites and dozens of test models. The most common suites are SPEC CPU, SPEC WEB, and SPEC Power.

CPU Benchmark Classification

SPEC CPU results are divided into base (conservative, uniform compiler optimisation) and peak (aggressive, per‑benchmark optimisation) categories. Base metrics include SPECint_base2006, SPECfp_base2006, SPECint_rate_base2006, and SPECfp_rate_base2006; the corresponding peak metrics are SPECint®2006, SPECfp®2006, SPECint®_rate2006, and SPECfp®_rate2006. Independently of base and peak, each suite runs in speed mode (execution time of a single copy) and rate mode (throughput of multiple concurrent copies).
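A SPEC CPU score is the geometric mean of per‑benchmark ratios, where each ratio is the reference machine's time divided by the measured time. The sketch below illustrates that calculation; the timing pairs are hypothetical, not actual SPEC reference times.

```python
from math import prod

def spec_ratio_score(results):
    """Overall score as the geometric mean of per-benchmark ratios.

    Each ratio is reference_time / measured_time, so higher is faster.
    """
    ratios = [ref / measured for ref, measured in results]
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical (reference_seconds, measured_seconds) pairs for three workloads
results = [(9770, 500), (12300, 820), (7020, 390)]
print(round(spec_ratio_score(results), 1))  # geometric mean of 19.54, 15.0, 18.0
```

Using a geometric rather than arithmetic mean keeps one unusually fast benchmark from dominating the composite score.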

File‑System Benchmark

SPECsfs2008 evaluates NAS file‑service performance, measuring throughput in OPS (operations per second) and response time. Nearly a hundred NAS manufacturers have published results on this benchmark; the high OPS figures they demonstrate enable scenarios such as browsing 600,000 images per second or storing 450,000 electronic invoices per second.
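The relationship between throughput (OPS) and response time follows Little's law: sustained throughput equals the number of outstanding operations divided by their average response time. A minimal sketch, with illustrative numbers chosen to match the 600,000 ops/s scale mentioned above:

```python
def achievable_ops(concurrent_ops: int, avg_response_s: float) -> float:
    """Little's law: throughput = concurrency / average response time."""
    return concurrent_ops / avg_response_s

# Illustrative: 1,200 outstanding operations at 2 ms average latency
print(achievable_ops(1200, 0.002))  # 600000.0 ops/s
```

This is why SPECsfs reports both numbers together: a system can only claim high OPS at a given response time, and pushing throughput further typically drives latency up.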

The design of high‑performance storage systems combines metadata scaling, balanced data distribution, full‑IP interconnects, memory‑assist acceleration, RDMA over TCP, automatic hot‑spot detection, and tiered storage techniques.

How to Query SPEC Test Results

1. Log in to the SPEC/OSG Result Search Engine and select the desired test type from the "Available Configurations" dropdown.
2. Refine the search with filters such as hardware vendor or CPU model.
3. Optionally limit by replica or publication date, then choose the output format.
4. Click "Fetch Results" to download the official third‑party test report.
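Once results are downloaded, they can be filtered locally. The sketch below assumes a CSV export; the column names (`vendor`, `cpu`, `benchmark`, `score`) and the sample rows are illustrative, not the official SPEC schema.

```python
import csv
import io

# Hypothetical CSV export; column names and rows are illustrative only.
raw = """vendor,cpu,benchmark,score
Dell,Xeon E5-2699 v4,SPECint_rate_base2006,1850
HP,Xeon E5-2690 v4,SPECint_rate_base2006,1420
Dell,Xeon E5-2650 v4,SPECfp_rate_base2006,980
"""

def filter_results(csv_text, vendor=None, benchmark=None):
    """Return rows matching the given vendor and/or benchmark filters."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows
            if (vendor is None or r["vendor"] == vendor)
            and (benchmark is None or r["benchmark"] == benchmark)]

hits = filter_results(raw, vendor="Dell", benchmark="SPECint_rate_base2006")
print([r["score"] for r in hits])  # ['1850']
```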

SPEC 2016 Summit Highlights

Among many topics, the most significant announcement was the release of the first cloud‑computing benchmark, SPEC Cloud IaaS 2016. The benchmark targets cloud providers, consumers, hardware, virtualization, and application vendors, measuring workloads such as object storage, document storage, event‑driven systems, NoSQL transactions, and MapReduce clusters. It exercises provisioning under high load along with I/O‑ and CPU‑intensive workloads, addressing performance challenges in both private and public IaaS environments.

Early submissions include Dell's results on Red Hat Enterprise Linux OpenStack Platform 7/8 with KVM virtualization, showcasing capabilities in elasticity, scalability, and Mean Instance Provisioning Time.
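Of the three headline metrics, Mean Instance Provisioning Time (MIPT) is the simplest: the average elapsed time from a provisioning request until the instance is usable. A minimal sketch with hypothetical timing samples:

```python
from statistics import mean

def mean_instance_provisioning_time(samples_s):
    """Average seconds from the provisioning request until the instance
    first responds; lower is better."""
    return mean(samples_s)

# Hypothetical per-instance provisioning times in seconds
samples = [42.0, 38.5, 55.2, 40.3]
print(round(mean_instance_provisioning_time(samples), 1))  # 44.0
```

Elasticity and scalability, by contrast, are composite scores derived from how workload performance holds up as instances are added, so they cannot be reduced to a single average in the same way.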

Tags: cloud computing, benchmark, storage, CPU performance, SPEC
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
