Benchmarks
This section describes the bottlenecks exercised by the benchmarks in our suite and the kernel components each benchmark stresses. Performance results and analysis are also included for some of the benchmarks used by the Linux performance team.
Table 1. Linux kernel performance benchmarks

| Linux kernel component | Database query | VolanoMark | SPECweb99 / Apache2 | NetBench | Netperf | LMBench | TioBench / IOZone |
|---|---|---|---|---|---|---|---|
| Scheduler | X | X | X | | | | |
| Disk I/O | X | X | | | | | |
| Block I/O | X | | | | | | |
| Raw, Direct & Async I/O | X | | | | | | |
| Filesystem (ext2 & journaling) | X | X | X | X | | | |
| TCP/IP | X | X | X | X | X | | |
| Ethernet driver | X | X | X | X | | | |
| Signals | X | X | | | | | |
| Pipes | X | | | | | | |
| Sendfile | X | X | | | | | |
| pThreads | X | X | X | | | | |
| Virtual memory | X | X | X | | | | |
| SMP scalability | X | X | X | X | X | X | |
Benchmark descriptions
The benchmarks are selected according to two criteria: industry benchmarks that are reliable indicators of a complex workload, and component-level benchmarks that expose specific kernel performance problems. Industry benchmarks are generally accepted by the industry as measures of the performance and scalability of a specific workload. These benchmarks often require a complex or expensive setup that is not available to most of the Open Source community (OSC); providing such setups is one of our contributions to the OSC. Examples include:
- SPECweb99: Representative of Web-serving performance
- SPECsfs: Representative of NFS performance
- Database query: Representative of database-query performance
- NetBench: Representative of SMB file-serving performance
Component-level benchmarks measure performance and scalability of specific Linux kernel components that are deemed critical to a wide spectrum of workloads. Examples include:
- Netperf3: Measures performance of the network stack, including TCP, IP, and network device drivers
- VolanoMark: Measures performance of the scheduler, signals, TCP send/receive, and loopback
- Block I/O Test: Measures performance of VFS, raw and direct I/O, the block device layer, the SCSI layer, and low-level SCSI/Fibre device drivers
Some benchmarks are commonly used by the OSC. These are preferred because the community already accepts their importance, which makes it easier to demonstrate the performance and scalability bottlenecks they illuminate. In addition, there are generally no licensing issues that prevent us from publishing raw data. The OSC can run these benchmarks because they are usually simple to set up and require minimal hardware. On the other hand, they often do not meet our requirements for enterprise systems. Examples include:
- LMBench: Used to measure performance of the Linux APIs
- IOZone: Used to measure native file system throughput
- DBench: Used to measure the file system component of NetBench
- SMB Torture: Used to measure SMB file-serving performance
Many benchmark options are available for our targeted workloads. We chose the ones listed above because they are best suited to our mission, given our resources. Some important benchmarks were deliberately omitted, including those already under study by other performance teams within IBM (for example, the IBM Solution Technologies System Performance Team has found that SPECjbb on Linux is "good enough"). Table 1 presents the benchmarks currently used by the Linux performance team and the kernel components each one targets.