Grafana Benchmark
Although we have explained the reasoning behind our Intelligent Cache and the need for faster cores, there is nothing quite like seeing the result.
Rather than use one of our internal verification projects or build a facsimile of a complex project, we benchmarked Grafana. Grafana provides enough complexity to demonstrate a real-world caching bottleneck while also exercising two languages and a container build.
While Grafana does not use Gitlab CI, we used its existing CI definitions as a guide to build a simple but realistic Gitlab CI setup. The result: Cedar CI was 6-9X faster than Gitlab.com.
Setup
The .gitlab-ci.yml consists of three build jobs:

- build:container: install dependencies and build both the Go and Node.js applications in a container
- build:go: install dependencies, run codegen, and build
- build:node: install dependencies and build
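For orientation, a heavily simplified sketch of such a .gitlab-ci.yml (job names are from this post; the images and script commands are illustrative assumptions, not the actual definitions):

```yaml
build:container:
  image: docker:latest
  services:
    - docker:dind
  script:
    # the image build compiles both the Go and Node.js apps
    - docker build -t grafana:ci .

build:go:
  image: golang:1.22
  script:
    - go generate ./...   # codegen
    - go build ./...

build:node:
  image: node:20
  script:
    - yarn install
    - yarn build
```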
For the Gitlab.com hosted runner we configured a cache stanza that includes all untracked files. The cache key is simplified to ${branch}-${language}; effectively supporting feature branches would require a more complicated setup.
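A minimal form of such a stanza, assuming the Go job as an example (GitLab's `untracked: true` caches every file not tracked by git):

```yaml
build:go:
  cache:
    key: "$CI_COMMIT_REF_SLUG-go"   # the simplified ${branch}-${language} key
    untracked: true
```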
For the Cedar CI enhanced runner we configured equivalent resources (CPU and memory).
The package manager caches are configured via GOPATH and YARN_CACHE_FOLDER to live within CI_PROJECT_DIR so that the files are cached by both runners.
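A sketch of that configuration (the directory names are illustrative assumptions):

```yaml
variables:
  GOPATH: "$CI_PROJECT_DIR/.go"                # Go package cache inside the project dir
  YARN_CACHE_FOLDER: "$CI_PROJECT_DIR/.yarn"   # Yarn package cache inside the project dir
```

Because both paths sit under CI_PROJECT_DIR and are untracked, the untracked cache picks them up automatically.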
The runner tags are configured per branch: the same source tree is tagged as the gitlab and cedarci branches. A build pipeline is executed on push, and an empty commit is made to trigger a follow-up pipeline after the first has completed, ensuring warm caches are available.
Result
The first pipeline seen by both runners has no cache available and thus demonstrates the raw power difference, while the second pipeline benefits from cache.
Caching all untracked files on both runners keeps the results directly comparable.
build:node
The Node.js job benefits from the cache on both runners and the Cedar CI clock speed improvement.
- cold: 4X faster than Gitlab.com
- warm: 9X faster than Gitlab.com
The Intelligent Cache adds no overhead, while the vanilla cache used by Gitlab.com takes 1m 1s to restore and 1m 52s to save on a 4m 12s job. That's 68.7% of the job spent processing the cache.
The actual core of the job went from 5m 55s to 1m 1s. That is still much slower than Cedar CI's 22s, but much closer to the original 4X gap than to the 9X one. This demonstrates the diminishing returns of traditional cache techniques.
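The 68.7% figure follows directly from the timings above:

```python
# Gitlab.com build:node job timings from this post, in seconds
restore = 61   # 1m 1s to restore the cache
save = 112     # 1m 52s to save the cache
total = 252    # 4m 12s total job time

overhead = (restore + save) / total
print(f"{overhead:.1%}")  # 68.7% of the job spent processing the cache
```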
build:go
The Go job benefits from cache, but not as much as the Node.js job. The Cedar CI clock speed makes the biggest difference.
- cold: 4X faster than Gitlab.com
- warm: 6X faster than Gitlab.com
Interestingly, the Gitlab.com cache performs so poorly that the delta shown is actually negative. While the core of the job improved from 2m 20s to 1m 34s, the cache eats so much extra time that the job takes 18s longer overall. The cache consumes 57.5% of the job.
Given such poor cache performance, the Gitlab.com job would be better off without the cache entirely, though at the cost of reduced reliability from always downloading the dependencies.
build:container
Since the container build performs the most strenuous disk and network tasks while duplicating the previous two workloads concurrently, it naturally shows the biggest improvement. Our superior network and disk I/O, combined with a much faster CPU that allows bursting, demonstrate the drastic difference bespoke hardware can achieve.
An impressive 6X faster than Gitlab.com without changes.
While the container build could be optimized to take advantage of the Intelligent Cache, doing so would deviate from our goal of benchmarking an existing project as-is.
Conclusion
The results demonstrate the performance achieved by utilizing Cedar CI without making any changes. The diminishing returns of traditional caching are clearly visible even in this simplified example. Bespoke hardware also makes a big difference: the more work being performed, the bigger the Cedar CI advantage.
The same definitions ran on both Gitlab.com and Cedar CI, which demonstrates our smooth migration process. From here, the jobs could be tuned for even more performance on Cedar CI.