# Add benchmarks
This guide illustrates how to write a simple benchmark for a pallet, test the benchmark, and run benchmarking commands to generate realistic estimates about the execution time required for the functions in a pallet. This guide does not cover how to use the benchmarking results to update transaction weights.
## Add benchmarking to the pallet

1. Open the `Cargo.toml` file for your pallet in a text editor.

2. Add the `frame-benchmarking` crate to the `[dependencies]` for the pallet, using the same version and branch as the other dependencies in the pallet. For example:

   ```toml
   frame-benchmarking = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/polkadot-sdk.git", branch = "polkadot-v1.0.0", optional = true }
   ```

3. Add `runtime-benchmarks` to the list of `[features]` for the pallet. For example:

   ```toml
   [features]
   runtime-benchmarks = ["frame-benchmarking/runtime-benchmarks"]
   ```

4. Add `frame-benchmarking/std` to the list of `std` features for the pallet. For example:

   ```toml
   std = [
     ...
     "frame-benchmarking/std",
     ...
   ]
   ```
## Add a benchmarking module

1. Create a new text file (for example, `benchmarking.rs`) in the `src` folder for your pallet.

2. Open the `benchmarking.rs` file in a text editor and create a Rust module that defines benchmarks for your pallet.

   You can use the `benchmarking.rs` for any prebuilt pallet as an example of what to include in the Rust module. In general, the module should include code similar to the following:

   ```rust
   #![cfg(feature = "runtime-benchmarks")]
   mod benchmarking;

   use crate::*;
   use frame_benchmarking::{benchmarks, whitelisted_caller};
   use frame_system::RawOrigin;

   benchmarks! {
     // Add individual benchmarks here
     benchmark_name {
       /* code to set the initial state */
     }: {
       /* code to test the function benchmarked */
     }
     verify {
       /* optional verification */
     }
   }
   ```

3. Write individual benchmarks to test the most computationally expensive paths for the functions in the pallet.

   The benchmarking macro automatically generates a test function for each benchmark you include in the benchmarking module. For example, the macro creates test functions similar to the following:

   ```rust
   fn test_benchmark_[benchmark_name]<T>() -> Result<(), &'static str>
   ```

   The benchmarking module for `pallet-example-basic` provides a few simple sample benchmarks. For example:

   ```rust
   benchmarks! {
     set_dummy_benchmark {
       // Benchmark setup phase
       let b in 1 .. 1000;
     }: set_dummy(RawOrigin::Root, b.into()) // Execution phase
     verify {
       // Optional verification phase
       assert_eq!(Pallet::<T>::dummy(), Some(b.into()))
     }
   }
   ```

   In this sample code:

   - The name of the benchmark is `set_dummy_benchmark`.
   - The variable `b` stores input that is used to test the execution time of the `set_dummy` function.
   - The value of `b` varies from 1 to 1,000, so you can run the benchmark test repeatedly to measure the execution time using different input values.
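The module skeleton shown earlier imports `whitelisted_caller`, which the root-origin sample above does not use. When you benchmark an extrinsic dispatched by a signed account, you typically use a whitelisted caller so that reads and writes to that account's own data are excluded from the measured database weight. The following is a hypothetical sketch, assuming the pallet exposes a `do_something(origin, value: u32)` dispatchable that stores the value in a `Something` storage item with a `something()` getter; substitute your pallet's actual call and storage:

```rust
benchmarks! {
  do_something_benchmark {
    // Setup phase: whitelisted_caller() returns an account whose database
    // accesses are ignored when measuring storage read/write weight.
    let caller: T::AccountId = whitelisted_caller();
    let value = 100u32;
  }: do_something(RawOrigin::Signed(caller), value) // Execution phase
  verify {
    // Hypothetical getter for the `Something` storage item.
    assert_eq!(Pallet::<T>::something(), Some(value));
  }
}
```

As with the sample above, the setup block runs before timing starts, the call after `}:` is the measured execution, and `verify` runs afterward to confirm the call had the intended effect.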
## Test the benchmarks

After you have added benchmarks to the `benchmarks!` macro in the benchmarking module for your pallet, you can use a mock runtime to do unit testing and ensure that the test functions for your benchmarks return `Ok(())` as a result.

1. Open the `benchmarking.rs` benchmarking module in a text editor.

2. Add the `impl_benchmark_test_suite!` macro to the bottom of your benchmarking module:

   ```rust
   impl_benchmark_test_suite!(
     MyPallet,
     crate::mock::new_test_ext(),
     crate::mock::Test,
   );
   ```

   The `impl_benchmark_test_suite!` macro takes the following input:

   - The `Pallet` struct generated by your pallet, in this example `MyPallet`.
   - A function that generates a test genesis storage, `new_test_ext()`.
   - The full mock runtime struct, `Test`.

   This is the same information you use to set up a mock runtime for unit testing. If all benchmark tests pass in the mock runtime test environment, it's likely that they will work when you run the benchmarks in the actual runtime.
3. Execute the benchmark unit tests generated for your pallet in a mock runtime by running a command similar to the following for a pallet named `pallet-mycustom`:

   ```bash
   cargo test --package pallet-mycustom --features runtime-benchmarks
   ```

4. Verify the test results. For example:

   ```text
   running 4 tests
   test mock::__construct_runtime_integrity_test::runtime_integrity_tests ... ok
   test tests::it_works_for_default_value ... ok
   test tests::correct_error_for_none_value ... ok
   test benchmarking::bench_do_something ... ok

   test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
   ```
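The `crate::mock` module passed to `impl_benchmark_test_suite!` is the same mock runtime you build for ordinary unit tests. A minimal sketch of the `new_test_ext()` helper, assuming a mock runtime struct named `Test` is already defined in `mock.rs` (the exact genesis-builder API differs slightly between Substrate versions):

```rust
// mock.rs (sketch): build genesis storage for the mock runtime `Test`
// and wrap it in TestExternalities for use in tests.
// Note: on more recent branches the equivalent call is instead
//   frame_system::GenesisConfig::<Test>::default().build_storage()
pub fn new_test_ext() -> sp_io::TestExternalities {
    frame_system::GenesisConfig::default()
        .build_storage::<Test>()
        .expect("genesis storage should build")
        .into()
}
```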
## Add benchmarking to the runtime
After you have added benchmarking to your pallet, you must also update the runtime to include the pallet and the benchmarks for the pallet.
1. Open the `Cargo.toml` file for your runtime in a text editor.

2. Add your pallet to the list of `[dependencies]` for the runtime:

   ```toml
   pallet-mycustom = { default-features = false, path = "../pallets/pallet-mycustom" }
   ```

3. Update the `[features]` for the runtime to include the `runtime-benchmarks` feature for your pallet:

   ```toml
   [features]
   runtime-benchmarks = [
     ...
     'pallet-mycustom/runtime-benchmarks',
     ...
   ]
   ```

4. Update the `std` features for the runtime to include your pallet:

   ```toml
   std = [
     # -- snip --
     'pallet-mycustom/std',
   ]
   ```

5. Add the configuration trait for your pallet to the runtime.

6. Add the pallet to the `construct_runtime!` macro.

   If you need more details about adding a pallet to the runtime, see Add a pallet to the runtime or Import a pallet.

7. Add your pallet to the `define_benchmarks!` macro in the `runtime-benchmarks` feature:

   ```rust
   #[cfg(feature = "runtime-benchmarks")]
   mod benches {
     define_benchmarks!(
       [frame_benchmarking, BaselineBench::<Runtime>]
       [pallet_assets, Assets]
       [pallet_babe, Babe]
       ...
       [pallet_mycustom, MyPallet]
       ...
     );
   }
   ```
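The two steps above that are described without code (implementing the pallet's configuration trait and adding the pallet to `construct_runtime!`) might look like the following sketch for `pallet-mycustom`. The `Config` item shown is hypothetical; implement whatever associated types your pallet actually declares:

```rust
// runtime/src/lib.rs (sketch)

// Implement the pallet's configuration trait for the runtime.
impl pallet_mycustom::Config for Runtime {
    // Hypothetical associated type; most pallets at least declare an event type.
    type RuntimeEvent = RuntimeEvent;
}

// Register the pallet in the runtime under the name used elsewhere
// in this guide (MyPallet).
construct_runtime!(
    pub struct Runtime {
        System: frame_system,
        // -- snip --
        MyPallet: pallet_mycustom,
    }
);
```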
## Run your benchmarks

After you update the runtime, you are ready to compile it with the `runtime-benchmarks` feature enabled and start the benchmarking analysis for your pallet.
1. Build your project with the `runtime-benchmarks` feature enabled by running the following command:

   ```bash
   cargo build --package node-template --release --features runtime-benchmarks
   ```

2. Review the command-line options for the node `benchmark pallet` subcommand:

   ```bash
   ./target/release/node-template benchmark pallet --help
   ```

   The `benchmark pallet` subcommand supports several command-line options that can help you automate your benchmarking. For example, you can set the `--steps` and `--repeat` command-line options to execute function calls multiple times with different values.

3. Start benchmarking for your pallet by running a command similar to the following:

   ```bash
   ./target/release/node-template benchmark pallet \
     --chain dev \
     --pallet pallet_mycustom \
     --extrinsic '*' \
     --steps 20 \
     --repeat 10 \
     --output pallets/pallet-mycustom/src/weights.rs
   ```

   This command creates a `weights.rs` file in the specified directory. For information about how to configure your pallet to use those weights, see Use custom weights.
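Although applying the results to transaction weights is out of scope for this guide, it helps to know the shape of the output. A generated `weights.rs` typically defines a `WeightInfo` trait with one function per benchmarked extrinsic, plus an implementation whose constants come from the measured results. The sketch below is illustrative only; the numbers are placeholders, not real benchmark output:

```rust
// weights.rs (illustrative shape only; all numbers are placeholders).
pub trait WeightInfo {
    fn set_dummy(b: u32) -> Weight;
}

pub struct SubstrateWeight<T>(PhantomData<T>);
impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
    fn set_dummy(b: u32) -> Weight {
        // Base execution time, plus a per-unit component for the
        // benchmark parameter `b`, plus the cost of one storage write.
        Weight::from_parts(10_000_000, 0)
            .saturating_add(Weight::from_parts(50_000, 0).saturating_mul(b.into()))
            .saturating_add(T::DbWeight::get().writes(1_u64))
    }
}
```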
## Examples

You can use the `benchmarking.rs` and `weights.rs` files for any prebuilt pallet to learn more about benchmarking different types of functions.