
Grouping Parametrized Benchmarks With Pytest

I'm currently benchmarking an implementation of an AVL tree I made against a non-rebalancing binary search tree using pytest-benchmark. It has been working well for me so far, but I would like to group the benchmark results by test case and input size so the two tree types can be compared directly.

Solution 1:

There was a useful comment saying that the master branch of pytest-benchmark is in the process of adding support for this exact feature, but I was unable to get it to work (fingers crossed for the next release).

In the meantime, I figured out this handy workaround. With this method I'm able to group by test case, but not by (case, n). I added a @benchmark_this decorator above each test case to wrap the benchmark call. It's pretty handy even without the extra benefit of grouping by test case!

import random
import pytest

# BaseTree and AvlTree are the author's tree implementations (import not shown).

def benchmark_this(test):
  def wrapper(benchmark, t, n):
    # run the decorated test inside benchmark(); its own benchmark arg is unused
    benchmark(test, None, t, n)
  return wrapper

types = [BaseTree, AvlTree]
sizes = [100, 300, 1000]

@pytest.mark.parametrize('t', types)
@pytest.mark.parametrize('n', sizes)
@benchmark_this
def test_insertRandomOrder(benchmark, t, n):
  random.seed(0x1C2C6D66)
  tree = t()
  for i in range(n):
    tree.insert(random.randint(0, 0x7FFFFFFF), i)

@pytest.mark.parametrize('t', types)
@pytest.mark.parametrize('n', sizes)
@benchmark_this
def test_insertDescendingOrder(benchmark, t, n):
  tree = t()
  for i in range(n):
    tree.insert(n - i, i)

# ...
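If you want to try the snippet without the original tree module, a minimal pair of stand-in classes is enough to make it run. The implementations below are hypothetical placeholders, not the author's code: BaseTree is a plain non-rebalancing BST, and AvlTree is left as a bare subclass for brevity (a real AVL tree would rebalance on insert).

class BaseTree:
  # stand-in: plain binary search tree with no rebalancing
  class Node:
    def __init__(self, key, value):
      self.key, self.value = key, value
      self.left = self.right = None

  def __init__(self):
    self.root = None

  def insert(self, key, value):
    if self.root is None:
      self.root = BaseTree.Node(key, value)
      return
    node = self.root
    while True:
      side = 'left' if key < node.key else 'right'
      child = getattr(node, side)
      if child is None:
        setattr(node, side, BaseTree.Node(key, value))
        return
      node = child

class AvlTree(BaseTree):
  # stand-in only: a real AVL tree rebalances after each insert
  pass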

Invoked with

py.test --benchmark-group-by=func
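Depending on your pytest-benchmark version, the grouping may also be achievable without the decorator: --benchmark-group-by accepts 'param' and 'param:NAME' selectors, and newer releases allow combining several selectors with commas, which should give the (case, n) grouping directly. Treat the exact invocations below as version-dependent:

py.test --benchmark-group-by=param:n
py.test --benchmark-group-by=func,param:n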
