I only quickly glanced over this PR, but do I understand correctly that we're treating benchmarks as tests here? (I'm not sure I agree with that concept.)
I’m not sure this is the best place for the conversation, but generally speaking, benchmarks are a form of testing. The difference is that they evaluate performance against previous runs rather than behavior.
Are we adding parameters to the benchmark setup that only apply to a smaller subset of the benchmarks?
This applies to every benchmark requiring a node context. We currently have 33 of them.
Could we configure them via environment variables instead, and call the benchmark with e.g. `TEST_DATA_DIR=a/b/c bench`?
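A minimal sketch of what that could look like; `TEST_DATA_DIR` is from the suggestion above, but the helper name `GetDataDirForBench` and the fallback path are illustrative, not from this PR:

```cpp
#include <cstdlib>
#include <filesystem>
#include <iostream>

// Resolve the benchmark data directory from the environment,
// falling back to a temporary default when the variable is unset.
static std::filesystem::path GetDataDirForBench()
{
    if (const char* dir = std::getenv("TEST_DATA_DIR")) {
        return std::filesystem::path{dir};
    }
    return std::filesystem::temp_directory_path() / "bench_datadir";
}

int main()
{
    // With `TEST_DATA_DIR=a/b/c bench`, this prints "a/b/c".
    std::cout << GetDataDirForBench() << '\n';
}
```

The benchmark binary's interface would stay unchanged; only the environment decides where the node context's data lives.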
Please check the conversation above #31000 (comment).
If, for example, a few of our tests required a timestamp for whatever reason, we wouldn't add it to the testing framework as an additional parameter, right?
Unlike the hardware device type the benchmark runs on, which can't be standardized for all users, software-level variables like timestamps can be set in a general manner if needed. I don't think there will be many args like this one.
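For illustration, a minimal sketch of setting such a variable "in a general manner": `SetMockTime` and `GetTimeForBench` here are hypothetical stand-ins for whatever clock-override hook the framework would provide, not an API from this PR.

```cpp
#include <chrono>
#include <iostream>

// Global override; zero means "use the real clock". Illustrative only.
static std::chrono::seconds g_mock_time{0};

// Fix the clock seen by every benchmark with one global setting,
// instead of threading a timestamp parameter through the setup.
static void SetMockTime(std::chrono::seconds t) { g_mock_time = t; }

// Benchmarks read time through this helper, so the override applies uniformly.
static std::chrono::seconds GetTimeForBench()
{
    using namespace std::chrono;
    if (g_mock_time.count() != 0) return g_mock_time;
    return duration_cast<seconds>(system_clock::now().time_since_epoch());
}

int main()
{
    SetMockTime(std::chrono::seconds{1700000000}); // pin a fixed timestamp
    std::cout << GetTimeForBench().count() << '\n'; // prints 1700000000
}
```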
So if my understanding is correct, I’m leaning towards a concept NACK - please let me know if I misunderstood it.
I think you understood it. I just have a different opinion.