LTTng is the best tracer by far.
But I remember doing performance tests on some open-source projects, using the LTTng tracer to record performance metrics by enabling perf event counters. However, as soon as we enabled three perf counters at once, LTTng would crash!
But why did we need more than two perf events enabled at the same time?
Well, the answer is simple: to make sure there were no cross-influences, i.e. noise, in the measurements. We wanted to be sure that a given run produced that value for that event. For our experiments, which involved several runs, this is very important.
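For context, this is roughly the kind of session setup involved. The session and event names here are illustrative, not our exact configuration; in LTTng, perf counters are attached as contexts, one `add-context` call per counter:

```shell
# Illustrative sketch: a kernel tracing session with perf counter contexts.
# Requires lttng-tools and root (or membership in the tracing group).
lttng create perf-session
lttng enable-event --kernel sched_switch

# Each add-context attaches one perf counter to every recorded event.
lttng add-context --kernel --type perf:cpu:cycles
lttng add-context --kernel --type perf:cpu:cache-misses

# Adding a third counter is where we hit the crash:
lttng add-context --kernel --type perf:cpu:instructions

lttng start
```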
Our workaround was a cross-validation scheme: to avoid the crash, we measured the performance metrics in pairs and then compared the results in a table. It was not quite the Apriori algorithm, more like a cross-validation table.
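The pairing idea can be sketched as follows. This is a minimal illustration, not our actual tooling: enumerate all counter pairs (one trace run per pair, so each event is measured in several runs), then fold the per-run results into a table for cross-checking.

```python
import itertools

def pairwise_runs(events):
    """Split a list of perf events into runs of at most two counters each.

    Instead of enabling all counters in a single trace (which crashed for
    us beyond two), each run records one pair; every event still appears
    in len(events) - 1 runs, so its values can be cross-validated.
    """
    return list(itertools.combinations(events, 2))

def cross_table(measurements):
    """Fold per-run {event: value} dicts into an {event: [values]} table."""
    table = {}
    for run in measurements:
        for event, value in run.items():
            table.setdefault(event, []).append(value)
    return table

events = ["cycles", "cache-misses", "instructions"]
runs = pairwise_runs(events)
# Three events yield three runs:
# [("cycles", "cache-misses"), ("cycles", "instructions"),
#  ("cache-misses", "instructions")]
```

Comparing the per-event value lists across runs is then a quick way to spot a run where one counter perturbed another.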