Surveys show that algorithmic traders do not lose most of their time developing a concept or coding a strategy.

Most of the time actually goes into:

- analyzing the strategy;

- assessing its robustness;

- searching for optimal parameters for live trading.

In this article, we will talk about finding optimal and stable strategy parameters for live trading.

The effectiveness of any optimizer can be assessed using two indicators:

1. The accuracy of finding a stable area;

2. The time spent solving this problem.

Quality can only be assessed by comparison with exhaustive (brute-force) search, which many algorithmic traders still use.
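For reference, brute force simply evaluates every point on a parameter grid. A minimal Python sketch, with a toy objective function standing in for a real backtest metric (the parameter names and ranges are illustrative assumptions, not Wealth-Lab's):

```python
import itertools

# Toy stand-in for a real backtest metric, peaking at fast=10, slow=50.
def sharpe(fast, slow):
    return 3.0 - 0.01 * (fast - 10) ** 2 - 0.002 * (slow - 50) ** 2

fast_range = range(2, 21)        # 19 candidate values
slow_range = range(20, 101, 5)   # 17 candidate values

# Exhaustive search: one backtest per grid point.
results = {(f, s): sharpe(f, s)
           for f, s in itertools.product(fast_range, slow_range)}

best = max(results, key=results.get)
print(len(results), best)  # 323 runs; the maximum sits at (10, 50)
```

The run count grows multiplicatively with each parameter, which is why a full sweep over a real strategy can take days while heuristic optimizers finish in hours.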

Although modern algorithms have proven effective, they still do not guarantee an ideal result. As innovative algorithms grow more and more popular, it is useful to master tools that help you figure out which optimization algorithm is most likely to lead us to stable parameters!

A visual way of working with the results of a trading algorithm is a 3D presentation. Out of the box, Wealth-Lab can display in this form only Brute Force optimization values, and only briefly, because the graph disappears after the next optimization run.

More details on full brute-force 3D in Wealth-Lab in the video:

How to use the optimizer more efficiently? Analyzing stable areas

Analyzing the results as 3D plots or heat maps (the two complement each other) provides ample opportunity to find genuinely stable algorithms. This method is a convenient analysis tool and, as a result, a competitive advantage.

This technique was developed for practical use, but it can also be used to compare optimization algorithms.

Creation

As already noted, we will create the entire efficiency matrix in Wealth-Lab, but we will build the 3D plot in Excel - it is simple, free, and available to everyone!

To identify stable areas on the plane, we can highlight significant results according to the metrics we need ("Graility" / stability indicators). For example, we may be interested in a Sharpe Ratio of at least 2.

For a comprehensive analysis, it is ideal to work simultaneously with both the HeatMap format and a 3D plot of the stability matrix. This format is optimal for comparing different strategies with each other, especially for the "Graility" indicator, where we know for sure that we are interested in a Graility percentage above 70. For other metrics, where we do not know in advance what the values will be, the following conditional HeatMap formatting is suitable:

- the best 33% of values are green;

- the middle 33% are white;

- the worst 33% are red.
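This 33/33/33 split is just a rank-based partition of the metric values. A minimal sketch in Python (the function name and color labels are illustrative assumptions):

```python
def heatmap_colors(values):
    """Rank-based conditional formatting: worst third red, middle third
    white, best third green (higher metric value = better)."""
    ranked = sorted(values)
    n = len(ranked)
    lo = ranked[n // 3]       # lower boundary of the middle third
    hi = ranked[2 * n // 3]   # lower boundary of the best third
    colors = []
    for v in values:
        if v < lo:
            colors.append("red")
        elif v < hi:
            colors.append("white")
        else:
            colors.append("green")
    return colors

# Six Sharpe Ratio values from a stability matrix (made-up numbers).
print(heatmap_colors([0.5, 2.1, 1.0, 3.0, 1.8, 0.2]))
# → ['red', 'green', 'white', 'green', 'white', 'red']
```

Ranking rather than absolute thresholds is what makes this formatting work even for metrics whose typical scale we do not know in advance.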

Stability matrices are built from three metrics. If you are not a beginner, you most likely already have three or four favorite metrics.

I am interested in Recovery Factor and some other indicators that help me to determine:

- the volatility of the equity curve;

- the speed of recovery from drawdowns;

- the overall effectiveness of the strategy.

The main purposes of the stability matrices are to:

- determine how large the positive area is (for comparing strategies), i.e. how many good results the strategy produces over the entire parameter space;

- identify stable positive areas and work with stable parameter values rather than maximum ones - which is also important for forward testing.

Several metrics are needed in order to find intersecting stable regions that satisfy all three indicators, and to use those values in live trading.
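The intersection of per-metric stable regions can be computed directly: a cell of the parameter grid survives only if it passes the threshold for every metric. A minimal Python sketch with made-up matrices and thresholds:

```python
# Two stability matrices over the same parameter grid:
# (param1, param2) -> metric value. The numbers are made up.
sharpe = {(1, 1): 2.5, (1, 2): 1.2, (2, 1): 2.8, (2, 2): 2.2}
recovery = {(1, 1): 3.5, (1, 2): 4.0, (2, 1): 1.0, (2, 2): 3.1}

def stable_region(matrix, threshold):
    """Cells where the metric passes its stability threshold."""
    return {cell for cell, value in matrix.items() if value >= threshold}

# Keep only cells that are stable for every metric at once.
intersection = (stable_region(sharpe, 2.0)
                & stable_region(recovery, 3.0))
print(sorted(intersection))  # → [(1, 1), (2, 2)]
```

With a third metric, the same set intersection simply gains one more operand; a large surviving set suggests a broad, robust area rather than a lucky point.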

If a given parameter has no effect on the result, then we are working with the wrong parameter!

Combined with working across several Date Ranges (data periods), this lets us choose parameters for our portfolio more competently and count on more adequate, stable results.

More details on 2D and 3D analysis of results in the video:

Naturally, the results can only be trusted once we have made sure there is no overfitting - this can be determined by forward testing.

However, forward testing itself can be considered reliable only if the optimized values carried into the out-of-sample period were not the best ones but the most stable ones. Optimization therefore becomes a multi-stage process, which of course can and should be automated.

The author leans towards the following steps:

- identify stable areas and assess how extensive they are;

- then run forward testing to assess the likelihood of overfitting;

- test these areas more deeply (reducing the testing step).
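The idea of preferring stable values over maximum ones can be sketched as follows: instead of taking the cell with the single best value, take the cell whose neighborhood average is best, so a lone spike loses to a broad plateau. A minimal Python illustration (the matrix and radius are assumptions, not part of the author's tooling):

```python
def most_stable(matrix, radius=1):
    """Return the cell whose neighborhood mean is highest - a proxy for
    a stable parameter choice, as opposed to the raw maximum."""
    def neighborhood_mean(cell):
        x, y = cell
        vals = [matrix[(x + dx, y + dy)]
                for dx in range(-radius, radius + 1)
                for dy in range(-radius, radius + 1)
                if (x + dx, y + dy) in matrix]
        return sum(vals) / len(vals)
    return max(matrix, key=neighborhood_mean)

# An 8x8 grid: a lone spike at (0, 0) vs a broad plateau around (5, 5).
matrix = {(x, y): 1.0 for x in range(8) for y in range(8)}
matrix[(0, 0)] = 5.0                # the single best value, but fragile
for x in range(4, 7):
    for y in range(4, 7):
        matrix[(x, y)] = 3.0        # a stable plateau of good values

print(max(matrix, key=matrix.get))  # → (0, 0), the raw maximum
print(most_stable(matrix))          # → (5, 5), center of the plateau
```

Carrying (5, 5) rather than (0, 0) into the out-of-sample period is exactly the "most stable, not best" choice the forward test relies on.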

Our goal is to use the most effective type of optimization and then carry out additional optimization runs on the stable areas until we approach the exhaustive-search results. For this we need a comparative analysis of two algorithms: the Genetic Algorithm and Swarm.

Comparative Analysis

Note: first, we run genetic optimization with its standard parameters to determine the number of runs. Knowing the number of runs is necessary in order to tune the parameters of the Swarm (particle swarm) optimizer so that the two can be compared.

Genetic Algorithms

Standard parameters

  • Selection Method: Roulette Wheel Selection
  • Crossover method: WL Crossover 1

Result:

  • Time: about 2 hours
  • Number of results: 1530

Recommended parameters

  • Selection Method: Tournament Selection
  • Crossover method: WL Crossover 2

Result:

  • Time: about 1.5 hours
  • Number of results: 1635

More details on the analysis of the genetic algorithm's 3D results in the video:


As we can see, the standard parameters give a much wider spread of values, which helps avoid missing the positive area. However, the second set of parameters, recommended by the developers, outlined the positive area most clearly, which is convenient for future, more precise testing.

Algorithms of the “Particle Swarm” Optimizer

Swarm GPAC PSO

Results:

  • Time: about 1.5 hours
  • Number of results: 1312

We set the parameters so that the number of runs was as close as possible to that of the genetic algorithm's standard parameters.

This algorithm quite clearly outlined a stable positive area of the strategy parameters.
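To make the comparison concrete, here is what a basic particle swarm optimizer does internally: each particle is pulled toward its own best-known point and toward the swarm's global best. This is a generic textbook sketch, not Wealth-Lab's GPAC PSO implementation; the fitness function, bounds, and coefficients are toy assumptions:

```python
import random

def pso(fitness, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (maximization). Each particle is
    pulled toward its personal best and the swarm's global best."""
    random.seed(42)  # deterministic for the example
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = fitness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy fitness with its maximum at (10, 50), standing in for a backtest.
best, val = pso(lambda p: -(p[0] - 10) ** 2 - (p[1] - 50) ** 2,
                bounds=[(0, 20), (0, 100)])
print(best, val)  # converges near [10, 50]
```

Because the swarm concentrates its evaluations around promising regions, it tends to trace the positive area densely while spending far fewer runs than a full grid sweep.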

Swarm GPAC PSO, Plus Genetic Crossover

Results:

  • Time: about 1 hour
  • Number of results: 984

Clerc Tribes algorithm

Clerc Tribes - Optimization # 1

Results:

  • Time: about 15 minutes
  • Number of results: 494

Clerc Tribes - Optimization # 2

Results:

  • Time: about 15 minutes
  • Number of results: 455

Brute force algorithm

Results:

  • Time: about 3 days
  • Number of results: 36481

Conclusion

Obviously, each algorithm is good on its own for a specific purpose:

- If you are afraid of missing a positive area, use the standard Genetic Algorithm, which has a large spread of values;

- If you need the clearest possible area, it is best obtained with the Swarm PSO;

- If you need an express test in 15 minutes, the Clerc Tribes Swarm optimizer algorithm is perfect! However, as the 3D graphs showed, the results will be quite inaccurate - only 400-500 runs out of 36,481.

We have come to the conclusion that the particle swarm optimizer is as good as the genetic optimizer and deserves adoption. It is currently undergoing active development and improvement.