How does nature inspire us to solve problems? An introduction to metaheuristics
Paweł Jankiewicz

In this blog post, I’d like to present a family of computational problem-solving techniques called metaheuristics. We use some of them at LogicAI, but here I’d like to give a general overview. My aim is to keep things as concise and non-technical as possible and to focus on concepts and business applications. If you are interested in the use cases and high-level ideas behind such techniques, you’re in the right place.

Have you ever heard about heuristics? The term probably rings a bell, but what exactly are they? To put it in simple terms, a heuristic is some form of shortcut – a technique designed to solve an optimization problem in a quick yet effective way. It aims to find a “good enough” solution to such a problem, especially when finding the exact solution is too hard or time-consuming, by trading accuracy, completeness, or optimality for speed.

Note that I’m using the definition of a heuristic that comes from computer science – other fields (e.g. cognitive science or psychology) may define it slightly differently, although the high-level concept of a heuristic as some form of “shortcut” remains the same regardless of the discipline.

But this post is about metaheuristics, right? They’re higher-level heuristics (or procedures in general) designed to find the “right” heuristic – one that provides the sufficiently good solution I already mentioned.

Many metaheuristics use some form of stochastic optimization, which means that the solution found depends on the set of random variables generated along the way. They search over a large set of feasible solutions, so they can often find sufficient solutions with less computational effort than traditional optimization algorithms, iterative methods, or simple heuristics. This makes them a useful approach to optimization problems. They have a major downside, though – compared to “regular” optimization algorithms and iterative methods, metaheuristics may not find a globally optimal solution on some classes of problems.

There are plenty of metaheuristics and many ways to classify them. For example, we can group them by the type of optimum that is searched for:

- local search metaheuristics, which improve a candidate solution within its neighborhood,
- global search metaheuristics, which try to escape local optima and explore the search space more broadly.

We can also classify metaheuristics by the number of solutions they’re trying to improve:

- single-solution metaheuristics, which refine one candidate solution at a time (simulated annealing, described below, is an example),
- population-based metaheuristics, which maintain and improve a whole set of candidates at once (like particle swarm optimization or ant colony optimization).

Having said that, there are many classes of problems where metaheuristics can be useful. I’d like to present some of them to you – to give you examples of such heuristics, a good intuition about how they work and, last but not least, an idea of what classes of business problems they can help you solve. Metaheuristics are a vast topic, so to keep things concise I will focus on three of them: particle swarm optimization, ant colony optimization, and simulated annealing.

Particle swarm optimization

Remember the distinction between single-solution and population-based metaheuristics? Particle swarm optimization, or PSO for short, belongs to the population-based group. It aims to solve an optimization problem by moving a population of candidate solutions – in this scenario called “particles” – around the search space, according to simple mathematical formulae over each particle’s position and velocity. Like many metaheuristics, it’s inspired by natural phenomena – in this case, bird flocking or fish schooling (the birds and fish are our “particles”) – and it implements a form of “swarm intelligence” to find the solution. To give you a little more insight, imagine that a group of birds is randomly searching for food in an area.

To ensure the survival of the flock, the birds have to find food. So what strategy should they choose? An effective one is to follow the bird that is nearest to the food. Particle swarm optimization implements such a mechanism using math. It has some convenient mathematical properties that make it a good choice for optimization problems where “classic” optimization algorithms can’t be used – in contrast to the family of gradient descent methods, PSO does not require the optimization problem to be differentiable.
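
To make the mechanics concrete, here is a minimal sketch of a basic PSO loop in Python. Treat it as an illustration under simple assumptions – a box-bounded search space, and conventional but arbitrary values for the inertia and attraction coefficients w, c1 and c2 – rather than a production implementation:

```python
import random

def pso(f, dim, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box-bounded search space with a basic PSO loop."""
    lo, hi = bounds
    # Random initial positions; velocities start at zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position seen by the swarm

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move the particle, keeping it inside the bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function, whose global minimum is at the origin.
best, value = pso(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5.0, 5.0))
```

Note how no derivative of f is ever computed – the swarm only evaluates the objective function, which is exactly why PSO works on non-differentiable problems.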


Having this high-level overview of PSO, let’s go through some examples of business applications:

- tuning the parameters and hyperparameters of machine learning models, including neural networks,
- scheduling problems, such as production planning or staff rostering,
- engineering design tasks (e.g. antenna or power system design), where the objective function is hard or impossible to differentiate.

Ant colony optimization

Many computational problems rely on finding the shortest (or at least a “good”) path. You may have heard about some of them, like the traveling salesman problem: given a list of locations and the distances between each pair of them, what is the shortest possible route that visits each location and returns to the starting point? Ant colony optimization is a family of algorithms designed to solve this kind of problem. Like our former example, it’s inspired by nature – in this case, the pheromone-based communication of ants seeking the best route between their colony and a source of food. Real ants lay down pheromones directing each other to resources while exploring their environment. The algorithm models the optimization problem as the actions of such an ant colony.

Artificial “ants” are, in this case, mathematical abstractions that try to locate optimal solutions by moving through a parameter space representing all possible solutions (e.g. locations, as in the traveling salesman problem). Some species of real ants initially wander randomly, but when they encounter a pheromone trail left by other ants from the colony, they modify their path accordingly and leave another pheromone trail for the rest of the ants – and the cycle begins again. Ant colony optimization algorithms mimic that behavior to iteratively find the best possible route.
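
Below is a minimal sketch of an Ant System-style variant of this idea for the traveling salesman problem, again in Python. The parameter names alpha, beta, rho and q follow the usual convention (pheromone weight, distance weight, evaporation rate, deposit amount); the concrete values are illustrative assumptions:

```python
import random

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Approximate a shortest round trip; dist[i][j] is the distance between cities i and j."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]        # pheromone level on each edge
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i, candidates = tour[-1], list(unvisited)
                # Next city is drawn with probability ~ pheromone^alpha * (1/distance)^beta.
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in candidates]
                nxt = random.choices(candidates, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate pheromone, then let each ant deposit on its tour (shorter => more).
        tau = [[(1.0 - rho) * t for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

# Example: four cities at the corners of a unit square.
cities = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in cities]
        for (x1, y1) in cities]
tour, length = aco_tsp(dist)
```

The evaporation step is what keeps the colony exploring: without it, early (possibly poor) trails would dominate forever.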

So what are the real-world applications of such algorithms? In general – any problem which requires finding optimal paths. Examples include:

- vehicle routing and delivery planning in logistics,
- routing data packets in telecommunication networks,
- scheduling problems that can be modeled as finding a path through a graph of tasks.

Simulated annealing

Let’s leave the animal kingdom for a while. Simulated annealing takes its inspiration from something else – the annealing process in metallurgy. In the real world, it involves repetitive heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. Simulated annealing translates this into a probabilistic algorithm: the slow cooling of “real-world” annealing is interpreted as a slow decrease in the probability of accepting worse solutions as the space of possible solutions is explored. As I mentioned at the beginning, this trade-off is an essential property of metaheuristics in general – accepting worse solutions allows for a more extensive search for the globally optimal solution. In short, we sacrifice some local fidelity in favor of better global results. The general mechanism goes as follows (a code sketch follows the list):

1. Start from an initial solution and an initial “temperature”.
2. At each step, generate a random neighbor of the current solution.
3. If the neighbor is better, accept it; if it’s worse, accept it anyway with a probability that depends on how much worse it is and on the current temperature.
4. Gradually lower the temperature according to a cooling schedule, so that worse solutions become less and less likely to be accepted.
5. Stop when the temperature is low enough (or another stopping criterion is met) and return the best solution found.
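
Here’s a minimal sketch of that loop in Python, assuming a geometric cooling schedule and the classic acceptance probability exp(-(f(y) - f(x)) / t); the starting temperature and cooling rate are illustrative choices:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, t_min=1e-3, cooling=0.99):
    """Minimize f starting from x0; `neighbor` proposes a random nearby solution."""
    x, fx = x0, f(x0)
    best, best_val = x, fx
    t = t0
    while t > t_min:
        y = neighbor(x)
        fy = f(y)
        # Always accept improvements; accept a worse move with probability
        # exp(-(fy - fx) / t), which shrinks as the temperature cools.
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_val:
                best, best_val = x, fx
        t *= cooling   # cooling schedule: geometric temperature decay
    return best, best_val

# Example: a one-dimensional function with several local minima.
result, value = simulated_annealing(
    f=lambda x: x * x + 10 * math.sin(x),
    x0=random.uniform(-10, 10),
    neighbor=lambda x: x + random.gauss(0, 1),
)
```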

That’s simulated annealing in a nutshell. It has a wide range of applications in optimization problems involving a large number of variables or parameters:

- combinatorial problems such as the traveling salesman problem,
- placement and routing of components in chip (VLSI) design,
- job-shop and production scheduling.


And that’s basically it – you now have a high-level overview of metaheuristics, together with some example algorithms and their applications. In the next post, you’ll find a complete business case solved with one of them. Stay tuned!

If you are interested in applying metaheuristics to your business problems, let us know – we’re glad to help.

Contact me to get answers to your questions: greg@logicai.io
