Have you ever heard of heuristics? The term probably rings a bell, but what exactly are they? Put simply, a heuristic is a form of shortcut – a technique designed to solve an optimization problem quickly yet effectively. It aims to find a “good enough” solution to such a problem, especially when finding the exact solution is too hard or time-consuming, by trading accuracy, completeness, or optimality for speed.
Note that I’m using the definition of heuristic from computer science – other fields (e.g. cognitive science or psychology) may define it slightly differently, although the high-level concept of a heuristic as some form of “shortcut” remains the same regardless of the discipline.
But this post is about metaheuristics, right? They’re higher-level heuristics (or procedures in general) designed to find the “right” heuristic – the one that provides the sufficiently good solution I just mentioned.
Many metaheuristics use some form of stochastic optimization, meaning that the solution found depends on the set of random variables generated along the way. By searching over a large set of feasible solutions, they can often find sufficient solutions with less computational effort than traditional optimization algorithms, iterative methods, or simple heuristics. This makes them useful approaches for hard optimization problems. They have a major downside, though – compared to “regular” optimization algorithms and iterative methods, metaheuristics may not find a globally optimal solution on some classes of problems.
There are plenty of metaheuristics and many ways to classify them. For example, we can group them by the type of optimum they search for:
- local search metaheuristics – build on and improve simple local search algorithms.
- global search metaheuristics – try to overcome the limitations of local search, often by utilizing population-based methods.
We can also classify metaheuristics by the number of solutions they try to improve:
- single-solution approaches – focus on modifying and improving a single candidate solution.
- population-based approaches – aim to improve multiple candidate solutions at once, usually exploiting population characteristics to do so.
That said, there are many classes of problems where metaheuristics can be useful. I’d like to present some of them to you – my aim is to give you examples of such heuristics, a good intuition about how they work and, last but not least, an idea of what class of business problems they can help you solve. Metaheuristics are a vast topic, so to keep things concise I will focus on three of them: particle swarm optimization, ant colony optimization, and simulated annealing.
Particle swarm optimization
Remember the distinction between single-solution and population-based metaheuristics? Particle swarm optimization, or PSO for short, is one that works on populations. It aims to solve an optimization problem by moving a population of candidate solutions, in this scenario called “particles”, around the search space, according to simple mathematical formulae governing each particle’s position and velocity. Like many metaheuristics, it’s inspired by natural phenomena – in this case, bird flocking or fish schooling (the birds and fish are our “particles”) – and it tries to implement a form of “swarm intelligence” to find the solution. To give you a little more insight, imagine a group of birds randomly searching for food in an area:
- There is only one piece of food in the area being searched.
- None of the birds knows where the food is.
- But each bird knows how far away the food is in each iteration.
To ensure the survival of the flock, the birds have to find the food. So what strategy should they choose? An effective one is to follow the bird which is nearest to the food. Particle swarm optimization implements such a mechanism using math. It has some convenient mathematical properties that make it a good choice for optimization problems where “classic” optimization algorithms can’t be used – for example, PSO does not require that the optimization problem be differentiable, in contrast to the family of gradient descent methods.
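To make the update rule concrete, here is a minimal PSO sketch in Python with NumPy. The objective (the sphere function), the swarm size, and the coefficients `w`, `c1`, `c2` (inertia and the pulls towards personal and global bests) are illustrative textbook-style choices, not prescriptions:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))  # particle positions
    v = np.zeros((n_particles, dim))                   # particle velocities
    pbest = x.copy()                                   # each particle's best position so far
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()           # best position seen by the whole swarm

    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        # Velocity update: inertia + pull towards personal best + pull towards global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function, whose global optimum is the origin.
best_x, best_f = pso(lambda x: float(np.sum(x ** 2)), dim=5)
print(best_x, best_f)
```

Each iteration blends a particle’s momentum with attraction to the best position it has seen (`pbest`) and the best position the whole swarm has seen (`gbest`) – the mathematical analogue of following the bird nearest to the food.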
With this high-level overview of PSO in mind, let’s go through some examples of business applications:
- Energy/power industry – optimizing generators’ output in a grid to minimize fuel costs,
- Heating – minimizing the cost of a heating system over a given life cycle,
- Water supply – optimizing a water distribution network to avoid water shortages and minimize the cost of supply.
Ant colony optimization
Many computational problems boil down to finding the shortest (or at least a “good”) path. You may have heard of some of them, like the traveling salesman problem: given a list of locations and the distances between each pair of them, what is the shortest possible route that visits each location and returns to the starting point? Ant colony optimization is a family of algorithms designed to solve this kind of problem. Like our former example, it’s inspired by nature – in this case, the pheromone-based communication of ants seeking the best route between their colony and a source of food. Real ants lay down pheromones directing each other to resources while exploring their environment. The algorithm models an optimization problem as the actions of such an ant colony.
Artificial “ants” are, in this case, mathematical abstractions that try to locate optimal solutions by moving through a parameter space representing all possible solutions (e.g. locations, as in the traveling salesman problem). Some species of real ants initially wander randomly, but when they encounter a pheromone trail left by other ants from the colony, they modify their path accordingly and leave another pheromone trail for the rest of the ants – and the cycle begins again. Ant colony optimization algorithms mimic that behavior to iteratively find the best possible route.
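Here is a minimal sketch of the classic Ant System variant applied to the traveling salesman problem, again in Python with NumPy. The parameter values (number of ants, `alpha`, `beta`, the evaporation rate `rho`) are common illustrative defaults, not canonical ones:

```python
import numpy as np

def aco_tsp(dist, n_ants=20, iters=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
    """Basic Ant System for the TSP. dist: symmetric matrix of pairwise distances."""
    dist = np.asarray(dist, dtype=float)
    n = len(dist)
    tau = np.ones((n, n))               # pheromone level on each edge
    eta = 1.0 / (dist + np.eye(n))      # heuristic desirability (1/distance); eye avoids /0
    best_tour, best_len = None, np.inf

    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [np.random.randint(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i, cand = tour[-1], list(unvisited)
                # Choose the next city with probability proportional to tau^alpha * eta^beta.
                w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                tour.append(np.random.choice(cand, p=w / w.sum()))
                unvisited.discard(tour[-1])
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= 1 - rho                  # pheromone evaporation
        for tour, length in tours:      # deposit pheromone inversely proportional to tour length
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += q / length
                tau[b, a] += q / length
    return best_tour, best_len

# Example: six random cities in the plane (hypothetical data).
pts = np.random.rand(6, 2)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(aco_tsp(dist))
```

Short tours receive more pheromone per edge, so over the iterations the colony’s random exploration concentrates around the best routes found so far.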
So what are the real-world applications of such algorithms? In general – any problem which requires finding optimal paths. Examples include:
- Scheduling – algorithms from this family have been successfully applied to scheduling problems, such as job scheduling in an aluminum foundry or bus driver scheduling,
- Space planning – planning aisle and shelf organization to ensure optimal shopper routes in supermarkets,
- Market and business intelligence – there are use cases in real-time fraud detection, as a more “straightforward” alternative to traditional classification algorithms.
Simulated annealing
Let’s leave the animal kingdom for a while. Simulated annealing takes its inspiration from something else – the annealing process in metallurgy. In the real world, annealing involves repeated heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. Simulated annealing translates this into a probabilistic algorithm: the slow cooling of “real-world” annealing becomes a slow decrease in the probability of accepting worse solutions as the space of possible solutions is explored. As I mentioned at the beginning, this trade-off is an essential property of metaheuristics in general – accepting worse solutions allows for a more extensive search for the globally optimal solution. In short, we sacrifice some local fidelity in favor of better global results. The general mechanism goes as follows:
- At each time step, the algorithm randomly selects a solution close to the current one and measures its quality.
- It then decides whether to move to that selected solution or to stay with the current one, based on an acceptance probability that depends on how much better or worse the new solution is and on the current temperature.
- As the algorithm searches through possible solutions, the temperature is progressively decreased from its initial value towards zero.
- This affects the two acceptance probabilities: the probability of moving to a better candidate solution stays at 1 (or at least positive), while the probability of moving to a worse solution is progressively driven towards zero (see the sketch after this list).
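A minimal sketch of this acceptance rule in Python, using the standard Metropolis criterion exp(-Δ/T) and a geometric cooling schedule; the starting temperature, cooling factor, and stopping threshold are hypothetical tuning choices:

```python
import math
import random

def simulated_annealing(f, neighbor, x0, t0=1.0, cooling=0.995, t_min=1e-4):
    """Minimize f starting from x0. `neighbor` proposes a solution close to
    the current one; t0, cooling, and t_min are illustrative parameters."""
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    while t > t_min:
        y = neighbor(x)
        fy = f(y)
        delta = fy - fx
        # Metropolis criterion: always accept improvements; accept worse
        # solutions with probability exp(-delta / t), which shrinks as t -> 0.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling                     # geometric cooling schedule
    return best, best_f

# Example: minimize a bumpy 1-D function with random local steps.
f = lambda x: x * x + 10 * math.sin(x)
step = lambda x: x + random.uniform(-1.0, 1.0)
print(simulated_annealing(f, step, x0=5.0))
```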
That’s simulated annealing in a nutshell. It has a wide range of applications in optimization problems involving a large number of variables or parameters:
- Scheduling – especially for schedules that depend on a large number of factors and on other schedules
- Route finding – similarly to ant colony optimization, simulated annealing has been successfully applied to finding the optimal path between two locations
- Industrial production – optimizing the operation of various devices to minimize malfunctions, fuel usage, etc.
- Heat / Energy – optimizing heat and energy flow in residential buildings
And that’s basically it – you now have a high-level overview of metaheuristics, with some example algorithms and their applications. In the next post, you’ll find a complete business case solved with one of them. Stay tuned!
If you are interested in applying metaheuristics to your business problems, let us know – we’re glad to help.
Contact me to get answers to your questions: greg@logicai.io