GAs generate initial points randomly, search in parallel, and operate on encoded representations rather than directly on the decision variables themselves. Genetic operations such as selection, crossover, and mutation produce progressively fitter individuals and populations. After multiple iterations, the best-performing individual is obtained, representing the optimal solution to the optimization problem. GAs require only a fitness function for evaluation, giving them a broad operational space, and their probability-driven iterative process enables efficient and effective global optimization. Given these advantages, this paper adopts a GA to solve the single-point intersection signal control optimization model.
3.3.1. Implementation Steps of GAs
The computational flowchart of the genetic algorithm is shown in
Figure 1.
Step 1: Chromosome Gene Encoding
In genetic algorithms, all individuals in a population have chromosomes of the same length, each representing the optimization parameters of a single-point signal-controlled intersection, such as cycle duration and green signal ratio. Genes on a chromosome encode these parameters, and different gene combinations yield different fitness levels. Chromosomes are passed to the next generation through mating between individuals.
Step 2: Random Generation of Initial Population
The initial population size (N) in genetic algorithms must be carefully chosen. If it is too small, the algorithm’s optimization potential is limited, possibly hindering optimal solution identification. Conversely, if N is too large, although the likelihood of finding optimal fitness increases, it can decrease optimization efficiency, consuming more time. Therefore, the initial population size should be chosen based on the computational capabilities of the hardware.
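Steps 1 and 2 can be sketched in Python as follows. This is only an illustration: the chromosome layout (a cycle length followed by normalised green-split fractions) and the bounds `c_min`, `c_max`, and `n_phases` are hypothetical choices, not the paper's actual encoding.

```python
import random

def random_chromosome(c_min=60.0, c_max=180.0, n_phases=4):
    """One chromosome: a cycle length plus one green-split fraction per phase."""
    cycle = random.uniform(c_min, c_max)
    splits = [random.random() for _ in range(n_phases)]
    total = sum(splits)
    splits = [s / total for s in splits]  # normalise so the splits sum to 1
    return [cycle] + splits

def initial_population(n):
    """Step 2: randomly generate an initial population of size n."""
    return [random_chromosome() for _ in range(n)]

pop = initial_population(50)
```

The population size (50 here) would in practice be chosen to match the available hardware, as discussed above.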
Step 3: Calculate the Fitness of Individuals within the Population
When the genetic algorithm solves the objective function, fitness serves as the criterion for evaluating solution quality: higher fitness indicates better traits. Our objective function, however, is to be minimized in every indicator, so it must be converted into a fitness value. We take the fitness as the reciprocal of the objective function, as shown in Equation (21):

$$f = \frac{1}{F} \quad (21)$$

where $F$ is the objective function and $f$ is the fitness function.
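The conversion in Equation (21) can be sketched as follows; the small `eps` guard is our addition to avoid division by zero and is not part of the paper's formula.

```python
def fitness(objective_value, eps=1e-12):
    """Equation (21): fitness is the reciprocal of the minimised objective.

    A smaller objective value maps to a larger fitness, so a GA that
    maximises fitness effectively minimises the objective.
    """
    return 1.0 / (objective_value + eps)  # eps guards against division by zero
```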
Step 4: Perform Selection Operator Operation
The selection operator is the most critical of the genetic operations in a genetic algorithm. We adopt roulette wheel selection [37], whose basic idea is that the probability of each individual being selected is directly proportional to its fitness. This method favors highly fit individuals during multi-objective optimization and promotes the rapid evolution of the genetic algorithm. It can be represented by Equation (22):

$$P_i = \frac{f_i}{\sum_{j=1}^{N} f_j} \quad (22)$$

where:
$N$ represents the population size;
$f_i$ represents the fitness of individual $i$, dimensionless;
$P_i$ represents the probability of individual $i$ being selected during the genetic operation, dimensionless.
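Roulette wheel selection can be sketched as below; `roulette_select` is an illustrative helper name, not from the paper.

```python
import random

def roulette_select(fitnesses):
    """Pick an index with probability proportional to its fitness (Equation (22))."""
    total = sum(fitnesses)
    r = random.uniform(0, total)  # a point on the wheel
    cum = 0.0
    for i, f in enumerate(fitnesses):
        cum += f            # each individual owns a slice of width f
        if r <= cum:
            return i
    return len(fitnesses) - 1  # guard against floating-point drift
```

Over many draws, an individual holding 80% of the total fitness is selected roughly 80% of the time.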
Step 5: Perform Crossover Operator Operation
In genetic algorithms, the crossover operator mates pairs of individuals to create offspring with new gene combinations. The crossover probability is a crucial parameter that balances precision and diversity: a high probability ensures enough offspring but may disrupt good solutions, while a low probability risks premature convergence. A piecewise function is therefore needed: early stages favor high probabilities for diversity, and later stages decrease them for precision. The crossover probability can be calculated using Equation (23):

$$P_c =
\begin{cases}
P_{c\max} - \dfrac{(P_{c\max} - P_{c\min})(f' - f_{avg})}{f_{\max} - f_{avg}}, & f' \ge f_{avg} \\
P_{c\max}, & f' < f_{avg}
\end{cases} \quad (23)$$

where:
$P_{c\max}$ is the user-defined maximum crossover probability for the population, dimensionless;
$P_{c\min}$ is the user-defined minimum crossover probability for the population, dimensionless;
$f'$ is the higher fitness value between the two individuals undergoing crossover in the population, dimensionless;
$f_{avg}$ is the average fitness of the current generation in the population, dimensionless;
$f_{\max}$ is the maximum fitness of the current generation in the population, dimensionless.
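One common adaptive form of this rule (linear interpolation between the maximum and minimum probabilities for above-average individuals) can be sketched as follows; the default values 0.9 and 0.6 are illustrative, not values from the paper.

```python
def crossover_probability(f_prime, f_avg, f_max, p_max=0.9, p_min=0.6):
    """Adaptive crossover probability in the spirit of Equation (23).

    Below-average parents always cross at p_max (promote diversity);
    above-average parents get a smaller probability, down to p_min for
    the fittest pair, so good gene combinations are better preserved.
    """
    if f_prime < f_avg:
        return p_max
    if f_max == f_avg:  # degenerate case: all individuals equally fit
        return p_min
    return p_max - (p_max - p_min) * (f_prime - f_avg) / (f_max - f_avg)
```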
Step 6: Perform Mutation Operator Operation
In genetic algorithms, the mutation operator simulates gene mutation, enhancing population diversity and global search capability. The mutation probability is crucial: if it is too high, excessive mutations destabilize the search and make it hard to find optimal solutions; if it is too low, the search capability is limited, hindering convergence. Balancing the mutation probability is therefore vital for effective optimization. Similar to the crossover operator, the mutation operator also requires a piecewise function, as shown in Equation (24):

$$P_m =
\begin{cases}
P_{m\max} - \dfrac{(P_{m\max} - P_{m\min})(f - f_{avg})}{f_{\max} - f_{avg}}, & f \ge f_{avg} \\
P_{m\max}, & f < f_{avg}
\end{cases} \quad (24)$$

where:
$P_{m\max}$ is the user-defined maximum mutation probability for the population, dimensionless;
$P_{m\min}$ is the user-defined minimum mutation probability for the population, dimensionless;
$f$ is the fitness of the individual undergoing mutation, dimensionless;
$f_{avg}$ is the average fitness of the current generation in the population, dimensionless.
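The analogous adaptive rule for mutation can be sketched as follows; the default probabilities 0.1 and 0.01 are illustrative choices, not the paper's values.

```python
def mutation_probability(f, f_avg, f_max, p_max=0.1, p_min=0.01):
    """Adaptive mutation probability in the spirit of Equation (24).

    Below-average individuals mutate at p_max (explore more); the
    fittest individuals mutate at only p_min, protecting good genes.
    """
    if f < f_avg:
        return p_max
    if f_max == f_avg:  # degenerate case: all individuals equally fit
        return p_min
    return p_max - (p_max - p_min) * (f - f_avg) / (f_max - f_avg)
```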
Step 7: Determine if the Algorithm Meets the Termination Criteria
Given finite computational resources, termination criteria must be set for genetic algorithms. Two common methods are:
Precision-based: if the optimal solution remains unchanged, or varies only within a small tolerance, for X consecutive generations, the loop terminates and the optimal solution is output.
Generation-based: the algorithm evolves for Y generations, and the optimal solution of the Y-th generation is output. If either termination criterion is met, the algorithm concludes; otherwise, the process repeats until an optimal solution is achieved.
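Both criteria can be combined into a single check on the history of best fitness values; the window size, tolerance, and generation cap below are illustrative defaults.

```python
def should_terminate(best_history, max_generations=50, stall_window=10, tol=1e-6):
    """Step 7: stop on either criterion.

    Generation-based: the generation cap has been reached.
    Precision-based: the best fitness has stalled (varied by at most tol)
    over the last stall_window generations.
    """
    if len(best_history) >= max_generations:
        return True
    if len(best_history) >= stall_window:
        recent = best_history[-stall_window:]
        if max(recent) - min(recent) <= tol:
            return True
    return False
```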
3.3.2. Genetic Algorithm Optimization Solution
In genetic algorithms, obtaining the global optimal solution can be challenging because the standard operators can disrupt good gene combinations. To address this, an improved genetic algorithm is proposed. It includes a selection operator designed to retain the global optimal solution, which is crucial for converging multi-objective optimization problems to their global optima.
Step 1: According to the encoding rules, initialize a population P composed of N individuals. The population P can be described as follows:

$$P = \{x_1, x_2, \ldots, x_N\}$$
Step 2: If the stopping condition is met, terminate the process. Otherwise, continue with the following steps.
Step 3: Calculate the fitness of the best individuals in the $t$-th and $(t+1)$-th generations, as follows:

$$f_t^{\text{best}} = \max_{x \in P_t} f(x), \qquad f_{t+1}^{\text{best}} = \max_{x \in P_{t+1}} f(x)$$
Step 4: Pass all the best individuals from the current population to the next generation. Independently select the remaining N − 1 individuals from the current population, as follows:
if $f_t^{\text{best}} > f_{t+1}^{\text{best}}$ then replicate:
replace the worst individuals of $P_{t+1}$ with $x_t^{\text{best}}$, the best individual of the $t$-th generation
end if
Step 5: Independently perform crossover and mutation operations on the N − 1 individuals. Then, obtain a new generation of the population with N individuals.
Step 6: Return to Step 2.

To verify the effectiveness of the improved genetic algorithm, we compared the fitness calculation results of the standard genetic algorithm and the improved genetic algorithm. The crossover and mutation probabilities were both set to 0.5, the initial population size was set to 1000, and the number of population iterations was set to 50. The final results are shown in
Figure 2.
According to
Figure 2a, we can observe that the standard genetic algorithm does not guarantee that the fitness of the best individual in the current generation is always better than that of the previous generation. For instance, when evolving from the eighth to the ninth generation, the fitness of the best individual is actually worse than before. On the other hand, the improved genetic algorithm retains the best individual from each generation, as described in Step 4, ensuring they do not participate in crossover and mutation. As a result, it guarantees that the best individual’s fitness in the current generation is never less than that of the previous generation. For example, as shown in
Figure 2b, the fitness of the population’s best individual always increases, indicating that the improved genetic algorithm can be used to find the global optimum for optimization problems.
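The elitist loop of Steps 1–6 can be sketched as follows on a toy one-dimensional problem. All names here (`evolve_elitist`, the averaging crossover, the Gaussian mutation) are illustrative stand-ins for the paper's encoding and operators; the point of the sketch is the elitism itself, which makes the best fitness monotone non-decreasing across generations.

```python
import random

def evolve_elitist(pop, fitness_fn, select, crossover, mutate, generations=50):
    """Elitist GA loop: the best individual is copied unchanged into the
    next generation and never undergoes crossover or mutation (Step 4),
    so the best fitness in the population can never decrease."""
    for _ in range(generations):
        fits = [fitness_fn(x) for x in pop]
        elite = pop[fits.index(max(fits))]
        offspring = [elite]                      # elite survives untouched
        while len(offspring) < len(pop):
            a = pop[select(fits)]
            b = pop[select(fits)]
            offspring.append(mutate(crossover(a, b)))
        pop = offspring
    return max(pop, key=fitness_fn)

# Toy usage: maximise -(x - 3)^2, i.e. find x near 3.
random.seed(1)
pop0 = [random.uniform(-10.0, 10.0) for _ in range(20)]
obj = lambda x: -(x - 3.0) ** 2
best = evolve_elitist(
    list(pop0),
    obj,
    select=lambda fits: random.randrange(len(fits)),  # placeholder selection
    crossover=lambda a, b: (a + b) / 2.0,             # averaging crossover
    mutate=lambda x: x + random.gauss(0.0, 0.1),      # Gaussian mutation
    generations=30,
)
```

Because the elite of each generation is carried over verbatim, the returned individual is guaranteed to be at least as fit as the best member of the initial population, mirroring the behavior shown in Figure 2b.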