Article

Mapping Outputs and States Encoding Bits to Outputs Using Multiplexers in Finite State Machine Implementations

by Raouf Senhadji-Navarro *,† and Ignacio Garcia-Vargas †
Department of Computer Architecture and Technology, University of Seville, 41012 Seville, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2023, 12(3), 502; https://doi.org/10.3390/electronics12030502
Submission received: 14 December 2022 / Revised: 12 January 2023 / Accepted: 14 January 2023 / Published: 18 January 2023
(This article belongs to the Special Issue Embedded Systems: Fundamentals, Design and Practical Applications)

Abstract

This paper proposes a new technique for implementing Finite State Machines (FSMs) in Field Programmable Gate Arrays (FPGAs). The proposed approach extends the technique called column compaction in two ways. First, it is applied to the state-encoding bits in addition to the outputs, allowing a reduction in the number of logic functions required both by the state transition function and by the output function. Second, the technique exploits the dedicated multiplexers usually included in FPGAs to increase the number of columns that can be compacted. Unlike conventional state-encoding techniques, the proposed approach reduces the number of logic functions instead of their complexity. An Integer Linear Programming (ILP) formulation that maximizes the number of compacted columns is proposed. In order to evaluate the effectiveness of the proposed approach, experimental results using standard benchmarks are presented. In most cases, the proposed approach reduces the number of used Look-Up Tables (LUTs) with respect to the conventional FSM implementation.

1. Introduction

Finite State Machines (FSMs) are probably the most widely used component in digital design. They allow one to model any sequential circuit, particularly control units. Optimizing FSM implementations in terms of area, speed or power consumption is essential to meet the design constraints demanded by applications. For this reason, the synthesis of FSMs has received the attention of researchers, designers and EDA tool developers for decades [1,2,3,4]. One of the most active fields has been the encoding of the FSM states to achieve optimal implementations of the state transition and output functions [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20].
Other techniques focus on the optimal assignment of don’t care outputs, which are very common because the values of some outputs do not matter in some states. The technique called column compaction exploits these don’t cares to reduce the number of logic functions that must be implemented [21,22,23,24,25]. The aim of this technique is to assign values of zero or one to the don’t cares so that two or more outputs become identical, and thus only one logic function must be implemented for each group of equivalent outputs.
Our technique extends column compaction in two ways. First, it extends the concept of column compaction to the state-encoding bits. To the best of the authors’ knowledge, column compaction has never been applied to the state transition function. In addition to assigning values to don’t cares to find equivalent outputs, our technique also encodes the states so that some encoding bits become equal to some outputs. Therefore, it can be viewed as an extension of column compaction that is applied to the state transition function in addition to the output function. This allows one to reduce the number of logic functions required to implement both functions. We will refer to the one-to-one correspondence between state-encoding bits and outputs as direct mapping of state-encoding bits. By analogy, column compaction will be referred to as direct mapping of outputs.
Second, the proposed technique exploits the dedicated 2:1 multiplexers usually included in Field Programmable Gate Arrays (FPGAs), which allow one to combine the outputs of two Look-Up Tables (LUTs) [26]. In order to reduce the number of LUTs used, the technique searches for pairs of outputs that can be combined by means of dedicated multiplexers to generate the next state-encoding bits or other outputs. Therefore, the proposed technique also extends the concept of compaction to that of mapping via multiplexers. Each multiplexer is controlled by a bit of the present state. Because the state-encoding bits act both as control inputs of the multiplexers and, in some cases, as their outputs, state encoding has a great influence on the results that can be obtained. Unlike conventional state-encoding techniques, the proposed approach reduces the number of logic functions instead of their complexity. Our technique is based on an Integer Linear Programming (ILP) formulation, which allows a solver to find the optimal mapping (or a suboptimal one, when the execution time is limited).
In summary, the proposed technique uses three mechanisms to reduce the number of logic functions: direct mapping of outputs (i.e., column compaction), direct mapping of state-encoding bits and mapping via multiplexers, all of which are modeled by a single ILP formulation. Unlike the other two mechanisms, which are presented here for the first time, column compaction is a classical technique. However, the novelty lies in the fact that our approach is based on an ILP formulation instead of heuristics [22], which allows one to obtain the optimal value or, in the worst case, to find a bound on the error of a non-optimal solution.
The remainder of this article is organized as follows. In Section 2, a brief background on FSMs and FPGA devices is presented. Section 3 describes the fundamentals of the proposed technique and its architecture. Section 4 presents the proposed ILP formulation. Section 5 shows the obtained experimental results. Finally, conclusions are drawn in Section 6.

2. Background

Let $F = (S, t, o, S_0)$ be an FSM, where $S$ represents the set of states; $t: S \times \{0,1\}^r \to S$, the state transition function; $o: S \times \{0,1\}^r \to \{0,1,-\}^m$, the output function; and $S_0$, the reset state. The parameters $r$ and $m$ represent the numbers of inputs and outputs, respectively.
The State Transition Diagram (STD) is commonly used to show the behavior of an FSM. Figure 1 shows an example FSM, whose STD is shown in Figure 1a. For a proper description of the proposed technique, it is more convenient to represent the FSM in the form of a State Transition Table (STT). As Figure 1b,c show, an STT is a table whose rows represent the transitions of the FSM as the 4-tuple $(in, ps, ns, out)$, where $in$ represents the inputs; $ps$, the present state; $ns$, the next state; and $out$, the outputs. The symbol “–” represents a don’t care value. A symbolic STT shows the states as symbols (see Figure 1b), whereas a codified STT shows the states once they have been codified (see Figure 1c).
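As an illustration, an STT can be represented in software simply as a list of such 4-tuples. The following minimal Python sketch is not the authors’ tool, and the rows shown are hypothetical; it only illustrates the data structure, with “-” marking don’t care bits:

```python
# A symbolic STT as a list of 4-tuples (in, ps, ns, out).
# '-' marks a don't care bit; states are still symbolic names.
SymbolicSTT = list[tuple[str, str, str, str]]

stt: SymbolicSTT = [
    # inputs, present state, next state, outputs
    ("0-", "S0", "S1", "10---"),
    ("1-", "S0", "S2", "0-1--"),
    ("--", "S1", "S0", "---01"),
]

# Example query: positions of the don't care outputs in the first row.
dont_cares = [j for j, bit in enumerate(stt[0][3]) if bit == "-"]
print(dont_cares)  # -> [2, 3, 4]
```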
FPGAs are semiconductor devices that are based around a matrix of Configurable Logic Blocks (CLBs) connected via programmable interconnects. In current Xilinx FPGA devices, each CLB is composed of two slices, each of which includes four LUTs. In addition, each slice includes three dedicated 2:1 multiplexers: two called F7 and one called F8. Each F7 combines the outputs of two LUTs, whereas F8 combines the outputs of the two F7s. Dedicated multiplexers can be used to create general-purpose functions of up to 13 inputs in 2 LUTs (two 6-input LUTs plus the F7 select input) or 27 inputs in 4 LUTs, i.e., in one slice (four 6-input LUTs plus the two F7 and the F8 select inputs) [26].

3. Mapping Outputs and States Encoding Bits to Outputs by Means of Multiplexers

The aim of the proposed technique is to generate FSM outputs or the next state-encoding bits from FSM outputs without using extra LUTs. Both direct mapping and mapping via multiplexers are applied for this purpose. Mapping via multiplexers benefits from dedicated multiplexers, avoiding the use of extra LUTs. The advantage of direct mapping is that it allows one to reduce the number of logic levels (i.e., to remove the logic levels corresponding to the dedicated multiplexers) and to improve the routing; therefore, this mapping can increase the speed. To sum up, including mapping via multiplexers allows one to find solutions with more mapped signals. However, the proposed ILP formulation gives priority to direct mapping.
By exploiting the usual strong relationship between the transition and output functions, the proposed technique codifies the states so that the maximum number of signals (whether outputs or state-encoding bits) are mapped. In addition to encoding the states, the proposed technique assigns values to the don’t care outputs to achieve its objective. There is a key difference between the proposed technique and the conventional state-encoding approaches [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] that justifies its novelty. State-encoding algorithms assign a different code to each FSM state with the purpose of simplifying the transition and output functions. On the other hand, the proposed approach reduces the number of logic functions instead of their complexity. Compared with the conventional state-encoding approaches, assigning don’t care outputs and codifying states in a non-optimal way in terms of function complexity can result in an increase in the number of LUTs required by each logic function. However, the reduction in the number of logic functions achieved by the proposed technique can compensate for such an increase, resulting in a lower overall LUT count.
Figure 2 shows the general architecture used to implement the FSMs obtained from the proposed technique. For simplicity, the signals generated from a single output (i.e., by applying direct mapping) have been excluded from the figure. The combinational block implements the logic corresponding to the unmapped bits of the next state and the unmapped outputs, so it partially implements the state transition and output functions. The mapped signals are generated from unmapped outputs by means of multiplexers, each controlled by a bit of the present state (which is not necessarily different from that of other multiplexers). The larger the number of mapped signals, the simpler the combinational block, because the number of logic functions to be implemented is reduced.
Figure 1c shows the STT of the FSM example (Figure 1b) after applying a mapping of state-encoding bits and outputs to outputs. After codifying the states and assigning some don’t care outputs, the next state-encoding bit $N_2$ can be mapped to outputs $O_1$ and $O_2$ by means of a multiplexer controlled by the present state-encoding bit $P_1$, which selects $O_1$ if $P_1 = 0$ and $O_2$ if $P_1 = 1$ (see Figure 1b,c). Note that Figure 1c shows in bold the don’t care outputs that have been assigned. Similarly, $O_5$ can be mapped to $O_3$ and $O_4$ by means of a multiplexer controlled by $P_2$, which selects $O_3$ if $P_2 = 0$ and $O_4$ if $P_2 = 1$. This mapping reduces the number of logic functions that must be implemented in LUTs from seven to five (see Figure 1d). Assuming four-input LUTs, the implementation obtained by the proposed technique requires five LUTs, whereas the conventional FSM implementation requires seven LUTs, independently of the state encoding used.
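The effect of mapping via multiplexers can be illustrated with a trivial Python sketch; the values below are hypothetical and are not taken from Figure 1. Each mapped signal is just a 2:1 selection between two unmapped outputs, controlled by a present state-encoding bit:

```python
def mux(sel: int, a: int, b: int) -> int:
    """2:1 multiplexer: returns a when sel is 0 and b when sel is 1."""
    return b if sel else a

# Hypothetical values of the present state bits and unmapped outputs
# for a single transition:
P1, P2 = 0, 1
O1, O2, O3, O4 = 1, 0, 0, 1

N2 = mux(P1, O1, O2)  # next state-encoding bit generated without an extra LUT
O5 = mux(P2, O3, O4)  # output generated without an extra LUT
print(N2, O5)         # -> 1 1
```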
We define optimal mapping of outputs and state-encoding bits to outputs (OMOSEBO) as the problem of finding the mapping that maximizes the number of mapped signals (i.e., that minimizes the number of logic functions that are required to implement the output and state transition functions).
We have proposed an ILP formulation for this problem, which prioritizes direct mapping over mapping via multiplexers. Some constraints are imposed by the Xilinx FPGA architecture itself. As the output of each LUT is physically connected to a unique F7 multiplexer, routing the output of the LUT to a different multiplexer requires an additional LUT (called a route-thru). F8 multiplexers present a similar problem because the output of each F7 is physically connected to a unique F8 multiplexer. To prevent these situations, the ILP formulation must ensure that an FSM output cannot be used to generate more than one signal via multiplexer. Moreover, although the dedicated multiplexers F7 and F8 can be chained, allowing the multiplexing of up to four signals, we have limited our approach to a single level of multiplexers to avoid increasing the complexity of the proposed ILP formulation. Therefore, the solver must exclude candidate solutions in which outputs mapped via multiplexers are inputs of other multiplexers.

4. Integer Linear Programming Formulation

Let $S = \{S_1, S_2, \ldots, S_e\}$ be the set of states of an FSM. Let us suppose that $O_i$ for $i = 1, \ldots, m$ represents each output of the FSM. Let us consider the tuple $(T_1, T_2, \ldots, T_q)$, where $T_i \equiv (S_j, S_k) \in S \times S$, and $S_j$ and $S_k$ represent the present and next state, respectively, of the transition corresponding to the $i$-th row of the STT. Let us consider that $P_i$ and $N_i$ represent the $i$-th encoding bit of the present and the next state, respectively. Let us suppose that $O_j^i \in \{0, 1, -\}$ represents the value of the output $O_j$ for the transition corresponding to the $i$-th row of the STT.
The ILP formulation requires the following binary variables:
  • $f_{i,j,k,r}$ for all $i, j, k, r$, which is equal to one if and only if $N_i$ can be generated from $O_j$ and $O_k$ via a multiplexer controlled by $P_r$ so that $N_i = O_j$ for all transitions in which $P_r = 0$, and $N_i = O_k$ for the remaining transitions ($N_i$ is therefore mapped to $O_j$ and $O_k$).
  • $g_{i,j,k,r}$ for all $i, j, k, r$, which is equal to one if and only if $O_i$ can be generated from $O_j$ and $O_k$ via a multiplexer controlled by $P_r$ so that $O_i = O_j$ for all transitions in which $P_r = 0$, and $O_i = O_k$ for the remaining transitions ($O_i$ is therefore mapped to $O_j$ and $O_k$).
  • $h_{i,j}$ for all $i, j$, which represents the value of the output $O_j$ for the transition corresponding to the $i$-th row of the STT. Note that $h_{i,j}$ must be equal to $O_j^i$ for all $i, j$ such that $O_j^i \in \{0, 1\}$ (i.e., $O_j^i$ is not a don’t care); the remaining values of $h_{i,j}$ will be determined by the solver.
  • $c_{i,j}$ for all $i, j$, which represents the $j$-th bit of the encoding of $S_i$.
  • $d_{i,j,k}$ for all $i, j, k$ such that $j < k$, which is equal to one if and only if the $i$-th bit of the encoding of $S_j$ and that of $S_k$ are different.
Note that if $f_{i,j,j,r} = 1$, then there exists a direct mapping of $N_i$ to $O_j$ (so the value of $r$ is not relevant). Similarly, if $g_{i,j,j,r} = 1$, then there exists a direct mapping of $O_i$ to $O_j$.
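As a concrete illustration, the following gurobipy sketch declares these binary variables. It is a minimal setup under assumed, hypothetical problem sizes rather than the authors’ code; the names f, g, h, c and d_var mirror the variables defined above:

```python
import math
import gurobipy as gp
from gurobipy import GRB

# Hypothetical problem sizes: e states, m outputs, q STT rows.
e, m, q = 4, 5, 6
d = math.ceil(math.log2(e))   # number of state-encoding bits
z = d + m                     # total number of signals

mdl = gp.Model("omosebo")
# f[i, j, k, r] = 1 iff N_i is generated from O_j and O_k via a mux controlled by P_r.
f = mdl.addVars(d, m, m, d, vtype=GRB.BINARY, name="f")
# g[i, j, k, r] = 1 iff O_i is generated from O_j and O_k via a mux controlled by P_r.
g = mdl.addVars(m, m, m, d, vtype=GRB.BINARY, name="g")
# h[i, j] = value assigned to output O_j in the i-th STT row.
h = mdl.addVars(q, m, vtype=GRB.BINARY, name="h")
# c[i, j] = j-th bit of the code of state S_i.
c = mdl.addVars(e, d, vtype=GRB.BINARY, name="c")
# d_var[i, j, k] = 1 iff bit i of the codes of S_j and S_k differ (defined for j < k).
d_var = mdl.addVars(
    [(i, j, k) for i in range(d) for j in range(e) for k in range(e) if j < k],
    vtype=GRB.BINARY, name="d")
```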
The objective of the OMOSEBO problem is to find a mapping of outputs and state-encoding bits that maximizes the following tuple in lexicographical order:
$\left( \sum_{u,v,t,r} f_{u,v,t,r} + \sum_{\{w,v,t,r \mid v \neq t\}} g_{w,v,t,r} + \sum_{\{w,v,r \mid v > w\}} g_{w,v,v,r},\; \sum_{u,v,r} f_{u,v,v,r} + \sum_{\{w,v,r \mid v > w\}} g_{w,v,v,r} \right)$ (1)
The first element allows one to maximize the number of mapped signals; the second one prioritizes direct mapping over mapping via multiplexers. The terms $g_{w,v,v,r}$ include only the cases in which $v > w$ to avoid counting twice an output directly mapped to another (e.g., if $g_{w,v,v,r} = 1$ and $g_{v,w,w,r} = 1$, then only $O_v$ or $O_w$ can be removed in the corresponding implementation).
Encoding $e$ states requires $d = \lceil \log_2 e \rceil$ bits; therefore, $z = d + m$ is the total number of signals (either outputs or state-encoding bits). As $\sum_{u,v,t,r} f_{u,v,t,r} + \sum_{\{w,v,t,r \mid v \neq t\}} g_{w,v,t,r} + \sum_{\{w,v,r \mid v > w\}} g_{w,v,v,r}$ represents the number of mapped signals, it is obviously smaller than $z$. We use this property to define the weight $\alpha = 1 + \frac{1}{z}$ for the terms corresponding to direct mapping in order to model the multi-objective function: since fewer than $z$ signals can be mapped, the total extra contribution of the $\alpha$-weighted terms is smaller than one, so it can never outweigh an additional mapped signal. The ILP formulation of the OMOSEBO problem can be expressed as follows:
Maximize
$\sum_{\{u,v,t,r \mid v \neq t\}} f_{u,v,t,r} + \alpha \sum_{u,v,r} f_{u,v,v,r} + \sum_{\{w,v,t,r \mid v \neq t\}} g_{w,v,t,r} + \alpha \sum_{\{w,v,r \mid v > w\}} g_{w,v,v,r}$ (2)
subject to
$f_{u,v,t,r} - c_{i,r} - h_{j,v} + c_{k,u} \leq 1 \quad \forall u, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (3)
$f_{u,v,t,r} - c_{i,r} + h_{j,v} - c_{k,u} \leq 1 \quad \forall u, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (4)
$f_{u,v,t,r} + c_{i,r} - h_{j,t} + c_{k,u} \leq 2 \quad \forall u, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (5)
$f_{u,v,t,r} + c_{i,r} + h_{j,t} - c_{k,u} \leq 2 \quad \forall u, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (6)
$g_{w,v,t,r} - c_{i,r} - h_{j,v} + h_{j,w} \leq 1 \quad \forall w, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (7)
$g_{w,v,t,r} - c_{i,r} + h_{j,v} - h_{j,w} \leq 1 \quad \forall w, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (8)
$g_{w,v,t,r} + c_{i,r} - h_{j,t} + h_{j,w} \leq 2 \quad \forall w, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (9)
$g_{w,v,t,r} + c_{i,r} + h_{j,t} - h_{j,w} \leq 2 \quad \forall w, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (10)
$\sum_{v,t,r} f_{u,v,t,r} \leq 1 \quad \forall u$ (11)
$\sum_{v,t,r} g_{w,v,t,r} \leq 1 \quad \forall w$ (12)
$\sum_{\{u,t,r \mid t \neq v\}} f_{u,v,t,r} + \sum_{\{u,t,r \mid t \neq v\}} f_{u,t,v,r} + \sum_{\{w,t,r \mid t \neq v\}} g_{w,v,t,r} + \sum_{\{w,t,r \mid t \neq v\}} g_{w,t,v,r} \leq 1 \quad \forall v$ (13)
$g_{w,w,t,r} = 0 \quad \forall w, t, r$ (14)
$g_{w,t,w,r} = 0 \quad \forall w, t, r$ (15)
$(g_{w,v,t,r} - 1)\, m + \sum_{\{v',t',r' \mid t' \neq v'\}} g_{v,v',t',r'} \leq 0 \quad \forall w, v, t \neq v, r$ (16)
$(g_{w,t,v,r} - 1)\, m + \sum_{\{v',t',r' \mid t' \neq v'\}} g_{v,v',t',r'} \leq 0 \quad \forall w, v, t \neq v, r$ (17)
$(f_{u,v,t,r} - 1)\, m + \sum_{\{v',t',r' \mid t' \neq v'\}} g_{v,v',t',r'} \leq 0 \quad \forall u, v, t \neq v, r$ (18)
$(f_{u,t,v,r} - 1)\, m + \sum_{\{v',t',r' \mid t' \neq v'\}} g_{v,v',t',r'} \leq 0 \quad \forall u, v, t \neq v, r$ (19)
$(g_{w,v,v,r} - 1)\, z + \sum_{w',v',r'} \left( g_{w',w,v',r'} + g_{w',v',w,r'} \right) + \sum_{u,v',r'} \left( f_{u,w,v',r'} + f_{u,v',w,r'} \right) \leq 0 \quad \forall w, v, r$ (20)
$-d_{i,j,k} + c_{j,i} + c_{k,i} \geq 0 \quad \forall i, j, k$ (21)
$d_{i,j,k} + c_{j,i} + c_{k,i} \leq 2 \quad \forall i, j, k$ (22)
$\sum_i d_{i,j,k} \geq 1 \quad \forall j, k$ (23)
$h_{i,j} = O_j^i \quad \forall i, j \mid O_j^i \in \{0, 1\}$ (24)
Constraints (3)–(6) ensure that the mapping is coherent with the transition function. Constraints (3) and (4) are the result of the linearization of the following constraint:
$f_{u,v,t,r} = 1 \wedge c_{i,r} = 0 \Rightarrow c_{k,u} = h_{j,v} \quad \forall u, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (25)
For each transition $T_j \equiv (S_i, S_k)$, if $P_r = 0$ (i.e., the $r$-th bit of the encoding of the present state, $S_i$, is equal to 0), then the multiplexer connects $O_v$ to $N_u$; therefore, the $u$-th bit of the encoding of the next state, $S_k$, must be equal to the value of the output $O_v$. Similarly, for the cases in which $P_r = 1$, (5) and (6) are obtained from the following constraint:
$f_{u,v,t,r} = 1 \wedge c_{i,r} = 1 \Rightarrow c_{k,u} = h_{j,t} \quad \forall u, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (26)
Constraints (7)–(10) guarantee the coherence with the output function. Constraints (7) and (8) are obtained by the linearization of the following constraint:
$g_{w,v,t,r} = 1 \wedge c_{i,r} = 0 \Rightarrow h_{j,w} = h_{j,v} \quad \forall w, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (27)
If $P_r = 0$, then the multiplexer connects $O_v$ to $O_w$; therefore, they must be equal to each other. Similarly, for the cases in which $P_r = 1$, (9) and (10) are obtained from the following constraint:
$g_{w,v,t,r} = 1 \wedge c_{i,r} = 1 \Rightarrow h_{j,w} = h_{j,t} \quad \forall w, v, t, r,\ \forall T_j \equiv (S_i, S_k)$ (28)
Constraints (11) and (12) ensure that the state-encoding bits and the outputs, respectively, are not mapped more than once (see Figure 3a,b). Constraint (13) guarantees that no output can be used to generate more than one signal by means of a multiplexer, whether a state-encoding bit or an output (see Figure 3c). Constraints (14) and (15) prevent an output from being mapped to itself (see Figure 3d).
Constraints (16) and (17) are the result of the linearization of the constraints
$g_{w,v,t,r} = 1 \Rightarrow \sum_{\{v',t',r' \mid t' \neq v'\}} g_{v,v',t',r'} = 0 \quad \forall w, v, t \neq v, r$ (29)
and
$g_{w,t,v,r} = 1 \Rightarrow \sum_{\{v',t',r' \mid t' \neq v'\}} g_{v,v',t',r'} = 0 \quad \forall w, v, t \neq v, r$ (30)
respectively. They prevent an output that is generated from others via a multiplexer from being used, in turn, as the input of another multiplexer that maps an output (see Figure 4a). Similarly, (18) and (19) are obtained by linearization of the constraints
$f_{u,v,t,r} = 1 \Rightarrow \sum_{\{v',t',r' \mid t' \neq v'\}} g_{v,v',t',r'} = 0 \quad \forall u, v, t \neq v, r$ (31)
and
$f_{u,t,v,r} = 1 \Rightarrow \sum_{\{v',t',r' \mid t' \neq v'\}} g_{v,v',t',r'} = 0 \quad \forall u, v, t \neq v, r$ (32)
respectively. They prevent the mapping of a state-encoding bit in the circumstances described above (see Figure 4b).
Constraint (20) is obtained from constraint
$g_{w,v,v,r} = 1 \Rightarrow \sum_{w',v',r'} \left( g_{w',w,v',r'} + g_{w',v',w,r'} \right) + \sum_{u,v',r'} \left( f_{u,w,v',r'} + f_{u,v',w,r'} \right) = 0 \quad \forall w, v, r$ (33)
which prevents an output that is directly mapped to another from being used to generate a third signal (either an output or a state-encoding bit). Figure 4c,d show the cases of outputs and state-encoding bits, respectively. The aim of these constraints is to prevent a direct mapping from leading to the situations shown in Figure 4e, despite constraints (16) to (19). It might be thought that (20) prevents some valid solutions. For example, supposing that $O_v$ is not a mapped output, the implementation shown on the left of Figure 4c could be valid if it were not prevented by (20), allowing $O_w$ and $O_{w'}$ to be mapped outputs. However, the solver will find the following equivalent solution (which is allowed): $O_v$ and $O_{v'}$ generate $O_{w'}$ via the multiplexer, and $O_v$ also generates $O_w$.
Finally, (21)–(23) guarantee that the code assigned to each state is different from those assigned to the other states. Constraints (21) and (22) ensure that the $i$-th bit of the encoding of $S_j$ is different from the $i$-th bit of the encoding of $S_k$ if $d_{i,j,k} = 1$; that is,
$d_{i,j,k} = 1 \Rightarrow c_{j,i} \neq c_{k,i} \quad \forall i, j, k$ (34)
Constraint (23) guarantees that any pair of states, $S_j$ and $S_k$, differ in at least one bit.
Constraint (24) assigns the values of the outputs that are not don’t cares; these values are fixed in the STT and thus cannot be modified by the solver.
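Continuing the gurobipy sketch above (same assumed names and sizes), a few representative constraints, namely (11), (12), (21)–(23) and (24), and the weighted objective (2) could be written as follows; this is a simplified illustration, not the authors’ complete model:

```python
alpha = 1 + 1 / z            # direct-mapping weight, as defined above

# (11)-(12): each state-encoding bit and each output is mapped at most once.
mdl.addConstrs(
    (gp.quicksum(f[u, v, t, r] for v in range(m) for t in range(m) for r in range(d)) <= 1
     for u in range(d)), name="map_state_bit_once")
mdl.addConstrs(
    (gp.quicksum(g[w, v, t, r] for v in range(m) for t in range(m) for r in range(d)) <= 1
     for w in range(m)), name="map_output_once")

# (21)-(23): every pair of states receives a different code.
for j in range(e):
    for k in range(j + 1, e):
        for i in range(d):
            mdl.addConstr(c[j, i] + c[k, i] >= d_var[i, j, k])
            mdl.addConstr(c[j, i] + c[k, i] <= 2 - d_var[i, j, k])
        mdl.addConstr(gp.quicksum(d_var[i, j, k] for i in range(d)) >= 1)

# (24): fix the specified output values; None marks a don't care (hypothetical pattern).
stt_out = [[1, 0, None, None, 1]] * q
for i in range(q):
    for j in range(m):
        if stt_out[i][j] is not None:
            mdl.addConstr(h[i, j] == stt_out[i][j])

# Objective (2): direct mappings (equal data inputs) are weighted by alpha.
mdl.setObjective(
    gp.quicksum(f[u, v, t, r] for u in range(d) for v in range(m)
                for t in range(m) for r in range(d) if v != t)
    + alpha * gp.quicksum(f[u, v, v, r] for u in range(d) for v in range(m) for r in range(d))
    + gp.quicksum(g[w, v, t, r] for w in range(m) for v in range(m)
                  for t in range(m) for r in range(d) if v != t)
    + alpha * gp.quicksum(g[w, v, v, r] for w in range(m) for v in range(m)
                          for r in range(d) if v > w),
    GRB.MAXIMIZE)

mdl.Params.TimeLimit = 4 * 3600      # 4-hour limit, as in the experiments
mdl.optimize()
print(f"MIP gap: {mdl.MIPGap:.2%}")  # distance of the incumbent from the proven bound
```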

5. Experimental Results

OMOSEBO has been evaluated using FSMs of the MCNC benchmark set [27]. The number of states of these FSMs was previously minimized by STAMINA [28]. All designs were synthesized and implemented using the Xilinx Vivado Design Suite 2022.1 on a Spartan-7 FPGA (xc7s6cpga196-2); therefore, the results include the delay of the placement-and-routing stage. In order to evaluate the effectiveness of OMOSEBO, the implementations obtained using this technique were compared to the corresponding conventional FSM implementations (which will be called CONV). CONV implementations were generated using the Vivado FSM template (see Figure 5, which shows the VHDL code for implementing the FSM example using the FSM template). The proposed ILP formulation was implemented using the Gurobi API for Python [29] and solved with a time limit of 4 h. The optimization was carried out on an Intel i7-10700K at 3.80 GHz with 16 GB of RAM. The HDL description generated after the optimization process uses a modified version of the FSM template that includes a standard description of multiplexers (see Figure 6, which shows the VHDL code for implementing the FSM example by applying OMOSEBO). As the aim of OMOSEBO is to improve the area, Vivado was configured for area optimization using the preconfigured strategies Flow_AreaOptimized_high and Area_Explore for synthesis and implementation, respectively.
Table 1 shows the number of LUTs and the maximum clock frequency (in MHz) obtained by CONV and OMOSEBO. In addition, the columns “LUT Red.” and “Freq. Inc.” show the area reduction and the speed increment, respectively, obtained by OMOSEBO with respect to CONV; thus, positive values correspond to the cases in which the proposed approach was successful (in column “LUT Red.”, these values are shown in bold).
In 53% of cases, the time limit was reached. In half of these cases, the gap (i.e., the measure that indicates how far the obtained solution is from the optimum) was more than 50%. Therefore, the results could be improved by allowing more time.
OMOSEBO required fewer LUTs than CONV in 73% of cases. The average LUT reduction was 11%; if only the successful cases are taken into account, this value increases to 21%. Although OMOSEBO reduced the number of logic functions in all cases, there were three FSMs (20% of the sample) in which the number of used LUTs was greater than that obtained by CONV. This is due to the fact that OMOSEBO encodes the states to reduce the number of logic functions, whereas CONV encodes the states to simplify the logic functions, so there may be cases in which the increase in complexity of the logic functions is not compensated by the reduction in their number. In addition, the don’t care outputs assigned by OMOSEBO may make the simplification process performed by Vivado more difficult.
Regarding the maximum clock frequency, the results do not show a clear correlation between LUT reduction and speed increment (the correlation coefficient is 0.69). Nevertheless, although the goal of the proposed technique is to reduce the area, in 55% of the cases in which OMOSEBO reduced the area, it also increased the speed, obtaining an average speed increment of 15% for such cases. In order to study the relationship between speed increment and area reduction, we obtained the regression line (see Figure 7). Although area and speed are usually conflicting goals, in general terms, the area reduction obtained by OMOSEBO was achieved without degrading the speed; in fact, the regression line shows that the area reduction has a positive impact on the increase in speed.
In order to show the improvement that OMOSEBO represents with respect to conventional column compaction [22], we applied that technique to all studied FSMs. For that, we added the following constraints to the proposed ILP formulation:
$f_{u,v,t,r} = 0 \quad \forall u, v, t, r$ (35)
$g_{w,v,t,r} = 0 \quad \forall w, v, t, r \mid t \neq v$ (36)
Constraint (35) prevents the mapping of state-encoding bits to outputs. Similarly, (36) prevents the mapping of outputs to outputs via multiplexers. Therefore, only direct mapping of outputs (i.e., column compaction) is allowed.
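In the gurobipy sketch above, these extra constraints can be imposed simply by fixing the upper bounds of the corresponding variables to zero; this is an equivalent, hypothetical rendering of constraints (35) and (36), not the authors’ code:

```python
# (35): disable the mapping of state-encoding bits to outputs.
for u in range(d):
    for v in range(m):
        for t in range(m):
            for r in range(d):
                f[u, v, t, r].UB = 0

# (36): disable the mapping of outputs to outputs via multiplexers.
for w in range(m):
    for v in range(m):
        for t in range(m):
            for r in range(d):
                if t != v:
                    g[w, v, t, r].UB = 0
```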
The impact of column compaction with respect to the solutions found by OMOSEBO, which also includes direct mapping of next state bits and mapping via multiplexers, is very limited. Although the optimal value was reached in all cases, column compaction reduced the number of logic functions in only four cases. In the remaining 11 cases, the reduction was due exclusively to direct mapping of next state bits and/or to mapping via multiplexers. Even in those four cases, the number of logic functions obtained by OMOSEBO was lower than that of column compaction for two of them (17% and 27% lower for keyb and mark1, respectively), and equal for the other two (s820 and s832).

6. Conclusions

In this paper, OMOSEBO, a new technique for implementing FSMs in FPGAs, has been presented. It can be viewed as an extension of column compaction that is applied to the state transition function in addition to the output function. Moreover, OMOSEBO also extends the concept of compaction to that of mapping via multiplexers. Therefore, the proposed technique uses three mechanisms to reduce the number of logic functions: direct mapping of outputs (i.e., column compaction), direct mapping of state-encoding bits and mapping via multiplexers, all of which are modeled by a single ILP formulation. Unlike conventional state-encoding techniques, in which states are encoded to simplify the state transition and output functions, OMOSEBO reduces the number of required logic functions.
In order to evaluate the effectiveness of OMOSEBO, experimental results using standard benchmarks have been presented. Regarding column compaction, unlike other similar approaches, the proposed ILP formulation allows one to find optimal solutions. Despite this, in 11 of the 15 studied cases, the reduction in the number of logic functions is due exclusively to direct mapping of next state bits and/or mapping via multiplexers, whereas in only 2 of the studied cases is the reduction due exclusively to column compaction. In conclusion, OMOSEBO represents a significant improvement over column compaction.
The FSM implementations obtained by applying OMOSEBO have been compared to conventional FSM implementations. The results show that OMOSEBO reduces the number of used LUTs with respect to the conventional implementation in 73% of cases.

Author Contributions

Conceptualization, R.S.-N. and I.G.-V.; Methodology, R.S.-N. and I.G.-V.; Software, R.S.-N. and I.G.-V.; Validation, R.S.-N. and I.G.-V.; Formal analysis, R.S.-N. and I.G.-V.; Investigation, R.S.-N. and I.G.-V.; Writing—original draft, R.S.-N. and I.G.-V.; Writing—review & editing, R.S.-N. and I.G.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We would like to thank Gurobi for providing us a free academic license to carry out this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Katz, R.H. Contemporary Logic Design; Benjamin/Cummings: Redwood City, CA, USA, 1994. [Google Scholar]
  2. Barkalov, A.; Titarenko, L. Logic Synthesis for FSM-Based Control Units; Lecture Notes in Electrical Engineering; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  3. Barkalov, A.; Titarenko, L.; Kolopienczyk, M.; Mielcarek, K.; Bazydlo, G. Design of EMB-Based Mealy FSMs. In Logic Synthesis for FPGA-Based Finite State Machines; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 193–237. [Google Scholar] [CrossRef]
  4. Baranov, S. Logic and System Design of Digital Systems; TUT Press: Tallin, Estonia, 2008. [Google Scholar]
  5. Barkalov, A.; Titarenko, L.; Mielcarek, K.; Chmielewski, S. Twofold State Assignment for Mealy FSMs. In Logic Synthesis for FPGA-Based Control Units; Springer: Berlin/Heidelberg, Germany, 2020; pp. 61–90. [Google Scholar]
  6. Barkalov, A.; Titarenko, L.; Mielcarek, K.; Chmielewski, S. Twofold State Assignment for Moore FSMs. In Logic Synthesis for FPGA-Based Control Units; Springer: Berlin/Heidelberg, Germany, 2020; pp. 91–116. [Google Scholar]
  7. El-Maleh, A. A Probabilistic Tabu Search State Assignment Algorithm for Area and Power Optimization of Sequential Circuits. Arab. J. Sci. Eng. 2020, 45, 6273–6285. [Google Scholar] [CrossRef]
  8. da Silva Ribeiro, R.; de Carvalho, R.L.; da Silva Almeida, T. Optimization of state assignment in a finite state machine. Acad. J. Comput. Eng. Appl. Math. 2022, 3, 9–16. [Google Scholar] [CrossRef]
  9. Solov’ev, V. Minimization of mealy finite-state machines by using the values of the output variables for state assignment. J. Comput. Syst. Sci. Int. 2017, 56, 96–104. [Google Scholar] [CrossRef]
  10. Salauyou, V.; Ostapczuk, M. State Assignment of Finite-State Machines by Using the Values of Output Variables. In Theory and Applications of Dependable Computer Systems. Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1173, pp. 91–116. [Google Scholar]
  11. Das, N.; Priya P, A. Reset: A Reconfigurable state encoding technique for FSM to achieve security and hardware optimality. Microprocess. Microsyst. 2020, 77, 103196. [Google Scholar] [CrossRef]
  12. Barkalov, A.A.; Titarenko, L.; Mielcarek, K. Reducing LUT Count for Mealy FSMs With Transformation of States. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2022, 41, 1400–1411. [Google Scholar] [CrossRef]
  13. Aly, W.M. Solving the State Assignment Problem Using Stochastic Search Aided with Simulated Annealing. Am. J. Eng. Appl. Sci. 2009, 2, 703–707. [Google Scholar] [CrossRef] [Green Version]
  14. De Micheli, G.; Brayton, R.; Sangiovanni-Vincentelli, A. Optimal State Assignment for Finite State Machines. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 1985, 4, 269–285. [Google Scholar] [CrossRef] [Green Version]
  15. Villa, T.; Sangiovanni-Vincentelli, A. NOVA: State assignment of finite state machines for optimal two-level logic implementation. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 1990, 9, 905–924. [Google Scholar] [CrossRef] [Green Version]
  16. Mengibar, L.; Entrena, L.; Lorenz, M.; Millan, E. Partitioned state encoding for low power in FPGAs. Electron. Lett. 2005, 41, 948–949. [Google Scholar] [CrossRef]
  17. Chen, D.S.; Sarrafzadeh, M.; Yeap, G. State encoding of finite state machines for low power design. In Proceedings of the 1995 IEEE International Symposium on Circuits and Systems (ISCAS), Seattle, WA, USA, 30 April–3 May 1995; Volume 3, pp. 2309–2312. [Google Scholar] [CrossRef]
  18. Avedillo, M.; Quintana, J.; Huertas, J. State merging and state splitting via state assignment: A new FSM synthesis algorithm. IEEE Proc. Comput. Digit. Tech. 1994, 141, 229–237. [Google Scholar] [CrossRef] [Green Version]
  19. Almaini, A.; Miller, J.; Thomson, P.; Billina, S. State assignment of finite state machines using a genetic algorithm. IEEE Proc. Comput. Digit. Tech. 1995, 142, 279–286. [Google Scholar] [CrossRef] [Green Version]
  20. El-Maleh, A.; Sait, S.; Nawaz Khan, F. Finite state machine state assignment for area and power minimization. In Proceedings of the 2006 IEEE International Symposium on Circuits and Systems, Kos, Greece, 21–24 May 2006; p. 4. [Google Scholar]
  21. Wei, R.S.; Tseng, C.J. Column compaction and its application to the control path synthesis. In Proceedings of the 1987 IEEE International Conference on Computer-Aided Design Digest of Technical Papers, Santa Clara, CA, USA, 9–12 November 1987; Volume 87, pp. 320–323. [Google Scholar]
  22. Binger, D.; Knapp, D. Encoding multiple outputs for improved column compaction. In Proceedings of the 1991 IEEE International Conference on Computer-Aided Design Digest of Technical Papers, Santa Clara, CA, USA, 11–14 November 1991; pp. 230–233. [Google Scholar] [CrossRef]
  23. Mitra, S.; Avra, L.; McCluskey, E. An output encoding problem and a solution technique. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 1999, 18, 761–768. [Google Scholar] [CrossRef]
  24. Le Gal, B.; Ribon, A.; Bossuet, L.; Dallet, D. Area optimization of ROM-based controllers dedicated to digital signal processing applications. In Proceedings of the 2010 18th European Signal Processing Conference, Aalborg, Denmark, 23–27 August 2010; pp. 547–551. [Google Scholar]
  25. Lee, S.; Choi, K. Critical-Path-Aware High-Level Synthesis with Distributed Controller for Fast Timing Closure. ACM Trans. Des. Autom. Electron. Syst. 2014, 19, 1–29. [Google Scholar] [CrossRef]
  26. Xilinx. 7 Series FPGAs Configurable Logic Block: User Guide. 2016. Available online: https://docs.xilinx.com/v/u/en-US/ug474_7Series_CLB (accessed on 17 January 2023).
  27. Yang, S. Logic Synthesis and Optimization Benchmarks User Guide, Version 3.0; Microelectronic Center of North Carolina: Research Triangle, NC, USA, 1991. [Google Scholar]
  28. Rho, J.K.; Hachtel, G.D.; Somenzi, F.; Jacoby, R.M. Exact and heuristic algorithms for the minimization of incompletely specified state machines. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 1994, 13, 167–177. [Google Scholar] [CrossRef]
  29. Gurobi Optimization. Gurobi Optimizer Reference Manual. 2020. Available online: http://www.gurobi.com (accessed on 17 January 2023).
Figure 1. Example of the mapping of state-encoding bits and outputs to outputs: (a) STD, (b) symbolic STT, (c) encoded STT and (d) implementation.
Figure 2. General architecture for mapping state-encoding bits and outputs to outputs.
Figure 3. Solutions that ILP constraints from (11) to (15) prevent: (a) constraint (11), (b) constraint (12), (c) constraint (13) and (d) constraints (14) and (15).
Figure 4. Solutions that ILP constraints from (16) to (20) prevent: (a) constraints (16) and (17), (b) constraints (18) and (19), (c) constraint (20) for outputs, (d) constraint (20) for state encoding bits and (e) an example of the situation that must be prevented.
Figure 5. VHDL code for implementing the FSM example using the FSM template.
Figure 6. VHDL code for implementing the FSM example by applying OMOSEBO.
Figure 7. Speed increment vs. LUT reduction (r represents the correlation coefficient).
Table 1. Implementation results.
FSM      CONV              OMOSEBO           Comparison
         # LUTs  Freq.     # LUTs  Freq.     LUT Red.  Freq. Inc.
                 (MHz)             (MHz)     (%)       (%)
s1       78      358       32      430       59        20
mark1    26      443       15      592       42        34
s27      7       649       5       693       29        7
planet   104     406       85      438       18        8
opus     17      567       14      665       18        17
s510     58      461       50      440       14        −5
s820     77      389       68      396       12        2
ex1      70      415       63      407       10        −2
ex6      20      643       18      627       10        −2
styr     103     376       95      348       8         −7
ex4      15      742       14      716       7         −4
cse      41      426       41      472       0         11
s832     76      379       85      382       −12       1
keyb     40      357       46      348       −15       −3
mc       3       907       4       851       −33       −6
Mean     49      501       42      520       11        5
