Article

Research on the Multi-Screen Connection Interaction Method Based on Regular Octagon K-Value Template Matching

1 School of Computer Science, Xi’an Polytechnic University, Xi’an 710048, China
2 China Tobacco Chongqing Industrial Co., Ltd., Qianjiang Cigarette Factory, Chongqing 409000, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(8), 1528; https://doi.org/10.3390/sym14081528
Submission received: 17 June 2022 / Revised: 12 July 2022 / Accepted: 22 July 2022 / Published: 26 July 2022

Abstract
Manufacturing companies that adopt CPSS integrate data from across their production processes and present it on large data visualization screens. To help such enterprises follow data trends across the whole production process more conveniently, this paper proposes a method for establishing connection and interaction between the visualization screen and a mobile terminal based on a regular octagon K-value template matching algorithm. The preprocessed image captured by the mobile phone is matched against the large-screen image to determine the region of the screen that the photo covers, and the detailed information of the chart components contained in that region is returned to the phone for the user to observe and analyze. The matching algorithm achieves good matching accuracy and time efficiency. Application cases and related surveys in enterprises also demonstrate the practicality of this method to a certain extent.

1. Introduction

As technology advances and society develops, problems involving engineering, system, and social complexity arise. The cyber-physical-social system (CPSS) has become a solution to such complex problems [1]. CPSS integrates diverse resources and values in an interconnected, complex world to help enterprises in industrial systems manage and operate intelligently. In the context of the new collaborative manufacturing network [2], with CPS communication networks crossing the interaction boundary between the physical world and the network [3], manufacturing companies use today’s advanced IoT technologies, such as radio frequency identification (RFID) and smart sensors, to feed data from every manufacturing operation into a unified data management platform. After processing by artificial intelligence and machine learning algorithms, the data is turned into valuable information and displayed in the form of data visualization. The quality of data information depends largely on how it is expressed. Data visualization processes and analyzes digital data and renders the results visually; its essence is visual dialogue, conveying and communicating data information clearly and effectively through graphical means. On the one hand, data visualization uses simple charts to present complex information and transfer it very quickly. On the other hand, visual analysis of multidimensional data reveals multiple attributes of an object or event. Enterprises can therefore use data visualization to extract information from data quickly and harvest its value. Data visualization on large screens also alleviates the limited display capacity of conventional displays.
The amount and breadth of information available to users has thus improved to a certain extent. However, even a large screen is limited and must display as much effective information as possible; vast amounts of data cause information stacking, which interferes with users’ processing of the information. Simply displaying data on a large screen is a one-way, passive way for users to obtain information, and their understanding of it remains shallow. Establishing interaction with the data visualization screen enables users to manipulate and understand data through the visualization backend system, alleviates the contradiction between limited visualization space and data overload, and helps users understand and analyze data better.
Therefore, this paper determines the location on the large screen of a picture taken with a mobile phone camera by matching the captured image against the image displayed on the data visualization screen, and then establishes the connection between the user’s phone and the screen. Returning chart data details to the phone helps users understand the data content of the part they are interested in and improves their insight into the information and their analysis efficiency.
The rest of this paper is arranged as follows: the second part briefly reviews the state of research on interaction techniques for data visualization screens; the third part outlines the overall framework for connection and interaction between the mobile terminal and the data visualization screen; the fourth part discusses how the regular octagon K-value template matching algorithm determines the position of the image captured by the mobile camera within the large-screen image; the fifth part tests the practical effect of template matching for establishing connection and interaction through two groups of experiments and manufacturing enterprise application cases; the last part summarizes the main contributions of this paper and directions for future research and improvement.

2. Related Work

In the CPSS environment, a data visualization screen enables users in manufacturing companies to understand data more comprehensively and to interact with it. This section briefly reviews research on interaction methods for large visualization screens, which fall into two categories: direct interaction with the large screen using special devices or gestures and body postures, and indirect interaction by connecting multiple screens.

2.1. Direct Interaction with Large Data Visualization Screens

Current scholars use the input capabilities of tangible or intangible devices to interact directly with large data visualization screens and change their displayed content. Yvonne Jansen used a customizable tangible remote control to interact directly with the large visualization screen, manipulating the remote-control buttons to change the display and helping users focus more easily on the visual display during interaction [4]. Alex Olwal enabled direct, precise, and fast interaction on large digital displays by leveraging the higher visual and input resolution of small, coarsely tracked mobile devices [5]. Peng Song combined the natural user interface of multi-touch large-screen displays with the unrestricted physical flexibility of handheld devices featuring multi-touch and 3D tilt sensing to control the large-screen display directly [6]. Kelvin Cheng allowed the user to interact with the large-screen display by adding a virtual touchscreen between the user and the display, which the user taps with their fingers [7]. Alireza Sahami Shirazi introduced Flashlight Interaction, a new way of interacting between the phone and the big screen based on the phone camera flashlight; the direct mapping between phone movements and on-screen responses makes interactions easy to execute and understand [8]. However, using purpose-built devices to control large-screen displays is extremely costly and suffers from a shortage of interactive devices and from interference when multiple devices are used together [9,10,11]. Markus L. Wittorf focused on the technical possibilities of mid-air gestures for large-screen interaction, describing user-defined mid-air gestures in which different gestures directly control the large screen for different interactions [12].
Raimund Dachselt introduced a set of throwing gestures to transmit media data to the large-screen display and interact with its displayed content [13]. Joshua Reibert introduced a multi-touch gesture vocabulary for interacting with parallel-coordinate plots on large-screen displays; the vocabulary helps solve typical analysis tasks that would otherwise require out-of-reach interactions, assisting users in better understanding what is displayed [14]. Garth Shoemaker designed and implemented a novel set of body-centered interaction techniques used for mobile phone user feedback when interacting with a large-screen display [15]. Yoshio Matsuda proposed a user interface that allows multiple users to access information interactively and operate the interface simultaneously on a large-screen display; users interact with the display through gestures, so many users can easily use the system from a distance [16]. However, the recognition accuracy of gesture-based manipulation is generally low, and the sensitivity of interaction-behavior recognition is poor, which is inconvenient for exploring data and information on the large screen [17,18]. In many situations, direct interaction with the large visualization screen makes it inconvenient for users to observe information: interaction operations interfere with each other, and the recognition response to interaction behaviors is slow.

2.2. Indirect Interaction with Large Data Visualization Screens

Indirect interaction with the large data visualization screen mainly controls a small, easy-to-view screen with the help of multi-screen interaction techniques and thereby interacts with the large screen indirectly. Takuma Hagiwara developed CamCutter, a cross-device interaction technique that lets users point the camera of a handheld device at another screen to quickly select and share applications running on it, enabling real-time, synchronized application sharing between devices as a way to connect and interact with the visual big screen [19]. Tim Paek addressed the conflicting trends of providing more information on larger displays and leveraging the mobility of smaller devices by proposing a platform that allows multiple users to use their own personal mobile devices (e.g., mobile phones, laptops, or wireless PDAs) to access and interact with large shared displays, thus combining the advantages of both domains [20]. The multi-screen interaction approach does not change the data display of the original visualization screen and does not affect other people’s observation and analysis. Moreover, cross-screen transfer lets users save and analyze the content they are interested in, removing restrictions of time and place. Ballagas pioneered interaction with large visualization displays by taking photos [21]. Yuan’s team first introduced this approach to graph visualization and proposed an efficient matching algorithm that determines the focal region of the graph topology from the photos taken while establishing the connection and interaction between the mobile phone and the data visualization big screen [22]. However, the subgraph node matching algorithm used to establish the connection suits images composed of nodes, whereas data visualization screens mostly display chart images; the chart images would first have to be processed into node-based images before the final focal region could be determined. Such processing involves tedious steps and wastes time unnecessarily.
In this paper, a method for establishing connection and interaction between a mobile terminal and a data visualization screen is proposed for manufacturing enterprises that use such screens to monitor the data generated across the production process panoramically, assisting the observation and analysis of the screen’s displayed content. The method is mainly oriented toward chart images: it matches the image captured by the mobile phone against the large-screen image to establish the connection and interaction between the phone and the screen. The user photographs the content of interest on the large screen with the phone; after the captured image is preprocessed, the interaction server matches it against the large-screen image with the help of the fast regular octagon K-value template matching algorithm, determines the focus area captured in the large screen, and returns the image information contained in that area to the phone, completing the establishment of the connection and interaction between the phone and the large data visualization screen.

3. General Framework of Interaction

The goal of establishing connection and interaction between the mobile phone and the large data visualization screen is to let users in manufacturing enterprises understand data quickly, conveniently, and comprehensively. It promotes multi-user collaborative analysis and helps users grasp the changes in indicators throughout the production process quickly, prevent risks promptly, and reduce unnecessary property losses. Current techniques for establishing connection and interaction with the visualization screen mainly rely on special equipment or on gestures and body postures. These methods are more or less costly, do not support simultaneous operation by multiple users, and require excluding complex interference factors. To let multiple users establish connection and interaction with the data visualization screen at the same time and obtain the information they are interested in, this paper proposes a general framework for establishing connection and interaction between the mobile phone and the screen through camera photography, with the help of the regular octagon K-value template matching algorithm, as shown in Figure 1.
The interaction device captures information from the interaction target and sends responses to it, while the interaction server coordinates communication and information processing between the two. Within the same LAN, each device in the interaction space is identified by a URI according to the WebSocket protocol transmission rules. The partial chart image of the large screen taken by the phone and the complete image displayed on the large screen are transmitted simultaneously to the interaction server for template matching, and the matching result is returned to the interaction device and the interaction target, respectively. During the interaction process, the interaction handler is mainly responsible for communication and data exchange across devices. The event handler is primarily responsible for responding to user interaction, tracking and parsing the changing screenshots, and applying the server’s changes to the large-screen display content. The image processing program handles the captured images, performing tasks such as moiré removal and noise reduction. Fast image template matching refers to the template matching between the large-screen image and the image taken by the phone, which finally yields the matching result. The server component is independent of the interaction target and the interaction device, so it can easily be extended to multiple targets and multiple users.
The goal of fast image template matching is to match the captured image against the entire image displayed on the large data visualization screen and determine the position of the phone’s picture within the big-screen image. The matching process is divided into two stages: coarse matching and fine matching. In the coarse matching stage, the regular octagon K-value template quickly filters out regions of the image where no match is possible. In the fine matching stage, each pixel in the remaining candidate regions is searched, and similarity is calculated using a modified NCC algorithm based on differential ordered arrays to determine the final image content most similar to the captured image. Finally, the image area with the highest matching degree is marked on the large data visualization screen with a marker box, and the detailed data of the chart image contained in the box is returned to the mobile phone.
The user first scans the QR code attached to the data visualization screen to obtain permission to communicate with the interaction server. The user then photographs the chart of interest on the interaction target (the data visualization screen) with the interaction device (the mobile phone). After processing by the image processing program, the image is passed to the interaction server by the interaction handler in the interaction device. At the same time, the event handler in the interaction target captures the full-screen layout image of the data visualization screen and passes it to the interaction server via the interaction target’s interaction handler. The fast image matching algorithm in the interaction server determines the specific location of the captured image within the interaction target and obtains detailed information about the chart image in the captured focus area. The interaction handler in the server then passes the location information to the interaction target and the chart details to the interaction device, completing the establishment of the interaction. Interaction on the interactive device falls into three main categories: selection, comparison, and annotation. Selection mainly includes querying the specific value of an element and following links to more content. Comparison primarily refers to comparing the chart’s actual values with added descriptive statistics, such as mean values and target predicted values, to observe how the data deviates. Annotation refers to the automatic generation of detailed summaries of chart information from the chart details through template-based natural language generation techniques.
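The routing role of the interaction server described above can be sketched in code. This is a minimal illustration only: the class and method names (`InteractionServer`, `handle_capture`, `MatchResult`) are hypothetical, the matcher is a pluggable stand-in for the matching algorithm of Section 4, and real deployments would exchange these messages over WebSocket connections rather than in-memory lists.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    region: tuple          # (x, y, w, h) of the focus area on the big screen
    chart_details: list    # detailed data of the charts inside the region

class InteractionServer:
    """Sketch of the server that mediates between phone and big screen."""

    def __init__(self, matcher):
        self.matcher = matcher    # callable: (photo, screenshot) -> MatchResult
        self.target_log = []      # messages that would go to the big screen
        self.device_log = []      # messages that would go to the phone

    def handle_capture(self, photo, screenshot):
        """Route one capture: match it, then notify target and device."""
        result = self.matcher(photo, screenshot)
        # The big screen only needs the region in order to draw the marker box,
        self.target_log.append({"mark_region": result.region})
        # while the phone receives the chart details for analysis.
        self.device_log.append({"charts": result.chart_details})
        return result
```

A dummy matcher that always reports a fixed region is enough to exercise the routing: the region message ends up in the target's queue and the chart details in the device's queue, mirroring the split described in the text.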
Users thus use interactive devices to assist in observing the large data visualization screen, compensating for the lack of flexibility and interactivity of static viewing with dynamic visual observation.

4. Fast Image Template Matching

To determine the position of the captured image on the large data visualization screen, one can resort to image matching algorithms. Template matching, one such method, aims to find the part of a test image that is similar to a template image. The main idea is to create a template from the target prototype, compute the similarity of candidate regions of the test image against that template, and select the region with the highest similarity as the final matching result.
The three main factors in traditional image template matching are the matching data type, the similarity metric, and the search strategy. The matching data type divides into template matching that finds matching points from image gray-value information and template matching that finds them from image features. The similarity metric measures the similarity between the template image and the target image and is usually expressed as a cost function, such as MAD (mean absolute difference), MSD (mean squared difference), Euclidean distance, Hausdorff distance, or relative entropy. The search strategy refers to the rule the template image follows as it is matched repeatedly against the target image; common strategies include the linear search strategy, the pyramid hierarchical search strategy, and the genetic algorithm search strategy. These three factors primarily determine the complexity of the whole template matching algorithm, the accuracy of the matching result, the matching time, and the robustness of the algorithm.
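Two of the cost functions named above are simple enough to state directly. The following sketch computes MAD and MSD between a template and an equally sized search window; the function names are illustrative, and in both metrics a lower value means a better match.

```python
import numpy as np

def mad(template, window):
    """Mean absolute difference between two equally sized patches."""
    return float(np.mean(np.abs(template.astype(float) - window.astype(float))))

def msd(template, window):
    """Mean squared difference between two equally sized patches."""
    return float(np.mean((template.astype(float) - window.astype(float)) ** 2))
```

In a linear search strategy these functions would be evaluated at every candidate window position, and the position minimizing the cost would be taken as the match.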
An encoding algorithm based on the pixel distribution sequence property (PFC) has been proposed; the method divides the target image and the template image into small non-overlapping blocks, sorts all the blocks of the template image, encodes the ordering relation between blocks of adjacent gray values, and then completes the similarity comparison between template and target by checking the encoded values [23]. Although the encoding method reduces matching time compared with traditional methods, it is highly sensitive to illumination, and illumination is rarely uniform and constant; even subtle local illumination changes affect the ordering relation of adjacent gray-value blocks. Moreover, the encoding method is unsuitable when the shooting angle is rotated: the rectangular search frame cannot contain all the small blocks to be matched at the same time, so the method lacks rotation invariance.
Therefore, this paper proposes a rotation-invariant regular octagon K-value template matching method built on this approach. The whole template matching process is divided into coarse matching and fine matching. The coarse matching stage uses the rotation-invariant regular octagon K-value template to screen candidate regions; the fine matching stage then performs fine-grained, one-by-one matching in the candidate regions using the improved normalized cross-correlation (NCC) algorithm based on differential ordered arrays, and the region with the highest similarity metric is taken as the template matching result.

4.1. Rotation-Invariant Regular Octagon K-Value Template

In the coarse matching stage of template matching, this paper used a rotation-invariant regular octagon K-value template. The regular octagonal frame is rotation-invariant, which excludes the influence of small changes in shooting angle on the results, while the K-value template reduces the influence of illumination by tolerating slight changes in gray value caused by light, lowering its interference with the accuracy of the results.
Because users photograph the large visualization screen from different angles, the captured template image is often rotated, producing a difference in orientation between the template image and the large-screen image. To solve this problem, this paper chose a regular octagonal template candidate box and computed regional features inside it to screen regions effectively and exclude candidates with low similarity. Unlike the commonly used circular and rectangular templates, the regular octagonal template avoids the circular template’s difficulty in computing regional features effectively while remaining rotation-invariant. The rectangular template computes regional features easily but is not rotation-invariant, so its results are easily affected by the shooting angle: the same regional features cannot be contained in a candidate box of the same size at the same time, which significantly decreases the accuracy of the results. Since every vertex of a polygon inscribed in a circle lies on the circle, the regular octagonal template retains the rotation invariance of the circular template. The regular octagonal candidate frame consists of a square and a diamond (the same square rotated by 45 degrees) of equal side length superimposed in the specific style shown in Figure 2 below, which retains the complete regional feature values and facilitates computing the mean, variance, and gradient within the candidate frame [24].
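The octagonal region described above can be constructed as the intersection of an axis-aligned square and the same square rotated by 45 degrees (a diamond). The sketch below builds such a boolean mask with NumPy; the function name and the pixel-grid discretization are illustrative, and the continuous-geometry conditions are |x| ≤ h, |y| ≤ h, and |x| + |y| ≤ √2·h measured from the centre.

```python
import numpy as np

def octagon_mask(size):
    """Boolean mask of a regular octagon inscribed in a size x size window.

    The octagon is the intersection of the axis-aligned square and the same
    square rotated by 45 degrees (a diamond), which cuts off the four corners.
    """
    h = (size - 1) / 2.0                     # half-width from the centre
    y, x = np.mgrid[0:size, 0:size]
    x = x - h
    y = y - h
    square = (np.abs(x) <= h) & (np.abs(y) <= h)
    diamond = (np.abs(x) + np.abs(y)) <= np.sqrt(2.0) * h
    return square & diamond
```

The resulting mask keeps the centre, removes the square's corners, and is unchanged under 90-degree rotation, which is the property the coarse matching stage relies on.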
The influence of light during photography interferes with the determination of image gray values and can exclude true matching points, affecting the accuracy of the matching results. To solve this problem, this paper used the image K-value clustering matching method to exclude unmatched candidate regions during the coarse matching stage. The method divides the target image into R_blocks of a certain size and calculates their average gray values, then clusters the regular octagonal template into a K-value template TK according to the gray distribution of the matching target and builds a K-value template set from TK. The K-value template is then applied to the search image in coarse matching to check whether the template and the sub-images corresponding to the blocks lie in the same gray level [25].
Suppose the average gray value of the i-th R_block is Ei; the set {Ei, i = 1, 2, 3, …, n} can be divided into K classes by clustering. The size of K is determined according to different grading criteria for image grayness. Experiments show that when K ≥ 4, choosing the K-value template instead of the 256-level template has little effect on matching accuracy, so the method is practical [26]. All R_blocks in the same gray-level class are replaced by the intermediate gray value stored in FK(i), and the K-value template TK is called the basic template. Using the K-value template for the matching operation enhances the robustness of the matching algorithm, and because it tolerates a certain range of variation in the local gray levels of the image, the matching accuracy improves significantly.
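The block-wise K-value quantization can be sketched as follows. This is an illustration under simplifying assumptions: equal-width gray bins stand in for the clustering step of the paper, the function name `k_value_template` is hypothetical, and each block's mean gray value Ei is replaced by the mid value of its class (the role of FK(i)).

```python
import numpy as np

def k_value_template(image, block=4, k=4):
    """Quantize an image into K gray classes over non-overlapping blocks.

    Each block's mean gray value Ei is assigned to one of K equal-width gray
    classes and the whole block is replaced by that class's mid gray value.
    """
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    edges = np.linspace(image.min(), image.max() + 1e-9, k + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0      # FK(i): class mid values
    for i in range(0, h, block):
        for j in range(0, w, block):
            e = np.mean(image[i:i + block, j:j + block])   # Ei
            cls = min(int(np.searchsorted(edges, e, side="right")) - 1, k - 1)
            out[i:i + block, j:j + block] = mids[cls]
    return out
```

After quantization at most K distinct gray values remain, so small illumination-induced shifts within a class no longer change the template, which is the fault tolerance the text describes.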
To avoid the heavy computation and long running time of traditional algorithms, this paper used the integral image as the decisive index for excluding mismatched regions during coarse matching. In an integral image, the value at each pixel equals the sum of the gray values of all pixels above and to the left of (and including) that pixel. The image in the candidate box is divided into four regions labeled A, B, C, and D, with the pixel in the lower right corner of each region labeled a, b, c, and d. The integral-image values at these four pixels are sum(A), sum(A + B), sum(A + C), and sum(A + B + C + D), so the sum over region D is sum(A + B + C + D) − sum(A + C) − sum(A + B) + sum(A); the method is demonstrated in Figure 3 below. Since the regular octagonal K-value template box is composed of two squares superimposed at different angles, the octagon was first completed into the two squares of equal side length, and the integral images of the two square regions were computed separately. The gray value of each pixel was taken as the middle gray value of the gray class to which the pixel belongs. The mean of the two regional integral-image sums was used as the final measure for judging whether the regions match.
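The integral-image trick above can be demonstrated in a few lines. This sketch (function names illustrative) computes the integral image with cumulative sums and then recovers any rectangular region sum in constant time using the four-corner formula from the text.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img over all pixels at or above-left of (y, x)."""
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] in O(1) via the four-corner formula:
    sum(A+B+C+D) - sum(A+B) - sum(A+C) + sum(A)."""
    total = ii[y1, x1]
    top = ii[y0 - 1, x1] if y0 > 0 else 0
    left = ii[y1, x0 - 1] if x0 > 0 else 0
    corner = ii[y0 - 1, x0 - 1] if (y0 > 0 and x0 > 0) else 0
    return int(total - top - left + corner)
```

Once the integral image is built, screening a candidate box costs four lookups instead of summing every pixel inside it, which is what makes the coarse stage fast.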

4.2. Improved NCC Based on a Differential Summation of Ordered Arrays

The difference is defined as follows: let the variable y depend on the independent variable x. When x changes from x to x + 1, the change Dy(x) = y(x + 1) − y(x) in the dependent variable y = y(x) is called the difference of the function y(x) with step 1 at the point x.
The difference summation method applies to two one-dimensional arrays f1(x) and f2(x), x = 1, 2, 3, …, K, of the same length K. The sum of the products of the two arrays equals the sum of the products of the difference array of f1 with the cumulative-sum array of f2, which gives the following equation [27]:
∑_{x=1}^{K} f1(x) f2(x) = ∑_{x=1}^{K} F1(x) F2(x)    (1)
where
F1(x) = f1(x) − f1(x + 1)    (2)
F2(x) = F2(x − 1) + f2(x)    (3)
F2(0) = 0    (4)
f1(K + 1) = 0    (5)
Because the array f1(x) is ordered, its difference array F1(x) contains a large number of 0s and 1s, and the multiplications corresponding to these values can be skipped. The required computation is thereby greatly reduced, and the computing speed improves.
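The identity can be checked numerically. The sketch below (function name illustrative) computes the inner product via the difference array of f1 and the cumulative-sum array of f2 exactly as in Equations (1)-(5); for simplicity it lets NumPy perform all products, whereas the speedup in the text comes from skipping the products where F1(x) is 0 or 1.

```python
import numpy as np

def diff_sum_product(f1, f2):
    """Compute sum(f1 * f2) via the difference-summation identity:
    sum_{x=1..K} f1(x) f2(x) = sum_{x=1..K} F1(x) F2(x), where
    F1(x) = f1(x) - f1(x+1)  (with f1(K+1) = 0) and
    F2(x) = F2(x-1) + f2(x)  (with F2(0) = 0, i.e. a cumulative sum)."""
    f1 = np.asarray(f1, dtype=np.int64)
    f2 = np.asarray(f2, dtype=np.int64)
    F1 = f1 - np.append(f1[1:], 0)   # difference array of f1
    F2 = np.cumsum(f2)               # cumulative-sum array of f2
    # When f1 is sorted, many F1 entries are 0 or 1, and those products
    # could be skipped; here we compute them all to verify the identity.
    return int(np.sum(F1 * F2))
```

For example, with f1 = [5, 3, 3, 1] (sorted descending) and f2 = [2, 4, 1, 7], F1 = [2, 0, 2, 1] and F2 = [2, 6, 7, 14], and both sides of the identity equal 32; the zero entry of F1 contributes nothing.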
The traditional normalized cross-correlation (NCC) matching algorithm is based on grayscale information. Since its final result lies in the range 0 to 1, setting a threshold on the result quantifies the comparison and judges the match quality, so the NCC algorithm is usually the preferred algorithm in image matching. The NCC algorithm is shown in Equation (6):
NCC(i, j) = ∑_{m=1}^{M} ∑_{n=1}^{N} [S_{i,j}(m, n) − S̄_{i,j}] [T(m, n) − T̄] / √( ∑_{m=1}^{M} ∑_{n=1}^{N} [S_{i,j}(m, n) − S̄_{i,j}]² · ∑_{m=1}^{M} ∑_{n=1}^{N} [T(m, n) − T̄]² )    (6)
where T̄ is the mean value of the matching template and S̄_{i,j} is the mean value of the search image S under the current window (i, j). Let T′(m, n) = T(m, n) − T̄; since the mean of the matching template can be computed once in advance, the NCC formula can be rewritten as Equation (7):
NCC(i, j) = [ ∑_{m=1}^{M} ∑_{n=1}^{N} S_{i,j}(m, n) T′(m, n) − S̄_{i,j} ∑_{m=1}^{M} ∑_{n=1}^{N} T′(m, n) ] / √( ∑_{m=1}^{M} ∑_{n=1}^{N} [S_{i,j}(m, n) − S̄_{i,j}]² · ∑_{m=1}^{M} ∑_{n=1}^{N} [T(m, n) − T̄]² )    (7)
Since ∑_{m=1}^{M} ∑_{n=1}^{N} T′(m, n) = 0 (the deviations from the mean sum to zero), the NCC equation finally reduces to Equation (8):
NCC(i, j) = ∑_{m=1}^{M} ∑_{n=1}^{N} S_{i,j}(m, n) T′(m, n) / √( ∑_{m=1}^{M} ∑_{n=1}^{N} [S_{i,j}(m, n) − S̄_{i,j}]² · ∑_{m=1}^{M} ∑_{n=1}^{N} [T(m, n) − T̄]² )    (8)
In the simplified NCC formula, the numerator is a convolution between the template and the sub-image and therefore fits the applicable range of the difference summation formula, which simplifies the convolution operation and saves time. First, the pixels in the template are sorted, and the difference of the sorted array is taken. Because the template contains many identical gray values, the resulting difference array contains many 0s and 1s, and the multiplications corresponding to these two values can be skipped, reducing the computational effort.
To reduce the influence of the K-value template on matching accuracy and to ensure the accuracy of the result marker box, only the rectangular template is used as the matching box in fine matching. Since the initial template is fixed, the sorting and difference summation are performed on it only once, and the time consumed has minimal impact on the whole matching process. The texture of the large-screen visualization image is relatively sparse: gray values change at the edges of the charts, where the difference summation generates non-zero difference values, while most other areas change little and contain many pixels with the same gray value, where the difference summation yields many 0 and 1 values. The number of operations of the normalized correlation algorithm simplified with differential ordered arrays is thus reduced substantially, and the operation speed improves.

5. Application Cases and Experiments

The positive octagonal K-value template matching method proposed in this paper was evaluated through two sets of experiments and an application case. The first set of experiments used multiple chart images as test images for fast template matching and compared the positive octagonal K-value template method with the traditional K-value template matching method and the normalized cross-correlation (NCC) matching method, testing the effectiveness of the positive octagonal K-value template in terms of matching time and accuracy. The second set of experiments simulated the data visualization scenario by photographing a computer screen and determining the position of the photograph within the screen image to establish the connection interaction. Finally, the practical effect of establishing connection interaction through template matching was tested through application cases in data visualization projects.

5.1. Improved NCC Based on a Differential Summation of Ordered Arrays

5.1.1. Positive Octagonal K-Value Template Matching Efficiency Testing Experiment

The positive octagonal K-value template method proposed in this paper has certain advantages in counteracting the effects of scale, rotation, and illumination. Experiment 1 therefore compared it with the less illumination-sensitive K-value template matching method and with the illumination-sensitive NCC method to evaluate its matching time and accuracy. The target matching results are shown in Figure 4, and the matching time and accuracy results are shown in Table 1.
As shown in Table 1, the positive octagonal K-value template matching method had the shortest matching time and the highest matching accuracy in the image matching experiments, outperforming both the traditional K-value template matching method and the NCC matching method.

5.1.2. Mobile Phone and Visualization Large Screen Simulation Matching Experiment

Experiment 2 simulated a mobile phone photographing a chart image shown on a data visualization large-screen display. After part of the chart image displayed on a computer monitor was photographed with a mobile phone, the photograph was preprocessed with noise reduction and brightness adjustment. Following moiré removal, template matching against the image on the computer monitor determined the location of the photographed region within the monitor image. The results are shown in Figure 5, and the comparison of matching time and matching accuracy between the positive octagonal K-value template method, the K-value template matching method, and the NCC matching method is shown in Table 2.
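The paper does not specify the preprocessing beyond noise reduction and brightness adjustment; a minimal sketch of such a step (the function name and the 3×3 mean filter are our assumptions, not the authors' pipeline) might look like:

```python
import numpy as np

def preprocess_photo(gray):
    """Hypothetical pre-matching cleanup for a phone photo of a screen:
    a 3x3 mean filter for simple noise reduction, then a linear
    brightness/contrast stretch to the full 0-255 range."""
    g = gray.astype(np.float64)
    h, w = g.shape
    # 3x3 mean filter via edge-padded shifted sums (simple denoising)
    p = np.pad(g, 1, mode="edge")
    den = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # linear stretch so the darkest pixel maps to 0 and the brightest to 255
    lo, hi = den.min(), den.max()
    if hi > lo:
        den = (den - lo) * 255.0 / (hi - lo)
    return den.round().astype(np.uint8)
```

A real pipeline would likely add perspective correction and moiré suppression before matching, but those steps are not described in the source.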
As can be seen from Table 2, the positive octagonal K-value template matching method achieved better results in both matching time and matching accuracy in the simulated large-screen photographing experiments. The results also indicate that lighting and picture-angle rotation have a strong influence on the matching results.

5.2. Application Cases

5.2.1. Assist in Observing Visualization Project Cases

In order to better control the whole manufacturing process, manufacturing companies usually use the IoT to access various data on the production floor, achieving panoramic real-time monitoring that improves productivity. A data visualization large screen displays these data in real time, giving insight into abnormal data trends, enabling timely detection of potential security risks, and reducing the probability of risk occurrence. Because of the division of labor at all levels of an enterprise, staff usually only need to observe the data relevant to them. However, the large screen cannot be directly manipulated during observation and analysis, giving rise to the problems discussed previously. Using mobile phones as tools to view portions of the large screen enables multiple users to access the data they each require. Since the data displayed on the visualization screens of manufacturing enterprises are mainly in the form of charts, this case focuses on the interaction between the mobile phone and the visualization screen through chart images.
In this paper, the proposed positive octagonal K-value template matching method to establish the connection and interaction between the data visualization screen and the mobile phone was applied to manufacturing enterprises’ quality data integration and visualization analysis platform [28]. After photographing the chart image on the data visualization screen, the mobile phone and the data visualization screen established a connection through quick image matching. The mobile phone was operated to interact to assist the observation of the data visualization screen, as shown in Figure 6.
As shown in Group A, using the selection function, the mobile phone photographed a line graph of the profitability of each product and established a connection with the data visualization screen, after which the specific profitability values of each product could be viewed in detail on the phone, allowing the business user to check whether the values were at the expected level. The mobile phone then photographed a table of the monitoring list and established a connection with the data visualization screen; on the phone, the alarm-record button could be selected, and the user could view the details of each alarm record through a pop-up link and summarize the experience to reduce the number of alarms. As shown in Group B, using the comparison function, after the mobile phone photographed the basic line graph of the total daily production of the workshop and established the connection with the data visualization screen, the details of that line graph were displayed on the phone. The enterprise user could select the average-value and target-value buttons to add representative reference data; by comparing these with the data in the original graph, deviations and problems can be found in time to ensure the plan is completed on schedule. As shown in Figure 6C, using the annotation function, the mobile phone photographed the stacked bar chart of the number of product inspections in each workshop and established a connection with the data visualization screen, after which the details of the stacked bar chart were displayed on the phone.
At the same time, based on the acquired details of the stacked bar chart, the system used natural language generation technology to produce a templated textual description of the chart.

5.2.2. Case Evaluation

In order to evaluate the application effect of this method for data observation in manufacturing enterprises, online questionnaires were randomly sent to workshop supervisors, workshop technicians, data analysts, data visualization technicians, and controllers of the data visualization screens in manufacturing enterprises that use such screens for observing data; a total of 56 valid responses were received. The questionnaire asked whether the method of using K-value template matching to establish the interaction between the visualization screen and the mobile phone was helpful for data observation, whether the matching time and matching accuracy were satisfactory, and whether the time needed to discover key information was shortened, in order to understand how participants used the mobile phone and the visualization screen together for linked observation. Participants filled out the questionnaires according to their satisfaction with the method after use. The completed questionnaires were collected, and the results were tallied and presented as bar graphs for analysis, as shown in Figure 7.
According to the results shown in the bar chart, most participants agreed that establishing the connection between the visualization large screen and the mobile phone through positive octagonal K-value template matching helped data observation, and they were largely satisfied with the matching time and accuracy. They further agreed that the time needed to discover key information after establishing the interaction was indeed shortened to some extent. While collecting the questionnaires, we also received suggestions on appropriately processing the data information after the interaction to aid analysis. Future improvements will focus on further shortening the matching time, improving the matching accuracy, and processing information after the interaction.

5.2.3. Case Comparison Evaluation

In order to evaluate the advantages of the proposed connection interaction method in terms of the time needed to discover effective data information on the data visualization screen and the amount discovered, we recruited 20 enterprise workers who often observe data through the visualization screen and divided them into two groups, A and B. Group A discovered effective data information only by observing the data visualization screen directly. Group B used mobile phones to photograph parts of the screen and established the connection interaction based on the positive octagonal K-value template matching algorithm, observing the effective data information on the screen with the help of their phones. The screen contained 10 pieces of valid data information; the number of pieces discovered by each user within five minutes is shown in Table 3.
Comparing the number of pieces of information discovered by the two groups in the same time shows that establishing the connection interaction between the data visualization large screen and the mobile phone through the positive octagonal K-value template matching algorithm significantly improves both the speed and the amount of effective information that users discover.

6. Conclusions

In the context of CPSS and CMNs, this paper proposed a method based on a positive octagonal K-value template matching algorithm to establish the connection and interaction between the data visualization large screen and the mobile phone, which increases the convenience of information interaction and sharing among the many types of users of collaborative manufacturing data. The matching algorithm has good rotational invariance and robustness to illumination. The combination of coarse and fine matching not only shortens the time required for matching but also improves its accuracy. Experimental results and case studies demonstrated the effectiveness of the method in practical applications. However, the method still has some limitations: the matching accuracy needs further improvement, and the information obtained after the interaction needs proper processing to better support observation. In the future, the algorithm will be improved in terms of robustness and post-interaction information processing, helping manufacturing companies understand and analyze production data more efficiently and intelligently and keep firm control of data throughout the production process.

Author Contributions

Conceptualization, investigation, L.C., S.Z. and C.L.; data curation, S.Z.; writing—original draft preparation, L.C.; writing—review and editing, L.C., S.Z. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jiang, P.; Leng, J.; Ding, K.; Gu, P.; Koren, Y. Social manufacturing as a sustainable paradigm for mass individualization. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2016, 230, 1961–1968. [Google Scholar] [CrossRef]
  2. Leng, J.; Jiang, P. Evaluation across and within collaborative manufacturing networks: A comparison of manufacturers’ interactions and attributes. Int. J. Prod. Res. 2018, 56, 5131–5146. [Google Scholar] [CrossRef]
  3. Frazzon, E.M.; Hartmann, J.; Makuschewitz, T.; Scholz-Reiter, B. Towards socio-cyber-physical systems in production networks. Procedia Cirp 2013, 7, 49–54. [Google Scholar] [CrossRef]
  4. Jansen, Y.; Dragicevic, P.; Fekete, J.D. Tangible remote controllers for wall-size displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 2865–2874. [Google Scholar]
  5. Olwal, A.; Feiner, S. Spatially aware handhelds for high-precision tangible interaction with large displays. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, Cambridge, UK, 16–18 February 2009; pp. 181–188. [Google Scholar]
  6. Song, P.; Goh, W.B.; Fu, C.W.; Meng, Q.; Heng, P.A. WYSIWYF: Exploring and annotating volume data with a tangible handheld device. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 1333–1342. [Google Scholar]
  7. Cheng, K.; Takatsuka, M. Estimating virtual touchscreen for fingertip interaction with large displays. In Proceedings of the 18th Australia Conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments, Sydney, Australia, 20–24 November 2006; pp. 397–400. [Google Scholar]
  8. Shirazi, A.S.; Winkler, C.; Schmidt, A. Flashlight interaction: A study on mobile phone interaction techniques with large displays. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, Bonn, Germany, 15–18 September 2009; pp. 1–2. [Google Scholar]
  9. Cheng, K.; Pulo, K. Direct interaction with large-scale display systems using infrared laser tracking devices. In Proceedings of the Asia-Pacific Symposium on Information Visualisation, Adelaide, Australia, 27–29 January 2003; Volume 24, pp. 67–74. [Google Scholar]
  10. Shizuki, B.; Hisamatsu, T.; Takahashi, S.; Tanaka, J. Laser pointer interaction techniques using peripheral areas of screens. In Proceedings of the Working Conference on Advanced Visual Interfaces, Venice, Italy, 23–26 May 2006; pp. 95–98. [Google Scholar]
  11. Mardanbegi, D.; Hansen, D.W. Mobile gaze-based screen interaction in 3D environments. In Proceedings of the 1st Conference on Novel Gaze-Controlled Applications, Karlskrona, Sweden, 26–27 May 2011; pp. 1–4. [Google Scholar]
  12. Wittorf, M.L.; Jakobsen, M.R. Eliciting mid-air gestures for wall-display interaction. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction, Gothenburg, Sweden, 23–27 October 2016; pp. 1–4. [Google Scholar]
  13. Dachselt, R.; Buchholz, R. Natural throw and tilt interaction between mobile phones and distant displays. In Proceedings of the CHI’09 Extended Abstracts on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 3253–3258. [Google Scholar]
  14. Reibert, J.; Riehmann, P.; Froehlich, B. Multitouch Interaction with Parallel Coordinates on Large Vertical Displays. Proc. ACM Hum. Comput. Interact. 2020, 4, 1–22. [Google Scholar] [CrossRef]
  15. Shoemaker, G.; Tsukitani, T.; Kitamura, Y.; Booth, K.S. Body-centric interaction techniques for very large wall displays. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, Reykjavik, Iceland, 16–20 October 2010; pp. 463–472. [Google Scholar]
  16. Matsuda, Y.; Komuro, T. Dynamic layout optimization for multi-user interaction with a large display. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; pp. 401–409. [Google Scholar]
  17. Matsumura, K. Studying User-Defined Gestures Toward Off the Screen Interactions. In Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces, Funchal, Portugal, 15–18 November 2015; pp. 295–300. [Google Scholar]
  18. Vogel, D.; Balakrishnan, R. Distant freehand pointing and clicking on very large, high resolution displays. In Proceedings of the 18th annual ACM Symposium on User Interface Software and Technology, Seattle, WA, USA, 23–26 October 2005; pp. 33–42. [Google Scholar]
  19. Hagiwara, T.; Takashima, K.; Fjeld, M.; Kitamura, Y. CamCutter: Impromptu Vision-Based Cross-Device Application Sharing. Interact. Comput. 2019, 31, 539–554. [Google Scholar] [CrossRef]
  20. Paek, T.; Agrawala, M.; Basu, S.; Drucker, S.; Kristjansson, T.; Logan, R.; Toyama, K.; Wilson, A. Toward universal mobile interaction for shared displays. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, Chicago, IL, USA, 6–10 November 2004; pp. 266–269. [Google Scholar]
  21. Ballagas, R.; Rohs, M.; Sheridan, J.G. Sweep and point and shoot: Phonecam-based interactions for large public displays. In Proceedings of the CHI’05 Extended Abstracts on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 1200–1203. [Google Scholar]
  22. Chen, S.; Wu, H.; Lin, Z.; Guo, C.; Lin, L.; Hong, F.; Yuan, X. Photo4Action: Phone camera-based interaction for graph visualizations on large wall displays. J. Vis. 2021, 24, 1083–1095. [Google Scholar] [CrossRef]
  23. Li, Q.; Zhang, C. A fast image grayscale based matching algorithm. J. Softw. 2006, 17, 216–222. [Google Scholar] [CrossRef]
  24. Liu, B.; Shu, X.; Wu, X. Fast screening algorithm for rotation invariant template matching. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3708–3712. [Google Scholar]
  25. Liu, J. Research on Fast Algorithms for Image Template Matching; Central South University: Changsha, China, 2007. [Google Scholar]
  26. Sha, S.; Jianer, C.; Sanding, L. A fast matching algorithm based on K-degree template. In Proceedings of the 2009 4th International Conference on Computer Science & Education, Beijing, China, 8–11 August 2009; pp. 1967–1971. [Google Scholar]
  27. Wang, B. A fast algorithm for grayscale image moments based on differential moment factors. J. Comput. 2005, 28, 1367–1375. [Google Scholar]
  28. Chen, L.; Mengting, W. Intelligent workshop quality data integration and visual analysis platform design. Comput. Integr. Manuf. Syst. 2021, 27, 1641–1649. [Google Scholar]
Figure 1. General interaction framework. The general framework for users to establish connection and interaction with the data visualization screen through mobile phone photography mainly contains three parts: interaction device (mobile phone), interaction server, and interaction target (data visualization screen).
Figure 2. Schematic diagram of the square, octagonal template.
Figure 3. Demonstration diagram of integral image area calculation. The image on the left is an original image divided into four regions: A, B, C, D. The pixel point at a corresponds to the value sum(A) in the integral image, the pixel point at b corresponds to sum(A + B), the pixel point at c corresponds to sum(A + C), and the pixel point at d corresponds to sum(A + B + C + D). The sum of the grayscale values of all pixel points in region D is therefore sum(A + B + C + D) − sum(A + B) − sum(A + C) + sum(A).
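The region-sum identity in Figure 3 can be verified with a short snippet (the zero-padded integral-image layout is a common convention and is our assumption here):

```python
import numpy as np

# Zero-padded integral image: ii[r, c] = sum of img[:r, :c].
img = np.arange(16).reshape(4, 4)
ii = np.zeros((5, 5), dtype=np.int64)
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four integral-image lookups,
    mirroring sum(D) = sum(A+B+C+D) - sum(A+B) - sum(A+C) + sum(A)."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

assert region_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```

Any rectangular region sum thus costs four lookups regardless of the region's size, which is what makes the integral image useful during matching.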
Figure 4. Graphical image template matching result graph.
Figure 5. Simulation matching result between cell phone and visualization large screen.
Figure 6. Chart image matching interaction example.
Figure 7. Bar chart of the questionnaire evaluation results.
Table 1. Comparison of experimental results of chart image template matching.

Testing Index      | Positive Octagonal K-Value Template Matching | K-Value Template Matching | NCC Matching
Matching Time (ms) | 485                                          | 1067                      | 9073
Matching Accuracy  | 0.984                                        | 0.973                     | 0.982
Table 2. Comparison of experimental results of simulated chart image template matching.

Testing Index      | Positive Octagonal K-Value Template Matching | K-Value Template Matching | NCC Matching
Matching Time (s)  | 5.83                                         | 11.52                     | 93.64
Matching Accuracy  | 0.868                                        | 0.523                     | 0.165
Table 3. Quantitative statistics table of mining information.

User          | A | B
1             | 4 | 8
2             | 5 | 7
3             | 5 | 8
4             | 6 | 7
5             | 5 | 8
6             | 4 | 7
7             | 5 | 10
8             | 6 | 9
9             | 5 | 8
10            | 5 | 8
Average Value | 5 | 7.9
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
