Article

Combined Multi-Layer Feature Fusion and Edge Detection Method for Distributed Photovoltaic Power Station Identification

1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, Xi'an 710129, China
4 Engineering Quality Supervision Center of Logistics Support Department of the Military Commission, Beijing 100142, China
5 Huizhou Academy of Space Information Technology, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Huizhou 516006, China
* Author to whom correspondence should be addressed.
Energies 2020, 13(24), 6742; https://doi.org/10.3390/en13246742
Submission received: 29 November 2020 / Revised: 17 December 2020 / Accepted: 18 December 2020 / Published: 21 December 2020
(This article belongs to the Special Issue GIS and Remote Sensing for Renewable Energy Assessment and Maps)

Abstract: Distributed photovoltaic power stations are an effective way to develop and utilize solar energy resources. Using high-resolution remote sensing images to obtain the locations, distribution, and areas of distributed photovoltaic power stations over a large region is important to energy companies, government departments, and investors. In this paper, a deep convolutional neural network was used to extract distributed photovoltaic power stations from high-resolution remote sensing images automatically, accurately, and efficiently. Based on a semantic segmentation model with an encoder-decoder structure, a gated fusion module was introduced to address the problem that small photovoltaic panels are difficult to identify. Further, to address blurred edges in the segmentation results and the tendency of adjacent photovoltaic panels to adhere, this work combines an edge detection network and a semantic segmentation network in a multi-task learning framework to extract the boundaries of photovoltaic panels in a refined manner. Comparative experiments conducted on the Duke California Solar Array data set and a self-constructed Shanghai Distributed Photovoltaic Power Station data set show that, compared with SegNet, LinkNet, UNet, and FPN, the proposed method obtained the highest identification accuracy on both data sets, with F1-scores of 84.79% and 94.03%, respectively. These results indicate that effectively combining multi-layer features with a gated fusion module and introducing an edge detection network to refine the segmentation improves the accuracy of distributed photovoltaic power station identification.

1. Introduction

Renewable energy, including biomass, wind, and solar energy, is sustainable and inexhaustible and plays an important role in addressing the energy crisis. Biomass energy can be converted into Eco-fuels, which have been shown to offer a sustainable energy scenario at the local scale [1]. Wind energy is used mainly to generate electricity through wind turbines. Solar energy is a clean and safe renewable energy source (RES) with strong development potential and application value [2]. Photovoltaic power generation is an effective way to use solar energy [3], and it takes two main forms: Centralized photovoltaic power generation and distributed photovoltaic power generation [4,5]. Centralized photovoltaic power stations are installed primarily in deserts and other open ground areas, and the generated electricity is usually fed into the national public power grid [6], while distributed photovoltaic power stations are generally installed on the tops of buildings and the generated electricity is mainly for the inhabitants' own use [7]. Distributed photovoltaic power stations have advantages such as no restrictions on installed capacity, no occupation of land resources [8], and no pollution. Thus, distributed photovoltaic power generation is an important solar energy development mode that has entered a stage of rapid development and is supported by Chinese policy [9,10]. The International Energy Agency predicts that the world's total renewable energy generation will grow by 50% between 2019 and 2024, with solar photovoltaic generation alone accounting for nearly 60% of the prospective growth; distributed photovoltaic generation is expected to account for approximately half of the growth in total photovoltaic power generation [11].

The installed capacity of distributed photovoltaic power stations is currently growing rapidly. Consequently, the ability to accurately and efficiently acquire the installation locations, distribution, and total area of distributed photovoltaic power stations over a wide range is important to energy companies, governmental departments, and investors. For example, information about distributed photovoltaic power stations can help optimize power system planning [12], and it can be combined with solar irradiance data for building surfaces to predict power generation potential [13]. Moreover, it can support the development of open data and energy systems and facilitate progress in the energy field [14]. However, due to the spontaneity and randomness of distributed photovoltaic power station construction, it is difficult to obtain accurate information on the quantity and distribution of these stations solely from governmental planning records. In addition, because distributed photovoltaic power stations are generally installed on the tops of buildings, investigating their distribution and area manually is difficult. High-resolution remote sensing imagery offers high spatial resolution, high efficiency, and wide coverage, making the automatic identification of large-scale distributed photovoltaic power stations feasible.
Traditional distributed photovoltaic power station identification methods rely mainly on manually designed features and have difficulty accurately obtaining the locations and areas of photovoltaic power stations. Malof [15] pioneered the use of manual features for extracting distributed photovoltaic power stations, proposing a method that first obtains all maximally stable extremal regions (MSERs) [16] from an image and filters out regions with low confidence; color and shape features are then extracted from the remaining candidate regions and classified by a support vector machine (SVM) [17]. However, this method does not obtain photovoltaic panel areas accurately. Later, Malof [18] represented each pixel by the color, texture, and other features in its neighborhood and used a random forest [19] to predict the category of each pixel, but this method also has difficulty accurately obtaining the location and area of photovoltaic panels. Building on [18], Malof [20] cascaded a random forest with a convolutional neural network [21] to identify distributed photovoltaic power stations, although this method still relies on hand-designed features. In a later work, Malof [22] proposed a distributed photovoltaic power station identification model based on the VGG model [23], but its ability to accurately obtain the locations and shapes of photovoltaic panels remains limited.
As deep learning technology has developed, a series of convolutional neural network (CNN) models have been proposed [23,24,25,26,27,28,29,30]. Semantic segmentation based on deep learning can use a CNN, with its strong feature-learning ability, to automatically learn object features from massive amounts of data. Compared with earlier machine learning methods, such as SVMs and random forests, CNNs significantly improve object extraction accuracy. Semantic segmentation technology has been widely applied and has developed rapidly in fields such as medical image segmentation, automatic driving, and video segmentation. Jiang [31] used a CNN model and small data sets to extract the heart and lungs. Zhou [32] proposed the UNet++ model, which has achieved high accuracy in nodule, nuclei, and liver segmentation. In addition to 2D medical image segmentation, 3D fully convolutional neural networks can be used for organ segmentation in CT images [33]. Deep learning has become a robust and effective method for medical image segmentation [34]. In the field of automatic driving, CCNet [35] and ACFNet [36] used spatial context information and class context information, respectively, to segment objects in street scenes, and Gated-SCNN [37] combined shape and semantic information to extract street targets. In addition, to improve segmentation performance in automatic driving, knowledge distillation has been used to retain a model's high precision while reducing computation [38]. For video semantic segmentation, Paul [39] proposed an efficient method that combines a convolutional neural network running on the GPU with optical flow computed on the CPU. Pfeuffer [40] added a recurrent neural network to the video segmentation model to make full use of the temporal information of video sequences and improved segmentation accuracy. Jain [41] proposed a video segmentation model with two input branches that exploits the feature information of the current frame and the context information of the previous frame. Nekrasov [42] proposed a video segmentation algorithm that does not rely on optical flow, further improving efficiency. Beyond natural images, semantic segmentation methods based on fully convolutional network (FCN) [43] models have been widely used for object identification in remote sensing imagery, including road extraction, building extraction, and water extraction. For example, Zhou [44] proposed a road extraction method based on an encoder-decoder structure and series-parallel dilated convolutions. Wu [45] added an attention mechanism to the model of [44], further improving road extraction accuracy. Xu [46] designed a road extraction model based on DenseNet [30] with local and global attention. Gao [47] used a refined residual convolutional neural network to extract roads from high-resolution remote sensing images. Xu [48] used a deep convolutional neural network to extract buildings and optimized the results with guided filters. Yang [49] used DenseNet [30] and a spatial attention module to extract buildings. Huang [50] presented a residual refinement network for building extraction that fuses aerial images and LiDAR point cloud data. Sun [51] proposed a building extraction method combining a multi-scale convolutional neural network and an SVM.
Yu [52] proposed a water body extraction method based on convolutional neural networks that uses both spectral and spatial information from Landsat images. Chen [53] proposed a cascaded superpixel segmentation and convolutional neural network classification method to extract urban water bodies. Li [54] used a fully convolutional network to extract water bodies from GaoFen-2 images with limited training data. Some deep learning-based semantic segmentation methods have also been applied to the identification of distributed photovoltaic power stations. Yuan [55] was the first to introduce an FCN model for distributed photovoltaic power station identification; however, the adopted FCN requires up-sampling by a large factor, which may cause the loss of feature information. Subsequently, SegNet [56] and UNet [57] were used to identify distributed photovoltaic power stations [58,59]. Although the identification results of these models are superior to those of traditional methods, they still miss photovoltaic panels with small areas, and densely installed photovoltaic panels are easily adhered in the results.
To solve the above problems, this paper proposes a distributed photovoltaic power station identification method that combines multi-layer features and edge detection. The main contributions of this paper are as follows:
  • To address the problem that small photovoltaic panels are difficult to recognize, a gated fusion module is introduced into the encoder-decoder model to effectively fuse multi-layer features, which improves the model’s ability to identify small photovoltaic panels.
  • To address the problem of edge blurring, a multi-task learning model that combines edge detection and semantic segmentation is proposed to refine the edges of the segmentation results using feature information of the target edge.
  • Comparative experiments are conducted on the Duke California Solar Array data set [60] and the Shanghai Distributed Photovoltaic Power Station data set, and the results verify the effectiveness of the proposed method.
The remainder of this article is organized as follows. Section 2 introduces the distributed photovoltaic power station identification model designed in this paper, including the encoder-decoder architecture, gated fusion module, and edge detection network. Section 3 presents the experiments and results analysis on the two data sets, including the experimental data, evaluation metrics, experimental settings, and the experimental results. The results are analyzed and compared with those of other methods. Finally, Section 4 concludes this paper.

2. Model Architecture and Design

The model proposed in this paper was composed of a semantic segmentation network and an edge detection network. These two networks were trained in parallel for multi-task learning, as shown in Figure 1. The semantic segmentation network was used to extract the semantic features of photovoltaic panels, and its architecture included an encoder-decoder structure based on UNet, with EfficientNet-B1 [61] as the encoder. In the semantic segmentation network, a gated fusion module was introduced to control the transmission of valuable information, effectively fuse multi-layer features, and improve the ability to identify small photovoltaic panels. The edge detection network was used to extract the edge features of the photovoltaic panels and guide the semantic segmentation network to produce segmentation results with more refined edges, alleviating the problem of blurred and unrefined edges.

2.1. Semantic Segmentation Network with Gated Fusion Multi-Layer Features

A semantic segmentation network was used to extract the semantic features of photovoltaic panels. EfficientNet-B1 was used as the encoder, and a gated fusion module was introduced to effectively fuse multi-layer features.

2.1.1. Encoder and Decoder

This study adopted EfficientNet-B1, which has strong feature representation capabilities, as the encoder for feature extraction. The decoder is the same as that used in the original UNet. The EfficientNet-B1 network structure is shown in Figure 2. Its basic component is the MBConv module, in which a 1 × 1 convolution first changes the number of channels of the input features, followed by a depth-wise convolution; the channel attention mechanism of SENet [62] is then applied, and finally a 1 × 1 convolution reduces the number of channels of the feature maps.
The original UNet encoder structure consists of 5 stages. The feature resolution at each stage is halved relative to the previous stage through down-sampling, and the features of each stage are fused with the corresponding decoder features through skip connections. Following the UNet structure, this paper adopted the output features of Stages 0, 2, 3, 5, and 7 of EfficientNet-B1 as the 5 encoder blocks of our model, as shown in Figure 3, which assumes an input image of size 256 × 256 × 3.
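The paper does not state which implementation was used to tap these stages; as an illustrative sketch only, the timm library can expose EfficientNet-B1 as a multi-stage feature extractor whose five outputs play the role of the encoder blocks described above (timm's stage indexing need not match the Stage 0/2/3/5/7 numbering of Figure 2):

```python
import timm
import torch

# Illustrative only: timm's features_only mode taps one feature map per
# down-sampling stage (output strides 2, 4, 8, 16, 32 for EfficientNet-B1).
encoder = timm.create_model("efficientnet_b1", pretrained=False, features_only=True)

x = torch.randn(1, 3, 256, 256)   # one 256 x 256 RGB image block
features = encoder(x)             # list of 5 multi-layer feature maps
for i, f in enumerate(features):
    print(f"encoder block {i}: {tuple(f.shape)}")
# Spatial sizes for a 256 x 256 input: 128, 64, 32, 16, 8.
```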
The decoder gradually up-samples the low-resolution high-level features to restore the original size of the input image. During up-sampling, the corresponding features of the encoder and decoder are concatenated through skip connections. The decoder block structure is shown in Figure 4. The decoding features are the output of the previous decoder block, and the encoding features are the features passed to the corresponding decoder block through the skip connections. First, the decoding features are up-sampled by a factor of 2 and then concatenated with the encoding features along the channel dimension, so the number of channels of the concatenated features is the sum of the channel numbers of the two inputs. After concatenation and two 3 × 3 convolutional layers, the output features of the decoder block are obtained; these become the input decoding features of the next decoder block.
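A minimal PyTorch sketch of the decoder block just described, assuming bilinear up-sampling and batch normalization (the paper does not specify either choice):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderBlock(nn.Module):
    """Up-sample decoding features by 2x, concatenate the skip-connected
    encoding features, then apply two 3x3 conv-BN-ReLU layers."""
    def __init__(self, dec_channels: int, enc_channels: int, out_channels: int):
        super().__init__()
        in_channels = dec_channels + enc_channels  # channels after concatenation
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, dec_feat, enc_feat):
        dec_feat = F.interpolate(dec_feat, scale_factor=2, mode="bilinear",
                                 align_corners=False)
        x = torch.cat([dec_feat, enc_feat], dim=1)  # channel-wise concatenation
        return self.conv(x)
```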

2.1.2. Gated Fusion Module

Inspired by the research in [63], a gated fusion module was introduced to effectively fuse the multi-layer features and improve the ability to identify small photovoltaic panels. The gated fusion module structure is shown in Figure 5. The inputs are the features of adjacent encoder layers, and the features generated by the gating unit measure the usefulness of the feature at each spatial position. This arrangement controls the transmission of useful information and suppresses the transmission of useless information.
The input to the gated fusion module consists of the features $F_i$ from layer $i$ and the features $F_{i+1}$ from the adjacent layer $i+1$. Because the two inputs differ in feature size and channel number, $F_{i+1}$ is first up-sampled by a factor of 2, and its number of channels is converted to match that of $F_i$. Then, the features are passed through the gating unit $G$ to produce the gated maps $G_i$ and $G_{i+1}$. The output of the gated fusion module is the fused feature $\tilde{F}_i$.
The gating unit $G$ feeds the input features into a 1 × 1 convolution and then obtains the gated features $G_i$ through the sigmoid function, as shown in Equation (1). The gated feature map is used to judge the usefulness of the input features at each spatial position. The gated feature values lie in [0, 1]: a value less than 0.5 (approaching 0) corresponds to useless feature information, whereas a value greater than 0.5 (approaching 1) corresponds to useful feature information. The transfer of useful and useless information is controlled by element-wise multiplication between the gated features and the input features of the gating unit:

$$G_i = \sigma(w_i * F_i), \qquad (1)$$

where $\sigma$ is the sigmoid function, the asterisk ($*$) represents the convolution operation, and $w_i$ is the weight parameter of the convolution.
The entire gated fusion module process is defined in Equation (2). For a position $(x, y)$, when $G_{i+1}(x, y)$ is large and $G_i(x, y)$ is small, $F_{i+1}$ transmits useful information that $F_i$ lacks at this position. When $G_{i+1}(x, y)$ is small or $G_i(x, y)$ is large, the useless information is suppressed to reduce information redundancy:

$$\tilde{F}_i = (1 + G_i) \odot F_i + (1 - G_i) \odot G_{i+1} \odot F_{i+1}, \qquad (2)$$

where $\odot$ denotes element-wise multiplication.
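A minimal PyTorch sketch of Equations (1) and (2), assuming a 1 × 1 convolution aligns the channel number of $F_{i+1}$ with $F_i$ and that each gate produces a single-channel map (both are assumptions; the text does not specify these details):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusionModule(nn.Module):
    """Fuse adjacent encoder features F_i and F_{i+1} with gates (Eqs. 1-2)."""
    def __init__(self, ch_i: int, ch_ip1: int):
        super().__init__()
        self.align = nn.Conv2d(ch_ip1, ch_i, kernel_size=1)  # match channels of F_i
        self.gate_i = nn.Conv2d(ch_i, 1, kernel_size=1)      # w_i in Eq. (1)
        self.gate_ip1 = nn.Conv2d(ch_i, 1, kernel_size=1)    # w_{i+1} in Eq. (1)

    def forward(self, f_i, f_ip1):
        # Up-sample F_{i+1} by a factor of 2 and align its channels with F_i.
        f_ip1 = F.interpolate(f_ip1, scale_factor=2, mode="bilinear",
                              align_corners=False)
        f_ip1 = self.align(f_ip1)
        # Gated feature maps in [0, 1] measure per-position usefulness (Eq. 1).
        g_i = torch.sigmoid(self.gate_i(f_i))
        g_ip1 = torch.sigmoid(self.gate_ip1(f_ip1))
        # Eq. (2): useful information from F_{i+1} fills in where F_i is weak.
        return (1 + g_i) * f_i + (1 - g_i) * g_ip1 * f_ip1
```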

2.2. Combining Edge Detection for Multi-Task Learning

The edge detection network was used to extract the edge features of photovoltaic panels, and it was trained jointly with the semantic segmentation network through multi-task learning so that the model produced segmentation results with refined edges.

2.2.1. Edge Detection Network

Distributed photovoltaic stations are densely distributed, and the identification results of adjacent photovoltaic panels are prone to adhesion. In this paper, edge information extracted by the edge detection network was combined with the semantic segmentation network to ameliorate the problem of edge blurring.
An encoder-decoder structure was adopted in the edge detection network, as shown in Figure 6. The encoder is shared with the semantic segmentation network for feature extraction, and the decoder structure of the edge detection network is the same as that of the semantic segmentation network. The object edge feature information is gradually recovered through multiple up-sampling operations, and the edge features extracted by the encoder are fused through skip connections during up-sampling.
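A structural sketch of the shared-encoder, dual-decoder arrangement (the class and argument names are illustrative; Figure 6 gives the authoritative structure):

```python
import torch.nn as nn

class SegEdgeNet(nn.Module):
    """Shared encoder with parallel segmentation and edge decoders."""
    def __init__(self, encoder, seg_decoder, edge_decoder):
        super().__init__()
        self.encoder = encoder              # EfficientNet-B1 feature extractor
        self.seg_decoder = seg_decoder      # semantic segmentation branch
        self.edge_decoder = edge_decoder    # edge detection branch

    def forward(self, x):
        features = self.encoder(x)          # multi-layer encoder features
        seg = self.seg_decoder(features)    # photovoltaic panel mask logits
        edge = self.edge_decoder(features)  # panel edge logits
        return seg, edge
```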

2.2.2. Loss Function

In the parallel training of the two networks, a semantic segmentation loss function and an edge detection loss function supervise the learning of the semantic and edge features of photovoltaic panels, respectively. The semantic segmentation loss is calculated from the segmentation predictions and segmentation labels, while the edge detection loss is calculated from the edge predictions and edge labels. Both the semantic segmentation and edge detection of photovoltaic power stations are binary classification tasks, and the segmentation and edge labels account for only a small proportion of pixels compared with the background. To avoid sample imbalance problems, a loss function composed of binary cross entropy (BCE) and the Dice loss, namely, BCE + Dice [64,65], is used in both the semantic segmentation network and the edge detection network. During training, the two losses are summed to obtain the total model loss, as shown in the following equation:
$$Loss_{total} = Loss_{seg} + Loss_{edge}, \qquad (3)$$

where $Loss_{total}$ is the total loss function of the proposed model, $Loss_{seg}$ is the loss function of the semantic segmentation network, and $Loss_{edge}$ is the loss function of the edge detection network.
The BCE loss function is shown in Equation (4). The Dice loss function is given by Equation (5).
$$BCE = -\frac{1}{n}\sum_{i=1}^{n}\left[g_i \log p_i + (1 - g_i)\log(1 - p_i)\right], \qquad (4)$$

$$Dice = 1 - \frac{2|G \cap P|}{|G| + |P|} = 1 - \frac{2\sum_{i=1}^{n} g_i p_i}{\sum_{i=1}^{n} g_i^2 + \sum_{i=1}^{n} p_i^2}, \qquad (5)$$

where $n$ represents the number of pixels in the image, $g_i$ represents the value of the $i$-th pixel in the label, $p_i$ denotes the value of the $i$-th pixel in the prediction result map, and $G$ and $P$ denote the label and prediction result map, respectively.
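A minimal sketch of the BCE + Dice loss of Equations (3)-(5), assuming an unweighted 1:1 sum of the two terms (as "BCE + Dice" implies) and predictions already passed through a sigmoid:

```python
import torch

def bce_dice_loss(pred, label, eps: float = 1e-7):
    """BCE + Dice for binary masks; `pred` holds probabilities in [0, 1]."""
    pred, label = pred.reshape(-1), label.reshape(-1)
    bce = -(label * torch.log(pred + eps)
            + (1 - label) * torch.log(1 - pred + eps)).mean()    # Eq. (4)
    dice = 1 - 2 * (label * pred).sum() / (
        (label ** 2).sum() + (pred ** 2).sum() + eps)            # Eq. (5)
    return bce + dice

# Eq. (3): total loss sums the segmentation and edge detection losses.
def total_loss(seg_pred, seg_label, edge_pred, edge_label):
    return bce_dice_loss(seg_pred, seg_label) + bce_dice_loss(edge_pred, edge_label)
```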

3. Experiments and Results Analysis

3.1. Experimental Data

The experimental data in this study consisted of the Duke California Solar Array and Shanghai Distributed Photovoltaic Power Station data sets.
1. Duke California Solar Array data set
This data set is currently the largest manually labelled distributed photovoltaic power station data set, containing images and object boundary coordinates that can be used to train semantic segmentation and object detection algorithms. The images were collected by the United States Geological Survey (USGS) and orthorectified to eliminate distortions caused by the camera and terrain. Each image is 5000 × 5000 pixels with a spatial resolution of 0.3 m and includes three bands: Red, green, and blue. To ensure comparable results, a total of 526 images from Fresno, Modesto, and Stockton were selected and split following SolarMapper [66]: 50% of the images were randomly selected to form the test set, and the remaining 50% were divided into a training set and a validation set at a ratio of 8:2.
Given the limited memory available on the graphics card, the original images in the training set were clipped into 256 × 256 image blocks, and the data were augmented by horizontal and vertical mirroring and a rotation of 90 degrees, yielding a total of 85,448 image blocks for training. Training the edge detection network requires photovoltaic panel edge labels; in this study, the edge labels were derived from the semantic segmentation labels (a sketch of one plausible derivation is given after the data set descriptions). Some sample images, segmentation labels, and edge labels from this data set are shown in Figure 7.
2. Shanghai Distributed Photovoltaic Power Station Data Set
To verify the effectiveness of the proposed method for identifying domestic distributed photovoltaic power stations, the Shanghai Distributed Photovoltaic Power Station data set was constructed. The images were collected from the Songjiang and Pudong New districts in Shanghai. The data set contains 1000 aerial images of size 2048 × 2048 with a spatial resolution of 0.1 m, each including three bands: Red, green, and blue. The images were randomly divided into a training set, a validation set, and a test set at a ratio of 7:1:2. The training set was clipped into 256 × 256 image blocks, and the data were augmented by horizontal and vertical mirroring, rotations of 90, 180, and 270 degrees, and contrast and brightness transformations, yielding a total of 55,560 image blocks for training. Some sample images, segmentation labels, and edge labels from this data set are shown in Figure 8.
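The text states that the edge labels were derived from the segmentation labels but not how; one plausible sketch, assuming a morphological gradient (dilation minus erosion) marks the boundary pixels:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def edge_label_from_mask(mask: np.ndarray, width: int = 1) -> np.ndarray:
    """Derive a binary edge label from a binary segmentation mask.

    A pixel is an edge pixel if it lies in the dilated mask but not in the
    eroded mask, i.e., a morphological gradient of the given width.
    """
    structure = np.ones((2 * width + 1, 2 * width + 1), dtype=bool)
    dilated = binary_dilation(mask.astype(bool), structure=structure)
    eroded = binary_erosion(mask.astype(bool), structure=structure)
    return (dilated & ~eroded).astype(np.uint8)
```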

3.2. Evaluation Metrics

In this study, IoU, precision, recall, and the F1-score were used as evaluation metrics. The IoU is the ratio of the intersection to the union of the predicted area and the labelled area. Precision is the ratio of pixels correctly predicted as positive among all pixels predicted as positive. Recall is the ratio of pixels correctly predicted as positive among all positive pixels. The F1-score combines precision and recall. The four evaluation metrics are calculated as shown in the following equations:
$$IoU = \frac{TP}{TP + FP + FN},$$

$$Precision = \frac{TP}{TP + FP},$$

$$Recall = \frac{TP}{TP + FN},$$

$$F1 = \frac{2 \times Precision \times Recall}{Precision + Recall},$$
where TP (true positive) represents the number of pixels that are both predicted and labelled as positive, FP (false positive) represents the number of pixels that are predicted as positive but labelled as negative, and FN (false negative) represents the number of pixels that are predicted as negative but labelled as positive.
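For reference, a short sketch computing the four metrics from binary prediction and label masks (names are illustrative; it assumes at least one positive pixel so the denominators are non-zero):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, label: np.ndarray):
    """IoU, precision, recall, and F1 from binary masks."""
    pred, label = pred.astype(bool), label.astype(bool)
    tp = np.sum(pred & label)    # predicted and labelled positive
    fp = np.sum(pred & ~label)   # predicted positive, labelled negative
    fn = np.sum(~pred & label)   # predicted negative, labelled positive
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, recall, f1
```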

3.3. Experimental Setting

1. Experimental environment
The computer used in the experiments was equipped with an Ubuntu 16.04.5 LTS operating system, an Intel (R) Xeon (R) E5-2678 v3 CPU, and two NVIDIA TITAN XP graphics cards, each with 12 GB of memory. PyTorch was used to build all the semantic segmentation models.
2. Training strategy and hyperparameter settings
All the models were trained using the Adam optimizer to help ensure fast convergence. The batch size of the input images in each training epoch was 64. The initial learning rate was $1 \times 10^{-3}$, and the learning rate followed a cosine annealing decay strategy with a cycle of 10 and a minimum learning rate of $1 \times 10^{-5}$.
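The exact scheduler variant is not stated beyond cosine annealing with a cycle of 10 and a floor of $1 \times 10^{-5}$; a sketch assuming PyTorch's warm-restart variant with a 10-epoch cycle:

```python
import torch

# Hypothetical placeholder model; the real model is the combined
# segmentation/edge network described in Section 2.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Cosine annealing: cycle length 10 epochs, minimum learning rate 1e-5.
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=10, eta_min=1e-5)

for epoch in range(100):
    # ... one pass over the 256 x 256 training blocks with batch size 64 ...
    scheduler.step()
```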

3.4. Experimental Results

To verify the effectiveness of the proposed method, EfficientNet-B1-UNet was used as the baseline network, and the gated fusion module and edge detection network were then added successively. The experiments used the Duke California Solar Array data set and the Shanghai Distributed Photovoltaic Power Station data set. The experimental results on the Duke California Solar Array data set are shown in Table 1.
On the Duke California Solar Array data set, by adding the gated fusion module, the IoU of the test set was increased from 72.41% to 73.33%, F1 was increased from 84.00% to 84.61%, and recall was increased from 82.64% to 83.24%. By adding the edge detection network, the IoU of the network model was further improved from 73.33% to 73.60% and F1 was improved from 84.61% to 84.79%.
The experimental results of the Shanghai Distributed Photovoltaic Power Station data set are shown in Table 2.
On the Shanghai Distributed Photovoltaic Power Station data set, adding the gated fusion module increased the IoU of the test set from 87.40% to 88.34%, the F1-score from 93.27% to 93.81%, and the recall from 93.47% to 94.08%. After adding the edge detection network, the IoU was further improved to 88.74% and the F1-score to 94.03%.
The added modules improved all four evaluation metrics. This shows that the gated fusion module and edge detection network proposed in this paper can improve the accuracy of distributed photovoltaic panel identification tasks.

3.5. Results Analysis

1. The influence of the gated fusion module on the segmentation results
Figure 9 shows sample images and their segmentation results before and after adding the gated fusion module. The first two rows of images are from the Duke California Solar Array data set, and the last two rows are from the Shanghai Distributed Photovoltaic Power Station data set. The first column is the sample image, the second column is the labelled image, and the third column shows the segmentation results of Effi-UNet. Compared with the labelled image, Effi-UNet failed to detect some small photovoltaic panels. The fourth column shows the segmentation results of Effi-UNet + GFM, revealing that, with the help of the GFM, the network's ability to identify small photovoltaic panels was improved, which verifies the effectiveness of the module.
2. The influence of the edge detection network on the segmentation results
By extracting edge information and conducting multi-task learning of the edge detection and segmentation networks, more refined segmentation results can be generated. In Figure 10, the first two rows of sample images were sourced from the Duke California Solar Array data set, while the last two rows were sourced from the Shanghai Distributed Photovoltaic Power Station data set. The first column is the sample image, and the second column is the segmentation label. The third column shows the Effi-UNet + GFM segmentation results, in which, compared with the segmentation label, adjacent photovoltaic panels are adhered. The fourth and fifth columns show the semantic segmentation results and edge detection results of Effi-UNet + GFM + EDN, respectively, and the sixth column is the edge detection label. With the help of the edge detection network, fine edge results were obtained, distinguishing adjacent photovoltaic panels insofar as possible and alleviating the adhesion problem.

3.6. Comparisons with Other Methods

To further verify the effectiveness of the proposed method, it was compared with SegNet, LinkNet [67], UNet, and FPN [68] on the two adopted data sets. The results and analysis are as follows.

3.6.1. Results on the Duke California Solar Array Data Set

The experimental results of each method on the test set of the Duke California Solar Array data set are shown in Table 3. The proposed method outperformed the other methods on all evaluation metrics: its IoU reached 73.60%, and its F1-score reached 84.79%. Moreover, its IoU was 6.6 percentage points higher than that of SolarMapper [66]. The results can be analyzed as follows: (1) Although LinkNet, UNet, and FPN combine features from different layers, they do not consider the differences between high-level and low-level features, nor do they make full use of object edge information. (2) In this paper, based on an encoder-decoder network, the multi-layer features were fused effectively by the gated fusion module, and useful information was transferred by the gating mechanism, improving the ability to identify small photovoltaic panels. (3) Based on the semantic segmentation network, the proposed method combined an edge detection network for multi-task learning to ameliorate the edge-blurring problem.
Figure 11 shows some of the experimental results of each method on the Duke California Solar Array data set. The segmentation results in the first and second rows show that the method proposed in this paper was better at identifying small photovoltaic panels compared with the other methods. In the segmentation results shown in the third and fourth rows, although each method identified the photovoltaic panel in the image, the method in this paper obtained more refined edges.

3.6.2. Results on the Shanghai Distributed Photovoltaic Power Station Data Set

Table 4 shows the evaluation results of each model on the Shanghai Distributed Photovoltaic Power Station data set, revealing that the proposed method outperformed all the other methods on all evaluation metrics; its IoU reached 88.74%, and its F1-score reached 94.03%. Owing to the encoder-decoder structure, the proposed method effectively fused features from multiple layers, improved the ability to identify small photovoltaic panels, and refined the segmentation edges using the edge detection network. Therefore, compared with the other methods, it achieved higher accuracy.
Figure 12 shows an example of the experimental results of the proposed method and the compared methods on the Shanghai Distributed Photovoltaic Power Station data set. As seen from the results in the first row, the method proposed in this paper was better at identifying small photovoltaic panels, and the identification results were more complete. In the second row, the two separate photovoltaic panels were difficult to identify due to their small sizes. Compared with the other methods, the proposed method not only recognized them but also obtained more refined edges in the identification results. In the third row, multiple photovoltaic panels were close to each other, which was likely to cause adhesion problems in the identification process. Compared with the other methods, with the help of the edge detection network, the identification results of the method proposed in this paper had more refined edges and alleviated the adhesion problem. A comparison of the results in the fourth row shows that the identification results of the proposed method had more refined edges.

4. Conclusions

This paper presented a novel fully convolutional neural network model that automatically extracts distributed photovoltaic power stations from remote sensing imagery. A distributed photovoltaic power station identification method combining multi-layer features and edge detection was proposed to solve two problems: That small photovoltaic panels are difficult to identify and that adjacent photovoltaic panels easily adhere. The model was composed of a semantic segmentation network and an edge detection network. A gated fusion module was introduced into the semantic segmentation network to conduct effective multi-layer feature fusion, and the edge detection network was used to guide the production of segmentation results with refined edges. Experiments on the Duke California Solar Array data set and the Shanghai Distributed Photovoltaic Power Station data set showed that introducing the gated fusion module reduced the number of missed small photovoltaic panels and enhanced identification accuracy. Combining the edge detection network and the semantic segmentation network for multi-task learning used the edge information of the photovoltaic panels to constrain the segmentation results, extracting panels with finer edges and further improving identification accuracy. Compared with SegNet, LinkNet, UNet, and FPN, the proposed method achieved the highest identification accuracy on both data sets, with F1-scores of 84.79% and 94.03%, respectively.
However, this study also has some limitations: (1) Regarding the data source, due to the limitations of the current data sets, the trained model is only applicable to RGB optical images and cannot be directly applied to images containing more bands. (2) Regarding spatial resolution, the method was trained and tested on images with the same spatial resolution; because solar panels appear differently in images of different resolutions, the accuracy is uncertain when the trained model is applied directly to images of other resolutions. (3) Since the training data include only distributed photovoltaic power stations, the trained model cannot be used to identify centralized photovoltaic power stations. Future work will proceed along the following lines: (1) Explore the application of our method to multi-spectral images and further improve segmentation performance with additional spectral information. (2) Collect images of multiple spatial resolutions to train our method so that it can identify distributed photovoltaic power stations at different resolutions. (3) Construct a centralized photovoltaic power station data set and extend our method to the identification of centralized photovoltaic power stations. (4) Combine the extracted distributed photovoltaic power station results with solar radiation data to assess power generation potential.

Author Contributions

Y.J., A.Y. and X.J. designed the network architecture. Y.J. performed the experiments and wrote the paper. X.J. and J.C. (Jingbo Chen) revised the paper. Y.D., J.C. (Jing Chen) and Y.Z. built the data set. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Key Research and Development Project (No. 2017YFC0821900).

Acknowledgments

The authors sincerely thank the editors and reviewers. We also sincerely thank the authors of the Duke California Solar Array data set.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nastasi, B.; de Santoli, L.; Albo, A.; Bruschi, D.; Basso, G.L. RES (Renewable Energy Sources) availability assessments for Eco-fuels production at local scale: Carbon avoidance costs associated to a hybrid biomass/H2NG-based energy scenario. Energy Procedia 2015, 81, 1069–1076.
2. Moriarty, P.; Honnery, D. Feasibility of a 100% Global Renewable Energy System. Energies 2020, 13, 5543.
3. Li, W.; Ren, H.; Chen, P.; Wang, Y.; Qi, H. Key Operational Issues on the Integration of Large-Scale Solar Power Generation—A Literature Review. Energies 2020, 13, 5951.
4. Li, H.; Lin, H.; Tan, Q.; Wu, P.; Wang, C.; De, G.; Huang, L. Research on the policy route of China's distributed photovoltaic power generation. Energy Rep. 2020, 6, 254–263.
5. Xin-gang, Z.; Zhen, W. Technology, cost, economic performance of distributed photovoltaic industry in China. Renew. Sustain. Energy Rev. 2019, 110, 53–64.
6. Yi, T.; Tong, L.; Qiu, M.; Liu, J. Analysis of Driving Factors of Photovoltaic Power Generation Efficiency: A Case Study in China. Energies 2019, 12, 355.
7. Ahmed, R.; Sreeram, V.; Mishra, Y.; Arif, M.D. A review and evaluation of the state-of-the-art in PV solar power forecasting: Techniques and optimization. Renew. Sustain. Energy Rev. 2020, 124, 109792.
8. Mancini, F.; Nastasi, B. Solar energy data analytics: PV deployment and land use. Energies 2020, 13, 417.
9. Han, M.; Xiong, J.; Wang, S.; Yang, Y. Chinese photovoltaic poverty alleviation: Geographic distribution, economic benefits and emission mitigation. Energy Policy 2020, 144, 111685.
10. Xu, M.; Xie, P.; Xie, B.C. Study of China's optimal solar photovoltaic power development path to 2050. Resour. Policy 2020, 65, 101541.
11. International Energy Agency. Renewables 2019. 2019. Available online: https://www.iea.org/reports/renewables-2019 (accessed on 6 February 2020).
12. Lv, T.; Yang, Q.; Deng, X.; Xu, J.; Gao, J. Generation expansion planning considering the output and flexibility requirement of renewable energy: The case of Jiangsu Province. Front. Energy Res. 2020, 8, 39.
13. Nassar, Y.F.; Hafez, A.A.; Alsadi, S.Y. Multi-Factorial Comparison for 24 Distinct Transposition Models for Inclined Surface Solar Irradiance Computation in the State of Palestine: A Case Study. Front. Energy Res. 2020, 7, 163.
14. Manfren, M.; Nastasi, B.; Groppi, D.; Garcia, D.A. Open data and energy analytics: An analysis of essential information for energy system planning, design and operation. Energy 2020, 213, 118803.
15. Malof, J.M.; Hou, R.; Collins, L.M.; Bradbury, K.; Newell, R. Automatic solar photovoltaic panel detection in satellite imagery. In Proceedings of the 2015 International Conference on Renewable Energy Research and Applications (ICRERA), Palermo, Italy, 22–25 November 2015; pp. 1428–1431.
16. Matas, J.; Chum, O.; Urban, M.; Pajdla, T. Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 2004, 22, 761–767.
17. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
18. Malof, J.M.; Bradbury, K.; Collins, L.M.; Newell, R.G. Automatic detection of solar photovoltaic arrays in high resolution aerial imagery. Appl. Energy 2016, 183, 229–240.
19. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
20. Malof, J.M.; Collins, L.M.; Bradbury, K.; Newell, R.G. A deep convolutional neural network and a Random Forest classifier for solar photovoltaic array detection in aerial imagery. In Proceedings of the 2016 IEEE International Conference on Renewable Energy Research and Applications (ICRERA), Birmingham, UK, 20–23 November 2016; pp. 650–654.
21. LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. Handb. Brain Theory Neural Netw. 1995, 3361, 1995.
22. Malof, J.M.; Collins, L.M.; Bradbury, K. A deep convolutional neural network, with pre-training, for solar photovoltaic array detection in aerial imagery. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 22–28 July 2017; pp. 874–877.
23. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
24. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
25. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
26. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167.
27. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
28. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
29. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
30. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
31. Jiang, F.; Grigorev, A.; Rho, S.; Tian, Z.; Fu, Y.; Jifara, W.; Adil, K.; Liu, S. Medical image semantic segmentation based on deep learning. Neural Comput. Appl. 2018, 29, 1257–1265.
32. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A nested U-Net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Cham, Switzerland, 2018; pp. 3–11.
33. Roth, H.R.; Oda, H.; Zhou, X.; Shimizu, N.; Yang, Y.; Hayashi, Y.; Oda, M.; Fujiwara, M.; Misawa, K.; Mori, K. An application of cascaded 3D fully convolutional networks for medical image segmentation. Comput. Med. Imaging Graph. 2018, 66, 90–99.
34. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep learning techniques for medical image segmentation: Achievements and challenges. J. Digit. Imaging 2019, 32, 582–596.
35. Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. CCNet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 603–612.
36. Zhang, F.; Chen, Y.; Li, Z.; Hong, Z.; Liu, J.; Ma, F.; Han, J.; Ding, E. ACFNet: Attentional class feature network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 6798–6807.
37. Takikawa, T.; Acuna, D.; Jampani, V.; Fidler, S. Gated-SCNN: Gated shape CNNs for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 5229–5238.
38. Liu, Y.; Chen, K.; Liu, C.; Qin, Z.; Luo, Z.; Wang, J. Structured knowledge distillation for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2604–2613.
39. Paul, M.; Mayer, C.; Gool, L.V.; Timofte, R. Efficient video semantic segmentation with labels propagation and refinement. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 2873–2882.
40. Pfeuffer, A.; Schulz, K.; Dietmayer, K. Semantic segmentation of video sequences with convolutional LSTMs. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1441–1447.
41. Jain, S.; Wang, X.; Gonzalez, J.E. Accel: A corrective fusion network for efficient semantic segmentation on video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 8866–8875.
42. Nekrasov, V.; Chen, H.; Shen, C.; Reid, I. Architecture search of dynamic cells for semantic video segmentation. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 1970–1979.
43. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
44. Zhou, L.; Zhang, C.; Wu, M. D-LinkNet: LinkNet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction. In Proceedings of the CVPR Workshops, Salt Lake City, UT, USA, 19–21 June 2018; pp. 182–186.
45. Wu, M.; Zhang, C.; Liu, J.; Zhou, L.; Li, X. Towards accurate high resolution satellite image semantic segmentation. IEEE Access 2019, 7, 55609–55619.
46. Xu, Y.; Xie, Z.; Feng, Y.; Chen, Z. Road extraction from high-resolution remote sensing imagery using deep learning. Remote Sens. 2018, 10, 1461.
47. Gao, L.; Song, W.; Dai, J.; Chen, Y. Road extraction from high-resolution remote sensing imagery using refined deep residual convolutional neural network. Remote Sens. 2019, 11, 552.
48. Xu, Y.; Wu, L.; Xie, Z.; Chen, Z. Building extraction in very high resolution remote sensing imagery using deep learning and guided filters. Remote Sens. 2018, 10, 144.
49. Yang, H.; Wu, P.; Yao, X.; Wu, Y.; Wang, B.; Xu, Y. Building extraction in very high resolution imagery by dense-attention networks. Remote Sens. 2018, 10, 1768.
50. Huang, J.; Zhang, X.; Xin, Q.; Sun, Y.; Zhang, P. Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network. ISPRS J. Photogramm. Remote Sens. 2019, 151, 91–105.
51. Sun, G.; Huang, H.; Zhang, A.; Li, F.; Zhao, H.; Fu, H. Fusion of multiscale convolutional neural networks for building extraction in very high-resolution images. Remote Sens. 2019, 11, 227.
52. Yu, L.; Wang, Z.; Tian, S.; Ye, F.; Ding, J.; Kong, J. Convolutional neural networks for water body extraction from Landsat imagery. Int. J. Comput. Intell. Appl. 2017, 16, 1750001.
53. Chen, Y.; Fan, R.; Yang, X.; Wang, J.; Latif, A. Extraction of urban water bodies from high-resolution remote-sensing imagery using deep learning. Water 2018, 10, 585.
54. Li, L.; Yan, Z.; Shen, Q.; Cheng, G.; Gao, L.; Zhang, B. Water body extraction from very high spatial resolution remote sensing data based on fully convolutional networks. Remote Sens. 2019, 11, 1162.
55. Yuan, J.; Yang, H.H.L.; Omitaomu, O.A.; Bhaduri, B.L. Large-scale solar panel mapping from aerial images using deep convolutional networks. In Proceedings of the 2016 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 5–8 December 2016; pp. 2703–2708.
56. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
57. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015.
58. Camilo, J.; Wang, R.; Collins, L.M.; Bradbury, K.; Malof, J.M. Application of a semantic segmentation convolutional neural network for accurate automatic detection and mapping of solar photovoltaic arrays in aerial imagery. arXiv 2018, arXiv:1801.04018.
59. Castello, R.; Roquette, S.; Esguerra, M.; Guerra, A.; Scartezzini, J.L. Deep learning in the built environment: Automatic detection of rooftop solar panels using Convolutional Neural Networks. J. Phys. Conf. Ser. 2019, 1343, 012034.
60. Bradbury, K.; Saboo, R.; Johnson, T.L.; Malof, J.M.; Devarajan, A.; Zhang, W.; Collins, L.M.; Newell, R.G. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification. Sci. Data 2016, 3, 1–9.
61. Tan, M.; Le, Q.V. EfficientNet: Rethinking model scaling for convolutional neural networks. arXiv 2019, arXiv:1905.11946.
62. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
63. Li, X.; Zhao, H.; Han, L.; Tong, Y.; Yang, K. GFF: Gated fully fusion for semantic segmentation. arXiv 2019, arXiv:1904.01803.
64. Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
65. Patravali, J.; Jain, S.; Chilamkurthy, S. 2D-3D fully convolutional neural networks for cardiac MR segmentation. In International Workshop on Statistical Atlases and Computational Models of the Heart; Springer: Cham, Switzerland, 2017; pp. 130–139.
66. Malof, J.M.; Li, B.; Huang, B.; Bradbury, K.; Streltsov, A. Mapping solar array location, size, and capacity using deep learning and overhead imagery. arXiv 2019, arXiv:1902.10895.
67. Chaurasia, A.; Culurciello, E. LinkNet: Exploiting encoder representations for efficient semantic segmentation. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4.
68. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
Figure 1. Structure of the proposed model.
Figure 2. Structure of EfficientNet-B1.
Figure 3. Encoder structure in the proposed model.
Figure 4. Structure of the decoder blocks.
Figure 5. Structure of the gated fusion module.
Figure 6. Structure of the edge detection network.
Figure 7. Samples from the Duke California Solar Array data set: (a) Image; (b) segmentation label; (c) edge label.
Figure 8. Samples from the Shanghai Distributed Photovoltaic Power Station data set: (a) Image; (b) segmentation label; (c) edge label.
Figure 9. Result samples before and after adding GFM: (a) Image; (b) label; (c) Effi-UNet results; (d) Effi-UNet + GFM results.
Figure 10. Result samples before and after adding the edge detection network: (a) Image; (b) segmentation label; (c) Effi-UNet + GFM results; (d) Effi-UNet + GFM + EDN segmentation results; (e) Effi-UNet + GFM + EDN edge detection results; (f) edge label.
Figure 11. Sample results of each method on the Duke California Solar Array data set: (a) Image; (b) label; (c) SegNet; (d) LinkNet; (e) UNet; (f) FPN; (g) our method.
Figure 12. Sample results of each method on the Shanghai Distributed Photovoltaic Power Station data set: (a) Image; (b) label; (c) SegNet; (d) LinkNet; (e) UNet; (f) FPN; (g) our method.
Table 1. Experimental results of each improved module on the Duke California Solar Array data set (%).
Methods                  IoU     Precision  Recall  F1
Effi-UNet                72.41   85.40      82.64   84.00
Effi-UNet + GFM          73.33   86.03      83.24   84.61
Effi-UNet + GFM + EDN    73.60   86.17      83.45   84.79
Effi-UNet represents UNet, which uses EfficientNet-B1 as the encoder; GFM represents the gated fusion module, and EDN represents the edge detection network.
Table 2. Experimental results from successively improved models on the Shanghai distributed photovoltaic power station data set (%).
Methods                  IoU     Precision  Recall  F1
Effi-UNet                87.40   93.08      93.47   93.27
Effi-UNet + GFM          88.34   93.54      94.08   93.81
Effi-UNet + GFM + EDN    88.74   93.88      94.19   94.03
Table 3. Accuracy of each method on the Duke California Solar Array data set (%).
Methods       IoU     Precision  Recall  F1
SegNet        66.97   83.48      77.20   80.22
SolarMapper   67.00   –          –       –
LinkNet       69.23   83.60      80.11   81.82
UNet          70.28   83.83      81.30   82.54
FPN           71.11   84.79      81.50   83.11
Our method    73.60   86.17      83.45   84.79
Table 4. Accuracy of each method on the Shanghai Distributed Photovoltaic Power Station data set (%).
Methods       IoU     Precision  Recall  F1
SegNet        85.32   91.97      92.19   92.08
LinkNet       85.96   92.29      92.62   92.45
UNet          86.32   92.43      92.89   92.66
FPN           86.77   92.70      93.14   92.92
Our method    88.74   93.88      94.19   94.03
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
