Article

Smart Decision-Support System for Pig Farming

1 China-Singapore International Joint Research Institute, Guangzhou 510700, China
2 School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
3 Webank, Shenzhen 518052, China
4 Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
5 Faculty of Applied Science, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
* Authors to whom correspondence should be addressed.
Drones 2022, 6(12), 389; https://doi.org/10.3390/drones6120389
Submission received: 4 November 2022 / Revised: 23 November 2022 / Accepted: 28 November 2022 / Published: 30 November 2022

Abstract

There are multiple participants, such as farmers, wholesalers, retailers, and financial institutions, involved in the modern food production process. All of these participants and stakeholders have a shared goal, which is to gather information on the food production process so that they can make appropriate decisions to increase productivity and reduce risks. However, real-time data collection and analysis remain difficult tasks, particularly in developing nations, where agriculture is the primary source of income for the majority of the population. In this paper, we present a smart decision-support system for pig farming. Specifically, we first adopt rail-based unmanned vehicles to capture pigsty images. We then conduct image stitching to avoid double-counting pigs, so that we can use an image segmentation method to produce precise masks for each pig. Based on the segmentation masks, the pig weights can be estimated, and the data can be integrated into our developed mobile app. The proposed system enables the above participants and stakeholders to access real-time data and intelligent analysis reports that support their decision-making.

1. Introduction

Substantial risk and uncertainty exist in the agricultural production process and the associated supply chain [1]. For instance, unpredictable virus spread may seriously harm pig growth conditions, resulting in large-scale pig illness. These operational risks may damage the financial wellbeing of the farmer, credit provider, and insurer [2].
To make informed decisions, a smooth flow of modern agricultural information is necessary [3], which requires the collaboration of multiple participants and stakeholders. These include farmers, who serve as the industry’s primary producers; distributors and merchants who store, package, transport, and distribute the harvest to consumers; financial institutions that provide insurance and oversee capital allocation; and regulatory agencies responsible for ensuring food safety, environmental sustainability, and financial stability. Each stakeholder has an interest in participating in and monitoring the production process. For instance, a farmer may track pig weights to be aware of their growth conditions; banks may use the data to identify nonperforming loans; and regulatory agencies may adopt the data to evaluate environmental impacts. However, collecting and distributing such real-time agricultural information with traditional methods would be costly.
Making decisions from out-of-date information may have negative effects [4]. The recent epidemic of African swine fever in China is a case in point. The reliance on manual data collection and reporting, which was inefficient and prone to human error (whether deliberate or not), slowed government responses and allowed the virus to spread throughout China in just eight months [5]. Dr. Defa Li, Fellow of the Chinese Academy of Engineering, put the estimated economic cost of the epidemic at 1 trillion Yuan [6]. Moreover, insurance companies also incurred significant losses due to the outbreak [7].
Through the use of Artificial Intelligence (AI) [8], Internet-of-Things (IoT) technologies [9,10], and big data analysis [11], the next wave of the agricultural revolution promises to reduce the aforementioned operational risks and promote the flow of information [12,13]. However, the introduction of such technologies in developing nations in Southeast Asia and elsewhere faces a number of particular difficulties, including the predominance of small and medium-sized farms, unusual farm conditions, inadequate capital support, and a lack of skilled management. The current situation of a few farms in Guangdong, China, is shown in Figure 1. We caution the reader that these are small farms and that the farmers are using low-cost designs. While industrialized economies have had the most success with smart farming and IoT in agriculture, the problems in developing nations have received less attention.
In this paper, we propose a smart pig farming support system. The platform collects data on agricultural productivity using cutting-edge sensors, analyzes the data automatically and interactively, and helps different stakeholders make well-informed decisions. To address the specific challenges in developing countries, we propose using unmanned vehicles with sensors to monitor the growth conditions of pigs on a pig farm. Specifically, the unmanned vehicles can be installed at the top of pigsties on fixed rails, from which pig images can be taken. The captured images are further processed and analyzed with machine learning techniques, performing image stitching, pig instance segmentation, and weight estimation. Since the data can be captured and analyzed automatically, we are able to monitor livestock farming in real time.
Our contributions can be summarized as follows:
  • We propose to use unmanned vehicles based on fixed rails to capture pigsty images, which are low-cost and easy to maintain.
  • We propose to apply state-of-the-art AI techniques to conduct data analysis, including image stitching, pig segmentation and weight estimation.
  • We propose to develop an app for data fusion, which integrates the collected and analyzed information for stakeholders’ visualization.
In the following sections, we first review the related works (Section 2), including image stitching, pig segmentation, and pig weight estimation methods. Then, in Section 3, we describe each of our proposed smart pig farming system components. We further give experimental results in Section 4.

2. Related Work

2.1. Image Stitching

Image stitching is defined as combining two or more images of the same scene into one full image, which can be called a panoramic image [14]. Because they are robust against scene movement and fast to process, feature-based techniques are adopted by many existing works [15,16]. These methods use properly designed features, such as SIFT [17], to establish relationships between the input images, which are then used to uncover the overlapping areas between them.
The idea of the feature-based image stitching method is to build correspondences for points, lines, edges, etc. In order to produce robust stitching results, it is of great significance to adopt scale-invariant and translation-invariant detectors. There are many prevailing feature detectors [18], including SIFT [17], ORB [19], KAZE [20], and AKAZE [21]. To be specific, SIFT [17] is designed based on the Difference-of-Gaussians (DoG) operator and is robustly invariant to image rotations and scales. Alcantarilla et al. [20] exploit non-linear scale space through non-linear diffusion filtering and propose the KAZE features, which reduce image noise. AKAZE [21] is an improved version of KAZE; since it uses the modified local difference binary (MLDB) algorithm, AKAZE is a computationally efficient framework. In this paper, we empirically use SIFT for image stitching, since it gives better results than the other approaches.
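As a rough illustration of how these detectors can be compared side by side, the following Python sketch instantiates SIFT, ORB, KAZE, and AKAZE with OpenCV and counts the key points each returns. The image path and the OpenCV version requirement (>= 4.4, where SIFT ships in the main package) are assumptions for illustration, not details from the paper.

```python
# A minimal sketch (not part of the paper's code) comparing the feature
# detectors discussed above; assumes opencv-python >= 4.4.
import cv2

def detect_keypoints(image_path: str):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    detectors = {
        "SIFT": cv2.SIFT_create(),
        "ORB": cv2.ORB_create(),
        "KAZE": cv2.KAZE_create(),
        "AKAZE": cv2.AKAZE_create(),
    }
    results = {}
    for name, det in detectors.items():
        # detectAndCompute returns (keypoints, descriptors) for each detector.
        keypoints, descriptors = det.detectAndCompute(img, None)
        results[name] = (keypoints, descriptors)
        print(f"{name}: {len(keypoints)} keypoints")
    return results
```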

2.2. Segmentation Techniques for Pig Images

Semantic image segmentation [22] is a classical computer vision task that aims to assign each image pixel to a category. With the development of deep learning [23], prevailing detection and segmentation models, such as Faster R-CNN [24] and the single-shot detector (SSD) [25], perform well on various kinds of datasets. Specifically, DeepLabV3+ [26] improves upon previous work [27], e.g., by adding a simple and effective decoder module to refine the segmentation results, and is designed for multi-class semantic segmentation. Later on, several research works, such as ViTDet [28] and MViTv2 [29], propose to adopt transformers as segmentation model backbones; MViTv2 [29] is able to perform both image classification and object detection tasks. The aforementioned methods mainly validate their efficacy on the COCO dataset [30].
Pig segmentation is under the umbrella of the general image segmentation task, allowing us to produce the fine-grained shapes and locations of all pigs in given images. The segmented results interpret the complex scene information and give intelligent analysis for the given images. Since the produced results can assist both pig body part identification and behavior recognition tasks [31], many research works [32,33,34] focus on tackling the pig segmentation problem. To be specific, Seo et al. [32] adopt You Only Look Once (YOLO) [35] to separate touching-pigs in real-time. Shao et al. [33] use the segmentation techniques to predict the posture changes of pigs. In this paper, we adopt the pretrained Mask R-CNN [36] model, which is developed on top of Faster R-CNN [24]. We further use our annotated pig images captured in local farms to fine-tune the pretrained model, such that the fine-tuned model is able to give precise masks for pigs.

2.3. Pig Weight Estimation

Weight is an important index in pig rearing [37] and affects the management of various stages of the supply chain [38,39]. At the farm level, one can evaluate a pig’s daily gain and nutritional status through its weight [40]. Specifically, with real-time pig weight data, farmers can raise pigs with good or poor nutrient status separately to meet the uniform marketing weight standard [37], which brings convenience to the pig farming process. Traditional pig weight measurement requires direct contact with pigs and is limited by its low efficiency [37]. The non-contact measurement of pig weight is challenging and has drawn much attention.
Some research works [41] adopt additional facilities other than cameras for more weight-related information capture. To be specific, Shi et al. [41] propose to utilize a binocular stereo vision system to analyze and estimate pig body size and weight. Pezzuolo et al. [42] adopt a Microsoft Kinect v1 depth camera to get the pig body dimensions. However, these methods require high-cost implementations and thus are not applicable to our focus scenarios, which are the rural areas of developing countries. Therefore, we refer to the method proposed by [37], where we estimate pig weights based on pig sizes.

2.4. Summary

In Table 1, we present a comparison between our proposed method and other methods, showing our strengths and weaknesses. Specifically, our adopted image stitching and image segmentation methods perform well on the pig images captured by the unmanned vehicle: we produce dense detected keypoints for image stitching and achieve high average precision for pig instance segmentation. However, our processing time is longer than that of other methods. This is acceptable since we do not require instant feedback on the detected results; after we obtain the captured images from our unmanned vehicles, we process them on our local machines. In terms of the weight estimation component, we utilize a monocular camera, which may yield inferior performance to a depth camera, but it is low-cost and still achieves a low error (1.8% tolerance) on our test samples.

3. Methodology

Our proposed smart pig farming support platform is intended to standardize agricultural procedures, boost farm output, improve pest and disease management, and lessen operational risks in agriculture. This platform enables us to conduct production optimization based on subject-matter expertise and sensor data. Governments, financial institutions, and regulators can also identify potential operational and financial risks and prepare for them. The proposed system primarily supports livestock pig farming: pig farms can identify pig positions and monitor pig weights in real time.
We show the proposed smart pig farming support system architecture in Figure 2, which includes three main components. Firstly, we use unmanned vehicles equipped with on-site sensors to collect data at the farm. Secondly, the data are processed using inexpensive edge computing nodes that are not affected by sporadic Internet connections. Thirdly, the data are analyzed, the findings are summarized, and the results are made accessible on an open platform that supports an ecosystem of business and agricultural activities. We present the details of our system components below.

3.1. Data Collection with Unmanned Vehicles

We use rail-based unmanned vehicles equipped with surveillance cameras to monitor the livestock pigs. To be specific, the vehicles run on rails mounted above the pigsty, as illustrated in Figure 2. Each unmanned vehicle moves from one end of the rail to the other while taking consecutive images. We show the consecutive raw images taken by our rail-based unmanned vehicles in Figure 3. The vehicles run automatically at regular intervals so that the pig growth conditions can be monitored in real time.
Since we need to train a new pig segmentation model, we hire annotators to perform semantic polygon mask labeling for pig images. In total, we have 636 and 54 images for training and validation, respectively. Moreover, we also collect real pig weight data to estimate the pig weights. Due to our limited on-site manpower and the number of pigs, we collected 100 real weight samples; then, we split the collected data into 80 training and 20 test samples.

3.2. Image Stitching

It can be observed that the obtained consecutive scanned images overlap with each other. If we directly use these images for pig segmentation and weight estimation without further processing, the same pigs may be present in different scanned images, which makes pig management difficult. To avoid this, we propose using the image stitching approach to stitch all the images in an attempt to remove overlaps and give an overall perspective on the livestock pigs in the captured pigsties. We follow the methods of [15,16] to combine two or more overlapping images to make one larger image, which is realized in four steps.
In the first step, we aim to detect key points and obtain feature descriptors. Here, we use the SIFT [17] detector since it is rotation-invariant and scale-invariant. In the second step, we match key points with their features. Specifically, we define a region around each key point and compute local features from the normalized regions; based on these local descriptors, we perform the matching. In the third step, we first sample four random potential matches from the second step, which are used to compute the transformation matrix H using the direct linear transformation. We use the computed H to obtain the projected point x′ from x, which is denoted as

x′ = H x, (1)

where x and x′ are a potentially matching pair. We then count the points whose projected distance is smaller than a defined threshold, which is set to 0.6, and view these points as inliers. We repeat the above process of the third step and return the H with the most inliers.
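The following Python sketch illustrates this RANSAC-style third step, assuming `matches` is a list of (x, x′) coordinate pairs produced by the key-point matching step; the 0.6 inlier threshold follows the text, while the iteration count and the use of OpenCV helpers are our own choices for illustration.

```python
# A minimal sketch of the third step, assuming `matches` = [((x, y), (x', y')), ...].
import random
import numpy as np
import cv2

def ransac_homography(matches, threshold=0.6, iterations=1000):
    best_H, best_inliers = None, -1
    src = np.float32([m[0] for m in matches])
    dst = np.float32([m[1] for m in matches])
    for _ in range(iterations):
        sample = random.sample(range(len(matches)), 4)
        try:
            # Direct linear transformation on the 4 sampled correspondences.
            H = cv2.getPerspectiveTransform(src[sample], dst[sample])
        except cv2.error:
            continue  # skip degenerate (e.g., collinear) samples
        # Project all source points with the candidate H and measure distances.
        proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H).reshape(-1, 2)
        dist = np.linalg.norm(proj - dst, axis=1)
        inliers = int((dist < threshold).sum())
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H
```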
In the fourth step, we aim to stitch two given images, denoted as I_1 and I_2 for demonstration. Since we observe that there are blurry regions in the final panoramic image, we apply linear blending to reduce this effect. Technically, we first define the left and right stitched margins for blending. We then define weight matrices W_1 and W_2 for I_1 and I_2, respectively. For I_1, the weights from the left side of the output panoramic image to the left stitched margin are set to 1, and the weights from the left stitched margin to the right stitched margin are linearly decremented from 1 to 0. For I_2, the weights from the right side of the output panoramic image to the right stitched margin are set to 1, and the weights from the left stitched margin to the right stitched margin are linearly incremented from 0 to 1. We then apply the weight matrices and combine I_1 and I_2. The stitched image I is given by

I = W_1 I_1 + W_2 I_2. (2)
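As a minimal sketch of this fourth step, the snippet below builds the two weight matrices and blends the images; it assumes I_1 and I_2 have already been warped onto a common output canvas and that the column indices of the left and right stitched margins are known.

```python
# A minimal linear-blending sketch; I1, I2 are H x W x 3 float arrays on the
# same canvas, `left`/`right` are the column indices of the stitched margins.
import numpy as np

def linear_blend(I1, I2, left, right):
    h, w = I1.shape[:2]
    ramp = np.linspace(1.0, 0.0, right - left)   # 1 -> 0 across the overlap
    w1 = np.ones(w)
    w1[left:right] = ramp
    w1[right:] = 0.0
    w2 = 1.0 - w1                                # 0 -> 1 across the overlap
    # Broadcast the 1-D column weights over rows and color channels.
    W1 = np.tile(w1, (h, 1))[..., None]
    W2 = np.tile(w2, (h, 1))[..., None]
    return W1 * I1 + W2 * I2
```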

3.3. Pig Segmentation

Here, we aim to locate each instance of the farming pigs so that we are able to perform individual analysis on them. To this end, we adopt the state-of-the-art instance segmentation algorithm Mask R-CNN [36]. Technically, Mask R-CNN has two stages. First, based on the input image, multiple region proposals are generated where there might be an object. Second, the model predicts the object classes, refines the bounding box positions, and generates a pixel-level mask for each proposal from the first stage. Both stages are connected to the backbone structure.
Mask R-CNN uses a ResNet [43] as its backbone. It utilizes standard convolution and fully connected (FC) heads for mask and box prediction, respectively. Moreover, the Feature Pyramid Network (FPN) [44] is also integrated into the ResNet model, which improves the segmentation accuracy without sacrificing inference speed. To give the segmentation model a good starting point for training, we initialize it with ImageNet [45] pretrained weights, which further improves the segmentation performance. Since there is no publicly available model for pig segmentation, we use our labeled data for model training.
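For reference, a minimal inference sketch with the Detectron2 implementation of Mask R-CNN (ResNet + FPN backbone) might look as follows; the specific config file, weight path, and score threshold are assumptions, since the paper does not list them.

```python
# A minimal inference sketch with Detectron2's Mask R-CNN; config file, weight
# path, and threshold are assumed values, not the authors' exact settings.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "output/model_final.pth"     # fine-tuned pig model (assumed path)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1              # a single "pig" class
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("stitched_pigsty.jpg"))
masks = outputs["instances"].pred_masks.cpu().numpy()   # one boolean mask per pig
```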

3.4. Pig Weight Estimation

Pig weights can be used to monitor pig growth status. With real-time weight data, farmers are able to know if pigs are in a good or bad nutrient status and change their fodder, which brings convenience to the pig farming process. To estimate pig weights according to our generated segmentation masks, we first fit the obtained masks into ellipses. We utilize the major axis and minor axis as pig length h and pig width w, respectively. Then, we follow the equation proposed by [37]. However, considering the particular pig breeds and camera installation conditions in our farms, we need to modify part of the given equation parameters.
Specifically, we denote the weight estimation equation [37] as

M = a × h^b × w^c, (3)
where M denotes the estimated weights and a, b, and c are parameters that should be calculated from our collected data. In order to fit this equation, we first transform Equation (3) into
log(M) = log(a) + b·log(h) + c·log(w). (4)
Since Equation (4) is a linear system, we further adopt the linear regression algorithm to obtain the above a, b, and c parameters. Our final optimized heuristic equation to calculate pig weights is denoted as
M = 0.0017 × h^1.1908 × w^1.0618. (5)
The tolerance of Equation (5) on our test samples is 1.8%.
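A minimal sketch of this fitting procedure is given below; it assumes `masks` is a list of boolean pig masks with matching ground-truth `weights`, uses OpenCV's ellipse fitting to obtain the major and minor axes, and uses scikit-learn for the linear regression (the paper does not specify the library). The axis units must, of course, match those used to derive Equation (5).

```python
# A minimal sketch of the ellipse-fitting and log-linear regression step.
import cv2
import numpy as np
from sklearn.linear_model import LinearRegression

def mask_to_axes(mask):
    # Fit an ellipse to the largest contour of the boolean mask.
    cnts, _ = cv2.findContours(mask.astype(np.uint8),
                               cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(cnts, key=cv2.contourArea)
    (_, _), axes, _ = cv2.fitEllipse(cnt)
    major, minor = max(axes), min(axes)   # pig length h and width w (pixel units)
    return major, minor

def fit_weight_model(masks, weights):
    hw = np.array([mask_to_axes(m) for m in masks])
    X = np.log(hw)                        # columns: log(h), log(w)
    y = np.log(weights)
    reg = LinearRegression().fit(X, y)    # Equation (4) as a linear system
    a = np.exp(reg.intercept_)
    b, c = reg.coef_
    return a, b, c                        # M = a * h**b * w**c, cf. Equation (3)
```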

4. Experiments

4.1. Implementation Details

Unmanned vehicle characteristics. Our rail-based unmanned vehicle is made of iron and measures 80 cm × 40 cm × 30 cm. The rails are installed 3 m above the ground, and the vehicle moves forward at 0.1 m/s. The video camera is attached to the vehicle and points vertically downward at the ground. The camera takes pictures at 1 s intervals with a resolution of 1080p. To control the unmanned vehicle and camera, we use the Firefly-RK3399 (https://en.t-firefly.com/Product/Rk3399/spec.html (accessed on 3 October 2022)) platform.
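As an illustrative sketch only (the actual on-board control software is not described in the paper), periodic capture at the stated 1 s interval could be implemented on the controller roughly as follows; the camera index and output paths are assumptions.

```python
# A minimal periodic-capture sketch (assumed implementation detail).
import time
import cv2

cap = cv2.VideoCapture(0)                   # camera index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)     # 1080p as stated in the text
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

frame_id = 0
while cap.isOpened():
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"capture_{frame_id:05d}.jpg", frame)
        frame_id += 1
    time.sleep(1.0)                         # 1 s capture interval
```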
Segmentation model. For our segmentation model, we adopt the Detectron2 (https://github.com/facebookresearch/detectron2 (accessed on 3 October 2022)) implementation. We fine-tune the pretrained segmentation model with a learning rate of 0.00025, and the number of training iterations is set to 3000.
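A minimal fine-tuning sketch with Detectron2, using the stated learning rate and iteration count, is shown below; the dataset names, annotation paths, config file, and batch size are assumptions.

```python
# A minimal fine-tuning sketch; only BASE_LR = 0.00025 and MAX_ITER = 3000 are
# taken from the text, everything else is an assumed placeholder.
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

register_coco_instances("pig_train", {}, "annotations/train.json", "images/train")
register_coco_instances("pig_val", {}, "annotations/val.json", "images/val")

cfg = get_cfg()
# The model-zoo config initializes the ResNet backbone from ImageNet-pretrained
# weights by default, consistent with the text above.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("pig_train",)
cfg.DATASETS.TEST = ("pig_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 3000
cfg.SOLVER.IMS_PER_BATCH = 2
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```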
Operation cost and efficiency. Our whole on-site system implementation for a one-row pigsty costs around 5000 CNY, which is about 702 USD. This is affordable for our deployed local farms. In terms of operational efficiency, we run the image stitching and pig weight estimation components on the CPU, and the segmentation component on the GPU. To be specific, the image stitching and weight estimation components take about 10 min and 5 s per pigsty, respectively. We use an Nvidia 2080 GPU to obtain the segmented results, and the inference time is 0.3 s per image.

4.2. Image Stitching Results

In Figure 4, we present one resulting image of key points detected by SIFT [17]. We empirically observe that our adopted method gives abundant key point detection results. In Figure 5, we demonstrate a comparison of the matched key points between two images with various features, where we show the key point matching results of ORB [19], KAZE [20], AKAZE [21], and SIFT [17]. Since we do not have ground truth labeling, we only evaluate the performance of these methods qualitatively.
Specifically, it is observed that the SIFT approach produces few intersecting matching lines, and the detected key points are diverse. In contrast, ORB and KAZE give many intersecting matching lines, and the diversity of AKAZE-generated key points is limited. Key point matching is one of the intermediate steps, and its robust performance is important for our subsequent processing. In Figure 6, we show the qualitative results of the stitched pig images, which cover multiple pigsties under our rail-based unmanned vehicles. The presented qualitative examples demonstrate the efficacy of our adopted method for pig farming image stitching.

4.3. Segmented Results

In Table 2, we present the quantitative results of our segmentation model, evaluated with average precision (AP) under different Intersection over Union (IoU) thresholds. Specifically, we list the results obtained with various backbone models. DeepLabV3+ [26] is designed for multi-class semantic segmentation (see Section 2.2); however, since we only need to segment pig instances, we observe that DeepLabV3+ fails to yield better results than our method. Both ViTDet [28] and MViTv2 [29] adopt the transformer as their model backbone. Based on the experimental results, our adopted ResNet [43] + FPN [44] architecture generalizes well to our particular scenarios and gives the best performance among the listed methods. For IoU = 0.50:0.95, our average precision reaches 77.5% with the ResNet-101 model, which indicates that our model's performance is consistent across IoU thresholds. ResNet-101 [43] yields better results than ResNet-50 due to its higher capacity.
In Figure 7, we show the pig segmentation results for images taken by our unmanned vehicle, covering various scenarios. The challenge of pig segmentation in our system is that we can only take images from the top view, which suffers from varying lighting conditions. Moreover, the livestock pigs have various poses, and some of them overlap with each other. Nevertheless, our adopted Mask R-CNN framework still achieves over 90% average precision for both IoU = 0.5 and IoU = 0.75, showing the robustness of our system. Specifically, in the second instance of Figure 7, although only parts of the pigs are visible in the image, our model still successfully captures the precise locations of these pigs. In the third instance, there are multiple pigs with different poses, and our model also gives correct segmentation results.

4.4. Weight Estimation Results

In Figure 8, we present visualizations for the weight estimation of each pig, giving an overview to farmers. The tolerance of our adopted Equation (5) on our test set is 1.8%. The quantitative and qualitative results both validate the usefulness of our adopted pig weight estimation method. The advantage of our system is that it gives real-time feedback on livestock pigs automatically. With the real-time captured weight data of each pig, farmers are able to know the nutrition status and growth conditions of pigs.

5. Limitations

The proposed system has two main limitations. First, the image stitching may fail due to pig movement, as shown in Figure 9. This is because our image stitching method assumes that the images to be stitched contain only a static scene and static objects; when a pig moves too much across frames, failure cases may occur. However, our current implementation aims to obtain overall estimated pig weights rather than track each pig. Hence, we run our unmanned vehicle multiple times a day to capture pig images in an attempt to alleviate this issue.
Second, our current implementation only estimates the pig weights based on the segmentation masks, which means we can only obtain an overview of the pig weights. It is useful to uncover some extreme cases. However, we cannot track the weight information and growth status for each individual pig. To improve this, we need to develop pig recognition algorithms so that we are able to track the real-time data and obtain the continuous weight change for each pig. We leave this to our future work.

6. Conclusions

The next phase of the agricultural revolution will increase agricultural output and enhance the sustainability and effectiveness of current farming methods. To support pig farming in developing countries, collecting real-time data and performing relevant analysis is critical. In this paper, we present a smart decision-support system for pig farming, which is low-cost and affordable for developing countries. Specifically, we first adopt on-site rail-based unmanned vehicles to capture all pigsty images at certain intervals. To avoid double-counting pigs that appear in overlapping images, we conduct image stitching; then, we use an image segmentation model to output pig segmentation masks. Based on the extracted masks, the pig weights can be estimated, and the data can be integrated into our developed mobile app. Our proposed system enables pig farming participants and stakeholders to have real-time data reports to help their decision-making.

Author Contributions

Methodology, H.W.; software, Y.C. (Yuanyuan Chen); experiment design, H.Z.; equipment installation, Y.H. and J.Z.; data collection, A.X.; investigation, P.W.; resources, Y.C. (Yiqiang Chen), C.L. and C.M.; writing—original draft preparation, H.W.; writing—review and editing, B.L.; supervision, B.L.; project administration, P.W., Y.C. (Yiqiang Chen) and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Joint NTU-WeBank Research Centre on Fintech (Award No: NWJ-2020-007), Nanyang Technological University, by the AI Singapore Programme (Award No: AISG-GC-2019-003) and the NRF Investigatorship Programme (Award No. NRF-NRFI05-2019-0002) of the National Research Foundation, Singapore, and by the China-Singapore International Joint Research Institute (Award No. 206-A021002).

Data Availability Statement

The data presented in this study are openly available at https://github.com/hwang1996/Pig-Image-Data (accessed on 3 October 2022).

Acknowledgments

Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the funding agencies.

Conflicts of Interest

The research was partially supported by Webank. Webank employees participated in the design of the study, equipment installation and maintenance at the pig farms, and the data collection; they are listed as authors.

References

  1. Weis, A.J.; Weis, T. The Global Food Economy: The Battle for the Future of Farming; Zed Books: London, UK, 2007. [Google Scholar]
  2. Despommier, D. The Vertical Farm: Feeding the World in the 21st Century; Macmillan: New York, NY, USA, 2010. [Google Scholar]
  3. Janssen, S.J.; Porter, C.H.; Moore, A.D.; Athanasiadis, I.N.; Foster, I.; Jones, J.W.; Antle, J.M. Towards a new generation of agricultural system data, models and knowledge products: Information and communication technology. Agric. Syst. 2017, 155, 200–212. [Google Scholar] [CrossRef] [PubMed]
  4. Waldman, K.B.; Todd, P.M.; Omar, S.; Blekking, J.P.; Giroux, S.A.; Attari, S.Z.; Baylis, K.; Evans, T.P. Agricultural decision making and climate uncertainty in developing countries. Environ. Res. Lett. 2020, 15, 113004. [Google Scholar] [CrossRef]
  5. Hu, Y. Graphics: The Real Situation of African Swine Fever in China. Available online: https://news.cgtn.com/news/3d3d774e3559444f33457a6333566d54/index.html (accessed on 3 October 2022).
  6. Sun, L. Academician Li Defa: The Direct Loss of African Swine Fever in China Is Estimated to be 1 Trillion Yuan. Available online: https://finance.sina.com.cn/money/future/agri/2019-09-26/doc-iicezzrq8551138.shtml (accessed on 3 October 2022).
  7. Wu, Y.; Tang, Z. China’s Insurers Squeal as Swine Fever Hits Profits. Available online: https://www.caixinglobal.com/2019-12-12/chinas-insurers-squeal-as-swine-fever-hits-profits-101493551.html (accessed on 3 October 2022).
  8. Zhang, Y.; Wang, H.; Xu, R.; Yang, X.; Wang, Y.; Liu, Y. High-Precision Seedling Detection Model Based on Multi-Activation Layer and Depth-Separable Convolution Using Images Acquired by Drones. Drones 2022, 6, 152. [Google Scholar] [CrossRef]
  9. Li, J.; Long, B.; Wu, H.; Hu, X.; Wei, X.; Zhang, Z.; Chai, L.; Xie, J.; Mei, H. Rapid Evaluation Model of Endurance Performance and Its Application for Agricultural UAVs. Drones 2022, 6, 186. [Google Scholar] [CrossRef]
  10. Bai, A.; Kovách, I.; Czibere, I.; Megyesi, B.; Balogh, P. Examining the Adoption of Drones and Categorisation of Precision Elements among Hungarian Precision Farmers Using a Trans-Theoretical Model. Drones 2022, 6, 200. [Google Scholar] [CrossRef]
  11. Wolfert, S.; Ge, L.; Verdouw, C.; Bogaardt, M.J. Big data in smart farming—A review. Agric. Syst. 2017, 153, 69–80. [Google Scholar] [CrossRef]
  12. Zhang, N.; Wang, M.; Wang, N. Precision agriculture—A worldwide overview. Comput. Electron. Agric. 2002, 36, 113–132. [Google Scholar] [CrossRef]
  13. Vasisht, D.; Kapetanovic, Z.; Won, J.; Jin, X.; Chandra, R.; Sinha, S.; Kapoor, A.; Sudarshan, M.; Stratman, S. FarmBeats: An IoT Platform for Data-Driven Agriculture. In Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), Boston, MA, USA, 27–29 March 2017; pp. 515–529. [Google Scholar]
  14. Adel, E.; Elmogy, M.; Elbakry, H. Image stitching based on feature extraction techniques: A survey. Int. J. Comput. Appl. 2014, 99, 1–8. [Google Scholar] [CrossRef]
  15. Brown, M.; Lowe, D.G. Recognising panoramas. In Proceedings of the International Conference on Computer Vision (ICCV), Nice, France, 13–16 October 2003; Volume 3, p. 1218. [Google Scholar]
  16. Brown, M.; Lowe, D.G. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 2007, 74, 59–73. [Google Scholar] [CrossRef] [Green Version]
  17. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  18. Tareen, S.A.K.; Saleem, Z. A comparative analysis of sift, surf, kaze, akaze, orb, and brisk. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018; pp. 1–10. [Google Scholar]
  19. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Tokyo, Japan, 25–27 May 2011; pp. 2564–2571. [Google Scholar]
  20. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE features. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 214–227. [Google Scholar]
  21. Alcantarilla, P.F.; Solutions, T. Fast explicit diffusion for accelerated features in nonlinear scale spaces. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1281–1298. [Google Scholar]
  22. Asgari Taghanaki, S.; Abhishek, K.; Cohen, J.P.; Cohen-Adad, J.; Hamarneh, G. Deep semantic segmentation of natural and medical images: A review. Artif. Intell. Rev. 2021, 54, 137–178. [Google Scholar] [CrossRef]
  23. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  24. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
  26. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  27. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Li, Y.; Mao, H.; Girshick, R.; He, K. Exploring plain vision transformer backbones for object detection. arXiv 2022, arXiv:2203.16527. [Google Scholar]
  29. Li, Y.; Wu, C.Y.; Fan, H.; Mangalam, K.; Xiong, B.; Malik, J.; Feichtenhofer, C. MViTv2: Improved Multiscale Vision Transformers for Classification and Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 4804–4814. [Google Scholar]
  30. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755. [Google Scholar]
  31. Li, J.; Green-Miller, A.R.; Hu, X.; Lucic, A.; Mohan, M.M.; Dilger, R.N.; Condotta, I.C.; Aldridge, B.; Hart, J.M.; Ahuja, N. Barriers to computer vision applications in pig production facilities. Comput. Electron. Agric. 2022, 200, 107227. [Google Scholar] [CrossRef]
  32. Seo, J.; Sa, J.; Choi, Y.; Chung, Y.; Park, D.; Kim, H. A yolo-based separation of touching-pigs for smart pig farm applications. In Proceedings of the 2019 21st International Conference on Advanced Communication Technology (ICACT), PyeongChang, Korea, 17–20 February 2019; pp. 395–401. [Google Scholar]
  33. Shao, H.; Pu, J.; Mu, J. Pig-posture recognition based on computer vision: Dataset and exploration. Animals 2021, 11, 1295. [Google Scholar] [CrossRef]
  34. Hu, Z.; Yang, H.; Lou, T. Dual attention-guided feature pyramid network for instance segmentation of group pigs. Comput. Electron. Agric. 2021, 186, 106140. [Google Scholar] [CrossRef]
  35. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  36. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  37. Li, Z.; Luo, C.; Teng, G.; Liu, T. Estimation of pig weight by machine vision: A review. In Proceedings of the International Conference on Computer and Computing Technologies in Agriculture, Beijing, China, 18–20 September 2013; pp. 42–49. [Google Scholar]
  38. Wongsriworaphon, A.; Arnonkijpanich, B.; Pathumnakul, S. An approach based on digital image analysis to estimate the live weights of pigs in farm environments. Comput. Electron. Agric. 2015, 115, 26–33. [Google Scholar] [CrossRef]
  39. Pezzuolo, A.; Milani, V.; Zhu, D.; Guo, H.; Guercini, S.; Marinello, F. On-barn pig weight estimation based on body measurements by structure-from-motion (SfM). Sensors 2018, 18, 3603. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Doeschl-Wilson, A.; Whittemore, C.; Knap, P.; Schofield, C. Using visual image analysis to describe pig growth in terms of size and shape. Anim. Sci. 2004, 79, 415–427. [Google Scholar] [CrossRef]
  41. Shi, C.; Teng, G.; Li, Z. An approach of pig weight estimation using binocular stereo system based on LabVIEW. Comput. Electron. Agric. 2016, 129, 37–43. [Google Scholar] [CrossRef]
  42. Pezzuolo, A.; Guarino, M.; Sartori, L.; González, L.A.; Marinello, F. On-barn pig weight estimation based on body measurements by a Kinect v1 depth camera. Comput. Electron. Agric. 2018, 148, 29–36. [Google Scholar] [CrossRef]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  44. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  45. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
Figure 1. Farms in Guangdong, China. (1) depicts an aerial view of farms with erratic shapes that rely on natural irrigation and simple plastic mulch (instead of more expensive greenhouses). (2) shows a pig farm run by a family with a great number of pigs, which is enclosed by basic fences. One of our co-authors took these pictures while on a field trip.
Figure 2. The architecture of our proposed smart pig farming support systems. From left to right, we show the images of on-site installation, the segmentation results on stitched images, and the screenshot of our developed app for smart monitoring.
Figure 3. The consecutive raw images taken by our rail-based unmanned vehicles. Based on the collected data, we conduct image stitching, pig segmentation, and weight estimation.
Figure 4. One result of the detected key points by the SIFT detector.
Figure 5. Comparison of different features used for image stitching. We show results of ORB [19], KAZE [20], AKAZE [21], and SIFT [17].
Figure 6. Results of our stitched pigsty images, where each covers one row of the pigsty.
Figure 7. Pig segmentation results in images taken by our rail-based unmanned vehicle.
Figure 8. The estimated weight results of each pig for demonstration.
Figure 9. Failure cases due to pig movement.
Table 1. Comparison between our proposed method and other methods, where we list our strengths and weaknesses.

Component | Strength | Weakness
Image stitching | Our detected keypoints are dense and scattered generally all over the images. | Our processing time is longer than other methods, because of the descriptor size.
Image segmentation | Our average precision is better than other methods. | Our processing time is longer than other methods, because of the backbone complexity.
Weight estimation | Only a monocular camera is required, which is low-cost. | Our estimation accuracy is lower than methods using more sensors.
Table 2. Quantitative results for our pig segmentation validation set. IoU denotes Intersection over Union. FPN denotes Feature Pyramid Network. We show average precision (AP) results of various backbone models.

Method | AP (IoU = 0.50) | AP (IoU = 0.75) | AP (IoU = 0.50:0.95)
DeepLabV3+ [26] | 71.1 | 37.2 | 36.5
ViTDet [28] | 67.0 | 36.6 | 37.4
MViTv2 [29] | 78.2 | 40.2 | 42.3
ResNet50+FPN (Ours) | 97.1 | 87.7 | 72.4
ResNet101+FPN (Ours) | 98.1 | 90.2 | 77.5
