Technical Note

A Novel and Automated Approach to Detect Sea- and Land-Based Aquaculture Facilities

1
Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, Centro Ricerca “Zootecnia e Acquacoltura”, Via Salaria 31, 00015 Monterotondo, Italy
2
Institute of Aquaculture, University of Stirling, Stirling FK9 4LA, Scotland, UK
*
Author to whom correspondence should be addressed.
AgriEngineering 2025, 7(1), 11; https://doi.org/10.3390/agriengineering7010011
Submission received: 4 September 2024 / Revised: 4 October 2024 / Accepted: 2 January 2025 / Published: 6 January 2025
(This article belongs to the Special Issue The Future of Artificial Intelligence in Agriculture)

Abstract

Aquaculture is practiced worldwide and is the world's fastest-growing food sector, requiring technological advances to both increase productivity and minimize environmental impacts. Monitoring the sector is one of the priorities of state governments, international organizations such as the Food and Agriculture Organization of the United Nations (FAO), and the European Commission. Data collection in aquaculture, particularly information on the location, number, and size of production facilities, is challenging due to the time required, the extent of the area to be monitored, the frequent changes in farming infrastructures and licenses, and the lack of automated tools. Such information is usually obtained through direct communications (e.g., phone calls and e-mails) with aquaculture producers and is rarely confirmed with on-site measurements. This study describes an innovative and automated method to obtain data on the number and placement of structures for marine and freshwater finfish farming through a YOLOv4 model trained on high-resolution images. High-resolution images were extracted from Google Maps to test their use with the YOLO model for the identification and geolocation of both land-based (raceways used in salmonid farming) and sea-based (floating sea cages used in seabream, seabass, and meagre farming) aquaculture systems in Italy. An overall accuracy of approximately 85% in the correct recognition of target-class objects was achieved. Model accuracy was tested with a dataset that includes images from Tuscany (Italy), where all these farm typologies are represented. The results demonstrate that the proposed approach can identify, characterize, and geolocate sea- and land-based aquaculture structures without any post-processing procedure, by directly applying customized deep learning and artificial intelligence algorithms.


1. Introduction

Aquaculture is practiced globally and is the world's fastest-growing food sector, contributing largely to the market demand for seafood. Aquaculture plays a vital role in meeting the rising demand for seafood and reducing pressure on wild fish stocks. Moreover, it offers economic opportunities, job creation, and economic growth, particularly in rural and coastal areas [1]. Nevertheless, as these aspects become increasingly important, it is necessary to further improve farming practices and environmental performance. Technological advances, such as automated monitoring systems, are emerging tools that could enhance productivity while minimizing environmental impacts. Monitoring the sector in terms of biomass production trends and the number and dimensions of rearing facilities is considered a priority in many countries. For example, one of the missions of EUROSTAT, the statistical office of the European Union (EU), is to provide high-quality aquaculture statistics and data. European legislation provides a framework for the development and monitoring of aquaculture activities. In particular, Reg. (EC) 762/2008 [2] requires EU Member States to update their structural data related to aquaculture (number and dimensions of facilities) every three years. Similarly, the General Fisheries Commission for the Mediterranean (GFCM), the regional fisheries and aquaculture management organization of the Food and Agriculture Organization of the United Nations (FAO), requires its 22 member countries to collect and transmit yearly several aquaculture statistics, including structural data on aquaculture production centers (a binding requirement [3]). The importance of these institutional databases lies in the fact that they serve as valuable sources of information at global and/or regional levels for policymakers and researchers to monitor the development of the sector and production trends, enabling effective policy definition and decision-making processes. They therefore form the basis for the development of evidence-based strategies for the sustainable growth of the aquaculture sector [4]. From a pragmatic perspective, collecting aquaculture data, particularly information on the number and dimensions of production facilities, is highly challenging for national authorities due to the time spent on data gathering, the large sample size (especially in some states), the size of the area to be monitored, and the frequent and rapid changes in rearing structures. In general, such structural data are not obtained by national authorities with direct and official monitoring tools, but through interviews (e.g., by phone calls or e-mails) with aquaculture producers, and are rarely confirmed with on-site visits.
In this scenario, and considering the rapid growth of the aquaculture sector, it would be advantageous to integrate traditional methods with a more modern, objective, and automated approach that can facilitate the identification of facilities. One such approach is the use of computer vision, particularly object detection. Some studies have already explored the application of deep learning methods to marine water monitoring, since object identification is a powerful tool for locating specific items [5]. To address marine-related topics and anthropogenic artifacts in water, studies have demonstrated the feasibility of detecting different objects such as boats [6], offshore drilling platforms [7], and specific aquaculture facilities such as mussel farms [8]. The studies conducted to date on the identification of aquaculture facilities are limited and involve time-consuming and labor-intensive methods. Although computer vision techniques have generally been shown to be successful in identifying and categorizing facilities, they have drawbacks with respect to segmentation tasks and the subjective correctness of classification rules [9]. In addition, the lack of readily available datasets specifically matched to the goals and specifications of aquaculture facility detection and monitoring is an important gap that needs to be filled to achieve results in this area.
The contribution of this study can be summarized as follows:
  • The development of a novel automated method to obtain information on the number and positioning of sea cages and inland raceways for marine and freshwater finfish farming, respectively, through a YOLOv4 (You Only Look Once version 4) model trained on high-resolution images.
  • The setup of a high-resolution image dataset of floating sea cages and inland raceways, with annotations for the quantitative assessment of aquaculture structures and cluster identification using satellite images.
  • The testing of the model's accuracy for aquaculture structure detection and geolocation, for both floating sea cages and inland raceways, in the Tuscany region (Italy).
The results demonstrate that the proposed approach is able to identify, characterize, and geolocate sea- and land-based aquaculture structures without any post-processing procedure, by directly applying customized deep learning and artificial intelligence algorithms (a YOLOv4 model).

2. Methods and Data

2.1. YOLOv4 Model and AlexeyAB's Darknet Implementation

The YOLOv4 model was used for its state-of-the-art performance in object detection tasks. Object detection models are trained to examine an image and search for a subset of object classes; each detected instance is enclosed in a bounding box and assigned a class label. YOLOv4-tiny, designed as a smaller and faster version of YOLOv4, was chosen to simplify and streamline processing. YOLOv4-tiny is roughly 8 times faster at inference time than YOLOv4 and maintains approximately two-thirds of its performance, as demonstrated on MS COCO datasets [10]. For smaller, more specific detection tasks like this one, the performance degradation is minimal [11].
Two models were trained, each tailored to detecting and localizing its respective target: floating sea cages and inland raceways. The training process, run separately for the two farming systems, was conducted in a Google Colaboratory notebook (“Colab”; [12]) leveraging the Darknet framework, specifically AlexeyAB's implementation of YOLOv4 [13].
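For illustration, the following is a minimal sketch of how such a training run can be launched from a Colab notebook using Python's standard library; the data, configuration, and weight file names are assumptions, not the exact files used in this study.

```python
# Hypothetical launch of a Darknet YOLOv4-tiny training run from Python.
# File names (cages.data, yolov4-tiny-cages.cfg) are illustrative assumptions.
import subprocess

subprocess.run(
    ["./darknet", "detector", "train",
     "data/cages.data",             # lists of train/test images, class names, backup dir
     "cfg/yolov4-tiny-cages.cfg",   # network definition: batch, subdivisions, steps, classes
     "yolov4-tiny.conv.29",         # pre-trained convolutional weights (see Section 2.4)
     "-dont_show",                  # disable the GUI, which is unavailable on Colab
     "-map"],                       # periodically report mAP during training
    check=True,                     # raise an error if Darknet exits abnormally
)
```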

2.2. Model Dataset Acquisition and Annotation

To train and test the model, images of already-known ground points around the world were acquired (Italy, Greece, Chile, Australia, Norway, Iceland, Switzerland, Austria, and Romania). Two separate steps were undertaken for data acquisition and annotation. Initially, a dataset of 150 images (.jpg) was acquired and meticulously annotated to identify floating sea cages used in aquaculture, which appear as light, round-shaped objects on a blue background. Subsequently, an additional dataset of 130 images (.jpg) was collected and annotated specifically for inland raceways, which appear as rectangular dark-blue objects with light contours. In modern mariculture, floating sea cages are typically round, with diameters ranging from 15 to 28 m, located in coastal waters with depths of about 10–15 m, including bays, gulfs, and ports. Raceways, on the other hand, are rectangular land-based rearing systems of variable length and width, situated close to rivers, channels, or freshwater basins.
Both target image collections were obtained from Google Maps, which allows the download of high-resolution images (1280 × 1280 pixels) [14]. Numerous target objects belonging to the two types of farming systems were labeled on every image using the YOLO label program [15], a dedicated annotation tool for labeling specific categories. The bounding box parameters (the x- and y-coordinates of the box's center, the width of the box, and the height of the box, normalized to the image dimensions) collectively define the position and dimensions of the bounding box, allowing for the precise localization of objects within an image. After this process, the target object for each category could be trained using the YOLO model. In this specific case, the object detection process analyzed images where the target objects (e.g., cages and raceways) showed limited variation in their intrinsic characteristics. Thus, 280 images were sufficient to adequately cover the possible differences among them, making data augmentation unnecessary. Additionally, the model's performance was demonstrably good (see Section 3), avoiding the need for a greater number of images. The limited computational power of the server used ([12]; free version) also influenced this decision.
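As an illustration of this annotation format, the sketch below (assuming standard YOLO-format label files, such as those produced by the YOLO label tool) converts one normalized annotation line into pixel coordinates:

```python
# Convert a YOLO-format annotation ("class cx cy w h", all normalized to [0, 1])
# into pixel-space corner coordinates. The example values are illustrative.
def yolo_to_pixel_box(line: str, img_w: int, img_h: int):
    class_id, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h    # box center in pixels
    w, h = float(w) * img_w, float(h) * img_h        # box size in pixels
    return int(class_id), cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

# A hypothetical sea cage annotated near the center of a 1280 x 1280 tile:
print(yolo_to_pixel_box("0 0.52 0.47 0.05 0.05", 1280, 1280))
```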

2.3. Creation of Training and Test Sets

To facilitate model training and evaluation, the collected images were divided into a training set (75%) and a test set (25%) using a Python script (“.py”) implemented in Colab, as sketched below. A validation phase was not performed due to the relatively low number of images available and the good model performance obtained on the test set.
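A minimal sketch of such a split script follows; the directory layout and the fixed random seed are assumptions for reproducibility, not details taken from the original code.

```python
# Shuffle the image files and split them 75/25 into training and test sets,
# then write the plain-text path lists that Darknet expects.
import random
from pathlib import Path

def split_dataset(image_dir: str, train_frac: float = 0.75, seed: int = 42):
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)   # fixed seed so the split is repeatable
    n_train = int(len(images) * train_frac)
    return images[:n_train], images[n_train:]

train_set, test_set = split_dataset("dataset/")
Path("train.txt").write_text("\n".join(map(str, train_set)))
Path("test.txt").write_text("\n".join(map(str, test_set)))
```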

2.4. Model Implementation, Experimental Environment, and Parameter Settings

During the object detection phase, Colab simplifies the installation process by pre-configuring most of the dependencies. For local setups, the Darknet repository was cloned and the appropriate architecture was specified based on the GPU's computing capability. Detailed information is reported in the Supplementary Materials. The learning rate follows a stepwise schedule, being decreased at two specified iterations (4800 and 5400). Two classes, one for each kind of facility (floating sea cages and inland raceways), were used and recorded in a text file. In addition, a “control class” was added, representing non-target items such as crop fields, open sea areas, etc. (Figure 1). The data file with all parameters (classes, batch, subdivisions, change line steps, network size, etc.) was stored in another text file (see Supplementary Materials); an illustrative sketch of these side files is given below.
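The following sketch shows how such class and data files might be generated programmatically; all file names, and the two-class layout (one target class plus the control class per model), are assumptions rather than the exact files used in this study.

```python
# Write the Darknet ".names" and ".data" side files for the sea-cage model.
# File names and paths are illustrative assumptions.
from pathlib import Path

Path("data").mkdir(exist_ok=True)
Path("data/cages.names").write_text("sea_cage\ncontrol\n")   # target class + control class
Path("data/cages.data").write_text(
    "classes = 2\n"
    "train = train.txt\n"      # list of training images (see Section 2.3)
    "valid = test.txt\n"       # list of test images
    "names = data/cages.names\n"
    "backup = backup/\n"       # where Darknet saves intermediate weights
)
```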
Pre-trained weights, specifically yolov4-tiny.conv.29 [13], were loaded into the model before the training step. yolov4-tiny.conv.29 refers to the pre-trained weights of the first 29 convolutional layers, obtained from the Darknet framework, specifically from AlexeyAB's implementation of YOLOv4-tiny [13]. These pre-trained weights serve as an initialization for the model, providing a starting point for the subsequent training and improving the model's performance on the specific object detection task.

2.5. Model Evaluation

The model was assessed using four metrics: precision–recall, average precision (AP), F1, and Intersection over Union (IoU). The first three metrics can evaluate the precision–recall trade-off of a model across multiple confidence thresholds, while the IoU is used to assess the accuracy of object localization.
The precision–recall curve (1) is generated based on the true positive (TP), false positive (FP), true negative (TN), and false negative (FN) values. It provides valuable insights into the model’s ability to balance between correctly identifying positive samples and minimizing false positives. The F1-score (2) combines precision and recall into a single metric; it considers both the ability of the model to correctly identify positive samples (precision) and the ability to capture all relevant positive samples (recall). The AP is the area under the precision–recall curve. It represents the average precision value at each point along the recall axis, ranging from 0 to 1. The calculation of the AP involves summing the precision values at different recall levels and multiplying them by the difference in recall between adjacent levels. To have a better understanding of model precision, the mean average precision (mAP) can be calculated (3); this metric is calculated by taking the average of the AP across all the classes under consideration and quantifying the performance of the object detection and localization algorithm. The IoU (4) is an index used to measure the overlap between two bounding boxes or regions of interest (ROIs) in object detection. The IoU is calculated by dividing the area of intersection between the two regions by the area of their union. The formulae used for the four metrics are reported below:
  1. Precision (P) = TP/(TP + FP)
     Recall (R) = TP/(TP + FN)
     where TP (true positives) is the number of correctly predicted positive instances, FP (false positives) is the number of instances predicted as positive that are truly negative, and FN (false negatives) is the number of instances predicted as negative that are truly positive.
  2. F1-score = 2 × (Precision × Recall)/(Precision + Recall)
  3. mAP = (1/k) × Σ APᵢ, where k is the number of classes and APᵢ is the average precision of class i
  4. IoU = Area of Overlap/Area of Union
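For concreteness, a minimal Python sketch of these four metrics (with boxes given as corner coordinates) is shown below:

```python
# Straightforward implementations of the evaluation metrics defined above.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

def iou(box_a, box_b) -> float:
    """Boxes as (x_min, y_min, x_max, y_max); returns intersection over union."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def mean_average_precision(ap_per_class) -> float:
    """mAP = (1/k) x sum of AP over the k classes."""
    return sum(ap_per_class) / len(ap_per_class)
```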

2.6. Model Application: The Tuscany Case Study

The study area selected to test the model was divided into two domains within the Tuscany region (Italy): (1) a strip of coastal sea surface extending up to 5 km from the Tyrrhenian coastline along the mainland and island coasts (397 km and 260 km, respectively) and (2) the inland hydrographic grid (latitude: 43.7711° N to 43.9943° N; longitude: 10.1136° E to 12.5136° E). To obtain high-resolution RGB images, satellite and/or aircraft images collected in 2023 were extracted from Google Maps. The obtained images were then converted into georeferenced TIFF files (EPSG:3857) with a resolution of 768 × 768 pixels.
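A minimal sketch of this conversion step with the GDAL Python bindings follows; the tile file names and geographic bounds are illustrative assumptions.

```python
# Convert a downloaded RGB tile into a georeferenced TIFF in Web Mercator.
# File names and the tile's bounding coordinates are illustrative assumptions.
from osgeo import gdal

gdal.Translate(
    "tile_0001.tif", "tile_0001.jpg",
    outputSRS="EPSG:3857",
    # Assigned bounds: upper-left x, upper-left y, lower-right x, lower-right y.
    outputBounds=(1126000.0, 5431000.0, 1127000.0, 5430000.0),
    width=768, height=768,
)
```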
About 100 images were randomly downloaded for each domain, to which were added further satellite images of known finfish farming facilities recorded in the national database for the Tuscany region (Reg. (EC) 762/2008): 2 mariculture production sites (floating sea cages) and 18 inland facilities (raceways). Thus, as described in the previous section, target objects (floating sea cages and land raceways) were present only in a small number of images of the obtained dataset, which was then used to automatically identify marine and land-based aquaculture facilities in Tuscany, testing the percentage probability of localization. By analyzing the prediction probabilities generated by the model for each structure, the likelihood of correctly identifying the facility was determined. This probability provides a reliable estimate of the percentage accuracy in identifying rearing structures in one of the most important Italian regions for aquaculture production.
To automatically extract the geographic coordinates of the recognized farming facilities in the two domains of the dataset, bounding boxes were calculated for all TIFF images containing target objects. The bounding boxes, with their x and y coordinates, were identified by applying the YOLO model set up in the previous phase (see Section 2.2). Then, the geographic coordinates corresponding to the Cartesian (pixel) coordinates were extracted from the Colab environment using the “gdal” package and Python 3.9 [16] installed on a local workstation. Since the images were in TIFF format and already georeferenced, it was possible to import the coordinates from the trained YOLOv4 model into Colab. It is worth noting that, to obtain a unique pair of coordinates for land-based farms, multiple raceways located within a radius of approximately 500 m were combined into a single georeferenced point. For mariculture facilities, the coordinates were obtained by combining sea cages not more than 1000 m apart into a single coordinate point; a sketch of both steps is given below.
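The sketch below, under the assumption of a metric CRS such as EPSG:3857 (where coordinate differences approximate meters at these latitudes), illustrates both the pixel-to-map conversion via the raster's geotransform and the greedy merging of nearby detections into single points; the function names are hypothetical.

```python
# Map a YOLO box center from pixel space to map coordinates, then merge
# detections that fall within a given radius into one representative point.
from osgeo import gdal

def bbox_center_to_map_coords(tif_path: str, cx_px: float, cy_px: float):
    """Apply the GeoTIFF's affine geotransform to a pixel-space box center."""
    ds = gdal.Open(tif_path)
    gt = ds.GetGeoTransform()  # (origin_x, pixel_w, rot_x, origin_y, rot_y, pixel_h)
    x = gt[0] + cx_px * gt[1] + cy_px * gt[2]
    y = gt[3] + cx_px * gt[4] + cy_px * gt[5]
    return x, y

def merge_points(points, radius: float):
    """Greedy merge: a point within `radius` of an already-kept point is dropped."""
    kept = []
    for p in points:
        if all(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 > radius for q in kept):
            kept.append(p)
    return kept

# Example: collapse raceway detections within ~500 m into single farm points.
# farm_points = merge_points(detected_centers, radius=500.0)
```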

3. Results and Discussion

3.1. Model Accuracy

The model exhibits strong accuracy for floating sea cage detection: at a confidence threshold of 0.50, precision is 0.79, recall is 0.73, and the F1-score is 0.76. The IoU value is 62.6%. The mAP is around 74.4%, while the AP, which considers only sea cages, is 93.4%. The metric results of recall at different threshold levels, precision, and F1 are shown in Figure 2a.
In the case of raceways, the model identifies this typology of land-based aquaculture facility with a reliable precision of 83.0%, measured at a confidence threshold of 0.5. The average IoU for these detections was 64.6%. With an IoU threshold of 50%, computing the area under the curve (AUC) for each unique recall value, the mAP (mAP@0.5) corresponds to 83.7%. A recall value of 0.73 indicates good coverage of true positives. The F1-score, which balances precision and recall, reaches 0.78, indicating an overall good performance. The average precision considering only raceways is 76.9%. The metrics for raceways are shown in Figure 2b.
To the best of our knowledge, the automatic identification of land-based aquaculture structures has not yet been described. Raceways, being rectangular items, could potentially be confused with other objects of the same shape. However, their characteristic pattern of closely positioned, interconnected placement, along with the specific color of the water, allows for their reliable differentiation from other objects of similar shape without requiring any pre-processing of the images.

3.2. Model Application on Tuscany Region

In the Tuscan coastal area (more than 100 images), the model identified the two distinct existing mariculture sites that corresponded to the desired pattern (circular floating cages not more than 1000 m apart). Both coincide exactly with the two areas dedicated to finfish mariculture in Tuscany in the Reg. (EC) 762/2008 list of aquaculture companies for this region: the Gulf of Follonica and the island of Capraia (images of the identified farming structures are reported in the Supplementary Materials). In the inland area, among more than 100 images processed, the model identified 20 aquaculture facilities that matched the predetermined aggregation rules (raceways located no more than 500 m from each other). A comparison between the obtained georeferenced facilities, a manual inspection, and the list of active inland aquaculture sites in Tuscany (Reg. (EC) 762/2008) confirmed that 18 of these 20 images represented existing aquaculture facilities, whilst in two instances the model miscategorized similar rectangular objects on the ground.
At first visual evaluation, the model demonstrates its ability to accurately identify the desired patterns in the given dataset, with a high level of precision for both floating sea cage and raceway facilities.
In addition, IoU metrics were used to evaluate the overlap of the image bounding boxes (see Section 2.5 “Model Evaluation”) to provide quantitative counterevidence and a comparison with our visual inspection. The ground truth from the images of the Tuscany database was therefore annotated and compared with the bounding boxes generated by the model in the corresponding images. By calculating the IoU between the predicted and annotated bounding boxes, it was possible to assess quantitative accuracy. For raceways, the IoU was calculated for the bounding boxes of all identified images, resulting in values ranging from 0.40 to 0.83 with a mean of 0.67. The images representing floating sea cages showed IoU values ranging from 0.70 to 0.84 with a mean of 0.76. A total of 116 floating sea cages were identified in five images belonging to two different locations (see Supplementary Materials for details) that were successfully recognized by the model, whereas a total of 80 raceways were captured in all 20 images. In particular, all recognized floating sea cages in the dataset exhibited a prediction probability exceeding 80%, indicating a high level of confidence in correctly identifying these structures. The rectangular raceways showed a prediction percentage of almost 80%, indicating that the model has a good level of confidence in identifying these structures.
The automated identification of sea cage and land-based raceway aquaculture facilities using deep learning techniques remains relatively unexplored in the existing scientific and technical literature. In this study, high-resolution images were downloaded from Google Maps and used to test the YOLO model for the identification and geolocation of both land-based (raceways used for salmonid farming) and marine-based (floating sea cages used for seabream, seabass, and meagre) aquaculture facilities in Italy. This innovative process showed an overall precision of about 85% in the correct recognition of target-class objects. Several methods have been employed to identify areas and structures associated with aquaculture activities. These approaches have primarily involved the visual interpretation of Synthetic Aperture Radar (SAR) imagery [17]. Other techniques based on spectral characteristics have also been used [18]. In some cases, these approaches incorporated indices, such as the Normalized Difference Water Index (NDWI), to enhance the detection and classification of aquaculture sites [19]. Recently, more advanced methods have been used to specifically extract aquaculture sea cages using remote sensing: for example, ref. [20] used a deep learning method that achieved a precision of 83% in offshore aquaculture facility recognition using Multispectral Scan Imaging (MSI). However, these studies used high-quality satellite images, sourced from the Sentinel-1 (SAR) and Sentinel-2 satellites, which require a substantial pre-processing phase before use because their bands must first be combined. In particular, Sentinel-1 images must be refined to remove speckle noise contamination [21]. Conversely, the proposed approach is based on a lightweight single-stage CNN model (YOLOv4-tiny; [13]) and uses aerial high-resolution images that do not require such time-consuming manipulation, since the high-resolution aerial photos are freely available in RGB format. In addition, the intrinsic properties of the model allow training with minimal computational resources, therefore in a short time and using free tools, while still achieving optimal results. The aerial images are downloaded as individual tiles once the region of interest is defined. The availability of several photos of reduced dimensions removes the need to download a single huge image, reducing downloading and processing times.
Lastly, we developed a novel methodology that allowed us to pinpoint aquaculture facilities in a diversified area of Italy, the Tuscany region, where both coastal marine and inland aquaculture farms are present. The novelty of our approach lies in the automatic recognition of two common typologies of aquaculture facilities. The strength of this method is its ability to identify objects of interest while simultaneously extracting geographic coordinates in an integrated process. Georeferenced TIFF images were used in combination with the capabilities of the Python “gdal” library, which enables the transformation of item characteristics from the bounding boxes identified by the YOLO model into geographic coordinates. This process can help reduce the effort of data acquisition and will consequently be useful as an assistance tool for national or regional authorities responsible for the census and monitoring over time of aquaculture facilities in both coastal and inland areas. The proposed approach, which combines the YOLO model for structure identification, high-resolution images, and the adaptability of Python libraries that facilitate coordinate conversion, is the first of its kind.

4. Conclusions

Our innovative methodological approach can provide a valuable additional tool to improve knowledge of the current structure and trends of the aquaculture sector, which is crucial for the relevant administrations (local, national, or international) that, at different levels, are responsible for the census, monitoring, management, and governance of finfish aquaculture. The present study is the first to apply the YOLO algorithm to detect both floating sea cages and land-based fish farming structures. The novelty of this automated method resides in its capability to detect, in a lightweight single-step process, the presence of structures without any labor-intensive image pre-processing, in contrast with the two-stage algorithms of the R-CNN series, which also require intensive image management [22]. The accuracy and precision of the model are sufficiently high despite the relatively small number of sample images used and the absence of a pre-processing phase. Our approach could be further improved with adjustments and developments, such as employing a larger and more varied image dataset, for example through data augmentation algorithms, or by tuning the framework's parameter settings. Moreover, the model could be extended to automatically calculate other parameters, such as cage diameter and raceway dimensions, which are useful for estimating the potential carrying capacity of farms.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/agriengineering7010011/s1: Figure S1: Examples of coastal and land-based aquaculture production sites identified by the model in the Tuscany region (Italy); Figure S2: YOLOv4 model examples of annotated images for correctly recognized land-based aquaculture facilities (a) and misclassified targets ((b), i.e., crop fields).

Author Contributions

M.V.: original manuscript, methodology, and artwork; R.N., M.M. and N.T.: referencing, editing, and data curation and validation; D.P. and A.M.: formal analysis and review; F.C.: conceptualization, review, editing, and overall supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Italian Ministry of Agriculture and Forestry: Sub-measure 4.3 of Macro-objective 1 of the project “Cooperation Agreement for PNSR 2014–2020” (n. J81G16000010007).

Data Availability Statement

The data supporting the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. FAO. The State of World Fisheries and Aquaculture 2020; FAO: Rome, Italy, 2020; Volume 32, ISBN 9789251326923. [Google Scholar]
  2. European Union. Regulation (EC) No. 762/2008 of the European Parliament and of the Council of 9 July 2008, on the Submission by Member States of Statistics on Aquaculture and Repealing Council Regulation (EC) No. 788/96. Official Journal of the European Union, L 218, August 14, 2008, 1–13. Available online: https://eur-lex.europa.eu (accessed on 2 July 2024).
  3. General Fisheries Commission for the Mediterranean (GFCM). Recommendation GFCM/41/2017/1 on a Regional Scheme for Port State Measures to Combat Illegal, Unreported, and Unregulated Fishing Activities in the GFCM Area of Application. Food and Agriculture Organization (FAO): Rome, Italy, 2017; Available online: https://www.fao.org/gfcm (accessed on 2 July 2024).
  4. Fisheries Department, Food and Agriculture Organization of the United Nations. Towards Improving Global Information on Aquaculture; FAO: Rome, Italy, 2005; Volume 480. [Google Scholar]
  5. Ma, Y.; Qu, X.; Feng, D.; Zhang, P.; Huang, H.; Zhang, Z.; Gui, F. Recognition and Statistical Analysis of Coastal Marine Aquacultural Cages Based on R3Det Single-Stage Detector: A Case Study of Fujian Province, China. Ocean Coast. Manag. 2022, 225, 106244. [Google Scholar] [CrossRef]
  6. Chang, Y.-L.; Anagaw, A.; Chang, L.; Wang, Y.C.; Hsiao, C.-Y.; Lee, W.-H. Ship Detection Based on YOLOv2 for SAR Imagery. Remote Sens. 2019, 11, 786. [Google Scholar] [CrossRef]
  7. Liu, C.; Yang, J.; Ou, J.; Fan, D. Offshore Oil Platform Detection in Polarimetric SAR Images Using Level Set Segmentation of Limited Initial Region and Convolutional Neural Network. Remote Sens. 2022, 14, 1729. [Google Scholar] [CrossRef]
  8. Martín-Rodríguez, F.; Isasi-de-Vicente, F.; Fernández-Barciela, M. Automatic Census of Mussel Platforms Using Sentinel 2 Images. arXiv 2022, arXiv:2204.04112. Available online: https://arxiv.org/abs/2204.04112 (accessed on 20 July 2024).
  9. Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Martinez-Gonzalez, P.; Garcia-Rodriguez, J. A Survey on Deep Learning Techniques for Image and Video Semantic Segmentation. Appl. Soft Comput. 2018, 70, 41–65. [Google Scholar] [CrossRef]
  10. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Computer Vision–ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part V 13; Springer International Publishing: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
  11. Jiang, Z.; Zhao, L.; Li, S.; Jia, Y. Real-Time Object Detection Method for Embedded Devices. Comput. Vis. Pattern Recognit. 2020, 3, 1–11. [Google Scholar]
  12. Bisong, E. Google Colaboratory. In Building Machine Learning and Deep Learning Models on Google Cloud Platform; Apress: Berkeley, CA, USA, 2019; pp. 59–64. [Google Scholar]
  13. Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  14. Google. Google Maps. Available online: https://maps.google.com (accessed on 1 June 2024).
  15. Kwon, Y. YOLO_Label. GitHub Repository. 2021. Available online: https://github.com/developer0hye/YOLO_Label (accessed on 28 June 2024).
  16. Drake, F.; Van Rossum, G. Python 3 Reference Manual; CreateSpace: Scotts Valley, CA, USA, 2009. [Google Scholar]
  17. Fan, J.; Huang, H.; Fan, H.; Gao, A. Extracting aquaculture area with RADARSAT-1. Mar. Sci. 2005, 29, 44–47. [Google Scholar]
  18. Zhu, C.; Luo, J.; Shen, Z.; Li, J.; Hu, X. Extract enclosure culture in coastal waters based on high spatial resolution remote sensing image. J. Dalian Marit. Univ. 2011, 37, 66–69. [Google Scholar]
  19. Ma, Y.; Zhao, D.; Wang, R.; Su, W. Offshore aquatic farming areas extraction method based on ASTER data. Trans. Chin. Soc. Agric. Eng. 2010, 26, 120–124. [Google Scholar]
  20. Lu, Y.; Shao, W.; Sun, J. Extraction of Offshore Aquaculture Areas from Medium-Resolution Remote Sensing Images Based on Deep Learning. Remote Sens. 2021, 13, 3854. [Google Scholar] [CrossRef]
  21. Filipponi, F. Sentinel-1 GRD Preprocessing Workflow. In Proceedings of the 3rd International Electronic Conference on Remote Sensing, Online, 22 May–5 June 2019. [Google Scholar]
  22. Shen, C.; Ma, C.; Gao, W. Multiple Attention Mechanism Enhanced YOLOX for Remote Sensing Object Detection. Sensors 2023, 23, 1261. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Example of cage and raceway annotation using the YOLO label software. Yellow squares represent target items (sea cages and raceways); red squares represent control items (e.g., crop fields, open sea areas).
Figure 2. Recall, precision, and F1 values at different threshold levels. (a) Inland raceways; (b) floating sea cages.