Article

Using Cloud-Based Array Electromagnetics on the Path to Zero Carbon Footprint during the Energy Transition

1 KMS Technologies, Houston, TX 77057, USA
2 Red Tree Consulting, Houston, TX 77055, USA
3 Luleå University of Technology, 971 87 Luleå, Sweden
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2021, 9(8), 906; https://doi.org/10.3390/jmse9080906
Submission received: 30 July 2021 / Revised: 16 August 2021 / Accepted: 20 August 2021 / Published: 22 August 2021
(This article belongs to the Section Geological Oceanography)

Abstract: Fluid imaging is one of the key geophysical technologies for the energy industry during the transition to a zero-footprint economy. We propose Cloud-based fluid distribution imaging to allow better, more optimized production, thus reducing the carbon dioxide (CO2) footprint per barrel produced. For CO2 storage, knowledge of the location of the stored fluids is mandatory. Electromagnetics is the preferred way to image reservoir fluids because of its strong coupling to the fluid resistivity. Unfortunately, acquiring and interpreting the data takes too long to contribute significantly to the cost optimization of field operations. Using artificial intelligence and Cloud-based data acquisition, we can reduce the operational feedback to near real time and the interpretation turnaround to close to 24 h. This opens new doors for the breakthrough of this technology from exploration to production and monitoring. It allows the application envelope to be enlarged to much noisier environments, where real-time acquisition can be optimized based on the acquired data. Once all components are commercialized, the full implementation could become a real game changer by providing near real-time 3-dimensional subsurface images in support of the energy transition.

1. Introduction

Geophysical fluid imaging technologies are used in a wide range of applications. Among geophysical methods, electromagnetics (EM) determines subsurface resistivities. Resistivity changes are caused primarily by fluid changes; hence EM is a prime candidate to address fluid properties and their variations. The biggest potential for EM lies in monitoring geothermal reservoirs; carbon capture, utilization, and storage (CCUS); and enhanced oil recovery (EOR) of hydrocarbon reservoirs. For EOR of hydrocarbon reservoirs, EM methods help increase the recovery factor. At the same time, using CO2 flooding to produce the oil significantly reduces the carbon footprint.
In geothermal applications, EM is a standard geophysical method for exploration and monitoring [1]. Monitoring is often carried out in combination with induced seismicity monitoring and to understand the fluid movement inside the reservoir. For carbon capture applications, EM methods have only recently become of interest [2]. For hydrocarbon applications, EM was in favor in the 1950s, 1960s, and 1980s, but did not make a breakthrough until marine EM showed its value in the marketplace [3,4]. With the transition to renewable energy sources, we also must address carbon reduction, either through a lower carbon footprint of existing oil production or by reinjection of CO2 into the reservoir. Combining these two is called enhanced oil recovery plus (EOR+), where CO2 is used to drive the enhanced oil production and thus increase the recovery factor.
Thermal EOR is one of the secondary recovery methods with the largest environmental impact. In fact, the production of one barrel of heavy oil releases to the atmosphere about 10 kg of carbon dioxide equivalent per barrel (CO2e/bbl), assuming the boiling of 3.5 bbl of water for each bbl produced. By optimizing the steam-oil ratio (SOR) needed for thermal EOR by just 1% using CSEM (typical improvements are much larger), it is possible to reduce emissions by up to 300 thousand kg of CO2e/day for a global production of 3 MM bbl/day.
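As a check, this figure follows directly from the numbers just given:
\[ 3\times 10^{6}\ \mathrm{bbl/day} \times 10\ \mathrm{kg\ CO_2e/bbl} = 3\times 10^{7}\ \mathrm{kg\ CO_2e/day}, \]
\[ 1\%\ \mathrm{of}\ 3\times 10^{7}\ \mathrm{kg\ CO_2e/day} = 3\times 10^{5}\ \mathrm{kg\ CO_2e/day} = 300\ \mathrm{thousand\ kg\ CO_2e/day}. \]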
Having developed EM into a technology candidate that can contribute high value to the future energy transition, we can now establish the connection between the measurement methodology and how artificial intelligence brings a significant breakthrough. Simplistically speaking, nothing would be more convenient than a small sensor with an integrated smart phone that produces three-dimensional (3D) images as answers to our questions. Unfortunately, our signals and methodologies are more complicated. To understand this, we need to understand the issues first:
  • A typical signal we measure in a borehole, the magnetic field response to a transmitter located at the surface, is about 400 pT (1 pT = \(10^{-12}\) T). In comparison, the Earth's magnetic field varies between 25,000 and 65,000 nT, and a strong refrigerator magnet produces a magnetic field of 10,000,000 nT. As a result, adding data transmission or 'noisy' electronics to our very sensitive magnetic sensors is not easy and requires many implementation iterations until high quality measurements are available in real time.
  • Our geophysical problem is always an inverse problem (going from data to model) and as such is ill-posed, which means its solution is non-unique. This requires that we understand how data errors, model variations, a priori information, and regularization parameters affect the multi-parameter sensitivity in 3 dimensions.
Electromagnetic monitoring technology is now well proven for hydrocarbon applications [5,6,7,8], while in the 1980s it was still in the research phase [6]. The equipment is fully commercial, the software algorithms are well tested, and surface measurements in many cases confirm borehole measurements. To improve the resolution of surface-based measurements, borehole sensors can be added. What is needed is a demonstration of its value with more applications. To show commercial viability, the use of artificial intelligence is essential to optimize operations. The methods were originally developed for land applications but were only used in limited cases. The success in the marine EM market proved the technology's value and its commercial viability [8]. While the methodology, instruments, and interpretation methods are mature, they are not commercial enough to let the business drive the technology. Cloud-based artificial intelligence and deep learning, coupled carefully with the hardware and methodology, could reduce the turnaround time to such a degree that the market is enlarged from initial budgetary markets to operational markets. Thus, we chose to review some enabling components from the hardware and methodology side.
We designed a new EM acquisition architecture that combines novel technologies and addresses the need to calibrate surface and borehole data with each other. This is necessary to calibrate the surface data to the real reservoir scale and parameters. We also added various borehole receivers to the system to improve image focus and resolution [9]. Our array acquisition system applies multiple electromagnetic methods as well as microseismics in ONE layout. This reduces operational cost and provides synergy between the methods. In a production scenario, using multi-component EM allows resolving oil- and water-bearing zones equally well, as well as obtaining fluid flow directions. The modular architecture allows a fit-for-purpose configuration tailored to specific exploration/monitoring targets (in terms of depth, frequency range, and required sensitivity). The entire system combines hardware with processing and 3D modeling/inversion software, streamlining the workflow for the different methods. 3D feasibility studies leading to acquisition design are routinely carried out. After 35 years of experience with surveys that included careful feasibility-based survey design, we have consistently seen them deliver exceptional results. This leads to a fully integrated land and borehole acquisition system that can be optimized in a fit-for-purpose fashion and extended to transition zones and marine acquisition. The core of the system for all units is a unified sensor and system architecture. This alone does not tell us where the biggest technical effort and potential use of artificial intelligence (AI) is. Only when we combine the hardware technology with operations, processing, and interpretation, and go through a realistic project timeline, can we see where the potential improvement can be made using AI.
Figure 1 shows the workflow for EM reservoir monitoring, including the technical tasks, and indicates which tasks are the most time consuming. On average, the time between repeat surveys is approximately 15–18 months. The first step in the workflow is the 3D feasibility, during which we derive the survey design and the optimized system selection. The main tasks here are to build a 3D anisotropic model, to benchmark the 3D modeling tools to avoid false anomalies, and to run the 3D models. The derivation and running of the models and their analysis take the longest (more than 50% of the elapsed time).
During the data acquisition, most of the time is spent moving the equipment. Quality assurance is carried out in parallel to decide whether a receiver site is of sufficient quality and can be moved, or whether the recording must continue. On any given day, this process can delay the acquisition by as much again. Quality assurance includes data processing, basic data interpretation, and applying various criteria to examine the data quality. Each of the criteria (like data error, signal length, or behavior of the transmitter current) is controllable, but in combination they need to be prioritized. The time needed for a baseline survey can often be reduced by using more equipment and automating field operations. The next step includes the data processing and data evaluation, which again mostly depend on careful data administration and data processing that can largely be further automated. After the data have been evaluated, the local noise conditions are known, and the reservoir model and its electromagnetic response are better understood. Having more detailed a priori information makes it meaningful to optimize further for the next repeat survey. We usually add here a step called predictive optimization, where 3D synthetic data are generated and local noise is added to the data such that the next repeat survey can be simulated. This allows further survey parameter optimization to ensure better imaging. For the predictive optimization, the 3D modeling time is again the biggest user of elapsed time. The next step, when the time-lapse data get converted to subsurface images, includes straightforward processing that can be automated. The last step of producing a verified fluid image again includes time-consuming 3D modeling. Throughout the workflow, 3D modeling and quality assurance are the most time-consuming and most critical tasks. It is worthwhile to address how they can be made faster and easier using AI.
Before we review where AI makes the biggest impact in electromagnetics for reservoir monitoring, we need to understand where fluid imaging technology contributes to carbon footprint reduction. After that, we describe the methodology and instruments and where they fit in the Cloud-based acquisition. Then we focus on those parts that benefit most from Cloud enabling. Finally, we develop how Cloud-based AI can produce game-changing technology and contribute to the carbon footprint reduction. This is underscored by some application examples.

2. Value Statement: Linking Carbon Footprint to Fluid Imaging

Reducing greenhouse gas emissions has become a primary need for the oil and gas industry and is reinforced by various national and international agreements. We analyze various parts in the hydrocarbon lifecycle to understand where our technology can make the biggest impact.
On the other hand, the energy needs of the world's population are inescapable. Indeed, the International Energy Agency [10] projects that the demand for energy will increase by 24% by the year 2040, with fossil fuels supplying 74% of this demand (a growth of 13%), while renewable energy sources will grow by 83%.
Hence, to continue hydrocarbon production requires changes (reduction of carbon footprint) to the exploration, production, and refining processes to reduce greenhouse gas emissions.
Approximately 90% of greenhouse gas emissions (CO2, CH4, N2O) are produced in the downstream process; only 10–20% of emissions are generated during exploration and extraction. Those emissions reach an average value of 10.3 g per MJ of crude oil produced [11]. This value may vary depending on the type of hydrocarbon and the process used for its extraction. Table 1 shows the typical emission values according to the type of hydrocarbon and the production stage [12].
Geophysics can greatly contribute to reducing emissions by supporting cleaner energy production, such as optimizing secondary recovery production that requires flood fluid monitoring (thermal, waterflooding, CO2).
Likewise, geophysics contributes to the generation and production of clean energy such as geothermal energy, in addition to its contribution to geothermal exploration.
An emerging hydrocarbon production technique that will greatly contribute to reducing emissions is the so-called EOR+ [13]. EOR+ has a dual purpose in that it requires CO2 capture and sequestration while at the same time part of this CO2 can be used for enhanced recovery. As such, it is possible not only to reduce emissions by optimizing production, but also to create positive credits by sequestering the excess CO2 captured from the atmosphere. This enhanced recovery technique requires detailed description of the reservoirs and monitoring of the CO2 injection front. Both can be achieved with the application of EM methods.
In all these enhanced recovery processes (water, thermal, or CO2 flooding), the common denominator is the change in electrical resistivity that occurs in the reservoir fluids when water, steam, or CO2 is injected. These resistivity changes, caused by the mobility and conductivity of the fluids, are detectable with EM methods because in most cases there is a strong resistivity contrast between immobile and mobile fluids.

3. Workflows and Value of Electromagnetics for Fluid Imaging

An electromagnetic system is laid out at the surface with a transmitter and multiple receivers. Figure 2 shows an example of such a layout, where three receiver lines are used to produce areal coverage. The transmitters in the figure are marked by the yellow lines (four transmitters in this figure). All receiver locations (shown as red circles in the figure) are used for each of the transmitters. Usually, before carrying out a survey, a feasibility study using 3D modeling is done to ensure that the survey setup can see the target variations. For that, a resistivity log, as shown on the right side of the figure, is used. It is upscaled to determine an anisotropic model at a scale that can be resolved by surface measurements. This upscaled model is shown superimposed (blocky lines) on the log on the right: the upscaled horizontal resistivities are in blue and the vertical ones in purple. The gamma ray log (GR), also in the figure, is used to determine bed boundaries. The Earth model on the left shows an even more upscaled anisotropic resistivity model and indicates the target reservoir (marked by the surface and side projections in the figure). In this model, the target reservoir could be the subject of either enhanced oil recovery (EOR) or CO2 injection. Through the transmitter we inject a current of the desired waveform, and the response from the induced current in the subsurface is sensed by the receivers at the surface. This process is repeated for several hours, and the recorded signals are averaged/stacked to get a better signal-to-noise ratio.
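To make the averaging/stacking step concrete, here is a minimal synthetic sketch (the numbers are illustrative assumptions, not field parameters) of why stacking repeated transmitter cycles improves the signal-to-noise ratio by roughly the square root of the number of stacks:

```python
import numpy as np

rng = np.random.default_rng(0)

n_stacks, n_samples = 1000, 2048                 # e.g., 1000 repeated transmitter cycles
t = np.linspace(0.0, 5.0, n_samples)
signal = 400e-12 * np.exp(-t)                    # ~400 pT decaying transient (illustrative)
noise_sd = 4e-9                                  # 4 nT ambient noise, 10x the peak signal

# Each record is the same synchronous signal plus independent random noise.
records = signal + rng.normal(0.0, noise_sd, size=(n_stacks, n_samples))
stacked = records.mean(axis=0)                   # coherent stacking

snr_single = signal.max() / noise_sd
snr_stacked = signal.max() / (noise_sd / np.sqrt(n_stacks))
print(f"single record SNR ~ {snr_single:.2f}; after stacking ~ {snr_stacked:.1f}")
# Random noise averages down by sqrt(n_stacks); the repeated waveform does not.
```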
Since the biggest value of electromagnetics is in reservoir monitoring, we explain the petrophysics using a model of Carlson [14] shown in Figure 3. The rock is represented by grains and the pores between them. The fluids are in the pore volume, and once stress or strain acts on the fluid, the fluid becomes mobile. This mobility makes the electrons in the water-saturated fluid fraction move, resulting in a drastic resistivity decrease. For earthquakes, we often observe that electromagnetic signals accompany seismic signals and appear slightly before the seismic onset, because the rock experiences stress, and the electrons move, before the grain-to-grain contacts break. We show an example from an incidental observation we obtained in India on the right side of Figure 3. Here, we display the five components of the magnetotelluric field observed during the earthquake. The displayed time series speaks for itself, as the earthquake is visible (time synchronous) on all data traces.
Since we attempt to measure our data to better than 0.5% accuracy (the usual calibration variations that we observe are between 0.1 and 0.2%; the maximum calibration deviation observed has been around 1.5%), careful survey design and optimized survey procedures are necessary. A feasibility study is in most cases mandatory, at best even with additional local noise measurements. A workflow that we typically use is shown in Figure 4. For monitoring applications, we often look for less than 5% reservoir resistivity variation, and we thus recommend a full 3D modeling feasibility study. A feasibility study includes obtaining all available geoscientific data and carrying out a noise test on site. The data are then analyzed such that we preserve the target and define a range of target petrophysical parameter variation. These get translated to resistivity variations to define the modeling parameters and to build a 3D anisotropic model derived from the most accurate information available. The 3D modeling results define the future survey designs, leading to a baseline survey. An example of the data building the decision space is shown in Figure 5. On the left we see the noise spectra for the various sensors (electric and magnetic fields). On the right are the measured 3D voltages induced in the receiver for various offsets. When we superimpose the noise spectrum, as done in Figure 5, we need to keep our survey design parameters (offset, gains, and recording times) above the noise level. For that reason, the superimposed noise was filtered beforehand to simulate realistic situations.
From this we determine the recording time and select the sensors, survey layout, and other operational parameters. Receiver spacings are derived by simulating the receiver response with different spacings. We simulate the time lapse by substituting the fluid in the reservoir model and estimating whether we can reconstruct the anomalous reservoir.
During the feasibility, the most time-consuming part is the 3D modeling (compare Figure 1), which should thus be accelerated using artificial intelligence, as discussed below.
Next, we review the acquisition workflow. Data acquisition is usually an expensive part of a project and requires careful timing and preparation. Figure 6 shows a typical flow diagram for field data acquisition. A potentially time-consuming step is the quality assurance (QA), which is done concurrently with the acquisition. Here, the decision must be made whether a receiver is picked up and moved or remains on the ground to get better data (or needs to be improved). Thus, getting the data from the receivers to the QA specialist is important. We do this via a noise-free web access box which streams the data to the internet. This allows near real-time QA decisions.

4. Background Enabling Instrumentation

The electromagnetic signals that we measure to obtain information about the subsurface are very small (tens to several hundred pT), and they need to be measured in the presence of the Earth's magnetic field (25,000–65,000 nT), which is about 1 million times larger. In addition, we have external noise and man-made noise that can be more than 1000 times larger than the Earth's magnetic field. This is a difficult task, but it is not impossible, and it is routine in geophysics, military, and space science. Our sensors have such high sensitivity that they cannot easily be integrated with other data transmitting devices such as cell phones, as these generate too much noise for the sensor and distort the signal. For passive electromagnetics, this is because of the broadband sensitivity; so data transmission is often done during acquisition pauses. For CSEM, that is not easily possible because we require a very accurate reference voltage level, which can only be obtained with continuous data acquisition. Figure 7 shows the typical noise density for the various magnetic field sensors (the electric field sensors have different, less critical issues) used for the land geophysical methods discussed here. For comparison, we also plotted the noise density of an earthquake prediction coil and the noise density of the natural magnetic fields. All sensors must be below that noise level. We solve this issue by jumping via a low-noise Wi-Fi connection a short distance away from the sensors and all analog parts to a web access box, where the data are buffered and then sent to the internet with various protocols. Further discussion is in the section below.
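Taking representative values from the ranges above, the ratio is indeed about one million:
\[ \frac{50{,}000\ \mathrm{nT}}{50\ \mathrm{pT}} = \frac{5\times 10^{-5}\ \mathrm{T}}{5\times 10^{-11}\ \mathrm{T}} = 10^{6}. \]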
Figure 8 shows field pictures of various receiver equipment components. Our target depth is between 500 m and 6 km in almost any geologic environment, for a variety of applications from geothermal and carbon storage to hydrocarbons. At each of the sites, multi-component EM equipment is deployed to measure either the magnetotelluric (MT) or the controlled-source electromagnetic (CSEM) response. We use ultra-stable electrodes for the electric field that have a broadband response from close to DC (below 0.0001 Hz) to about 40 kHz at the high end. Stability of the sensors at the low-frequency end is essential for the fidelity of the signal and to determine the proper reference voltage. It should be noted that capacitive electrodes are not suitable, as they do not go low enough in frequency to cover that depth range. At the top left of Figure 8 we show the electrode we used, a lead-free LEMI-701. The electrode spacing is either 50 or 100 m. Magnetic field measurements can be done with either fluxgate sensors, induction coils (center photo in Figure 8), or an air loop (bottom left photo). We test all of them at each survey location and compare their noise with the local noise to optimize the layout and acquisition times, including operational deployment. Pictures of the acquisition system we use (KMS-820 array system) are shown in the other three photographs in the figure. The yellow box in the picture at the top right is the web access box that allows data streaming to the Cloud. It is important to note that the acquisition system must operate from −20 °C to 60 °C, acquire continuous data for weeks at a time, and be connectable to the Cloud.
Figure 9 shows various transmitter photographs under field conditions. The transmitter consists of a generator, a current switchbox, and a waveform controller that records the transmitter signals and is GPS synchronized. At the top left and the center bottom of the figure are pictures of our 150 kVA switchbox (KMS-5100-150), back and front side, respectively. The switchbox converts the alternating current from the generator to direct current and switches it according to a predefined waveform (usually a square wave). This current (up to 400 A) is injected into the ground via large grounding electrode plates placed in pits on both sides of an approximately 1 km long thick cable. A picture of four pits (one dipole side) is shown in the top center of Figure 9. A further transmitter site picture is shown at the top right (generator and observer trailer), and at the bottom left a camp site in an oil field. At the center right of Figure 9 is a picture of the 100 kVA version of the switchbox, and below it on the right an inside view of the recording trailer.
The most important element in all of this is to send the data in real time to the Cloud. Where cellular phone coverage is available this can be done via cell phone.
After describing the instrument [15], we need to consider the methods and what they are used for. The most mature electromagnetic method is magnetotellurics (MT), as described in [16,17,18,19]. MT is the primary method for geothermal applications but is also used to a limited extent for hydrocarbon and other academic applications. MT uses the Earth's natural field, and the signal can be improved by adding a controlled signal to the weak band. In that mode the method is known as controlled-source audiomagnetotellurics or CSAMT [20]. An even better coupling to the resistivity of the subsurface is obtained when the equipment is used in controlled-source electromagnetics (CSEM) mode (in the time or frequency domain) [20,21,22,23]. The problem with CSEM is that the image focus is unknown. This can be improved by using focusing methods like the borehole-logging-style focusing described by Rykhlinskaya and Davydycheva [24]. Using such a system for a combination of methods including boreholes is described by He et al. [25] and Strack [26], and for marine applications by Constable [27]. The choice of methodology is determined by finding the optimum solution using the 3D feasibility approach described above.

5. Converting the Workflow to Cloud-Based Application

Going from concept to real data implementation requires that the turnaround time of all the tasks in Figure 1 be reduced to near real time. We started by addressing the most time-consuming ones and illustrate their progress to show the uniqueness of the approach. To appreciate the difficulty of the proposed undertaking on the path to our goal, a fully distributed autonomous system where sensors, acquisition, and Cloud data transfer operate in near real time, we still need to understand small electromagnetic signals and their sensitivity to external noise, natural or man-made. To separate data for storage/transmittal purposes, service support, and deep-learning-related analytics, we separated the Cloud into three layers, as shown in Figure 10: the DATA layer, which is mostly for data transmission and archival purposes; the OPERATIONS layer, where the interaction between acquisition unit and user occurs; and the IMAGE layer, where the information is interpreted and improved using deep learning algorithms. In the OPERATIONS layer, feedback between the acquisition unit and the user also occurs. When the web access box is placed away from the sensor, both respective noise curves are very similar (bottom left of Figure 10). Clearly, the web access box generates little to no extra noise, which lets us realize the value of having the field data available in real time. It should be mentioned that for magnetotelluric data it is important to separate the acquisition unit from the web access box by a few meters. Only by avoiding the data transmission noise can one acquire true continuous time series, which are essential for several data processing steps.
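The paper does not prescribe an API for this layering; the following minimal sketch (all names are hypothetical assumptions) only illustrates how packets arriving from a web access box might be routed into the three layers:

```python
from dataclasses import dataclass
from enum import Enum, auto

class CloudLayer(Enum):
    DATA = auto()        # raw time series, transmission and archival
    OPERATIONS = auto()  # QA results and acquisition feedback to/from the field crew
    IMAGE = auto()       # interpreted images, improved by deep learning

@dataclass
class Packet:
    station_id: str
    kind: str            # e.g., "timeseries", "qa_report", "resistivity_image"
    payload: bytes

def route(packet: Packet) -> CloudLayer:
    """Route an uploaded packet from the web access box to a Cloud layer."""
    if packet.kind == "timeseries":
        return CloudLayer.DATA
    if packet.kind == "qa_report":
        return CloudLayer.OPERATIONS
    return CloudLayer.IMAGE

print(route(Packet("MT-017", "timeseries", b"...")))  # CloudLayer.DATA
```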
In practice, the most difficult part of this is the cell phone coverage and good data transfer. While replacing the SIM card should be standard, we experience a lot of inconsistency among cell phone providers around the globe and must in most cases work with their technicians to make this work. Hopefully, once satellite-based internet is commonly available, this will become history, even in the remote areas where geophysics is often applied.
Operationally, having the data available in near real time in the Cloud is not only a huge time and thus money saver, but also an enabler. We can now do many things and apply the technology to problems hitherto impossible. Collaboration between staff on site and experts located half a globe away is much more immediate and beneficial, and not only during problem-solving instances. An obvious advantage is that the data can be processed quickly while the equipment is still laid out on the ground. Another major cost saving can be obtained when safety/security training and licenses are required for the acquisition personnel but not for remote personnel, and when extreme weather conditions exist, from cold (−20 °C, as during the data acquisition shown here) to heat (+50 °C, as often during our field testing).
It is an enabling operational benefit to have the instruments deliver the data in real time. For reservoir monitoring (which we determined offers the highest value for electromagnetics on the future path to zero footprint), we need to look at the most time-critical tasks indicated in Figure 1. During the 3D feasibility step (workflow shown in Figure 4), 3D modeling is the most time consuming, while the analysis of the results only requires more experience to automate the process in the OPERATIONS Cloud layer.
Usually, when we think about 3D modeling and data, we think about inversion, because we are trying to derive an Earth model from the data. Because our field measurements contain noise, and our methods are partially multi-valued, we cannot always provide a unique inverse solution [21]. When we analyzed this problem with borehole resistivity logs in the 1990s [28,29], we noticed that 95% of the computing resources were spent on the 3D forward model, which has a unique solution. So, instead of using the neural net for the inversion, it was more effective to use it for the forward models [30], where it can save up to 95% of the computing time. Substituting neural-net-based forward modeling reduces the elapsed inversion CPU time by 92%. When replacing forward models with neural nets, we must realize that EM responses depend on the resistivity contrast and on the conductance or transverse resistance of the specific reservoir unit. This is often called equivalence in electromagnetics. It has fundamental implications, as resistivity profiles are usually specific to each basin (or formation analogue). Not only do we need to train the artificial-intelligence-based algorithm for each basin (or geologic analogue), but also with many training sets (tens of thousands). Figure 11 describes the inversion components for electromagnetics. In the forward modeling, the Earth model (1D or 3D) is combined with the forward modeling function, and model responses are generated. These are then compared with the real data, and, if they match, we have found a realistic model explaining the data. Since they usually do not match in the initial iterations, the forward modeling is done many times and thus becomes the consumer of 95% of the CPU time. Forward modeling can easily be substituted by artificial neural networks; a sketch of such a loop follows below. The input to the inversion is the data from the Cloud DATA layer, and the output goes to the IMAGE layer. Figure 12 explains why we do not recommend substituting the inversion itself (the model match/updating) with a neural net. In the figure we have a two-parameter space that represents the model or transformed parameter space [21]. For a typical 30-layer model we have 29 layer parameters in the one-dimensional case, or 29³ parameters in the three-dimensional case, plus various vector anisotropies. Because electromagnetics often responds to conductance (thickness times conductivity) or transverse resistance (resistivity times thickness), we cannot always separate parameter combinations, and we obtain error surfaces, as in Figure 12, that have local and global minima. As geophysicists, we always look for the global minimum (as the best solution), which may not always be the right thing to do from the explorationist (geologist) viewpoint, when one tries to get a model consistent with most data sites and can tolerate higher curve-fitting errors. The decision between local and global minimum is not easy, as it requires understanding of all data sets and the geology.
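To make the substitution concrete, here is a minimal sketch (all function names, model sizes, and numbers are illustrative assumptions, not the authors' production code) of an inversion loop in which the expensive forward operator has been replaced by a fast surrogate standing in for a trained neural network:

```python
import numpy as np

FREQS = np.logspace(-3, 1, 20)

def surrogate_forward(model: np.ndarray) -> np.ndarray:
    """Stand-in for a trained neural-net surrogate of the forward operator.
    In practice this would be a basin-specific network trained on tens of
    thousands of synthetic models; a smooth analytic function keeps the
    sketch self-contained."""
    layers = np.arange(1, len(model) + 1)
    return np.array([np.sum(model * np.exp(-f * layers)) for f in FREQS])

def invert(data: np.ndarray, m0: np.ndarray, n_iter: int = 10,
           eps: float = 1e-4) -> np.ndarray:
    """Gauss-Newton-style loop. Nearly all the cost sits in the forward
    calls, which is why a fast surrogate saves most of the elapsed time."""
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        resp = surrogate_forward(m)
        r = resp - data                                  # data residual
        J = np.column_stack([                            # finite-difference Jacobian
            (surrogate_forward(m + eps * e) - resp) / eps
            for e in np.eye(len(m))])
        dm, *_ = np.linalg.lstsq(J, -r, rcond=None)      # least-squares update
        m += dm
    return m

true_model = np.array([10.0, 4.0, 8.0])   # layer resistivities (ohm-m), illustrative
observed = surrogate_forward(true_model)
print(invert(observed, m0=np.full(3, 5.0)))  # approaches [10. 4. 8.]
```

In a real workflow the update step would also carry regularization and a priori constraints, which is exactly where the equivalence and non-uniqueness discussed above enter.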

6. Implementation Example

To illustrate how we transition to using artificial intelligence, we apply the above-mentioned principles to the instrumentation and deliver the data directly to the Cloud. In Figure 1 we describe the most time-consuming parts in each workflow step. Here, we select some of them to illustrate the value of the implementation.
Obviously, Cloud usage as a data delivery and depository vehicle saves a lot of time and money. Figure 13 shows an example of electromagnetic data (here MT data) where the Cloud enables us to combine data sets acquired thousands of miles apart. We acquired a large set of MT data in the northern USA and used two remote reference sites [17], one in the northern USA, 600 km away from the survey area, and another in the southern USA, about 3000 km away. Using the Cloud allows us to utilize interpretation resources far away and take advantage of different time zones. In this case the data acquisition was done in the northern USA at −20 °C at two locations and at around 50 °C at the southern US location, where the instruments were field tested. The interpretation was done in Europe (Sweden and Germany) and Texas. The results were available within 24 h. This would not have been possible without the noise-free web access described above. Figure 13 shows the results and the comparison between using a single site and including a remote reference site. In the individual site processing on the left, we still see a significant mismatch over a wide frequency band. When including the remote reference site for noise compensation, this mismatch goes away, and we get a more realistic data/model match. To the right of the curves are the Earth models obtained from the inversion (dashed) and the one derived by upscaling the resistivity log shown in Figure 2. The inversion model matches the log well, which is the basis for our judgement. The value of this is multi-fold:
  • Significant interpretation time saving.
  • For operations purposes, we more easily determine whether a site must be repeated or the quality is sufficient.
  • Significant confidence increase in the 3D feasibility subsurface model reduces interpretation time.
As a next step, we developed a deep learning artificial neural network to assist our data quality assurance (QA) effort, mostly because the turnaround time of the quality assessment can be very costly for operations. If we can implement an artificial neural network to assist the QA process, it will save us tremendous operation time. This is part of the NOISE-FREE acquisition to the Cloud explained in Figure 10. To automate and improve the QA process, we introduce a deep learning artificial neural network and implement the process in three steps: STEP 1, set up a dedicated server to automatically harvest each data set from the Cloud; STEP 2, run the artificial neural network to derive an instant QA result; STEP 3, feed the result back to the operator and add it to the OPERATIONS Cloud layer (a minimal sketch of this automation follows below). The success of this implementation relies on building a suitable artificial neural network and training it with a deep learning method [31,32,33]. For illustration purposes, we describe here the initial artificial neural network and deep learning phase.
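A minimal sketch of the three-step automation (the file layout, names, and polling approach are our assumptions for illustration; the trained network itself is discussed below):

```python
import json
import time
from pathlib import Path

# Hypothetical mount points for two of the Cloud layers of Figure 10.
DATA_LAYER = Path("cloud/data_layer")
OPERATIONS_LAYER = Path("cloud/operations_layer")

def predict_qa(dataset: dict) -> int:
    """Placeholder for the trained BP neural network described below;
    returns a QA level from 1 (Excellent) to 4 (Poor)."""
    return 1

def harvest_loop(poll_seconds: float = 60.0) -> None:
    seen: set[str] = set()
    while True:
        # STEP 1: harvest each new data set from the Cloud.
        for f in sorted(DATA_LAYER.glob("*.json")):
            if f.name in seen:
                continue
            dataset = json.loads(f.read_text())
            # STEP 2: run the artificial neural network for an instant QA result.
            level = predict_qa(dataset)
            # STEP 3: feed the result back to the operator via the OPERATIONS layer.
            report = {"station": dataset.get("station"), "qa_level": level}
            (OPERATIONS_LAYER / f"{f.stem}_qa.json").write_text(json.dumps(report))
            seen.add(f.name)
        time.sleep(poll_seconds)
```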
To build an artificial neural network, we need to derive input parameters that represent the data, the scientific processes applied to the data (like processing or inversion), and the human interaction with the data (like operational behavior or model objectives). This seems simple initially if we manage to use clear technical metrics to evaluate each of these areas. Data QA comes from acquisition observations, raw data curve behavior, and 1D inversion results. Since we had an accurate anisotropic borehole-log-derived model, the inversion match overrides the acquisition quality (in this example only). Thus, we define two levels of QA parameters: Level 1 is based on acquisition observations and raw data curve behavior; Level 2 is based on 1D inversion results. To simplify, we use QA 1 to refer to QA parameter Level 1 and QA 2 to refer to QA parameter Level 2. In QA 1, MT curves are evaluated within the frequency range from 0.001 to 1000 Hz. QA 1 contains robust processing parameters [34]: root mean square (RMS) error, phase behavior, and apparent resistivity. QA 1 classifies each curve into 4 levels:
  • Level 1—Excellent: ≥85% of data points have a relative error (standard deviation (SD) of amplitude/amplitude; the absolute error of the phase equals 0.56 times the relative error of the amplitude) <10%, with smooth continuity.
  • Level 2—Good: more than 75% of data points (in period) have a relative error <20%.
  • Level 3—Acceptable: the phase does not leave the quadrant (0–90°), and the amplitude of the impedance tensor neither falls more steeply than 45° nor increases with period.
  • Level 4—Poor: data points show large dispersion, and it is impossible to define a curve for interpretation.
Further categorization is possible based on the impedance tensor (main components) and will be implemented in future deep learning practice.
QA 2 is based on 1D inversion quality criteria and can override QA 1 (if the inversion quality, which includes the remote reference, is better than the acquisition quality, it takes precedence):
  • Level 1—Excellent, data fitting has a normalized root mean square fit (RMSf) ≤ 2.
  • Level 2—Good, 1D fitting has a normalized root mean square fit (RMSf) > 2 and ≤5.
  • Level 3—Acceptable, 1D data fitting has a normalized root mean square fit (RMSf) > 5 and ≤10.
  • Level 4—Poor, 1D data fitting has a normalized root mean square fit (RMSf) > 10.
We use the normalized RMS fit, which is the RMS of the misfit between predicted and observed data divided by the SD, aiming to fit the data within 1, i.e., within the standard error bars [18,34].
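Written out (one common form consistent with this description), for N data points with standard deviations \( \sigma_i \):
\[ \mathrm{RMS_f} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{d_i^{\mathrm{pred}} - d_i^{\mathrm{obs}}}{\sigma_i}\right)^{2}}, \]
so \( \mathrm{RMS_f} \le 1 \) means the data are fit within one standard error bar on average.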
A deep learning algorithm can learn patterns from data sets. During an MT project in the US, we acquired about 250 data sets at 50 stations. We use these data in two steps: training and testing. Deep learning programs train themselves on the training data and test the accuracy of the algorithm on the testing data. Improvements with a lower value impact matter less in the operational context, where the decision to repeat measurements or to move the receiver equipment must be made.
Our deep learning module has two parts: an artificial neural network and a statistical quality control module. The artificial neural network predicts the QA result, and the quality control module removes outliers from the predicted QA results. To construct the deep learning module, we randomly selected 37 data sets as training data; the remaining data sets were testing data. For each training data set, we randomly defined a predicted QA result to simulate the output of the artificial neural network, and the predicted result was compared with the actual QA result (produced by an experienced geophysicist). After the comparison, we took the difference between the predicted and the actual QA result and then calculated the difference ratio, which is the difference divided by the actual QA result. The difference and difference ratio are used as input values to a statistical quality control module [35] to train the neural network. This statistical quality control module is based on the central limit theorem [37]; it removes outliers of the predicted QA when training the neural network and thereby improves it.
Figure 14 shows the backpropagation (BP) artificial neural network (top left) used in this application [36], the input layer reference (top right), and the deep learning feedback loop (bottom). Input data go directly to the artificial neural network, which predicts the QA result. We also created a statistical quality control module to evaluate and improve the network's predicted QA results. We chose a BP network because of its efficiency compared to other algorithms. In the BP network, the three input nodes are global positioning system (GPS) information (Data 1, including coordinates, time, and altitude), instrument amplifier gain settings (Data 2, including the acquisition unit amplifier settings for each channel), and the operator's name for each data set (Data 3); the two hidden layers are QA 1 (QA parameter Level 1) and QA 2 (QA parameter Level 2); the one output node is the predicted QA result. The predicted QA result is compared with the actual QA result, which was produced by an experienced geophysicist. The comparison is quality controlled by a central limit theorem algorithm [37], which is explained below. By feeding the comparison results back to the BP network, it is trained continuously, and the deep learning cycle starts. The more data sets we provide to this BP network, the better it performs.
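For illustration, a minimal backpropagation network with the topology described above (three input nodes, two hidden layers, one output node) can be sketched as follows; the feature encoding and labels are toy assumptions, not the authors' trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNetwork:
    """3 input nodes -> two hidden layers -> 1 output (predicted QA level)."""
    def __init__(self, sizes=(3, 8, 8, 1)):
        self.W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]
        self.b = [np.zeros(b) for b in sizes[1:]]

    def forward(self, x):
        self.acts = [x]                              # keep activations for backprop
        for W, b in zip(self.W, self.b):
            x = sigmoid(x @ W + b)
            self.acts.append(x)
        return x

    def train_step(self, x, y, lr=0.5):
        out = self.forward(x)
        delta = (out - y) * out * (1 - out)          # output-layer error signal
        for i in range(len(self.W) - 1, -1, -1):     # backpropagate layer by layer
            a = self.acts[i]
            new_delta = (self.W[i] @ delta) * a * (1 - a) if i > 0 else None
            self.W[i] -= lr * np.outer(a, delta)     # gradient-descent update
            self.b[i] -= lr * delta
            delta = new_delta

# Toy features: [normalized GPS quality, amplifier gain code, hashed operator id].
X = rng.random((37, 3))
y = (X.mean(axis=1, keepdims=True) > 0.5).astype(float)   # stand-in QA labels
net = BPNetwork()
for _ in range(200):
    for xi, yi in zip(X, y):
        net.train_step(xi, yi)
preds = np.array([net.forward(xi) for xi in X])
print(f"training accuracy ~ {((preds > 0.5) == y).mean():.2f}")
```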
To improve our BP artificial network, we need to establish a statistical quality control module [35] to fine-tune the deep-learning-predicted results. Artificial neural network predictions, especially at the beginning stage, have a large error rate. We set up an outlier detection algorithm based on the central limit theorem [37]. The workflow of the quality control module is shown in Figure 14 (right side), and its algorithm is explained in Figure 15.
The difference for a data set is the difference between the predicted QA result and the actual QA result (Equation (1)), and the difference ratio is the difference divided by the actual QA result (Equation (2)).
\[ \mathrm{Difference} = \mathrm{Predicted\;QA\;result} - \mathrm{Actual\;QA\;result} \tag{1} \]
\[ \mathrm{Difference\;Ratio} = \frac{\mathrm{Difference}}{\mathrm{Actual\;QA\;result}} \tag{2} \]
According to the central limit theorem, the following four properties are needed and are calculated from the sample training data sets:
  • The mean of the sample means \( \mu_{\bar{x}} \) equals the mean of the whole data set: \( \mu_{\bar{x}} = \mu \).
  • The standard deviation (SD) of the sample means \( \sigma_{\bar{x}} \) equals the SD of the whole data set divided by the square root of the sample size: \( \sigma_{\bar{x}} = \sigma / \sqrt{n} \).
  • We assume the original data sets are normally distributed; therefore, the sample means will be normally distributed.
  • If, in any case, the distribution of original data sets is not normal, a sample size of 30 or more is needed to use a normal distribution to approximate the distribution of the sample means. The larger the sample, the better the approximation will be.
We selected the 37 sample data sets shown in Figure 15 (left) as the initial input to the quality control module for the network. The data were selected from all stations (the remaining data showed high cultural noise). Each data set has an initial (random) predicted QA result. According to the central limit theorem [37], we calculate the mean of the sample means \( \mu_{\bar{x}} = 19.59 \) and the standard deviation of the sample means \( \sigma_{\bar{x}} = 24.61 \). We defined upper control limits (UCL) and lower control limits (LCL). The limit range is \( \mu \pm 3\sigma \), i.e., from −54.24 to 93.42. When we start feeding our training data sets to the BP artificial network, if the difference ratio of a data set falls outside this range, we consider the data set an outlier, and it is removed. According to our tests, the range defined by the UCL and LCL covers 99% of the predicted results but still removes outliers. By implementing the central limit theorem as a quality control module, all data points outside the limit range are considered outliers and removed; a minimal sketch of this filter follows below. With this quality control module in place, our BP network gets trained to produce better predicted QA results. As we feed more training data sets to the BP network, our deep learning neural network will produce better instant predicted QA results.
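A minimal sketch of this control-limit outlier filter (the function names are ours; the numbers reproduce those quoted above):

```python
import numpy as np

def control_limits(diff_ratios: np.ndarray, k: float = 3.0) -> tuple[float, float]:
    """LCL/UCL from the sample mean and SD of the difference ratios (mu +/- k*sigma)."""
    mu, sigma = diff_ratios.mean(), diff_ratios.std(ddof=1)
    return mu - k * sigma, mu + k * sigma

def remove_outliers(diff_ratios: np.ndarray, lcl: float, ucl: float) -> np.ndarray:
    """Drop predicted QA results whose difference ratio falls outside [LCL, UCL]."""
    return diff_ratios[(diff_ratios >= lcl) & (diff_ratios <= ucl)]

# Reproducing the limits quoted in the text from mu = 19.59, sigma = 24.61:
mu, sigma = 19.59, 24.61
print(round(mu - 3 * sigma, 2), round(mu + 3 * sigma, 2))   # -54.24 93.42
```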
Once the quality control module for our BP artificial network was set up, we started feeding it data to train it. We randomly selected data sets from 37 stations (one data set from each station, exactly the same number of data sets as our initial sample). BP network training takes time, especially at the beginning stage. By feeding the training data sets into the BP network over multiple iterations, we expect to get gradually better predicted QA results. As shown in Figure 16, the first iteration (table on the left in Figure 16) shows an average difference of 45.09 and an average difference ratio of 19.96. After 10 iterations, the training results show an average difference of 5.82 and an average difference ratio of 3.19. From this comparison, we can see about an 84% improvement of the predicted QA results after 10 iterations of training. Since we continuously feed more training data sets to our BP network, better predicted QA results will be produced. As we must make operational decisions with a fast turnaround time, we declare the network operational after an 80% improvement.
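The quoted improvement follows from the two averages:
\[ \frac{19.96 - 3.19}{19.96} \approx 0.84 = 84\%. \]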
Our work combined deep learning neural networks with a statistical quality control module to make fast-turnaround predictions of MT QA results. So far, this method has shown great potential to reduce the field geophysicist's workload and significantly improve the efficiency of field operations in terms of data quality feedback. As part of our Cloud acquisition workflow, we continue to train our BP network to produce faster and better predicted QA results. Meanwhile, we are also adding fully automated features to deliver these QA results to the field crew's fingertips simultaneously. This will significantly improve the field operation decision-making process, make field logistics more efficient, and thus reduce operational cost effectively.
We have discussed using AI in data delivery and quality assurance; both tasks are very time consuming and costly. We also outlined the roadmap for effective implementation in 3D modeling. As an illustrative example, we looked at the value that Cloud-enabled, AI-supported technology provides through its enabling capability in predictive optimization and in performing repeat time-lapse surveys. During the energy transition, a high-value target for GHG reduction is heavy oil (HO) EOR applications. We have taken a typical HO example, which includes a shallow reservoir that is of near-term concern but also environmentally sensitive due to its shallow depth. Without the above-described technology and workflows, this application would be extremely difficult to accomplish. With it, we can apply the new technologies explained below.
The production of HO can release up to 715 kg CO2e/bbl from upstream to downstream, of which 10 kg CO2e/bbl is released when thermal methods are applied. Monitoring the injected steam becomes extremely critical to lowering emissions. Resistivity can change by up to 150% for a temperature increase of 100 °C.
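The paper does not state the underlying relation; a common first-order approximation for the temperature dependence of brine resistivity (Arps' formula, with T in °C) is
\[ \rho_w(T_2) \approx \rho_w(T_1)\,\frac{T_1 + 21.5}{T_2 + 21.5}, \]
so heating from, e.g., 25 °C to 125 °C lowers the brine resistivity to roughly a third of its value, a change of the same order of magnitude as the one quoted above.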
Several feasibility studies were carried out in the last decade [38,39], and we present below selected results from one of them in this context.
Figure 17 shows the geologic section of a representative heavy oil field. Depending on the resistivity contrast between the fluids in the reservoir, we will use different EM sensors. Simultaneous microseismic data are also acquired to monitor the pressure changes occurring in the reservoir due to the steam injection on the one hand, and to observe potential breaches of the reservoir seals on the other. In the figure we show the resistivity model of the formations and the saturating fluid description. Additionally shown is the CSEM survey configuration. We did not show the anisotropic model, to avoid distracting from the purpose here of improving image focus and delivering the data in near real time.
We carried out a simulation of the EM responses using a 3D finite-difference method [40], applying the feasibility concepts described above and in [21,41]. In Figure 18 we show the synthetic response of the offset-corrected horizontal electric field (Ex) for the isotropic (left) and the anisotropic case (right). We assumed here a vertical resistivity Rv = 8 ohm-m and a horizontal resistivity Rh = 4 ohm-m. Clearly, as expected, the anisotropy adds significantly to the anomaly. Henceforth, we will only use the anisotropic model. We can also see from this figure that the anomalous behavior, or the difference between the flooded and unflooded reservoir, is largest near the reservoir (outlined at the bottom of the graphs). This suggests that if we can extract this difference from the data using some type of transformation that only shows the variation between adjacent receivers, we might just see it [41]. This is commonly done in focused borehole measurements and can be translated to surface electric fields [42,43].
Figure 19 shows two other electromagnetic field components used to achieve this anomaly enhancement: one is the measurement of the vertical electric field, and the other the measurement of the time derivative of the horizontal magnetic field. The vertical electric field measurement would be implemented via a shallow vertical borehole tool buried in 20 m deep boreholes. This leads directly to the benefits of using artificial intelligence and Cloud services.
The electromagnetic components and methods discussed are most sensitive to thin horizontal resistors/reservoirs with hydrocarbon saturation. The key value proposition is reducing operations cost (firstly the injected steam cost, secondarily an EOR recovery factor improvement of 20–30%). This means we need the data very soon after acquisition to improve the steam injection operations plan. The highest value lies in the data turnaround time approaching real time. If we assume a large number of receiver installations in an HO steam flood operation and the necessity to get results at the latest 24 h later, the steps are:
  • Immediate data transmission to the Cloud from the receivers and from the noise-compensating reference receivers.
  • Real-time QA fed to the time-section interpretation (primary display transformations can be done in near real time with operator control).
  • The client receives decision-making steam maps to produce optimized operation variations.
Clearly, after some time the operator interaction can be aided by deep learning.

7. Conclusions

Using artificial intelligence in the geophysical reservoir monitoring workflow helps to bring complex decisions closer to the data acquisition operations. The biggest value is in faster operations and in making decisions at a time when they can still impact acquisition data quality. This enables wider and newer areas of application of the technology that are not possible without AI.
We selected electromagnetic geophysical equipment for the application of fluid monitoring, which is important for the energy transition. Imaging the reservoir fluids allows us to achieve a 20–40% higher recovery factor, thus reducing the carbon footprint. In the EOR+ scenario we can address the lower carbon footprint of heavy oil production and CO2 injection at the same time, driving this energy resource faster to zero footprint, which is a major contribution to addressing climate change.
Since electromagnetic components depend on very low signal levels, they are also sensitive to noise, and sending the data noise-free to the Cloud is non-trivial but essential to make near real-time operating decisions. We demonstrated this with Cloud-based acquisition and near real-time quality assurance.
Since we do not know the explicit structure and resistivities of the reservoir, we need to acquire all electromagnetic components, as each has a bias toward certain parts of the geoelectric section. Only directional multi-component measurements give a clear description of the anisotropic model.
Over the past two decades we developed the technology from the initial patent concept, through hardware design and manufacturing, field application and operation, to interpretation and value extraction. Parts of the system are in use in well over 20 countries, proving their accuracy and reliability and allowing us to take the technology to the applications proposed here.
While the operational value of including AI in the electromagnetic workflow is the greatest, further value assessment shows that HO applications and fluid monitoring could hardly be done without it. The enabling opportunity value could in the future be larger than all others, and the technical breakthrough could change the game.

Author Contributions

Lead author, concepts, and technology details, K.S.; 3D modeling, S.D.; EOR and GHG research, H.P.; cloud-based quality assurance algorithm, guidance, and architecture, M.S.; cloud-based AI software implementation, X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data supporting this can be found at www.kmstechnologies.com.

Acknowledgments

We acknowledge the support and contribution of T. Hanstein and Y. Martinez throughout the development. KMS Technologies gave us permission to publish this material. A thank you to everyone.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gasperikova, E.; Cumming, W. How geophysics can help the geothermal industry. In SEG Technical Program Expanded Abstracts 2020; Nedorub, O., Swinford, B., Eds.; Society of Exploration Geophysicists: Tulsa, OK, USA, 2020; pp. 3379–3383. [Google Scholar]
  2. Barajas-Olalde, C.; Davydycheva, S.; Hanstein, T.; Laudal, D.; Martinez, Y.; MacLennan, K.; Mikula, S.; Adams, D.C.; Klapperich, R.J.; Peck, W.D.; et al. Using controlled-source electromagnetic (CSEM) for CO2 storage monitoring in the North Dakota CarbonSAFE project. In Technical Program Expanded Abstracts 2021; Society Exploration Geophysicists: Tulsa, OK, USA, 2021. [Google Scholar]
  3. Strack, K.M. Future directions of electromagnetic methods for hydrocarbon applications. Surv. Geophys. 2013, 35, 157–177. [Google Scholar] [CrossRef]
  4. Streich, R. Controlled-source electromagnetic approaches for hydrocarbon exploration and monitoring on land. Surv. Geophys. 2015, 37, 47–80. [Google Scholar] [CrossRef]
  5. Henke, C.H.; Krieger, M.H.; Strack, K.; Zerilli, A. Subsalt imaging in northern Germany using multiphysics (magnetotellurics, gravity, and seismic). Interpretation 2020, 8, SQ15–SQ24. [Google Scholar] [CrossRef]
  6. Strack, K.; Hanstein, T.; LeBrocq, K.; Moss, D.; Vozoff, K.; Wolfgram, P. Case histories of LOTEM surveys in hydrocarbon prospective area. First Break 1989, 7, 467–477. [Google Scholar] [CrossRef]
  7. Strack, K.-M.; Pandey, P.B. Exploration with controlled-source electromagnetics under basalt cover in India. Geophysics 2007, 26, 360–363. [Google Scholar] [CrossRef]
  8. Berre, L.; Morten, J.P.; Baillie, G.; Nerland, E. Experience on controlled-source electromagnetic performance for exploration in Norway. Interpretation 2020, 8, SQ25–SQ37. [Google Scholar] [CrossRef]
  9. Strack, K.; Davydycheva, S.; Hanstein, T.; Smirnov, M. A new array system for multiphysics (MT, LOTEM, and microseismics) with focus on reservoir monitoring. In Proceedings of the 1st International Geo-Electromagnetic Workshop (GEO-EM 2017), West Java, Indonesia, 21–24 February 2017; Volume 1861, p. 20001. [Google Scholar] [CrossRef]
  10. International Energy Agency (IEA). World Energy Outlook 2019 (WEO 2019). 2019. Available online: https://www.iea.org/reports/world-energy-outlook-2019 (accessed on 30 June 2021).
  11. Masnadi, M.S.; El-Houjeiri, H.M.; Schunack, D.; Li, Y.; Englander, J.G.; Badahdah, A.; Monfort, J.-C.; Anderson, J.E.; Wallington, T.J.; Bergerson, J.A.; et al. Global carbon intensity of crude oil production. Science 2018, 361, 851–853. [Google Scholar] [CrossRef] [PubMed]
  12. Gordon, D.; Feldman, J. Breaking down the Barrel: Tracing GHG Emissions through the Oil Supply Chain. 2016. Available online: https://carnegieendowment.org/2016/02/09/breaking-down-barrel-tracing-ghg-emissions-through-oil-supply-chain-pub-62722 (accessed on 30 June 2021).
  13. International Energy Agency (IEA). Storing CO2 through Enhanced Oil Recovery. 2015. Available online: https://www.iea.org/reports/storing-co2-through-enhanced-oil-recovery (accessed on 30 June 2021).
  14. Carlson, M.R. An analysis of the caprock failure at Joslyn. In Proceedings of the 2012 SPE Heavy Oil Conference Canada, Calgary, AB, Canada, 12–14 June 2012. [Google Scholar] [CrossRef]
  15. Jiang, J.; Aziz, A.A.; Liu, Y.; Strack, K.M. Geophysical Acquisition System. U.S. Patent 9,057,801, 11 July 2013. [Google Scholar]
  16. Vozoff, K. The magnetotelluric method in the exploration of sedimentary basins. Geophysics 1972, 37, 98–141. [Google Scholar] [CrossRef]
  17. Esparza, F.J.; Gómez-Trebiño, E. On: “The magnetotelluric method in the exploration of sedimentary basins,” by K. Vozoff (February 1972, Geophysics, 37, pages 98–141). Geophysics 1997, 62, 691–692. [Google Scholar] [CrossRef]
  18. Simpson, F.; Bahr, K. Practical Magnetotellurics; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  19. Chave, A.D.; Jones, A.G.; Mackie, R.L.; Rodi, W.; Evans, R.L.; Ferguson, I.J.; Viljanen, A.; Weidelt, P.; Weiss, C. The Magnetotelluric Method; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
20. Zonge, K.L.; Hughes, L.J. Controlled source audio-frequency magnetotellurics. In Electromagnetic Methods in Applied Geophysics: Application, Parts A and B; Nabighian, M.N., Ed.; Society of Exploration Geophysicists: Tulsa, OK, USA, 1991; Volume 2, pp. 713–810. [Google Scholar]
  21. Strack, K.M. Exploration with Deep Transient Electromagnetics; Elsevier: Amsterdam, The Netherlands, 1992; p. 373. [Google Scholar]
  22. Keller, G.V.; Pritchard, J.I.; Jacobson, J.J.; Harthill, N. Megasource time-domain electromagnetic sounding methods. Geophysics 1984, 49, 993–1009. [Google Scholar] [CrossRef]
  23. Kaufman, A.; Keller, G.V. Frequency and transient sounding methods in geochemistry and geophysics. Geophys. J. Int. 1984, 77, 935–937. [Google Scholar] [CrossRef] [Green Version]
  24. Rykhlinskaya, E.; Davydycheva, S. Method for Marine Geoelectrical Exploration with Electrical Current Focusing. U.S. Patent 8,762,062, 29 December 2011. [Google Scholar]
  25. He, Z.; Liu, X.; Qiu, W.; Zhou, H. Mapping reservoir boundary by using borehole-surface TFEM technique: Two case studies. In SEG Technical Program Expanded Abstracts 2004; Popovici, A.M., Fomel, S., Eds.; Society of Exploration Geophysicists: Tulsa, OK, USA, 2004; pp. 334–337. [Google Scholar]
  26. Strack, K.-M. Integrated Borehole System for Reservoir Detection and Monitoring. U.S. Patent 6,670,813 B2, 30 December 2003. [Google Scholar]
  27. Constable, S. Review paper: Instrumentation for marine magnetotelluric and controlled source electromagnetic sounding. Geophys. Prospect. 2013, 61, 505–532. [Google Scholar] [CrossRef]
  28. Vallas, H.A.; Messatesta, A.; Strack, K. Resistivity-log inversion using a heterogeneous artificial neural network. In SEG Technical Program Expanded Abstracts 1993; Society of Exploration Geophysicists: Tulsa, OK, USA, 1993; Volume 12, p. 1395. [Google Scholar]
  29. Zhang, L.; Poulton, M.M.; Wang, T. Borehole electrical resistivity modeling using neural networks. Geophysics 2002, 67, 1790–1797. [Google Scholar] [CrossRef]
  30. Strack, K.M. Fast 3D Inversion of Electromagnetic Survey Data Using a Trained Neural Network in the Forward Modeling Branch. U.S. Patent 7,574,410, 11 August 2009. [Google Scholar]
  31. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef] [PubMed]
  32. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  34. Dai, W.; Yoshigoe, K.; Parsley, W. Improving data quality through deep learning and statistical models. In Information Technology-New Generations; Latifi, S., Ed.; Springer: Berlin, Germany, 2017; pp. 515–522. [Google Scholar]
  35. Smirnov, M.Y. Magnetotelluric data processing with a robust statistical procedure having a high breakdown point. Geophys. J. Int. 2003, 152, 1–7. [Google Scholar] [CrossRef] [Green Version]
36. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Section 6.5: Back-Propagation and Other Differentiation Algorithms; pp. 200–220. [Google Scholar]
  37. Bárány, I.; Vu, V. Central limit theorems for Gaussian polytopes. Ann. Probab. 2007, 35, 1593–1621. [Google Scholar] [CrossRef] [Green Version]
  38. Passalacqua, H.; Strack, K. Integrated geophysical reservoir monitoring for heavy oil. In Proceedings of the 2016 SPE Heavy Oil Conference and Exhibition, Kuwait City, Kuwait, 6–8 December 2016. [Google Scholar] [CrossRef]
  39. Passalacqua, H.; Davydycheva, S.; Strack, K. Feasibility of multi-physics reservoir monitoring for heavy oil. In Proceedings of the 2018 SPE International Heavy Oil Conference and Exhibition, Kuwait City, Kuwait, 10–12 December 2018. [Google Scholar] [CrossRef]
40. Davydycheva, S.; Druskin, V. Staggered grid for Maxwell’s equations in arbitrary 3-D inhomogeneous anisotropic media. In Three-Dimensional Electromagnetics; Oristaglio, M., Spies, B., Eds.; Society of Exploration Geophysicists: Tulsa, OK, USA, 1999; pp. 119–137. [Google Scholar]
  41. Strack, K.; Davydycheva, S. Using electromagnetics to map lateral fluid variations in carbonates in SE Asia. New Approaches Eng. Res. 2021, 2, 69–79. [Google Scholar] [CrossRef]
  42. Davydycheva, S. Method and Apparatus for Detecting and Mapping Subsurface Anomalies. U.S. Patent 9,891,339 B2, 13 February 2018. [Google Scholar]
  43. Davydycheva, S.; Rykhlinski, N. Focused-source electromagnetic survey versus standard CSEM: 3D modeling in complex geometries. Geophysics 2011, 76, F27–F41. [Google Scholar] [CrossRef]
Figure 1. Workflow of a typical electromagnetic reservoir monitoring project. Below the flow block are the major tasks in each step.
Figure 2. Survey layout for a controlled-source EM (CSEM) system showing four transmitter locations (yellow lines) and three receiver lines (red circles). All electromagnetic components are recorded at each location (after [2]). The yellow circle in the center marks the surface projection of the flooded reservoir below; its outline is projected to the red squares on the sides. The numbers give the Earth model resistivities (Rv and Rh are the vertical and horizontal resistivities, respectively). On the right are the borehole data (gamma ray, GR (green); resistivity, R (black); invasion zone (red); seismic horizon boundaries (cyan)). The blocky curves through the log are the upscaled resistivities that build the anisotropic model; Rv and Rh are marked with magenta and blue lines on the plot.
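The upscaling mentioned in the caption follows the standard layered-medium averaging: horizontal current flow averages layer conductances, while vertical current flow averages layer resistances. A minimal sketch, assuming a simple 1D stack of layers (function and variable names are illustrative, not from the paper):

```python
# Minimal sketch of upscaling a resistivity log into anisotropic block
# resistivities (Rh, Rv) over a depth interval, assuming a 1D layered
# medium. Names are illustrative.

def upscale_anisotropic(thicknesses, resistivities):
    """Return (Rh, Rv) for a stack of layers.

    Rh: current flows parallel to the layering, so conductances add
        (thickness-weighted harmonic mean of resistivity).
    Rv: current flows across the layering, so resistances add
        (thickness-weighted arithmetic mean of resistivity).
    """
    total = sum(thicknesses)
    rh = total / sum(h / r for h, r in zip(thicknesses, resistivities))
    rv = sum(h * r for h, r in zip(thicknesses, resistivities)) / total
    return rh, rv

# Example: three layers (thickness in m, resistivity in ohm-m)
rh, rv = upscale_anisotropic([10.0, 5.0, 20.0], [2.0, 50.0, 5.0])
print(f"Rh = {rh:.2f} ohm-m, Rv = {rv:.2f} ohm-m")  # anisotropy Rv/Rh > 1
```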
Figure 3. Pore space model based on Carlson [14] showing the grains that build the rock skeleton and the pore space. The fluids reside in the pore space, and the reservoir boundaries are defined by the grain-to-grain contacts indicated by the green line. When a grain-to-grain contact breaks, a seismic signal is emitted. Before that, the stress and strain on the pore fluid cause fluid mobility, which makes electrons move, dramatically reducing the resistivity of the pore fluid and thus the bulk resistivity. On the right of the figure is an example of an EM signal from an earthquake in India, as recorded on all five components of an EM system.
Figure 4. Feasibility workflow of a typical electromagnetic reservoir monitoring project, including an on-site noise test. The major tasks are listed in each block.
Figure 5. Left: noise spectra from on-site noise tests for a CSEM monitoring survey (after [2]); the different curves represent different electromagnetic field components. Right: voltage responses for the magnetic field (dBz/dt) calculated for the 3D model in Figure 2, with the noise spectra after filtering superimposed (after [2]). The black horizontal lines mark different transmitter current switching periods.
Figure 6. Acquisition workflow of a typical CSEM reservoir monitoring project. Quality assurance decisions are marked with red (NO GO) and green (GO) dots. The data are sent directly from the instrument to the Cloud.
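The GO/NO GO gating in Figure 6 reduces, in code form, to a retry loop around a QA check. A minimal sketch (all function names are illustrative placeholders, not the authors' software):

```python
# Minimal sketch of the GO / NO GO gating in Figure 6: each record is
# QA-checked and either uploaded directly from the instrument to the
# Cloud (GO) or re-acquired (NO GO). All names are illustrative.

def acquire_with_qa(acquire, qa_check, upload, max_retries=3):
    for _ in range(max_retries):
        record = acquire()
        if qa_check(record):      # green dot: GO
            upload(record)        # straight to the Cloud, no local detour
            return record
        # red dot: NO GO -> repeat the measurement
    raise RuntimeError("QA failed after retries; flag the site for review")
```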
Figure 7. Noise density of various land sensors (fluxgate and induction coils) compared with an earthquake prediction sensor and the Earth’s magnetic field noise density.
Figure 8. Photographs of the various electromagnetic sensors and acquisition systems in field settings. At the top left is an electrode used for the electric field measurements. At the top center is the acquisition system near a power plant, followed at the top right by a closer view of the system, web access box, and battery in a field container. At the bottom right are several systems during a test with three-component seismic geophones connected. In the center is a magnetic field sensor (induction coil), and at the bottom left is a magnetic field sensor implemented as a large multi-turn air loop.
Figure 9. Field images of various components of the transmitter equipment. At the top left is the power connection of the 150 kVA transmitter switch box (back view); its front view in the field is shown at the bottom center. At the top center are pictures of the electrode pits where the current injection electrodes are buried. At the top right is a view of the 150 kVA generator and observer trailer (the switch box is inside), used for continuous 24 h operation over 5 weeks, partly at minus 20 °C. An inside view of the trailer is at the bottom right. On the left side is a transmitter site set up as a camp in a tropical oil field.
Figure 10. Process flow of electromagnetic data directly to the Cloud (top half) and a data example illustrating the noise improvement achieved with a noise-free web access box (bottom half). At the top left is the acquisition system with the web access box (yellow box), which is placed 20 m away from the sensitive magnetic field sensor shown in the pictorial flow diagram below the acquisition unit. From the web access box (Wi-Fi interface in the figure) the data are sent to the Cloud via cellular network, wireless connection, or wired local area network (LAN). The plots below show the frequency spectra of data recorded with and without the web access box, close to the magnetic field sensor (right side) versus away from the sensor.
Figure 11. Inversion flow diagram separating the forward modeling from the model adjustments and the Cloud interaction. The input comes from the DATA layer, and the output goes to the IMAGE layer of the Cloud.
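The separation shown in Figure 11, a forward-modeling branch feeding a model-adjustment branch between the Cloud's input and output layers, can be sketched as a generic loop. This is an assumed least-squares-style skeleton, not the authors' implementation; the DATA/IMAGE names follow the caption, and everything else is illustrative:

```python
import numpy as np

def invert(observed, model, forward, update, tol=1e-3, max_iter=500):
    """Generic inversion loop with the forward-modeling branch and the
    model-adjustment branch kept separate, as in Figure 11."""
    for _ in range(max_iter):
        predicted = forward(model)                 # forward-modeling branch
        residual = observed - predicted
        if np.sqrt(np.mean(residual ** 2)) < tol:  # RMS data misfit
            break
        model = update(model, residual)            # model-adjustment branch
    return model

# Cloud interaction (illustrative placeholders):
#   observed = cloud.get("DATA")      input from the DATA layer
#   image    = invert(observed, start_model, forward_3d, adjust)
#   cloud.put("IMAGE", image)         output to the IMAGE layer

# Toy usage: recover a two-parameter linear "response" by gradient steps.
x = np.linspace(0.0, 1.0, 20)
observed = 2.0 * x + 1.0
fwd = lambda m: m[0] * x + m[1]
adj = lambda m, r: m + 0.5 * np.array([np.mean(r * x), np.mean(r)])
print(invert(observed, np.zeros(2), fwd, adj))   # approaches [2.0, 1.0]
```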
Figure 12. Multi-parameter error surface illustrating how the initial guess model moves across the error surface towards the global solution. If strict convergence criteria are followed, the search can get stuck around a local minimum. The axes represent only two of the model parameters.
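Figure 12's point about local minima is easy to reproduce. Below is an illustrative two-parameter error surface with two minima, where plain gradient descent ends up in the local or the global minimum depending only on the initial guess (all values are synthetic):

```python
import numpy as np

# Illustrative error surface with a local and a global minimum along one
# parameter axis (synthetic, for the Figure 12 argument only).
def error(p):
    x, y = p
    return (x**2 - 1.0)**2 + 0.3 * x + y**2

def grad(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0) + 0.3, 2.0 * y])

def descend(p0, lr=0.02, steps=2000):
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p -= lr * grad(p)      # strict downhill steps only
    return p

print(descend([ 1.5, 1.0]))  # ~( 0.96, 0): stuck in the local minimum
print(descend([-1.5, 1.0]))  # ~(-1.04, 0): the global minimum (lower error)
```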
Figure 13. Comparison of magnetotelluric (MT) data processed using an individual site alone and including data from a remote reference site delivered via the Cloud. The top rows show the apparent resistivities and the bottom rows the phase responses. The solid lines are synthetic curves, and the dots are the field data. To the right of the data plots are the inverted Earth models (red lines), derived starting from the 3D anisotropic model; they are displayed together with the upscaled borehole log (dashed black line).
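The remote-reference processing behind Figure 13 replaces local magnetic auto-powers with cross-powers against the remote site, so noise that is uncorrelated between the two sites averages out. A minimal single-frequency sketch, assuming the conventional remote-reference impedance estimator (names and test values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
MU0 = 4e-7 * np.pi  # magnetic permeability of free space

def impedance_rr(E, H, Hr):
    """Remote-reference impedance estimate at one frequency.

    E, H: local electric/magnetic Fourier coefficients, complex (n, 2).
    Hr:   remote magnetic coefficients, complex (n, 2).
    Returns the 2x2 impedance Z with E ~ H @ Z.T; noise uncorrelated
    with Hr cancels in the cross-powers.
    """
    return (E.T @ Hr.conj()) @ np.linalg.inv(H.T @ Hr.conj())

def apparent_resistivity(Z, freq):
    """Element-wise apparent resistivity |Z|^2 / (omega * mu0)."""
    return np.abs(Z) ** 2 / (2.0 * np.pi * freq * MU0)

# Toy check: noise on the local magnetics only is rejected.
n = 5000
Zt = np.array([[0.0, 2.0], [-2.0, 0.0]])                  # "true" impedance
H = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
E = H @ Zt.T
Hn = H + 0.5 * rng.normal(size=(n, 2))                    # noisy local site
Hr = H + 0.01 * rng.normal(size=(n, 2))                   # remote reference
print(np.round(impedance_rr(E, Hn, Hr), 2))               # close to Zt
```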
Figure 14. Network diagram of the BP (backpropagation) artificial network used to predict QA results, the input reference, and the deep learning feedback loop. There are three input nodes (left): Data 1 comprises GPS coordinates, time, and altitude; Data 2 the gain settings for each unit and channel; and Data 3 the operator’s name. The network has two hidden layers: QA 1 covers robust processing parameters (root mean square (RMS) error, phase behavior, and apparent resistivity), and QA 2 covers 1D inversion results such as the root mean square fit (RMSf) error. The outputs are the predicted QA results, which are compared with the actual QA results. The input layer reference generates quality control parameters from the field data and defines the upper control limit (UCL) and lower control limit (LCL). Predicted QA results from the BP network are checked against the UCL–LCL envelope; if a result falls outside, the bad data set is removed and a new predicted QA result is generated by the BP network.
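A minimal sketch of the topology the caption describes (three input nodes, two hidden layers, one predicted QA output), trained by plain backpropagation. The layer widths, tanh activation, and learning rate are assumptions, since the paper does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Topology from Figure 14: 3 inputs (Data 1-3), two hidden layers
# (QA 1, QA 2), one output (predicted QA result). Widths (8, 8),
# tanh activation, and learning rate are illustrative assumptions.
sizes = [3, 8, 8, 1]
W = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes, sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Return the activations of every layer for input x (shape (3,))."""
    acts = [x]
    for Wi, bi in zip(W, b):
        x = np.tanh(x @ Wi + bi)
        acts.append(x)
    return acts

def train_step(x, target, lr=0.05):
    """One backpropagation update on a single (input, target) pair."""
    acts = forward(x)
    delta = (acts[-1] - target) * (1.0 - acts[-1] ** 2)  # output-layer error
    for i in reversed(range(len(W))):
        if i:  # propagate the error before updating this layer's weights
            delta_prev = (W[i] @ delta) * (1.0 - acts[i] ** 2)
        W[i] -= lr * np.outer(acts[i], delta)
        b[i] -= lr * delta
        if i:
            delta = delta_prev

# Toy usage with normalized inputs and target in (-1, 1):
x = np.array([0.2, -0.5, 0.1])      # encoded Data 1..3 (illustrative)
for _ in range(200):
    train_step(x, np.array([0.8]))
print(forward(x)[-1])               # approaches [0.8]
```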
Figure 15. Quality control module for the BP artificial network. A central limit algorithm is created from selected sample training data; it is used to remove outliers from the predicted QA results and feed them back to the BP network to improve training. The table on the left lists the 37 selected training data sets. The text and formulas in the middle give the calculated values of the statistical quality control model. The diagram at the bottom explains how this quality control model removes outliers.
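The statistical control model of Figure 15 can be sketched with the conventional mean ± 3σ control limits (the exact statistic used by the authors may differ); predictions outside the UCL–LCL envelope are removed and the remainder fed back for training:

```python
import numpy as np

rng = np.random.default_rng(1)

def control_limits(samples, k=3.0):
    """Assumed mean +/- k*sigma control limits; returns (LCL, UCL)."""
    mu = float(np.mean(samples))
    sigma = float(np.std(samples, ddof=1))
    return mu - k * sigma, mu + k * sigma

def split_outliers(predicted, lcl, ucl):
    """Separate predicted QA results into kept values and removed outliers."""
    kept = [p for p in predicted if lcl <= p <= ucl]
    removed = [p for p in predicted if not (lcl <= p <= ucl)]
    return kept, removed

# Toy usage with 37 training values, matching the caption's table size:
training = rng.normal(50.0, 5.0, 37)
lcl, ucl = control_limits(training)
kept, removed = split_outliers([48.0, 52.0, 95.0], lcl, ucl)
print(f"LCL={lcl:.1f}, UCL={ucl:.1f}, removed={removed}")  # 95.0 is removed
```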
Figure 16. Summary comparison of the BP artificial network training for quality assurance of magnetotelluric data. The training results of the first iteration are on the left, with an average difference of 45.09 and an average difference ratio of 19.96. The training results after 20 iterations are on the right, with an average difference of 5.82 and an average difference ratio of 3.19, showing that the BP network is being trained and improves with deep learning.
Figure 17. Simplified geologic model of a heavy oil (HO) reservoir. The HO section is the reservoir, shown here as salt-water saturated.
Figure 18. Horizontal electric field response as a function of distance from the transmitter (left) using the isotropic version of the model from Figure 17. On the (right), anisotropy is added to the model resistivities (modified after [41]).
Figure 19. Comparison of the two types of measurement that lead to the focusing methodology. On the (left) is the vertical electric field measured in a shallow borehole (leading to the shallow borehole tool described in the text). On the (right) are the anomalous responses at various offsets for the horizontal magnetic field time derivative (leading to differential measurements).
Table 1. Total greenhouse gas (GHG) emissions by oil type (after [12]).

Oil Type | Upstream (kg CO2e/bbl) | Midstream (kg CO2e/bbl) | Downstream (kg CO2e/bbl) | Credit/Debit (kg CO2e/bbl) | Total (kg CO2e/bbl)
Light (well managed) | 35 | 10 | 435 | −5 | 475
Heavy Oil (HO) | 200 | 85 | 450 | −20 | 715
Light (gassy) Oil | 375 | 25 | 460 | −110 | 750
Extra-Heavy Oil | 190 | 50 | 515 | +15 | 770

CO2e refers to the number of units of CO2 emissions with the same warming potential as one unit of a different greenhouse gas. CO2e is obtained by multiplying the amount of GHG by its global warming potential; e.g., one kg of methane is equivalent to 25 kg of CO2e.
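As a quick consistency check, each Total equals the sum of the four stage columns, and the footnote's CO2e conversion is a simple multiplication by the global warming potential. A minimal sketch using the table's own values:

```python
# Consistency check of Table 1: Total = Upstream + Midstream + Downstream
# + Credit/Debit, and CO2e = GHG amount x global warming potential (GWP).
rows = {
    "Light (well managed)": (35, 10, 435, -5),
    "Heavy Oil (HO)":       (200, 85, 450, -20),
    "Light (gassy) Oil":    (375, 25, 460, -110),
    "Extra-Heavy Oil":      (190, 50, 515, 15),
}
for oil, stages in rows.items():
    print(f"{oil}: {sum(stages)} kg CO2e/bbl")   # 475, 715, 750, 770

methane_kg, gwp_methane = 1.0, 25.0              # GWP value from the footnote
print(methane_kg * gwp_methane, "kg CO2e")       # 25.0 kg CO2e
```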