1. Introduction
Becerik-Gerber et al. define Human–Building Interaction (HBI) as the dynamic interplay between humans and intelligence within built environments. In HBI, intelligent systems integrated in both residential and commercial settings assist building occupants with infrastructure upkeep and help to ensure safe habitability [
1]. Recent advancements in sensors, computer chips, Internet of Things (IoT) devices, robotics, and artificial intelligence have opened new opportunities for the development of advanced HBI systems. This is apparent in the FDNY's (New York City Fire Department's) plans to use Boston Dynamics' Spot® Dog-Bot in firefighting [2], Amazon's introduction of the Astro household robot, and the global sale of 1.4 million Ring Video Doorbells [
3]. Smart home devices, in particular, have taken off in popularity as they present several use cases such as optimizing energy management, assisting in the health monitoring of the elderly, and serving as central sensor hubs to notify absent homeowners of anomalous events [
4]. In fact, there are approximately 14.7 billion Machine-to-Machine (M2M) devices, a subset of IoT devices, of which 48 percent are used for connected home applications [
5].
Despite the wide-scale adoption of home maintenance technology, indoor infrastructure problems persist. Rising levels of urbanization, urban-specific evolutionary pressures [
6], and alterations in climate patterns due to global warming are expected to cause rodent populations to surge [
7]. These trends are particularly problematic given the health risks that rodents present, both within and outside of proximity to humans [
8].
Another concern is air quality. A total of 26.6 billion cubic feet of methane gas leaks were reported to the government between 2010 and October 2021, with several incidents occurring in domestic residences [
9]. Mold remediation costs homeowners an average of USD 2254 per incident [
10]. In 2020, household air contamination resulting from the partial combustion of kerosene and solid fuel used in cooking was responsible for an estimated 3.2 million deaths globally [
11]. In the context of home plumbing, 90 gallons of water are wasted daily in houses with leaking problems [
12].
The homeowner’s ability to effectively maintain a house depends on timely and accurate information, based on which they can promptly implement repairs, upkeep, and pest eradication. Aging populations, rising material costs, and reduced availability of the maintenance workforce place additional pressure on the homeowner to seek help through modern technology. The rise of technology-based home maintenance activities follows trends in industry and may be termed Home Maintenance 4.0 [
13,
14].
Among the main sub-components of Home Maintenance 4.0 is the integration of IoT protocols to extract information about home structures from sensors. A good example of this is the low-cost, microcontroller-based measurement of the thermal transmittance parameter for building envelopes. Studies have investigated the effects of sensor position on the accuracy of parameter measurement [
15,
16].
Furthermore, the utility and application of robots in mapping and assessing hazardous environments have been investigated widely [
17]. In 2023, Sun et al. [
18] utilized the Gmapping algorithm for SLAM to use with an indoor patrol robot. Another commonly used SLAM algorithm is Hector mapping, which is useful in cases that lack odometry data [
19]. Advancements in microrobot technology reduce the challenges of investigating areas that are inaccessible to conventional robots. Pests, such as small rodents and insects, often hide in tight spaces that new microrobot modules can now access and analyze [
20].
Anyone who lives in an old wood-frame house can attest that many events, including those related to maintenance, have recognizable acoustic signatures. Vibration and acoustic sensors tied to intelligent signal processing are a natural extension of human-based acoustic sensing and cognition [
21]. Chennai Viswanathan et al. [
22] used deep learning methods for the identification of faults in pumps, and in 2019, Guo et al. [
23] used direct-write piezoelectric transducers to monitor ultrasonic wave signals and analyze structural health. This method detects defects in pipe structures. Moreover, piezoelectric sensors connected to an Arduino microcontroller board measured the energy available to be harvested from rainfall [
24]; a similar procedure could be replicated to detect water droplets from a leaky or running faucet. Water droplet measurements can also be used to minimize needless water waste in appliances like toilet tanks [
12].
Emerging Augmented Reality (AR) technology has been shown to assist maintenance professionals in performing their duties. AR assistance can lead to greater efficiency compared to operators working without it, particularly in preventative maintenance and repair [
25]. Additionally, AR systems can offer remote guidance to amateur personnel for routine maintenance tasks [
26], reducing training time and providing superior effectiveness when compared to Virtual Reality (VR) in multi-level and complex maintenance tasks [
27]. A variety of interfaces have been developed for AR, such as those used in equipment maintenance and diagnostics [
28], the intuitive teleoperation of bimanual robots [
29], and steering the MARSBot microrobot for the inspection of tight spaces and Unistrut channels [
30]. AR platforms offer a variety of applications such as interacting with RFID [
31] and the inspection of hard-to-reach areas using robots in the Structural Health Monitoring (SHM) industry [
32].
Despite efforts to advance the state of the art in cyber–physical systems, comprehensive research on incorporating such systems into human–building interaction is limited. The complexity of these technologies may explain the lack of Do-It-Yourself (DIY) use in homes, as they are designed for maintenance professionals rather than homeowners. Nonetheless, sensing technologies are becoming low-cost, and IoT devices are becoming widely available in homes.
This paper addresses the common maintenance problems that typical homeowners may encounter, such as pump failure, pest infestation, foundation damage, water leakage, and mold contamination, depicted in
Figure 1, and employs cyber–physical approaches using ML and AR to provide user-friendly feedback to the homeowners.
AR and VR headsets are becoming readily available for the typical household and have largely been used for entertainment purposes [
33]. The proposed systems described herein demonstrate the potential for expanding these devices from pure entertainment to true multipurpose devices, opening new markets for manufacturers and developers as well as opportunities for consumers. Considering that the interfaces in these devices are becoming more user-friendly, combining these systems with the described sensor applications will create novel home maintenance solutions.
The main contributions of this paper are outlined as follows:
An introduction of the Home Maintenance 4.0 framework for technical innovation to support home maintenance.
The integration of custom home maintenance sensors into a wireless network, with machine learning analysis of conditions, interacting with humans and teams of humans through augmented reality interfaces.
The utilization of a Quadruped Robot Dog (QRD) to inspect confined spaces for air quality and provide mobile LiDAR-based mapping and geometric configuration assessment of structures.
The novel integration of ESP32-CAM, battery, and HEXBUG devices that crawl into tight spaces, such as ceiling and wall voids, to provide wireless first-person views of conditions.
The provision of internet and database links to users for potential maintenance remedies and parts suppliers.
Assistance to homeowners, via the AR interfaces, in locating missing maintenance tools or objects obstructing narrow passages.
For these purposes, a variety of wireless hardware and software solutions are presented for monitoring and repairing these maladies. In the proposed Home Maintenance 4.0 framework for human–building interaction, the architecture includes several layers, depicted in
Figure 2. The first layer is the human user or homeowner. The second layer consists of the devices users can interact with to access the technologies. These devices could include HoloLens 2 by Microsoft using AR, personal computers, electronic tablets, and smartphones, as displayed in
Figure 2. The third layer comprises a network access point, which could be the homeowner's local Wi-Fi or a mobile hotspot; an edge processing unit for machine learning applications and simulcast; and hardware devices consisting of robots, microrobots, and sensors used to inspect or monitor the home environment. The hardware implementation includes using robots and microrobots to monitor structural damage and mold contamination in confined spaces and to detect leaky faucets, pump failures, and pest infestation. The fourth layer is the application of the framework to solving the common home structure issues displayed in
Figure 1, which aims for maintenance, activities of daily living, and safety. A common platform for all home maintenance applications has several advantages: a main processing unit that applies machine learning to all monitored visual data for different purposes; intuitive monitoring interfaces that incorporate advanced inspection hardware; and universally accessible data to prevent catastrophic failures and provide maintenance guides for sustaining structures. Connecting to the sensors and robots over a private local network helps homeowners sustain their activities of daily living with ease while maintaining a safe environment for themselves. A key factor is minimizing the placement of humans in hazardous situations.
The rest of the paper is arranged into the following sections: In
Section 2, the theories, algorithms, and methods of using quadruped robot dogs, microrobots, AR headsets, and sensors for monitoring and detecting defects in structures are described. In
Section 3, various experiments and functioning tests using the methods described are designed to demonstrate the effectiveness of the proposed system in inspection and maintenance problems. In
Section 4, the results, comparison of the data, and functions of each system in tests and experiments are provided to verify the application of the proposed framework. In
Section 5, the results and applications of the platform for human–building interaction in maintenance are discussed. Finally, in
Section 6, a summary of the goals and main achievements of the study is provided.
2. Materials and Methods
A set of various sensor technologies, including some mounted on robots, were connected through a wireless local network. Machine learning and custom dashboard interfaces pass image and environmental data to humans through a facile interface. The following aims to elaborate on the human–technology interactions depicted in
Figure 1 by providing an overview of the proposed methods, equipment, and technology solutions incorporated in the platform.
2.1. Quadruped Robot Dog and Networked External Sensor Circuit for Air Quality Monitoring
The PuppyPi Pro by Hiwonder displayed in
Figure 3 is a QRD with LiDAR that was used in a series of experiments as a means of data acquisition for locations that are remote, confined, and generally inaccessible to humans. This particular QRD is powered by a Raspberry Pi 4B running the Robot Operating System (ROS).
Air quality and environmental data, including the spatial positioning of the QRD, are examined through readings collected from several sensors mounted on an 82 mm × 53 mm (L × W) half-sized breadboard attached to the front of the QRD. The MQ-9 CO and Combustible Gas sensor paired with the MQ-135 Gas sensor, calibrated to measure excess CO2 concentrations, monitor gas levels. A DHT-11 sensor tracks humidity and temperature, and a Keyes KY-018 Photoresistor Light Detector Module detects the presence and intensity of light levels in the surrounding environment. These data are aggregated using an Arduino Nano 33 IoT, a microcontroller board equipped with an LSM6DS3 3-axis digital accelerometer and gyroscope Inertial Measurement Unit (IMU), providing movement and orientation details.
The Arduino Nano 33 IoT includes the Nina W102 uBlox module for wireless communication via Bluetooth and single-band 2.4 GHz Wi-Fi. In this case, updated sensor readings are transmitted using the User Datagram Protocol (UDP) to a locally hosted Python server in JSON string format at a frequency of 2 Hz. An HTC 5G Hub serves as the Wireless Local Area Network (WLAN) access point. On reception, sensor data packets are parsed, formatted, and inserted into a locally hosted InfluxDB time-series database as individual, timestamped points. The InfluxDB server connects to a locally hosted Grafana Dashboard, which supports real-time and interactive data visualization. The data are simultaneously written to a CSV file in a separate directory as a backup to the database, as shown in Figure 4.
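As a concrete illustration of this pipeline, the following Python sketch listens for the 2 Hz UDP packets, writes each reading to InfluxDB, and mirrors it to a CSV backup. It is not the paper's code: the port, bucket, organization, token, and field names are all placeholder assumptions.

```python
# Hypothetical base-station listener; names such as BUCKET, ORG, and
# TOKEN are placeholders, not values from the paper.
import csv
import json
import socket
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

UDP_PORT = 5005          # assumed port; the paper does not specify one
BUCKET, ORG, TOKEN = "home_maintenance", "local", "dev-token"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", UDP_PORT))

client = InfluxDBClient(url="http://localhost:8086", token=TOKEN, org=ORG)
write_api = client.write_api(write_options=SYNCHRONOUS)

with open("sensor_backup.csv", "a", newline="") as backup:
    writer = csv.writer(backup)
    while True:
        payload, _ = sock.recvfrom(1024)       # one JSON string per packet
        reading = json.loads(payload)          # e.g., {"temp_c": 21.5, ...}
        stamp = datetime.now(timezone.utc)

        # One timestamped point per packet, mirroring the 2 Hz sensor feed.
        point = Point("qrd_air_quality").time(stamp)
        for field, value in reading.items():
            point = point.field(field, value)
        write_api.write(bucket=BUCKET, record=point)

        # Simultaneous CSV backup, as described above (field order assumed
        # consistent across packets).
        writer.writerow([stamp.isoformat(), *reading.values()])
```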
Both MQ sensors have two ways to convey the measured gas parameters: an analog output voltage that can be remapped to PPM values and an active-low digital output pin which triggers if the reference voltage exceeds a level set by a potentiometer included in the breakout board. This project focuses on the digital output from the MQ-135 to detect CO2 values exceeding 1000 ppm, a documented threshold at which declines in cognitive faculties are noticeable in humans after 2.5 h of exposure [34].
As the Arduino Nano 33 IoT operates at 3.3 V, two additional potentiometers act as voltage dividers to step the MQ-9 and MQ-135 sensor outputs down from 5 V to 3.3 V. The Arduino Nano 33 IoT is programmed to transmit a warning to the Python server on the falling edge of the digital output of the MQ-135, indicating that the sensor reading exceeds approximately 1000 PPM.
2.2. Quadruped Robot Dog Floor Mapping with Infrastructural Acoustic Analysis and Low-Light Visual Monitoring
In order to monitor the terrain within a building, the original floor plan is used as a navigational aid for the robot, guiding it through a room filled with complex obstacles. For this procedure, the Hector SLAM algorithm [
35] completes the mapping. To use the maps for navigation, RViz collects the data and displays them on the PC. RViz is a 3D visualization tool in ROS Noetic [
36] that runs as part of an Ubuntu image on the VMware Workstation Pro on a PC. The robot navigates around the building with its PS2 wireless controller and sends the mapping data back with ROS Noetic. The PC serves as the Master in the ROS communication system. The developed map is overlaid and compared with the floor plan to survey the building to verify its safety before the human enters the suspected area.
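For illustration, the mapping data can be consumed on the PC with a short rospy node. This is a hedged sketch rather than the paper's code; it assumes Hector SLAM's default nav_msgs/OccupancyGrid output on the standard /map topic.

```python
# Minimal rospy listener for the occupancy grid published during mapping.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg: OccupancyGrid):
    # msg.data is a row-major int8 list: -1 unknown, 0 free, 100 occupied.
    rospy.loginfo("map %dx%d cells at %.3f m/cell",
                  msg.info.width, msg.info.height, msg.info.resolution)

rospy.init_node("map_listener")
rospy.Subscriber("/map", OccupancyGrid, on_map)
rospy.spin()
```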
Furthermore, to monitor low-light confined spaces, the QRD also carries a second Raspberry Pi connected to an infrared LED camera and a microphone. In this process, audio data are recorded by connecting to the Raspberry Pi through a VNC viewer and then analyzed using the MATLAB spectrogram function [
37].
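The paper performs this analysis with MATLAB's spectrogram function; an equivalent sketch in Python with SciPy is shown below. The file name and FFT segment length are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("pump_recording.wav")   # hypothetical filename
if audio.ndim > 1:
    audio = audio.mean(axis=1)                   # collapse stereo to mono

# Power spectrogram, plotted in dB to mirror MATLAB's spectrogram view.
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Pump acoustic signature")
plt.show()
```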
2.3. Quadruped Robot Dog Curved Wall Mapping
The above mapping process was repeated with the QRD using LiDAR and the Gmapping algorithm [
38], in which point cloud data were collected through RViz, sent to the curveFitter toolbox in MATLAB, and fit with a second-degree polynomial. This QRD has the advantage of tilting, which enables it to point the LiDAR at the location of interest. A manual evaluation of the wall provided an independent measure of the radius of curvature, using Equation (1) [39] as follows:

R = \frac{H}{2} + \frac{W^2}{8H}     (1)

where H is the height of the bulge as a horizontal projection and W is the width of the wall.
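For illustration, the polynomial fit and Equation (1) can be cross-checked in a few lines of Python. This is a sketch, not the paper's code: the input file and the example H and W values are hypothetical.

```python
import numpy as np

# xy: N x 2 array of LiDAR points along the wall (assumed already
# extracted from the RViz point cloud); units in meters.
xy = np.loadtxt("wall_scan.csv", delimiter=",")   # hypothetical file
a, b, c = np.polyfit(xy[:, 0], xy[:, 1], 2)       # second-degree fit

# Radius of curvature of y = ax^2 + bx + c at its vertex: R = 1 / |2a|.
R_fit = 1.0 / abs(2.0 * a)

# Cross-check with Equation (1): bulge height H and wall width W from a
# manual tape measurement (example values, not from the paper).
H, W = 0.05, 3.0
R_manual = H / 2.0 + W**2 / (8.0 * H)
print(f"LiDAR fit: {R_fit:.2f} m, manual: {R_manual:.2f} m")
```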
2.4. Augmented Reality Monitoring and Object Detection
Microrobots made from low-cost HEXBUG devices mounted with ESP32-CAM boards can broadcast images back to a Raspberry Pi acting as a base station. This paper uses two models: (1) the HEXBUG nano, which moves by vibratory locomotion to access hard-to-reach areas [
40], and (2) the HEXBUG Spider, which uses integrated servo motors to rhythmically propel six legs in a coordinated insect gait. An infrared remote link wirelessly directs the movement and rotation of the spider-like exoskeleton.
A network system, connecting deployed microrobots and a remote operator equipped with a HoloLens 2, can scan infrastructure for specific elements. The HoloLens 2 software is developed using an AR development suite, which consists of the Unity Game Engine, supplemented with the Microsoft Mixed Reality Tool Kit (MRTK) and OpenXR packages [
41]. The coding and editing processes are carried out in the Visual Studio code editor. The ESP32-CAM board is programmed using Arduino IDE by modifying the Espressif’s CameraWebServer code [
42]. In this code, the camera is first initialized; then, using the Wi-Fi configuration, the board connects to the hotspot and starts a camera streaming web server.
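On the receiving side, the resulting stream can be opened like any MJPEG source. The sketch below is illustrative: the IP address is a placeholder, and the :81/stream path is the default of the stock CameraWebServer example, which a modified sketch may change.

```python
import cv2

# Placeholder address for the ESP32-CAM on the local hotspot.
stream = cv2.VideoCapture("http://192.168.1.50:81/stream")

while True:
    ok, frame = stream.read()
    if not ok:
        break                      # stream dropped; the board may have reset
    cv2.imshow("microrobot view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

stream.release()
cv2.destroyAllWindows()
```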
Figure 5 demonstrates a system in which an ESP32-CAM attached to a HEXBUG device captures raw image data that stream either directly to the AR headset user for live monitoring or to a Raspberry Pi 4 Model B. The single-board processor can create a flask server that allows a team of AR headset users to view the live stream simultaneously. The board can also apply ML-based processing for object detection, such as finding missing tools required for maintenance tasks using a model pre-trained on a hammer-screwdriver detection dataset [43] or locating objects obstructing narrow spaces, namely pipes, utilizing YOLOv3 [44], which is trained on the COCO dataset. Networked ML detects objects from accumulated sets of pictures and extracts the detection confidence. The processed data are then transmitted over a private network using TCP network sockets, where the Raspberry Pi is a server and several AR headsets may act as clients. The confidence value and identified objects are then displayed as a correlating string and sprite token in a Heads-Up Display format. A trained ML algorithm can identify specific targets, and viewing in an AR format allows for direct real-time human interaction.
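A minimal sketch of such a flask relay is shown below; the upstream camera URL and port are placeholders, and a production version would need reconnection handling and per-client buffering.

```python
import cv2
from flask import Flask, Response

app = Flask(__name__)
# Placeholder ESP32-CAM address; the Pi re-serves this single upstream
# stream so several headsets can watch at once.
upstream = cv2.VideoCapture("http://192.168.1.50:81/stream")

def mjpeg():
    # Re-encode each upstream frame as a JPEG part of an MJPEG response.
    while True:
        ok, frame = upstream.read()
        if not ok:
            continue
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
               + jpg.tobytes() + b"\r\n")

@app.route("/stream")
def stream():
    return Response(mjpeg(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, threaded=True)
```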
2.5. QR Code Microrobot Selection
Pipes, ventilation ducts, crawl spaces, and other infrastructure with confined spaces are challenging to monitor and inspect. AR algorithms use image features, known as anchors, to ensure that objects appear to remain in the same position and orientation in space. Using markers such as QR codes or checkerboards as anchors is a viable method of enabling AR devices with limited processing power to quickly overlay detected 3D models in the real world with respect to the anchors. This requires embedding microcomputers in the network with the ability to detect, locate, and read QR codes.
QR codes are printed patterns that convey information through cameras that read and decode the image. In this application, as the microrobots largely maneuver in confined spaces not visible to the operator, QR codes placed in convenient locations provide versatile transmitters of information. They can either be used for robot or sensor selection via a URL containing the local network IP address or offer repair instructions via self-contained information. The built-in QR code detection in HoloLens has range limitations which prevent it from being effectively used in our microrobot selection prototype. Therefore, to address this issue, a custom QR code detection algorithm was designed to access information from the microrobots, Raspberry Pi, and other target devices via the URL of the local network. The core of this QR code detection is Harris corner detection [
45], which can be described as

E(u, v) = \sum_{x,y} w(x, y)\,[\,I(x + u,\, y + v) - I(x, y)\,]^2 \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}     (2)

where
M = \sum_{x,y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
represents the second moment matrix of image gradients at a specific pixel location (x, y). w(x, y) is a window function that is applied to a group of pixels surrounding a specific pixel in an image. E(u, v) is the shift intensity, while I is the intensity.
The window function weighs each pixel in the group based on the distance from the center pixel. The purpose of the window function is to ensure that the response function used in the Harris corner detection is sensitive to small variations in image intensity and to diminish the effects of noise and other artifacts. This QR code detection application uses the Gaussian window function. It assigns greater weight to pixels that are closer to the central pixel and less weight to pixels that are farther away.
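For reference, a Gaussian window centered on a pixel (x_0, y_0) commonly takes the form below; the paper does not state its parameters, so the width \sigma is left unspecified:

w(x, y) = \exp\!\left( -\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2} \right)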
Shift intensity quantifies the variation in pixel intensity that takes place when a pixel is moved in a particular direction. In this case, intensity refers to the brightness or darkness level of a pixel in an image. Equation (2) measures the intensity of the image within small windows surrounding each pixel. Shifting two windows in a specified direction and then calculating the difference between the intensities of the two windows produces the shift intensity. For nearly constant patches, E(u, v) is near zero; for very distinctive patches, E(u, v) is larger. In a QR code image, Equation (2) therefore provides a suitable metric for picking patches with large E(u, v).
The Harris detection kernel, which finds pixels with large local neighborhood intensity gradients, can detect checkerboard patterns. The Harris kernel computes the gradient of each pixel to locate corners. If more than one QR code appears in the raw image, K-Means can group point clusters. Noise cancellation operates at the same time as Harris corner detection. In
Figure 6b, the corners on the QR codes are successfully detected and outlined with red indicators.
The Harris corner detector can easily extract the corners shown in
Figure 6a, due to the large gradient between black and white pixels, as shown in
Figure 6b. After corner detection, Principal Component Analysis (PCA) can extract the point clusters, which are QR codes [
46]. Finally, the application of geometric transformations and decoding determines the QR code information.
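A compact Python sketch of this detection pipeline using OpenCV is given below. It is illustrative rather than the paper's implementation: the response threshold and cluster count are arbitrary, K-Means stands in for the PCA-based grouping, and OpenCV's built-in QRCodeDetector replaces the geometric transformation and decoding stage.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical frame

# Harris response: strong positive values at corner-like pixels.
response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs

# If several QR codes are present, cluster the corner points; k = 2 is an
# assumed cluster count standing in for the PCA grouping described above.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(corners)

# Decode each clustered region with OpenCV's built-in QR detector.
detector = cv2.QRCodeDetector()
for k in range(labels.max() + 1):
    ys, xs = corners[labels == k].T
    roi = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    data, points, _ = detector.detectAndDecode(roi)
    if data:
        print(f"QR cluster {k}: {data}")   # e.g., a microrobot stream URL
```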
2.6. Convolutional Neural Network Pest Detection
An image processing Convolutional Neural Network (CNN) detects small rodents by incorporating a HEXBUG nano mounted with the ESP32-CAM board. The CNN consists of five convolutional layers, each followed by a max pooling layer, and three fully connected layers. The five convolutional layers have 32, 64, 128, 256, and 256 filters, respectively, each with a stride of 1, and each is followed by a max pooling layer with a stride of 2. The three fully connected layers have 1024, 512, and 2 neurons, respectively.
The training of the CNN used a dataset of 1700 images containing the original and augmented pictures, half of which contained rats while the other half were background images, including the original images and pictures captured using the ESP32-CAM. The dataset was split into training and validation sets at a 9:1 ratio, and the CNN was trained for 20 epochs using an Adam optimizer.
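A minimal PyTorch sketch of this architecture follows. The 3 × 3 kernels, 2 × 2 pooling windows, 128 × 128 input resolution, and 1e-4 learning rate are assumptions, as those values are not specified above; the filter and neuron counts match the description.

```python
import torch
import torch.nn as nn

class RatDetector(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [3, 32, 64, 128, 256, 256]        # filter counts from the text
        blocks = []
        for c_in, c_out in zip(chans, chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(kernel_size=2, stride=2)]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(          # 1024, 512, and 2 neurons
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 2),                    # rat vs. background
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RatDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed rate
out = model(torch.randn(1, 3, 128, 128))                   # sanity check
print(out.shape)   # torch.Size([1, 2])
```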
2.7. Leaky Faucet Water Droplet Detection
In this experiment, acousto-elastic methods detect water leaks. Piezoelectric transducer discs are used in conjunction with an Arduino Uno, depicted in
Figure 7. The output of the piezoelectric patch is connected in parallel with 2.01 MΩ resistors to reach the level of sensitivity needed for this application. The Arduino Uno board is programmed using the Arduino IDE to start serial communication at a 9600 baud rate and read the analog values from pin 0 every 100 ms. Excel Data Streamer captures the data, which are then plotted for comparison.
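As an alternative to Excel Data Streamer, the same serial feed can be logged with a short Python script. This sketch assumes the Uno prints one integer ADC reading per line; the port name is machine-specific and given as a placeholder.

```python
import csv
import serial   # pyserial

# Serial parameters mirror the sketch described above.
with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port, \
     open("piezo_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sample", "adc_value"])
    for i in range(600):                   # roughly 60 s of 100 ms samples
        line = port.readline().decode(errors="ignore").strip()
        if line.isdigit():                 # one 10-bit ADC reading per line
            writer.writerow([i, int(line)])
```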
3. Experiments and Functioning Tests
In this section, several experiments and functioning tests are designed to verify the capability of the proposed maintenance framework in addressing common homeowner problems using human–building interaction. Following the proposed methods, the functioning tests, which demonstrate the capabilities of the proposed system in maintenance, include (1) QRD CO2 detection in confined spaces; (2) missing object detection with an AR device view overlay; (3) QR code robot selection for the inspection of confined spaces; and (4) rat detection using a microrobot. The experiments, which analyze the results of applying the proposed home maintenance framework, consist of (1) a comparison of the QRD map with the floor plan and the detection of changes in the pump spectrogram after one year; (2) a comparison of QRD LiDAR mapping with manual measurements of a wall; and (3) the differentiation of a leaky faucet from a running faucet. The main hardware used in the experimental setups is given in
Table 1, and each experiment is elaborated upon as follows.
3.1. Quadruped Robot Dog Air Quality Monitoring Test
One of the main technologies in the proposed framework is the air quality monitoring QRD. To confirm its effectiveness, a sample data acquisition was conducted by maneuvering the QRD and attached sensor board into a confined space filled with CO2 gas, as demonstrated in
Figure 8. The gas was discharged at a constant rate from a RedRock 16 G CO2 cartridge and funneled through a syringe into the restricted enclosure.
The purpose of this test is to verify the capability of the QRD with the external networked sensor circuit by maneuvering it into a confined space built for CO2 testing and successfully detecting elevated levels of CO2 at the 1000 PPM threshold.
3.2. Quadruped Robot Dog LiDAR Floor Mapping and Acoustic Visual Confined Space Monitoring Experiment
The proposed framework utilizes the QRD to provide inspection capability in confined spaces. To test this feature, a two-part experiment was conducted. The goal of the first part is to scout the entire lab area using the QRD LiDAR and compare the accuracy of the resulting floor map with the original floor plan to ensure full coverage of the inspection area. In this experiment, the QRD navigates around the room using the controller and transfers the data to the PC using ROS.
The aim of the second part of the experiment is to maneuver the QRD into the narrow space around the pump and inspect it in low-light conditions, visually and acoustically, for any changes indicating damage over time. This visual and audio data acquisition is performed using an infrared LED camera and a microphone.
3.3. Quadruped Robot Dog Wall Inspection Experiment
To demonstrate the effectiveness of the QRD for wall inspection, an experiment was conducted to calculate the curvature of a wall using the LiDAR mounted on the QRD, and its results were compared with manual measurements and measurements performed with an Xbox Kinect in 2014. In this experiment's setup, the QRD tilts, as displayed in
Figure 9, to point the 2D LiDAR at the area where the bulge is most visible and damage is suspected.
3.4. Microrobot AR Missing Object Detection Test
The purpose of this experiment is to test the effectiveness of the proposed framework in identifying missing tools for maintenance or objects that might obstruct narrow passages, namely, pipes, and display the abstracted information in the AR headset. To carry this process out, low-cost HEXBUG nano or HEXBUG Spider devices mounted with ESP32-CAM boards collect image data, pass the data to a nearby single-board processor on the network, and identify the objects of interest using ML. As an example, a baseball is used as the target for analysis. The Wi-Fi-enabled microrobot units transfer images of the target back to a Raspberry Pi acting as a base station for ML processing. An abstraction of the detected object is then sent to the AR headset for the human user to view. The HoloLens 2 is capable of taking screenshots that emulate the user’s first-person view while wearing the device. Screenshots on the HoloLens 2 include any running programs in user view, superimposed over real-world surroundings.
Many important maintenance tools can easily be misplaced within a home. Locating such objects can be difficult and time-consuming, especially for occupants with special needs or those living in cluttered dwellings. Hammers and screwdrivers are used as example targets of missing tools in this functioning test. The tools are placed at different angles and orientations relative to the view of the microrobot camera to determine the system's ability to detect objects and provide associated confidence levels.
3.5. Microrobot Confined Space AR Inspection Test
The purpose of this test is threefold: first, to assign each microrobot a unique QR code selectable via the AR headset; second, to use the AR headset to select a specific microrobot and inspect the confined space for damage; and third, to provide a simulcast so a team of AR headset users can identify key objects. The setup for the first two parts attaches a QR code to the wall below a ceiling that shows signs of a leak and then uses the microrobots to conduct an AR inspection of the area to find the source of the leak.
To enable a team of AR headset users to view the live stream videos simultaneously, a network consisting of a HEXBUG nano with a mounted ESP32-CAM, an HTC Hub hotspot, a Raspberry Pi 4 Model B, and two AR headsets is set up. In this network, the microrobot transmits the video to the single-board processor via the HTC Hub, and the single-board processor creates a flask server that allows AR headset users to view the live stream at the same time by scanning the QR code of the IP address corresponding to the single-board processor.
3.6. Microrobot Pest Detection Test
This test aims to verify the functionality of microrobots for pest detection in narrow spaces. The ESP32-CAM mounted on the HEXBUG nano performs a visual inspection while a CNN ML algorithm linked directly through the wireless network predicts the presence of a rat in the image. By training the model with a large dataset of labeled images, including the original images and images captured via the microrobot along with their augmentations, the CNN learns to recognize the features of rodents and distinguish them from other objects and backgrounds.
3.7. Leaky Faucet Experiment
In this experiment, the goal is to capture the vibrations for each of three cases, a leaky faucet, a running faucet, and a watertight faucet, to investigate whether the data from these cases are distinguishable enough to assist in detecting a leaky faucet within the home maintenance framework. For this purpose, a piezoelectric patch senses the vibrations of the water droplets in each case, and data acquisition is performed with an Arduino Uno board reading the analog input from the piezoelectric patch.
Additionally, in the case of the leaky faucet, a test is performed to investigate the capability of the AR headset to provide step-by-step guidance using a QR code containing the links and information needed to repair the faucet, allowing the user to perform the repair while viewing the holographic guide in AR.
5. Discussion
To provide a safe and sustainable place for people to live, infrastructure health needs to be constantly monitored. As a means of safety, small, versatile toy robots are modified to access hard-to-reach areas, providing visual inspection data through AR to the human user for finding the source of leakage and monitoring damage to the structure, mechanical systems, and architectural details. Mold contamination is a major health and economic issue for homeowners in humid environments. Mold starts in confined humid spaces and is difficult for people to detect at an early stage. Using this platform, the human user was able to find the source of the stain on the ceiling, which would prevent mold contamination in its early stages. Furthermore, the results show that this technique can be beneficial when incorporating ML with the microrobots to look for pest infestation and to locate missing maintenance tools or objects obstructing narrow passages.
Advances in technology have opened new opportunities for monitoring and analyzing the health of building structures. With the help of vibration and acoustic sensors, piezoelectric sensors, and robotic applications, it is now possible to monitor the health of a building and identify structural, mechanical, and vermin infestation issues in a timely manner, along with recommended options for mitigation, remediation, and repair. This can help homeowners to take proactive and cost-effective measures to maintain their homes in a safe condition.
While the paper addresses various home maintenance problems that homeowners face, collapse prevention is an important aspect to consider. The dynamic interplay between humans and intelligence within built environments, as described by Becerik-Gerber et al. [
1], in HBI can play a significant role in ensuring the safety of the built environment. With the continued advance of technologies for sensors, IoT devices, robotics, and artificial intelligence, it is possible to monitor and detect structural issues in buildings, such as foundation damage or leaks, and take preventive measures to avoid any catastrophic damage. The use of intelligent robots and sensors in hazardous environments, as investigated by Trevelyan et al. [
17], can also assist in identifying potential risks and hazards in buildings. Therefore, by adopting Home Maintenance 4.0, homeowners can utilize a wide set of hardware and software wireless solutions presented in this paper and collaborate with devices such as AR headsets, personal computers, and phones to help maintain a safe environment for themselves and prevent any disastrous events.
The results show the effectiveness of using the QRD to detect curved walls and elevated CO2 levels, to map floors, and to capture acoustic changes over time along with AR visual inspection in confined, low-light conditions. As it is dangerous for humans to be present in hazardous environments, the techniques and platform tested in this paper can serve as a first step in verifying the safety of an environment before physical human intervention.
Suggested future work could include making the QRD autonomous in performing routine inspections using the methods provided here, preventing defects at their preliminary stages. The connection of the QRD sensor board to the Grafana Dashboard and InfluxDB database can be integrated into a 5G network system by enabling the proper settings on the HTC Hub. This allows either or both the Dashboard and the database to be remotely accessible over the internet through their respective cloud-based services [
58,
59]. This could facilitate advanced database feeds from multiple clients, all synchronized on UTC standardized time. For object detection, the system can be configured for custom ML identifiers such as the vermin identifier model demonstrated in
Figure 19. Certain microcontroller boards with stronger processors than the ESP32-CAM can be used to run MicroPython with ML frameworks like TensorFlow Lite [
60], eliminating the need for an intermediary processor. A variety of interfaces could be offered to account for the maintenance knowledge and age group of the users. The cooperation of several microrobots could be beneficial in inspections where viewing different angles provides valuable structural information. Additionally, future HoloLens UI alterations could involve consolidating all detected objects into one token, with an associated bounding box around the detected object and correlating telemetry embedded into the actual frame of detection. Other methods could be used to toggle between the video streams provided by microrobots in AR, such as a dedicated field within a UI via MRTK on top of the Unity framework, gesture interactions, or vocal input. However, a physical QR code provides the convenience of real-world accessibility without the overhead that more complex switching techniques entail.
Current limitations include the lack of user studies spanning different backgrounds, age groups, and levels of maintenance education in interpreting the presented data, as well as the lack of a universal interface that incorporates all available data across each type of hardware. While end-user feedback would provide invaluable information for improving our systems, it is not within the scope of this study, and the experiments were conducted in a controlled environment. It is important to note that technology cannot completely replace human intervention. Homeowners must remain vigilant and attentive to their surroundings, especially in identifying and addressing potential maintenance problems. Additionally, regular maintenance check-ups and servicing by professionals are still necessary to ensure the safety and longevity of the infrastructure. Although this paper discusses cyber–physical systems for small houses, the methods can be generalized and extend well beyond homes to most types of infrastructure. The main consideration is that these systems are aimed at non-specialized users, as opposed to industrial buildings that have specialized maintenance personnel. Homeowners and technology developers must also keep in mind the importance of data privacy and security, especially when using interconnected devices and sensors. By taking these considerations into account, homeowners can fully enjoy the benefits of technology in home maintenance while ensuring a safe and sustainable living environment.