This section focuses on the design of effective methodologies for evaluating the results expected from the different ICT solutions presented, and for assessing their impact on end-users' consumption behavior, as well as on their comfort and acceptance.
3.1. UtilitEE
UtilitEE attempts to define the optimum point, i.e., the trade-off between the three main pillars of the project, by answering through the validation process three questions fundamental to the project objectives:
Energy Efficiency—How much energy can be saved?
User Acceptance—To what extent are end-users willing to adjust their energy usage habits and comfort preferences in order to achieve energy consumption reduction?
Degree of Automation—How automated should an energy management system be to fulfil the users' requirements?
The validation plan must take into account the evaluation criteria defined within the project, in line with the three assessment categories used to evaluate whether the project objectives are fulfilled:
(1) Technical assessment of the UtilitEE framework: focusing on the requirements defined within the project, this evaluation pillar assesses system performance from the perspective of the final users as well as of the technical developers and pilot leaders in the consortium.
(2) Impact assessment of the UtilitEE framework: the impact analysis must be defined so that all critical aspects of the framework's performance reflecting the achievement of the project objectives are examined. In this context, the evaluation criteria are categorized into four domains: the Energy, the (Indoor) Environmental, the Behavioral, and the Business impact of the UtilitEE framework.
(3) User acceptance assessment of the UtilitEE framework: referring to the acceptance, reliability, learnability, and attractiveness achieved by the UtilitEE system with regard to its users and to other people affected by its use.
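For illustration, these evaluation criteria can be captured in a small registry against which the per-category KPIs are later recorded. The following Python sketch is hypothetical: its structure mirrors the three assessment categories above, but the individual criteria are placeholders, not entries from the UtilitEE deliverables.

```python
# Hypothetical evaluation-criteria registry mirroring the three UtilitEE
# assessment categories; the individual criteria are illustrative only.
EVALUATION_CRITERIA = {
    "technical": ["system availability", "response time", "data completeness"],
    "impact": {
        "energy": ["energy savings (%)", "peak demand reduction"],
        "environmental": ["indoor temperature deviation", "CO2 concentration"],
        "behavioral": ["share of recommendations followed"],
        "business": ["operating cost reduction"],
    },
    "user_acceptance": ["acceptance", "reliability", "learnability", "attractiveness"],
}

for category, criteria in EVALUATION_CRITERIA.items():
    print(f"{category}: {criteria}")
```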
3.2. eTEACHER
The eTEACHER validation and impact assessment aims at (1) identifying behavior changes of building users towards energy efficiency and better indoor conditions, encouraged by the project tools, and (2) evaluating the impact of those behavior changes in terms of energy savings and improved indoor conditions.
For that purpose, eTEACHER has defined a methodology that uses measured and self-reported evidence and is based on three methods: (a) monitoring, to collect data on energy consumption and on outdoor and indoor conditions; (b) the eTEACHER app, to collect information related to user interaction (number of registered users, number of active users, etc.); and (c) feedback forums and surveys, to gather the opinions of the building users. In addition, key performance indicators (KPIs) that represent project impact and success are calculated. An important aspect of the methodology is the experimental design, which consists of comparing control environments (without eTEACHER) with study environments (with eTEACHER) before and after the deployment of eTEACHER, in order to draw conclusions regarding the behavior change caused by the project tools; a sketch of this comparison is given below.
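The before/after, control/study comparison corresponds to a classic difference-in-differences estimate. Below is a minimal Python sketch of that calculation, assuming mean energy consumption has already been computed per group and period; the function name, variable names, and figures are illustrative, not taken from the eTEACHER deliverables.

```python
def did_savings(study_before, study_after, control_before, control_after):
    """Difference-in-differences estimate of the eTEACHER effect.

    Each argument is the mean energy consumption (kWh) of one group in one
    period. The control group captures background changes (weather,
    occupancy trends) unrelated to the intervention.
    """
    change_study = study_after - study_before        # observed change with eTEACHER
    change_control = control_after - control_before  # background change without it
    return change_study - change_control             # effect attributable to the tools

# Hypothetical example: study rooms drop by 12 kWh, control rooms by 3 kWh,
# so roughly 9 kWh of the reduction is attributed to eTEACHER.
effect = did_savings(study_before=100, study_after=88,
                     control_before=100, control_after=97)
print(f"Estimated effect: {effect:.1f} kWh")  # -9.0 kWh, i.e., a saving
```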
3.3. InBetween
InBetween's overarching objective was to engage end-users in identifying energy waste, teach them how to avoid waste and conserve energy, motivate them to act and, finally, assist them in carrying out energy efficiency practices. For this purpose, an IoT-enabled cloud platform was established and advanced energy services were developed, yielding an affordable solution that offers added value without significant disruption to end-users' everyday activities and comfort. Such ambitious goals required a sophisticated validation and impact assessment methodology.
The cornerstone of the employed validation methodology was the widely adopted International Performance Measurement and Verification Protocol (IPMVP), under which Option C was selected, since savings were quantified by continuously measuring energy use, with smart meters and heat meters, at the household level over the entire reporting period. Moreover, for specific demand types (e.g., heating, DHW, etc.), additional sub-metering was employed. To validate the platform's effectiveness, a range of KPIs was derived, which can be divided into three main categories: (1) energy use (e.g., total consumption per energy carrier, consumption normalized per area or per number of inhabitants, CO2 emissions, peak load indicator, load match index, etc.); (2) comfort (e.g., indicators such as thermal discomfort, stale air, volatile organic compounds, etc.); and (3) user engagement (e.g., platform usage, number of interactions, etc.). Nevertheless, a tailored subset of KPIs was offered to each of the three stakeholder groups, i.e., platform end-users, demo site owners and supervisors, and platform maintenance and R&D teams, according to their requirements.
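To make the Option C logic concrete, the sketch below estimates avoided energy use as the difference between a weather-adjusted baseline and the metered consumption in the reporting period, using a simple linear baseline model against heating degree days (HDD) as the routine adjustment. This is a minimal illustration under those assumptions; the data, model form, and names are hypothetical rather than the project's actual implementation.

```python
import numpy as np

def fit_baseline(hdd, energy_kwh):
    """Fit a linear baseline model, energy ~ a + b * HDD, on monthly data
    from the pre-deployment (baseline) period."""
    b, a = np.polyfit(hdd, energy_kwh, 1)  # slope, intercept
    return a, b

def option_c_savings(a, b, hdd_reporting, energy_reporting):
    """IPMVP Option C style avoided energy: what the household would have
    consumed under reporting-period weather, minus what it actually did."""
    adjusted_baseline = a + b * np.asarray(hdd_reporting)
    return adjusted_baseline - np.asarray(energy_reporting)

# Hypothetical monthly HDD and whole-household consumption (kWh).
a, b = fit_baseline(hdd=[400, 350, 200, 80], energy_kwh=[950, 870, 610, 400])
monthly = option_c_savings(a, b, hdd_reporting=[420, 300, 150],
                           energy_reporting=[880, 640, 430])
print(f"Total avoided energy: {monthly.sum():.0f} kWh")
```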
Finally, in some cases, the adopted validation methodology was customised to mitigate unavoidable factors influencing the reliability of the results, such as limited data availability, the combination of invoice and monitored data, the precision of HDD corrections, changes of habits during the COVID-19 lockdown, and the ability to attribute energy savings to their cause.
The impact assessment on end-user behavior was based on several user engagement and technology acceptance indicators. Specifically, engagement with the platform was assessed from the intensity of interactions between end-users and the platform (e.g., number of sessions in the mobile app, most frequently used features, etc.) and from the usage of the platform's control functions (e.g., remote appliance control, appliance working-hours scheduling, etc.). Moreover, end-user engagement was also analyzed by measuring the level of compliance with recommendations issued by the platform: end-user actions could be tracked and associated with a specific energy conservation recommendation (delivered as a notification within the InBetween mobile app) within a given timeframe, as sketched below. Finally, end-user satisfaction was monitored via a built-in feedback mechanism (a 5-star rating) within the platform's user interfaces.
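As an illustration of the compliance-tracking step, the following sketch matches logged end-user actions to recommendation notifications on the same appliance within a fixed time window. The event log format, appliance names, and two-hour window are hypothetical; the platform's actual matching logic is not published in this form.

```python
from datetime import datetime, timedelta

# Hypothetical event logs: (timestamp, target appliance) pairs.
recommendations = [
    (datetime(2021, 3, 1, 18, 0), "washing_machine"),  # e.g., "shift start to off-peak"
    (datetime(2021, 3, 2, 9, 30), "thermostat"),       # e.g., "lower setpoint by 1 °C"
]
user_actions = [
    (datetime(2021, 3, 1, 18, 40), "washing_machine"),
    (datetime(2021, 3, 3, 12, 0), "thermostat"),
]

WINDOW = timedelta(hours=2)  # assumed compliance timeframe

def complied(recommendation, actions, window=WINDOW):
    """True if a user action on the same appliance occurred within the
    window after the recommendation was issued."""
    rec_time, rec_target = recommendation
    return any(target == rec_target and rec_time <= t <= rec_time + window
               for t, target in actions)

rate = sum(complied(r, user_actions) for r in recommendations) / len(recommendations)
print(f"Compliance rate: {rate:.0%}")  # 50% in this toy log
```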