Article

RYEL: An Experimental Study in the Behavioral Response of Judges Using a Novel Technique for Acquiring Higher-Order Thinking Based on Explainable Artificial Intelligence and Case-Based Reasoning

by Luis Raúl Rodríguez Oconitrillo 1,*, Juan José Vargas 1, Arturo Camacho 1, Álvaro Burgos 2 and Juan Manuel Corchado 3,4,5,6,*
1 School of Computer Science and Informatics, Universidad de Costa Rica (UCR), Ciudad Universitaria Rodrigo Facio Brenes, San José 11501-2060, Costa Rica
2 Law School, Universidad de Costa Rica (UCR), Ciudad Universitaria Rodrigo Facio Brenes, San José 11501-2060, Costa Rica
3 Bisite Research Group, Universidad de Salamanca, 37008 Salamanca, Spain
4 Air Institute, IoT Digital Innovation Hub, 37008 Salamanca, Spain
5 Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
6 Pusat Komputeran dan Informatik, Universiti Malaysia Kelantan, Kelantan 16100, Malaysia
* Authors to whom correspondence should be addressed.
Electronics 2021, 10(12), 1500; https://doi.org/10.3390/electronics10121500
Submission received: 20 May 2021 / Revised: 15 June 2021 / Accepted: 16 June 2021 / Published: 21 June 2021

Abstract:
Studies connecting machine explainability with human behavior are essential, especially for a detailed understanding of a human's perspective, thoughts, and sensations in a given context. A novel system called RYEL was developed with Subject-Matter Experts (SMEs) to investigate new techniques for acquiring higher-order thinking and perception, applying new computational explanatory techniques, supporting decision-making, and studying the judge's cognition and behavior. Thus, a new spectrum is covered, promising a new area of study called Interpretation-Assessment/Assessment-Interpretation (IA-AI), which consists of explaining not only machine inferences but also a human's interpretation and assessment. It allows expressing a semantic, ontological, and hermeneutical meaning related to the psyche of a human (judge). The system has an interpretative and explanatory nature and, in the future, could be used in other domains of discourse. More than 33 experts in Law and Artificial Intelligence validated the functional design. More than 26 judges, most of them specializing in psychology and criminology, from Colombia, Ecuador, Panama, Spain, Argentina, and Costa Rica, participated in the experiments. The results of the experimentation have been very positive. As a challenge, this research represents a paradigm shift in legal data processing.

1. Introduction

There is a need for a computational framework that allows capturing, representing, and processing the meta-knowledge [1] of a human in the context of the domain of discourse and studying the behavioral response of a person in light of explanatory machine techniques [2]. For example, in the legal area, this means obtaining an intelligent and coherent explanation of the legal analysis made by a human in a particular scenario of a previous case and finding the reasons why a particular law was used [3] in order to support decision-making when other judges are dictating a resolution. This situation seems superfluous, but it is not, because it would usually imply navigating a complicated set of theories that range from cognitive learning theories [4,5] and instructional design [6] to cognitive theory [7] and information processing theory [8], among others. These theories reveal how a judge can learn and support decision-making using the knowledge from other judges and their sentences. This investigation follows a path that subsumes and synthesizes parts of these theories, focuses them on a practical point of application, and leads directly to basal knowledge [9]: that of the Subject-Matter Experts (SMEs) [10], for whom the analysis of the merits of a case (the legal matter background) is the main activity of a judge. In this way, it is possible to direct efforts toward working with higher-order thinking [11], using technology as a meta-media [12] to manage meta-knowledge.
The study of the merits of a case involves analyzing the scenarios formed by the facts associated with a case. A legal analysis consists of a judge perceiving [13] and analyzing the facts and evidence according to a determined legal posture [14,15]. It should be emphasized that the behavioral response of a research subject, such as a judge, is diverse, so it is necessary to investigate the response when using the framework in terms of functionality, usability, and efficiency when analyzing the merits of a case.
Studying the judge's behavior on accepting or rejecting the use of the framework is not as simple as asking the judges if they agree, like, or think it is possible to use this framework to do such work. This investigation is a complete challenge to the mental and psychological scheme because there are rules in the domain of discourse that represent substantial barriers to conducting experiments and studies in technology and human behavior inside the jurisdictional area. The main barriers are: (1) nothing and no one can intervene in the decision-making of a judge (this is called judicial independence), and (2) judges have a high degree of discretion to make decisions, and nothing and no one can tell them how or what decision to make [16,17]. So far, no research has evaluated the behavior of judges when faced with the use of a framework that helps them with the deep analysis of a case by taking fragments of the human psyche [18] using meta-knowledge, focusing on perception [13], and adapting Artificial Intelligence (AI) [19] techniques to explain those fragments. The psyche, in this research, is understood as the processes and phenomena that make the human mind work as a unit that processes perceptions, sensations, feelings, and thoughts, rather than as a metaphysical phenomenon [18]. So, it is understandable that the psyche is exceptionally complex; however, it is possible to explore some deep traits and characteristics that can be expressed through layers of awareness [20] or envelopes [21] of knowledge, based on the perception a human has of objects and relationships in the real world. In this way, it is possible to express, through related meta-knowledge fragments, the meaning and purpose of someone in a specific context.
Thus, this research aims not only to investigate the behavioral development of the judge when using new technology for in-depth analysis of cases but also to show computational advances with a high impact in cognitive and psychological fields. This research presents a Mixture of Experts (MOE) system [22,23] called RYEL [24,25,26,27]. This system was created based on CBR guidelines [3,28,29,30] and Explainable Artificial Intelligence (XAI) [31,32,33] using focus-centered organization fundamentals, which means that XAI and CBR are organized and focused according to the perspective and approach that a human has in a domain of discourse; that is, it is human-centric [34,35,36]. A human interacts with the system through Explanatory Graphical Interfaces (EGI) [2], which are graphic modules that implement computational techniques of knowledge elicitation [37] to capture, process, and explain the perception of a human about facts and evidence from scenarios in a context. RYEL uses the method called Interpretation-Assessment/Assessment-Interpretation (IA-AI), explained in [2], which consists of explaining not only machine inferences but also the point of view that a human has, using metadata from the real world along with statistical graphs and dynamic graphical figures.
Various investigations try to obtain knowledge from past cases using the traditional Case-Based Reasoning (CBR) approach in a legal context, such as [3,28,30,38,39,40,41]. In those systems, CBR consists only of solving current cases according to how previous ones were solved, that is, in a deterministic way. Our solution differs in that it captures the judge's interpretation and assessment of facts and provides an intelligent simulation [42,43,44,45] that allows a legal analysis of the merits of the case. This is an understudied approach concerned with identifying the perception of a judge about the objects and relationships of the real world involved in a case, along with the machine's ability to capture and process that information and explain it graphically on an interface [46].
Thus, the novelty of this research lies not only in the societal impact of using XAI and CBR to assist judges in resolving legal disputes between humans with the novel IA-AI method to analyze the merits of a case, but also in the behavioral study of the judge in the face of this technology. Therefore, a balance between explaining the software design and the behavioral analysis of the judges is key to revealing essential aspects of this investigation. The following sections explain this balance.

2. Framework Design

The Design Science research process proposed in [47] allowed the creation of the computational framework of RYEL explained in [2], implementing the CBR life-cycle stages shown in Figure 1 as a guide to exchange and organize information with the user. The system design was developed in [24,25,26] as a hybrid system [48,49] implementing different machine learning techniques for every CBR stage [28,30,38,39,41,50,51,52]. The stages were adapted into graphical interfaces where the judge can manipulate images representing connected facts and evidence of the scenarios. The way the scenarios express definitions, relationships, characterizations, and descriptions of a legal context through images allows the machine to acquire higher-order thinking [11] from humans dynamically and graphically. By manipulating interrelated images, a human expresses ideas and points of view. The system takes the images as inputs to carry out a legal analysis simulation and generates a graphical explanation of the laws applicable to the factual picture. Other judges use the explanations provided by the machine for decision-making support.
The data overview of the system, depicted in Figure 2, consists of image inputs, evidence and fact processing, norm and law outputs, and CBR articulating and processing the information between the user and the machine, thereby organizing the inputs and outputs of the system. The role of the EGIs is to provide graphical interfaces for human interaction with a computer, called Human-Computer Interaction (HCI) [53]; an example of the interface is shown in Figure 3. The graphical techniques of the interfaces allow the elicitation of human knowledge using the ligaments between the shape of images and the content of their attributes, as well as the relationships between images. The EGIs thus act as a meta-media for managing higher-order thinking by combining their functionality, for example, by combining the functions of the interfaces of Figure 4 and Figure 5. This combination makes it possible to work with the range, importance level, order, and attribute links between images. The role of IA-AI is to obtain the perception of a human from the dynamic triangulation of attributes expressed with images, relationships, and unsupervised algorithms [2].
The explanation of the system design must line up with the study of human behavior in light of the cognitive field and technology. Thus, the following points allow this alignment, and the next sections explain them: (1) the cognitive environment [54] of the judge, in order to delineate the domain of discourse and understand both the computational nature of the data and the behavioral study of the judge; (2) the knowledge representation; (3) the computational legal simulation; and (4) the hybrid nature of the system.

2.1. Cognitive Environment

A judge may have extensive knowledge. However, the system focuses on how a judge understands information from scenarios in a context, as shown in Figure 6. This figure explains the definition of understanding in this research in terms of (1) perception, (2) perspective, and (3) interpretation. These words seem to be standard and straightforward terms, but the system treats them as part of its nature and requires explanation.
Perception in [13] is a mental process that involves activities such as thought, learning, memory, and others, along with a link to the state of memory and its organization. It is a subjective state where a person captures and understands, in their way, the qualities of objects and facts from the real world. Therefore, a judge may have a different perception of the information between one file and another. For example, a judge in a Domestic Violence Court has grasped, learned, and is aware that the defendant from the beginning is an alleged aggressor given a situation of vulnerability over the victim. However, a judge in Criminal Court has learned and understood that the “Principle of Innocence” must be used with a defendant, which presumes the state of not being guilty until proven otherwise.
Perspective in [55] is the point of view from which an issue is analyzed or considered. The perspectives can influence people’s perceptions or judgments. The judges’ perceptions could change according to their attitude, position, or considerations about facts, objects, individuals, entities, or type of work. The annotations of a case, which are information from the legal file, can be analyzed using a different perspective; for example, a judge in a Criminal Tax Investigation Court may see the action of hitting a person as not so severe or even belittle it, while a judge in a Domestic Violence Court can see it as very serious.
Interpretation in [56] means expressing or conceiving a reality personally or attributing meaning to something. Thus, the judges could conceive an annotation from the legal file according to their reality and then attribute and assign a meaning to it. Consider this example: shooting a person can be interpreted by a judge in a criminal court as an act of personal defense and assessed as a reason to preserve life, while another judge, from the same court, may interpret it as an act of aggression and assess it as a reason to steal something.
The system handles the interpretation and assessment made by a judge as two separate but interacting processes. In order to understand this interaction, consider the following example: person X assesses the help that person Y gave them in a trial, but person X cannot interpret the reasons for the help because person Y is their enemy; or else, person X interprets that their enemy helped them in a trial because they want something from them. For that reason, the help is not so valued by person X. This example shows how interpretation and assessment interact in this investigation.
The legal file does not record how a judge understands the facts and evidence. Currently, a file only contains the final decision of a judge supported by motivations and underpinnings of the law, along with chunks of structured data like "the outcome", "the considering", and "the therefore", as described in [17,57]. This unrecorded information is precisely the most important for understanding the perception of a human. The graphical techniques and explainable methods [33] in this investigation make it possible to capture and detail this information.

2.2. Knowledge Graph

Internally, the system transforms the images and relationships representing the scenarios of a case into directed graphs called Knowledge Graphs (KG) [58,59,60], which contain object types, properties, and relationships from real criminal cases. Through graphic media, the judge can obtain information about the ontological content [61] processed by the images. After the image transformation, each scenario is converted into a set of nodes and edges representing facts or evidence along with the relationship that explains their bond, which translates into hermeneutical content [62]. There may be more than one scenario per legal case. The expressive semantic nature [63] of the KG allows for different graphical forms [46] to show the reasoning of a judge and to understand the use of law in a proven fact (a fact whose evidence accredits it as true) in a crime. According to [64], KGs have become prevalent in both academic and industrial circles in recent years because they are one of the most widely used approaches to integrating different types of knowledge efficiently.
Usually, the judge performs the mental process of relating legal concepts of the scenarios to find the meaning of the information provided by the parties in conflict. Thus, to determine whether the facts are truthful, the judge makes groups of evidence and links them to the facts. The groups, data type, and relationships in the legal scenarios mold a network that expresses meaning. Therefore, a network is generated and is visualized as Semantic Networks (SN) [65,66,67] by the system.
In [68], the SN is a directed graph composed of nodes, links, or arcs, as well as labels on the links. A KG in [63] is a type of SN, but the difference lies in the specialization of knowledge and in creating a set of relationships. Thus, the knowledge structure depends on the domain of application, and the graph structure changes according to the knowledge expressed. Since the system translates images into nodes describing physical objects, concepts, or situations in a legal context, the relationships (edges) between images are transformed into links that express a connection between objects in legal scenarios. Links have labels to specify a particular relationship between the objects of the legal case. Thus, nodes and edges are a means to structure legal knowledge. In this way, the use of images and relationships allows the construction of a KG that represents the judge's knowledge after having interpreted and evaluated the facts and evidence contained in the scenarios, and this is why the graphs include information about property types and relationships between entities. An entity can be an object, a person, a fact, a proof, or the law.
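To illustrate how facts, evidence, and their labeled relationships can be held as a property graph, the following is a minimal Python sketch; the node labels, property names, and relationship names are illustrative assumptions, not the system's actual schema.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                      # e.g., "Fact", "Proof", "Law"
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    source: Node
    target: Node
    label: str                      # e.g., "PROVES", "REGULATES"
    properties: dict = field(default_factory=dict)

# Illustrative scenario: a proof accrediting a fact, and a norm regulating it.
fact = Node("Fact", {"description": "defendant struck the victim"})
proof = Node("Proof", {"description": "medical report", "importance": 0.8})
norm = Node("Law", {"description": "article on aggravated assault"})

scenario = [
    Edge(proof, fact, "PROVES", {"link_strength": 0.9}),
    Edge(norm, fact, "REGULATES"),
]

for e in scenario:
    print(f"({e.source.label}) -[{e.label}]-> ({e.target.label})")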

Human Interaction

The judge can access graphic resources in the form of images representing legal elements [16] which are pieces of juridical data made of evidence and facts, as shown in Figure 7. An EGI offers the judge a popup menu to select the image that best reflects a record entry from the expedient. In addition, the system has a drawing area called working canvas where the judges can draw their perception of the scenarios by establishing, organizing, distributing, and relating the images that they select from the menu, as shown in Figure 7. The KGs built with the images are stored in an unstructured database, and when this happens, they become a more specific type of graph called a Property Graph (PG) [69,70,71].
The judge can change the display state of an EGI between images and nodes to study the attribute representations in both states. The nodes acquire colors, sizes, and properties to explain the details of the attributes visually. Edges acquire properties such as length, thickness, color, and orientation to explain how the nodes are linked and distributed. Both nodes and edges contain unique properties resulting from the transformation process. The system uses the IA-AI method to create properties and attributes. The method has three main processes. In the first process, after the judges have finished drawing on the working canvas, as in Figure 7, they can interpret and assign the levels and ranges of importance to the images drawn. The judges do this by dragging and dropping the images into previously designed graphic boxes (precedence and importance levels), as shown in Figure 5. In the second process, other EGIs are used to assess the links between images representing the facts and evidence (proof assessment); the judges do this by attaching the links in different positions and establishing the bond length (link) between objects, as shown in Figure 4. In the third process, another EGI is used to explain recommendations on laws and regulations concerning the factual picture depending on the context, as shown in Figure 8, where the Y-axis indicates the legal taxonomy, that is, the order of importance of the legal norms according to the context. The X-axis represents the level of similarity that the norms have with the factual picture of the scenarios. Finally, the machine recommends groups of norms represented by circles located higher or further to the right in the chart, depending on what the judge is analyzing.
During a case, the judges can run legal simulations to delve into the merits of the case gradually. The simulation carried out by the system is described below.

2.3. Intelligent Simulation

Figure 9 shows the legal simulation activity. There are three main processes in the simulation: (1) capturing, using EGIs [25,27], the interpretation and assessment values that a judge assigns to the facts and evidence of a case; (2) identifying the patterns of interpretation [3] using CYPHER [71] scripts to extract the semantic [72,73] and ontological [74,75] content of the facts and evidence contained in the scenarios of a case depicted by EGIs; and (3) offering the judge options to distill legal information from the patterns found in the graphs, as shown in Figure 10, by using unsupervised algorithms [71] like Jaccard [76], Cosine Similarity [77], and Pearson's Correlation Coefficient [78] applied to graphs. The machine then provides an explanation of the results. Some examples of the information that the machine explains are: (a) a graphical explanation of the set of norms that apply to a case; (b) identification of the evidence that is not related to some fact; (c) detection of the evidence that has not been evaluated; and (d) indication of the evidence that has been evaluated but not related.
When judges use the system continuously, they will be able to integrate legal knowledge during a trial.

Meta-Knowledge Integration

Knowledge Integration (KI) [79] happens by capturing and representing the interpretation and assessment of facts and evidence made by the judges at the beginning of a trial, along with the knowledge obtained from the analysis of the legal simulations. Thus, the system unifies the unstructured knowledge [80] of the interpretation and assessment values of a judge according to their legal perspective with new information that may appear from the start of a trial to its end. In addition, KI allows the generation and integration of fragments of meta-knowledge. There are three points of KI and one more at the end of the trial when judges dictate a resolution or until the sentence is appealed. If the resolution is legally challenged (contested decision), then there is one more point of KI. At each point, the judges can express new insights or changes to facts and evidence and run as many legal analysis simulations as necessary.

2.4. Hybrid System

RYEL uses different types of machine learning techniques; therefore, it has the characteristic of being a hybrid [48] system. Hybridization applies in a multitude of computational areas, as in [81,82]. However, this research focuses on the legal field, specifically on the facts and evidence of a case analyzed by a judge.
The hybridization [48] of RYEL is organized under the MOE [22,23] foundations based on the divide-and-conquer principle [23,83]. That means that different parts or segments constitute the problem space; each part corresponds to a module called an "expert" [23]. MOE usually uses a "gate network" [23] that decides which expert a specific task should be assigned to in order to deal with complex and dynamic problems [48], for example, the use of various experts for multiple label classifications using Bayesian Networks (BN) and tree structures [22]. Supervised machine learning such as neural networks is typically used [22,83] in MOE; however, our approach is unsupervised [84], using KGs [25,27] to build SNs with CBR [3,85,86] and XAI [32,87].

2.5. Case Explicability

The implementation of XAI and CBR reveals the interconnections and characteristics of objects within the scenarios of a context. Due to the use of KGs, it is possible to achieve legal exegesis [88] by obtaining the hermeneutical content of relations and objects together with ontological data through their properties. This means that a legal interpretation follows the content expressed by a judge and, therefore, the semantics explained initially.
The adaptations of the CBR stages, shown in Figure 1, are the following: (1) retrieve, whereby the judges have graphical options to execute a legal analysis simulation to find patterns of interpretation of the facts and assessment of the evidence similar to the case depicted on the working canvas for a specific context; (2) reuse, whereby the system synthesizes and evaluates the patterns found and detects the laws to which they are linked in order to be considered by the judges in new cases; (3) revise, whereby the judges of higher hierarchy use the EGIs to review the work done by lower judges, aiming to make modifications and corrections in the factual picture posed on the working canvas. In this stage, the system integrates the knowledge of the judges and the parties in conflict. If the parties in conflict appeal the resolution (legal challenge), then higher-hierarchy judges must revise the scenarios. The higher judges can also run legal analysis simulations in order to consult, verify, correct, or add new perspectives to refute or accept, in whole or in part, the analysis carried out by judges of lower hierarchies; then, the issuance of a final resolution occurs; and (4) retain, which means that the sentence is final and no further legal simulation is necessary. In the retention stage, the system incorporates cognitive information into the knowledge database because possible errors of perception bias were eliminated or corrected through review by several humans during the legal process using the system. Figure 1 shows a list that summarizes the stages of the CBR; a minimal code sketch of this cycle is given after the list.
  • Case-Base: represented by a KG;
  • The Problem: the interpretation and assessment of both facts and evidence;
  • Retrieve: uses CYPHER script patterns and graph similarity algorithms like Cosine, Pearson, and Jaccard;
  • Reuse: consists of detecting norms and laws related to the factual picture drawn on the working canvas;
  • Revise: analyzes and reviews the work done by lower judges using KGs via EGIs;
  • Retain: the stage of adding to the knowledge base a correct approach to interpreting and assessing a factual picture.
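As a rough illustration of how these four stages could be wired together around knowledge graphs, the following Python skeleton is a sketch only; the class, method, and field names (and the dict-based case representation) are assumptions for illustration, not RYEL's implementation.

class LegalCBR:
    """Sketch of the adapted CBR cycle over knowledge graphs (KGs), held as dicts."""

    def __init__(self, case_base):
        self.case_base = case_base          # list of stored KGs (cases)

    def retrieve(self, new_kg, similarity):
        # Find stored interpretation/assessment patterns similar to the new KG.
        scored = [(case, similarity(new_kg, case)) for case in self.case_base]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    def reuse(self, ranked_cases):
        # Collect norms and laws linked to the most similar factual pictures.
        norms = set()
        for case, _score in ranked_cases[:5]:
            norms.update(case.get("linked_norms", []))
        return norms

    def revise(self, new_kg, corrections):
        # Higher-hierarchy judges adjust the factual picture via the EGIs.
        new_kg.update(corrections)
        return new_kg

    def retain(self, reviewed_kg):
        # Store the reviewed case once the resolution is final.
        self.case_base.append(reviewed_kg)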

2.6. Case Definition, Data Model, and Example

Formally, a case is a graphical deposition of facts and evidence made by the judges according to their perspective using EGIs. In one case, there are segments of information called "scenarios" that contain facts related to the evidence. Scenarios are a way to express and organize legal information.
An in-depth legal analysis is the identification and description of both the data and the relationships within each scenario. The judges do this analysis as they work through the case during the trial. To exemplify the data and relationships, consider the data model of segment 1 in Figure 11, where bidirectional arrows indicate that a relationship can go either way between concepts or objects, demonstrating the organization and relation of the meta-knowledge. In segment 2 of this figure, a simplified real-world example of a "violation case" uses the data model from segment 1. The elements called "Material Object" and "Formal Object" broach the subject of each scenario; for example, in this figure, there is a case of a man affecting a woman through the action of rape. In [89], a formal object means carrying out a legal study, from a particular perspective, on the relationship of legal data; the material object is the matter that such data deal with, but in this case, all the relationships that describe each object are also represented and organized.
Segment 2 of Figure 11 shows a PG in which properties and labels have been removed from the relationships in order to simplify the example. This PG represents a judge analyzing a man from the perspective of the state of mental health that could lead him to rape a woman because of medical issues related to the testosterone levels in his body, and the woman from the perspective of the moral damage suffered by the desecration of her body. The rest of the graph describes tests, norms, laws, rights, resolutions, and decisions related to the rape felony, following the model of segment 1.

2.7. Explainable Technique

RYEL uses the explanatory technique called Fragmented Reasoning (FR) [2]. This technique uses dynamic statistical graphics that granulate the information following a hierarchical order and the importance of the information according to the interpretation and assessment made by a human of real-world objects. This means that the semantic and holistic constitution of objects, attributes, and vectors describing relationships between objects in a KG, as in Figure 7, are fragmented and linked to each other to explain the human conception according to its perception in a specific domain of discourse. Therefore, this technique expresses the hermeneutical content of a case from the perspective of a human and allows the study of a new spectrum of cognitive information treatment [54] in the field of machine learning, associated with the human factor [90,91], specifically the subjective information [92] of a person, which in this case is specific to the judge.
In Figure 12, the percentage of participation represented by the Y-axis is used to explain the level at which the concepts or objects of the current case are within the factual picture of other cases. The X-axis is used to explain the level of similarity that the concepts have between the cases. The size of the circles represents the dimension or level of importance of the scenarios within the files. The machine recommends the group of files distributed and located higher or more to the right of the graph. The machine handles each fragment as a collection of nodes to describe the interpretation of juridical objects and the assessment of a juridical concept. In this way, it has been possible to investigate the legal explanations related to the inferences [93] obtained by the system.
Internally, the FR technique works using a strategic arrangement of data for each observation made by a human. FR uses the IA-AI method to get information as an input and reveals how it was interpreted and evaluated by a person. Figure 12 shows examples of some calculations and graphical views when the system provides recommendations. A fragment is a set of cognitive information pieces represented by geometric figures, colors, sizes, positions, and distributions of data elicited from EGIs using IA-AI and KG. The system uses the fragments to manage and organize the set of objects and to be able to explain them.
A fragment ω is represented by a collection of elements and the judge's assessments. A fragment is an approximation of a set of nodes about a legal context p, where a set of juridical concepts κ is in union with a set of nodes β joined with their relationships γ. The variables β and γ decorate [94] the juridical concept κ. In this case, the decoration refers to the design pattern used programmatically (coding) to define a collection of objects capable of expressing the behavior of an individual object κ dynamically, but without affecting the behavior of other objects of the same type in the same context; the programming paradigm used is Object-Orientation (OOP) to handle nodes, relationships, attributes, and properties. From the interpretation patterns, the construction of predicates occurs; imperative programming is used directly to manage the objects, and declarative programming is used indirectly to manage the assertions of the objects using CYPHER scripts. The set of objects contained in the fragments participates in (1) the Jaccard, Pearson, and Cosine formulas to work with the interpretation patterns, and (2) the organization and construction of vectors from said patterns to make an inference.
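The decoration of a juridical concept κ by nodes β and relationships γ follows the classic object-oriented decorator pattern; the sketch below is an illustrative Python rendering with hypothetical class names, not the system's implementation.

class JuridicalConcept:
    """Base concept kappa, e.g., 'injure' or 'disable'."""
    def __init__(self, name):
        self.name = name

    def describe(self):
        return self.name

class ConceptDecorator(JuridicalConcept):
    """Wraps a concept and adds behavior without affecting other objects."""
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def describe(self):
        return self.wrapped.describe()

class NodeDecorator(ConceptDecorator):
    """Adds the node set beta that characterizes the concept."""
    def __init__(self, wrapped, nodes):
        super().__init__(wrapped)
        self.nodes = nodes

    def describe(self):
        return f"{self.wrapped.describe()} | nodes: {self.nodes}"

class RelationDecorator(ConceptDecorator):
    """Adds the relationships gamma between those nodes."""
    def __init__(self, wrapped, relations):
        super().__init__(wrapped)
        self.relations = relations

    def describe(self):
        return f"{self.wrapped.describe()} | relations: {self.relations}"

# One concept decorated dynamically, leaving other concepts of the same type untouched.
kappa = RelationDecorator(NodeDecorator(JuridicalConcept("injure"), ["knife", "wound"]),
                          ["used_in", "caused"])
print(kappa.describe())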

3. Machine Specifications

Table A1 explains the main components, technology, formulas, and concepts of the system in approximate order of operation. We will refer to each component with a "C" attached to a number, for example, "Component 3 = C3". Table 1 synthesizes and distills the operations and essential functions used to work with higher-order thinking and handle KGs in the system, based on Table A1. For now, the focus will be on C2, which provides the data structures that represent a KG (case) in the form of an ordered triple, as shown in Figure A1. Equation (1) shows the formal representation of the ordered triple, from which the extraction of elements such as concepts, nodes, and relationships is possible. The output of the extraction is a list of values that represents the input for the vector construction algorithm shown in Algorithm 1. Inference generation uses vectors between scenarios. This section explains: (1) the ordered triple, (2) the formulas and vector construction, and (3) a simplified real case scenario example using the formal representation of a case.

3.1. Data Structure

To explain each element in Figure A1, consider the following: given that graph G is an ordered pair of the form G = (N, E), where N is the set of nodes and E is the set of edges, the artifact handles the following (a minimal code sketch of these structures is given after the list):
  • Each node or vertex (image) is an ordered pair of the form N = (n, t_o), where n is the label of the node and t_o is an ordered triple of the form t_o = (i_n, i_c, A), where i_n is the index of the node, i_c is the index of the legal concept to which the node belongs, and A = {x : x is an attribute of the node}. The x attributes of the node are text fields, for example, the name and description of the node, and numeric values for the precedence and importance levels that belong to the set of numbers Q;
  • Each edge or arc (relation between images) is an ordered pair of the form E = (e, t_o), where e is the relation label and t_o is an ordered triple of the form t_o = (i_r, p_o, A), where i_r is the index of the relation, p_o is an ordered pair of the form p_o = (n_i, n_f), where n_i is the index of the start node and n_f is the index of the final node, and A = {y : y is a relation attribute}. The attributes of a relationship are text fields, for example, the name and description of the relationship, as well as numerical values for the link and relevance of the relationship that belong to the set of numbers Q.
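The following Python sketch mirrors these ordered pairs and triples; the field names follow the definitions above, while the concrete labels and values are invented for illustration.

from fractions import Fraction  # attribute values belong to the rationals Q

# Node: N = (n, t_o) with t_o = (i_n, i_c, A)
node = (
    "Victim",                              # n: node label
    (100,                                  # i_n: node index
     333,                                  # i_c: index of the legal concept
     {"name": "woman", "description": "injured party",
      "precedence": Fraction(3, 4), "importance": Fraction(9, 10)}),  # A: attributes
)

# Edge: E = (e, t_o) with t_o = (i_r, p_o, A) and p_o = (n_i, n_f)
edge = (
    "AFFECTED_BY",                         # e: relation label
    (500,                                  # i_r: relation index
     (100, 200),                           # p_o: start and final node indices
     {"name": "aggression", "description": "action that harms the victim",
      "link": Fraction(4, 5), "relevance": Fraction(7, 10)}),          # A: attributes
)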
Algorithm 1: Creation of vectors and related concepts in KG

3.2. Case, Context, and Scenarios

From N and E, a case C is an ordered triple, as shown in Equation (1), where:
  • p ∈ ℕ is an index that identifies the context of the case assigned by the artifact;
  • V means the case scenarios in the form V = {λ : λ is a legal element}, where n = |V| > 0;
  • R represents the relationships in the scenarios of a case in a given context, in the form R = {r : r is a relationship of type E}.
Given the above, Equation (1) shows a case with a set of relations R for a set of nodes that constitute the legal elements λ and describe the V scenarios of the factual picture that occurs in a context p. The relations and nodes are created from the transformation of interrelated images using the graphical interfaces of the artifact, and the index is an internal number that the machine assigns to the description of the context given by the judge, for example, "Simple Homicide = 999999 = p":
C = (p, V, R).    (1)

3.3. Legal Elements

A legal element is an ordered triple, as shown in Equation (2), where:
  • i ∈ ℕ is an index that identifies a particular legal element assigned by the artifact;
  • K represents a concept in the form K = {z : z ∈ P ∨ z ∈ H}, where:
    (a) P = {p : p is a proof of the kind N},
    (b) H = {h : h is a fact of the kind N};
  • T = {t : t is a relationship of the kind E}.
Equation (2) means that in a legal element λ there are relations T for a set of nodes that form the concepts K, which are formed by facts or evidence, and an index i identifies them. The artifact assigns the index to each set of nodes to identify that set:
λ = (i, K, T).    (2)
From the formal representations of the case explained previously, it is possible to supply a real, simple, and reduced example of an interpretation pattern. Consider Listing 1; this script searches for patterns of nodes connected to the act of raping (Violation) someone under 18 years old. The pattern can be modified to look for children, older people, or persons of undefined sex according to the rules of gender ideology. Modifications can be made to the script to apply deductive logic by taking a general aspect of a fact, evidence, or person and looking for a particular attribute pattern to channel some legal study.
Listing 2 seeks particular attributes of people; in this case, a man connected with a woman, regardless of age or other characteristics, but considering their names. This script traces connection patterns up to 15 layers deep between these two people and, at the same time, extracts the shortest links between them. Deep layers mean the depth of connections between one object and another. Therefore, this script can determine objects or events that are intermediaries between people in order to understand their criminal nexus.
Listing 1. Simplified example of code about interpretation patterns related to the act of rape using CYPHER.
MATCH (n) WHERE n:Man OR n:Woman
OPTIONAL MATCH (n)-[r]-(v:Violation)-[type]-(s:Sexual)--(y)
WHERE exists(n.age) AND n.age < 18
RETURN n, r LIMIT 100
Listing 2. Simplified example of existing patterns between a node type Man and another type Woman using CYPHER.
MATCH (hombre:Man {name: 'Jack Smith'}),
      (mujer:Woman {name: 'Alice Kooper'}),
      p = shortestPath((hombre)-[*..15]-(mujer))
RETURN p
There are 3 ways to avoid ambiguities: (1) by using a specific context, (2) by searching for a particular pattern, and (3) by using vector similarity. Let us consider the following examples about patterns: (1) in contrast to Listing 1, the pattern (n)-[r]-(v:Violation)-[type]-(s:Agreement)-(y) seeks nodes and relations connected with the violation of an "agreement" rather than a violation in "sexual" terms, and (2) if we compare the (John)-[under_TheEffects_of]->(Drugs)-[in]->(Stabbing) pattern with the (Alice)-[under_TheEffects_of]->(Drugs)-[in]->(bed) pattern, there is no ambiguity, due to the intrinsic nature of both patterns. However, the pattern (person)-[under_TheEffects_of]->(Drugs)-[]->() would serve to look for other patterns in the whole knowledge database where a person is under the influence of the drug, regardless of gender, name, or any other characteristic. In the latter pattern, the Alice and John scenarios are collected, differentiated, and explained by the system using charts and geometric figures that explain their differences. From the interpretation patterns, it is possible to extract vectors to compare scenarios and generate inferences.

3.4. Vector Creation

C6, explained in Table A1, is responsible for constructing the vectors. The construction consists of 2 phases. In the first phase, attribute calculations of nodes and relationships take place. The attributes are effect (E), link (V = adjacent side), and importance (I = opposite side). These attributes are used through an adaptation of the Pythagorean formula in a Euclidean space; this formula generates values between two connected nodes by using a right triangle formed between their centers and the circumference of each one in a 3D plane, as shown in Figure 13. In the second phase, by using Algorithm 1, it is possible to obtain the vector modules.
Algorithm 1 receives as input a list of node and relationship attributes, as well as a list of legal concepts obtained from the EGIs. The input lists of object attributes are processed to create output lists of vectors. The lists of vectors represent, in a unique way, the factual pictures of the scenarios in a case. C3 is responsible for applying Natural Language Processing (NLP) techniques, like the paradigmatic and syntagmatic processes, to the input list of concepts to detect which ones accept a replacement and which ones can be combined, respectively. The NLP techniques produce output lists of paradigmatic and syntagmatic relationships of concepts used as filters in searches for interpretation patterns. The output lists of vectors and concepts from Algorithm 1 are input parameters of Algorithm 2, which makes inferences.
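The following Python sketch illustrates the kind of computation just described, under the assumption that the effect acts as the hypotenuse of the right triangle whose sides are the link (adjacent) and the importance (opposite); it is a simplified reading of Algorithm 1, not the published algorithm itself.

import math

def vector_modules(pairs):
    """pairs: list of (link, importance) attribute values for connected nodes.
    Returns one list of modules describing a scenario's factual picture."""
    modules = []
    for link, importance in pairs:
        effect = math.hypot(link, importance)   # Pythagorean adaptation: E = sqrt(V^2 + I^2)
        modules.append(effect)
    return modules

# Two scenarios described by the (link, importance) values of their node pairs.
scenario_a = vector_modules([(0.9, 0.8), (0.4, 0.7), (0.6, 0.2)])
scenario_b = vector_modules([(0.8, 0.8), (0.5, 0.6), (0.1, 0.3)])
print(scenario_a, scenario_b)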
Algorithm 2: KG legal inference using Cosine, Jaccard, and Pearson functions
Part 1 of Figure 13 represents linked nodes in a scenario. Each sphere is a node. Between the nodes there are sets of vectors X, Y, and Z obtained from the links, that is, the relationships of the nodes. The vector X has its origin point at C, the center of node A, and its endpoint at C′, the center of node B. The vector module CC′ constitutes the assessment of the links between facts and evidence using the relationships between the images, for example, the links shown in Figure 4. The vector Z has its origin point at C, the center of node A, and its endpoint at C″, on the circumference of node B, which represents the diameter of the node. The vector module CC″ is built with CYPHER scripts to classify information about the importance levels obtained from interfaces like the one shown in Figure 5.
Part 2 of Figure 13 represents multiple variations of nodes and relationships in a scenario, reflecting dynamic changes in the perception of objects in the real world. For example, consider node A as a fact and node B as evidence. Between these nodes, a small increase in the distance makes vector X a longer link between the nodes. The "increase" of the module CC′ means a "decrease" in the legal connection that node B has over node A. A longer link between the nodes reduces the ability of node B to express the influence it has on node A; in other words, node B is less capable of expressing its influence on node A. A decrease in the size of node B implies a decrease in the module CC″ and means a reduction in the legal importance of node B within a scenario in which A also participates.

3.5. Jaccard Index

Jaccard is a statistical measure of the similarity between finite datasets, for example, between a set of objects D and D′. It is the division of the size of the intersection by the size of the union of the element sets. In this case, the vector modules provide a series of values used to create the sets to be compared. This process provides values between 0 and 1; the first expresses inequality between the vectors and the second total equality between them. This index is useful in queries to detect patterns of similar objects (nodes or relationships) within scenarios, for example, to obtain the granular similarity between sets of facts or evidence, or between attribute values that belong to different groups of scenarios, that is, to obtain similarities between attributes belonging to the same type of nodes but to different scenarios.
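The description above corresponds to the standard Jaccard index, which for two such sets can be written as:

J(D, D′) = |D ∩ D′| / |D ∪ D′|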

3.6. Cosine Similarity

Cosine Similarity is a measure of similarity between two vectors, in this case those that belong to a set of objects G and G′ other than zero. It calculates the angle between the vectors to obtain the cosine by multiplying the values of each vector, adding the results, and then dividing by the product of the square roots of the sums of each vector's squared values. A pair of vectors oriented at 90° to each other have a similarity of 0, meaning they are not equal, and a pair of diametrically opposite vectors have a similarity of −1, meaning they are opposite. On the other hand, if both vectors point in the same direction, they have a similarity of +1, meaning they are equal. The different values that the cosine of the angle acquires reflect a greater or lesser degree of similarity between the attributes of the relationships that the scenarios contain. This type of similarity is helpful in detecting assessment patterns, for example, to identify similarities between the angle produced by the link and the effect between a pair of nodes in the same scenario or in different ones. The angle a° produced by the assessment of the link (line connecting nodes from their centers) and the effect (line from the center of one node to the diameter of the other) is shown in Figure 13.
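In standard notation, the cosine similarity described above for two vectors G and G′ of length n is:

cos(θ) = ( Σ_{i=1..n} G_i · G′_i ) / ( √(Σ_{i=1..n} G_i²) · √(Σ_{i=1..n} (G′_i)²) )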

3.7. Pearson’s Correlation

The Pearson Correlation Coefficient is a statistical measure that detects a linear correlation between two variables A and B. It has a value between +1 and −1. A value of +1 is a total positive linear correlation, 0 means there is no linear correlation, and −1 is a total negative linear correlation. The Pearson similarity is the covariance of the values of the vector modules divided by the product of the standard deviation of the values of the first vector and the standard deviation of the values of the second vector. This coefficient is helpful in queries about the correlation of values, for example, to calculate the correlation between the link and the importance of two connected nodes in a scenario: the values of a vector V_1 represent the attributes contained in the link of a node A connected with node B, and a second vector V_2 represents the importance that node B has for node A. Thus, if A is a fact and B is a proof, then the level of correlation between link and importance is the degree to which the evidence can demonstrate that an event occurred, according to the interpretation of a judge.
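For concreteness, the three measures can be computed over small example vectors as in the following Python sketch; the vectors are invented and the code is illustrative only, not the system's implementation (which applies these measures to graph-derived vectors via CYPHER scripts and unsupervised routines).

import math

def jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n
    std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a) / n)
    std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b) / n)
    return cov / (std_a * std_b)

# Vector modules of two scenarios (hypothetical values).
v1 = [1.20, 0.81, 0.63]   # e.g., modules derived from scenario 1
v2 = [1.13, 0.78, 0.32]   # e.g., modules derived from scenario 2
print(jaccard(v1, v2), cosine(v1, v2), pearson(v1, v2))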

3.8. Dataset Example

Figure 14 shows a composite scenario of a real murder case. The structure shown in Figure A1 explains a piece of a set of objects of this case and describes the dataset involved. The following points summarize the example and the data structure.
  • The interrelated images of Figure 14 are taken to build the nodes according to the definition N = (n, t_o) and the relationships following the specification E = (e, t_o). This produces the structures shown in Table A2;
  • Then labels, indices, and values of the nodes are obtained from step 1, as shown in Table A3;
  • From step 2, information about the relationships, indices, and labels from the connection of each node is shown in Table A4;
  • Using the information from Table A2 about the descriptions, the artifact extracts the concepts "injure" and "disable" according to the representation K = {x : x ∈ P ∨ x ∈ H}, and an index is assigned to them, for example, 333 and 444, respectively;
  • Using Table A2 and Table A3 and the concepts obtained from point 4, the artifact distills 2 legal elements, shown in Equations (3) and (4), respectively. An index is assigned to each legal element, for example, 10,000 and 11,000, following the definition λ = (i, K, T):
    λ = (10000, {333}, {500, 600, 700, 800})    (3)
    λ = (11000, {444}, {000, 100, 200, 300, 400});    (4)
  • The artifact assigns an index to the case, for example, "999999", and then it creates the case as shown in Equation (5), following the definition C = (p, V, R) and according to the information obtained from the previous steps (a minimal code rendering of this construction is sketched after this list):
    C = (999999, {10000, 11000}, {000, 100, 200, 300, 400, 500, 600, 700, 800});    (5)
  • The artifact offers different queries to analyze the case. Depending on the query type, the Jaccard, Cosine, and Pearson formulas are executed individually or in combination. The differences obtained depend on the type of analysis and query the judge wants to execute. The artifact shows the recommendations as in Figure 8. At the end of this figure, the judge can select the laws and norms supplied by the system. The system automatically converts the selections into images and incorporates them into the working canvas so the judge can continue, if necessary, with the analysis of more information.
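A minimal Python rendering of the concept, legal element, and case construction just described, using the same illustrative indices, could look like the following sketch; it only assembles the tuples and is not the artifact's code.

# Concepts extracted from the node descriptions.
concepts = {"injure": 333, "disable": 444}

# Legal elements lambda = (i, K, T) distilled from the tables; "000" is written as 0.
legal_element_1 = (10000, {333}, {500, 600, 700, 800})
legal_element_2 = (11000, {444}, {0, 100, 200, 300, 400})

# Case C = (p, V, R) assembled from the previous steps.
case = (
    999999,                                      # p: context index ("Simple Homicide")
    {10000, 11000},                              # V: indices of the legal elements (scenarios)
    {0, 100, 200, 300, 400, 500, 600, 700, 800}, # R: relationship indices
)
print(case)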

4. Research Question and Hypothesis

The following research question arises: “Is it possible to capture and represent a judge’s interpretation and assessment processes of the legal file data and apply machine learning on said processes, to generate recommendations before the resolution of a case related to jurisprudence, doctrine, and norms in different legal contexts and get a positive behavioral response from the judge?”
Thus, a secondary question arises: "Can the system be used by a judge to support their decisions without being seen as a threat of decision-making bias [95]?"
The above research questions are intimately linked to the unsolved problem, raised long ago by Berman and Hafner in 1993 [3], of "how to represent teleological structures in CBR?" Teleology is the philosophical doctrine of final causes [51], which means, according to Berman and Hafner, identifying the cause, purpose, or final reason for applying a law or rule to regulate (punish) an act (fact) identified as a felony. Thus, the answer to the first question also provides a reasonably approximate answer to Berman and Hafner's question.
As judges have hierarchies in their roles and there are types of technical criteria for studying the behavioral response of a judge, the hypotheses are stated as follows: (1) H0: the hierarchy does not affect the acceptance of the system, versus Ha: the hierarchy does affect the acceptance of the system; and (2) H0: the criterion does not affect the acceptance of the system, versus Ha: the criterion does affect the acceptance of the system.

5. Material and Methods

SMEs defined the real-world legal situations used as test cases with RYEL, which represent criminal conflicts in a trial and made it possible to reduce the number of cases that would initially have been necessary to carry out the experiments. The multi-country scenarios used for laboratory testing were 83 from Costa Rica, 25 from Spain, and 5 from Argentina. In addition, a total of 17 experts in artificial intelligence from Costa Rica and Spain participated [2]. As the laws are different in every country, it was necessary to implement a norms equivalence mapping, that is, a set of implication rules of the form X1 → Y2, where X1 is the name of a norm in a specific country and is equivalent, but not equal, to Y2, which belongs to another country. In this way, there was no problem analyzing the same criminal factual picture (facts and evidence) in different countries without being strictly subject to the name of a norm.
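As an illustration of such an equivalence mapping, the following Python sketch encodes a few hypothetical implication rules X1 → Y2; the country and norm names are placeholders, not actual legislation or the mapping used in the study.

# Hypothetical implication rules: a norm name in one country -> an equivalent
# (but not identical) norm in another country.
NORM_EQUIVALENCE = {
    ("CountryA", "Norm X1"): ("CountryB", "Norm Y2"),
    ("CountryA", "Norm X3"): ("CountryC", "Norm Y4"),
}

def equivalent_norm(country, norm, target_country):
    """Return the equivalent norm in the target country, if a rule exists."""
    mapped = NORM_EQUIVALENCE.get((country, norm))
    if mapped and mapped[0] == target_country:
        return mapped[1]
    return None

print(equivalent_norm("CountryA", "Norm X1", "CountryB"))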

5.1. Participants

Two groups of research subjects participated in this study. The first group of judges was selected at random from courts, tribunals, and chambers in criminal justice. In addition, military-grade judges at the magistracy level were also included randomly in order to include data about military behavior when using this technology. The experiments in Panama [26], Spain, and Argentina involved 16 expert judges in the criminal field, while in Costa Rica there were 10 judges [2], a group that also included judges from Ecuador and Colombia. The second group was a sample of judges selected randomly at the national level in Costa Rica.

5.2. Design

This study is an adaptation of a 3-stage experiment. The first stage is to study the acceptance or denial behavior of the judge when using the system. The second stage compares the results obtained from the first stage with the second group of judges. The third stage consists of investigating whether the responses of the second group were affected by factors such as judges’ hierarchies (their roles) and the kind of evaluation criteria. The results of one stage are the inputs of the next.
In the first stage, User Experience (UX) [96] is used as a means to investigate the behavioral response of a judge in terms of accepting or rejecting the application of RYEL to analyze the merits of a case. Table 1 shows a synthesis of the primary operations used by the research subjects when manipulating KGs using images. The measurement parameters are based on the quality model called Software Quality Requirements and Evaluation (SQuaRE), defined in [97]. The characteristics of this model are adapted to investigate the degree to which a system satisfies the "stated" and "implied needs" of a human (stakeholders) and are used to measure the judge's behavioral response. The characteristics used from the model are "functional suitability", "usability", and "efficiency", linked to technical criteria issued by the judge. Table 2 shows a synthesis of the characteristics, parameters, and criteria considered in the experiment.
A quality matrix [98], or evaluation matrix, was created using Table 2 and applied to the judges at the end of the first stage. The matrix made it possible to obtain quantitative values for each of the criteria. The criteria were posed as questions and measured with a Likert scale [99]: 5–Totally agree, 4–Fairly agree, 3–Neither agree nor disagree, 2–Fairly disagree, 1–Totally disagree, and 0–Not started. A treatment is a legal case of homicide applied to each research subject (judge) using RYEL. The experimental unit consists of pairs of related nodes that form a KG describing the case graphically.
The second stage consists of obtaining objective evidence [97] to validate the results of the matrix against the criteria of another group of judges. For this, an additional random sample of 172 judges from Costa Rica was obtained. The total population of judges working in Costa Rica is 1390 [100]. The sample includes all hierarchies of judges and represents 12.37% of the active judges in the country. A questionnaire based on the criteria from Table 2 was applied to this sample. The sample focused on judges who do not necessarily know each other, have not used or seen the system before, and do not know or have not met the investigators conducting the research. In this way, it is possible to reduce the information bias [101] in this type of research. The judges received information on the system's method, operation, and characteristics through the questionnaires' descriptions and formulation. The criteria in the questionnaire were organized into 10 groups of questions, coded from 1-P to 10-P, as shown in Table 3, for statistical purposes. The questions were designed with Likert-scale answers, similar to the evaluation matrix explained before. It was necessary to coordinate with the Superior Council of the Judiciary in Costa Rica to contact the judges across the country.
The third stage uses the information gathered in the national sample in Costa Rica to perform a two-way Analysis of Variance (ANOVA) [101]. This analysis checks whether there are statistically significant differences supporting the hypotheses about whether factors such as hierarchy and legal criteria affect the judges' behavioral response of accepting or rejecting the system. The criteria have 10 levels, one per group of questions, from 1-P to 10-P. The hierarchy has 4 levels: (1) criminal courts; (2) tribunals; (3) chambers; and (4) other. The latter considers members of the superior council and interim positions of judges during designations.
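A two-way ANOVA of this kind can be run, for example, with statsmodels; the Python sketch below is illustrative only, with an assumed data layout and invented scores, not the study's actual dataset or analysis script.

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Assumed layout: one Likert response per judge, question group (criterion), and hierarchy.
df = pd.DataFrame({
    "score":     [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 5, 4],
    "criterion": ["1-P", "2-P", "3-P", "1-P", "2-P", "3-P"] * 2,
    "hierarchy": ["courts"] * 6 + ["tribunals"] * 6,
})

# Two-way ANOVA: does hierarchy or criterion affect the acceptance score?
model = ols("score ~ C(hierarchy) + C(criterion)", data=df).fit()
print(anova_lm(model, typ=2))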

5.3. Setting

Due to the circumstances caused by COVID-19 and the situation in which the judges found themselves, the experiments were conducted either onsite (in the judge's office) or remotely (in a virtual meeting via a shared desktop). In both cases, a Dell G5 laptop was the hardware used for experimentation. The laptop had a 15.6" full HD IPS display, 16 GB of RAM, and a 1 TB hard drive. After a legal and coordinated appointment with the judges and setting up the test environment, it was possible to proceed with the experiments.

5.4. Procedure

In the beginning, each research subject watched a 2.6-min video that explained the experiment, the operation of the system, and the function of the EGIs. Two test cases were used: case A, about a homicide committed with a dagger, and case B, about a homicide committed with a weapon. Four experts, two in law and two in artificial intelligence, helped design the test cases and verified them.
In the first stage, N = 26 research subjects from Colombia, Ecuador, Panama, Spain, Argentina, and Costa Rica were recruited and asked to draw in the system, using interrelated images, their interpretation and assessment of the facts and evidence contained in test case A according to their own perspective. At the end of the drawing, each subject had produced a KG. The KGs produced were compared with each other to determine differences. The group of N subjects was then divided into two groups of N/2 each. Test case B was given to the first group to obtain a new KG from each member. The second group, who never saw case B, was asked to observe and explain at least 3 KGs made by the first group, to check whether they were able to understand the interpretation and assessment contained in the KGs. Finally, the two groups ran legal analysis simulations with the system to determine whether it was possible to study the merits of the case. After cases A and B were used to explain the system, each member of both groups was allowed to use the artifact to enter new cases or vary the previous ones in order to test the system in depth. Then, the evaluation matrix was applied to each research subject to collect the UX each one had after using the system. Figure 15 shows real-life examples of the experiments, with research subjects using the system to analyze the merits of a homicide case.
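As a rough illustration of how the KGs produced by different judges for the same case can be compared to determine differences, the following sketch measures the overlap of their labelled relations; the node and relation names are invented, and the Jaccard overlap is only one plausible comparison measure, not necessarily the one used internally by RYEL.

```python
# Minimal sketch: comparing two judges' knowledge graphs of the same case
# by the overlap of their labelled relations (Jaccard index).
def jaccard(edges_a: set, edges_b: set) -> float:
    """Share of relations the two graphs have in common."""
    union = edges_a | edges_b
    return len(edges_a & edges_b) / len(union) if union else 1.0

kg_judge_1 = {("dagger", "usedFor", "stab"), ("stab", "causes", "death")}
kg_judge_2 = {("dagger", "usedFor", "stab"), ("victim", "suffers", "stab")}

print(f"Overlap: {jaccard(kg_judge_1, kg_judge_2):.2f}")
# Relations present in only one graph point to differences in interpretation.
print("Only judge 1:", kg_judge_1 - kg_judge_2)
print("Only judge 2:", kg_judge_2 - kg_judge_1)
```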
In the second stage, it was necessary to request legal authorization from the Superior Council of the Judiciary in Costa Rica in order to contact all the judges of Costa Rica and send them a questionnaire. The criteria of Table 2 were used to build the questionnaire, which contains 10 groups of questions.
In the third stage, the responses of the 172 sampled judges were collected from the questionnaires sent. The response data were processed and tabulated. Finally, a two-way ANOVA was applied to the data to determine whether the hierarchy or the criteria affect the behavioral response of the judges, that is, whether they accept or reject using the system to analyze the merits of a case.

6. Results and Discussion

The judge’s behavioral response was a tendency to accept the system, recognizing that it can help with the analysis of the merits of a case without violating judicial independence or the judge’s degree of discretion. Table 4 shows an extract of the evaluation matrix by country. Out of the six countries, four showed a behavioral tendency of 90% or more to accept the system, reaching almost 100% in some cases. Colombia and Ecuador presented results very close to 90% acceptance because some of the legal cases used for experimentation did not contain the names of regulations from those countries, and the judges from those countries wanted to evaluate the names related to their own legislation. Despite this, the acceptance of the computational method implemented by RYEL was very positive in all the countries subject to experimentation. The explanation of the system required only two cases, but each judge entered five to six real-life cases when allowed to test the system; with 26 judges testing the system, at least 130 case variations were used in total. In addition, the 113 cases from different countries used to build the system must also be added, for a total of approximately 243 cases from various countries used to create and test the system. It is necessary to remember that the SME supplied representative cases of the domain of discourse; therefore, the high amount of data did not present an obstacle and did not pose the risk of bias that would typically occur with another approach.
The radar graph in Figure 16a compares the system evaluation results by hierarchy. The characteristics described at each vertex reveal that the distances between criminal courts, military criminal courts, criminal magistrates, and superior courts are very close to each other, with high values. The average acceptability per hierarchy on the radar places values very close to 100% acceptance. The provincial courts had a slightly lower acceptance rate because some judges were unable to complete the experiment, as they had to attend trials and it was not possible to reschedule; this is reflected in the usability and efficiency vertices, whose values are below average. Nevertheless, the functionality vertex for the provincial courts has values very close to 90%, which means that this hierarchy accepts the system well despite the other low values.
Figure 16b shows the acceptability trend of the system among the judges according to hierarchical order. This trend remains unknown under the ordinary conditions of legal review processes, but it could be detected during the system’s evaluation. For example, it was possible to find that when the higher-ranking judges needed to review the work done by the lower-ranking ones, it was easy for them to graphically arrange the teleological structures of the facts and evidence using KGs through the EGIs to review the analysis made by the lower-ranking judges. Furthermore, it was possible to reveal that lower-hierarchy judges tended to accept the system for the support they received from the EGIs to perform the interpretation and assessment of facts and evidence as part of the analysis of the merits of a case. The higher-ranking judges, in turn, showed more acceptance of the system regarding the support they received from the EGIs to access the teleological and semantic approach created by the lower-hierarchy judges.
The information collected up to this point answers the first research question: it reveals that the system was able to capture the interpretation and assessment of facts and evidence from the perspective of a judge. Regarding the second question, the results reveal that the judges’ behavioral response was very positive, with a tendency to accept the system to analyze a case without it representing a risk of bias or a threat to decision making.
The previous results were validated against the evaluations of other judges; this validation required applying questionnaires to all the active judges of Costa Rica (1390), and the random sample of 172 showed a mean of 4, a mode of 5 (on the designed Likert scale), and a standard deviation of 1.2988. These data are in Table 3. This means that the judges tend to accept the approach, operation, and framework implemented by RYEL.
For the statistical verification of the sample and the results collected at the national level, Figure 17a shows the 1-sample Z test, which had a statistical power of 92%, a significant percentage for samples and experiments [101]. The statistical significance level is α = 0.06, which yields 94% confidence intervals in the statistical tests. Figure 17b shows that the judges’ responses comply with the normality assumption, with a p-value = 0.213 > α = 0.06, so the normality hypothesis is accepted. The Anderson–Darling (AD) statistic is low (0.487), which indicates a good fit of the data to the distribution. The Levene statistic is 0.800 > α = 0.06, which means the hypothesis of equality of variances is accepted when working with the hierarchy and legal criteria of the judges in the questionnaire answers.
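A minimal sketch of the three checks reported above (Anderson–Darling normality, Levene’s test for equality of variances, and a one-sample Z test), using scipy and statsmodels on simulated placeholder data rather than the questionnaire responses, is shown below.

```python
# Sketch of the statistical checks reported above, on placeholder data
# (the arrays below are simulated, NOT the questionnaire responses).
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import normal_ad
from statsmodels.stats.weightstats import ztest

rng = np.random.default_rng(1)
alpha = 0.06
responses = rng.normal(loc=4.0, scale=1.3, size=172)      # pooled responses

# Anderson-Darling normality test (statistic + p-value).
ad_stat, ad_p = normal_ad(responses)
print(f"AD = {ad_stat:.3f}, p = {ad_p:.3f}, normal: {ad_p > alpha}")

# Levene's test for equality of variances across hierarchy groups.
groups = np.array_split(responses, 4)                      # stand-in for 4 hierarchies
lev_stat, lev_p = stats.levene(*groups)
print(f"Levene p = {lev_p:.3f}, equal variances: {lev_p > alpha}")

# One-sample Z test against a reference mean (e.g., the scale midpoint).
z_stat, z_p = ztest(responses, value=3.0)
print(f"Z = {z_stat:.3f}, p = {z_p:.3f}")
```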
Given the results of the previous statistical analysis, and the need to determine whether the results obtained from the UX and the questionnaires were affected by the hierarchical trend shown in Figure 16b or by the criteria, a two-way ANOVA was applied. The results are in Table 5, where the hierarchy factor has a p-value = 0.148 > α = 0.06, which means that the null hypothesis that the hierarchy does not affect the judge’s response is accepted: there is sufficient statistical evidence to state with 94% confidence that the judges’ responses are not affected by the hierarchy. On the other hand, the criterion factor in the same table shows a p-value = 0.000 < α = 0.06, which means that the null hypothesis that the criterion does not affect the judge’s response is rejected: there is significant statistical evidence, with 94% confidence, that the criteria do influence the judges’ responses. These results reveal that the judges’ behavioral tendency to accept the system is due to the criteria discussed and analyzed, not to the position a judge holds. It also means that the trend found in Figure 16b has a 94% statistical probability of being due to the actual operation of the system and not to the position that the judge holds.
Concerning the above, Figure 18 shows the residuals from the analysis of variance. The Y-axis represents the residual values, and the X-axis represents the order of the observations. There are no patterns or fixed trends; therefore, the data obtained from the responses on the criteria are independent, meaning there is no codependency in the data that could affect the results.
Figure 19 shows the main effects in the responses to the legal criteria. The Y-axis is the mean of the criteria; the X-axis represents the criteria. Criterion 2, or 2-P, has the lowest main effect of all because this group of questions asked whether a legal case must always be resolved in the same way as a previous case with more or less similar characteristics. Statistically, there is enough evidence to affirm with 94% confidence that the judges reject the idea of receiving help that implies always solving a case exactly as another similar one was solved. The 2-P group of criteria in Figure 19, compared with Table 3, which reports a mean of 2 and a mode of 1 for the same criterion, indicates that the judges indeed do not approve of the 2-P criterion. It is necessary to remember that RYEL uses the CBR stages to exchange and organize data, that is, as a guide for the information, and does not follow the traditional implementation of strictly reusing the solutions of past cases to solve current ones. This makes RYEL’s contribution to the domain of discourse evident.
Criterion 3, or 3-P, in Figure 19 is the second one with low values and refers to whether the judges believe that AI could help them with the analysis of a case. This point deserves special attention: analyzing the extended answers the judges gave to this group of questions makes it possible to understand that the judges associate AI with the automation and repetition of legal solutions applied indiscriminately to each case, without receiving any explanation and without being in control of the machine. This, of course, is not how RYEL works.
Figure 20 shows the cumulative acceptance percentages grouped by the SQuaRE-based design parameters. The lowest percentage is learning, because not all people have the same abilities to learn. The highest values are attractiveness, understandability, and suitability, which translates into a motivational design consistent with the needs of the research subjects and the legal domain. On average, the rest of the cumulative acceptance percentages of the system are quite high, which means that the domain experts rate the system, according to the SQuaRE parameters, as helpful for analyzing the merits of a case.
Table 6 shows the cross-check of the survey-based statistical analysis between the 4-P and 10-P criteria. The first criterion asks whether, in order to analyze the merits of a case, it is essential to carry out an interpretation and assessment of facts and evidence. The second asks whether RYEL is novel and useful as a decision support tool. The results confirm the following: (a) No research subject marked option 1 for either criterion; (b) only four research subjects marked options 2 and 3 for both criteria, which is only 2.32% of the research subjects and means that an insignificant number of them disagree with the legal analysis approach and with the tool; and (c) most research subjects who marked options 4 and 5 for criterion 10-P also marked options 4 and 5 for criterion 4-P, which means that most of them understand and accept the legal analysis approach and the operation of RYEL.
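The cross-check in Table 6 is essentially a contingency table between the two criteria; a minimal sketch with pandas, using invented response values rather than the survey data, would be:

```python
# Sketch of the 4-P vs. 10-P cross-check as a contingency table.
# The response lists are placeholders, NOT the survey data.
import pandas as pd

answers = pd.DataFrame({
    "crit_4P":  [5, 4, 5, 3, 4, 5, 2, 5],   # importance of interpretation/assessment
    "crit_10P": [5, 4, 4, 3, 5, 5, 2, 4],   # RYEL is novel/useful for decision support
})

# Rows = 4-P scores, columns = 10-P scores, cells = number of judges.
print(pd.crosstab(answers["crit_4P"], answers["crit_10P"]))
```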
All the results obtained show the following:
  • The system was able to capture the higher-order thinking of a judge to assist with analyzing a case using KGs built from images;
  • The system is a novel implementation of machine learning in the legal domain;
  • It was possible to explore and find shortcomings in the behavioral response and position of a judge in the face of this type of technology.

6.1. Comparison with Similar Approaches

Comparing our research with works that take similar approaches extends the results. Attention is on expert systems and case-based systems.
Table A6 shows the 23 most relevant expert systems from 1987 to the present that are related to our research. The table shows the key elements, computational technique, and approach. The most important findings when comparing our system with the systems in this table are: (1) No system works with dynamic KGs; (2) none uses graphical techniques to elicit the legal meta-knowledge of a person; (3) none works with higher-order thinking; (4) none allows an analysis of the merits of a case; (5) none focuses on the judge; and (6) none can be extended to other domains of knowledge.
Table A5 shows the primary investigations focused on CBR from 1986 to the present that are related to our approach. This table shows the key elements, case types, and approaches. The main findings when comparing these investigations with ours are: (1) They do not contemplate multiple and complex scenarios within the cases; (2) none considers data processing from the perspective of a human; (3) they focus only on lawyers or prosecutors, not on judges; and (4) none processes teleological, semantic, ontological, and hermeneutical information to support decision making.

6.2. Functional Limitations

The system works with data from the factual picture, the direction of the legal process, and the assessment of the evidence. Information about criteria of the judge that are not part of the analysis of facts and evidence, for example, the criteria a judge may have on the management and administration of an office, the control of dates to avoid document delays, and office procedures, is not considered in this research. A judge can, however, consider that a case has prescribed (the statute of limitations has expired) and request that the documents be archived; data about this type of request are not part of the system.

6.3. Applicability

A resolution has two fundamental aspects: “form” and “substance”. The form is the way a resolution is presented and written, complying with the requirements and formalities that the law demands, for example, a heading, covers, and page numbers. The substance refers to the in-depth study of the matter in conflict in order to issue a resolution based on substantive law, that is, the set of obligations and rights imposed by law. The applicability of this work refers to the substance of a case and not to its form.

6.4. Implications

The above results have particular implications in both the computational and legal domains. RYEL could mark a turning point for systems with a legal approach because it allows an evolution from predictive systems to systems with explanatory and analytical techniques. Some of the most relevant implications in the computational field are:
  • Due to explainability techniques, “black box” problems in machine learning could be overcome by methods like IA-AI when dealing with human perception;
  • Algorithmic bias, and bias related to data and processes, can be reduced or nullified because third parties do not manipulate the cases and analysis processes. Instead, the judge enters real-life cases and directs the analysis with the options provided by the system; the system explores the relationships and objects the judge creates, explains its inferences, and offers the judge options for making decisions;
  • Judges from other hierarchies can review the sentences using RYEL in the different legal stages. This review could cause a reduction or elimination of bias related to a wrong perception, incorrect interpretation, and an erroneous assessment.
Some of the most relevant implications in the legal field are:
  • RYEL shows the potential to be a disruptive technology in the domain of discourse, which could cause users to resist the change;
  • The system allows experts to analyze the scenarios from different perspectives and reach agreements; this generates a unification of legal criteria and decreases legal uncertainty;
  • The system paves the way in the jurisdictional area by allowing a computational mechanism to participate in a judge’s exclusive functions when decision making takes place.

7. Conclusions

The use of the IA-AI method showed the ability to capture the higher-order thinking of a judge. The behavioral response of the judges was quite positive in accepting the use of this technology to analyze the merits of a case. This research represents a paradigm shift in the way a judge thinks and works for two main reasons:
  • Legal files are traditionally textual and therefore processed as text; experimentation with the system was carried out exclusively with interrelated images and the IA-AI method, which makes a substantial difference;
  • No judge who used the system and reported a UX saw a threat of decision-making bias, because the system did not impose solutions but instead allowed the judge to dissect a case and then analyze how other judges had perceived the facts and evidence to formulate their conclusions. Moreover, the system operates without breaking the rules about the “degree of discretion” and “judicial independence” in the domain of discourse.
The results obtained from the experimentation and the technological characteristics of RYEL reveal a new spectrum of research in which the interaction of technology and human behavior implies new techniques to capture the perception of a human. Therefore, this research could open doors to other domains in which this technology can be used to study the behavioral response of a subject, wherever the interpretation and assessment of a person must be the foundation for the development of the area under discussion. At present, we have found no investigations or experimental studies with the same approach as ours.

Author Contributions

Conceptualization, L.R.R.O.; methodology, L.R.R.O.; software, L.R.R.O.; validation, J.J.V., A.C., Á.B., and J.M.C.; investigation, L.R.R.O.; data curation, L.R.R.O.; writing—original draft preparation, L.R.R.O.; writing—review and editing, J.M.C. and L.R.R.O.; visualization, J.M.C. and L.R.R.O.; supervision, J.J.V., A.C., Á.B., and J.M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because the experiments did not involve interventions on humans, only communication and interaction.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study, Oficio N. 9073-2020.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to acknowledge the School of Computer Science and Informatics (ECCI) and the Postgraduate Studies System (SEP), both from the University of Costa Rica (UCR), Costa Rica; the BISITE Research Group and the Faculty of Law, both from the University of Salamanca (USAL), Spain; and the Edgar Cervantes Villalta School of Judiciary of Costa Rica. Special thanks to all the judges and AI experts who participated in this investigation from Mexico, Costa Rica, Spain, Panama, Argentina, and other countries.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IA-AI    Interpretation-Assessment/Assessment-Interpretation
MOE      Mixture of Experts
XAI      Explainable Artificial Intelligence
SN       Semantic Networks
SME      Subject-Matter Experts
AI       Artificial Intelligence
EGI      Explanatory Graphical Interfaces
CBR      Case-Based Reasoning
HCI      Human-Computer Interaction
KG       Knowledge Graphs
PG       Property Graph
KI       Knowledge Integration
BN       Bayesian Networks
FR       Fragmented Reasoning
UX       User Experience
SQuaRE   Software Quality Requirements and Evaluation
ANOVA    Analysis of Variance
AD       Anderson-Darling

Appendix A

Table A1. RYEL system: Components and technology.
# | Components | Technology, Formulas or Concepts | Orientation and Use
1 | A visual component that allows dynamic work with images representing objects. | D3.js, HTML, JavaScript, Jquery, CSS. | HCI 1—graphics to elicit knowledge.
2 | Component for managing the KG modeling. | Neo4j database. | Data model.
3 | According to the context, the component connects the images with the graph model and works with similar words. | NLP 2 using NEO4J scripts. | HCI—graphics to elicit knowledge and a semi-supervised method to detect word similarities.
4 | Component that extracts image patterns from the KG and provides query options. | HTML, CSS, AJAX, CYPHER. | Queries and analysis options management.
5 | Component to transform artifact inputs (images and relationships) into nodes and arcs. | Jquery, JavaScript, HTML. | HCI—graphics to elicit knowledge.
6 | Component that adapts the Pythagorean theorem to Euclidean space and creates and modifies the attributes of nodes and relationships. | Adaptation of the Pythagorean formula using JavaScript. | Attribute calculation.
7 | Component to manage searches for node and relationship interpretation patterns. | CYPHER, HTML, PYTHON. | Performs pattern search operations.
8 | Component for calculating the similarity of attributes and interpretation patterns. | Adaptation of the Cosine, Jaccard, and Pearson equations to the attributes, calculated using PYTHON and CYPHER. | Attribute similarity operations.
9 | Graphic component for visualizing interpretation patterns using pie and bar charts. | HTML, CSS, D3.js, JavaScript, Jquery, PYTHON, CYPHER. | HCI—graphics to show interpretation patterns.
10 | Component that converts found patterns and similar attributes into a graphic explanation using geometric figures. | D3.js, HTML, CYPHER, PYTHON. | HCI—graphics to explain the interpretation found.
1 Human-Computer Interface; 2 Natural Language Processing.
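To make components 6 and 8 of Table A1 more concrete, the following is an illustrative Python sketch, not the system’s actual code: a Pythagorean (Euclidean) distance between two node positions on the drawing canvas and a cosine similarity between their attribute vectors. The positions and attribute values are invented for the example.

```python
# Illustrative sketch of Table A1 components 6 and 8: Euclidean (Pythagorean)
# distance between two nodes placed on the canvas, and cosine similarity
# between their attribute vectors. Attribute values are made up for the example.
import math

def euclidean(p: tuple, q: tuple) -> float:
    """Pythagorean distance between two (x, y) positions on the drawing canvas."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def cosine(u: list, v: list) -> float:
    """Cosine similarity between two numeric attribute vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

node_a = {"pos": (120, 80), "attrs": [3, 1, 4]}   # e.g., relevance, precedence, level
node_b = {"pos": (200, 140), "attrs": [3, 2, 4]}

print("distance  :", euclidean(node_a["pos"], node_b["pos"]))
print("similarity:", cosine(node_a["attrs"], node_b["attrs"]))
```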
Table A2. Data structures about nodes and relationships in Figure 14.
Structure | Simplified Example of the Structure
Node | ⟨shoot, 00, 333, injure, precedence, level⟩
Relationship | ⟨ , Null, Null, Null, description, link, relevance⟩
Node | ⟨drugs, 10, 333, injure, precedence, level⟩
Relationship | ⟨induceTo, 000, 10, 00, description, link, relevance⟩
Node | ⟨Psychological problem, 20, 333, injure, precedence, level⟩
Relationship | ⟨causesThe, 100, 20, 00, description, link, relevance⟩
Relationship | ⟨consumes, 200, 20, 10, description, link, relevance⟩
Relationship | ⟨searchOne, 300, 20, 30, description, link, relevance⟩
Node | ⟨45 handgun, 30, 333, injure, precedence, level⟩
Relationship | ⟨usedFor, 400, 30, 00, description, link, relevance⟩
Node | ⟨hit, 40, 444, disable, precedence, level⟩
Relationship | ⟨priorTo, 500, 40, 00, description, link, relevance⟩
Relationship | ⟨with, 600, 40, 50, description, link, relevance⟩
Relationship | ⟨aNumberOf, 700, 40, 60, description, link, relevance⟩
Relationship | ⟨recordedIn, 800, 40, 70, description, link, relevance⟩
Node | ⟨baseball bat, 50, 444, disable, precedence, level⟩
Relationship | ⟨ , Null, Null, Null, description, link, relevance⟩
Node | ⟨12 times, 60, 444, disable, precedence, level⟩
Relationship | ⟨ , Null, Null, Null, description, link, relevance⟩
Node | ⟨Security video, 70, 444, disable, precedence, level⟩
Relationship | ⟨ , Null, Null, Null, description, link, relevance⟩
Table A3. The adjacency of the elements of a case in Figure 14.
Adjacency Matrix
Labels | Indices | 00 10 20 30 40 50 60 70
shoot | 00 | 0 0 0 0 0 0 0 0
drug | 10 | 1 0 0 0 0 0 0 0
psychological problem | 20 | 1 1 0 1 0 0 0 0
45 handgun | 30 | 1 0 0 0 0 0 0 0
hit | 40 | 1 0 0 0 0 1 1 1
baseball bat | 50 | 0 0 0 0 0 0 0 0
12 times | 60 | 0 0 0 0 0 0 0 0
video security | 70 | 0 0 0 0 0 0 0 0
Table A4. Relationships box from Figure 14.
Relations List
Labels | Indices | Relations
shoot | 00 | Null
drug | 10 | (10,00)
psychological problem | 20 | (20,00), (20,10), (20,30)
45 handgun | 30 | (30,00)
hit | 40 | (40,00), (40,50), (40,60), (40,70)
baseball bat | 50 | Null
12 times | 60 | Null
video security | 70 | Null
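To show how the structures in Tables A2–A4 relate to each other, the following illustrative Python sketch folds the relationships of Table A2 into the adjacency matrix of Table A3 and the relations list of Table A4; the tuple layout is simplified, and only the indices and labels shown in the tables are used.

```python
# Sketch: deriving the adjacency matrix (Table A3) and relations list (Table A4)
# from relationship tuples like those in Table A2. Simplified for illustration.
labels = {"00": "shoot", "10": "drug", "20": "psychological problem",
          "30": "45 handgun", "40": "hit", "50": "baseball bat",
          "60": "12 times", "70": "video security"}

# (source index, target index) pairs taken from the relationships in Table A2.
relations = [("10", "00"), ("20", "00"), ("20", "10"), ("20", "30"),
             ("30", "00"), ("40", "00"), ("40", "50"), ("40", "60"), ("40", "70")]

indices = sorted(labels)
adjacency = {i: {j: 0 for j in indices} for i in indices}
relations_list = {i: [] for i in indices}
for src, dst in relations:
    adjacency[src][dst] = 1
    relations_list[src].append((src, dst))

for i in indices:
    row = " ".join(str(adjacency[i][j]) for j in indices)
    print(f"{labels[i]:<22} {i}  {row}  {relations_list[i] or 'Null'}")
```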
Figure A1. Representation of a case structure with ordered pairs, ordered triples, and datasets used to process the information obtained graphically from images in the KG. The vectors are extracted through these structures and used in similarity functions in component 8 of Table A1. Observation: the t structure is similar to r, so the details of t are omitted for clarity of the diagram.
Table A5. Case-based reasoning related work summary.
Articles | Key Elements | Cases 1 | Focus on
1986 [38] | The system JUDGE uses a case-based model of felonies where justifications of actions, or the lack of them, are used as a metric to determine whether a situation is favorable or not; the model works by entering actions and comparing them with others stored previously to obtain differences (effects). | A & M | Lawyer
1987 [102] | A legal citations model created using Case-Based Knowledge (CBK); it consists of analyzing the characteristics of what they call “phasic states of a legal case” (facts) and their “dimensions” (classifications) to help lawyers litigate. The citations use the Blue Book and HYPO systems; the output is a citation network used to justify legal disputes. | C | Lawyer
1997 [103] | Divorce Property Separation Act (ASHSD) is a tool based on CBR and RBR to query cases about ways to separate assets. A case is a list of attributes processed in three stages: (1) filtering attributes, (2) assigning a similarity between them, and (3) assigning a weight to them. The rules are if-then statements used for attribute classification. | AS | Lawyer
2003 [104] | The system CATO was created as a learning system to teach legal argumentation to beginning law students. It uses 14 cases having legal decisions and 2 cases used as evidence. There are 26 factors (facts) associated with 5 types of numerical values used to classify the factors. | USTA | Law student
2003 [105] | AlphaTemis is a free-text query system on attributes of legal cases. The user can assign a weight to each attribute, a discrete number used to query those that are the same. | SCB | Lawyer, Prosecutor
2003 [106] | The investigation deals with the organization of legal arguments, obtaining differences between them, and evaluating whether a previous case is essential for the current one; for this, it uses a hierarchy of factors (facts) to measure importance. Finally, it applies BUC (Best-Untrumped Cases) to identify which factors the cases in the database and the current legal problem have in common. | USTA | Law student
2011 [30] | This research proposes a way in which one case can be compared to another using propositions and legal rules based on legal information about what they call “value judgments” and “legal concepts”, where the judges handle values of specific factual scenarios according to what a proponent (plaintiff or appellant) presents in the arguments. An opponent (defendant) refutes that argument, and finally, the proponent makes a rebuttal. | CadyDom | Lawyer
1 A & M = Assault and Murder, C = Citations, AS = Assets, USTA = US Trade Agreement Law, JD = Judicial Decisions, SCB = “Súmulas” of Court of Brazil, CadyDom = Fictional example oral argument based on Cady vs. Dombrowski case by the U.S. Supreme Court, N/D = Not Defined.
Table A6. Expert systems related work summary.
Articles | Key Elements | Technique 1 | Focus on
1987 [107] | The DEFAULT system uses hierarchical ordering of predicates (general to specific) for consulting information about legal cases related to the eviction of indigents. | PreL | Lawyer
1987 [108] | Uses predicates (PROLOG) to define norms of legal cases and make queries about legal rules. It uses a number (“ranking”) to indicate the importance of a norm. | PreL | Lawyer
1991 [109] | It explains the potential and advantages of working with legal information graphically, because the law and arguments contain complex relationship schemes and graphs can help identify them. The use of Toulmin charts makes it possible to express arguments and helps the user define value judgments on the legal information. | TC | Lawyer
1991 [110] | Loge-expert is a system consisting of process flow charts with hypertext about the rules of the civil code in Canada, used to consult multiple legal documents regarding a given law. | Ht charts | Layman
1993 [111] | Uses the LES system, which uses Horn clauses to find similarities between legal requirements and legal norms. | ProL | Lawyer
1993 [86] | Use of predicates called “slots” in a system called CIGOL for consulting facts in legal cases. | PreL | Lawyer
1999 [112] | Retrieves texts of legal cases from the Attorney General of the Republic of Portugal using Dynamic Logic, an extension of Modal Logic, through consultations using rules and predicates that describe events (facts) of legal cases. | DL and PreL | Lawyer
1999 [29] | SMILE is a system that searches for words in sentences of legal texts and searches for the rules associated with those words. It uses a decision tree (ID3 algorithm) and a legal language repository to generate the tree-like word structures and related rules. | DT | Lawyers
2003 [104] | It uses predicates to explain the concept of “Theoretical Construction”, which consists of facts related to legal rules, values, and preferences. | PreL | Lawyer
2005 [113] | AGATHA is a system that searches for case precedents to explain how things happened. Cases are decision trees and use the A* algorithm to find the least-cost path between a source node and a destination node. The lowest-cost path is the one selected. | DT | Lawyer
2005 [65] | Uses a semantic web with hyperlinks to legal documents on the Dutch Tax and Customs Law (DTCA) to query related legal documents. | Ht | Lawyer
2005 [114] | Uses propositional language to describe legal arguments and requests from plaintiffs and advocates. | ProL | Lawyer
2009 [115] | It supports a litigant using predicates (PROLOG) to define and query legal situations from the House of the Lords in Quebec, Canada. | PreL | Lawyer
2009 [116] | ArguGuide is software showing the text structure of a legal case and the legal topic. It shows a content map whose elements are legal text and checklists. | CM | Lawyer
2009 [117] | Use of the Carneades system to describe cases of German Family Law. The arguments are lists, and each tuple is a statement. | ProL | Lawyer
2009 [118] | This research is about displaying arguments using Toulmin charts, which are flow charts of the arguments supplemented in this case with hypertext; a chart shows the text of the case, so a law student or lawyer can manually manipulate and segment the text that needs to be used as an argument. | TC | Lawyer, Students
2013 [119] | Uses variables of location and time of people involved in a crime and calculates the probability that a person is a murderer. It makes analogous use of the “Island Problem”. | BN | Lawyer, Prosecutor
2014 [74] | Ontology building using rules and predicates for consulting legal case documents. | PreL | Lawyer
2017 [120] | Queries using question-based text for searches and answers. It tries to get a question about a legal context and returns general and related information. | NLP | Lawyer
2017 [121] | Argumentation mining using pre-classified legal words with a KNS classifier. The input text is about facts, and the output is a text about the general topic of the arguments. | NLP | Lawyer
2017 [41] | Pre-existing mapping of arguments and rules to legal cases. It tries to demonstrate that the legislation and the precedents are sources of the arguments. | FM | Lawyer
2017 [121] | Prometea is a system for issuing a “legal opinion” on the legal cases that the prosecution has. This opinion consists of indicating which cases are the most relevant and therefore must be processed first. Relevance is defined according to the “vulnerability” of the people described in the case; for example, it tries to find words or information about the elderly, children, women, or people with disabilities. It only considers cases whose legal complexity is simple. | N/D | Prosecutor
2018 [122] | It uses document classification techniques (TF-IDF) to process a set of legal cases on labor matters and uses the K-NN algorithm to obtain a ranking of the trend of opinions of judges in the Brazilian courts related to those specific cases. | KNN & TF-IDF | Lawyer
1 NLP = Natural Language Processing, TC = Toulmin Chart, BN = Bayesian Networks, FM = Feature Mapping, CM = Content Mapping, ProL = Proposition Logic, PreL = Predicate Logic, DL = Dynamic Logic, Ht = Hypertext, DT = Decision Tree, KNN & TF-IDF = K-Nearest Neighbors & Term Frequency–Inverse Document Frequency, N/D = Not Defined.

References

  1. Evans, J.; Foster, J. Metaknowledge. Science 2011, 331, 721–725. [Google Scholar] [CrossRef]
  2. Rodríguez, L.; Vargas, J.; Camacho, A.; Burgos, A.; Corchado, J. RYEL system: A novel method for capturing and represent knowledge in a legal domain using Explainable Artificial Intelligence (XAI) and Granular Computing (GrC). In Interpretable Artificial Intelligence: A perspective of Granular Computing; Springer: Berlin/Heidelberg, Germany, 2020; pp. 369–399. [Google Scholar]
  3. Berman, D.; Hafner, C. Representing teleological structure in case-based legal reasoning: The missing link. In Proceedings of the ICAIL ’93 4th International Conference on Artificial Intelligence and Law, Amsterdam, The Netherlands, 15–18 June 1993; pp. 50–59. [Google Scholar]
  4. Tennyson, R.; Breuer, K. Cognitive-Based design guidelines for using video and computer technology in course development. In Video in Higher Education; Kogan Page: London, UK, 1984; pp. 26–63. [Google Scholar]
  5. Tennyson, R.; Rasch, M. Linking cognitive learning theory to instructional prescriptions. Instr. Sci. 1988, 17, 369–385. [Google Scholar] [CrossRef]
  6. Tennyson, R.; Park, O. Artificial intelligence and computer-based learning. In Instructional Technology: Foundations; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1987; pp. 319–342. [Google Scholar]
  7. Bruner, J. Toward a Theory of Instruction; Belknap Press of Harvard University: London, UK, 1966; ISBN 978-0-674-89701-4. [Google Scholar]
  8. Peter, L.; Donald, N. Human Information Processing: An Introduction to Psychology; Academic Press, Inc.: Cambridge, MA, USA, 1977; ISBN 0124509509. [Google Scholar]
  9. Plant, R.; Gamble, R. Methodologies for the development of knowledge-based systems, 1982–2002. Knowl. Eng. Rev. 2003, 18, 47–81. [Google Scholar] [CrossRef]
  10. Baldwin, C. How to improve communication with co-workers and subject matter expe. In Proceedings of the Professional Communication Conference The New Face of Technical Communication: People, Processes, Products, Philadelphia, PA, USA, 5–8 October 1993; pp. 403–407. [Google Scholar]
  11. Stayanchi, J. Higher Order Thinking through Bloom’s Taxonomy. Humanit. Rev. 2017, 22, 117–124. [Google Scholar]
  12. Guitton, M. The immersive impact of meta-media in a virtual world. Comput. Hum. Behav. 2012, 28, 450–455. [Google Scholar] [CrossRef]
  13. Leonardo, G. La Definición del Concepto de Percepción en Psicología. Rev. Estud. Soc. 2004, 18, 89–96. [Google Scholar] [CrossRef] [Green Version]
  14. Atienza, M. Las Razones del Derecho Teorías de la Argumentación jurídica; Universidad Autónoma de México: México, Mexico, 2005; ISBN 978-970-32-0364-2. [Google Scholar]
  15. Romero, J. Notas sobre la Interpretación Jurídica. Rev. Cienc. Jurídicas 2014, 133, 79–102. [Google Scholar]
  16. Legislativa, A. Código Procesal Penal (Ley N. 7594 de 10 de abril de 1996); Tribunal Supremo de Elecciones: San José, Costa Rica, 1996. [Google Scholar]
  17. Legislativa, A. Código Penal (Ley N. 4573 de 15 de noviembre de 1970); Tribunal Supremo de Elecciones: San José, Costa Rica, 1970. [Google Scholar]
  18. Perroni, E. Play: Psychoanalytic Perspectives, Survival and Human Development; Routledge: London, UK, 2013; ISBN 9780415682084. [Google Scholar]
  19. McCarthy, J.; Minsky, M.; Shannon, C. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. AI Magazine, 31 August 1955; 1–17. [Google Scholar]
  20. Rochat, P. Layers of awareness in development. Dev. Rev. 2015, 38, 122–145. [Google Scholar] [CrossRef]
  21. Rapp, B. Handbook of Cognitive Neuropsychology What Deficits Reveal About the Human Mind; Psychology Press: London, UK, 2001; ISBN 9781841690445. [Google Scholar]
  22. Hong, C.; Batal, I.; Hauskrecht, M. A Mixtures-of-Experts Framework for Multi-Label Classification. arXiv 2014, arXiv:1409.4698. [Google Scholar]
  23. Masoudnia, S.; Ebrahimpour, R. Mixture of experts: A literature survey. Artif. Intell. Rev. 2014, 42, 275–293. [Google Scholar] [CrossRef]
  24. Rodríguez, L.; Osegueda, A. Business intelligence model to support a judge’s decision making about legal situations. In Proceedings of the IEEE 36th Central American and Panama Convention (CONCAPAN XXXVI), San Jose, Costa Rica, 9–11 November 2016; pp. 1–5. [Google Scholar]
  25. Rodríguez, L.R. Jurisdictional normalization based on artificial intelligence models. In Proceedings of the XX Iberoamerican Congress of Law and Informatics (FIADI), Salamanca, Spain, 19–21 October 2016; pp. 1–16. [Google Scholar]
  26. Rodríguez, L.R. Jurisdictional Normalization of the Administration of Justice for Magistrates, Judges Using Artificial Intelligence Methods for Legal Guidance Systems; II Central American and Caribbean Congress on Family Law; Central American and Caribbean Congress: Panamá, Panama, 2016; pp. 1–10. [Google Scholar]
  27. Rodríguez, L.R. Artificial Intelligence Applied in Procedural Law and Quality of Sentences. In Proceedings of the XXI Iberoamerican Congress of Law and Informatics (FIADI), San Luis Potosí, México, 17–20 October 2017; pp. 1–19. [Google Scholar]
  28. Kolodner, J. Case-Based Reasoning; Morgan Kaufmann Publishers, Inc.: San Mateo, CA, USA, 1993. [Google Scholar]
  29. Bruninghaus, S.; Ashley, K. Toward Adding Knowledge to Learning Algorithms for Indexing Legal Cases. In Proceedings of the ICAIL ’99 7th International Conference on Artificial Intelligence and Law, Oslo, Norway, 14–17 June 1999; pp. 9–17. [Google Scholar]
  30. Grabmair, M.; Ashley, K. Facilitating Case Comparison Using Value Judgments and Intermediate Legal Concepts. In Proceedings of the 13th International Conference on Artificial Intelligence and Law, Montreal, QC, Canada, 6–10 June 2011; pp. 161–170. [Google Scholar]
  31. Ha, T.; Lee, S.; Kim, S. Designing Explainability of an Artificial Intelligence System. In Proceedings of the TechMindSociety ’18: Technology, Mind, and Society, Washington, DC, USA, 5–7 April 2018; p. 1. [Google Scholar]
  32. Zhu, J.; Liapis, A.; Risi, S.; Bidarra, R.; Youngblood, M. Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation. In Proceedings of the IEEE Conference on Computational Intelligence and Games (CIG), Maastricht, The Netherlands, 14–17 August 2018; pp. 1–8. [Google Scholar]
  33. Khanh, H.; Tran, T.; Ghose, A. Explainable Software Analytics. In Proceedings of the ACM/IEEE 40th International Conference on Software Engineering: New Ideas and Emerging Results, Gothenburg, Sweden, 28 May–3 June 2018. [Google Scholar]
  34. Pedrycz, W.; Gomide, F. Fuzzy Systems Engineering: Toward Human-Centric Computing; Wiley-IEEE Press: Hoboken, NJ, USA, 2007. [Google Scholar]
  35. Bargiela, A.; Pedrycz, W. Granular Computing for Human-Centered Systems Modelling. In Human-Centric Information Processing Through Granular Modelling; Springer: Berlin/Heidelberg, Germany, 2008; pp. 320–330. [Google Scholar]
  36. Yao, Y. Human-Inspired Granular Computing. In Novel Developments in Granular Computing: Applications for Advanced Human Reasoning and Soft; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1–15. [Google Scholar]
  37. Shadbolt, N.; Smart, P. Knowledge Elicitation: Methods, Tools and Techniques. In Evaluation of Human Work; CRC Press: Boca Raton, FL, USA, 2015; pp. 163–200. [Google Scholar]
  38. Bain, W. Judge: A case-based reasoning system. In The Kluwer International Series in Engineering and Computer Science (Knowledge Representation, Learning and Expert Systems); Springer: Boston, MA, USA, 1986; pp. 1–4. [Google Scholar]
  39. Aleven, V. Teaching Case-Based Argumentation through a Model and Examples. Doctoral Dissertation, University of Pittsburgh, Pittsburgh, PA, USA, 1997. [Google Scholar]
  40. Bench-Capon, T.; Sartor, G. Theory Based Explanation of Case Law Domains. In Proceedings of the ICAIL ’01 8th International Conference on Artificial Intelligence and Law, St. Louis, MO, USA, 21–25 May 2001; pp. 12–21. [Google Scholar]
  41. Verheij, B. Formalizing Arguments, Rules and Cases. In Proceedings of the 16th International Conference on Artificial Intelligence and Law, London, UK, 12–16 June 2017; pp. 199–208. [Google Scholar]
  42. Snyder, J.; Mackulak, G. Intelligent simulation environments: Identification of the basics. In Proceedings of the 20th conference on Winter simulation, New York, NY, USA, 1–2 December 1988; pp. 357–363. [Google Scholar]
  43. Zeigler, B.; Muzy, A.; Yilmaz, L. Artificial Intelligence in Modeling and Simulation. In Encyclopedia of Complexity and Systems Science; Springer: New York, NY, USA, 2009; pp. 344–368. [Google Scholar]
  44. Ruiz, N.; Giret, A.; Botti, V.; Feria, V. An intelligent simulation environment for manufacturing systems. Comput. Ind. Eng. 2014, 76, 148–168. [Google Scholar] [CrossRef] [Green Version]
  45. Li, K.; Li, J.; Liu, Y.; Castiglione, A. Computational Intelligence and Intelligent Systems. In Proceedings of the 7th International Symposium, ISICA, Guangzhou, China, 21–22 November 2015; pp. 183–275. [Google Scholar]
  46. Teerapong, K. Graphical ways of researching. In Proceedings of the Graphical Ways of Researching, Como, Italy, 27 May 2014. [Google Scholar]
  47. Offermann, P.; Levina, O.; Schönherr, M.; Bub, U. Outline of a Design Science Research Process. In Proceedings of the 4th International Conference on Design Science Research in Information Systems and Technology, Philadelphia, PHL, USA, 7–8 May 2009; pp. 7–11. [Google Scholar]
  48. Abraham, A.; Corchado, E.; Corchado, J. Hybrid learning machines. Neurocomput. Int. J. 2009, 72, 13–15. [Google Scholar] [CrossRef]
  49. Azizi, A. Hybrid artificial intelligence optimization technique. In Applications of Artificial Intelligence Techniques in Industry 4.0; Springer: Singapore, 2019; pp. 27–47. [Google Scholar]
  50. Corchado, J.; Pavón, J.; Corchado, E.; Castillo, L. Development of CBR-BDI Agents: A Tourist Guide Application. In ECCBR 2004: Advances in Case-Based Reasoning; Springer: Berlin/Heidelberg, Germany, 2004; pp. 547–559. [Google Scholar]
  51. Hafner, C.; Berman, D. The role of context in case-based legal reasoning: Teleological, temporal, and procedural. Artif. Intell. Law 2002, 10, 19–64. [Google Scholar] [CrossRef]
  52. Conrad, J.; Al-Kofahi, K. Scenario Analytics Analyzing Jury Verdicts to Evaluate Legal Case Outcomes. In Proceedings of the 16th International Conference on Artificial Intelligence and Law (ICAIL 2017), London, UK, 12–16 June 2017. [Google Scholar]
  53. Card, S.; Moran, T.; Newell, A. The Psychology of Human-Computer Interaction; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1986; ISBN 978-0898598599. [Google Scholar]
  54. Clewley, N.; Dodd, L.; Smy, V.; Witheridge, A.; Louvieris, P. Eliciting Expert Knowledge to Inform Training Design. In Proceedings of the ECCE 2019: 31st European Conference on Cognitive Ergonomics, BELFAST, UK, 10–13 September 2019; pp. 138–143. [Google Scholar]
  55. Galinsky, A.; Maddux, W.; Gilin, D.; White, J. Why It Pays to Get Inside the Head of Your Opponent: The Differential Effects of Perspective Taking and Empathy in Negotiations. Psychol. Sci. 2008, 19, 378–384. [Google Scholar] [CrossRef] [PubMed]
  56. Carral, M.d.R.; Santiago-Delefosse, M. Interpretation of Data in Psychology: A False Problem, a True Issue. Philos. Study 2015, 5, 54–62. [Google Scholar]
  57. Legislativa, A. Código Penal (Ley N. 7576 del 30 de Abril de 1996); Tribunal Supremo de Elecciones: San José, Costa Rica, 1996. [Google Scholar]
  58. Zhang, L. Knowledge Graph Theory and Structural Parsing; Twente University Press: Enschede, The Netherlands, 2002. [Google Scholar]
  59. Singhal, A. Introducing the Knowledge Graph: Things, not Strings. Available online: https://googleblog.blogspot.com/2012/05/introducing-knowledge-graph-things-not.htm (accessed on 3 December 2012).
  60. Paulheim, H. Knowledge Graph Refinement: A Survey of Approaches and Evaluation Methods. Semant. Web 2016, 2016, 1–23. [Google Scholar] [CrossRef] [Green Version]
  61. Gasevic, D.; Djuric, D.; Devedzic, V. Model Driven Architecture and Ontology Development; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1–310. [Google Scholar]
  62. Tonon, M. Hermeneutics and Critical Theory. In A Companion to Hermeneutics; Wily: Hoboken, NJ, USA, 2015; pp. 520–529. [Google Scholar]
  63. Bonatti, P.; Cochez, M.; Decker, S.; Polleres, A.; Valentina, P. Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web. Rep. Dagstuhl Semin. 2018, 18371, 2–92. [Google Scholar]
  64. Yan, J.; Wang, C.; Cheng, W.; Gao, M.; Aoying, Z. A retrospective of knowledge graphs. Front. Comput. Sci. 2018, 55–74. [Google Scholar] [CrossRef]
  65. Winkels, R.; Boer, A.; De-Maat, E.; Van, T.; Breebaart, M.; Melger, H. Constructing a semantic network for legal content. In Proceedings of the ICAIL 05 10th International Conference on Artificial Intelligence and Law, Bologna, Italy, 6–11 June 2005; pp. 125–132. [Google Scholar]
  66. Florian, J. Encyclopedia of Cognitive Science: Semantic Networks; Wiley and Sons: Hoboken, NJ, USA, 2006; ISBN 9780470016190. [Google Scholar]
  67. Noirie, L.; Dotaro, E.; Carofiglio, G.; Dupas, A.; Pecci, P.; Popa, D.; Post, G. Semantic networking: Flow-based, traffic-aware, and self-managed networking. Bell Labs Tech. J. 2009, 14, 23–38. [Google Scholar] [CrossRef]
  68. Lehmann, F. Semantic Networks. Comput. Math. Appl. 1992, 23, 1–50. [Google Scholar] [CrossRef] [Green Version]
  69. Robinson, I.; Webber, J.; Eifrem, E. Graph Databases New Opportunities for Connected Data; O’Reilly Media: Sebastopol, CA, USA, 2015. [Google Scholar]
  70. McCusker, J.; Erickson, J.; Chastain, K.; Rashid, S.; Weerawarana, R.; Bax, M.; McGuinness, D. What is a Knowledge Graph? Semant. Web Interoperabil., Usabil. Appl. Ios Press J. 2018, 2018, 1–14. [Google Scholar]
  71. Neo Technology. What Is a Graph Database? 2019. Available online: https://neo4j.com/developer/graph-database/ (accessed on 5 January 2019).
  72. Ioana, H.; Prangnawarat, N.; Haye, C. Path-based Semantic Relatedness on Linked Data and its use to Word and Entity Disambiguation. In Proceedings of the International Semantic Web Conference, ISWC 2015: The Semantic Web—ISWC, Bethlehem, PA, USA, 11–15 October 2015; pp. 442–457. [Google Scholar]
  73. Seeliger, A.; Pfaff, M.; Krcmar, H. Semantic Web Technologies for Explainable Machine Learning Models: A Literature Review. PROFILES/SEMEX@ISWC 2019, 2465, 1–16. [Google Scholar]
  74. Mezghanni, I.; Gargouri, F. Learning of Legal Ontology Supporting the User Queries Satisfaction. In Proceedings of the International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), Warsaw, Poland, 11–14 August 2014; pp. 414–418. [Google Scholar]
  75. Loui, R. From Berman and Hafner’s teleological context to Baude and Sachs’ interpretive defaults: An ontological challenge for the next decades of AI and Law. Artif. Intell. Law 2016, 2016, 371–385. [Google Scholar] [CrossRef]
  76. Tan, P.N.; Steinbach, M.; Kumar, V. Introduction to Data Mining, 1st ed.; Addison-Wesley: Boston, MA, USA, 2005; ISBN 978-0-321-32136-7. [Google Scholar]
  77. Singhal, A. Modern Information Retrieval: A Brief Overview. Bull. IEEE Comput. Soc. Tech. Comm. Data Eng. 2001, 24, 35–43. [Google Scholar]
  78. Boddy, R.; Smith, G. Statistical Methods in Practice: For Scientists and Technologists; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2009; ISBN 9780470749296. [Google Scholar]
  79. Yu, G.; Yang, Y.; Qingsong, X. An Ontology-based Approach for Knowledge Integration in Product Collaborative Development. J. Intell. Syst. 2015, 26, 35–46. [Google Scholar] [CrossRef]
  80. Schneider, M. Knowledge Integration. Encycl. Sci. Learn. 2012, 2012, 1684–1686. [Google Scholar]
  81. Allama, Z.; Dhunny, Z. On big data, artificial intelligence and smart cities. Cities 2019, 89, 80–91. [Google Scholar] [CrossRef]
  82. Chamoso, P.; González-Briones, A.; Rodríguez, S.; Corchado, J. Tendencies of Technologies and Platforms in Smart Cities: A State-of-the-Art Review. Wirel. Commun. Mob. Comput. 2018, 2018, 1–17. [Google Scholar] [CrossRef] [Green Version]
  83. Baldacchino, T.; Cross, E.; Worden, K.; Rowson, J. Variational Bayesian mixture of experts models and sensitivity analysis for nonlinear dynamical systems. Mech. Syst. Signal Process. 2016, 66, 178–200. [Google Scholar] [CrossRef]
  84. Merkl, B.; Chao, J.; Howard, R. Graph Databases for Beginners. Ebook. 2018. Available online: https://neo4j.com/blog/data-modeling-basics/ (accessed on 15 May 2020).
  85. Ashley, K.; Rissland, E. A case-based system for trade secrets law. In Proceedings of the ICAIL ’87 1st International Conference on Artificial Intelligence and Law, New York, NY, USA, 27–29 May 1987; pp. 60–66. [Google Scholar]
  86. Yamaguti, T.; Kurematsu, M. Legal Knowledge Acquisition Using Case-Based Reasoning and Model Inference. In Proceedings of the ICAIL ’93 4th International Conference on Artificial Intelligence and Law, Amsterdam, The Netherlands, 15–18 June 1993; pp. 212–217. [Google Scholar]
  87. Lecue, F. On The Role of Knowledge Graphs in Explainable AI. Semant. Web 2019, 2019, 1–9. [Google Scholar] [CrossRef]
  88. Goodrich, P. Historical Aspects of Legal Interpretation. Indiana Law J. 1986, 61, 331–354. [Google Scholar]
  89. Rojas, G. El Objeto Material y Formal del Derecho; Universidad Católica de Colombia: Bogotá, Colombia, 2018. [Google Scholar]
  90. Abdul, A.; Vermeulen, J.; Wang, D.; Lim, B.; Kankanhalli, M. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceedings of the CHI ’18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–18. [Google Scholar]
  91. Pedrycz, W. Granular computing for data analytics: A manifesto of human-centric computing. IEEE/CAA J. Autom. Sin. 2018, 5, 1025–1034. [Google Scholar] [CrossRef]
  92. Gómez-Pérez, J.; Erdmann, M.; Greaves, M.; Corcho, O. A Formalism and Method for Representing and Reasoning with Process Models Authored by Subject Matter Experts. IEEE Trans. Knowl. Data Eng. 2012, 25, 1933–1945. [Google Scholar] [CrossRef] [Green Version]
  93. Wolf, C.; Ringland, K. Designing accessible, explainable AI (XAI) experiences. ACM Sigaccess Access. Comput. 2020, 6, 1–5. [Google Scholar] [CrossRef] [Green Version]
  94. Freeman, E.; Robson, E. Head First Design Patterns; O’Reilly Media, Inc.: Hoboken, NJ, USA, 2004; ISBN 978-0-596-00712-6. [Google Scholar]
  95. Khazaii, J. Fuzzy Logic. In Advanced Decision Making for HVAC Engineers; Springer: Berlin/Heidelberg, Germany, 2016; pp. 157–166. [Google Scholar]
  96. Lai-Chong, E.; Schaik, P.; Roto, V. Attitudes towards user experience (UX) measurement. Int. J. Hum. Comput. Stud. 2014, 2014, 526–541. [Google Scholar]
  97. Standards, B. Systems and software engineering—Systems and software Quality Requirements and Evaluation (SQuaRE)—System and software quality models. BS ISO/IEC 2011, 25010, 1–34. [Google Scholar]
  98. Salvaneschi, P. The Quality Matrix: A Management Tool for Software Quality Evaluation. In Proceedings of the IASTED International Conference on Software Engineering, Innsbruck, Austria, 15–17 February 2005; pp. 394–399. [Google Scholar]
  99. Robinson, J. Likert Scale. Encycl. Qual. Life Well-Being Res. 2014, 2014, 3620–3621. [Google Scholar]
  100. Judiciary, C.R. Informe de Labores 2019 Centro de Apoyo, Coordinación y Mejoramiento de la Función Jurisdiccional; Poder Judicial Costa Rica: San José, Costa Rica, 2019; pp. 1–34. [Google Scholar]
  101. Montgomery, D. Design and Analysis of Experiments, 18th ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2013; ISBN 978-1-118-14692-7. [Google Scholar]
  102. Ashley, K.; Rissland, E. But, see, accord: Generating blue book citations in HYPO. In Proceedings of the ICAIL ’87 1st International Conference on Artificial Intelligence and Law, Boston, MA, USA, 27–29 May 1987; pp. 67–74. [Google Scholar]
  103. Pal, K.; Campbell, J. An Application of Rule-based and Case-based Reasoning Within a Single Legal Knowledge-based System. SIGMIS Database J. 1997, 1997, 48–63. [Google Scholar] [CrossRef]
  104. Chorley, A.; Bench-Capon, T. Developing Legal Knowledge Based Systems Through Theory Construction. In Proceedings of the ICAIL ’03: 9th International Conference on Artificial Intelligence and Law, Edinburgh, UK, 24–28 June 2003; pp. 85–86. [Google Scholar]
  105. Bueno, T.; Bortolon, A.; Hoeschl, H.; Mattos, E.; Ribeiro, M. Analyzing the use of Dynamic Weights in Legal Case Based System. In Proceedings of the ICAIL ’03: 9th International Conference on Artificial Intelligence and Law, Edinburgh, UK, 24–28 June 2003; pp. 136–141. [Google Scholar]
  106. Aleven, V. Using background knowledge in case-based legal reasoning: A computational model and an intelligent learning environment. Artif. Intell. 2003, 2003, 183–237. [Google Scholar] [CrossRef] [Green Version]
  107. Purdy, R. Knowledge representation in “Default”: An attempt to classify general types of knowledge used by legal experts. In Proceedings of the CAIL ’87 1st International Conference on Artificial Intelligence and Law, Boston, MA, USA, 27–29 May 1987; pp. 199–208. [Google Scholar]
  108. Belzer, M. Legal reasoning in 3-D. In Proceedings of the 1st International Conference on Artificial Intelligence and Law, Boston, MA, USA, 27–29 May 1987; pp. 155–163. [Google Scholar]
  109. Dick, J. Representation of Legal Text for Conceptual Retrieval. In Proceedings of the 3rd International Conference on Artificial Intelligence and Law, New York, NT, USA, 12 May 1991; pp. 244–253. [Google Scholar]
  110. Paquin, L.C.; Blanchard, F.; Thomasset, C. Loge–expert: From a legal expert system to an information system for non-lawyers. In Proceedings of the ICAIL 91 3rd International Conference on Artificial Intelligence and Law, New York, NY, USA, 18 May 1991; pp. 254–259. [Google Scholar]
  111. Yoshino, H.; Haraguchi, M.; Sakurai, S.; Kagayama, S. Towards a legal analogical reasoning system: Knowledge representation and reasoning methods. In Proceedings of the 4th International Conference on Artificial Intelligence and Law, New York, NY, USA, 15–18 June 1993; pp. 110–116. [Google Scholar]
  112. Quaresma, P.; Pimenta, I. A Collaborative Legal Information Retrieval System Using Dynamic Logic Programming. In Proceedings of the 7th International Conference on Artificial Intelligence and Law, Oslo, Norway, 14 June–17 June 1999; pp. 190–191. [Google Scholar]
  113. Chorley, A.; Bench-Capon, T. AGATHA: Using heuristic search to automate the construction of case law theories. Artif. Intell. Law 2005, 13, 9–51. [Google Scholar] [CrossRef]
  114. Suzuki, Y.; Tojo, S. Additive Consolidation for Dialogue Game. In Proceedings of the ICAIL ’05: 10th International Conference on Artificial Intelligence and Law, Bologna, Italy, 6–11 June 2005; pp. 105–114. [Google Scholar]
  115. Zurek, T.; Kruk, E. Supporting of legal reasoning for cases which are not strictly regulated by law. In Proceedings of the ICAIL ’09: 12th International Conference on Artificial Intelligence and Law, Barcelona, Spain, 8–12 June 2009; pp. 220–221. [Google Scholar]
  116. Colen, S.; Cnossen, F.; Verheij, B. How much logical structure is helpful in content-based argumentation software for legal case solving? In Proceedings of the ICAIL ’09: 12th International Conference on Artificial Intelligence and Law, Barcelona, Spain, 8–12 June 2009; pp. 224–225. [Google Scholar]
  117. Gordon, T.; Walton, D. Legal Reasoning with Argumentation Schemes. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, Barcelona, Spain, 8–12 June 2009; pp. 137–146. [Google Scholar]
  118. Lynch, C.; Ashley, K.; Pinkwart, N.; Aleven, V. Toward assessing law students’ argument diagrams. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, Barcelona, Spain, 8–12 June 2009; pp. 222–223. [Google Scholar]
  119. Dahlman, C.; Feteris, E. Legal Argumentation Theory: Cross-Disciplinary Perspectives; Springer: London, UK, 2013. [Google Scholar]
  120. Bennett, Z.; Russell-Rose, T.; Farmer, K. A scalable approach to legal question answering. In Proceedings of the 16th International Conference on Artificial Intelligence and Law, London, UK, 12–16 June 2017; pp. 269–270. [Google Scholar]
  121. Corvalán, J. La Primera Inteligencia Artificial Predictiva al Servicio de la Justicia: Prometea. 2017. Available online: https://ialab.com.ar/wp-content/uploads/2019/05/Artículo-Juan-La-Ley.pdf (accessed on 15 December 2018).
  122. Barros, R.; Peres, A.; Lorenzi, F.; Krug-Wives, L. Case Law Analysis with Machine Learning in Brazilian Court. In Proceedings of the IEA/AIE 2018 - Recent Trends and Future Technology in Applied Intelligence, Montreal, QC, Canada, 25–28 June 2018; pp. 857–868. [Google Scholar]
Figure 1. Stages of the case-based reasoning life-cycle, used in the RYEL system: Retrieve, reuse, review, and retain.
Figure 2. Data overview diagram of the system: Image inputs, evidence and facts processing, and norms and laws outputs.
Figure 3. EGI: The graphic arrangement of images according to the interpretation and assessment of a judge on a simplified legal scenario related to a stabbing in a homicide case.
Figure 4. EGI allows the evaluation and listing of links between facts and evidence in a scenario.
Figure 5. Judges use EGI to classify the granules of information in a case according to their perspective. The levels represent the degree of interest (importance), and the location of each granule within each level represents the order of precedence (range). Again, the granules can represent facts or evidence, as well as other case data.
Figure 6. Cognitive legal information and the relationship and interaction between perception, perspective, and interpretation.
Figure 7. Example of an EGI showing a factual picture used to analyze the merits of a homicide case in real time according to the graphical interpretation. Internally, each image is translated into a node containing a set of attributes, and the arrows become edges containing a set of vectors.
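The translation described in the caption above can be pictured with a small data-structure sketch. This is only a minimal illustration in plain Python; the names (ImageNode, Edge, attributes, vector) and the sample values are hypothetical and do not correspond to RYEL's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ImageNode:
    # An image placed by the judge in the EGI becomes a node with attributes.
    node_id: str
    attributes: Dict[str, str] = field(default_factory=dict)

@dataclass
class Edge:
    # An arrow drawn between two images becomes an edge carrying a vector
    # (here a short list of floats standing in for interpretation features).
    source: str
    target: str
    vector: List[float] = field(default_factory=list)

# Hypothetical fragment of the homicide scenario sketched in Figure 7.
victim = ImageNode("victim", {"type": "fact", "description": "victim found at the scene"})
knife = ImageNode("knife", {"type": "evidence", "description": "knife with fingerprints"})
stabbing = Edge("knife", "victim", vector=[0.9, 0.4, 0.7])  # illustrative weights only

graph = {"nodes": [victim, knife], "edges": [stabbing]}
print(graph)
```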
Figure 8. EGI uses circles, size, and color to explain to the user the set of laws and regulations in line with the factual picture of the case under analysis. The machine expresses, through the graphical distribution of circles, the norms and laws that best describe the legal scenario under study; the recommended norms and laws appear higher and farther to the right in the chart. However, a judge can explore, analyze, and select those that best fit the factual picture.
Figure 9. Legal analysis simulation of the merits of a case, considering evidence, facts, and the direction process of the legal data in a trial.
Figure 10. Graphical interface showing simulation options.
Figure 11. (1) Data model and (2) property graph example; relationship labels are omitted for simplicity.
Figure 12. EGI uses circles, size, and color to explain to the user the set of legal files containing factual pictures and scenarios similar to the legal context the judge is working with. Each circle represents a set of legal files with particular characteristics and states. The machine recommends those located higher and farther to the right of the graph, but the judge can explore, analyze, and select other circles considered best for a specific legal context.
Figure 13. Relationships between nodes in a KG, shown as a collection of vectors in n-dimensional Euclidean space. The spheres represent the nodes, and the vectors, labeled with letters, are formed from the relationships between them. The 3D rendering illustrates the vector projection performed by the artifact.
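To make the vector view of Figure 13 concrete, the following sketch treats two relationship vectors as points in Euclidean space and compares them. It is only an assumption of how such vectors could be compared (for example, by Euclidean distance or cosine similarity); it does not reproduce RYEL's internal projection, and the numeric values are illustrative.

```python
import numpy as np

# Two hypothetical relationship vectors in a 3-dimensional Euclidean space,
# standing in for edges of the knowledge graph (values are illustrative).
a = np.array([0.9, 0.4, 0.7])
b = np.array([0.8, 0.5, 0.6])

# Euclidean distance between the two relationships.
distance = np.linalg.norm(a - b)

# Cosine similarity: 1.0 means the relationships point in the same direction.
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"distance = {distance:.3f}, cosine similarity = {cosine:.3f}")
```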
Figure 14. The graphic interface showing images arranged according to the interpretation and assessment made by a judge; a fragment of a real scene from a murder case involving a handgun.
Figure 15. Samples from live experiments in which judges from Costa Rica and Argentina, respectively, use the RYEL system.
Figure 16. Characteristics evaluation score and acceptability trend. (a) Radar chart of system scores by judge hierarchy. (b) RYEL acceptability trend according to the hierarchy of the judges.
Figure 17. Statistical significance, power, and normality of samples and results. (a) Power curve for 1-Sample Z Test with α = 0.06. (b) Normal probability plot of answers with 94% confidence interval.
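A power curve such as the one in Figure 17a can be reproduced, under standard assumptions, for a two-sided one-sample Z test at α = 0.06. The sketch below is only an illustration of that calculation; the effect size and sample sizes are assumed values, not the study's actual planning parameters.

```python
import numpy as np
from scipy.stats import norm

def z_test_power(effect_size: float, n: int, alpha: float = 0.06) -> float:
    """Power of a two-sided one-sample Z test for a standardized effect size."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = effect_size * np.sqrt(n)
    return (1 - norm.cdf(z_crit - shift)) + norm.cdf(-z_crit - shift)

# Illustrative curve: power as a function of sample size for a medium effect.
for n in (10, 20, 43, 86, 172):
    print(n, round(z_test_power(effect_size=0.5, n=n), 3))
```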
Figure 18. Residuals vs. observation order.
Figure 19. Main effects plot for judges’ evaluation.
Figure 20. Grouping by design parameters SQuaRE.
Table 1. Synthesis of how the components of the artifact are used and operated, based on Table A1.

KG Operating Point      | Components Involved
1—create and manipulate | C1, C2, C5, C6
2—seek                  | C1, C2, C3, C4, C7
3—modify                | C1, C2, C6
4—infer                 | C3, C4, C7, C8, C9, C10
Table 2. Software evaluation characteristics defined in ISO-25010 [97] to study the behavioral response of the judge.

Characteristic ¹ | Parameter                | Criteria
Functional       | Suitability              | (1) It allows capturing the interpretation and assessment of the judge. (2) Graphically represents the legal knowledge that a judge has about a case.
Functional       | Accuracy                 | (1) The system is capable of analyzing the factual picture and returning the correct legal norms.
Functional       | Functionality compliance | (1) Judicial independence and discretionary level are respected.
Usability        | Understandability        | (1) Suitable for case data manipulation. (2) Graphic interfaces describe the legal analysis made by humans.
Usability        | Learnability             | (1) Easy to learn.
Usability        | Operability              | (1) Easy to operate and control.
Usability        | Attractiveness           | (1) Attractive and innovative graphical interfaces.
Efficiency       | Time behaviour           | (1) System response time is acceptable.
Efficiency       | Efficiency compliance    | (1) Flexible to capture different types of legal data. (2) It is possible to represent characteristics of facts and evidence. (3) Allows a flexible analysis of the merits of a case.
¹ Adaptation and use of software evaluation characteristics defined in ISO-25010 [97].
Table 3. Responses statistical summary.

Coded | Mean   | SE Mean | St Dev | Variance | Coef Var | Median | Mode
1-P   | 4.6744 | 0.0473  | 0.6202 | 0.3846   | 13.27    | 5      | 5
2-P   | 2.3663 | 0.0876  | 1.1494 | 1.3212   | 48.58    | 2      | 1
3-P   | 3.064  | 0.0993  | 1.3029 | 1.6976   | 42.52    | 3      | 3
4-P   | 4.814  | 0.0378  | 0.4959 | 0.2459   | 10.3     | 5      | 5
5-P   | 3.3779 | 0.0941  | 1.2341 | 1.523    | 36.53    | 4      | 4
6-P   | 3.3895 | 0.0938  | 1.2305 | 1.514    | 36.3     | 3.5    | 3
7-P   | 3.7733 | 0.0922  | 1.2095 | 1.4629   | 32.05    | 4      | 5
8-P   | 3.8488 | 0.0865  | 1.1344 | 1.287    | 29.47    | 4      | 4
9-P   | 3.6512 | 0.0939  | 1.2309 | 1.515    | 33.71    | 4      | 4
10-P  | 3.75   | 0.0976  | 1.2802 | 1.6389   | 34.14    | 4      | 5
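As a quick consistency check on Table 3 (using row 2-P and standard definitions only, no additional data), the variance is St Dev², the coefficient of variation is St Dev/Mean × 100, and the standard error of the mean is St Dev/√n; solving the last relation for n recovers roughly 172 responses, which matches the grand total in Table 6.

```python
# Consistency check on Table 3, row 2-P, using standard definitions only.
mean, se_mean, st_dev = 2.3663, 0.0876, 1.1494

variance = st_dev ** 2                 # ≈ 1.3212, as reported
coef_var = st_dev / mean * 100         # ≈ 48.58%, as reported
n_implied = (st_dev / se_mean) ** 2    # ≈ 172 responses

print(round(variance, 4), round(coef_var, 2), round(n_implied))
```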
Table 4. Synthesis of the evaluation matrix according to the judge’s criteria.

Characteristic ¹         | Colombia | Ecuador | Panama | Spain | Argentina | Costa Rica
Suitability              | 100.00   | 95.00   | 95.71  | 98.00 | 100.00    | 92.00
Accuracy                 | 80.00    | 80.00   | 85.71  | 96.00 | 90.00     | 93.00
Functionality compliance | 80.00    | 80.00   | 91.43  | 96.00 | 100.00    | 94.00
Understandability        | 90.00    | 95.00   | 94.29  | 96.00 | 100.00    | 99.00
Learnability             | 80.00    | 70.00   | 82.86  | 92.00 | 100.00    | 88.00
Operability              | 90.00    | 80.00   | 92.86  | 92.00 | 100.00    | 94.00
Attractiveness           | 100.00   | 100.00  | 100.00 | 96.00 | 100.00    | 98.00
Time behaviour           | 100.00   | 90.00   | 97.14  | 92.00 | 100.00    | 100.00
Efficiency compliance    | 80.00    | 86.67   | 91.43  | 94.67 | 100.00    | 94.00
Average                  | 88.89    | 86.30   | 92.38  | 94.74 | 98.89     | 94.67
¹ Adaptation and use of software evaluation characteristics that are defined in ISO-25010 [97].
Table 5. Two-way ANOVA: Criteria and hierarchy.

Source    | DF | Seq SS | Contribution | Adj SS | Adj MS | F-Value | P-Value
Criteria  | 9  | 12.901 | 66.66%       | 12.901 | 1.4334 | 7.29    | 0.000
Hierarchy | 3  | 1.142  | 5.90%        | 1.142  | 0.3805 | 1.93    | 0.148
Error     | 27 | 5.312  | 27.45%       | 5.312  | 0.1967 |         |
Total     | 39 | 19.354 | 100.00%      |        |        |         |
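The derived columns of Table 5 follow the usual two-way ANOVA arithmetic: each mean square is the adjusted sum of squares divided by its degrees of freedom, and each F-value is the factor mean square divided by the error mean square. A brief check, using only the values reported in the table:

```python
# Reproducing the derived columns of Table 5 from its sums of squares.
adj_ss = {"Criteria": 12.901, "Hierarchy": 1.142, "Error": 5.312}
df = {"Criteria": 9, "Hierarchy": 3, "Error": 27}

ms = {k: adj_ss[k] / df[k] for k in adj_ss}           # adjusted mean squares
f_values = {k: ms[k] / ms["Error"] for k in ("Criteria", "Hierarchy")}

print({k: round(v, 4) for k, v in ms.items()})
# -> {'Criteria': 1.4334, 'Hierarchy': 0.3807, 'Error': 0.1967}
print({k: round(v, 2) for k, v in f_values.items()})
# -> {'Criteria': 7.29, 'Hierarchy': 1.93}
```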
Table 6. Cross-check between the 4-P and 10-P criteria.

Criterion 10-P ² \ Criterion 4-P ¹ | 2 | 3 | 4  | 5   | Grand Total
1                                  |   | 1 | 1  | 16  | 18
2                                  |   |   | 3  | 6   | 9
3                                  | 1 | 1 | 5  | 25  | 32
4                                  |   |   | 10 | 42  | 52
5                                  | 1 |   | 3  | 57  | 61
Grand total                        | 2 | 2 | 22 | 146 | 172
¹,² 5—Totally agree, 4—Fairly agree, 3—Neither agree nor disagree, 2—Fairly disagree, and 1—Totally disagree.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
