Extending Drag-and-Drop Actions-Based Model-to-Model Transformations with Natural Language Processing
Abstract
1. Introduction
- It supports one-to-one, one-to-many, many-to-one, and many-to-many concept mappings. A set of defined concept types of a source model (or their properties) can be mapped to a set of defined concept types of a target model (or their properties).
- It supports consistency of concept mappings within a single model or across multiple models based on UML or UML profiles. In the M2M transformation specifications, one can specify concept mappings within a single model or among different models expressed using the same or different modeling languages, including UML, BPMN, SBVR, SoaML, and others implemented as UML profiles.
- It supports reuse of M2M transformations. Out of the set of defined transformation specifications, a particular transformation can be enacted by the transformation engine depending on the conditions of the actual situation related to the source model elements and/or drag-and-drop actions performed. This increases the reuse of libraries of transformation specifications across multiple domains.
- It assures traceability between the source model elements and the transformed target model elements [20].
- It enables conditional transformations, which allow us to specify and perform more advanced transformation scenarios based on different conditions or constraints.
- It enhances tolerance to various concept-naming conventions used in the source model when generating valid results: Advanced natural language-based parsing of expressions provides an additional means to acquire valid results (i.e., properly named target concepts) compared to our previously developed M2M transformation solution without NLP support. If required, one may specify and perform conditional processing depending on different, sometimes poor, concept naming conventions, e.g., one can properly process use case concepts named by both a verb + noun (e.g., “Issue invoice”) and a single verb or noun (e.g., “Issue”, “Invoice”).
- It reduces the level of redundancy in the target model: If required, one may simplify the target model by combining concepts that have identical or synonymous meanings. This is achieved by identifying synonymous forms and abbreviations.
2. Related Work
- Error correction is performed to identify and correct typographical, grammatical, and semantic errors (e.g., accidental use of homophones). Multiple approaches are used to solve these issues, including string distances, similarity techniques, n-grams, and rule-based, statistical, or probabilistic techniques [44]. Recent approaches apply statistical machine translation [45] or neural machine translation principles, mostly transformer architecture [46,47]; however, application of the latter techniques might be limited due to the lack of context required for detection.
- Parse tree generation is focused on performing grammatical analysis for the given text using identified parse trees. Dependency parsing tries to extract intra-sentence relationships between words, while constituency tree generation is focused on grammatical processing using context-free grammars [48] or recurrent neural networks [49,50].
- Dependency parsing is used to identify dependencies between entities within the given text. As with most natural language-related tasks, the best performance is currently obtained by applying deep learning-based techniques, e.g., Stanford’s deep biaffine parser, which uses a bidirectional long short-term memory (Bi-LSTM) network to produce vector representations for part-of-speech (POS) tagging and dependency relation tasks [51].
- Relation extraction is focused on detecting semantic relationship mentions within the given text. Such relationships can be defined between two or more entities and represent particular connections between them. A wide range of techniques are applied for solving this task, including similarity-based [52], supervised [53,54], and multiple deep learning techniques; for extensive surveys of these approaches, one may refer to [55,56].
- Relation classification is focused on selecting the proper relation from a predefined set of classes for two given entities. Consider a tuple (Manager, Invoice, write(Manager, Invoice)) consisting of two entities and the associated verb “write”. The presence of the context word “write” indicates that the first entity is a person and enforces assigning the class PERSON-ACTION. This is particularly important in the context of our research, as identifying particular types of entities will lead to the generation of different target elements. The state of the art in this area focuses on identifying such relation classes in unstructured text using bidirectional LSTMs [58] or convolutional neural networks (CNNs) [59].
3. Introductory Example
- We will assume that the user dragged the Actor element “Customer” from the use case model onto the opened class diagram “Order Management” (Figure 1, tag A).
- This action subsequently triggers a transformation action, which in turn triggers the transformation engine to execute the specific transformation specification visually designed for this action (i.e., dragging an Actor element from the use case model onto the UML class diagram).
- The transformation specification instructs the transformation engine to select “Customer” together with the use cases associated with this actor and transform them into UML classes and a set of associations connecting those classes. In the exemplary use case model, “Customer” is associated with four use cases: “Place order”, “Payment”, “Send back item”, and “Fill-in complaint form”.
- This chain of performed actions will result in the generation of a fragment of the UML class diagram, as presented in Figure 1, tag B.
- In the resulting UML class diagram, the association between the two classes “Customer” and “Item” is named with the two-word verb phrase “send back”, which was extracted from the source element name, namely the use case “Send back item”. Such extraction requires NLP-enhanced extraction of noun and verb phrases. In our original solution [14], a simple text-processing technique was used. This technique always extracts the first (and only the first) word of the use case name as the candidate for the association name in the class model, while the rest of the use case name is transformed into the name of the corresponding class. Such an approach would obviously produce an invalid result in the example presented here, i.e., the association would be named “send” and the class to which it is connected “Back Item”.
- In the exemplary use case diagram, we have the use case “Payment”. We intentionally used a bad naming practice here, i.e., the use case is named using only a noun, without any preceding verb or verb phrase. This is arguably one of the most common bad naming practices in use case modeling, and it will cause erroneous transformation results if no NLP is involved. In our NLP-enhanced solution, we solve the issue of the single-word use case name by (1) identifying that the name consists of only a noun or a noun phrase, which is then transformed into a class whose name equals that noun or noun phrase; and (2) creating an association named “perform”, which is a predefined name intended for handling cases where this specific bad naming practice is detected. This particular case of NLP-enhanced processing is presented in more detail in Section 6.3.
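The NLP-enhanced name parsing described above can be sketched as follows. The tiny verb/particle lexicon is purely an illustrative assumption standing in for a real POS tagger:

```python
# Illustrative sketch of NLP-enhanced use case name parsing. The
# hand-coded verb and particle lexicons below are assumptions for
# demonstration only; the actual solution uses a full NLP toolkit.
VERBS = {"send", "issue", "place", "fill-in", "pay"}
PARTICLES = {"back", "in", "out", "up"}

def parse_use_case_name(name: str):
    """Split a use case name into (association_name, class_name)."""
    words = name.lower().split()
    verb_phrase = []
    i = 0
    # Consume a leading verb plus any particles ("send back").
    if words and words[i] in VERBS:
        verb_phrase.append(words[i]); i += 1
        while i < len(words) and words[i] in PARTICLES:
            verb_phrase.append(words[i]); i += 1
    noun_phrase = words[i:]
    if not verb_phrase:                      # noun-only name, e.g. "Payment"
        return "perform", name.title()
    if not noun_phrase:                      # verb-only name, e.g. "Issue"
        return " ".join(verb_phrase), None
    return " ".join(verb_phrase), " ".join(w.title() for w in noun_phrase)

print(parse_use_case_name("Send back item"))  # ('send back', 'Item')
print(parse_use_case_name("Payment"))         # ('perform', 'Payment')
```

Note the contrast with the original first-word heuristic, which would have produced the invalid pair ("send", "Back Item") for the first example.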
4. Extending M2M Transformations with NLP
4.1. Extended M2M Transformation Metamodel
- Source model concept type (SMCT); an instance (actual source element) is referred to as ISMCT.
- Target model concept type (TMCT); an instance (actual target element) is referred to as ITMCT.
- SimpleTransformationPatternSpecification is a transformation pattern specification (TransformationPatternSpecification) containing a single mapping pattern (MappingPattern).
- ConditionedTransformationPatternSpecification is a transformation pattern specification containing a set of conditioned mapping specifications (ConditionedMappingSpecification) together with a default mapping pattern.
- ConditionedMappingSpecification is a mapping specification coupling a mapping pattern (MappingPattern) with a conditional expression that restricts the execution of the defined mapping pattern to a specific condition.
- mergeSynonyms: If this property is set to true, elements whose names are recognized as synonymous with names of elements already existing in the target model will be automatically merged with those elements by the transformation engine (i.e., no new elements will be created in the target model); otherwise, creation of new elements depends on the mergeMatchingConcepts setting, which toggles the merging of elements with identical names (straightforward string matching is used here).
- resolveAbbreviations: If this property is set to true, elements whose names are recognized as abbreviations of names of elements already present in the target model will be automatically merged with those elements by the transformation engine (i.e., no new elements will be created in the target model); otherwise, creation of new elements depends on the mergeMatchingConcepts setting.
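The interplay of mergeSynonyms, resolveAbbreviations, and mergeMatchingConcepts can be illustrated with a minimal decision function. The synonym and abbreviation tables below are hand-coded stand-ins for the NLP back end, not the actual implementation:

```python
# Sketch of the merging decision controlled by the three metamodel
# properties. SYNONYMS and ABBREVIATIONS are toy lookup tables (an
# assumption); the real engine queries NLP services instead.
SYNONYMS = {("employee", "worker"), ("worker", "employee")}
ABBREVIATIONS = {("mngr.", "manager"), ("manager", "mngr.")}

def resolve_target(name, existing, *, merge_matching=True,
                   merge_synonyms=False, resolve_abbreviations=False):
    """Return the existing element to merge with, or None to create a new one."""
    for e in existing:
        pair = (name.lower(), e.lower())
        if merge_matching and name.lower() == e.lower():
            return e
        if merge_synonyms and pair in SYNONYMS:
            return e
        if resolve_abbreviations and pair in ABBREVIATIONS:
            return e
    return None
```

For example, `resolve_target("Worker", ["Employee"], merge_synonyms=True)` returns "Employee" (merge), while the same call with `merge_synonyms=False` returns None, so a new element would be created.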
4.2. Implementing Extended M2M Transformation Metamodel in a CASE Tool
4.3. Implementation Architecture
5. NLP Operators for Enhancing Partial M2M Transformations
5.1. Split and Concatenation Operators
- LEFT(from, quantity, separator): Extract the quantity of tokens starting from the left-hand position from (word positions are counted from 0); words are separated using the separator string. For example, LEFT(1, 2, ‘ ’) will extract two words starting from the second word in the source element name; words are separated by a white space. If from is 0 and quantity is undefined, then the whole string will be extracted. By default, it is assumed that white space tokenization will be used unless the separator character is specified as the third parameter.
- RIGHT(from, quantity, separator): Extract the quantity of tokens starting from the right-hand position from (word positions are counted from 0); words are separated using the separator string. For example, RIGHT(, 2, ‘ ’) will extract the last two words from the element name (i.e., the first two words counting from the right). If from is 0 and quantity is undefined, then the whole string will be extracted.
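A minimal Python rendering of the LEFT and RIGHT operators could look as follows. Function and parameter names mirror the operator signatures; this is a sketch under the stated semantics, not the tool's actual code:

```python
def left(name, frm=0, quantity=None, separator=" "):
    """LEFT(from, quantity, separator): take `quantity` tokens starting
    at 0-based position `frm`, counting from the left."""
    tokens = name.split(separator)
    if frm == 0 and quantity is None:   # whole string when both defaults
        return name
    return separator.join(tokens[frm:frm + quantity])

def right(name, frm=0, quantity=None, separator=" "):
    """RIGHT: same, but positions are counted from the right-hand end.
    `quantity` is assumed to be given whenever `frm` is used."""
    tokens = name.split(separator)
    if frm == 0 and quantity is None:
        return name
    end = len(tokens) - frm
    return separator.join(tokens[end - quantity:end])
```

For example, `left("Send back item", 1, 2)` yields "back item" (two words starting from the second word), matching the LEFT(1, 2, ' ') example above.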
5.2. NLP Techniques for Implementing Advanced Text Processing Operators
- Lexical normalization deals with obtaining the initial word form. Two forms of normalization, namely, stemming and lemmatization, are widely known and applied in text processing. While stemming simply aims to reduce words to their base or root form, lemmatization depends on the part of speech and context and seeks to obtain the base form that is used in a dictionary. Besides dictionary-based lookup, statistical classifiers are used to get lemmas [66,67].
- Tokenization splits the text into separate chunks (tokens). Simple tokenizers generally use predefined separator symbols, like white spaces or commas, to identify the limits of such tokens. Real-world texts, however, may contain such characters inside the tokens themselves (abbreviations are one example). Tokenization also differs for Asian languages, which may require deeper morphological analysis. Modern toolkits include advanced tokenizers, such as the REPP tokenizer [68], that can handle such problems or deal with specific situations.
- Part-of-speech (POS) tagging identifies part-of-speech tags for each token in the sentence. This is one of the most researched topics in the natural language processing domain. Most POS tagging solutions rely on statistical and machine learning approaches, such as convolutional neural networks (CNNs) [69], conditional random fields (CRFs) [70,71], or transformation-based learning [72]. This is one of the key techniques used in noun phrase and verb phrase extraction, which is required to process element designations for proper generation of target elements.
- Named entity recognition (NER) focuses on finding entity instances in unstructured text and classifying them into a predefined set of categories, such as a person, organization, location, time, or address. Multiple approaches are available to solve this, including recent advances in deep learning [73,74]. Still, hybrid conditional random field-based approaches seem to dominate in this field [75,76,77].
- Semantic analysis focuses on capturing synonyms, homonyms, antonyms, and other semantic relations. Lexical databases such as WordNet [62] can be directly applied to solve this task. Finding synonymous forms is one of the most popular tasks in the context of knowledge extraction, and is greatly beneficial for query processing or semantic entity-based searching. Recent developments in deep learning provide advanced techniques to find synonymous or contextually related entries by learning and comparing contextual representations in the form of redistributable embeddings [75,78,79]. These representations can be transferred and reused for different NLP tasks, including POS tagging, document classification, sentiment analysis, and others.
- Hypernym/hyponym discovery enables the extraction of hierarchical relationships to form taxonomies or augment existing ontologies or vocabularies. Rule-based [80,81,82], vector space [83], neural [84], and hybrid [85] approaches are among the most prominent ones in this category. In the context of our research, semantic analysis can be used to identify and merge synonymous forms and generate generalization relationships or categorizations.
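To make the difference between stemming and lemmatization (discussed above) concrete, here is a toy contrast. The suffix rules and lemma dictionary are illustrative assumptions, not a real normalizer:

```python
# Stemming vs. lemmatization in miniature. Crude suffix stripping can
# yield non-words ("invoic"), while lemmatization uses the part of
# speech and a dictionary to return the proper base form.
def naive_stem(word):
    """Strip common suffixes; the rule set is a toy assumption."""
    for suffix in ("ing", "ies", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            stem = word[:-len(suffix)]
            return stem + "y" if suffix == "ies" else stem
    return word

# Toy lemma dictionary keyed by (word, POS); a stand-in for WordNet-style lookup.
LEMMAS = {("sent", "VERB"): "send", ("invoices", "NOUN"): "invoice"}

def lemma(word, pos):
    return LEMMAS.get((word.lower(), pos), word.lower())

print(naive_stem("invoices"))        # invoic  (non-word stem)
print(lemma("invoices", "NOUN"))     # invoice (dictionary base form)
```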
5.3. Advanced Text Processing Operators
- NORMALIZE(phrase): Normalizes the given text by converting a plural noun form to a singular form or converting a verb form to the present tense.
- INFINITIVE(verb): Gets an infinitive form for the given verb.
- STEM(word): Extracts a stem for the given word.
- LEMMA(word, pos): Gets a lemma for the given word when a particular part of speech (POS) is defined.
- POS(word): Gets a part of speech for the given word. If more than one part of speech option is identified, only the first one is returned.
- EXTRACTNE(phrase, type): Extracts named entities (individual concepts) from the given phrase. If type is not set, it will try to extract all existing named entities; otherwise, it will try to find entities of the defined type. Currently, Location, Person, Organization, and Time entity types are supported in the implementation.
- EXTRACTVERB(phrase): Extracts a verb or a verb phrase from the given phrase.
- EXTRACTNOUN(phrase, ‘all’): Extracts a noun or a noun phrase from the given phrase. Setting the second parameter to ‘all’ allows one to extract all possible nouns/noun phrases; otherwise, the most general noun phrase is extracted.
- CONTAINS(phrase1, phrase2): Returns true if phrase1 contains phrase2 and false otherwise.
- CONTAINSNE(phrase, entity): Checks whether the given phrase contains a particular named entity; returns true if such entity was found and false otherwise.
- TYPENE(phrase): Extracts named entities from the given phrase and determines their types. The determined types are returned together with corresponding entities. Currently, ‘Location’, ‘Person’, ‘Organization’, and ‘Time’ entity types are supported.
- ISNOUNPHRASE(phrase): Determines whether the given phrase is a noun phrase; returns true if this condition is satisfied and false otherwise.
- ISVERBPHRASE(phrase): Determines whether the given phrase is a verb phrase; returns true if this condition is satisfied and false otherwise.
- ISSYNONYM(phrase1, phrase2): Returns true if phrase1 is a synonym of phrase2 and false otherwise.
- ISHYPONYM(phrase1, phrase2): Returns true if phrase1 is a hyponym of phrase2 and false otherwise.
- ISHYPERNYM(phrase1, phrase2): Returns true if phrase1 is a hypernym of phrase2 and false otherwise.
- ISMERONYM(phrase1, phrase2): Returns true if phrase1 is a meronym of phrase2 and false otherwise.
- ISHOLONYM(phrase1, phrase2): Returns true if phrase1 is a holonym of phrase2 and false otherwise.
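Several of the semantic predicate operators above can be sketched against a hand-coded taxonomy. A production implementation would query a lexical database such as WordNet instead; the tables below are assumptions for illustration:

```python
# Toy backing store for the semantic predicate operators. Each entry
# maps a concept to its direct hypernym (more general concept).
HYPERNYM_OF = {"manager": "employee", "employee": "person",
               "microsoft": "organization"}
MERONYM_OF = {"wheel": "car"}          # a wheel is part of a car

def is_hyponym(p1, p2):
    """ISHYPONYM: p1 is a (transitively) more specific concept than p2."""
    node = HYPERNYM_OF.get(p1.lower())
    while node is not None:
        if node == p2.lower():
            return True
        node = HYPERNYM_OF.get(node)
    return False

def is_hypernym(p1, p2):   # ISHYPERNYM is the inverse of ISHYPONYM
    return is_hyponym(p2, p1)

def is_meronym(p1, p2):    # ISMERONYM: p1 is a part of p2
    return MERONYM_OF.get(p1.lower()) == p2.lower()

def is_holonym(p1, p2):    # ISHOLONYM is the inverse of ISMERONYM
    return is_meronym(p2, p1)

print(is_hyponym("Manager", "Person"))   # True (via "employee")
```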
6. Basic Use Cases of M2M Transformation Utilizing NLP Extension Capabilities
6.1. Extracting Phrases from the Source Element to Generate the Target Elements
6.1.1. Description and Applicability
6.1.2. Instance of the Use Case Scenario
6.1.3. Transformation Specification for the Presented Use Case Instance
6.1.4. Execution Instance of the Specific Use Case Example
- if the source element (ISMCT) is named “Issue Invoice” or “Invoice”, then the UML class “Invoice” is generated;
- if it is named “Issue Invoice to Customer”, then the UML classes “Invoice” and “Customer” are generated; and
- if it does not have any noun/noun phrase in its name, no target elements will be generated.
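The generation rule exercised in these three cases can be sketched as follows. The verb and stop-word lists are illustrative assumptions standing in for real noun phrase extraction:

```python
# Sketch of the class generation rule: every noun in the use case name
# becomes a UML class; names without nouns yield no target elements.
VERBS = {"issue", "send", "create"}        # toy lexicon (assumption)
STOPWORDS = {"to", "a", "an", "the", "and"}

def classes_from_use_case(name):
    """Return the list of UML class names generated from a use case name."""
    words = name.lower().split()
    return [w.title() for w in words if w not in VERBS | STOPWORDS]

print(classes_from_use_case("Issue Invoice to Customer"))  # ['Invoice', 'Customer']
print(classes_from_use_case("Issue"))                      # []
```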
6.2. Merging Target Elements with Synonymous Meanings
6.2.1. Description and Applicability
6.2.2. Instance of the Use Case Scenario
6.2.3. Transformation Specification for the Presented Use Case Instance
6.2.4. Execution Instance of the Specific Use Case Example
- If an existing target element is named “Employee” and the source element (ISMCT) is named “Worker”, the user is notified about the already existing matching element and no new element is generated; and
- If an existing target element is named “Employee” and ISMCT is named “Manager”, no match will be identified; therefore, the general concept “Manager” is generated.
- Under the same conditions, if the user sets the property resolveAbbreviations to true, the following is possible:
- If an existing target element is named “Mngr.” and ISMCT is named “Manager”, the user is notified about the already existing matching element and no new element is generated; the same will happen if the element names are reversed, i.e., the existing element is named “Manager” and ISMCT is named “Mngr.”; and
- If an existing target element is named “Mngr.” and ISMCT is named “Clerk”, the general concept “Clerk” is generated.
6.3. Conditional Processing Addressing Different Element Naming Practices
6.3.1. Description and Applicability
6.3.2. Instance of the Use Case Scenario
6.3.3. Transformation Specification for the Presented Use Case Instance
- ActivityToTask_Pattern, where SMCT is an Activity type and TMCT is a Task type; and
- ActivityToTask2_Pattern, where SMCT is an Activity type and TMCT is a Task type; additionally, names of the instances of TMCT will be concatenated with a predefined verb “perform” at the beginning of those names.
6.3.4. Execution Instance of the Specific Use Case Example
- If the source activity (ISMCT) is named “Issue invoice”, then the BPMN task “Issue invoice” is generated; and
- If it is named “Payment”, then the BPMN task “Perform payment” is generated.
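The selection between the two conditioned patterns can be sketched as a simple guard. The verb list is an assumption standing in for real POS tagging:

```python
# Sketch of the conditioned pattern selection in Section 6.3: names
# that start with a verb are copied; bare noun (phrase) names get the
# predefined verb "perform" prefixed. VERBS is a toy lexicon.
VERBS = {"issue", "send", "create", "approve"}

def bpmn_task_name(activity_name):
    first = activity_name.split()[0].lower()
    if first in VERBS:                         # ActivityToTask_Pattern
        return activity_name
    return "Perform " + activity_name.lower()  # ActivityToTask2_Pattern

print(bpmn_task_name("Issue invoice"))  # Issue invoice
print(bpmn_task_name("Payment"))        # Perform payment
```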
6.4. Conditional Processing Addressing Different Types of Represented Entities
6.4.1. Description and Applicability
6.4.2. Instance of the Use Case Scenario
6.4.3. Transformation Specification for the Presented Use Case Instance
- Named entities extracted from the name of ISMCT are classified as Person or Organization;
- Named entities extracted from the name of ISMCT represent a more specific meaning of the Person or Organization.
6.4.4. Execution Instance of the Specific Use Case Example
- If the source element (ISMCT) is a UML class named “Manager” and ISHYPONYM(“Manager”, “Person”) is true (i.e., “Manager” is identified as a subclass of “Person”), then the output of the transformation is a BPMN lane named “Manager”.
- If ISMCT is a UML class named “Microsoft” and ISHYPONYM(“Microsoft”, “Organization”) is true (i.e., “Microsoft” is identified as a subclass of “Organization”), then the output of the transformation is a BPMN lane named “Microsoft”.
- If ISMCT is a UML class named “Invoice”, which is not a hyponym of either “Person” or “Organization”, then the output of the transformation is a BPMN data object named “Invoice”.
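The conditional mapping above can be sketched with an ISHYPONYM-style lookup over a toy taxonomy (an assumption standing in for the WordNet/NER back ends):

```python
# Sketch of the Section 6.4 mapping: classes denoting persons or
# organizations become BPMN lanes; everything else becomes a data
# object. HYPERNYM_OF is a hand-coded stand-in taxonomy.
HYPERNYM_OF = {"manager": "person", "microsoft": "organization"}

def transform_class(name):
    """Return the (BPMN element kind, name) produced for a UML class."""
    kind = HYPERNYM_OF.get(name.lower())
    if kind in ("person", "organization"):
        return ("BPMN lane", name)
    return ("BPMN data object", name)

print(transform_class("Manager"))   # ('BPMN lane', 'Manager')
print(transform_class("Invoice"))   # ('BPMN data object', 'Invoice')
```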
6.5. Resolution of Semantic Relationships: Hyponym/Hypernym, Holonym/Meronym
6.5.1. Description and Applicability
6.5.2. Instances of the Use Case Scenario
6.5.3. Transformation Specification for the Presented Use Case Instance
6.5.4. Execution Instance of the Specific Use Case Example
- If the dragged source class (ISMCT) is named “Manager” and the target class is named “Employee”, then a UML Generalization relationship is created between the two, where “Manager” is identified as the more specific class (the ClassOntoClassHyponym_Pattern pattern is executed).
- If the dragged source class is named “Employee” and the target class is named “Manager”, then a UML Generalization relationship is created between the two, where “Employee” is identified as the more general class (the ClassOntoClassHypernymy_Pattern pattern is executed).
7. Evaluation
7.1. Experiment Setup
- Applying a simple LEFT(0, 1, ‘ ’) operator (described in Section 5.1), which implements the “first word is a verb” pattern (i.e., “<VERB><NOUN>|<NOUN PHRASE>”); this was done to process all activity-like concept names in the original solution.
- Applying our proposed NLP extension, which uses NLP operators to recognize and extract the noun and verb phrases composed of any number of words.
- 20 UML use case models collected freely from the Internet;
- 20 BPMN process models selected from the large set of Signavio BPMN models provided by the BPM Academic Initiative [86].
7.2. Selection of NLP Tools for the Experiment
- Stanford Stanza [87], which uses bidirectional long short-term memory networks (Bi-LSTMs) to implement components and pipelines for solving multiple NLP tasks;
- Spacy 2.0 [88] toolkit by Explosion, which applies convolutional neural networks (CNNs);
- Stanford CoreNLP toolkit [89], which relies on conditional random field (CRF);
- Flair [90] toolkit by Zalando Research, which applies pooled contextualized embeddings together with deep recurrent neural networks for multiple tasks;
- Whether the extractor successfully determined that the model element name had entities that had to be extracted, i.e., whether it contained a verb phrase, a noun phrase, or a named entity.
- Whether the extractor actually extracted the required entities successfully. Note that it was required to evaluate whether both verb phrases and noun phrases were successfully extracted. In cases where multiple phrases had to be extracted, it was considered that all of them had to be present in the output for the result to be considered correct.
7.3. Evaluation Methodology
- Accuracy, which is defined as the proportion of correctly executed transformations to the total number of performed transformations.
- Mean deviation between the number of extracted outputs and the number of benchmark outputs, which is used to measure the extraction error per model.
- Jaccard distance between the extracted outputs and the benchmark outputs, which is used to evaluate the overlap between the set of elements that should have been generated and the set of elements that was actually generated; again, the mean is taken to aggregate performance per model.
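The three measures, written out for a single model, could be computed as follows. Variable names are ours: `extracted` is the generated element set, `benchmark` the expected one; the per-model means are simple averages of the last two measures:

```python
# Evaluation measures for one model, as described above.
def accuracy(correct_transformations, total_transformations):
    """Share of correctly executed transformations."""
    return correct_transformations / total_transformations

def count_deviation(extracted, benchmark):
    """Absolute difference in the number of generated elements."""
    return abs(len(extracted) - len(benchmark))

def jaccard_distance(extracted, benchmark):
    """1 - |intersection| / |union| over the two element sets."""
    union = extracted | benchmark
    if not union:
        return 0.0
    return 1 - len(extracted & benchmark) / len(union)

print(jaccard_distance({"Invoice", "Customer"}, {"Invoice", "Order"}))  # ≈ 0.667
```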
7.4. Experimental Results
7.5. Discussion
- A limited set of bad naming practices, restricted to the most common ones (e.g., naming a use case using a single verb or noun), was considered. During the initial dataset screening, we observed many such cases. Identifying more cases of bad practices and introducing automated resolution of such cases into the developed solution could provide even better transformation results.
- The use of non-alphanumeric symbols (e.g., dashes, commas, apostrophes) inside words also matters. It is advised to remove them from the model element names. While more advanced tokenizers should be able to handle many such cases, the risk of mishandling such cases remains.
- Detection and resolution of abbreviations is also a relevant issue. As stated in Section 7.1, abbreviations can be very context-dependent and may or may not be recognized as expected. A similar issue is case sensitivity (e.g., “US” vs. “us”), as named entity recognizers can easily be confused.
- Part-of-speech tagging can be sensitive to letter cases. While some modelers do prefer starting each word with a capital letter when naming activities, tasks, or use cases, some NLP tools may fail to tag them correctly (e.g., “issue invoice” could be tagged as <VERB><NOUN>, but “Issue Invoice” might become <NOUN><NOUN>, which would be an incorrect tagging result). During initial experimentation, we observed that some NLP tools, like Spacy, were quite sensitive to letter cases, which is also significant for practical application, as modeling practitioners tend to use proper or even mixed-case names. While such cases could be normalized to lowercase, doing so increases the risk that some features required for proper processing could be lost (e.g., recognition of named entities by the first uppercase letters).
- Generally, the use of conjunctive/disjunctive clauses in element names is also considered a bad practice when modeling BPMN processes, UML use cases, or any other type of activity-like element, as such names should be refactored into two or more elements. In our experiment, we considered that cases containing a single verb phrase and multiple noun phrases could be processed to generate multiple ITMCT output elements (e.g., “assign manager and assistant” can be processed as “assign manager” and “assign assistant”). Multiple verb phrases for the same subject could be processed similarly, provided the verb phrases are identified correctly (e.g., “create and process invoice” can be processed as “create invoice” and “process invoice”). However, having multiple candidate verb phrases and noun phrases leads to much more complex processing, which may require more advanced NLP techniques, such as dependency parsing. The same applies to cases where verb phrases, noun phrases, or both are separated by commas or other separator symbols (e.g., “create, process, and manage invoice”). This could be addressed in our further research.
- There can be general ambiguity in detecting named entities and abbreviations. As stated in the previous section, there are numerous situations where entities cannot be processed correctly due to a lack of contextual information. Additional tools that take into account multiple features to improve precise meaning detection (e.g., context-based classifiers) could be applied to mitigate this problem and to correct extractor prediction. Such developments would require additional sources of input data and could also be considered as one of the goals of our future research.
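The simple conjunctive case discussed above (one verb phrase with several noun objects) can be sketched as follows. Anything beyond this shape, e.g., multiple verb phrases, would indeed require dependency parsing:

```python
# Sketch of splitting a conjunctive element name into multiple output
# names. Only the "<verb> <noun> and <noun>" shape is handled; this is
# an illustrative assumption, not the full resolution.
def split_conjunctive(name):
    words = name.split()
    if "and" not in words:
        return [name]
    verb = words[0]
    rest = " ".join(words[1:])
    objects = [o.strip() for o in rest.split(" and ")]
    return [f"{verb} {obj}" for obj in objects if obj]

print(split_conjunctive("assign manager and assistant"))
# ['assign manager', 'assign assistant']
```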
8. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Anjorin, A.; Lauder, M.; Patzina, S.; Schürr, A. eMoflon: Leveraging EMF and professional CASE tools. In Proceedings of the Tagungsband Der Informatik 2011, Lecture Notes in Informatics, Berlin, Germany, 4–7 October 2011; Volume 192. [Google Scholar]
- Klassen, L.; Wagner, R. EMorF—A tool for model transformations. Electron. Commun. EASST 2012, 54, 1–6. [Google Scholar]
- Object Management Group (OMG). Meta Object Facility (MOF) 2.0 Query/View/Transformation Specification. OMG spec v.1.2. 2015. Available online: https://www.omg.org/spec/QVT/1.2 (accessed on 1 July 2020).
- Jouault, F.; Allilaire, F.; Bézivin, J.; Kurtev, I.; Valduriez, P. ATL: A QVT-like transformation language. In Proceedings of the Companion to the 21th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA, Portland, OR, USA, 22–26 October 2006. [Google Scholar]
- Young, T.; Hazarika, D.; Poria, S.; Cambria, E. Recent trends in deep learning based natural language processing. IEEE Comput. Intel. Mag. 2018, 13, 55–75. [Google Scholar] [CrossRef]
- Otter, D.W.; Medina, J.R.; Kalita, J.K.A. Survey of the usages of deep learning in natural language processing. IEEE Trans. Neural Netw. Learn. Syst. 2020, 1–21. [Google Scholar] [CrossRef] [Green Version]
- Mens, T.; Van Gorp, P.A. Taxonomy of model transformation. Electron. Notes Theor. Comput. Sci. 2006, 152, 125–142. [Google Scholar] [CrossRef] [Green Version]
- Rose, L.M.; Herrmannsdoerfer, M.; Mazanek, S.; Van Gorp, P.; Buchwald, S.; Horn, T.; Kalnina, E.; Lano, K.; Schätz, B.; Wimmer, M. Graph and model transformation tools for model migration. Softw. Syst. Model. 2012, 13, 323–359. [Google Scholar] [CrossRef]
- Hildebrandt, S.; Lambers, L.; Giese, H.; Rieke, J.; Greenyer, J.; Schäfer, W.; Lauder, M.; Anjorin, A.; Schürr, A. A survey of triple graph grammar tools. Electron. Commun. EASST 2013, 57. [Google Scholar] [CrossRef]
- Lano, K.; Kolahdouz-Rahimi, S.; Yassipour-Tehrani, S.; Sharbaf, M. A survey of model transformation design patterns in practice. J. Syst. Softw. 2018, 140, 48–73. [Google Scholar] [CrossRef] [Green Version]
- Silva, G.C.; Rose, L.; Calinescu, R.A. Qualitative study of model transformation development approaches: Supporting novice developers. In Proceedings of the 1st International Workshop in Model-Driven Development Processes and Practices (MD2P2), Valencia, Spain, 28 September–3 October 2014; pp. 18–27. [Google Scholar]
- Dori, D. Model-Based Systems Engineering with OPM and SysML; Springer: New York, NY, USA, 2016. [Google Scholar]
- Skersys, T.; Danenas, P.; Butleris, R. Extracting SBVR business vocabularies and business rules from UML use case diagrams. J. Syst. Softw. 2018, 141, 111–130. [Google Scholar] [CrossRef]
- Skersys, T.; Danenas, P.; Butleris, R. Model-based M2M transformations based on drag-and-drop actions: Approach and implementation. J. Syst. Softw. 2016, 122, 327–341. [Google Scholar] [CrossRef]
- Skersys, T.; Pavalkis, S.; Nemuraite, L. Implementing Semantically Rich Business Vocabularies in CASE Tools. In Proceedings of the AIP International Conference on Numerical Analysis and Applied Mathematics (ICNAAM-2014), Rhodes, Greece, 22–28 September 2014; Theodore, E.S., Charalambos, T., Eds.; AIP Publishing: Melville, NY, USA, 2015; Volume 1648, pp. 1–4. [Google Scholar]
- Object Management Group (OMG). Semantics of Business Vocabulary and Business Rules (SBVR) v.1.5, OMG Doc. No.: Formal/2019–10–02. 2019. Available online: https://www.omg.org/spec/SBVR/About-SBVR/ (accessed on 1 July 2020).
- Object Management Group (OMG). UML Profile for BPMN Processes. OMG spec. v.1.0. 2014. Available online: http://www.omg.org/spec/BPMNProfile/1.0/ (accessed on 1 July 2020).
- Object Management Group (OMG). Service Oriented Architecture Modeling Language (SoaML). OMG Spec. v. 1.0.1. May 2012. Available online: www.omg.org/spec/SoaML/ (accessed on 1 July 2020).
- Object Management Group (OMG). OMG System Modeling Language. OMG spec. v.1.6. 2019. Available online: https://www.omg.org/spec/SysML/1.6/ (accessed on 1 July 2020).
- Vileiniškis, T.; Skersys, T.; Pavalkis, S.; Butleris, R.; Butkienė, R. Lightweight Approach to Model Traceability in a CASE Tool. In Proceedings of the AIP Conference Proceedings: International Conference of Numerical Analysis and Applied Mathematics (ICNAAM 2016), Rhodes, Greece, 19–25 September 2016; AIP Publishing: Melville, NY, USA, 2017; Volume 1863A, pp. 1–4.
- Kahani, N.; Bagherzadeh, M.; Cordy, J.R.; Dingel, J.; Varro, D. Survey and classification of model transformation tools. Softw. Syst. Model. 2019, 18, 2361–2397.
- Schürr, A. Specification of graph translators with triple graph grammars. In Proceedings of the WG’94 Workshop on Graph-Theoretic Concepts in Computer Science, Herrsching, Germany, 16–18 June 1994; pp. 151–163.
- Arendt, T.; Biermann, E.; Jurack, S.; Krause, C.; Taentzer, G. Henshin: Advanced concepts and tools for in-place EMF model transformations. In Model Driven Engineering Languages and Systems (MODELS 2010), Oslo, Norway, 3–8 October 2010; Lecture Notes in Computer Science; Volume 6394, pp. 121–135.
- Biermann, E.; Ehrig, K.; Köhler, C.; Kuhns, G.; Taentzer, G.; Weiss, E. Graphical definition of in-place transformations in the Eclipse modeling framework. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4199, pp. 425–439.
- Schippers, H.; Van Gorp, P.; Janssens, D. Leveraging UML profiles to generate plugins from visual model transformations. Electron. Notes Theor. Comput. Sci. 2005, 127, 5–16.
- Van Gorp, P. Model-Driven Development of Model Transformations. Ph.D. Thesis, University of Antwerpen, Antwerpen, Belgium, 2008.
- Muliawan, O.; Janssens, D. Model refactoring using MoTMoT. Int. J. Softw. Tools Technol. Transf. 2010, 12, 201–209.
- Sendall, S.; Perrouin, G.; Guelfi, N.; Biberstein, O. Supporting Model-to-Model Transformations: The VMT Approach; CTIT Technical Report TR-CTIT-03-27; University of Twente: Enschede, The Netherlands, 2003.
- Willink, E.D. UMLX: A graphical transformation language for MDA. In Proceedings of the Workshop on Model Driven Architecture: Foundations and Applications, Nuremberg, Germany, 7–10 November 2003; pp. 13–24.
- Agrawal, A.; Karsai, G.; Neema, S.; Shi, F.; Vizhanyo, A. The design of a language for model transformations. Softw. Syst. Model. 2006, 5, 261–288.
- Salemi, S.; Selamat, A.; Penhaker, M. A model transformation framework to increase OCL usability. J. King Saud Univ. Comput. Inf. Sci. 2016, 28, 13–26.
- Kalnins, A.; Barzdins, J.; Celms, E. The model transformation language MOLA. Lect. Notes Comput. Sci. 2005, 3599, 62–76.
- The Eclipse Foundation: Viatra Project. 2016. Available online: http://www.eclipse.org/viatra/ (accessed on 1 July 2020).
- Ergin, H.; Syriani, E.; Gray, J. Design pattern-oriented development of model transformations. Comput. Lang. Syst. Struct. 2016, 46, 106–139.
- Lano, K.; Kolahdouz-Rahimi, S. Model-transformation design patterns. IEEE Trans. Softw. Eng. 2014, 40, 1224–1259.
- Leopold, H.; Smirnov, S.; Mendling, J. On the refactoring of activity labels in business process models. Inf. Syst. 2012, 37, 443–459.
- Leopold, H.; Rami-Habib, E.-S.; Mendling, J.; Guerreiro Azevedo, L.; Baião, F.A. Detection of naming convention violations in process models for different languages. Decis. Support Syst. 2013, 56, 310–325.
- Leopold, H. Natural Language in Business Process Models: Theoretical Foundations, Techniques, and Applications; Springer International Publishing: Cham, Switzerland, 2013.
- Pittke, F. Linguistic Refactoring of Business Process Models. Ph.D. Thesis, WU Vienna University of Economics and Business, Vienna, Austria, 2015.
- Pittke, F.; Leopold, H.; Mendling, J. When language meets language: Anti patterns resulting from mixing natural and modeling language. In Proceedings of the BPM: Business Process Management Workshops, Eindhoven, The Netherlands, 7–11 September 2014; pp. 118–129.
- Leopold, H.; Pittke, F.; Mendling, J. Ensuring the canonicity of process models. Data Knowl. Eng. 2017, 111, 22–38.
- Somogyi, F.A.; Asztalos, M. Systematic review of matching techniques used in model-driven methodologies. Softw. Syst. Model. 2020, 19, 693–720.
- Pileggi, S.F.; Fernandez-Llatas, C. Semantic Interoperability Issues, Solutions, Challenges; River Publishers: Wharton, TX, USA, 2012.
- Kukich, K. Techniques for automatically correcting words in text. ACM Comput. Surv. 1992, 24, 377–439.
- Rozovskaya, A.; Roth, D. Grammatical error correction: Machine translation and classifiers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; Volume 1, pp. 2205–2215.
- Junczys-Dowmunt, M.; Grundkiewicz, R.; Guha, S.; Heafield, K. Approaching neural grammatical error correction as a low-resource machine translation task. In Proceedings of NAACL 2018, Association for Computational Linguistics. arXiv 2018, arXiv:1804.05940.
- Kiyono, S.; Suzuki, J.; Mita, M.; Mizumoto, T.; Inui, K. An empirical study of incorporating pseudo data into grammatical error correction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; pp. 1236–1242.
- Petrov, S.; Barrett, L.; Thibaux, R.; Klein, D. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (ACL-44), Sydney, Australia, 22 July 2006; pp. 433–440.
- Socher, R.; Lin, C.C.-Y.; Ng, A.Y.; Manning, C.D. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML’11); Getoor, L., Scheffer, T., Eds.; Omnipress: Madison, WI, USA, 2011; pp. 129–136.
- Vinyals, O.; Kaiser, L.; Koo, T.; Petrov, S.; Sutskever, I.; Hinton, G. Grammar as a foreign language. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS’15), 7–12 December 2015; MIT Press: Cambridge, MA, USA, 2015; Volume 2, pp. 2773–2781.
- Dozat, T.; Manning, C.D. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, 15–20 July 2018; Volume 2, pp. 484–490.
- Agichtein, E.; Gravano, L. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM Conference on Digital Libraries, San Antonio, TX, USA, 2–6 June 2000.
- Lai, S.; Leung, K.S.; Leung, Y. SUNNYNLP at SemEval-2018 Task 10: A support-vector-machine-based method for detecting semantic difference using taxonomy and word embedding features. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval), New Orleans, LA, USA, 5–6 June 2018; pp. 741–746.
- Santus, E.; Biemann, C.; Chersoni, E. BomJi at SemEval-2018 Task 10: Combining vector-, pattern- and graph-based information to identify discriminative attributes. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval), New Orleans, LA, USA, 5–6 June 2018; pp. 741–746.
- Kumar, S. A survey of deep learning methods for relation extraction. arXiv 2017, arXiv:1705.03645.
- Smirnova, A.; Cudré-Mauroux, P. Relation extraction using distant supervision: A survey. ACM Comput. Surv. 2018, 51, 106.
- He, L.; Lee, K.; Lewis, M.; Zettlemoyer, L. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada, 30 July–4 August 2017; Volume 1, pp. 473–483.
- Lee, J.; Seo, S.; Choi, Y.S. Semantic relation classification via bidirectional LSTM networks with entity-aware attention using latent entity typing. Symmetry 2019, 11, 785.
- Ren, F.; Zhou, D.; Liu, Z.; Li, Y.; Zhao, R.; Liu, Y.; Liang, X. Neural relation classification with text descriptions. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, NM, USA, 20–26 August 2018; pp. 1167–1177.
- Belinkov, Y.; Glass, J. Analysis methods in neural language processing: A survey. Trans. Assoc. Comput. Linguist. 2019, 7, 49–72.
- No Magic, Inc.: UML Profiling and DSL. User Guide, v 19.0 LTR. 2019. Available online: https://docs.nomagic.com/display/MD185/UML+Profiling+and+DSL+Guide (accessed on 1 July 2020).
- Miller, G. WordNet: A lexical database for English. Commun. ACM 1995, 38, 39–41.
- Apache Software Foundation. Apache OpenNLP Natural Language Processing Library. 2014. Available online: http://opennlp.apache.org/ (accessed on 1 July 2020).
- Baker, C.F.; Fillmore, C.J.; Lowe, J.B. The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (ACL ’98/COLING ’98), 10–14 August 1998; Volume 1, pp. 86–90.
- Navigli, R.; Ponzetto, S. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artif. Intell. 2012, 193, 217–250.
- Chrupala, G.; Dinu, G.; van Genabith, J. Learning morphology with Morfette. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco, 28–30 May 2008.
- Müller, T.; Cotterell, R.; Fraser, A.; Schütze, H. Joint lemmatization and morphological tagging with Lemming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 2268–2274.
- Dridan, R.; Oepen, S. Tokenization: Returning to a long solved problem. A survey, contrastive experiment, recommendations, and toolkit. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers, Jeju Island, Korea, 8–14 July 2012; Volume 2, pp. 378–382.
- Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 2011, 12, 2493–2537.
- Toutanova, K.; Klein, D.; Manning, C.D.; Singer, Y. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of HLT-NAACL, Edmonton, AB, Canada, 27 May–1 June 2003; Volume 1, pp. 173–180.
- Huang, Z.; Xu, W.; Yu, K. Bidirectional LSTM-CRF models for sequence tagging. arXiv 2015, arXiv:1508.01991.
- Brill, E. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Comput. Linguist. 1995, 21, 543–565.
- Chiu, J.P.C.; Nichols, E. Named entity recognition with bidirectional LSTM-CNNs. Trans. Assoc. Comput. Linguist. 2016, 4, 357–370.
- Lample, G.; Ballesteros, M.; Subramanian, S.; Kawakami, K.; Dyer, C. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), San Diego, CA, USA, 12–17 June 2016; pp. 260–270.
- Peters, M.E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; Zettlemoyer, L. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, LA, USA, 1–6 June 2018; Volume 1 (Long Papers), pp. 2227–2237.
- Ghaddar, A.; Langlais, P. Robust lexical features for improved neural network named-entity recognition. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), Santa Fe, NM, USA, 20–26 August 2018; pp. 1896–1907.
- Liu, L.; Shang, J.; Ren, X.; Xu, F.; Gui, H.; Peng, J.; Han, J. Empower sequence labeling with task-aware neural language model. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 5253–5260.
- Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.; Dean, J. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS’13), Lake Tahoe, NV, USA, 5–8 December 2013; Volume 2, pp. 3111–3119.
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 2–7 June 2019; Volume 1 (Long and Short Papers), pp. 4171–4186.
- Hearst, M.A. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th Conference on Computational Linguistics (COLING 1992), Nantes, France, 23–28 August 1992; Volume 2.
- Snow, R.; Jurafsky, D.; Ng, A.Y. Learning syntactic patterns for automatic hypernym discovery. In Proceedings of the 17th International Conference on Neural Information Processing Systems (NIPS’04); Saul, L.K., Weiss, Y., Bottou, L., Eds.; MIT Press: Cambridge, MA, USA, 2004; pp. 1297–1304.
- Onofrei, M.; Hulub, I.; Trandabăț, D.; Gîfu, D. Apollo at SemEval-2018 Task 9: Detecting hypernymy relations using syntactic dependencies. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval), New Orleans, LA, USA, 5–6 June 2018; pp. 898–902.
- Kawamura, T.; Sekine, M.; Matsumura, K. Hyponym/hypernym detection in science and technology thesauri from bibliographic datasets. In Proceedings of the 2017 IEEE 11th International Conference on Semantic Computing (ICSC), San Diego, CA, USA, 2017; pp. 180–187.
- Zhang, Z.; Li, J.; Zhao, H.; Tang, B. SJTU-NLP at SemEval-2018 Task 9: Neural hypernym discovery with term embeddings. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval), New Orleans, LA, USA, 5–6 June 2018; pp. 903–908.
- Hassan, A.Z.; Vallabhajosyula, M.S.; Pedersen, T. UMDuluth-CS8761 at SemEval-2018 Task 9: Hypernym discovery using Hearst patterns, co-occurrence frequencies and word embeddings. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval), New Orleans, LA, USA, 5–6 June 2018; pp. 914–918.
- Weske, M.; Decker, G.; Dumas, M.; La Rosa, M.; Mendling, J.; Reijers, H.A. Model collection of the Business Process Management Academic Initiative. Zenodo 2020.
- Honnibal, M.; Montani, I. spaCy 2 GitHub Repository. 2020. Available online: https://github.com/explosion/spaCy (accessed on 1 July 2020).
- Manning, C.D.; Surdeanu, M.; Bauer, J.; Finkel, J.; Bethard, S.J.; McClosky, D. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Baltimore, MD, USA, 23–24 June 2014; pp. 55–60.
- Akbik, A.; Bergmann, T.; Blythe, D.; Rasul, K.; Schweter, S.; Vollgraf, R. FLAIR: An easy-to-use framework for state-of-the-art NLP. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), NAACL 2019, Minneapolis, MN, USA, 2–7 June 2019; pp. 54–59.
- Gatt, A.; Reiter, E. SimpleNLG: A realisation engine for practical applications. In Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009), Athens, Greece, 30–31 March 2009; pp. 90–93.
- Bird, S.; Loper, E.; Klein, E. Natural Language Processing with Python; O’Reilly Media Inc.: Sebastopol, CA, USA, 2009.
Stereotype | Description
---|---
«TransformationPatternSpecification» | Abstract stereotype with a single optional representationText property. If specified, this property overrides the representationText property of the global drag-and-drop specification. Two sub-types are defined below.
«SimpleTransformationPatternSpecification» | Specialization of «TransformationPatternSpecification» that directly wraps a «MappingPattern» via its single mappingPattern property. Generally applied when no conditional branching is required to run a particular transformation between source and target models.
«ConditionedTransformationPatternSpecification» | Specialization of «TransformationPatternSpecification» that enables conditional processing of «MappingPattern» with respect to constraints defined for the execution of each pattern.
«ConditionedMappingSpecification» | Defines the processing of «MappingPattern» per conditional branch.
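Read as a class hierarchy, the stereotypes above can be sketched in plain code. The following Python dataclasses are a hypothetical illustration only: attribute names such as `branches`, `condition`, and `mapping_patterns` are our own labels (the table's full property lists are not reproduced in this excerpt), not the profile's actual property names.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class MappingPattern:
    """Stands in for a «MappingPattern» between source and target concepts."""
    name: str


@dataclass
class TransformationPatternSpecification:
    # Optional; when set, overrides the representationText of the global
    # drag-and-drop specification (per the table above).
    representation_text: Optional[str] = None


@dataclass
class SimpleTransformationPatternSpecification(TransformationPatternSpecification):
    # Directly wraps a single «MappingPattern»; no conditional branching.
    mapping_pattern: Optional[MappingPattern] = None


@dataclass
class ConditionedMappingSpecification:
    # Hypothetical names: a guard constraint plus the patterns it enables.
    condition: str = ""
    mapping_patterns: List[MappingPattern] = field(default_factory=list)


@dataclass
class ConditionedTransformationPatternSpecification(TransformationPatternSpecification):
    # One «ConditionedMappingSpecification» per conditional branch.
    branches: List[ConditionedMappingSpecification] = field(default_factory=list)
```

A conditioned specification would then hold one branch per naming-convention case (e.g., verb + noun vs. single noun), each branch guarding its own set of mapping patterns.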
Model No. | Executed Transformations | Executed Atomic Transformations | Expected Output Elements | Output Elements (Original [14]) | MeanDiff (Original [14]) | Accuracy (Original [14]) | Jaccard (Original [14]) | Output Elements (NLP-Enhanced) | MeanDiff (NLP-Enhanced) | Accuracy (NLP-Enhanced) | Jaccard (NLP-Enhanced)
---|---|---|---|---|---|---|---|---|---|---|---
BPM 1 | 2 | 13 | 41 | 39 | 0.462 | 0.154 | 0.282 | 37 | 0.308 | 0.846 | 0.846 |
BPM 2 | 2 | 9 | 31 | 27 | 0.444 | 0.667 | 0.667 | 26 | 0.556 | 0.667 | 0.667 |
BPM 3 | 3 | 9 | 29 | 27 | 0.222 | 0.556 | 0.657 | 24 | 0.556 | 0.333 | 0.444 |
BPM 4 | 4 | 16 | 48 | 48 | 0 | 0.875 | 0.911 | 48 | 0 | 0.875 | 0.938 |
BPM 5 | 1 | 19 | 57 | 57 | 0 | 0.316 | 0.535 | 56 | 0.053 | 0.842 | 0.921 |
BPM 6 | 1 | 13 | 37 | 39 | 0.154 | 0.154 | 0.378 | 37 | 0 | 0.923 | 1 |
BPM 7 | 2 | 13 | 37 | 39 | 0.154 | 0.462 | 0.513 | 37 | 0 | 1 | 1 |
BPM 8 | 3 | 9 | 32 | 27 | 0.778 | 0.222 | 0.222 | 27 | 0.778 | 0.444 | 0.5 |
BPM 9 | 1 | 13 | 35 | 38 | 0.231 | 0.154 | 0.231 | 35 | 0 | 0.923 | 1 |
BPM 10 | 3 | 32 | 87 | 93 | 0.188 | 0.531 | 0.538 | 85 | 0.063 | 0.938 | 0.938 |
BPM 11 | 1 | 10 | 25 | 29 | 0.4 | 0.4 | 0.433 | 23 | 0.2 | 0.8 | 0.8 |
BPM 12 | 1 | 12 | 33 | 34 | 0.083 | 0.5 | 0.5 | 32 | 0.083 | 0.917 | 0.917 |
BPM 13 | 3 | 16 | 43 | 48 | 0.313 | 0.188 | 0.271 | 44 | 0.063 | 0.875 | 0.906 |
BPM 14 | 4 | 9 | 29 | 27 | 0.222 | 0.556 | 0.630 | 24 | 0.556 | 0.556 | 0.556 |
BPM 15 | 4 | 7 | 21 | 21 | 0 | 0.714 | 0.75 | 21 | 0 | 1 | 1 |
BPM 16 | 2 | 12 | 36 | 36 | 0 | 0.833 | 0.833 | 32 | 0.333 | 0.667 | 0.667 |
BPM 17 | 1 | 7 | 21 | 21 | 0 | 0.286 | 0.464 | 21 | 0 | 1 | 1 |
BPM 18 | 1 | 4 | 10 | 10 | 0 | 0.5 | 0.5 | 9 | 0.25 | 0.75 | 0.750 |
BPM 19 | 3 | 10 | 36 | 30 | 0.6 | 0 | 0.217 | 29 | 0.7 | 0.6 | 0.667 |
BPM 20 | 4 | 10 | 30 | 30 | 0 | 0.8 | 0.825 | 27 | 0.3 | 0.7 | 0.7 |
UCM 1 | 1 | 5 | 14 | 15 | 0.2 | 0.8 | 0.8 | 15 | 0.2 | 0.2 | 0.5 |
UCM 2 | 1 | 6 | 18 | 18 | 0 | 1 | 1 | 18 | 0 | 1 | 1 |
UCM 3 | 1 | 5 | 14 | 15 | 0.2 | 0.8 | 0.8 | 13 | 0.6 | 0.4 | 0.4 |
UCM 4 | 1 | 3 | 9 | 9 | 0 | 1 | 1 | 8 | 0.333 | 0.333 | 0.5 |
UCM 5 | 1 | 3 | 9 | 9 | 0 | 1 | 1 | 9 | 0 | 1 | 1 |
UCM 6 | 1 | 3 | 9 | 9 | 0 | 1 | 1 | 7 | 0.667 | 0.333 | 0.333 |
UCM 7 | 1 | 4 | 12 | 12 | 0 | 1 | 1 | 11 | 0.25 | 0.75 | 0.75 |
UCM 8 | 1 | 4 | 12 | 12 | 0 | 1 | 1 | 11 | 0.25 | 0.75 | 0.75 |
UCM 9 | 1 | 4 | 12 | 12 | 0 | 1 | 1 | 12 | 0 | 1 | 1 |
UCM 10 | 1 | 3 | 9 | 9 | 0 | 0.667 | 0.667 | 8 | 0.333 | 0.667 | 0.667 |
UCM 11 | 1 | 2 | 6 | 6 | 0 | 0.5 | 0.667 | 6 | 0 | 0.5 | 0.75 |
UCM 12 | 1 | 7 | 21 | 21 | 0 | 1 | 1 | 21 | 0 | 1 | 1 |
UCM 13 | 2 | 3 | 9 | 9 | 0 | 1 | 1 | 8 | 0.333 | 0.667 | 0.667 |
UCM 14 | 2 | 2 | 6 | 6 | 0 | 1 | 1 | 6 | 0 | 1 | 1 |
UCM 15 | 1 | 5 | 15 | 15 | 0 | 0.6 | 0.6 | 15 | 0 | 1 | 1 |
UCM 16 | 2 | 3 | 9 | 9 | 0 | 1 | 1 | 9 | 0 | 0.333 | 0.333 |
UCM 17 | 1 | 2 | 6 | 6 | 0 | 0.5 | 0.5 | 6 | 0 | 1 | 1 |
UCM 18 | 1 | 1 | 3 | 3 | 0 | 1 | 1 | 3 | 0 | 1 | 1 |
UCM 19 | 1 | 5 | 14 | 15 | 0.2 | 0.8 | 0.8 | 14 | 0 | 0.6 | 0.7 |
UCM 20 | 1 | 1 | 3 | 3 | 0 | 1 | 1 | 3 | 0 | 1 | 1 |
Mean values: | | | | | 0.121 | 0.663 | 0.705 | | 0.194 | 0.755 | 0.789 |
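The metric definitions are not reproduced in this excerpt. A hedged reading of the Jaccard column is the standard Jaccard index between the set of expected target elements and the set actually generated by a transformation; the sketch below (with invented element names) illustrates only that standard measure, not necessarily the paper's exact computation.

```python
def jaccard(expected: set, produced: set) -> float:
    """Standard Jaccard index |E ∩ P| / |E ∪ P| between two element sets."""
    if not expected and not produced:
        return 1.0  # two empty sets are identical by convention
    return len(expected & produced) / len(expected | produced)


# Invented example: 3 expected target elements, 2 generated correctly
# plus 1 spurious element -> 2 shared out of 4 distinct elements.
expected = {"Task:Issue invoice", "Lane:Clerk", "Flow:start"}
produced = {"Task:Issue invoice", "Lane:Clerk", "Flow:wrong"}
print(jaccard(expected, produced))  # 0.5
```

Under this reading, a Jaccard of 1 (as for several UCM rows) means the generated element set matches the expected set exactly.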
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Danenas, P.; Skersys, T.; Butleris, R. Extending Drag-and-Drop Actions-Based Model-to-Model Transformations with Natural Language Processing. Appl. Sci. 2020, 10, 6835. https://doi.org/10.3390/app10196835