Essential Features in a Theory of Context for Enabling Artificial General Intelligence
Abstract
1. Introduction
2. Background on Context: Definitions and Example Usage
2.1. Example Usage: Context in Conversation
2.2. Example Usage: Context as Background Knowledge
2.3. Example Usage: Expression and Behavior Adaptation in Social and Emotional Contexts
3. Research in Context-Rich Artificial Intelligence
3.1. Representation Learning
3.2. Commonsense Reasoning and Knowledge
3.3. Knowledge Graphs and Semantic Web
3.4. Explainable AI
4. Understanding Context in Practical AI Research
- Locality: Locality is usually an important aspect of context, especially in “embedding” or representation learning algorithms. For example, in the word2vec architecture [45], and others inspired by it [48,49], only words within a certain window of the target word are ‘activated’ and considered as the context of that word. Similar notions apply in graph and network embeddings [50,52]. However, more recently, context has become less local due to the use of powerful features such as biased random walks when training graph embeddings [51,79,80].
- Selective Activation of Salient Elements: Whether local (such as in the applications above) or non-local, context involves selective activation of salient elements. In the case of using random walks for embeddings, the nodes and edges in the walk may be considered to be the salient elements, even though they may not be considered local to the target node. In cognitive and agent-based architectures, certain kinds of long and short-term memory retrieval may be used to selectively activate salient elements [81].
- Relational Dependencies: In the definitions in Table 1, we noted indications of relational dependencies between elements or objects when context is invoked. Specifically, depending on the application, context may be defined as (emphasis ours) “a probability distribution over the concepts in an environment [20], a set of relationships between objects [21], logical statements that represent cause and effect [22], or a function to select relevant features for object recognition [23].” Another kind of relational dependency may arise due to the application. On social media, for example, the context for training a tweet embedding may include metadata (such as the user posting the tweet) rather than just the content in the tweet [82,83]. This metadata expresses the relation between a user profile and the actual tweet content, both of which can exist independently, but which need to be related to each other to learn good embeddings for either.
- Implicitness: Especially in common sense reasoning and explainable AI research, certain pieces of information are considered implicit. Grice’s conversational maxims are good examples of statements that explain some of the implicitness in conversation [24]; however, other similar maxims, or a generalization of the original maxims, may be necessary to categorize and explain implicitness in domains such as computer vision that are non-linguistic.
- Open-World Environments: As AI systems are implemented in organizations and customer-facing applications, they must increasingly adapt to the “open world”. An open world environment is one where either the structure or parameterization (or both) is unknown or unknowable [84]. An example of the latter is a chaotic system with a high degree of uncertainty around the initial conditions [85]. One can probabilistically reason about the outcomes given the laws governing the dynamics of the system, but the outcome at a given point in time cannot be predicted with any reasonable confidence. The real world, and complex systems in the real world, are good examples of open worlds that are (at best) partially unknown, at least in practice, and (at worst) may not be completely knowable, even in theory. Philosophers have long argued about whether complete and provable knowledge is even possible in such systems, due to uncertainty [86], vagueness [87], and the circularity of induction [88]. An important aspect of open worlds is that, due to their (partially) unknown parameterization and structure, unexpected situations and “novelties” might occur [89]. The COVID-19 pandemic is a good example of a global novelty that may have profound long-term consequences, long after the pandemic itself has ended. If interactions between human or machine agents are occurring in an open world, an instantiated theory of context will likely have to rely on powerful representational techniques, such as open sets and infinite-state Markov processes [90,91]. We hypothesize that early theories of context will likely make the closed-world assumption, with a proper framing of tasks, goals and assumptions (as discussed below). Research on open worlds is still in its infancy in the AI community, although recent progress on open-world learning has been impressive [92]. We posit that a theory of context in open-world environments may be necessary for building a sufficiently powerful AGI.
- Event-Driven Triggers: While real life, and many aspects of human–human interaction, often seem to proceed seamlessly and “naturally”, there are epistemic and behavioral transitions in most non-trivial interactions. In some cases, a specific event (such as a disagreeable statement) may trigger such a transition explicitly. When two friends are having a casual conversation, and one of them receives an urgent phone call from her child’s school, the event triggers stress and causes the other friend to express concern, sometimes non-verbally. In most real world and open world environments, unexpected events will occur with non-vanishing probability, as argued earlier. Such events could potentially alter the terms of an interaction, which must be accounted for by a robust theory of context [93,94].
- Framing of Tasks, Goals and Assumptions: Contextualization often occurs in the presence or ‘frame’ of one or more tasks, goals and assumptions. In cognitive science, such framing is considered vital for human interaction [95,96]. Some or all of these may be implicit. When having a written dialog with a human individual, a machine is implicitly assuming that the person can read and understand the language in which the machine is outputting dialog. Moreover, in the context of a customer service dialog, the machine may assume that the customer has a specific need or problem, and the conversation is occurring with the goal of solving the problem. The task is then sequential and multi-step: to first understand the need of the customer, and to then devise a solution for it, without recourse to a human operator, if possible. Within a specific application, therefore, an instantiation of a ‘good’ theory of context must formalize and epistemically represent the tasks, goals and assumptions for all agents (human and machine) interacting in an environment. For example, if these epistemic states are considered continuous, rather than discrete, the theory may rely on Markov processes and differential equations, whereas workflow-like models [97,98] may suffice if there is only a small set of tasks, goals and assumptions.
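The notion of locality discussed in the list above (a fixed context window around a target word, as in word2vec-style models) can be sketched in a few lines of Python. This is a minimal illustration, not taken from any cited implementation; the function name and example sentence are our own.

```python
def context_pairs(tokens, window=2):
    """Return (target, context) pairs, where the context of each token
    is restricted to a symmetric window of the given radius around it,
    mirroring the local-context assumption of word2vec-style models."""
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a token is not its own context
                pairs.append((target, tokens[j]))
    return pairs

sentence = "the quick brown fox jumps".split()
print(context_pairs(sentence, window=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'), ...]
```

Widening `window` makes the context progressively less local; in actual embedding training these pairs would feed a predictive objective rather than being materialized as a list.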
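Similarly, the random-walk form of selective activation described above (as in DeepWalk- or node2vec-style embeddings) can be sketched as follows: the walk visits salient nodes that need not be local neighbors of the start node. The uniform neighbor choice is a simplification of the biased walks those methods actually use, and the toy graph is our own.

```python
import random

def random_walk(graph, start, length, seed=None):
    """Return a walk of up to `length` nodes from `start` over an
    adjacency-list graph, choosing uniformly among neighbors.
    The visited nodes serve as the 'context' of the start node."""
    rng = random.Random(seed)
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph.get(walk[-1], [])
        if not neighbors:  # dead end: stop the walk early
            break
        walk.append(rng.choice(neighbors))
    return walk

g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a"]}
print(random_walk(g, "a", 5, seed=0))
```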
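Finally, the workflow-like framing of a small, discrete set of tasks, goals and assumptions (the customer-service dialog above) can be sketched as a toy state machine. The states, events and transitions below are illustrative assumptions on our part, not a model proposed in the cited workflow literature.

```python
# Hypothetical dialog workflow: first understand the customer's need,
# then devise a solution, escalating to a human operator on rejection.
TRANSITIONS = {
    ("greeting", "customer_states_problem"): "understand_need",
    ("understand_need", "need_clarified"): "devise_solution",
    ("devise_solution", "solution_accepted"): "done",
    ("devise_solution", "solution_rejected"): "escalate_to_human",
}

def step(state, event):
    """Advance the workflow; events with no defined transition
    leave the state (and hence the framing) unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "greeting"
for event in ["customer_states_problem", "need_clarified", "solution_accepted"]:
    state = step(state, event)
print(state)  # done
```

An event-driven trigger, in this framing, is simply an event whose transition moves the interaction into a qualitatively different state (here, `escalate_to_human`).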
5. Supporting Ecosystems and Social Factors
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Wang, Y.E.; Wei, G.Y.; Brooks, D. Benchmarking tpu, gpu, and cpu platforms for deep learning. arXiv 2019, arXiv:1907.10701. [Google Scholar]
- Rimal, B.P.; Choi, E.; Lumb, I. A taxonomy and survey of cloud computing systems. In Proceedings of the 2009 Fifth International Joint Conference on INC, IMS and IDC, Seoul, Korea, 25–27 August 2009; pp. 44–51. [Google Scholar]
- Janssen, M.; Charalabidis, Y.; Zuiderwijk, A. Benefits, adoption barriers and myths of open data and open government. Inf. Syst. Manag. 2012, 29, 258–268. [Google Scholar] [CrossRef] [Green Version]
- Guo, G.; Zhang, N. A survey on deep learning based face recognition. Comput. Vis. Image Underst. 2019, 189, 102805. [Google Scholar] [CrossRef]
- Singh, S.P.; Kumar, A.; Darbari, H.; Singh, L.; Rastogi, A.; Jain, S. Machine translation using deep learning: An overview. In Proceedings of the 2017 International Conference on Computer, Communications and Electronics (Comptelix), Jaipur, India, 1–2 July 2017; pp. 162–167. [Google Scholar]
- Sharma, Y.; Gupta, S. Deep learning approaches for question answering system. Procedia Comput. Sci. 2018, 132, 785–794. [Google Scholar] [CrossRef]
- Ford, M. Architects of Intelligence: The Truth about AI from the People Building It; Packt Publishing Ltd.: Birmingham, UK, 2018. [Google Scholar]
- Müller, V.C.; Bostrom, N. Future progress in artificial intelligence: A survey of expert opinion. In Fundamental Issues of Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2016; pp. 555–572. [Google Scholar]
- Moor, J. The Dartmouth College artificial intelligence conference: The next fifty years. AI Mag. 2006, 27, 87. [Google Scholar]
- Mims, C. Why Artificial Intelligence Isn’t Intelligent. 2021. Available online: https://www.wsj.com/articles/why-artificial-intelligence-isnt-intelligent-11627704050 (accessed on 2 September 2021).
- Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep face recognition. In Proceedings of the BMVC 2015, Swansea, UK, 7–10 September 2015. [Google Scholar]
- Dehghani, M.; Gouws, S.; Vinyals, O.; Uszkoreit, J.; Kaiser, Ł. Universal transformers. arXiv 2018, arXiv:1807.03819. [Google Scholar]
- Wang, Y.; Sun, A.; Han, J.; Liu, Y.; Zhu, X. Sentiment analysis by capsules. In Proceedings of the 2018 World Wide Web Conference, Lyon, France, 23–27 April 2018; pp. 1165–1174. [Google Scholar]
- Lockard, C.; Shiralkar, P.; Dong, X.L. Openceres: When open information extraction meets the semi-structured web. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 3–5 June 2019; pp. 3047–3056. [Google Scholar]
- Irvine, J.; Schieffelin, B.; Series, C.; Goodwin, M.H.; Kuipers, J.; Kulick, D.; Lucy, J.; Ochs, E. Rethinking Context: Language as an Interactive Phenomenon; Number 11; Cambridge University Press: Cambridge, UK, 1992. [Google Scholar]
- Brézillon, P. Context in Artificial Intelligence: I. A survey of the literature. Comput. Artif. Intell. 1999, 18, 321–340. [Google Scholar]
- Dey, A.K.; Abowd, G.D.; Salber, D. A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications. Hum.–Comput. Interact. 2001, 16, 97–166. [Google Scholar] [CrossRef]
- Schaefer, K.E.; Oh, J.; Aksaray, D.; Barber, D. Integrating Context into Artificial Intelligence: Research from the Robotics Collaborative Technology Alliance. AI Mag. 2019, 40, 28–40. [Google Scholar] [CrossRef]
- Singhal, A.; Luo, J.; Zhu, W. Probabilistic spatial context models for scene content understanding. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 1, p. I. [Google Scholar]
- Rabinovich, A.; Vedaldi, A.; Galleguillos, C.; Wiewiora, E.; Belongie, S. Objects in context. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8. [Google Scholar]
- Zettlemoyer, L.S.; Collins, M. Learning Context-Dependent Mappings from Sentences to Logical Form. In Proceedings of the 47th Annual Meeting of the ACL, Suntec, Singapore, 2–7 August 2009. [Google Scholar]
- Heitz, G.; Koller, D. Learning spatial context: Using stuff to find things. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2008; pp. 30–43. [Google Scholar]
- Grice, H.P. Logic and conversation. In Speech Acts; Brill: Buckinghamshire, UK, 1975; pp. 41–58. [Google Scholar]
- Mitchell, T.; Cohen, W.; Hruschka, E.; Talukdar, P.; Yang, B.; Betteridge, J.; Carlson, A.; Dalvi, B.; Gardner, M.; Kisiel, B.; et al. Never-ending learning. Commun. ACM 2018, 61, 103–115. [Google Scholar] [CrossRef] [Green Version]
- Shawar, B.A.; Atwell, E. Chatbots: Are they really useful? Ldv Forum 2007, 22, 29–49. [Google Scholar]
- Brandtzaeg, P.B.; Følstad, A. Chatbots: Changing user needs and motivations. Interactions 2018, 25, 38–43. [Google Scholar] [CrossRef] [Green Version]
- Potter, J. Cognition as context (whose cognition?). Res. Lang. Soc. Interact. 1998, 31, 29–44. [Google Scholar] [CrossRef] [Green Version]
- Alterman, R. Adaptive planning. Cogn. Sci. 1988, 12, 393–421. [Google Scholar] [CrossRef]
- Öhman, A. Has evolution primed humans to “beware the beast”? Proc. Natl. Acad. Sci. USA 2007, 104, 16396–16397. [Google Scholar] [CrossRef] [Green Version]
- Meaney, M.J. Nature, nurture, and the disunity of knowledge. Ann. N. Y. Acad. Sci. 2001, 935, 50–61. [Google Scholar] [CrossRef] [PubMed]
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- Sarker, M.K.; Zhou, L.; Eberhart, A.; Hitzler, P. Neuro-Symbolic Artificial Intelligence Current Trends. arXiv 2021, arXiv:2105.05330. [Google Scholar]
- Floridi, L.; Chiriatti, M. GPT-3: Its nature, scope, limits, and consequences. Minds Mach. 2020, 30, 681–694. [Google Scholar] [CrossRef]
- Coulmas, F.; Rabin, C.; Ibrahim, M.H.; Massamba, D.P.B.; Daswani, C.J.; Pasierbsky, F.; Takada, M.; Sugito, S.; Porksen, U.; Ehlich, K.; et al. Language Adaptation; Cambridge University Press: Cambridge, UK, 1989. [Google Scholar]
- Attardo, S. The Routledge Handbook of Language and Humor; Taylor & Francis: New York, NY, USA, 2017. [Google Scholar]
- Shamay-Tsoory, S.G.; Tomer, R.; Aharon-Peretz, J. The neuroanatomical basis of understanding sarcasm and its relationship to social cognition. Neuropsychology 2005, 19, 288. [Google Scholar] [CrossRef] [Green Version]
- Tagg, C. Discourse of Text Messaging: Analysis of SMS Communication; Bloomsbury Publishing: London, UK, 2012. [Google Scholar]
- Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef]
- Zhong, G.; Wang, L.N.; Ling, X.; Dong, J. An overview on data representation learning: From traditional feature learning to recent deep learning. J. Financ. Data Sci. 2016, 2, 265–278. [Google Scholar] [CrossRef]
- Zheng, A.; Casari, A. Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2018. [Google Scholar]
- Feldman, R. Techniques and applications for sentiment analysis. Commun. ACM 2013, 56, 82–89. [Google Scholar] [CrossRef]
- Sharma, A.; Dey, S. A comparative study of feature selection and machine learning techniques for sentiment analysis. In Proceedings of the 2012 ACM Research in Applied Computation Symposium, San Antonio, TX, USA, 23–26 October 2012; pp. 1–7. [Google Scholar]
- Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 2011, 12, 2493–2537. [Google Scholar]
- Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 3111–3119. [Google Scholar]
- Blei, D.M.; Ng, A.Y.; Jordan, M.I. Latent dirichlet allocation. J. Mach. Learn. Res. 2003, 3, 993–1022. [Google Scholar]
- Hsu, C.C.; Lin, C.W. Cnn-based joint clustering and representation learning with feature drift compensation for large-scale image data. IEEE Trans. Multimed. 2017, 20, 421–429. [Google Scholar] [CrossRef] [Green Version]
- Joulin, A.; Grave, E.; Bojanowski, P.; Mikolov, T. Bag of Tricks for Efficient Text Classification. arXiv 2016, arXiv:1607.01759. [Google Scholar]
- Pennington, J.; Socher, R.; Manning, C.D. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar]
- Perozzi, B.; Al-Rfou, R.; Skiena, S. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 701–710. [Google Scholar]
- Grover, A.; Leskovec, J. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 855–864. [Google Scholar]
- Kejriwal, M.; Szekely, P. Neural embeddings for populated geonames locations. In International Semantic Web Conference; Springer: Berlin/Heidelberg, Germany, 2017; pp. 139–146. [Google Scholar]
- Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in vision: A survey. arXiv 2021, arXiv:2101.01169. [Google Scholar]
- DeRose, J.F.; Wang, J.; Berger, M. Attention flows: Analyzing and comparing attention mechanisms in language models. IEEE Trans. Vis. Comput. Graph. 2020, 27, 1160–1170. [Google Scholar] [CrossRef]
- Minsky, M.L. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind; Simon & Schuster: New York, NY, USA, 2006. [Google Scholar]
- Davis, E.; Marcus, G. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM 2015, 58, 92–103. [Google Scholar] [CrossRef]
- Kejriwal, M. An AI Expert Explains Why It’s Hard to Give Computers Something You Take for Granted: Common Sense. 2021. Available online: https://theconversation.com/an-ai-expert-explains-why-its-hard-to-give-computers-something-you-take-for-granted-common-sense-165600 (accessed on 3 September 2021).
- Lin, H.; Ng, V. Abstractive summarization: A survey of the state of the art. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 1 February 2019; Volume 33, pp. 9815–9822. [Google Scholar]
- Lin, B.Y.; Zhou, W.; Shen, M.; Zhou, P.; Bhagavatula, C.; Choi, Y.; Ren, X. CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, Online, 16–20 November 2020; pp. 1823–1840. [Google Scholar] [CrossRef]
- Gordon, A.S.; Hobbs, J.R. A Formal Theory of Commonsense Psychology: How People Think People Think; Cambridge University Press: Cambridge, UK, 2017. [Google Scholar] [CrossRef]
- Santos, H.; Kejriwal, M.; Mulvehill, A.M.; Forbush, G.; McGuinness, D.L.; Rivera, A.R. An experimental study measuring human annotator categorization agreement on commonsense sentences. Exp. Results 2021, 2. [Google Scholar] [CrossRef]
- Koehler, J.; Nebel, B.; Hoffmann, J.; Dimopoulos, Y. Extending planning graphs to an ADL subset. In European Conference on Planning; Springer: Berlin/Heidelberg, Germany, 1997; pp. 273–285. [Google Scholar]
- Koenig, S.; Simmons, R.G. Risk-sensitive planning with probabilistic decision graphs. In Principles of Knowledge Representation and Reasoning; Elsevier: Amsterdam, The Netherlands, 1994; pp. 363–373. [Google Scholar]
- Singhal, A. Introducing the knowledge graph: Things, not strings. Off. Google Blog 2012, 5, 16. [Google Scholar]
- Bellomarini, L.; Fakhoury, D.; Gottlob, G.; Sallinger, E. Knowledge graphs and enterprise AI: The promise of an enabling technology. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering (ICDE), Macau, China, 8–12 April 2019; pp. 26–37. [Google Scholar]
- Xu, D.; Ruan, C.; Korpeoglu, E.; Kumar, S.; Achan, K. Product knowledge graph embedding for e-commerce. In Proceedings of the 13th International Conference on Web Search and Data Mining, Houston, TX, USA, 3–7 February 2020; pp. 672–680. [Google Scholar]
- Kejriwal, M. Knowledge Graphs and COVID-19: Opportunities, Challenges, and Implementation. Harv. Data Sci. Rev. 2020, 11, 300. [Google Scholar]
- Kejriwal, M. Domain-Specific Knowledge Graph Construction; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
- Kannan, A.V.; Fradkin, D.; Akrotirianakis, I.; Kulahcioglu, T.; Canedo, A.; Roy, A.; Yu, S.Y.; Arnav, M.; Al Faruque, M.A. Multimodal knowledge graph for deep learning papers and code. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Online, 19–23 October 2020; pp. 3417–3420. [Google Scholar]
- McGuinness, D.L.; Van Harmelen, F. OWL web ontology language overview. W3C Recomm. 2004, 10, 2004. [Google Scholar]
- Berners-Lee, T.; Hendler, J.; Lassila, O. The semantic web. Sci. Am. 2001, 284, 34–43. [Google Scholar] [CrossRef]
- Ilievski, F.; Szekely, P.; Kejriwal, M. Commonsense Knowledge Graphs (CSKGs). 2020. Available online: https://usc-isi-i2.github.io/ISWC20/ (accessed on 2 September 2021).
- Garcez, A.D.; Gori, M.; Lamb, L.C.; Serafini, L.; Spranger, M.; Tran, S.N. Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning. arXiv 2019, arXiv:1905.06088. [Google Scholar]
- Hagras, H. Toward human-understandable, explainable AI. Computer 2018, 51, 28–36. [Google Scholar] [CrossRef]
- Hoffman, R.R.; Mueller, S.T.; Klein, G.; Litman, J. Metrics for explainable AI: Challenges and prospects. arXiv 2018, arXiv:1812.04608. [Google Scholar]
- Turek, M. Explainable Artificial Intelligence (XAI). 2016. Available online: https://www.darpa.mil/program/explainable-artificial-intelligence (accessed on 2 September 2021).
- Došilović, F.K.; Brčić, M.; Hlupić, N. Explainable artificial intelligence: A survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; pp. 0210–0215. [Google Scholar]
- Valentino, M.; Thayaparan, M.; Freitas, A. Explainable natural language reasoning via conceptual unification. arXiv 2020, arXiv:2009.14539. [Google Scholar]
- Lim, S.; Lee, J.G. Motif-based embedding for graph clustering. J. Stat. Mech. Theory Exp. 2016, 2016, 123401. [Google Scholar] [CrossRef]
- Zhang, S.; Hu, Z.; Subramonian, A.; Sun, Y. Motif-driven contrastive learning of graph representations. arXiv 2020, arXiv:2012.12533. [Google Scholar]
- Garnham, A. Representing information in mental models. In Cognitive Models of Memory; Studies in Cognition; The MIT Press: Cambridge, MA, USA, 1997; pp. 149–172. [Google Scholar]
- Dhingra, B.; Zhou, Z.; Fitzpatrick, D.; Muehl, M.; Cohen, W.W. Tweet2vec: Character-Based Distributed Representations for Social Media. arXiv 2016, arXiv:1605.03481. [Google Scholar]
- Wang, L.; Gao, C.; Wei, J.; Ma, W.; Liu, R.; Vosoughi, S. An Empirical Survey of Unsupervised Text Representation Methods on Twitter Data. arXiv 2020, arXiv:2012.03468. [Google Scholar]
- Sehwag, V.; Bhagoji, A.N.; Song, L.; Sitawarin, C.; Cullina, D.; Chiang, M.; Mittal, P. Analyzing the robustness of open-world machine learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, London, UK, 15 November 2019; pp. 105–116. [Google Scholar]
- Cambel, A.B. Applied Chaos Theory: A Paradigm for Complexity; Elsevier: Amsterdam, The Netherlands, 1993. [Google Scholar]
- Zimmerman, M. Living with uncertainty: The moral significance of ignorance. Analysis 2009, 69, 785–787. [Google Scholar]
- Edgington, D. Vagueness by degrees. In Vagueness: A Reader; Keefe, R., Smith, P., Eds.; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
- Achinstein, P. Circularity and induction. Analysis 1963, 23, 123–127. [Google Scholar] [CrossRef]
- Senator, T. Science of Artificial Intelligence and Learning for Open-World Novelty (SAIL-ON). 2019. Available online: https://www.darpa.mil/program/science-of-artificial-intelligence-and-learning-for-open-world-novelty (accessed on 3 September 2021).
- Nakaoka, F.; Oda, N. Some applications of minimal open sets. Int. J. Math. Math. Sci. 2001, 27, 471–476. [Google Scholar] [CrossRef] [Green Version]
- Sennott, L.I. Average cost optimal stationary policies in infinite state Markov decision processes with unbounded costs. Oper. Res. 1989, 37, 626–633. [Google Scholar] [CrossRef]
- Langley, P. Open-world learning for radically autonomous agents. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13539–13543. [Google Scholar]
- Kotz, A.M.; Dittrich, K.R.; Mulle, J.A. Supporting semantic rules by a generalized event/trigger mechanism. In International Conference on Extending Database Technology; Springer: Berlin/Heidelberg, Germany, 1988; pp. 76–91. [Google Scholar]
- Batista-Navarro, R.T.; Kontonatsios, G.; Mihăilă, C.; Thompson, P.; Rak, R.; Nawaz, R.; Korkontzelos, I.; Ananiadou, S. Facilitating the analysis of discourse phenomena in an interoperable NLP platform. In International Conference on Intelligent Text Processing and Computational Linguistics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 559–571. [Google Scholar]
- Keren, G. Perspectives on Framing; Psychology Press: London, UK, 2011. [Google Scholar]
- Kühberger, A. Framing. In Cognitive Illusions: Intriguing Phenomena in Judgement, Thinking and Memory, 2nd ed.; Psychology Press: New York, NY, USA, 2017; pp. 79–98. [Google Scholar]
- Gottschalk, F.; Van Der Aalst, W.M.; Jansen-Vullers, M.H.; La Rosa, M. Configurable workflow models. Int. J. Coop. Inf. Syst. 2008, 17, 177–221. [Google Scholar] [CrossRef] [Green Version]
- Van Der Aalst, W.; Van Hee, K.M.; van Hee, K. Workflow Management: Models, Methods, and Systems; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
- NSF. National Artificial Intelligence (AI) Research Institutes. 2020. Available online: https://beta.nsf.gov/funding/opportunities/national-artificial-intelligence-research-institutes (accessed on 2 September 2021).
- Chemodanov, D.; Esposito, F.; Sukhov, A.; Calyam, P.; Trinh, H.; Oraibi, Z. AGRA: AI-augmented geographic routing approach for IoT-based incident-supporting applications. Future Gener. Comput. Syst. 2019, 92, 1051–1065. [Google Scholar] [CrossRef]
- Launchbury, J. DARPA Perspective on AI. 2017. Available online: https://www.youtube.com/watch?v=-O01G3tSYpU&t=198s (accessed on 2 September 2021).
- Gunning, D. Machine common sense concept paper. arXiv 2018, arXiv:1810.07528. [Google Scholar]
- Kejriwal, M.; Knoblock, C.A.; Szekely, P. Knowledge Graphs: Fundamentals, Techniques, and Applications; MIT Press: Cambridge, MA, USA, 2021. [Google Scholar]
- De Nicola, A.; Karray, H.; Kejriwal, M.; Matta, N. Knowledge, semantics and AI for risk and crisis management. J. Contingencies Crisis Manag. 2020, 28, 174–177. [Google Scholar] [CrossRef]
Table 1. Definitions and example studies of context.

| Definition / Study | Source |
|---|---|
| The circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed. | Oxford Languages |
| An early survey of context in AI in the late 1990s, including context in knowledge acquisition, context in communication, and context in ontologies, to only name a few. | [17] |
| Any information that can be used to characterize the situation of an entity (whereby) an entity is a user, a place, or a physical or computational object that is considered relevant to the interaction between a user and an application, including the user and application themselves. | [18] |
| (i) The parts of a discourse that surround a word or passage and can throw light on its meaning; (ii) The interrelated conditions in which something exists or occurs. | Merriam-Webster |
| (i) Applying context to a situation, task, or system state provides meaning and advances understanding that can affect future decisions or actions; (ii) Integration of context-driven AI is important for future robotic capabilities to support the development of situation awareness, calibrate appropriate trust, and improve team performance in collaborative human–robot teams. | [19] |
| Domain-specific usages include: “a probability distribution over the concepts in an environment [20], a set of relationships between objects [21], logical statements that represent cause and effect [22], or a function to select relevant features for object recognition [23].” | Quote is from [19], with cited sources including [20,21,22,23] |
Share and Cite
Kejriwal, M. Essential Features in a Theory of Context for Enabling Artificial General Intelligence. Appl. Sci. 2021, 11, 11991. https://doi.org/10.3390/app112411991