Trust, but Verify: Informed Consent, AI Technologies, and Public Health Emergencies
Abstract
1. Introduction
Overview of the Discussion
2. Responsible and Explainable AI
“… systems that can explain their rationale to a human user, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future” [6]
3. Behaviour and Causal Models
4. Informed Consent
“Voluntary agreement to or acquiescence in what another proposes or desires; compliance, concurrence, permission.” (https://www.oed.com/oed2/00047775, accessed on 12 May 2021)
“[The] circumstances under which buyers of software or visitors to a public Web site can make use of that software or site”.
“Consent [which] permeates both our law and our lives–particularly in the digital context”.
“[The] process of informed consent occurs when communication between a patient and physician results in the patient’s authorization or agreement to undergo a specific medical intervention”. (https://www.ama-assn.org/delivering-care/ethics/informed-consent, accessed on 12 May 2021)
“Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them. This opportunity is provided when adequate standards for informed consent are satisfied.” (Part C (1), [25])
“any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her”. (Art. 4(11), [3])
4.1. Applied Research Ethics
- Participant Autonomy or Respect for the Individual: a guarantee to protect the dignity of the participant as well as to respect their willingness or otherwise to take part. This assumes that the researcher (including those involved with clinical research) has some expectation of outcomes and can articulate it to the potential participant. Deception is possible under appropriate circumstances [33], including the use of placebos. Big data requires careful thought, since it shifts the emphasis away from individual perspectives [34]; a more societally focused view may therefore be more appropriate [35]. Originally, autonomy related directly to the informed consent process. However, the potential of big data and AI-enabled approaches suggests that this may need rethinking.
- Beneficence and non-maleficence: ensuring that the research will treat the participant well and avoid harm. This principle, most obvious in medical ethics, puts the onus on the researcher (or data scientist) to understand and control outcomes (see also [15,17]). Although there is no suggestion that the researcher would deliberately wish to cause harm, unsupervised learning may affect expected outcomes. Calls for transparency and for a human-in-the-loop to intervene if necessary [8,17] imply a recognition that predictions may not be fixed in advance. Once again, the informed nature of consent might be difficult to satisfy.
- Justice: to ensure the fair distribution of benefits. This final principle highlights a number of issues. The CARE Principles [36], originally conceived in regard to indigenous populations, stress the importance of ensuring that all stakeholders within research are at least treated equitably. For all its limitations, the trolley problem [37] calls into question the assessment of justice. During a public health emergency, and inherent in contact tracing, the issue is whether justice is better served by protecting the rights of the individual, especially privacy, or by a societal imperative.
4.2. Issues with Consent
4.3. Technology Acceptance
4.4. Trust
“… the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party.” [84]
5. Scenarios
- Contact tracing: During the COVID-19 pandemic, there has been some discussion about the technical implementation [100] and how tracing fits within a larger socio-technical context [101]. Introduction of such applications is not without controversy in socio-political terms [76,102]. At the same time, there is a balance to be struck between individual rights and the public good [103]; in the case of the COVID-19 pandemic, the social implications of the disease are almost as important as its impact on public and individual health [104]. Major challenges include:
- – Public Opinion;
- – Inadvertent disclosure of third-party data;
- – Public/Individual responses to alerts.
- Big Data Analytics: this includes exploiting the vast amounts of data available typically via the Internet to attempt to understand behavioural and other patterns [105,106]. Such approaches have already shown much promise in healthcare [107], and with varying degrees of success for tracing the COVID-19 pandemic [108]. There are, however, some concerns about the impact of big data on individuals and society [109,110]. Major challenges include:
- – Identification of key actors;
- – Mutual understanding between those actors;
- – Influence of those actors on processing (and results).
- Public Health Emergency Research: multidisciplinary efforts to understand, inform and ultimately control the transmission and proliferation of disease (see for instance [111]) as well as social impacts [99,104], and to consider the long-term implications of the COVID-19 pandemic and other PHEs [112]. Major challenges include:
- – Changes in research focus;
- – Changes introduced as research outcomes become available;
- – Respect for all potential groups;
- – Balancing individual and community rights;
- – Unpredicted benefits of research data and outcomes (e.g., in future).
6. Discussion and Recommendations
6.1. Recommendations for Research Ethics Review
- The research proposal should first describe in some detail the trustworthiness basis for the research engagement. I have used characteristics from the literature—integrity, benevolence, and competence—though others, such as reputation and evidence of participant reactions in related work, may be more appropriate.
- The context of the proposed research should be disclosed, including the identification of the types of contextual effects which might be expected. These may include the general socio-political environment, existing relationships that the research participant might be expected to be aware of (such as clinician–patient), and any dynamic effects, such as implications for other cohorts, including future cohorts. Any such contextual factors should be explained, justified and appropriately managed by the researcher.
- The proposed dialogue between researcher and research participant should be described: how it will be conducted, what it will cover, and how frequently it will be repeated. This may depend, for example, on when results start to become available. The frequency and delivery channel of this dialogue should be kept simple for the potential research participant, justified, and set against realistic timescales. This part of the trust-based consent process might also include how the researcher will manage research participant withdrawal.
6.2. Recommendations for the Ethical Use of Advanced Technologies
- Understand who the main actors are. Each domain (healthcare, eCommerce, social media, and so forth) will often be regulated with specific obligations. More important though, I maintain, is the interaction between end user and provider, and the reliance of the provider on the data scientist or technologist. These actors all influence the trust context, so how they contribute needs to be understood.
- Understand what their expectations are. Once the main actors have been identified, their individual expectations will influence how they view their own responsibilities and how they believe the other actors will behave. This will contextualise what each expects from the service or interaction, and from one another.
- Reinforce competence, integrity and benevolence (from [84]). As the defining characteristics of a trust relationship outlined above, each of the actors has a responsibility to support that relationship and to avoid actions which would affect trust. Inadvertent or unavoidable problems can be dealt with [88,89]. Further, occasional (though infrequent [23]) re-affirmation of the relationship is advantageous. So, ongoing communication between the main actors is important in maintaining trust (see also [12]).
7. Future Research Directions
8. Conclusions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Domain | Challenges | Informed Consent | Trust-Based Consent |
---|---|---|---|
Contact Tracing | The socio-political context within which the app is used or research is carried out. Media reporting, including fake news, can influence public confidence. | One-off consent on research engagement or upon app download may not be sufficient as context changes. Retention may be challenging depending on trustworthiness perceptions of public authorities and responses to media reports leading to app/research study abandonment (i.e., the impact and relevance of context which may have nothing to do with the actual app/research). | Researchers (app developers) may need to demonstrate integrity and benevolence on an ongoing basis, and specifically when needed in response to any public concerns around data protection, and to any misuse or unforeseen additional use of data. Researchers must therefore communicate their own trustworthiness and position themselves appropriately within a wider socio-political context for which they may feel they have no responsibility. It is their responsibility, however, to maintain the relationship with relevant stakeholders, i.e., to develop and maintain trust. |
Big Data Analytics | The potential disruption to an existing ecosystem—e.g., the actors who are important for delivery of service, such as patient and clinician for healthcare, or research participant and researcher for Internet-based research. Technology may therefore be disruptive to any such existing relationship. Further, unless the main actors are identified, it would be difficult to engage with traditional approaches to consent. | Researcher (data scientist) may not be able to disclose all information necessary to make a fully informed decision, not least because they may only be able to describe expected outcomes (and how data will be used) in general terms. The implications of supervised and unsupervised learning may not be understood. Not all beneficiaries can engage with an informed consent process (e.g., clinicians would not be asked to consent formally to data analytics carried out on their behalf; for Internet-based research, it may be impractical or ill-advised for researchers to contact potential research participants). | Data scientists need to engage in the first instance with domain experts in other fields who will use their results (e.g., clinicians in healthcare; web scientists etc. for Internet-based modelling; etc.) to understand each other’s expectations and any limitations. For a clinician or other researcher dependent on the data scientist, this will affect the perception of their own competence. This will also form part of trust-based engagement with a potential research participant. Ongoing communication between participants, data scientists and the other relevant domain experts should continue to maintain perceptions of benevolence and integrity. |
Public Health Emergency | The difficulty in identifying the scope of research (in terms of what is required and who will benefit now, and especially in the future) and therefore in identifying the main stakeholders, not just participants providing (clinical) data directly. | The COVID-19 pandemic has demonstrated that research understanding changed significantly over time: the research community, including clinicians, had to adapt. Policy decisions struggled to keep pace with the results. Informed consent would need constant review and may be undermined if research outcomes/policy decisions are not consistent. In the latter case, this may result in withdrawal of research participants. Further, research from previous pandemics was not available to inform current research activities. | A PHE highlights the need to balance individual rights and the imperatives for the community (the common good). As well as the effects of fake news, changes in policy based on research outcomes may lead to concern about competence: do the researchers know what they are doing? However, there needs to be an understanding of how the research is being conducted and why things do change. So, there will also be a need for ongoing communication around integrity and benevolence. This may advantageously extend existing public engagement practices, but would also need to consider future generations and who might represent their interests. There is a clear need for an ongoing dialogue including participants where possible, but also other groups with a vested interest in the research data and any associated outcomes, including those who may have nothing to do with the original data collection or circumstances. |
References
- Walker, P.; Lovat, T. You Say Morals, I Say Ethics—What’s the Difference? In The Conversation; IMDb: Seattle, WA, USA, 2014. [Google Scholar]
- Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
- European Commission. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, 2016; European Commission: Brussels, Belgium, 2016. [Google Scholar]
- Samek, W.; Wiegand, T.; Müller, K.R. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. arXiv 2017, arXiv:1708.08296. [Google Scholar]
- Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef] [Green Version]
- Gunning, D.; Aha, D.W. DARPA’s Explainable Artificial Intelligence Program. AI Mag. 2019, 40, 44–58. [Google Scholar]
- Weitz, K.; Schiller, D.; Schlagowski, R.; Huber, T.; André, E. “Do you trust me?”: Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design. In Proceedings of the IVA ’19: 19th ACM International Conference on Intelligent Virtual Agents, Paris, France, 2–5 July 2019; ACM: New York, NY, USA, 2019; pp. 7–9. [Google Scholar] [CrossRef] [Green Version]
- Taylor, S.; Pickering, B.; Boniface, M.; Anderson, M.; Danks, D.; Følstad, A.; Leese, M.; Müller, V.; Sorell, T.; Winfield, A.; et al. Responsible AI—Key Themes, Concerns & Recommendations For European Research and Innovation; HUB4NGI Consortium: Zürich, Switzerland, 2018. [Google Scholar] [CrossRef]
- Došilović, F.K.; Brčić, M.; Hlupić, N. Explainable Artificial Intelligence: A Survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 0210–0215. [Google Scholar] [CrossRef]
- Khrais, L.T. Role of Artificial Intelligence in Shaping Consumer Demand in E-Commerce. Future Internet 2020, 12, 226. [Google Scholar] [CrossRef]
- Israelsen, B.W.; Ahmed, N.R. “Dave …I can assure you …that it’s going to be all right …” A Definition, Case for, and Survey of Algorithmic Assurances in Human-Autonomy Trust Relationships. ACM Comput. Surv. 2019, 51, 113. [Google Scholar] [CrossRef] [Green Version]
- Rohlfing, K.J.; Cimiano, P.; Scharlau, I.; Matzner, T.; Buhl, H.M.; Buschmeier, H.; Esposito, E.; Grimminger, A.; Hammer, B.; Häb-Umbach, R.; et al. Explanation as a social practice: Toward a conceptual framework for the social design of AI systems. IEEE Trans. Cogn. Dev. Syst. 2020. [Google Scholar] [CrossRef]
- Amnesty International and AccessNow. The Toronto Declaration: Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems. 2018. Available online: https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/ (accessed on 14 May 2021).
- Council of Europe. European Convention for the Protection of Human Rights and Fundamental Freedoms, as Amended by Protocols Nos. 11 and 14; Council of Europe: Strasbourg, France, 2010. [Google Scholar]
- UK Government Digital Services. Data Ethics Framework. 2020. Available online: https://www.gov.uk/government/publications/data-ethics-framework (accessed on 14 May 2021).
- Department of Health and Social Care. Digital and Data-Driven Health and Care Technology; Department of Health and Social Care: London, UK, 2021.
- European Commission. Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019. [Google Scholar]
- Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
- Murray, P.M. The History of Informed Consent. Iowa Orthop. J. 1990, 10, 104–109. [Google Scholar]
- USA v Brandt Court. The Nuremberg Code (1947). Br. Med. J. 1996, 313, 1448. [Google Scholar] [CrossRef]
- World Medical Association. WMA Declaration of Helsinki—Ethical Principles for Medical Research Involving Human Subjects; World Medical Association: Ferney-Voltaire, France, 2018. [Google Scholar]
- Lemley, M.A. Terms of Use. Minn. Law Rev. 2006, 91, 459–483. [Google Scholar]
- Richards, N.M.; Hartzog, W. The Pathologies of Digital Consent. Wash. Univ. Law Rev. 2019, 96, 1461–1504. [Google Scholar]
- Luger, E.; Moran, S.; Rodden, T. Consent for all: Revealing the hidden complexity of terms and conditions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 2687–2696. [Google Scholar]
- Belmont. The Belmont Report: Ethical Principles and Guidelines for The Protection of Human Subjects of Research; American College of Dentists: Gaithersburg, MD, USA, 1979. [Google Scholar]
- Beauchamp, T.L. History and Theory in “Applied Ethics”. Kennedy Inst. Ethics J. 2007, 17, 55–64. [Google Scholar] [CrossRef] [PubMed]
- Muirhead, W. When four principles are too many: Bloodgate, integrity and an action-guiding model of ethical decision making in clinical practice. Clin. Ethics 2011, 38, 195–196. [Google Scholar] [CrossRef]
- Rubin, M.A. The Collaborative Autonomy Model of Medical Decision-Making. Neurocrit. Care 2014, 20, 311–318. [Google Scholar] [CrossRef]
- The Health Service (Control of Patient Information) Regulations 2002. 2002. Available online: https://www.legislation.gov.uk/uksi/2002/1438/contents/made (accessed on 14 May 2021).
- Hartzog, W. The New Price to Play: Are Passive Online Media Users Bound By Terms of Use? Commun. Law Policy 2010, 15, 405–433. [Google Scholar] [CrossRef]
- Beauchamp, T.L.; Childress, J.F. Principles of Biomedical Ethics, 8th ed.; Oxford University Press: Oxford, UK, 2019. [Google Scholar]
- OECD. Frascati Manual 2015; OECD: Paris, France, 2015. [Google Scholar] [CrossRef]
- BPS. Code of Human Research Ethics; BPS: Leicester, UK, 2014. [Google Scholar]
- Herschel, R.; Miori, V.M. Ethics & Big Data. Technol. Soc. 2017, 49, 31–36. [Google Scholar] [CrossRef]
- Floridi, L.; Taddeo, M. What is data ethics? Philos. Trans. R. Soc. 2016. [Google Scholar] [CrossRef]
- Carroll, S.R.; Garba, I.; Figueroa-Rodríguez, O.L.; Holbrook, J.; Lovett, R.; Materechera, S.; Parsons, M.; Raseroka, K.; Rodriguez-Lonebear, D.; Rowe, R.; et al. The CARE Principles for Indigenous Data Governance. Data Sci. J. 2020, 19, 1–12. [Google Scholar] [CrossRef]
- Thomson, J.J. The Trolley Problem. Yale Law J. 1985, 94, 1395–1415. [Google Scholar] [CrossRef] [Green Version]
- Parsons, T.D. Ethical Challenges in Digital Psychology and Cyberpsychology; Cambridge University Press: Cambridge, UK, 2019. [Google Scholar]
- Murove, M.F. Ubuntu. Diogenes 2014, 59, 36–47. [Google Scholar] [CrossRef]
- Ess, C. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee; IGI Global: Hershey, PA, USA, 2002. [Google Scholar]
- Markham, A.; Buchanan, E. Ethical Decision-Making and Internet Research: Recommendations from the Aoir Ethics Working Committee (Version 2.0). 2002. Available online: https://aoir.org/reports/ethics2.pdf (accessed on 14 May 2021).
- Sugarman, J.; Lavori, P.W.; Boeger, M.; Cain, C.; Edson, R.; Morrison, V.; Yeh, S.S. Evaluating the quality of informed consent. Clin. Trials 2005, 2, 34–41. [Google Scholar] [CrossRef]
- Biros, M. Capacity, Vulnerability, and Informed Consent for Research. J. Law Med. Ethics 2018, 46, 72–78. [Google Scholar] [CrossRef] [Green Version]
- Tam, N.T.; Huy, N.T.; Thoa, L.T.B.; Long, N.P.; Trang, N.T.H.; Hirayama, K.; Karbwang, J. Participants’ understanding of informed consent in clinical trials over three decades: Systematic review and meta-analysis. Bull. World Health Organ. 2015, 93, 186H–198H. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Falagas, M.E.; Korbila, I.P.; Giannopoulou, K.P.; Kondilis, B.K.; Peppas, G. Informed consent: How much and what do patients understand? Am. J. Surg. 2009, 198, 420–435. [Google Scholar] [CrossRef] [PubMed]
- Nusbaum, L.; Douglas, B.; Damus, K.; Paasche-Orlow, M.; Estrella-Luna, N. Communicating Risks and Benefits in Informed Consent for Research: A Qualitative Study. Glob. Qual. Nurs. Res. 2017, 4. [Google Scholar] [CrossRef]
- Wiles, R.; Crow, G.; Charles, V.; Heath, S. Informed Consent and the Research Process: Following Rules or Striking Balances? Sociol. Res. Online 2007, 12. [Google Scholar] [CrossRef]
- Wiles, R.; Charles, V.; Crow, G.; Heath, S. Researching researchers: Lessons for research ethics. Qual. Res. 2006, 6, 283–299. [Google Scholar] [CrossRef]
- Naarden, A.L.; Cissik, J. Informed Consent. Am. J. Med. 2006, 119, 194–197. [Google Scholar] [CrossRef] [PubMed]
- Al Mahmoud, T.; Hashim, M.J.; Almahmoud, R.; Branicki, F.; Elzubeir, M. Informed consent learning: Needs and preferences in medical clerkship environments. PLoS ONE 2018, 13, e0202466. [Google Scholar] [CrossRef]
- Nijhawan, L.P.; Janodia, M.D.; Muddukrishna, B.S.; Bhat, K.M.; Bairy, K.L.; Udupa, N.; Musmade, P.B. Informed consent: Issues and challenges. J. Adv. Pharm. Technol. Res. 2013, 4, 134–140. [Google Scholar] [CrossRef]
- Kumar, N.K. Informed consent: Past and present. Perspect. Clin. Res. 2013, 4, 21–25. [Google Scholar] [CrossRef]
- Hofstede, G. Cultural Dimensions. 2003. Available online: www.geerthofstede.com (accessed on 12 May 2021).
- Hofstede, G.; Hofstede, J.G.; Minkov, M. Cultures and Organizations: Software of the Mind, 3rd ed.; McGraw-Hill: New York, NY, USA, 2010. [Google Scholar]
- Acquisti, A.; Brandimarte, L.; Loewenstein, G. Privacy and human behavior in the age of information. Science 2015, 347, 509–514. [Google Scholar] [CrossRef] [PubMed]
- McEvily, B.; Perrone, V.; Zaheer, A. Trust as an Organizing Principle. Organ. Sci. 2003, 14, 91–103. [Google Scholar] [CrossRef]
- Milgram, S. Behavioral study of obedience. J. Abnorm. Soc. Psychol. 1963, 67, 371–378. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Haney, C.; Banks, C.; Zimbardo, P. Interpersonal Dynamics in a Simulated Prison; Wiley: New York, NY, USA, 1972. [Google Scholar]
- Reicher, S.; Haslam, S.A. Rethinking the psychology of tyranny: The BBC prison study. Br. J. Soc. Psychol. 2006, 45, 1–40. [Google Scholar] [CrossRef] [PubMed]
- Reicher, S.; Haslam, S.A. After shock? Towards a social identity explanation of the Milgram ’obedience’ studies. Br. J. Soc. Psychol. 2011, 50, 163–169. [Google Scholar] [CrossRef]
- Beauchamp, T.L. Informed Consent: Its History, Meaning, and Present Challenges. Camb. Q. Healthc. Ethics 2011, 20, 515–523. [Google Scholar] [CrossRef]
- Ferreira, C.M.; Serpa, S. Informed Consent in Social Sciences Research: Ethical Challenges. Int. J. Soc. Sci. Stud. 2018, 6, 13–23. [Google Scholar] [CrossRef]
- Hofmann, B. Broadening consent - and diluting ethics? J. Med Ethics 2009, 35, 125–129. [Google Scholar] [CrossRef]
- Steinsbekk, K.S.; Myskja, B.K.; Solberg, B. Broad consent versus dynamic consent in biobank research: Is passive participation an ethical problem? Eur. J. Hum. Genet. 2013, 21, 897–902. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sreenivasan, G. Does informed consent to research require comprehension? Lancet 2003, 362, 2016–2018. [Google Scholar] [CrossRef]
- O’Neill, O. Some limits of informed consent. J. Med. Ethics 2003, 29, 4–7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef] [Green Version]
- Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef] [Green Version]
- McKnight, H.; Carter, M.; Clay, P. Trust in technology: Development of a set of constructs and measures. In Proceedings of the Digit, Phoenix, AZ, USA, 15–18 December 2009. [Google Scholar]
- McKnight, H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manag. Inf. Syst. (TMIS) 2011, 2, 12. [Google Scholar] [CrossRef]
- Thatcher, J.B.; McKnight, D.H.; Baker, E.W.; Arsal, R.E.; Roberts, N.H. The Role of Trust in Postadoption IT Exploration: An Empirical Examination of Knowledge Management Systems. IEEE Trans. Eng. Manag. 2011, 58, 56–70. [Google Scholar] [CrossRef]
- Hinch, R.; Probert, W.; Nurtay, A.; Kendall, M.; Wymant, C.; Hall, M.; Lythgoe, K.; Cruz, A.B.; Zhao, L.; Stewart, A. Effective Configurations of a Digital Contact Tracing App: A Report to NHSX. 2020. Available online: https://cdn.theconversation.com/static_files/files/1009/Report_-_Effective_App_Configurations.pdf (accessed on 14 May 2021).
- Parker, M.J.; Fraser, C.; Abeler-Dörner, L.; Bonsall, D. Ethics of instantaneous contact tracing using mobile phone apps in the control of the COVID-19 pandemic. J. Med. Ethics 2020, 46, 427–431. [Google Scholar] [CrossRef]
- Walrave, M.; Waeterloos, C.; Ponnet, K. Ready or Not for Contact Tracing? Investigating the Adoption Intention of COVID-19 Contact-Tracing Technology Using an Extended Unified Theory of Acceptance and Use of Technology Model. Cyberpsychol. Behav. Soc. Netw. 2020. [Google Scholar] [CrossRef]
- Velicia-Martin, F.; Cabrera-Sanchez, J.-P.; Gil-Cordero, E.; Palos-Sanchez, P.R. Researching COVID-19 tracing app acceptance: Incorporating theory from the technological acceptance model. PeerJ Comput. Sci. 2021, 7, e316. [Google Scholar] [CrossRef]
- Rowe, F.; Ngwenyama, O.; Richet, J.-L. Contact-tracing apps and alienation in the age of COVID-19. Eur. J. Inf. Syst. 2020, 29, 545–562. [Google Scholar] [CrossRef]
- Roache, R. Why is informed consent important? J. Med. Ethics 2014, 40, 435–436. [Google Scholar] [CrossRef] [PubMed]
- Eyal, N. Using informed consent to save trust. J. Med. Ethics 2014, 40, 437–444. [Google Scholar] [CrossRef] [Green Version]
- Eyal, N. Informed consent, the value of trust, and hedons. J. Med. Ethics 2014, 40, 447. [Google Scholar] [CrossRef] [PubMed]
- Tännsjö, T. Utilitarianism and informed consent. J. Med. Ethics 2013, 40, 445. [Google Scholar] [CrossRef] [PubMed]
- Bok, S. Trust but verify. J. Med. Ethics 2014, 40, 446. [Google Scholar] [CrossRef]
- Rousseau, D.M.; Sitkin, S.B.; Burt, R.S.; Camerer, C. Not so different after all: A cross-discipline view of trust. Acad. Manag. Rev. 1998, 23, 393–404. [Google Scholar] [CrossRef] [Green Version]
- Robbins, B.G. What is Trust? A Multidisciplinary Review, Critique, and Synthesis. Sociol. Compass 2016, 10, 972–986. [Google Scholar] [CrossRef]
- Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An Integrative Model of Organizational Trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
- Weber, L.R.; Carter, A.I. The Social Construction of Trust; Clinical Sociology: Research and Practice; Springer Science+Business Media: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
- Ferrin, D.L.; Bligh, M.C.; Kohles, J.C. Can I Trust You to Trust Me? A Theory of Trust, Monitoring, and Cooperation in Interpersonal and Intergroup Relationships. Group Organ. Manag. 2007, 32, 465–499. [Google Scholar] [CrossRef] [Green Version]
- Schoorman, F.D.; Mayer, R.C.; Davis, J.H. An integrative model of organizational trust: Past, present, and future. Acad. Manag. Rev. 2007, 32, 344–354. [Google Scholar] [CrossRef] [Green Version]
- Fuoli, M.; Paradis, C. A model of trust-repair discourse. J. Pragmat. 2014, 74, 52–69. [Google Scholar] [CrossRef] [Green Version]
- Lewicki, R.J.; Wiethoff, C. Trust, Trust Development, and Trust Repair. Handb. Confl. Resolut. Theory Pract. 2000, 1, 86–107. [Google Scholar]
- Bachmann, R.; Gillespie, N.; Priem, R. Repairing Trust in Organizations and Institutions: Toward a Conceptual Framework. Organ. Stud. 2015, 36, 1123–1142. [Google Scholar] [CrossRef]
- Bansal, G.; Zahedi, F.M. Trust violation and repair: The information privacy perspective. Decis. Support Syst. 2015, 71, 62–77. [Google Scholar] [CrossRef]
- Memery, J.; Robson, J.; Birch-Chapman, S. Conceptualising a Multi-level Integrative Model for Trust Repair. In Proceedings of the EMAC, Hamburg, Germany, 28–31 May 2019. [Google Scholar]
- Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [Google Scholar] [CrossRef]
- Lee, J.-H.; Song, C.-H. Effects of trust and perceived risk on user acceptance of a new technology service. Soc. Behav. Personal. Int. J. 2013, 41, 587–598. [Google Scholar] [CrossRef]
- Cheshire, C. Online Trust, Trustworthiness, or Assurance? Daedalus 2011, 140, 49–58. [Google Scholar] [CrossRef] [PubMed]
- Pettit, P. Trust, Reliance, and the Internet. Inf. Technol. Moral Philos. 2008, 26, 161. [Google Scholar]
- Stewart, K.J. Trust Transfer on the World Wide Web. Organ. Sci. 2003, 14, 5–17. [Google Scholar] [CrossRef]
- Eames, K.T.D.; Keeling, M.J. Contact tracing and disease control. Proc. R. Soc. Lond. 2003, 270, 2565–2571. [Google Scholar] [CrossRef] [Green Version]
- Jetten, J.; Reicher, S.D.; Haslam, S.A.; Cruwys, T. Together Apart: The Psychology of COVID-19; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2020. [Google Scholar]
- Ahmed, N.; Michelin, R.A.; Xue, W.; Ruj, S.; Malaney, R.; Kanhere, S.S.; Seneviratne, A.; Hu, W.; Janicke, H.; Jha, S.K. A Survey of COVID-19 Contact Tracing Apps. IEEE Access 2020, 8, 134577–134601. [Google Scholar] [CrossRef]
- Kretzschmar, M.E.; Rozhnova, G.; Bootsma, M.C.; van Boven, M.J.; van de Wijgert, J.H.; Bonten, M.J. Impact of delays on effectiveness of contact tracing strategies for COVID-19: A modelling study. Lancet Public Health 2020, 5, e452–e459. [Google Scholar] [CrossRef]
- Bengio, Y.; Janda, R.; Yu, Y.W.; Ippolito, D.; Jarvie, M.; Pilat, D.; Struck, B.; Krastev, S.; Sharma, A. The need for privacy with public digital contact tracing during the COVID-19 pandemic. Lancet Digit. Health 2020, 2, e342–e344. [Google Scholar] [CrossRef]
- Abeler, J.; Bäcker, M.; Buermeyer, U.; Zillessen, H. COVID-19 Contact Tracing and Data Protection Can Go Together. JMIR Mhealth Uhealth 2020, 8, e19359. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Van Bavel, J.J.; Baicker, K.; Boggio, P.S.; Caprano, V.; Cichocka, A.; Cikara, M.; Crockett, M.J.; Crum, A.J.; Douglas, K.M.; Druckman, J.N.; et al. Using social and behavioural science to support COVID-19 pandemic response. Nat. Hum. Behav. 2020, 4, 460–471. [Google Scholar] [CrossRef] [PubMed]
- Ackland, R. Web Social Science: Concepts, Data and Tools for Social Scientists in the Digital Age; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2013. [Google Scholar]
- Papacharissi, Z. A Networked Self and Platforms, Stories, Connections; Routledge: London, UK, 2018. [Google Scholar]
- Raghupathi, W.; Raghupathi, V. Big data analytics in healthcare: Promise and potential. Health Inf. Sci. Syst. 2014, 2, 1–10. [Google Scholar] [CrossRef]
- Agbehadji, I.E.; Awuzie, B.O.; Ngowi, A.B.; Millham, R.C. Review of Big Data Analytics, Artificial Intelligence and Nature-Inspired Computing Models towards Accurate Detection of COVID-19 Pandemic Cases and Contact Tracing. Int. J. Environ. Res. Public Health 2020, 17, 5330. [Google Scholar] [CrossRef]
- Cheney-Lippold, J. We Are Data: Algorithms and the Making of Our Digital Selves; New York University Press: New York, NY, USA, 2017. [Google Scholar]
- O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown: New York, NY, USA, 2016. [Google Scholar]
- Austin, C. RDA COVID-19 Zotero Library—March 2021. Available online: https://www.rd-alliance.org/group/rda-covid19-rda-covid19-omics-rda-covid-19-epidemiology-rda-covid19-clinical-rda-covid19 (accessed on 14 May 2021).
- Norton, A.; Sigfrid, L.; Aderoba, A.; Nasir, N.; Bannister, P.G.; Collinson, S.; Lee, J.; Boily-Larouche, G.; Golding, J.P.; Depoortere, E.; et al. Preparing for a pandemic: Highlighting themes for research funding and practice—Perspectives from the Global Research Collaboration for Infectious Disease Preparedness (GloPID-R). BMC Med. 2020, 18, 273. [Google Scholar] [CrossRef] [PubMed]
- Floridi, L. On the intrinsic value of information objects and the infosphere. Ethics Inf. Technol. 2002, 4, 287–304. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Pickering, B. Trust, but Verify: Informed Consent, AI Technologies, and Public Health Emergencies. Future Internet 2021, 13, 132. https://doi.org/10.3390/fi13050132