Draft THREE

Ethical decision-making and Internet research

Recommendations from the aoir ethics working committee

Copyright © 2002 by Charles Ess and the Association of Internet Researchers1

NOT FOR DUPLICATION, CITATION OR ATTRIBUTION WITHOUT PERMISSION FROM AOIR (President, Steve Jones, University of Illinois at Chicago) and/or the aoir ethics working committee (Chair, Charles Ess, Drury University)

Contents:

I. Prologue

II. Questions to ask when undertaking Internet research

III. Case Studies

IV. Resources

V. Addendum: Discussion of contrast between utilitarian and deontological approaches - as these are reflected in contrasts between the U.S. and Europe (Scandinavia and the EU) in laws regarding privacy and consumer protection.

 

I. Prologue

The Internet has opened up a wide range of new ways to examine human behavior in new contexts, and from a variety of disciplinary and interdisciplinary approaches. As with its offline counterpart, online research raises critical issues of risk and safety for the human subject. Hence, online researchers may encounter conflicts between the requirements of research and its possible benefits, on the one hand, and, on the other, human subjects' rights to and expectations of autonomy, privacy, informed consent, etc.

The many disciplines already long engaged in human subjects research (sociology, anthropology, ethnography, psychology, medicine, etc. 2) have established ethics statements intended to guide researchers and those charged with ensuring that research on human subjects follows both legal requirements and ethical practices. (In the context of United States colleges and universities, the bodies charged with such oversight are characteristically called Institutional Review Boards or IRBs.) Researchers and those charged with research oversight are encouraged in the first instance to turn to the discipline-specific principles and practices of research (many of which are listed below).

But as online research takes place in a range of new venues (email, chatrooms, webpages, various forms of "instant messaging," MUDs and MOOs, USENET newsgroups, audio/video exchanges, etc.) — researchers, research subjects, and those charged with research oversight will often encounter ethical questions and dilemmas that are not directly addressed in extant statements and guidelines. In addition, both the great variety of human behaviors observable online and the clear need to study these behaviors in interdisciplinary ways have thus engaged researchers and scholars in disciplines beyond those traditionally involved in human subjects research: for example, researching the multiple uses of texts and graphics images in diverse Internet venues often benefits from approaches drawn from art history, literary studies, etc. This document is intended to aid both researchers from a variety of disciplines and those responsible for ensuring that this research adheres to legal and ethical requirements, in their work of clarifying and resolving ethical issues encountered in online research.

This document stresses:

Ethical pluralism

Ethical concerns arise not only when we encounter apparent conflicts in values and interests — but also when we recognize that there is more than one ethical decision-making framework used to analyze and resolve those conflicts. In philosophical ethics, these frameworks are commonly classified in terms of deontology, consequentialism, virtue ethics, feminist ethics, and several others. 3

Researchers and their institutions, both within a given national tradition and across borders and cultures, take up these diverse frameworks in grappling with ethical conflicts. Our first goal in this document is to emphasize and represent this diversity of frameworks — not in order to pit one against another, but to help researchers and those charged with research oversight to understand how these frameworks operate in specific situations. On occasion, in fact, ethical conflicts can be resolved by recognizing that apparently opposing values represent different ethical frameworks. By shifting the debate from the conflict between specific values to a contrast between ethical frameworks, researchers and their colleagues may understand the conflict in a new light, and discern additional issues and considerations that help resolve the specific conflict. (Examples of this will be given below.)

Cross-cultural awareness

Different nations and cultures enjoy diverse legal protections and traditions of ethical decision-making. Especially as Internet research may entail a literally global scope, efforts to respond to ethical concerns and resolve ethical conflicts must take into account diverse national and cultural frameworks.4 See also "V. Addendum," below.

Guidelines — not "recipes"

As noted in our Preliminary Report (October, 2001), given the range of possible ethical decision-making procedures (utilitarianism, deontology, feminist ethics, etc.), the multiple interpretations and applications of these procedures to specific cases, and their refraction through culturally-diverse emphases and values across the globe — the issues raised by Internet research are ethical problems precisely because they evoke more than one ethically defensible response to a specific dilemma or problem. Ambiguity, uncertainty, and disagreement are inevitable.

In this light, it is a mistake to view our recommendations as providing general principles that can be applied without difficulty or ambiguity to a specific ethical problem so as to algorithmically deduce the correct answer.

At the same time, recognizing the possibility of a range of defensible ethical responses to a given dilemma does not commit us to ethical relativism ("anything goes"). On the contrary, the general values and guidelines endorsed here articulate parameters that entail significant restrictions on what may — and what may not — be defended as ethical behavior. In philosophical terms, then, like most philosophers and ethicists, we endorse here a middle-ground between ethical relativism and an ethical dogmatism (a single set of ostensibly absolute and unquestionable values, applied through a single procedure, issuing in "the" only right answer - with all differing responses condemned as immoral).

II. Questions to ask when undertaking Internet research

(For additional examples of such question lists, see:

University of Bristol, "Self Assessment Questionnaire for Researchers Using Personal Data," available from <http://www.bris.ac.uk/Depts/Secretary/datapro.htm>

[Posted by Christine M. Hine to aoir ethics working committee]

Suler, John (2000). Ethics in Cyberspace Research. In Psychology of Cyberspace. <http://www.rider.edu/users/suler/psycyber/ethics.html>

[Posted by Lois Ann Scheidt to aoir list])

A. Venue/environment

Where does the behavior, communication, etc. under study take place? Current venues include:

Homepages

Weblogs

Google searches ("Have you been googled?")

Email (personal e-mail exchanges)

Listservs (exchanges and archives)

USENET newsgroups

ICQ/IM (text-based)

CUSeeMe (and other audio-video exchanges)

Chatrooms

MUDs/MOOs

gaming

images and other forms of multi-media presentation (webcams, etc.)

(some forms of) Computer-Supported Cooperative Work systems?

….

What ethical expectations are established by the venue?

For example:

Is there a posted site policy that establishes specific expectations — e.g., a statement notifying users that the site is public, of the possible technical limits to privacy in specific areas or domains, etc.?

Example: Sally Hambridge (Intel Corporation, 1998) has developed an extensive set of "Netiquette Guidelines" that includes the following advice:

Unless you are using an encryption device (hardware or software), you should assume that mail on the Internet is not secure. Never put in a mail message anything you would not put on a postcard.

(see <http://www.pcplayer.dk/Netikette_reference.doc>)

Is there a statement affiliated with the venue (chatroom, listserv, MOO or MUD, etc.) indicating whether discussion, postings, etc., are ephemeral, logged for a specific time, and/or archived in a private and/or publicly-accessible location such as a website, etc.?

Are there mechanisms that users may choose to employ to indicate that their exchanges should be regarded as private — e.g., "moving" to a private chatroom, using specific encryption software, etc.?

One broad consideration: the greater the acknowledged publicity of the venue, the less obligation there may be to protect individual privacy, confidentiality, right to informed consent, etc.

Informed consent: specific considerations

Timing

Ideally, protecting human subjects’ rights to privacy, confidentiality, autonomy, and informed consent means approaching subjects at the very beginning of research to ask for consent, etc.

In some contexts, however, the goals of a research project may shift over time as emerging patterns suggest new questions, etc. Determining not only if, but when to ask for informed consent is thus somewhat context-dependent and requires particular attention to the "fine-grained" details of the research project not only in its inception but also as it may change over its course.

Medium?

Researchers should determine what medium — e-mail? postal letter? — for both requesting and receiving informed consent best protects both the subject(s) and their project. (As is well known, compared with electronic records, paper records are less subject to erasure and corruption through power drops, operator error, etc.)

Addressees?

In studying groups with a high turnover rate, is obtaining informed consent from the moderator/facilitator/list owner, etc. sufficient?

Additional issues that should be included here?

B. Initial ethical and legal considerations

How far do extant legal requirements and ethical guidelines in your discipline "cover" the research? (For the guidelines as published by a number of disciplines, see Resources, below. See as well the discussion of the ethical and legal contrasts between the United States and Europe - especially "V. Addendum," below.)

How far do extant legal requirements and ethical guidelines in the countries implicated in the research apply?

For example: all persons who are citizens of the European Union enjoy strong privacy rights by law as established in the European Union Data Protection Directive (1995), according to which data-subjects must:

* Unambiguously give consent for personal information to be gathered online;

* Be given notice as to why data is being collected about them;

* Be able to correct erroneous data;

* Be able to opt-out of data collection; and

* Be protected from having their data transferred to countries with less stringent privacy protections.

(see <http://www.privacy.org/pi>)

U.S. citizens, by contrast, enjoy somewhat less stringent privacy protections (see "V. Addendum," below).

Obviously, research cannot violate the legal requirements for privacy protection enforced in the countries under whose jurisdiction the research and subjects find themselves.

What are the initial ethical expectations/assumptions of the authors/subjects being studied?

For example: Do participants in this environment assume/believe that their communication is private? If so — and if this assumption is warranted — then there may be a greater obligation on the part of the researcher to protect individual privacy in the ways outlined in human subjects research (i.e., protection of confidentiality, exercise of informed consent, assurance of anonymity in any publication of the research, etc.).

If not — e.g., if the research focuses on

publicly accessible archives;

behaviors intended by their authors/agents as public, performative (e.g., intended as a public act that invites recognition), etc.;

venues assigned the equivalent of a "public notice" that participants and their communications may be monitored for research purposes;

….

then there may be less obligation to protect individual privacy.5

Alternatively: Are participants in this environment best understood as "subjects" (in the senses common in human subjects research in medicine and the social sciences) — or as authors whose texts/artifacts are intended as public?

If participants are best understood as subjects in the first sense (e.g., as they participate in small chatrooms, MUDs or MOOs intended to provide reasonably secure domains for private exchanges), then greater obligations to protect autonomy, privacy, confidentiality, etc., are likely to follow.

If, by contrast, subjects may be understood as authors intending for their work to be public (e.g., e-mail postings to large listservs and USENET groups; public webpages such as homepages, Web logs, etc.; chat exchanges in publicly accessible chatrooms, etc.) — then fewer obligations to protect autonomy, privacy, confidentiality, etc. will likely follow.6

[The following three questions are interrelated: as will be seen, they reflect both prevailing approaches to ethical decision-making — e.g., in Deborah Johnson (2001) — as well as cultural/national differences in law and ethical traditions.]

What ethically significant risks does the research entail for the subject(s)?

Examples:

If the content of a subject’s communication were to become known beyond the confines of the venue being studied — would harm likely result?

For example: if a person is discussing intimate topics — psychological/medical/spiritual issues, sexual experience/fantasy/orientation, etc. — would the publication of this material result in shame, threats to material well-being (denial of insurance, job loss, physical harassment, etc.), etc.?

A primary ethical obligation is to do no harm. Good research design, of course, seeks to minimize risk of harm to the subjects involved.

By contrast, if the form of communication is under study, this shift of focus away from content may reduce the risk to the subject.

In either case (i.e., whether it is the form or content that is most important for the researcher), if the content is relatively trivial, doesn’t address sensitive topics, etc., then clearly the risk to the subject is low.

What benefits might be gained from the research?

This question is obviously crucial when research in fact may entail significant risk to the author(s)/agent(s) considered as subjects.

What are the ethical traditions of the researchers' and subjects' cultures and countries?

This question is crucial precisely when facing the conflict between possible risks to subjects, including the violation of basic human rights to self-determination, privacy, informed consent, etc., and the benefits of research.

In the United States, for example, there is a greater reliance on utilitarian approaches to deciding such conflicts — specifically in the form of "risk/benefit" analyses. Crudely, if the benefits promise to be large, and the risks/costs small, then the utilitarian calculus may find that the benefits outweigh the risks and costs.

By contrast (and as is illustrated in the differences in laws on privacy), European approaches tend to emphasize more deontological approaches — i.e., approaches that take basic human rights (self-determination, privacy, informed consent, etc.) as so foundational that virtually no set of possible benefits that might be gained from violating these ethically justifies that violation.

When considering conflicts between subjects’ rights and benefits to be gained from research that compromises those rights — researchers and those charged with research oversight may well arrive at different decisions as to what is ethically acceptable and unacceptable, depending on which of these cultural/ethical approaches they utilize.

(See "V. Addendum," below)

III. Case Studies

A. Are chatrooms public spaces? When to obtain consent for recording conversations in a chatroom?

[From: Hudson, James M. and Amy Bruckman. "IRC Français: The Creation of an Internet-Based SLA Community." Computer Assisted Language Learning (CALL), forthcoming 2002. Quoted by permission from the authors and CALL.]

In our first version of IRC Français, an ethical dilemma immediately emerged. Our plan was for students to converse with native French speakers already on IRC. Clearly, the rules governing human subjects research dictate that we need freely given informed consent from our students before we can ethically use them as experimental subjects ("The Nuremberg Code," 1949). But what about their conversational partners? Were they research subjects or not? We were not studying them in particular, but were recording their conversations with our students and analyzing their words. Did we need their consent?

The status of real-time chatrooms is ambiguous. On the one hand, one can argue that they are like a public square. It is considered ethical to record activities in a public place without consent, provided that individuals are not identifiable (Eysenbach & Till, 2001). In this view, we would be justified to simply record conversations and not tell anyone that this was taking place. On the other hand, one can argue that chatroom conversations are normally ephemeral. Participants have a reasonable expectation that they are not being recorded without their freely given informed consent. Under this stricter interpretation, we would need consent from any person whom we wish to record. Additionally, if the process of requesting that consent proved too intrusive, we would need to abandon the research (Department of Health, 1979).

With the approval of the Institutional Review Board (IRB) for human subjects research, we settled on a compromise approach: we would get written consent from our students, but merely notify other people on the channel of our study. These individuals would also be given the option to opt out if they so chose. Because we wrote our own client software, we could automatically send a public message to this effect when one of our students joined the channel, and then privately inform others who join the channel subsequently.

To our surprise, this compromise failed. IRC participants were angered at the idea of being studied without their prior consent. Our students were greeted with hostility. They were routinely harassed by IRC channel members, and often had threats and obscenities directed at them. This seems to indicate that an opt in solution might be more acceptable than an opt out. However, there was a further problem: our messages notifying channel participants of the study and offering the opportunity to opt out were found in themselves to be unacceptably intrusive. Even though each person saw the message only once, it was still deemed unacceptable by many members. An opt in message would have that same problem.

Based on the reaction our study generated, we concluded that the "public square" model is untenable and, in fact, the second interpretation holds: you may not ethically record an otherwise ephemeral medium without consent from participants. How then could we continue our research? We came upon a solution: create our own IRC channel explicitly for this project. We could direct our students to that channel, and others would not normally join. Since it was our channel, we could create a channel logon message informing people about the study and its purpose. We could also limit access to the channel to our students only; however, to date we have not found this necessary. Few people come to the channel outside of students assigned to use it, and those few are warned by the channel logon message. Now, we do not intrude on a pre-existing space, but instead have our own.

In addition to solving our ethical dilemma, the new channel also provided pedagogical benefits. While people come to general IRC channels for a variety of social purposes, everyone on the IRC Français channel is there for the purpose of practicing French. This shared goal greatly improved the educational value of the conversation for all concerned.
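As a purely illustrative sketch of the kind of automated notification Hudson and Bruckman describe (their own client software is not reproduced here), the following Python fragment shows one way an IRC client might privately notify each newcomer that a channel is part of a study. The server, nickname, channel name, and wording of the notice are all hypothetical placeholders.

# Hypothetical sketch: an IRC client that notifies newcomers that the channel
# is part of a research study. Server, nickname, and channel names are invented.
import socket

SERVER = "irc.example.org"       # hypothetical IRC server
PORT = 6667
NICK = "study_notifier"          # hypothetical nickname for the research client
CHANNEL = "#irc-francais-study"  # hypothetical project channel
NOTICE = ("This channel is used for a university study of language learning; "
          "conversations may be logged. Contact the channel operator with "
          "questions or to opt out.")

def main() -> None:
    sock = socket.create_connection((SERVER, PORT))

    def send(line: str) -> None:
        # IRC messages are CRLF-terminated (RFC 1459).
        sock.sendall((line + "\r\n").encode("utf-8"))

    # Register with the server and join the project channel.
    send(f"NICK {NICK}")
    send(f"USER {NICK} 0 * :research notification client")
    send(f"JOIN {CHANNEL}")

    buffer = ""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buffer += data.decode("utf-8", errors="replace")
        while "\r\n" in buffer:
            line, buffer = buffer.split("\r\n", 1)
            if line.startswith("PING"):
                # Keep the connection alive.
                send("PONG" + line[4:])
                continue
            # A join event arrives as ":nick!user@host JOIN :#channel".
            parts = line.split(" ")
            if len(parts) >= 3 and parts[1] == "JOIN":
                joiner = parts[0].lstrip(":").split("!", 1)[0]
                channel = parts[2].lstrip(":")
                if joiner != NICK and channel.lower() == CHANNEL.lower():
                    # Privately inform the newcomer that the channel is being studied.
                    send(f"NOTICE {joiner} :{NOTICE}")

if __name__ == "__main__":
    main()

Even so, as the case study itself shows, the ethical acceptability of such notices depends less on the mechanism than on participants' expectations within the venue.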

[Additional case study suggestions?]

 

IV. Resources

[to include here the print and web-based resources already collected for the Preliminary Report, enhanced where appropriate by the current AAAS-CSFR literature analysis project: in this draft, the following are new additions to the resource list in the Preliminary Report <aoir.org/reports/ethics.html>]

American Psychological Association (1992). Ethical Principles of Psychologists and Codes of Conduct (currently under revision). <http://www.apa.org/ethics/code.html>

Association for Computing Machinery. (1992, October 16). ACM Code of Ethics and Professional Conduct. <http://www.acm.org/constitution/code.html>

European Commission. "Privacy on the Internet - An integrated EU Approach to On-line Data Protection." <http://europa.eu.int/comm/internal_market/en/dataprot/wpdocs/wpdocs_2k.htm>

[Posted by Christine M. Hine to aoir ethics list]

Natural Sciences and Engineering Research Council of Canada, Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans <http://www.nserc.ca/programs/ethics/english/policy.htm>

The UK Data Protection Site. <http://www.dataprotection.gov.uk>

Posted to the aoir ethics list by Christine Hine, who comments that the site

…contains some useful items in relatively plain English, including a FAQ on how data protection issues apply to the web (locates it via "Guidance and Other Publications", "Compliance Advice", then "FAQs - Web"). This has some good advice for web site owners on how to protect visitors' privacy. However, most of this applies to commercial data use. "Scientific research" may be exempt from many of the provisions, as long as fundamental rights to privacy are not infringed and anonymity of subjects is ensured. The situation on exemptions is too complex to explain in brief...but European researchers who are doing relevant research will need to clarify with their own country's data protection framework and their own institutions what their obligations are. It may come down to such issues as whether you can ensure that the data subjects are fully anonymised well before research reaches publication.... which seems to me that it might cause problems if direct quotations from newsgroup postings are used in reports.

Wendy Robinson has also suggested that we include references to ethical codes for the ACM, EE orgs, CPSR; principles drawn from Johnson and some of the other basic texts; school policies for online conduct and computer use such as at MIT, Stanford, Carnegie-Mellon, Cal Tech, Georgia Tech and other schools with strong engineering or comp sci programs and thorny issues that therefore arise since their students are technologically sophisticated (10/21/01).

 

 

Resources in Internet Research Ethics

Allen, Christina. 1996. What's Wrong with the "Golden Rule"? Conundrums of Conducting Ethical Research in Cyberspace. The Information Society 12 (2), 175-187.

Allen describes a method of "dialogical ethics" (my terms) that works from the bottom up (following the approach of Mikhail Bakhtin) rather than beginning with general principles and moving "top down." Her approach - illustrated with an example of her own research on LambdaMOO - further draws from anthropology and cultural studies as these "acknowledge and seek to understand the ramifications of the positionality of the researcher for the phenomena and individuals under study," and thereby challenges the more prevailing approaches in medicine and social science as these instead emphasize the researcher adopting the posture of dispassionate observer (186). In contrast with the usual emphasis on protecting subjects from potential harm - Allen finds that when the research process is undertaken "as a respectful dialogism between two equal interlocutors," participants enjoy "positive gains from the process of interviewing and reflecting on their cyberspace stories" (186).

In these ways, in fact, Allen's approach recalls Aristotle's emphasis on praxis as reshaping our ethical considerations - with the goal of achieving phronesis (practical wisdom or judgment): while skeptical of the possibility of abstractly codifying research ethics (because of the sorts of differences between research venues noted in this report), Allen concludes that "Researchers can, however, develop ethical wisdom that comes from experience with many configurations of research in cyberspace, and report on the conditions that grounded their ethical choices, and the results that emerged from their work in the site" (186).

On this view, ethical considerations are not separate from research considerations, but rather an integral component, one interwoven as an explicit and intentional dimension of the research project itself.

Boehlefeld, Sharon Polancic. 1996. Doing the Right Thing: Ethical Cyberspace Research. The Information Society 12(2), 141-152.

Boehlefeld argues that "doing ethical cyberspace research is not much different from doing any ethical research involving human subjects" (142). She recognizes utilitarian considerations (see p. 142) in establishing the importance of treating subjects ethically, and carefully develops guidelines for research - again, utilizing her own work as a case study - based on the ethics statement of the Association for Computing Machinery. In particular, she stresses anonymity and seeking permission to use long quotes (149f.). Here she observes that "The act of seeking permission, while it may lead to 'loss' of data, could also lead to developing potentially valuable 'key informant' relationships with list participants" (150) - thus reinforcing Allen's more dialogical orientation (1996).

Eysenbach, Gunther and Jim Till. 2001. "Ethical issues in qualitative research on internet communities." British Medical Journal 2001(10 Nov); 323(7321): 1103-1105. <http://www.bmj.com/cgi/content/full/323/7321/1103>

["an interesting utilitarian-oriented perspective for medical practitioners using social science methods" - Amanda Lenhart, posted to aoir list.]

Jankowski, Nickolas and Martine van Selm. 2001 (?). "Research Ethics in a Virtual World: Some Guidelines and Illustrations" <http://www.brunel.ac.uk/depts/crict/vmpapers/nick.htm>

King, S. A. 1996. Researching Internet Communities: Proposed Ethical Guidelines for the Reporting of Results. The Information Society 12 (2): 119-127.

Schrum, Lynne. 1997. "Ethical Research in the Information Age: Beginning the Dialog," Computers in Human Behavior, Vol. 13 (2), pp. 117-125.

Excellent for its discussion of the qualitative research tradition and for connecting extant guidelines with research on listservs. Schrum develops a list of ten guidelines stressing that the authors of listserv postings are the owners of that material and that e-mail should be treated as private correspondence "that is not to be forwarded, shared, or used as research data unless express permission is given"; she likewise stresses the importance of informed consent and protecting the confidentiality of listserv members.

Sharf, B. F. 1999. Beyond netiquette: the ethics of doing naturalistic discourse research on the internet. In S. Jones (Ed.), Doing internet research (pp. 243-256). Thousand Oaks, CA: Sage.

[Posted to aoir list by David Eddy-Spicer.]

Suler, John (2000). Ethics in Cyberspace Research. In Psychology of Cyberspace. <http://www.rider.edu/users/suler/psycyber/ethics.html>

[John Suler provides an excellent list of questions for researchers to help them consider how far their work fulfills the requirements for informed consent, privacy, and consultation and evaluation of the study.

Submitted to the aoir list by Lois Ann Scheidt <lscheidt@indiana.edu>]

University of Bristol, "Self Assessment Questionnaire for Researchers Using Personal Data," available from <http://www.bris.ac.uk/Depts/Secretary/datapro.htm>

[Submitted by Christine Hine to aoir ethics list]

Waern, Yvonne. 2001. Ethics in Global Internet Research. Report from the Department of Communication Studies, Linköping University, 2001:3. (Available in PDF format from the author, <yvowa@tema.liu.se>)

Includes a good discussion of utilitarian vs. rights approaches, and a series of careful reflections on how to apply the guidelines from the Natural Sciences and Engineering Research Council of Canada, including

Respecting human dignity implies protecting the multiple and interdependent interests of the person - from bodily to psychological to cultural integrity. (cited in Waern, p. 7)

While she recognizes the utilitarian benefits of research, Waern tends to lean much more towards observing rights in research (and in this way, is an example of a stronger tendency towards the deontological among European and Scandinavian researchers). So she says in her conclusion, for example:

…research should provide more benefit than harm. However, the exposition here shows that it is problematic to propose that no harm is done, and even more so to claim what benefit research gives. (11)

Waern also describes a bit of Internet research of her own - a piece documenting the dominance of English- and German-language literature on research ethics. This leads to her observation that there is a cultural bias in Internet research and its ethics:

…the ethical guidelines found (on the Internet) are based on Western culture in general and Anglo/Saxon culture in particular. It may well be the case that these guidelines place less value upon establishing trust and intimate relationship between the research and the subject than other cultures. On the other hand, it might place higher value on privacy than other cultures. A continued investigation of ethical issues in various cultures is therefore greatly needed for research with the aim of studying global Internet use. (12)

Resources on US / EU / European differences

Aguilar, John R. 1999/2000. Over the Rainbow: European and American Consumer Protection Policy and Remedy Conflicts on the Internet and a Possible Solution. International Journal of Communications Law and Policy, Issue 4 (Winter 1999/2000), 1-57.

Documents extensively the differences in consumer protection - see especially section III, "E-Commerce Concerns and the Cultural Battle Waging Between the EU and US" (11ff.)

Nihoul, Paul. 1998-1999. Convergence in European Telecommunications: A Case Study on the Relationship between Regulation and Competition (Law). International Journal of Communications Law and Policy, Issue 2 (Winter), 1-33.

Reidenberg, Joel R. 2000. Resolving Conflicting International Data Privacy Rules in Cyberspace, Stanford Law Review, Vol. 52:1315-1376.

[My thanks to Kirk St. Amant for making me aware of these resources.]

Resources in Philosophical Ethics

Birsch, Douglas. 1999. Ethical Insights: A Brief Introduction. Mountain View, California: Mayfield.

Boss, Judith. 2001. Ethics for Life: An Interdisciplinary and Multicultural Introduction, 2nd ed. Mountain View, CA: Mayfield Publishing.

Rachels, James. 1999. The Elements of Moral Philosophy, 3rd ed. Boston: McGraw-Hill.

Thomson, Anne. 1999. Critical Reasoning in Ethics: A Practical Introduction. London, New York: Routledge.

Weston, Anthony. 2001. A 21st Century Ethical Toolbox. New York, Oxford: Oxford University Press.

Zeuschner, Robert B. 2001. Classical Ethics: East and West. Boston: McGraw-Hill.

 

 

V. Addendum: Discussion of contrast between utilitarian and deontological approaches - as these are reflected in contrasts between the U.S. and Europe (Scandinavia and the EU) in laws regarding privacy and consumer protection.

As noted in our Preliminary Report, a comparison between extant US guidelines (e.g., the Belmont Report, the Federal Codes, the 1999 AAAS report, and a spread of articles from US-based researchers and ethicists) and EU guidelines (first of all, the NESH guidelines - National Committee for Research Ethics in the Social Sciences and the Humanities [NESH], Norway, "Guidelines for research ethics in the social sciences, law and the humanities" [2001]: <http://www.etikkom.no/NESH/guidelines.htm> - and the EU Data Protection Directive) suggests a clear contrast between US and EU approaches. In ethical terms, it is the contrast between more utilitarian (US) approaches (e.g., as these are more likely to allow cost-benefit analyses to override concerns regarding primary rights and responsibilities) and more deontological (EU) approaches (as these lay greater stress on protecting individual rights - first of all, the right to privacy - even at the cost of thereby losing research that promises to benefit the larger whole 7).

This contrast can be seen, for example, in the differences between two "ethical protocols" available on the web, the first from the UK and the second from the US:

University of Bristol, "Self Assessment Questionnaire for Researchers Using Personal Data," available from <http://www.bris.ac.uk/Depts/Secretary/datapro.htm>;

Suler, John (2000). Ethics in Cyberspace Research. In Psychology of Cyberspace. <http://www.rider.edu/users/suler/psycyber/ethics.html>.8

More broadly, this ethical contrast appears to be mirrored in the differences between EU and US laws regarding privacy and consumer protection. According to the 1995 E.U. Data Protection Directive, data-subjects must:

* Unambiguously give consent for personal information to be gathered online;

* Be given notice as to why data is being collected about them;

* Be able to correct erroneous data;

* Be able to opt-out of data collection; and

* Be protected from having their data transferred to countries with less stringent privacy protections.

(see <http://www.privacy.org/pi>)

In this light, it is clear that E.U. citizens enjoy a priority on individual privacy vis-à-vis business interests - i.e., a deontological emphasis on respect for persons in the form of privacy protections - whereas the U.S. favors business interests over individual privacy. For example, Reidenberg argues that while there is global convergence on what he calls the First Principles of data protection, there are clear differences in how these First Principles are implemented, i.e., through "either liberal, market-based governance or socially-protective, rights-based governance" (Joel R. Reidenberg, Resolving Conflicting International Data Privacy Rules in Cyberspace, Stanford Law Review, Vol. 52 (2000): 1315-1376, at 1315).

In particular, the European model is one in which

omnibus legislation strives to create a complete set of rights and responsibilities for the processing of personal information, whether by the public or private sector. First Principles become statutory rights and these statutes create data protection supervisory agencies to assure oversight and enforcement of those rights. Within this framework, additional precision and flexibility may also be achieved through codes of conduct and other devices. Overall, this implementation approach treats data privacy as a political right anchored among the panoply of fundamental human rights and the rights are attributed to "data subjects" or citizens. (1331f.)

By contrast, the United States is distinctive in its approach, in which

… the primary source for the terms and conditions of information privacy is self-regulation. Instead of relying on governmental regulation, this approach seeks to protect privacy through practices developed by industry norms, codes of conduct, and contracts rather than statutory legal rights. Data privacy becomes a market issue rather than a basic political question, and the rhetoric casts the debate in terms of "consumers" and users rather than "citizens." (1332)

- i.e., a consequentialist position, one that emphasizes economic benefit at large over possible risks to individual privacy.

However well the associations between the U.S. and consequentialism and between the E.U. and deontology hold up 9 - recent discussion among the aoir ethics committee, following informal research by Christine M. Hine, has made it even clearer that the contrasts between the US and the EU on data privacy protection are paralleled by more fine-grained contrasts among the EU member states themselves.

 

Endnotes

1. My profound thanks to the members of the committee who have generously shared their time, expertise, and care through discussion and critical evaluation of the issues raised in this document. The committee includes: Poline Bala - Malaysia; Amy Bruckman - USA; Sarina Chen - USA; Brenda Danet - Israel/USA; Dag Elgesem - Norway; Charles Ess - USA; Andrew Feenberg - USA; Stine Gotved - Denmark; Christine M. Hine - UK; Soraj Hongladarom - Thailand; Jeremy Hunsinger - USA; Klaus Jensen - Denmark; Storm King - USA; Chris Mann - UK; Helen Nissenbaum - USA; Kate O'Riordan - UK; Paula Roberts - Australia; Wendy Robinson - USA; Leslie Shade - Canada; Malin Sveningson - Sweden; Leslie Tkach - Japan; John Weckert - Australia.

2. In their project to collect all (English) literature pertinent to online research, the Committee on Scientific Freedom and Responsibility of the AAAS (American Association for the Advancement of Science) includes the following disciplines: Anthropology, Business, Communications/Media, Computer Science, Economics, Education, Law, Linguistics, Medicine, Nursing, Pharmacology, Philosophy, Political Science, Psychology, Public Health, Social Work, Sociology, and Statistics. (AAAS CSFR, "Categories.doc," quoted by permission.)

3. Deborah Johnson (2001) provides excellent definitions of these (and other) basic terms in her classic introduction to computer ethics.

"Utilitarianism is an ethical theory claiming that what makes behavior right or wrong depends wholly on the consequences….utilitarianism affirms that what is important about human behavior is the outcome or results of the behavior and not the intention a person has when he or she acts" (36: emphasis added, CE). When faced with competing possible actions or choices, utilitarian approaches apply an ethical sort of cost/benefit approach, in the effort to determine which act will lead to the greater benefit, usually couched in terms of happiness (a notoriously difficult and ambiguous concept – thus making utilitarian approaches often difficult to apply in praxis). As Johnson goes on to point out here, there are several species of utilitarianism (what some ethicists also call teleological or goal-oriented theories). Briefly, one can be concerned solely with maximizing benefit or happiness for oneself (ethical egoism) and/or maximizing benefit or happiness for a larger group (hence the utilitarian motto of seeking "the greatest good for the greatest number").

"By contrast, deontological theories put the emphasis on the internal character of the act itself," and thus focuses instead on the motives, intentions, principles, values, duties, etc., that may guide our choices" (Johnson 2001, 42: emphasis added, CE). For deontologists, at least some values, principles, or duties require (near) absolute endorsement – no matter the consequences. As we will see in this document, deontologists are thus more likely to insist on protecting the fundamental rights and integrity of human subjects, no matter the consequences – e.g., including the possibility of curtailing research that might threaten such rights and integrity. Utilitarians, by contrast, might argue that the potential benefits of such research outweigh the possible harms to research subjects: in other words, the greatest good for the greatest number would justify overriding any such rights and integrity.

Finally, virtue ethics derives in the Western tradition from Plato and Aristotle. The English word "virtue" in this context translates the Greek arete - better translated as "excellence." In this tradition, "…ethics was concerned with excellences of human character. A person possessing such qualities exhibited the excellences of human good. To have these qualities is to function well as a human being" (Johnson 2001, 51).

Contemporary feminist ethics traces much of its development to Carol Gilligan’s work on how women make ethical decisions – in ways that both parallel and often sharply contrast with the ethical developmental schema established by Lawrence Kohlberg. Briefly, Gilligan found that women as a group are more likely to include attention to the details of relationships and caring, choosing those acts that best sustain the web of relationships constituting an ethical community – in contrast with men who as a group tend to rely more on general principles and rules. For Gilligan, this basic contrast between an ethics of care and an ethics of justice is by no means an either/or choice: on the contrary, she finds that the highest stages of ethical development are marked by the ability to make use of both approaches. See Rachels (1999, 162-74) for an overview and suggestions for further reading.

Rachels also provides a more complete account of utilitarianism, deontology, and still other ethical decision-making procedures. In addition, interested readers are encouraged to review Weston (2001), Thomson (1999), Birsch (1999), and Boss (2001) for both more extensive discussion and applications of ethical theory. (See note 4 below for additional resources in cross-cultural ethics.)

Finally, while ethicists find that these distinctions between diverse theories and approaches are useful for clarifying discussion and resolving conflicts – they (largely) agree that a complete ethical framework requires a careful synthesis of several of these theories.

4. For cross-cultural approaches to ethics in addition to Boss (2001), see, for example Zeuschner (2001).

5. The NESH guidelines (National Committee for Research Ethics in the Social Sciences and the Humanities [NESH], Norway, "Guidelines for research ethics in the social sciences, law and the humanities" [2001]: <http://www.etikkom.no/NESH/guidelines.htm>) point out that "public persons" and people in public spaces have a reduced expectation of privacy, such that simple observation of such persons and people is not ethically problematic. By contrast, recording (e.g., using audio- or videotape) such persons and people does require their (informed) consent.

6. As a middle ground between more public and more private domains, and between greater and lesser obligation to protect privacy — there is the correlative set of expectations as to what counts as polite or courteous behavior, sometimes called "Netiquette." For example, it is arguable that any listserv or e-mail is public because the Internet is technologically biased in favor of publicity, listserv archives are often made available publicly on the Web, etc. Insofar as this is true, there is no strict ethical obligation, say, to ask permission before quoting an e-mail in another context. Nonetheless, it seems a matter of simple courtesy, if not ethical obligation, to ask authors for permission to quote their words in other electronic domains.

If the request is for quoting an electronic document in print, then prevailing practice — and perhaps the requirements of copyright law? — strongly suggest that all such quotes require explicit permission from the author.

See also Allen (1996), who argues for a "ground-up" dialogical ethics - i.e., one developed over the course of the research project through on-going communication with one's research authors (in contrast with the usual social science and medical approach that presumes these are subjects). The results of this approach are a concrete instance of the sort of middle ground described above.

7. Consider the following comments on the NESH guidelines (from aoir ethics working committee Preliminary Report):

More stringent ethical obligations — and requirements

The ethical requirements established here appear to be somewhat more stringent than in other statements we’ve examined. For example:

The obligation to respect human dignity

Human dignity implies that every one of us has interests that can not be set aside, whether in the interests of greater insight or to benefit society in other ways.

That is, contra the utilitarian approach that allows individual interests, including life, to be overridden if necessary for the greater good — this statement seems to say that human dignity is an absolute, which cannot be overridden for the sake of benefit for others.

The obligation to inform research subjects

Persons who are the subjects of research must be given the information they need for a reasonable understanding of the research field, of the consequences of participating in the research project, and of the object of the research. They must also be told who is paying for the research.

While we have discussed the advisability of subjects knowing about funding sources, I don’t recall that we’ve been collectively insistent on this point.

The obligation to respect individuals’ private lives and families

Researchers must show due respect for the individual's private life. Each person is entitled to control over whether or not to make identifiable information on his or her private life and close relations available to others.

Respect for privacy is intended to protect people against unwanted interference and against unwanted observation.

I find this striking as it includes one’s family and/or other close relations as part of the circle of protection that researchers must draw. By contrast, I’ve always assumed in reading other guidelines and statements that the obligation to protect the identity of one’s subjects applies solely to the individual.

 

The confidentiality requirement

Persons who are made the subjects of research are entitled to confidential treatment of all information they give. The researcher must prevent the use and transmission of information which may harm the individual on whom the research is being carried out. The research material must normally be rendered anonymous, and the storage and destruction of lists of names or personal identity numbers must satisfy strict requirements. (Emphasis added, CE)

8. This contrast is also apparent, for example, between the guidelines suggested by Amy Bruckman and Susan Herring - both U.S.-based researchers. See Jankowski, Nickolas and Martine van Selm. 2001 (?). "Research Ethics in a Virtual World: Some Guidelines and Illustrations" <http://www.brunel.ac.uk/depts/crict/vmpapers/nick.htm> for a discussion of this contrast as presented as part of a panel discussion at aoir 2.0.

9. Diane Michelfelder (The moral value of informational privacy in cyberspace. Ethics and Information Technology 3 [2001]: 129-135) has argued that both U.S. and European law are able to ground privacy as a fundamental human right. To begin with,

legal protection for privacy in the US has grown up around two fundamental privacy interests. On the one hand, there is the constitutional right to privacy first established by the US Supreme Court decision in Griswold v. Connecticut.(4) On the other hand, there is the …constitutional right to informational privacy backed by the Fourth Amendment as well as by tort-related guarantees. The former finds its moral basis largely rooted in a single value, the value of personal autonomy. The latter finds its moral basis in a host of different values, including personal liberty and dignity, solitude, self-esteem, self-identity, and the development of one’s individuality for the sake of achieving happiness.(5) (131, with references)

With regard to the European Union Data Protection Directive (1995), she writes,

The DPD explicitly states that "data-processing systems are designed to serve man." With this in mind, the DPD finds its moral basis in the 1950 European Convention for the Protection of Human Rights and Fundamental Freedoms, specifically in this Convention’s statement that "everyone has the right to respect for his private and family life, his home and his correspondence." These words, particularly the mention of ‘correspondence,’ ring of the language of the Fourth Amendment and privacy construed as the ‘right to be left alone.’ They are also though suggestive of the constitutional ‘zone of privacy’ that Justice Douglas argued for in the Griswold decision. The moral values underlying the DPD can accordingly be tied in both to the European Convention for the Protection of Human Rights, and to the US Constitution. (132: emphasis added, CE)

Nonetheless, beyond the initial comparison offered here between the NESH Guidelines and the U.S. AAAS report, additional support for my claim that the U.S. approach is more consequentialist in contrast with a more deontological European approach may be seen in the different approaches each takes to laws concerning e-commerce and e-consumers. Briefly, U.S. law places the burden of privacy protection first of all on the consumer - placing corporate "rights" to gather information on consumers ahead of individual rights. By contrast, the E.U. Data Protection Directive, as noted above, places priority on protecting individual privacy rights over corporations' and governments' interests in collecting information on individuals. John R. Aguilar ("Over the Rainbow: European and American Consumer Protection Policy and Remedy Conflicts on the Internet and a Possible Solution," International Journal of Communications Law and Policy, Issue 4, Winter 1999/2000, 1-57) extensively documents this contrast: see especially section III, "E-Commerce Concerns and the Cultural Battle Waging Between the EU and US" (11ff.).