The focus of this annotated bibliography is primarily on articles dealing with the use of personal computers by anthropologists for content analysis of written texts. Unfortunately, there is a paucity of such articles in the literature, which seems to arise from the instinctive avoidance anthropologists have shown toward working with textual rather than conversational discourse materials. Many anthropologists seem to leave the texts of their informants to scholars in romance languages, religion, the humanities, or other hermeneutic disciplines, somehow leaving them outside the field of "culture."
This is apart from their hesitance toward using technology such as computers in their research. The goal here is to look at anthropologists who have sidestepped both obstacles. This survey is somewhat far-ranging, however, in that some of these texts look at content analysis purely from a theoretical standpoint without discussing practical applications, and others are by other social researchers involved in ethnographic work. Further, some look at the issue of coding and categorization of transcribed verbal (spoken) data or field notes rather than written cultural texts. Nonetheless, they are all likely to be of interest to linguistic and cultural anthropologists wishing to gain a better understanding of the possibilities of computer-assisted content analysis for their discipline.
Agar, Michael, "Microcomputers as field tools," Computers and the Humanities, Vol. 17, 1983, p. 19.
In this article, Agar argues for what one needs in a computer to use it "in the field" for qualitative analysis. Clearly, this one's painfully dated. Agar argues that in order to do this, social scientists "also have to be programmers," and that to code data and retrieve it by categories, they need a minimum of 48K of RAM. (My Commodore 64 in junior high would have fit the bill, I suppose.) Nonetheless, Agar makes a convincing case for how the computer improves the efficiency of the ethnographer without interfering with "ethnographic rapport," and strongly suggests that researchers should take computers with them into the field rather than bringing reams of hand-written notes back to computer labs at home, since "data elimination" can be done right away.
Anderson, Ronald E., and Brent, Edward E. Jr., Computer Applications in the Social Sciences, Temple University Press, Philadelphia, 1990.
This book is certainly valuable in its own right, discussing everything from computer literacy for the social sciences to using "expert system" artificial intelligence and creating computer models of complex social institutions. However, anthropologists interested in doing content analysis will particularly appreciate Chapter 13, "Analyzing Text," which discusses content analysis, qualitative research methods, and comprehension of "natural languages" (i.e., the ones that people, rather than computers, speak). It has an excellent discussion of some of the computer text dictionaries and software packages available, though it really doesn't help rank them according to their relative merits and flaws.
Becker, Howard S.; Gordon, Andrew C.; and LeBailly, Robert K., "Field Work with the Computer: Criteria for Assessing Systems," Qualitative Sociology, Vol. 7, No. 2, Spring/Summer 1984, p. 16.
This article is an attempt to counsel social scientists as to what sort of computing hardware and software they will need "in the field," based on their needs with regard to data collection, reduction, and coding. Not surprisingly, it's somewhat dated, considering all the advances in portable computing that have occurred in the last decade. However, it is useful particularly in the areas where Becker, et al., discuss what they consider to be the best and most economical software packages for coding field notes - many of which are useful for other kinds of content analysis.
Berelson, Bernard, Content Analysis in Communication Research, Free Press, Glencoe, 1952.
"An oldie but goodie," Berelson's book is mainly focused on the early uses of content analysis - the tracking of public opinion (in the days before "scientific polling"), the monitoring of wartime (and other) propaganda, and the analysis of trends in journalism. After reading it, one is likely to be amazed at how complete and mature a field of research content analysis was at this stage - and how surprisingly little its techniques have advanced since, except for computerization. Berelson discusses the uses of content analysis, and is frank about the "technical problems" involved with it (under the constraints of the technology of that era). Also valuable for its massive bibliography, showing early forms of content analysis used as far back as the previous century.
Bernard, H. Russell, and Evans, Michael J., "New microcomputer techniques for anthropologists," Human Organization, Vol. 42, 1983, p. 182.
In this short article, Bernard and Evans attempt to bring some "new" computer applications to the awareness of their applied anthropology colleagues. While their discussion of new statistical analysis packages and word processors that handle foreign language alphabets is undeniably helpful, anthropologists interested in content analysis will be most interested in the part of the article that discusses a "new" software package for coding and editing field notes on "memory-limited microcomputers" using a "database management system" approach.
Budd, Richard W.; Thorp, Robert K.; and Donohew, Lewis, Content Analysis of Communications, Macmillan Company, New York, 1967.
This volume on content analysis helps lay out the conceptual foundations of content analysis, discussing the environmental contexts of communication and the basis for formulating hypotheses and systematic studies with regard to communication. As with other texts, it goes into the matters of sampling, measurement, categorical formulation, reliability and validity, and the "relatively new technique" of doing content analysis with the computer. Especially valuable for its massive bibliography, this discussion of content analysis in its "middle years" shows some of the ways the technique was used in a less media-saturated time.
Colby, Benjamin N., "Toward an Encyclopedic Ethnography for Use in 'Intelligent' Computer Programs," in Dougherty, Janet, ed., New Directions in Cognitive Anthropology, University of Illinois Press, Chicago, 1985.
Colby's article here discusses the possibility of creating "smart/expert" artificially intelligent systems with an ethnographic component, suggesting the largest hurdle would be "natural language" comprehension of multiple linguistic systems. He discusses some of the work with "semi-smart" content analysis software such as the General Inquirer, and how that might lay the basis for more "intelligent" special-purpose content dictionaries and parsing programs that could be created. This edited volume is useful for its other articles, which discuss a whole range of issues in cognitive anthropology, such as taxonomies, schemata, fuzzy sets, and ethnosemantics, which bear on the theoretical basis of content analysis.
Colby, Benjamin N.; Kennedy, Sayuri; and Milanesi, Louis, "Content Analysis, Cultural Grammars, and Computers," Qualitative Sociology, Vol. 14, No. 4, 1991, p. 373.
In this article, Colby, et al., discuss a "program under development," SAGE, for the labelling of clauses by semantic features and the creation of narrative grammars from oral folk literature. They suggest that all kinds of cultural forms, including body painting, divination, and classroom interaction, could be analyzed in terms of such "plot grammars." They discuss specifically their intentions to use the program to examine genres of folktales (bounded to specific groups and time periods, they stress) in order to get at the "cultural logic" of their creators.
Edwards, Jane Anne, and Lampert, Martin D., eds., Talking Data: transcription and coding in discourse research, Lawrence Erlbaum Associates, Hillsdale, 1993.
This is a good, general, all-purpose primer in sociological conversational analysis. The authors go through all the basic methods involved in transcribing and coding spoken conversation in all its various settings. Once again, content analysis researchers might see some clear points of contention, but might also uncover one or two interesting ideas for dealing with written textual data from their cultures of study.
Gerbner, G.; Holsti, O.R.; Krippendorff, K.; Paisley, W.J.; and Stone, P.J., eds., The Analysis of Communication Content: developments in scientific theories and computer techniques, John Wiley, New York, 1969.
The "state of the art" in content analysis - in 1969. The authors practically foam at the mouth over the possibilities for new kinds of content analysis that computers will make available. They discuss it all in light of "new" scientific discoveries with regard to cognitive science, cybernetics, and natural language analysis. Unfortunately, once again, it can't help but seem a little dated to 1990s content analysis researchers.
Hirschman, Elizabeth S., Postmodern Consumer Research: the study of consumption as text, Sage Publications, Newbury Park, 1992.
Hirschman suggests something that many advertisers believe already - that the patterns of consumption of people in post-industrial societies can be read as a "text," whose meaning, once deciphered, will help them sell more products and "product allegiance." However, her goals are to suggest "readings" of the text of consumption for "counter-hegemonic" purposes, working against the "Madison Avenue" semioticians. In doing so, she takes the reader on a fascinating tour through a rather long history of philosophical inquiry in the social sciences.
Holsti, Ole R., Content Analysis for the Social Sciences and Humanities, Addison-Wesley Publishers, Reading, 1969.
An excellent basic primer for all social scientists interested in doing content analysis. Holsti suggests the ways in which content analysis can benefit anthropologists, sociologists, psychologists, and behavioral scientists, as well as people working in the humanities interested in issues of authorship and influence of texts, etc. He discusses some of his own work in using content analysis to track coverage of international political crises. Perhaps the best of all the many "how-to" guides as far as addressing the uses of content analysis specifically for social researchers.
Jacobs, Jerry, "A Phenomenological Study of Suicide Notes," The Sociologist as Detective: an introduction to research methods, 2nd edition, Praeger Publishers, New York, 1976.
Sanders' textbook lays out the ways in which the sociological method involves the same kind of close attention, dodging false trails, and methodical clue-searching that detective work does. Sanders' fifth section discusses content analysis as a way for sociologist-detectives to find "leads" in texts, and goes into written-text and conversational analysis. The written-text subsection discusses Jacobs' study of suicide notes, in which he examines the notes left behind by suicide victims and uses content analysis to "open the door into their state of mind" and discover if the "real motives" of suicide fit into Durkheim's speculations on the matter.
Kemper, Robert V., "Trends in urban anthropological research: an analysis of the journal Urban Anthropology, 1972-1991," Urban Anthropology, Vol. 20, No. 4, Winter 1991, p. 373.
Kemper writes in Urban Anthropology, for the readers of his journal, about trends that he's seen in it over the last 20 years. He's interested in examining what themes his colleagues have been "after" in pursuing their ethnographies and research, and suggests some interesting cyclical patterns based on his admittedly "low-level" content analysis of the journal. It's interesting in a whole sort of sociology-of-knowledge framework, seeing what issues came to the forefront in different periods of publishing.
Kirk, Rodney C., "Microcomputers in anthropological research," Sociological Methods and Social Research, Vol. 9, 1981, p. 473.
Kirk's article attempts to warn of the "risks" (rural telephone operators seem to be a big one) involved in using computers in the field, but then goes on to argue what he sees as the many advantages. Of particular interest is the section where Kirk discusses how he and his research team used microcomputers to retrieve and manipulate data about pesticides and farmworker health in a qualitative form. He suggests that working with the data in ways other than the "traditional quantitative" ones makes it more readable and displayable for the researchers.
Krippendorff, Klaus, Content Analysis: an introduction to its methodology, Sage Publications, London, 1980.
Part of Sage's CommText series, Krippendorff's volume lays out a comprehensive survey of the theory and practice of content analysis. The first part of the book deals with the historical, conceptual, and logical foundations of content analysis. Then he goes into the "nitty gritty" of it all, in chapters on unitizing, sampling, recording and coding, analysis and constructs, using computers, and testing for reliability and validity. This book is an excellent intro for social scientists and communications scholars who want to see the step-by-step procedure for doing a content analysis.
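The step-by-step procedure Krippendorff lays out - unitizing, coding by category, and testing reliability - can be sketched in miniature. The category dictionary below is invented for illustration, and plain percent agreement stands in for Krippendorff's own (more sophisticated) reliability coefficient:

```python
# A minimal sketch of dictionary-based coding plus an intercoder
# agreement check. The categories and the use of simple percent
# agreement (rather than Krippendorff's alpha) are illustrative
# assumptions, not taken from the book.

CATEGORIES = {
    "wealth": {"money", "rich", "poor", "income"},
    "kinship": {"mother", "father", "cousin", "clan"},
}

def code_unit(text):
    """Assign a sampled unit (here, a sentence) every category whose
    dictionary shares a word with it."""
    words = set(text.lower().split())
    return sorted(cat for cat, terms in CATEGORIES.items() if words & terms)

def percent_agreement(coder_a, coder_b):
    """Share of units on which two coders assigned identical codes."""
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

units = ["The clan shares its income", "My cousin visited today"]
print([code_unit(u) for u in units])  # → [['kinship', 'wealth'], ['kinship']]
```

In practice the unitizing and sampling steps Krippendorff stresses would come first; the point here is only that the coding and reliability stages reduce to small, mechanical operations once the categories are fixed.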
Laffal, Julius, "Concept Analysis of Language in Psychotherapy," in Russell, Robert L., Language in Psychotherapy: Strategies of Discovery, Plenum Press, New York, 1987.
It's not a typo - Laffal's concept of concept analysis is not quite the same thing as content analysis, but is interestingly similar. He discusses the need for more formal methods for interpreting meaning in patient discourse for an effective therapeutic relationship. His technique of "factor analysis" of "semantic primitives" and "key words" is surprisingly similar to Weber's method of content-analyzing text. The therapist must transcribe the patient's discourse, and search for interrelated "concept categories." He discusses how this might help deal with the problems of schizophrenic patients. He might not call it content analysis - but if it looks like a duck...
Lasswell, H.D., et al. Language of Politics: studies in quantitative semantics, MIT Press, Cambridge, 1965.
Lasswell discusses in this volume much of the research on political communication, especially the "new" field of political campaigning through mass electronic broadcasting. While he discusses a variety of techniques with regard to determining the frequency of certain phrases in political speech, only some of them could truly be called content analysis. In any case, it's a good introductory volume for acquainting political scientists with the methodology of content analysis.
Manwar, Ali; Johnson, Bruce D.; and Dunlap, Eloise, "Qualitative Data Analysis with Hypertext: A Case of New York City Crack Dealers," Qualitative Sociology, Vol. 17, No. 3, 1994, p. 283.
What's interesting about Manwar, et al., is not that they attempt to content-analyze a particularly novel form of discourse, namely ethnographic interviews with New York City crack dealers, but rather that they found their analysis was best facilitated by a hypertext program, Folio Views, which they claimed helped them overcome the "almost insurmountable difficulties" of managing and coding the "sheer volume" of data they collected. They make a convincing case for the use of hypertext for solving many qualitative data-related problems, suggesting that it might be useful for a lot more than just interactive fiction books. This is a relatively recent article, but then, hypertext has been around for 25 years, so one might have expected an article discussing its use for content analysis before now... no such luck.
Markoff, J.; Shapiro, S.; and Weitman, S., "Toward the integration of content analysis and general methodology," in Heise, David R., Sociological Methodology, Jossey-Bass, San Francisco, 1975.
The goal of Markoff, et al., is to argue that social scientists should not treat content analysis as an end in itself, but should instead integrate it with other traditional parts of the "sociological toolkit." They feel that there has been insufficient communication between social researchers doing content analysis and their colleagues, and for that reason the two groups are not collaborating. The authors suggest that sociology can benefit through the use of content analysis, but the sociological method must better integrate textual studies into its theoretical foundations.
McCarty, Christopher, and Podolefsky, Aaron, "Topical sorting: A technique for computer assisted qualitative analysis," American Anthropologist, Vol. 84, 1983, p. 4.
McCarty and Podolefsky discuss what they see as the limitations of "traditional" management and analysis of fieldnotes, and then attempt to convince the reader of the advantages of using text-editing programs "that are widely available for personal computers" to store, retrieve, edit, and sort coded fieldnotes in paragraph form. Claiming that the computer helped them deal with 10,000 pages of typed fieldnotes, they once again emphasize the point that the computer is most efficient at managing data from "large-scale" projects.
Moerman, Michael, with Sacks, Harvey, Talking Culture: Ethnography and Conversation Analysis, University of Pennsylvania Press, Philadelphia, 1988.
Moerman's "conversation analysis," which involves the transcribing of conversational data and appending notations to it, could certainly be considered a form of content analysis. He discusses in this book his attempt to analyze the conversational discourse of the various actors involved in Thai criminal trials in order to get at their concepts of sequence, intentionality, and truth. He claims the use of this method is rooted in the "ethnography of speaking" as pioneered by Harvey Sacks and John Gumperz - but it certainly provides some interesting ideas about what to do with written texts as well.
Ogilvie, Daniel M.; Stone, Philip J.; and Kelly, Edward F., "Computer Aided Content Analysis," p. 218, in Smith, Robert K.; and Manning, Peter K., eds., A Handbook of Social Science Methods, Volume 2: Qualitative Methods, Ballinger Publishing, Cambridge, 1982.
Enthusiastic about the "new" possibilities for using smaller computers with microprocessors for content analysis in field situations, Ogilvie, et al., set out to discuss the techniques, the hardware, and the software. They attempt to define the ways in which computers will make the task of content analysis more rigorous and efficient, while cautioning researchers not to allow the technology to make them complacent in thinking hard about data validity. A good all-around piece.
Piault, C., "A methodological investigation of content analysis using electronic computers for data processing," in Hymes, Dell D., ed., The Use of Computers in Anthropology, Mouton Publishing, The Hague, 1965.
Piault discusses the ways in which the "newer" electronic computers can be utilized for content analysis research, so as to help researchers overcome some "old" technical problems. Researchers are counseled to consider how computers can alter some of the steps in designing their analyses. This book is valuable for challenging the computerphobia of anthropologists at an early date, but the article in particular contains nothing of unusual interest.
Pool, Ithiel de S., Trends in Content Analysis, University of Illinois Press, Urbana, 1959.
Once again proving the hoary age of the field of content analysis, this book shows what Pool believes to be all the major "trends" in the field - themes in media coverage that researchers have focused on - up to 1952! It would be interesting to see what "trends" there have been in the field over the past 40 years. Perhaps someone needs to do a close content analysis of all the content analysis literature from that time period...?
Schlegel, Alice, and Schlegel, Barry, Cross-Cultural Samples and Codes, University of Pittsburgh Press, Pittsburgh, 1980.
The Schlegels' book discusses the use of coding in the HRAF data archive. It's an interesting exposition because while it's not content analysis per se, the techniques used in coding the HRAF archive will nonetheless be of interest to people who do content analysis. They do a good job in discussing some of the limitations of their methods and other issues in working with cross-cultural data.
Sproull, Lee S., and Sproull, Robert F., "Managing and analyzing behavioral records: explorations in non-numeric data analysis," Human Organization, Vol. 41, 1982, p. 283.
While Sproull and Sproull attempt to argue that their method of "computer-assisted ethnography" is different from content analysis and "protocol analysis," the technique they describe herein is nonetheless quite similar. Basically, it involves "non-numeric" analysis of "behavioral records," which they define as "coded" texts derived from observations, interviews, and self-reports. Their "custom-designed frequency program" for "tallying the occurrence of each label or tag and printing the resulting frequencies" sounds to me like a content-analysis software package.
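The "custom-designed frequency program" the Sproulls describe - tallying the occurrence of each label or tag and printing the resulting frequencies - would amount to only a few lines today. The coded records and tag names below are invented for illustration:

```python
from collections import Counter

# Hypothetical coded behavioral records: each observation carries the
# tags a researcher attached to it. The tags are invented examples.
records = [
    {"text": "Vendor haggles over price", "tags": ["market", "exchange"]},
    {"text": "Buyer invokes kin obligation", "tags": ["kinship", "exchange"]},
    {"text": "Vendor extends credit", "tags": ["market", "credit"]},
]

# Tally each label and print the resulting frequencies, in the spirit
# of the Sproulls' frequency program.
tally = Counter(tag for rec in records for tag in rec["tags"])
for tag, count in tally.most_common():
    print(f"{tag:10s} {count}")
```

Whether one calls the output a frequency listing or a content analysis, the mechanics are the same.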
Tallerico, Marilyn, "Applications of Qualitative Analysis Software: A View from the Field," Qualitative Sociology, Vol. 14, No. 3, 1991, p. 275.
Unfortunately for anthropologists, Tallerico's "field" is not out on some South Sea island, but instead in her professional field. She discusses in this article her use of the Ethnograph text-analysis program within the context of dissertation student advising and mentoring for her PhD students in educational administration. She says she has written the article to help her "colleagues get their feet wet," claiming it helps her go through dissertation drafts and discover unfounded attributions and overstatements. Thus, for her it's a tool for checking research, not doing research.
Webb, E.J.; Campbell, D.T.; Schwartz, R.D.; and Sechrist, L., Unobtrusive Measures: Nonreactive research in the social sciences, Rand McNally, Chicago, 1966.
Webb, et al., discuss a number of "unobtrusive measures," which they define as sociological techniques which take place outside of the awareness of one's research subjects. Webb et al. suggest that sociologists need not automatically assume that all unobtrusive measures involve some sort of covert surveillance, eavesdropping, or spying, offering the counterexamples of household waste studies, etc. They frame the problem in terms of the old idea of the Heisenberg Uncertainty Principle, and how subjects' awareness of being studied alters their behavior; also how such techniques may be called for in "sensitive" sociological situations, such as criminal behavior. Content analysis is mentioned and discussed in some detail as an important unobtrusive technique.
Weber, Robert Philip, Basic Content Analysis, Sage Publications, London, 1985.
Weber's book indeed gives you the "basics" of content analysis. While short on examples, his text does discuss how to classify and interpret qualitative textual data, techniques and procedures, and some speculations on issues in and the future of the field of content analysis. He also has a valuable appendix (which should be a chapter in its own right) discussing the use of computer software and textual data archives and concordances. Weber, along with Krippendorff, is reputed to be one of the "mavens" of content analysis, and he does a good job of introducing it to beginners here.
Weber, Robert Philip, "Computer-Aided Content Analysis: A Short Primer," Qualitative Sociology, Vol. 7, No. 2, Spring/Summer 1984, p. 126.
In this article, Weber attempts to argue for the advantages of computerized content analysis, suggesting it makes the task a lot easier and more powerful. Weber specifically discusses his use of computer analysis in tracking the attitude of the Democratic and Republican parties toward wealth throughout the 20th century, based on speeches in presidential campaigns, but suggests ways in which the technique is usable for a "wide variety of data - fieldnotes, letters, speeches, newspapers, books, and diaries, just to cite a few examples." The article is in many ways a short outline of his book on the basics of content analysis, but perhaps "propagandizes" a bit more for the usefulness of microcomputers.
Weinberg, Daniela, "Computers as a research tool," Human Organization, Vol. 33, 1974, p. 291.
While Dr. Weinberg attempts to argue the case that researchers need to emphasize "algorithmic thinking," and mentions how computers aided her in her research on sorting and displaying kinship and migration patterns based on local texts in a rural European village, she fails to discuss what techniques she used, leaving the reader in the dark. Nonetheless, this is an early article, written back in the days before micros when using computers was still a matter governed by batch-processing technician-priests, so she can perhaps be forgiven. She does a good job of arguing for the limitations of quantitative analysis and "traditional" interpretive methods in understanding the "rural exodus" from Europe.
Werner, Oswald, "Microcomputers in cultural anthropology: APL programs for qualitative analysis," Byte, Vol. 7, 1982, p. 250.
Werner discusses in this article how he used computers to "pioneer" a study of Navajo ethnomedical knowledge. He goes into detail about how he created three programs in the APL language for creating Navajo key word-in-context (KWIC) files. Unfortunately, APL is a somewhat arcane programming language, using unusual integer-indexing methods, and those not familiar with it might have a hard time following Werner's data manipulations. One might also wonder why he published it in a computer hobbyist magazine rather than a peer-reviewed anthropological journal.
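Werner's APL sources are not reproduced here, but the key word-in-context listing he describes is easy to sketch in a more familiar language. The sample sentence is invented, and this minimal version ignores the morphological complications actual Navajo text would raise:

```python
def kwic(text, keyword, width=3):
    """Return each occurrence of `keyword` with up to `width` words of
    context on either side - a simple key-word-in-context listing."""
    words = text.split()
    lines = []
    for i, w in enumerate(words):
        if w.lower() == keyword.lower():
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            lines.append(f"{left} [{w}] {right}")
    return lines

# Invented example sentence, standing in for transcribed field text.
sample = "The healer said the herb cures fever and the herb is gathered at dawn"
for line in kwic(sample, "herb"):
    print(line)
```

A real KWIC file would be built over an entire corpus and sorted by keyword, but the windowing logic at its core is no more than this.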
Werner, Oswald, Systematic Fieldwork, Sage Publications, Newbury Park, 1987.
Werner's book is an all-out call for anthropologists to be less haphazard and more methodologically precise with their field research. He attempts to criticize what he feels is the "naive empiricism" of anthropologists for thinking they can just "soak in" all the information that confronts them in the field. He mentions computer-assisted content analysis as one of several important "data techniques" which assure that the data the researcher gathers in the field is valid and testable.
Wood, Michael S., "Alternatives and options in computer content analysis," Social Science Research, Vol. 9, 1980, p. 273.
Realizing that content analysis "covers a wide assortment of approaches and techniques," Wood nonetheless seeks to deal with what he sees as its key theoretical and methodological problems. He attempts to argue in this article that specialized content analysis software packages, like the General Inquirer program, force researchers to depend too much on inflexible a priori assumptions about their data, such as operationalization based on words as units of analysis. He suggests that "general" text-processing packages might provide more advantages because they are more flexible in allowing the researcher to work a posteriori from their textual data, in choosing sampling units, etc.