The "art" of questionnaire construction: some important considerations for manufacturing studies


Nicolaos E. Synodinos

The Authors

Nicolaos E. Synodinos, Department of Marketing, University of Hawaii at Manoa, Honolulu, Hawaii, USA

Acknowledgements

Received July 2001. Revised March 2002. Accepted July 2002. This article is an expanded and updated version of a paper presented at the International Symposium on Manufacturing Strategy (ISMS '98) at Waseda University, Tokyo, Japan, November 18-20, 1998. The author is grateful to Sophia University (Tokyo, Japan), where portions of this manuscript were completed while he was there as a "visiting researcher."

Abstract

This article reviews research findings related to the "art" of constructing survey questionnaires. It discusses some of the important issues that should be considered in gathering quality data via questionnaires, provides general suggestions for their construction, includes a comprehensive list of important reference sources, and examines some of the survey-based studies published in Integrated Manufacturing Systems. Constructing a good questionnaire requires a thorough grasp of the intricacies of the topical area and detailed knowledge of the strengths and weaknesses of the different survey administration modes. In addition, questionnaire construction entails close attention to details about the wording of questions, their instructions, their response choices, and their sequence. Most importantly, the research instrument should be refined based on guidance from repeated pretests. Well-constructed questionnaires can ensure the consistent meaning of the questions across respondents and can contribute to data quality by decreasing both item and unit nonresponse.


Article type: Literature review, Survey.

Keywords: Surveys, Questionnaires, Development, Methodology.

Content Indicators: Research Implications*** Practice Implications*** Originality** Readability***


Integrated Manufacturing Systems
Volume 14 Number 3 2003 pp. 221-237
Copyright © MCB University Press ISSN 0957-6061


Introduction

Manufacturing decisions should be guided not only by technical feasibility, but also by many other factors. These include a detailed understanding of the ever-changing needs of potential customers, the variables of the marketing mix, and the factors of the external environment. Occasionally, knowledge and intuition are sufficient for developing appropriate managerial strategies. More often, however, they are not, and additional information is required. Frequently, survey research is the chosen (and most appropriate) approach to gather the additional data needed for sound manufacturing strategy decisions.

The origins of modern surveys are in early twentieth century public opinion polls and marketing research. Since that time, they have proliferated in many fields and are commonly used to obtain diverse types of information in organizational settings (Kraut and Saari, 1999). Surveys can measure managers' attitudes about certain issues, gather subjective appraisals of manufacturing processes, or obtain expectations of various outcomes. Also, surveys are used "as strategic tools to drive and measure organizational change" (Kraut, 1996, p. 11) and in studies of ergonomic improvements in manufacturing and production (Lockhart and Russo, 1994).

Over the years, Integrated Manufacturing Systems (IMS) has published numerous articles based on data collected via surveys. Table I summarizes the characteristics of some of these studies. In addition, IMS published case studies that relied mostly on unstructured or semi-structured questionnaires. Although not included in the table, such cases can also benefit from the guidelines of questionnaire construction.

Surveys rely on self-reported answers obtained from a sample of respondents in order to generalize to their parent population. The answers are obtained via self- or interviewer-administered questionnaires and typically are structured and undisguised. That is, various aspects of the questionnaire are clearly specified and each question's purpose is apparent to the respondent.

Surveys may sample citizens, consumers, employees, managers, organizations, or other entities. The studies in Table I employed a wide variety of samples. For example, their samples consisted of groups such as practitioner members of the Institute of Operations Management (Burcher and Lee, 2000), managers of automotive assemblers and suppliers (Burgess et al., 1997), and furniture manufacturers with more than 30 employees (Huang and Mak, 1998).

Although surveys are susceptible to various errors, one of the most critical and preventable threats to their validity comes from the design of their questions (Fowler, 1995). Question clarity and consistent meaning to all respondents can be instrumental in reducing bias. Also, well-constructed questionnaires may contribute to reductions in item and unit nonresponse. Nevertheless, high nonresponse does not necessarily equate to high nonresponse error; it indicates the potential for such error because respondents may differ from nonrespondents.

Many articles in manufacturing technology have been plagued with high unit nonresponse (see Table I). For instance, the authors of two of these articles (Newman and Sridharan, 1995; Riedel and Pawar, 1997) were disappointed with their return rates (12.3 percent and 13 percent respectively), but suggested that these rates were comparable to those frequently obtained in other industrial surveys. The limitations imposed by low response rates were clearly noted by Gascoigne et al. (1997). They commented that their study may provide some useful insights, but it "cannot be claimed to represent the UK cell control marketplace fully due to the limited response" (Gascoigne et al., 1997, p. 181). With return rates as low as those obtained in some of the studies in Table I, the findings can be outright misleading if respondents differ from nonrespondents on some critical dimensions.

Some of the articles of Table I reported both their return rates and their usable questionnaire rates. In these studies, the usable rates were approximately two percentage points below the return rates. For instance, Burcher (1992) obtained a return rate of 14.8 percent and a usable rate of 12.9 percent; Huang and Mak (1998) had return and usable rates of approximately 15 percent and 12.5 percent respectively; and Newman and Sridharan (1995) reported a return rate of 12.3 percent and a usable rate of 11 percent. It is unclear how liberal or conservative the various authors' criteria for determining usability were. Also, details of the mailing (e.g. number of questionnaires returned as undeliverable) were not usually given in the articles of Table I. All researchers conducting survey-based studies should consult the guidelines of the American Association for Public Opinion Research (AAPOR, 2000) on computing and reporting detailed outcome rates.
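
To make the outcome-rate bookkeeping concrete, the sketch below illustrates a calculation in the spirit of AAPOR's "Response Rate 1"; the authoritative definitions are in AAPOR (2000), and the category names and example counts here are purely illustrative assumptions:

```python
# A minimal sketch in the spirit of AAPOR's Response Rate 1 (RR1):
# completes divided by all cases that are, or may be, eligible.
# Category names and counts are illustrative, not AAPOR's official set.

def response_rate_1(completes, partials, refusals, non_contacts,
                    other_eligible, unknown_eligibility):
    denominator = (completes + partials + refusals + non_contacts
                   + other_eligible + unknown_eligibility)
    return completes / denominator

# Example: 123 completes from 1,000 mailed questionnaires, of which 40
# came back undeliverable (treated cautiously as unknown eligibility).
rate = response_rate_1(completes=123, partials=0, refusals=12,
                       non_contacts=825, other_eligible=0,
                       unknown_eligibility=40)
print(f"RR1 = {rate:.1%}")  # RR1 = 12.3%
```

Reporting the full set of case dispositions alongside such a rate lets readers judge how undeliverable and ineligible cases were treated.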

Citations of survey methodology sources were largely absent in the articles of Table I. An exception is Orr's (1999) article that cites textbooks in marketing research (i.e. Zikmund) and sociological methods (i.e. Denzin). Empirical studies in manufacturing strategy are relatively recent and have been traced to the mid 1970s (Swink and Way, 1995). It is not surprising that many researchers in this and related fields "do not have a strong foundation in gathering and using empirical data" (Flynn et al., 1990, p. 250). However, such studies have gained momentum and scholars in the area "are learning to use the empirical methods that have been developed in other related academic disciplines" (Minor et al., 1994, p. 22).

An authoritative review of empirical studies in operations management pointed out that many of the "questionnaires appear to have been thrown together hastily, with little thought of reliability, validity or generalizability" (Flynn et al., 1990, p. 259). Other authors (Fowler, 1995; Lockhart and Russo, 1994) noted that it is not uncommon for researchers - usually in disciplines outside of survey/marketing research - to hold the mistaken belief that questionnaires can be easily written by anyone knowledgeable in the topical area.

Questionnaire construction can appear deceptively simple (Birn et al., 1990; Sudman and Bradburn, 1982). This apparent simplicity creates many problems because poorly constructed instruments can lead to erroneous conclusions. In fact, Schwarz (1996, p. 72) noted that:

Survey methodology has long been characterized by rigorous theories of sampling on the one hand, and the so-called "art of asking questions" on the other.

Thus, the most critical element of surveys may end up being their weakest link (Bradburn and Sudman, 1988).

The present article draws from the various academic disciplines that contributed to the development of survey research and discusses the important issues that should be considered in the development of high quality questionnaires. Typically, manufacturing-management researchers are not specialists in survey measurement. Therefore, it is useful to summarize for them some of the important guidelines and provide references that they ought to consult during the process of constructing their questionnaires.

Guidelines should not be considered in a vacuum, but in the context of the unique circumstances surrounding a particular survey. Indeed, experts (Labaw, 1980; Oppenheim, 1966; Payne, 1951; Peterson, 2000; Sheatsley, 1983; Sudman and Blair, 1998) have warned explicitly against blind adherence to rules of questionnaire construction. Payne (1951, pp. 98-9), in the classic book The Art of Asking Questions, emphasized that:

An open mind is especially needed in research, and flat rules or arbitrary judgments might do more harm than good.

The guidelines presented here should be viewed with this important admonition in mind.

The construction of a questionnaire consists of various interrelated steps that start with the research objectives and end with the final version of the instrument. This progression is summarized in Figure 1 and discussed in the remainder of this article.

Administration method

There is no method that is superior to the others in all circumstances (Bradburn, 1983). Several factors should be considered in the selection of the most appropriate survey administration mode. Among them are the objectives of the study, the target group and its geographic distribution, the types of questions, and the available resources. In multi-national studies, the selection is complicated further by the fact that an appropriate method in one country may be inappropriate in another. Cost is an important determinant in choosing the survey administration mode, but it should never overshadow data quality considerations.

Survey administration methods can be classified into interviews and self-administered questionnaires (SAQs). Interviews can be conducted in person or via telephone. SAQs include postal and all other forms of questionnaires designed to be respondent administered (e.g. fax, e-mail, Web surveys). In this article, the discussion of SAQs is almost exclusively about mail questionnaires because they have been used by most of the studies published in IMS (see Table I).

Personal interviews rely on verbal reports and thus are less burdensome to respondents than SAQs that require written answers. The presence of the interviewer provides flexibility and the opportunity to observe respondents. However, the results can be influenced by biases resulting from the interviewer-respondent interaction (Bradburn et al., 1979; Kwong See and Ryan, 1999). Fortunately, response effects are relatively small when the interviewers are well trained and well supervised (Bradburn, 1983; DeLamater, 1982). Such effects are more likely "when the respondent has not arrived at a firm position on the issue and when the subject of the study is highly related to the respondent or interviewer characteristics" (Sudman and Bradburn, 1974, p. 137). Personal interviews are appropriate for surveys where the sequence of the questions is consequential and where there is a need to use visual materials. Usually, personal interviews achieve higher response rates than telephone or mail surveys and can be used for lengthier and more complex questionnaires. Unfortunately, personal interviews are costly, especially when their samples are geographically widely dispersed. Generally, personal interviews are perceived as being less anonymous than mail questionnaires and telephone surveys. In some cases, personal interviews can access respondents that are unreachable by other methods. In other cases, some respondents may be difficult to reach because of various practical impediments (e.g. organizational gatekeepers may prevent direct contact with the manager in charge of strategic decisions).

Personal interviews have been used in two of the IMS surveys shown in Table I. These were Orr's (1996) "first project" and the Burgess et al. (1997) study. Personal interviews have been used to collect data in other types of studies published in IMS (e.g. Driva et al., 2001; Kidd, 1995; Woodcock and Chen, 2000).

Telephone interviews are appropriate for studies where question order is consequential because the interviewer (as in personal interviews) controls the flow of the questionnaire. Most importantly, telephone interviews - conducted from centralized calling facilities - allow for better supervision of the interviewers and thus can achieve higher data quality (Lavrakas, 1993). The data can be collected in a relatively short time and telephone interviews are ideally suited for obtaining information on ongoing or recently completed events. Generally, telephone interviews are perceived as more anonymous than personal interviews but less so than postal questionnaires. Some practitioners espouse the view that telephone interviews should not exceed 10-15 minutes. Although certain general population telephone surveys can be substantially longer, the number of partially completed interviews increases after 45 minutes (Lake and Harper, 1987). Rea and Parker (1997, p. 7) have suggested that:

The cost of implementing a telephone survey is considerably less than that of an in-person survey and, under certain circumstances it can be less than that of a mail-out survey.

With the current telephone technology that is in widespread use, questions that require visual stimuli (e.g. package design, advertisements) or facilitating devices (e.g. showcards) are better suited for other methods.

Telephone interviews were not used in any of the studies in Table I. However, two of them used telephone calls to encourage survey participation. That is, calls were made to notify (Gieskes and ten Broeke, 2000) and to remind (Gieskes and ten Broeke, 2000; Orr, 1996) respondents.

For mail surveys - as for other forms of SAQs - ease of administration and professional appearance of the questionnaire are important considerations. Because of their self-administered nature, there is an increased need for exceedingly clear and unambiguous overall and question-specific instructions. Mail questionnaires are usually the least costly and most standardized alternative. Indeed, their ability to reach geographically dispersed groups inexpensively was probably a major determinant of the fact that almost all the studies of Table I were postal surveys. Mail questionnaires can be completed at the respondent's convenience and are generally perceived as more anonymous than the other methods. Complex questions can be facilitated with graphical presentations, but SAQs may not be appropriate for some groups because they require a certain level of literacy. However, this is not a common concern for most IMS survey-based studies because their samples usually consist of literate individuals. Mail questionnaires are inappropriate for studies of rapidly changing opinions. In typical manufacturing technology applications, the magnitude of this concern is small compared to surveys of political and social attitudes. Generally, mail questionnaires have lower response rates than personal and telephone surveys. However, with meticulous procedures, postal surveys can achieve response rates comparable to those of the other administration methods (Dillman, 1978, 2000; Mangione, 1995).

Mail and other SAQs are more susceptible to question context effects (Schwarz et al., 1991). Also, in such instruments it is impossible to control the order in which the questions will be answered, and there are no assurances that the intended person completed the questionnaire. The latter concern can be especially problematic for manufacturing studies that do not address the questionnaire to a specified individual or that send the questionnaire to one person in an organization with instructions to pass it to another.

The questionnaire should be constructed to fit the method of survey administration. A question format that is appropriate for one method may not be for another. Indeed, the necessity to include questions of a particular format may lead to re-evaluation and change of the selected mode of survey administration (see Figure 1). A questionnaire developed for a particular mode will require some degree of change to make it suitable for another. Although not discussed in Orr's (1996) article, various adaptations to the questionnaire were probably necessary as his "first project" was a personal interview, whereas the "second project" was a replication using mail (see Table I).

The potential of some new technologies for gathering questionnaire data (e.g. Web surveys) will probably change the relative use of the different modes for administering surveys. In organizational surveys, the sampling concerns - that plague Web surveys of the general population - can be resolved in some cases. However, the issue of confidentiality of sensitive company information has different dimensions than those encountered in general population surveys. Notwithstanding some important differences between traditional and Web questionnaires (e.g. novel ways of presenting visual and auditory stimuli), the basic principles of good item construction are the same. In fact, new technologies increase the importance of understanding the implications of the differences between questionnaire administration methods (Tourangeau et al., 2000).

Questionnaire construction

Questionnaires should be designed to gather responses in an unbiased manner. The obtained answers should not reflect differences due to the instrument but should indicate differences between respondents (Fowler, 2002). This article discusses the concerns related to questionnaire construction under four sub-categories: question wording, response choices, question sequence, and other considerations (see Figure 1).

Question wording

The wording of questions (including their stem, response choices, and instructions) can have pronounced effects on the results. Even a small difference in wording may produce substantial response effects. Thus, it is appropriate to characterize good questionnaire construction as "a highly developed art form within the practice of scientific inquiry" (Rea and Parker, 1997, p. 27).

Questions should ask for information that respondents can access readily (Tourangeau, 2000). Asking for detailed and/or not easily accessible information may antagonize some persons. Also, questions that ask "people to predict their response to a future or hypothetical situation should be done with considerable caution - particularly when respondents are likely to have limited experience on which to base their answers" (Fowler, 1995, p. 80). Questions that request confidential information (e.g. certain company records) can lead to high item nonresponse. Such items may be instrumental in the decision of some persons to forego participation in the survey, causing higher unit nonresponse as well. Therefore, researchers should limit their questions to those that the selected persons are able and willing to answer.

The intrusiveness of certain questions is probably one of the reasons for the very low return rates obtained by most of the surveys in Table I. Although not usually reported in the articles of Table I, item nonresponse was probably high as well. A hint about this concern is given by Newman and Sridharan (1995, p. 38) who commented that:

Not all questions were answered by each firm, the chief reasons being that some of the questions were not applicable to every firm that responded and in some cases the required data were not available or proprietary.

Also, they suggested that this tendency is prevalent in manufacturing surveys.

The respondent's understanding of a question should correspond to the meaning intended by the researcher (Schwarz, 1999) bearing in mind that "question comprehension involves extensive inferences about the speaker's intentions to determine the pragmatic meaning of the question" (Schwarz et al., 1998a, p. 152). Therefore, a question should be as clear and precise as possible so that all respondents interpret it as intended and all understand the same thing. Belson (1981) provided detailed examples of possible interpretations and misinterpretations of questions. His book and examples can be very instructive to all researchers employing surveys.

Questions should use a simple structure with familiar words and avoid any slang or jargon. Also, items should never resort to double negatives. Questions should be as concise as possible while conveying the intended meaning, and respondents should be able to answer them with relatively minimal effort. As a general rule, questions should be easy to understand by persons with little formal education. The emphasis should be "on communication rather than grammar and style" (Wolfe, 1990, p. 95). Furthermore, research objectives should be examined vis-à-vis the burden imposed on respondents and their ability/willingness to make the differentiations requested by the questions. Unfortunately, it is not uncommon for researchers to be deeply engrossed in the minutiae of their topic, leading them to ask "two or more questions which sound alike to the respondents" (Payne, 1951, p. 125).

Items should be asked in frames of reference that are meaningful to the respondents (e.g. appropriate measurement units and typical time frames for a particular activity). For example, asking respondents to report by calendar year may lead to difficulties and confusion if it differs from their fiscal year. All assumptions should be stated explicitly and the phrasing should be in specific rather than in indefinite terms. Most importantly, the wording must be neutral. That is, researchers should avoid leading questions (i.e. suggesting a response to the respondent) or loaded questions (i.e. including emotionally charged descriptions). In addition, researchers should be constantly vigilant of the potential effects of social desirability on the respondents' answers (DeMaio, 1984).

Each question should cover a single issue only. Items that inquire about two or more issues must be divided into separate questions. In the second figure of their IMS article, Gardiner and Gregory (1996) presented two questions of their audit questionnaire. Both of these should have been broken into more than one question as respondents may be agreeing with one of the issues of the stem but disagreeing with the other(s). Some of the questions of Gieskes and ten Broeke (2000) suffer from the same problem (e.g. statements 5 and 15 in Table AI of their article).

For studies repeated at different times, researchers should reevaluate each and every question in light of topical and linguistic changes. Rapid changes in some topics can lead to question obsolescence. Researchers of manufacturing technology must be especially vigilant given the fast pace of change taking place in that field.

Wording problems increase exponentially in multi-cultural and/or multi-country questionnaires (Behling and Law, 2000; Johnson et al., 1997). These problems can be especially noticeable when the languages and the underlying cultures differ substantially. Questions developed within a particular cultural context may be meaningless or offensive in another. Indeed, creating directly comparable questionnaires in different languages is an extremely difficult task. Researchers should strive to construct research instruments that are lexically equivalent, conceptually equivalent, equivalent in measurement, and equivalent in response (Bulmer and Warwick, 1983; Warwick and Lininger, 1975). Certain basic universals may exist in some manufacturing-related questions. However, this does not necessarily eliminate biases when such questions are answered by native speakers of the language vis-à-vis those who are not. The presumption that meaning will be equivalent across all respondents because they "know" - at various levels of proficiency - the language is unwarranted. Indeed, the meaning of some questions may differ among speakers of variations of the same language.

Response choices

Based on their response format, questions can be classified as being either open-ended or closed-ended. In open-ended items, the respondents phrase their own replies rather than trying to fit their answers into the provided choices. In closed-ended items, the respondent selects one (or more, if applicable) answer from the given alternatives. Hence, the former are referred to as "free response" and the latter as "fixed response" or "fixed alternative" items. A detailed discussion of open-ended vis-à-vis closed-ended questions can be found in Foddy (1993), who devoted an entire chapter of his book to this issue.

In open-ended questions, the answer is given from the respondent's frame of reference rather than that of the writer of the questionnaire. Free response items tend to be burdensome to respondents, especially when the questionnaire is self-administered. Also, free response questions are more likely to result in vague and useless responses (Fowler, 2002; Hague, 1987). Consequently, open-ended items should be used sparingly because they require substantial respondent effort. However, they can be useful in exploratory research and in the early stages of questionnaire development. According to Peterson (2000), there are five situations in which open-ended questions are necessary.

Fixed response items oversimplify the complexity of some opinions but are generally easier for respondents to answer and have fewer missing data than open-ended questions. Generally, closed-ended questions are more difficult to construct but simpler to code and analyze. In most instances, closed-ended items are the most appropriate response format and researchers should expend the necessary effort to create them.

Special attention should be paid to the choices of closed-ended questions, bearing in mind that "identically worded questions may acquire different meanings, depending on the response alternatives by which they are accompanied" (Schwarz, 1996, p. 75). The choices may clarify the underlying meaning because they provide "guidelines" regarding the expected answers (Schwarz, 1999) and are more likely "to communicate the same frame of reference to all respondents" (Converse and Presser, 1986, p. 33). However, they reduce the likelihood of obtaining answers that fall outside the specified alternatives (Schwarz, 1996).

Closed-ended questions should provide response alternatives that are exhaustive and mutually exclusive. That is, they should cover all possible response options and these should not overlap. The choices should be comprehensive but researchers should not overwhelm the respondents with too many alternatives. Whenever pertinent, it is advisable to construct the questions and their categories so that they can be compared readily with secondary data.

In certain instances, the substantive choices of a closed-ended question consist of value ranges. Response choices that specify ranges rather than specific values can be very useful for sensitive items. In such questions, it is more likely that respondents will be willing to give a range rather than an exact amount (Dillman, 1978).
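
A quick mechanical check of the "exhaustive and mutually exclusive" requirement is possible when the choices are integer ranges. The helper below is a hypothetical sketch; the employee-count bands are invented for illustration:

```python
# Hypothetical helper: verify that integer range choices cover the span
# [lo, hi] with no gaps and no overlaps between adjacent bands.
def bands_are_valid(bands, lo, hi):
    bands = sorted(bands)
    if bands[0][0] != lo or bands[-1][1] != hi:
        return False  # the bands do not cover the ends of the span
    for (_, prev_hi), (next_lo, _) in zip(bands, bands[1:]):
        if next_lo != prev_hi + 1:
            return False  # a gap or an overlap between adjacent bands
    return True

# e.g. choices "1-49", "50-249", "250-999", and "1,000-10,000" employees
print(bands_are_valid([(1, 49), (50, 249), (250, 999), (1000, 10000)],
                      lo=1, hi=10000))  # True
```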

The number of response alternatives is determined by various factors, including the questionnaire administration method and the type of question. For questions with categorical choices, Rea and Parker (1997) suggested using fewer than ten answers (up to a maximum of 15) in SAQs; a maximum of 20 choices in personal interviews, provided they are accompanied by a showcard; and up to six options in telephone surveys. The order of the alternatives may influence the results (Schuman and Presser, 1981; Schwarz and Hippler, 1991; Sudman et al., 1996), and such effects interact with respondent characteristics such as age (Knäuper, 1999). There are several possible ways of ordering the choices of categorical variables. For example, they can be presented randomly, alphabetically, or in a sequence appropriate to fulfill particular research objectives. With computer-administered questionnaires, it is feasible to vary the order of the alternatives for different respondents or to build experiments within the survey to examine response sequence effects.
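
As one way of implementing such variation, the sketch below (illustrative only; the respondent IDs and choice labels are invented) randomizes the order of categorical choices per respondent, seeding by respondent ID so that the order each person saw can be reconstructed when analyzing sequence effects:

```python
import random

def presented_order(respondent_id, choices):
    """Return a per-respondent ordering of the response choices."""
    rng = random.Random(respondent_id)  # reproducible for this respondent
    shuffled = list(choices)            # leave the master list intact
    rng.shuffle(shuffled)
    return shuffled

master = ["MRP II", "Kanban", "OPT", "Other (please specify)"]
print(presented_order(42, master))
print(presented_order(43, master))  # another respondent, another order
```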

For choices on a continuum, there are various other factors that should be considered. These include the degree of differentiation among the alternatives that respondents can make, the use of verbal descriptions for all or only some of the categories, the length of such descriptions, and the decision to use numbers instead of verbal labels to describe the choices. Undoubtedly, the answers can be influenced by the psychological meaning of factors such as the numeric values chosen, the selected verbal labels, and the graphic layout of the scale (Schwarz et al., 1998b).

In many instances, the researcher must decide whether to include explicit nonsubstantive (i.e. "don't know," "no opinion") choices. Obviously, if respondents have an opinion it is important to record it. However, questions that provide nonsubstantive options may discourage respondents from reporting their meaningful opinions (Krosnick, 1999; Weisberg et al., 1996). As a guide, it has been suggested that:

Questions about which nearly everyone has enough information to form some opinion ... should be stated without a "no opinion" option. Questions of a specific, narrow, or detailed nature ... should be prefaced by screening questions to see whether the respondent has any information on the subject. (Scheaffer et al., 1990, p. 45).

Question sequence

The context within which a question is presented can influence the respondents' answers (Bradburn, 1983; Schuman and Presser, 1981; Wänke and Schwarz, 1997; Schwarz, 1996; Strack, 1992; Sudman and Bradburn, 1974; Sudman et al., 1996; Tourangeau and Rasinski, 1988). In fact, it is possible to find context effects in SAQs that are caused by subsequent items (Schwarz and Hippler, 1995). Also, context effects are more likely to occur when a single question is used to measure a complex issue (Schuman et al., 1981). Context effects can be especially problematic in studies investigating issues across time because such studies may incorrectly attribute their findings to changes while they may simply reflect the different contextual factors within which the questions have been presented (Schuman et al., 1981). Although question sequence effects are not ubiquitous, McFarland's (1981, p. 213) findings suggested "that the question order should be carefully planned in the construction of every survey".

Various devices can be used to scrutinize the relationships between the questions and their sequence. For instance, the use of flow charts is one such invaluable tool (Jabine, 1985). Also, it is helpful to conceptualize the questionnaire as consisting of three parts: the "introduction", the "main body", and the "characteristics of the respondent and/or organization".
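
In the same spirit as Jabine's (1985) flow charts, the questionnaire's skip logic can be written down as a directed graph and exercised before fielding. The question names and routes below are hypothetical:

```python
# Hypothetical skip logic as a directed graph; "*" marks the default route.
skip_logic = {
    "Q1_uses_mrp":       {"yes": "Q2_mrp_vendor", "no": "Q4_adoption_plans"},
    "Q2_mrp_vendor":     {"*": "Q3_years_in_use"},
    "Q3_years_in_use":   {"*": "Q5_firm_profile"},
    "Q4_adoption_plans": {"*": "Q5_firm_profile"},
    "Q5_firm_profile":   {"*": "END"},
}

def next_question(current, answer):
    """Return the next question for a given answer to the current one."""
    routes = skip_logic[current]
    return routes.get(answer, routes.get("*"))

assert next_question("Q1_uses_mrp", "no") == "Q4_adoption_plans"
```

Walking every path through such a graph helps confirm that each respondent reaches an end point and that no item is unreachable.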

The introduction provides a brief description of the study. In postal questionnaires, a portion of the information is included (or duplicated) in the cover letter that should always accompany such surveys. The description of the study should state clearly who the researcher represents, how/why the respondent was selected, and the importance of the respondent's answers to the research. For opinion questions, it should be stressed to respondents that there are no right or wrong answers. In cases where the questionnaire is marked with identifying information (e.g. identification number), the respondents must be informed of its presence and purpose. Frequently, manufacturing studies seek sensitive information and this may lead to high item and unit nonresponse. The latter can occur if respondents decide to forego answering the questionnaire because they find some portion(s) objectionable. Thus, it is critical for the researcher to explain to respondents how the provided information and their privacy will be protected.

In postal surveys, an explicit deadline for the return of the questionnaire must be given. Also, respondents should be provided with a stamped addressed envelope along with appropriate instructions for returning the questionnaire and for making inquiries. Some of these instructions may be repeated at the end of the questionnaire.

Various screening questions may be included at the end of the introduction. Screening questions can be used to select respondents that meet certain criteria and to ensure that those selected meet the necessary requirements assumed by the researcher. That is, respondents that are not part of the sampling frame should be excluded. For instance, Gascoigne et al. (1997) were able to identify (though it is not clear from the narrative whether this was done via screener questions) and exclude from their analyses respondents that were not part of their intended sample (see Table I).

The main body of the questionnaire contains the topical questions. Proper sequencing of the items facilitates questionnaire administration and minimizes confusion. The questions should be ordered logically and in a manner non-threatening to respondents. Usually, similar questions should be grouped together and the within-topic order should be from the general to the specific. Other sequencing options can be used to satisfy particular research objectives (Labaw, 1980).

Questions pertaining to respondent and/or organizational characteristics usually comprise the last section of the questionnaire because they tend to ask for the most sensitive information. Within this section, items should be organized topically and from the least to the most sensitive. In a postal survey in an organizational setting, placing sensitive questions at the end of the questionnaire has been demonstrated to result in higher return rates (Robertson and Sundstrom, 1990).

The sequence of questions is not always indicated in the write-up of the studies in Table I. Some of the studies (Burcher, 1992; Gascoigne et al., 1997; Gilgeous, 1998) mentioned that the organizational/respondent characteristics were placed at the beginning of the questionnaire. Depending on the sensitivity of these questions vis-à-vis that of the others, this placement may have contributed to the high unit nonresponse.

Other considerations

The questionnaire must be tailored to its audience. To do this, researchers must be well versed in the survey topic and must recognize the limits of their respondents' knowledge. The questions should be applicable to all respondents, and pertinent branching must be included for the cases where particular questions apply only to some members of the sample (Warwick and Lininger, 1975). Questionnaires should have appropriate branching and a means of distinguishing whether a respondent left an item unanswered because it was inapplicable or because of a failure to reply. That is, the type of item nonresponse is important information and must be distinguished in the questionnaire and reported in the findings.
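
A minimal coding sketch of this distinction (the codes and function are assumptions for illustration, not from the article) might look as follows:

```python
# Assumed codes: separate items a respondent was routed past from items
# that applied but were left blank, so each kind of item nonresponse can
# be counted and reported separately.
NOT_APPLICABLE = "NA"   # legitimately skipped via branching
NO_ANSWER = "REF"       # applicable, but left unanswered

def code_item(raw_value, item_was_applicable):
    if not item_was_applicable:
        return NOT_APPLICABLE
    if raw_value is None or str(raw_value).strip() == "":
        return NO_ANSWER
    return raw_value

print(code_item("", True))     # REF -> true item nonresponse
print(code_item(None, False))  # NA  -> outside the respondent's path
```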

The samples of some of the IMS studies shown in Table I consisted of heterogeneous respondents. For example, the selected companies had between 50 and 3,000 employees in the Gilgeous (1998) study; the number ranged from 10 to 10,000 employees in Orr's (1996) "second project"; and Tummala et al. (2000, p. 372) noted that in their study:

The positions held by the people who completed the questionnaire varied from top management to supervisors and engineers.

In each of these studies, at least some of the questions were probably not pertinent to segments of respondents. Indeed, Bennett et al. (1997) attributed their nonresponse partly to the inapplicability of the questionnaire (in totality or in parts) to some of their potential respondents. Researchers should be in a position to identify those who did not participate in the survey because they were ineligible. Indeed, the issue of eligibility is a major consideration in the computation of various survey outcome measures (AAPOR, 2000).

The questionnaire should have a professional appearance and its formatting should make it easy for the respondent to complete or for the interviewer to administer. Instructions should be clear in terms of content and should be distinguishable stylistically from questions. Branching instructions must be user-friendly and as unambiguous as possible. The questionnaire should be constructed to facilitate coding and data entry, bearing in mind the available resources and the intended analyses. Detailed suggestions for questionnaire design and layout can be found in various sources (Dillman, 2000; Salant and Dillman, 1994; Sudman and Blair, 1998; Sudman and Bradburn, 1982).

For postal questionnaires, the two preeminent factors that influence response rates are incentives and number of contacts (Harvey, 1987). In addition, various other factors (e.g. type of postage, questionnaire appearance) should be considered. Increases in response rates can be achieved by taking into account potentially influencing factors in a comprehensively designed system (Dillman, 1978, 1991, 2000).

Over the years, various survey incentives (monetary and non-monetary) have been examined. The preponderance of the research findings suggests that token monetary incentives included (rather than promised) with the questionnaire can increase response rates (Brennan et al., 1991; Dommeyer, 1988; Furse and Stewart, 1982; Gajraj et al., 1990; Gendall et al., 1998; Hopkins and Gullickson, 1992; Hopkins and Podolak, 1983; James and Bolstein, 1992; Paolillo and Lorenzi, 1984).

Discussions of incentives and their feasibility were largely absent in the IMS studies of Table I. With the exception of the Sohal et al. (1996) study, none reported the use of incentives, despite the fact that in many cases their questionnaires appear to have been very demanding. Some form of incentives should have been used in appreciation of the respondents' time and effort. Token monetary amounts or small gifts are the typical incentives used to increase participation in surveys. Incentives can take other forms such as a donation to a charity of the respondent's choice or a promise of a summary of the study's results. Indeed, the Sohal et al. (1996) study used the latter.

Response rates can be improved by increasing the number of contacts with respondents (Martin et al., 1989; Peterson et al., 1989; Ruggles et al., 1984; Schlegelmilch and Diamantopoulos, 1991; Sutton and Zeits, 1992; Taylor and Lynn, 1998). This increase can be accomplished with a combination of a preliminary notification and a series of follow-up contacts. Schlegelmilch and Diamantopoulos (1991) expressed skepticism about the efficacy of the preliminary notification in industrial surveys. However, Yammarino et al. (1991, p. 628) - in a meta-analysis of 115 articles that used mail surveys - found that:

Follow-ups/repeated contacts seemed to have a greater effect on institutional [which comprised educational, industrial, healthcare, governmental, and "other institutional" samples] than consumer groups' response rates.

Respondent burden is usually high in many manufacturing-management studies. However, not enough attempts (in the form of a preliminary notification and follow-ups) were made to increase response rates in most of the studies of Table I. Indeed, some authors appear to espouse the mistaken belief that multiple contacts should be avoided. This is exemplified by the comment that:

it is considered satisfactory rather than disappointing that no efforts (such as telephoning or preliminary questionnaires) were made to boost the response to this questionnaire survey (Huang and Mak, 1998, p. 384).

Multiple contact attempts have been used in some of the IMS surveys shown in Table I. For example, Gieskes and ten Broeke (2000) used a preliminary phone notification and a telephone reminder in their postal survey; in Orr's (1996) "second project" the respondents were mailed a follow-up letter and called in order to increase the response rate; and Gupta et al. (1998) used two mailings of the questionnaire.

Pressley and Dunn (1985) stressed the importance of empirically investigating the impact of various design features on response rates in organizational populations, rather than relying solely on findings from general population studies. Indeed, there are some important characteristics of organizational surveys that merit special attention. According to Paxson et al. (1995), these center on five factors:

1 the definition of what constitutes a particular organization;
2 the importance of selecting the appropriate person for the survey;
3 the presence of various gatekeepers that may control the access to the respondent;
4 the possible existence of organizational policies regarding participation in surveys; and
5 the generally higher level of effort needed as organizational surveys frequently require information that is not readily accessible to respondents.

The presence/absence of such characteristics must be addressed explicitly in all survey-based studies published in IMS.

Questionnaire pretesting

After the questionnaire is constructed, it must be pretested (piloted). At times, the terms "pretesting" and "piloting" are differentiated (Babbie, 1990), but here they will be used interchangeably. Pretests assist the researcher in refining the instrument and the fielding procedures. Pretesting should be viewed as an iterative process (see Figure 1) aimed at "perfecting" the questionnaire for its intended purpose. Based on the findings of this process, the questionnaire may have to be restructured and various items may have to be rewritten, bearing in mind their inter-relationships within the new sequence.

Wolfe (1990, p. 102) stressed the importance of pretesting by pointing out:

that more disasters in market research happen through bad questionnaires than anything else, and most of these failures can be traced to inadequate piloting.

Other authors criticized the undesirable, but all too frequent, state of affairs of conducting pretests on small convenience samples (Bolton, 1991) and in a nonsystematic and hurried fashion (Hunt et al., 1982). The practice of pretesting questionnaires on university students only (while they are intended for a different population) has also been strongly criticized (DeLamater, 1982; Oppenheim, 1966).

Questionnaire pretesting must be an indispensable phase of all studies, and it is puzzling why it is frequently "handled so casually, given its importance" (Sykes and Morton-Williams, 1987, p. 192). Not all pretests are created equal and some can be more effective in detecting particular types of problems than others (Diamantopoulos et al., 1994; Hunt et al., 1982; Presser and Blair, 1994). Although pretests may uncover some difficulties, their error detection rate tends to be relatively low (Hunt et al., 1982). Thus, it is useful to conduct pretests with specialists in questionnaire construction in addition to those that must be done with potential respondents.

Early pretests should be done in person (Boyd et al., 1989; Churchill and Iacobucci, 2002; Kinnear and Taylor, 1996; Weiers, 1988). However, the final pretest must be done with potential respondents and with the intended questionnaire administration method. Whether pretests should use typical respondents or extreme cases is an issue that needs to be tested empirically (Reynolds et al., 1993). Nevertheless, it has been proposed (Babbie, 1990) that pretests should aim to cover the whole range of intended respondents, including those that may be considered atypical. For multi-language studies it is imperative to pretest the instrument in the intended languages and in the particular fieldwork environments. Such pretests may uncover unique difficulties not ascertainable in the original language of the questionnaire (McKay et al., 1996).

Some authors (Aaker et al., 2001; Boyd et al., 1989; McDaniel and Gates, 2001) suggested that only expert interviewers should conduct the pretests. Others (Hunt et al., 1982; Malhotra, 1999; Nelson, 1985) stressed the need to run the gamut of interviewer experience. Be that as it may, it is advisable that researchers themselves conduct some pretest interviews (Malhotra, 1999; Sheatsley, 1983; Tull and Hawkins, 1993).

The use of pretests is noted explicitly in several of the articles shown in Table I (Batley, 1993; Bennett et al., 1997; Gieskes and ten Broeke, 2000; Gupta et al., 1998; Newman and Sridharan, 1995; Orr, 1999; Sohal et al., 1996; Tummala et al., 2000). The most extensive appears to be the pretest conducted by Newman and Sridharan (1995). The pretests of other studies in Table I varied in their extensiveness and purpose. Presumably, all studies found their pretests useful. For example, Batley's (1993) pretest - which consisted of personal interviews with potential respondents - assisted in the construction of the postal questionnaire and refinement of the fielding procedures. The pretest led him to the decision to exclude very small firms for which the instrument would have been inappropriate because the management requirements pertaining to quality in such companies "were mostly unwritten and often difficult to identify" (Batley, 1993, p. 5).

The recent attention (Forsyth and Lessler, 1991; Hippler et al., 1987; Jobe and Mingay, 1991; Schwarz and Sudman, 1996; Schwarz et al., 1999; Sudman et al., 1996; Tanur, 1992; Tourangeau et al., 2000) to the cognitive aspects of survey methodology (CASM) is a welcome merging of academic disciplines. Application of CASM can be traced to an important seminar on the topic under the auspices of the Committee on National Statistics (Jabine et al., 1984). Cognitive approaches are grounded in psychology and aim for an in-depth understanding of the processes underlying responses in surveys. Approaches based on CASM can "be effective for identifying cognitive sources of response error that are not uncovered by the usual question-and-answer exchange between the interviewer and the respondent that occurs in a typical field test of a questionnaire" (Willis et al., 1991, p. 263). Thus, cognitive methods have been touted as invaluable additions to existing pretesting tools (Bolton, 1991; DeMaio and Rothgeb, 1996; Dippo et al., 1995; Schwarz, 1997; Schwarz et al., 1998a; Turner et al., 1992; Willis et al., 1991).

Use of CASM can assist researchers in their quest for "solutions to practical questions in questionnaire design, and `rules' of question formulation" (O'Muircheartaigh, 1997, p. 18). Undoubtedly, such pretesting can be costly because of the time and labor involved (Bolton, 1991). However, the increased effort and costs should not dissuade researchers because pretests can be extremely beneficial in the research process (Fowler, 1995).

Conclusion

Gathering data via surveys is ubiquitous in many fields and used frequently in empirical studies of manufacturing strategy. A fundamental part of this research process is the development of a quality questionnaire. Contrary to what some may believe, just writing a set of questions does not result in an appropriate research instrument because "a questionnaire is not simply a series of questions, nor is a question merely a series of words" (Labaw, 1980, p. 1).

This article addressed important issues surrounding the collection of data via questionnaires. It provided general suggestions for the construction of high quality questionnaires and a comprehensive bibliography that should be consulted by those wishing to explore the matter in greater detail. In addition, the article examined and critiqued some of the survey-based studies published in IMS.

Researchers of manufacturing strategy will create increasingly refined questionnaires as they gain sophistication in survey research methodology. Well-constructed questionnaires will contribute to a deeper understanding of the intricacies of manufacturing strategy. Also, questionnaires that are not burdensome to respondents will probably have lower item and unit nonresponse.

Manufacturing-management researchers must address explicitly the implications of the high nonresponse rates of their studies and take appropriate actions to ensure that those responding represent the pertinent population accurately. At the very least, some of the nonrespondents must be followed up and their responses compared with those of the respondents. Also, researchers should include, in their articles, details of their questionnaires (such as the questions' exact phrasing) and make them readily available. Furthermore, published expositions should report their findings in sufficient detail, include thorough discussions of the limitations of their surveys, and offer recommendations for improvements to the questionnaire (e.g. items that produced high item nonresponse, conflicting answers) and to the survey process.
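
As a sketch of such a comparison (the counts are invented and SciPy is assumed to be available), a contingency-table test can indicate whether followed-up nonrespondents differ from respondents on a key characteristic:

```python
from scipy.stats import chi2_contingency

# Invented counts: firms by size band (small/medium/large) among the
# original respondents and among nonrespondents reached at follow-up.
respondents    = [34, 51, 38]
nonrespondents = [21, 14, 5]

chi2, p_value, dof, _ = chi2_contingency([respondents, nonrespondents])
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
# A small p-value would suggest the groups differ on this characteristic,
# flagging potential nonresponse error in the survey estimates.
```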

As Payne (1951) envisioned, constructing a good questionnaire is and will remain an "art" that considers a multitude of factors simultaneously, following various flexible guidelines. This process requires thorough knowledge of the topical area, great attention to detail (as seemingly slight changes in wording or structure can influence the results), and continuous revision in light of pretests. Cognitive psychology can offer valuable insights into the development of better survey instruments. However, it is only "recently that the cognitive and communicative processes underlying question answering in surveys have received sustained theoretical attention from psychologists" (Schwarz et al., 1998a, p. 150). Indeed, some researchers seem to downplay the data gathering stage and simply concentrate on elaborate statistical procedures. It can be stated unequivocally that no amount of sophistication in statistical analyses can correct fundamental shortcomings stemming from a poorly constructed questionnaire.

Table I Selected studies published in IMS that used surveys

Figure 1 Questionnaire construction process


References

Aaker, D.A., Kumar, V., Day, G.S., 2001, Marketing Research, 7th ed., Wiley, New York, NY.

American Association for Public Opinion Research (AAPOR), 2000, Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, AAPOR, Ann Arbor, MI.

Babbie, E., 1990, Survey Research Methods, 2nd ed., Wadsworth, Belmont, CA.

Batley, T.W., 1993, "Managing product quality in New Zealand firms", Integrated Manufacturing Systems, 4, 4, 4-9.

Behling, O., Law, K.S., 2000, Translating Questionnaires and Other Research Instruments: Problems and Solutions, Sage, Thousand Oaks, CA.

Belson, W.A., 1981, The Design and Understanding of Survey Questions, Gower, Aldershot.

Bennett, D., Hongyu, Z., Vaidya, K., Xing Ming, W., 1997, "Transferring manufacturing technology to China: supplier perceptions and acquirer expectations", Integrated Manufacturing Systems, 8, 5, 283-91.

Birn, R., Hague, P., Vangelder, P., 1990, "Introduction", Birn, R., Hague, P., Vangelder, P., A Handbook of Market Research Techniques, Kogan Page, London, 17-23.

Bolton, R.N., 1991, "An exploratory investigation of questionnaire pretesting with verbal protocol analysis", Advances in Consumer Research, 18, 558-65.

Boyd, H.W. Jr, Westfall, R., Stasch, S.F., 1989, Marketing Research: Text and Cases, 7th ed., Irwin, Homewood, IL.

Bradburn, N.M., 1983, "Response effects", Rossi, P.H., Wright, J.D., Anderson, A.B., Handbook of Survey Research, Academic Press, Orlando, FL, 289-328.

Bradburn, N.M., Sudman, S., 1988, Polls and Surveys: Understanding What They Tell Us, Jossey-Bass, San Francisco, CA.

Bradburn, N.M. et al., 1979, Improving Interview Method and Questionnaire Design: Response Effects to Threatening Questions in Survey Research, Jossey-Bass, San Francisco, CA.

Brennan, M., Hoek, J., Astridge, C., 1991, "The effects of monetary incentives on the response rate and cost-effectiveness of a mail survey", Journal of the Market Research Society, 33, 3, 229-41.

Bulmer, M., Warwick, D.P., 1983, "Data collection", Bulmer, M., Warwick, D.P., Social Research in Developing Countries: Surveys and Censuses in the Third World, Wiley, Chichester, 145-60.

Burcher, P., 1992, "Master production scheduling and capacity planning: the link?", Integrated Manufacturing Systems, 3, 4, 16-22.

Burcher, P.G., Lee, G.L., 2000, "Competitive strategies and AMT investment decisions", Integrated Manufacturing Systems, 11, 5, 340-7.

Burgess, T.F., Gules, H.K., Tekin, M., 1997, "Supply-chain collaboration and success in technology implementation", Integrated Manufacturing Systems, 8, 5, 323-32.

Churchill, G.A. Jr, Iacobucci, D., 2002, Marketing Research: Methodological Foundations, 8th ed., Harcourt, Fort Worth, TX.

Converse, J.M., Presser, S., 1986, Survey Questions: Handcrafting the Standardized Questionnaire, Sage, Beverly Hills, CA.

DeLamater, J., 1982, "Response-effects of question content", Dijkstra, W., van der Zouwen, J., Response Behaviour in the Survey-Interview, Academic Press, London, 13-48.

DeMaio, T.J., 1984, "Social desirability and survey measurement: a review", Turner, C.F., Martin, E., Surveying Subjective Phenomena, 2, Russell Sage, New York, NY, 257-82.

DeMaio, T.J., Rothgeb, J.M., 1996, "Cognitive interviewing techniques: in the lab and in the field", Schwarz, N., Sudman, S., Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research, Jossey-Bass, San Francisco, CA, 177-95.

Diamantopoulos, A., Reynolds, N., Schlegelmilch, B., 1994, "Pretesting in questionnaire design: the impact of respondent characteristics on error detection", Journal of the Market Research Society, 36, 4, 295-313.

Dillman, D.A., 1978, Mail and Telephone Surveys: The Total Design Method, Wiley, New York, NY.

Dillman, D.A., 1991, "The design and administration of mail surveys", Annual Review of Sociology, 17, 225-49.

Dillman, D.A., 2000, Mail and Internet Surveys: The Tailored Design Method, 2nd ed., Wiley, New York, NY.

Dippo, C.S., Chun, Y.I., Sander, J., 1995, "Designing the data collection process", Cox, B.G., Binder, D.A., Chinnappa, B.N., Christianson, A., Colledge, M.J., Kott, P.S., Business Survey Methods, Wiley, New York, NY, 283-301.

Dommeyer, C.J., 1988, "How form of the monetary incentive affects mail survey response", Journal of the Market Research Society, 30, 3, 379-85.

Driva, H., Pawar, K.S., Menon, U., 2001, "Performance evaluation of new product development from a company perspective", Integrated Manufacturing Systems, 12, 5, 368-78.

Flynn, B.B., Sakakibara, S., Schroeder, R.G., Bates, K.A., Flynn, E.J., 1990, "Empirical research methods in operations management", Journal of Operations Management, 9, 2, 250-84.

Foddy, W., 1993, Constructing Questions for Interviews and Questionnaires: Theory and Practice in Social Research, Cambridge University Press, Cambridge.

Forsyth, B.H., Lessler, J.T., 1991, "Cognitive laboratory methods: a taxonomy", Biemer, P.B., Groves, R.M., Lyberg, L.E., Mathiowetz, N.A., Sudman, S., Measurement Errors in Surveys, Wiley, New York, NY, 393-418.

Fowler, F.J. Jr, 1995, Improving Survey Questions: Design and Evaluation, Sage, Thousand Oaks, CA.

Fowler, F.J. Jr, 2002, Survey Research Methods, 3rd ed., Sage, Thousand Oaks, CA.

Furse, D.H., Stewart, D.W., 1982, "Monetary incentives versus promised contribution to charity: new evidence on mail survey response", Journal of Marketing Research, 19, 3, 375-80.

Gajraj, A.M., Faria, A.J., Dickinson, J.R., 1990, "A comparison of the effect of promised and provided lotteries, monetary and gift incentives on mail survey response rate, speed and cost", Journal of the Market Research Society, 32, 1, 141-62.

Gardiner, G.S., Gregory, M.J., 1996, "An audit-based approach to the analysis, redesign and continuing assessment of a new product introduction system", Integrated Manufacturing Systems, 7, 2, 52-9.

Gascoigne, J.D., Zhang, B.L., Weston, R.H., 1997, "A report on the UK cell control marketplace", Integrated Manufacturing Systems, 8, 3, 181-4.

Gendall, P., Hoek, J., Brennan, M., 1998, "The tea bag experiment: more evidence on incentives in mail surveys", Journal of the Market Research Society, 40, 4, 347-51.

Gieskes, J.F.B., ten Broeke, A.M., 2000, "Infrastructure under construction: continuous improvement and learning in projects", Integrated Manufacturing Systems, 11, 3, 188-98.

Gilgeous, V., 1998, "Manufacturing managers: their quality of working life", Integrated Manufacturing Systems, 9, 3, 173-81.

Gupta, A., Prinzinger, J., Messerschmidt, D.C., 1998, "Role of organizational commitment in advanced manufacturing technology and performance relationship", Integrated Manufacturing Systems, 9, 5, 272-8.

Hague, P.N., 1987, The Industrial Market Research Handbook, 2nd ed., Kogan Page, London.

Harvey, L., 1987, "Factors affecting response rates to mailed questionnaires: a comprehensive literature review", Journal of the Market Research Society, 29, 3, 341-53.

Hippler, H.J., Schwarz, N., Sudman, S., 1987, Social Information Processing and Survey Methodology, Springer-Verlag, New York, NY.

Hopkins, K.D., Gullickson, A.R., 1992, "Response rates in survey research: a meta-analysis of the effects of monetary gratuities", Journal of Experimental Education, 61, 1, 52-62.

Hopkins, K.D., Podolak, J., 1983, "Class-of-mail and the effects of monetary gratuity on the response rates of mailed questionnaires", Journal of Experimental Education, 51, 4, 169-70.

Huang, G.Q., Mak, K.L., 1998, "A survey report on design for manufacture in the UK furniture manufacturing industry", Integrated Manufacturing Systems, 9, 6, 383-7.

Hunt, S.D., Sparkman, R.D. Jr, Wilcox, J.B., 1982, "The pretest in survey research: issues and preliminary findings", Journal of Marketing Research, 19, 2, 269-73.

Jabine, T.B., 1985, "Flow charts: a tool for developing and understanding survey questionnaires", Journal of Official Statistics, 1, 2, 189-207.

Jabine, T.B., Straf, M.L., Tanur, J.M., Tourangeau, R., 1984, Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines, National Academy Press, Washington, DC.

James, J.M., Bolstein, R., 1992, "Large monetary incentives and their effects on mail survey response rates", Public Opinion Quarterly, 56, 4, 442-53.

Jobe, J.B., Mingay, D.J., 1991, "Cognition and survey measurement: history and overview", Applied Cognitive Psychology, 5, 3, 175-92.

Johnson, T., O'Rourke, D., Chavez, N., Sudman, S., Warnecke, R., Lacey, L., Horm, J., 1997, "Social cognition and responses to survey questions among culturally diverse populations", Lyberg, L., Biemer, P., Collins, M., De Leeuw, E., Dippo, C., Schwarz, N., Trewin, D., Survey Measurement and Process Quality, Wiley, New York, NY, 87-113.

Kidd, J.B., 1995, "Subcontractors, JIT and kanbans: a brief review of spring manufacturing in Japan and South Korea", Integrated Manufacturing Systems, 6, 6, 15-22.

Kinnear, T.C., Taylor, J.R., 1996, Marketing Research: An Applied Approach, 5th ed., McGraw-Hill, New York, NY.

Knäuper, B., 1999, "Age differences in question and response order effects", Schwarz, N., Park, D., Knäuper, B., Sudman, S., Cognition, Aging, and Self-Reports, Psychology Press, Philadelphia, PA, 341-63.

Kraut, A.I., 1996, "Introduction: an overview of organizational surveys", Kraut, A.I., Organizational Surveys: Tools for Assessment and Change, Jossey-Bass, San Francisco, CA, 1-14.

Kraut, A.I., Saari, L.M., 1999, "Organization surveys: coming of age for a new era", Kraut, A.I., Korman, A.K., Evolving Practices in Human Resource Management: Responses to a Changing World of Work, Jossey-Bass, San Francisco, CA, 302-27.

Krosnick, J.A., 1999, "Maximizing questionnaire quality", Robinson, J.P., Shaver, P.R., Wrightsman, L.S., Measures of Political Attitudes, 2, Academic Press, San Diego, CA, 37-57.

Kwong See, S.T., Ryan, E.B., 1999, "Intergenerational communication: the survey interview as a social exchange", Schwarz, N., Park, D., Knäuper, B., Sudman, S., Cognition, Aging, and Self-Reports, Psychology Press, Philadelphia, PA, 245-62.

Labaw, P.J., 1980, Advanced Questionnaire Design, Abt Books, Cambridge, MA.

Lake, C.C., Harper, P.C., 1987, Public Opinion Polling: A Handbook for Public Interest and Citizen Advocacy Groups, Island Press, Washington, DC.

Lavrakas, P.J., 1993, Telephone Survey Methods: Sampling, Selection, and Supervision, 2nd ed., Sage, Newbury Park, CA.

Lockhart, D.C., Russo, J.R., 1994, "Mail and telephone surveys in marketing research: a perspective from the field", Bagozzi, R.P., Principles of Marketing Research, Blackwell, Cambridge, MA, 116-61.

McDaniel, C. Jr, Gates, R., 2001, Marketing Research Essentials, 3rd ed., South-Western, Cincinnati, OH.

McFarland, S.G., 1981, "Effects of question order on survey responses", Public Opinion Quarterly, 45, 2, 208-15.

McKay, R.B., Breslow, M.J., Sangster, R.L., Gabbard, S.M., Reynolds, R.W., Nakamoto, J.M., Tarnai, J., 1996, "Translating survey questionnaires: lessons learned", Braverman, M.T., Slater, J.K., Advances in Survey Research, Jossey-Bass, San Francisco, CA, 93-104.

Malhotra, N.K., 1999, Marketing Research: An Applied Orientation, 3rd ed., Prentice-Hall, Upper Saddle River, NJ.

Mangione, T.W., 1995, Mail Surveys: Improving the Quality, Sage, Thousand Oaks, CA.

Martin, W.S., Duncan, W.J., Powers, T.L., Sawyer, J.C., 1989, "Costs and benefits of selected response inducement techniques in mail survey research", Journal of Business Research, 19, 1, 67-79.

Minor, E.D. III, Hensley, R.L., Wood, D.R. Jr, 1994, "A review of empirical manufacturing strategy studies", International Journal of Operations & Production Management, 14, 1, 5-25.

Nelson, D.D., 1985, "Informal testing as a means of questionnaire development", Journal of Official Statistics, 1, 2, 179-88.

Newman, W.R., Sridharan, V., 1995, "Linking manufacturing planning and control to the manufacturing environment", Integrated Manufacturing Systems, 6, 4, 36-42.

O'Muircheartaigh, C., 1997, "Measurement error in surveys: a historical perspective", Lyberg, L., Biemer, P., Collins, M., De Leeuw, E., Dippo, C., Schwarz, N., Trewin, D., Survey Measurement and Process Quality, Wiley, New York, NY, 1-25.

Oppenheim, A.N., 1966, Questionnaire Design and Attitude Measurement, Basic Books, New York, NY.

Orr, S.C., 1996, "A longitudinal survey of robot usage in Australia", Integrated Manufacturing Systems, 7, 5, 33-46.

Orr, S., 1999, "The role of technology in manufacturing strategy: experiences from the Australian wine industry", Integrated Manufacturing Systems, 10, 1, 45-55.

Paolillo, J.G.P., Lorenzi, P., 1984, "Monetary incentives and mail questionnaire response rates", Journal of Advertising, 13, 1, 46-8.

Paxson, M.C., Dillman, D.A., Tarnai, J., 1995, "Improving response to business mail surveys", Cox, B.G., Binder, D.A., Chinnappa, B.N., Christianson, A., Colledge, M.J., Kott, P.S., Business Survey Methods, Wiley, New York, NY, 303-16.

Payne, S.L., 1951, The Art of Asking Questions, Princeton University Press, Princeton, NJ.

Peterson, R.A., 2000, Constructing Effective Questionnaires, Sage, Thousand Oaks, CA.

Peterson, R.A., Albaum, G., Kerin, R.A., 1989, "A note on alternative contact strategies in mail surveys", Journal of the Market Research Society, 31, 3, 409-18.

Presser, S., Blair, J., 1994, "Survey pretesting: do different methods produce different results?", Sociological Methodology, 24, 73-104.

Pressley, M.M., Dunn, M.G., 1985, "A factor-interactive experimental investigation of inducing response to questionnaires mailed to commercial populations", Lusch, R.F., Ford, G.T., Frazier, G.L., Howell, R.D., Ingene, C.A., Reilly, M., Stampfl, R.W., 1985 AMA Educators' Proceedings, American Marketing Association, Chicago, IL, 356-61.

Rea, L.M., Parker, R.A., 1997, Designing and Conducting Survey Research: A Comprehensive Guide, 2nd ed., Jossey-Bass, San Francisco, CA.

Reynolds, N., Diamantopoulos, A., Schlegelmilch, B., 1993, "Pretesting in questionnaire design: a review of the literature and suggestions for further research", Journal of the Market Research Society, 35, 171-82.

Riedel, J.C.K.H., Pawar, K.S., 1997, "The consideration of production aspects during product design stages", Integrated Manufacturing Systems, 8, 4, 208-14.

Robertson, M.T., Sundstrom, E., 1990, "Questionnaire design, return rates, and response favorableness in an employee attitude questionnaire", Journal of Applied Psychology, 75, 3, 354-7.

Ruggles, D.R., Dea, J.Y., Kwok, F.K., Carman, C.A., 1984, "Evaluation of the effectiveness of data collection procedures for the 1982 census of agriculture", American Statistical Association 1984 Proceedings of the Section on Survey Research Methods, American Statistical Association, Washington, DC, 588-93.

Salant, P., Dillman, D.A., 1994, How to Conduct Your Own Survey, Wiley, New York, NY.

Scheaffer, R.L., Mendenhall, W., Ott, L., 1990, Elementary Survey Sampling, 4th ed., PWS-Kent, Boston, MA.

Schlegelmilch, B.B., Diamantopoulos, A., 1991, "Prenotification and mail survey response rates: a quantitative integration of the literature", Journal of the Market Research Society, 33, 3, 243-55.

Schuman, H., Presser, S., 1981, Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context, Academic Press, New York, NY.

Schuman, H., Presser, S., Ludwig, J., 1981, "Context effects on survey responses to questions about abortion", Public Opinion Quarterly, 45, 2, 216-23.

Schwarz, N., 1996, "Survey research: collecting data by asking questions", Semin, G.R., Fiedler, K., Applied Social Psychology, Sage, London, 65-90.

Schwarz, N., 1997, "Questionnaire design: the rocky road from concepts to answers", Lyberg, L., Biemer, P., Collins, M., De Leeuw, E., Dippo, C., Schwarz, N., Trewin, D., Survey Measurement and Process Quality, Wiley, New York, NY, 29-45.

Schwarz, N., 1999, "Self-reports: how the questions shape the answers", American Psychologist, 54, 2, 93-105.

Schwarz, N., Hippler, H.J., 1991, "Response alternatives: the impact of their choice and presentation order", Biemer, P.B., Groves, R.M., Lyberg, L.E., Mathiowetz, N.A., Sudman, S., Measurement Errors in Surveys, Wiley, New York, NY, 41-56.

Schwarz, N., Hippler, H.J., 1995, "Subsequent questions may influence answers to preceding questions in mail surveys", Public Opinion Quarterly, 59, 1, 93-7.

Schwarz, N., Sudman, S., 1996, Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research, Jossey-Bass, San Francisco, CA.

Schwarz, N., Grayson, C.E., Knäuper, B., 1998, "Formal features of rating scales and the interpretation of question meaning", International Journal of Public Opinion Research, 10, 2, 177-83.

Schwarz, N., Groves, R.M., Schuman, H., 1998, "Survey methods", Gilbert, D.T., Fiske, S.T., Lindzey, G., The Handbook of Social Psychology, 4th ed., 1, McGraw-Hill, Boston, MA, 143-79.

Schwarz, N., Park, D.C., Knäuper, B., Sudman, S., 1999, Cognition, Aging, and Self-Reports, Psychology Press, Philadelphia, PA.

Schwarz, N., Strack, F., Hippler, H.J., Bishop, G., 1991, "The impact of administration mode on response effects in survey measurement", Applied Cognitive Psychology, 5, 3, 193-212.

Sheatsley, P.B., 1983, "Questionnaire construction and item writing", Rossi, P.H., Wright, J.D., Anderson, A.B., Handbook of Survey Research, Academic Press, Orlando, FL, 195-230.

Sohal, A.S., Maguire, W.A.A., Putterill, M.S., 1996, "AMT investments in New Zealand: purpose, pattern and outcomes", Integrated Manufacturing Systems, 7, 2, 27-36.

Strack, F., 1992, "'Order effects' in survey research: activation and information functions of preceding questions", Schwarz, N., Sudman, S., Context Effects in Social and Psychological Research, Springer-Verlag, New York, NY, 23-34.

Sudman, S., Blair, E., 1998, Marketing Research: A Problem-Solving Approach, McGraw-Hill, Boston, MA.

Sudman, S., Bradburn, N.M., 1974, Response Effects in Surveys: A Review and Synthesis, Aldine, Chicago, IL.

Sudman, S., Bradburn, N.M., 1982, Asking Questions: A Practical Guide to Questionnaire Design, Jossey-Bass, San Francisco, CA.

Sudman, S., Bradburn, N.M., Schwarz, N., 1996, Thinking about Answers: The Application of Cognitive Processes to Survey Methodology, Jossey-Bass, San Francisco, CA.

Sutton, R.J., Zeits, L.L., 1992, "Multiple prior notifications, personalization, and reminder surveys", Marketing Research, 4, 4, 14-21.

Swink, M., Way, M.H., 1995, "Manufacturing strategy: propositions, current research, renewed directions", International Journal of Operations & Production Management, 15, 7, 4-26.

Sykes, W., Morton-Williams, J., 1987, "Evaluating survey questions", Journal of Official Statistics, 3, 2, 191-207.

Tanur, J.M., 1992, Questions about Questions: Inquiries into the Cognitive Bases of Surveys, Russell Sage, New York, NY.

Taylor, S., Lynn, P., 1998, "The effect of a preliminary notification letter on response to a postal survey of young people", Journal of the Market Research Society, 40, 2, 165-73.

Tourangeau, R., 2000, "Remembering what happened: memory errors and survey reports", Stone, A.A., Turkkan, J.S., Bachrach, C.A., Jobe, J.B., Kurtzman, H.S., Cain, V.S., The Science of Self-Report: Implications for Research and Practice, LEA, Mahwah, NJ, 29-47.

Tourangeau, R., Rasinski, K.A., 1988, "Cognitive processes underlying context effects in attitude measurement", Psychological Bulletin, 103, 3, 299-314.

Tourangeau, R., Rips, L.J., Rasinski, K., 2000, The Psychology of Survey Response, Cambridge University Press, Cambridge.

Tull, D.S., Hawkins, D.I., 1993, Marketing Research: Measurement & Method, 6th ed., Macmillan, New York, NY.

Tummala, V.M.R., Lee, H.Y.H., Yam, R.C.M., 2000, "Strategic alliances of China and Hong Kong in manufacturing and their impact on global competitiveness of Hong Kong manufacturing industries", Integrated Manufacturing Systems, 11, 6, 370-84.

Turner, C.F., Lessler, J.T., Gfroerer, J.C., 1992, "Future directions for research and practice", Turner, C.F., Lessler, J.T., Gfroerer, J.C., Survey Measurement of Drug Use: Methodological Studies, National Institute on Drug Abuse, Rockville, MD, 299-306.

Wänke, M., Schwarz, N., 1997, "Reducing question order effects: the operation of buffer items", Lyberg, L., Biemer, P., Collins, M., De Leeuw, E., Dippo, C., Schwarz, N., Trewin, D., Survey Measurement and Process Quality, Wiley, New York, NY, 115-40.

Warwick, D.P., Lininger, C.A., 1975, The Sample Survey: Theory and Practice, McGraw-Hill, New York, NY.

Weiers, R.M., 1988, Marketing Research, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ.

Weisberg, H.F., Krosnick, J.A., Bowen, B.D., 1996, An Introduction to Survey Research, Polling, and Data Analysis, 3rd ed., Sage, Thousand Oaks, CA.

Willis, G.B., Royston, P., Bercini, D., 1991, "The use of verbal report methods in the development and testing of survey questions", Applied Cognitive Psychology, 5, 3, 251-67.

Wolfe, A., 1990, "Questionnaire design", Birn, R., Hague, P., Vangelder, P., A Handbook of Market Research Techniques, Kogan Page, London, 89-103.

Woodcock, D., Chen, C.Y., 2000, "Skills and knowledge of senior Taiwanese manufacturing managers", Integrated Manufacturing Systems, 11, 6, 393-404.

Yammarino, F.J., Skinner, S.J., Childers, T.L., 1991, "Understanding mail survey response behavior: a meta-analysis", Public Opinion Quarterly, 55, 4, 613-39.