As discussed above, online surveys offer many advantages over traditional surveys. However, there are also disadvantages that researchers contemplating online survey methodology should consider. Although many of the problems discussed in this section are also inherent in traditional survey research, some are unique to the computer medium.

Sampling Issues

When conducting online research, investigators can encounter problems with sampling (Andrews et al., 2003; Howard, Rainie, & Jones, 2001). For example, relatively little may be known about the characteristics of people in online communities, aside from some basic demographic variables, and even this information may be questionable (Dillman, 2000; Stanton, 1998). A number of recent web survey services provide access to certain populations by offering email lists generated from other online surveys conducted through the service, and some offer access to specialized populations based on data from previous surveys. However, if those data were self-reported, there is no guarantee that participants in previous surveys provided accurate demographic or other characteristic information.

Generating Samples from Virtual Groups and Organizations

Some virtual groups and organizations provide membership email lists that can help researchers establish a sampling frame. However, not all members of virtual groups and organizations allow their email addresses to be listed, and some may not allow administrators to provide their addresses to researchers. This makes accurately sizing an online population difficult. Once an email list is obtained, a researcher can email an online survey invitation and link to every member on the list, which theoretically provides a sampling frame.
However, problems such as multiple email addresses for the same person, multiple responses from the same participant, and invalid or inactive email addresses make random sampling online problematic in many circumstances (Andrews et al., 2003; Couper, 2000). One solution is to require participants to contact the researchers for a unique code number before completing the survey, with a field on the online questionnaire for entering that code. However, requiring this extra step may significantly reduce the response rate. Another solution, offered by some newer web survey programs, is response tracking: participants must submit their email address in order to complete the survey, and once they have done so, the program records the address and denies anyone using that address further access to the survey. This feature helps to reduce multiple responses, although someone could still complete the survey a second time using a secondary email address (Konstan, Rosser, Ross, Stanton, & Edwards, 2005).

Generating a Sample from an Online Community

Establishing a sampling frame when researching an online community presents a number of challenges. Unlike membership-based organizations, many online communities, such as community bulletin boards and chat rooms, do not typically provide participant email addresses. Membership is based on common interests rather than fees, and little information is required when registering to use these communities, if registration is required at all. Some researchers attempt to establish a sampling frame by counting the number of participants in an online community, or by using the published number of members, over a given period of time. In either case, the ebb and flow of communication in online communities can make it difficult to establish an accurate sampling frame.
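The response-tracking technique described above can be sketched in a few lines. This is a minimal illustration of the idea, not the implementation of any particular web survey product; the class and method names are hypothetical, and a real system would store completed addresses server-side rather than in memory.

```python
# Minimal sketch of response tracking: once an email address has been
# used to complete the survey, further attempts from that address are
# rejected. Names are illustrative, not from any real survey product.

class ResponseTracker:
    def __init__(self):
        self._completed = set()

    def can_take_survey(self, email: str) -> bool:
        """Return True if this address has not yet completed the survey."""
        return email.strip().lower() not in self._completed

    def record_completion(self, email: str) -> bool:
        """Record a completed response; return False if it is a duplicate."""
        key = email.strip().lower()
        if key in self._completed:
            return False
        self._completed.add(key)
        return True


tracker = ResponseTracker()
tracker.record_completion("Respondent@example.com")
print(tracker.can_take_survey("respondent@example.com"))  # prints False
```

Note that normalizing addresses (trimming whitespace, lowercasing) blocks trivial variants of the same address, but it cannot detect the loophole the text mentions: a participant completing the survey again from a secondary email address.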
For example, participation in online communities may be sporadic depending on the nature of the group and the individuals involved in discussions. Some people are “regulars” who may make daily contributions to discussions, while others participate only intermittently. Furthermore, “lurkers,” or individuals who read posts but do not send messages, may complete an online survey even though they are not visible to the rest of the community. The prevalence of lurkers in online communities appears to be highly variable (Preece, Nonnecke, & Andrews, 2004). Studies have found that in some online communities lurkers represent a high percentage (between 45% and 99%) of community members, while other studies have found few lurkers (Preece et al., 2004). Because lurkers do not make their presence known to the group, it is difficult to obtain an accurate sampling frame or an accurate estimate of the population's characteristics.
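The gap between visible posters and total membership described above can be made concrete with a small sketch. Assuming a researcher has timestamped post records and a (possibly unreliable) published member count, the implied lurker share over an observation window might be estimated like this; the function and field names are hypothetical:

```python
from datetime import datetime, timedelta

def estimate_lurker_share(posts, member_count, window_end, window_days=30):
    """Estimate how many members were visible (posted) in a window,
    and what share of the membership never posted ("lurkers").

    `posts` is an iterable of (author_id, timestamp) pairs. The
    estimate is only as good as `member_count`, which, as discussed
    above, is often self-reported or out of date.
    """
    window_start = window_end - timedelta(days=window_days)
    posters = {author for author, ts in posts if window_start <= ts <= window_end}
    visible = len(posters)
    lurkers = max(member_count - visible, 0)
    share = lurkers / member_count if member_count else 0.0
    return visible, share


posts = [
    ("alice", datetime(2024, 1, 10)),
    ("bob", datetime(2024, 1, 12)),
    ("alice", datetime(2024, 1, 15)),
]
visible, lurker_share = estimate_lurker_share(
    posts, member_count=10, window_end=datetime(2024, 1, 31))
# visible == 2, lurker_share == 0.8
```

The choice of window matters: a short window undercounts intermittent “regulars” as lurkers, while a long window blurs the ebb and flow of participation the text describes.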
As internet communities become more stable, some community administrators are beginning to compile statistics on their community's participants. Many communities require a person to register with the community in order to participate in discussions, and some communities are willing to provide researchers with statistics about community membership (at least in aggregate form). Registration typically involves asking for the individual's name, basic demographic information such as age and gender, and email address. Other community administrators might ask participants for information about interests, income level, education, etc. Some communities are willing to share participant information with researchers as a validation technique by comparing the survey sample characteristics with those of the online community in general. Yet, because individuals easily can lie about any information they report to community administrators, there is no guarantee of accuracy.
When possible, using both online and traditional paper surveys helps to assess whether individuals responding to the online version are responding in systematically different ways from those who completed the paper version. For example, Query and Wright (2003) used a combination of online and paper surveys to study older adults who were caregivers for loved ones with Alzheimer's disease. The researchers attempted to assess whether the online responses were skewed in any way by comparing the responses from the two subsamples. While no significant differences between the subsamples were found in that particular study, real differences between Internet users and non-Internet users might exist in other populations. When such differences do appear, it can be difficult to determine whether they reflect participant deception or genuine characteristics that distinguish computer users from non-users.
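A subsample comparison of this kind can be sketched as a simple significance test on a scale score. The sketch below uses a large-sample z test built from the standard library; this is an illustrative choice, not the test Query and Wright used, and for small subsamples a t test would be more appropriate:

```python
import math
from statistics import mean, variance

def two_sample_z(a, b):
    """Large-sample z test for a difference in means between two
    subsamples (e.g., online vs. paper respondents' scale scores).
    Assumes both groups are large enough for the normal
    approximation to hold; returns (z, two-tailed p)."""
    z = (mean(a) - mean(b)) / math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p


online = [1, 2] * 15   # hypothetical scale scores, online subsample
paper = [5, 6] * 15    # hypothetical scale scores, paper subsample
z, p = two_sample_z(online, paper)
# here p is far below 0.05, so the subsamples differ significantly
```

A non-significant result, as in the Query and Wright study, supports pooling the subsamples; a significant one signals that mode of administration (or who selects each mode) is confounding the responses.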
Other Sampling Concerns
Although some studies of online survey methods have found that response rates in email surveys are equal to or better than those for traditional mailed surveys (Mehta & Sivadas, 1995; Stanton, 1998; Thompson, Surface, Martin, & Sanders, 2003), these findings may be questionable because nonresponse is difficult to track in most large online communities (Andrews et al., 2003). One relatively inexpensive technique used by market researchers to increase response rates is to offer a financial incentive such as a lottery: individuals who participate in the survey are given a chance to win a prize or gift certificate, and the winner is selected randomly from the pool of respondents. However, this technique is not without problems. Internet users frequently encounter bogus lotteries and other “get rich quick” schemes online, so a lottery approach to increasing response rates could undermine the credibility of the survey. In addition, offering a financial incentive may increase multiple responses as participants try to “stack the deck” to improve their chances of winning (Konstan et al., 2005). Straight incentives, such as a coupon redeemable for real merchandise (e.g., books), may be more effective and more credible.
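The lottery draw itself is straightforward, and a minimal sketch also shows the obvious first defense against “stacking the deck”: deduplicating addresses before the draw. The function name is illustrative, and deduplication only neutralizes repeat submissions from the same address, not entries from secondary addresses:

```python
import random

def draw_lottery_winner(respondent_emails, seed=None):
    """Select one winner at random from the respondent pool.

    Addresses are normalized and deduplicated first, so multiple
    submissions from the same address do not improve the odds of
    winning (the "stack the deck" concern of Konstan et al., 2005).
    A seed may be supplied to make the draw reproducible/auditable.
    """
    pool = sorted({e.strip().lower() for e in respondent_emails})
    return random.Random(seed).choice(pool)


# Duplicates collapse to a single entry before the draw:
winner = draw_lottery_winner(["a@x.com", "A@x.com", " a@x.com"], seed=42)
# winner == "a@x.com"
```

Publishing the seed and the deduplicated pool size after the draw is one way to make the lottery verifiable, which may partly address the credibility concern raised above.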
Self-selection bias is another major limitation of online survey research (Stanton, 1998; Thompson et al., 2003; Wittmer et al., 1999). In any given Internet community, some individuals are undoubtedly more likely than others to complete an online survey. Many Internet communities fund their operations through advertising, which can desensitize participants even to worthwhile survey requests posted on a website. In short, some individuals tend to respond to an invitation to participate in an online survey while others ignore it, leading to systematic bias.
These sampling issues inhibit researchers' ability to make generalizations about study findings. This, in turn, limits their ability to estimate population parameters, which presents the greatest threat to conducting probability research. For researchers interested only in conducting nonprobability research, these issues are somewhat less of a concern. Researchers who use nonprobability samples assume that they will not be able to estimate population parameters.
Many of the problems discussed here are not unique to online survey research. Mailed surveys suffer from the same basic limitations. While a researcher may have a person's mailing address, he or she does not know for certain whether the recipient of the mailed survey is the person who actually completes and returns it (Schmidt, 1997). Moreover, respondents to mailed surveys can misrepresent their age, gender, level of education, and a host of other variables as easily as a person can in an online survey. Even when the precise characteristics of a sample are known by the researcher, people can still respond in socially desirable ways or misrepresent their identity or their true feelings about the content of the survey.
The best defense against deception that researchers may have is replication. Only by conducting multiple online surveys with the same or similar types of Internet communities can researchers gain a reliable picture of the characteristics of online survey participants.
Access Issues
Some researchers access potential participants by posting invitations to participate in online communities.
