A promising phenomenon that lends itself to call centers’ ability to improve their own and their other business units’ efficiency is the employment of crowdsourcing. Crowdsourcing is an online, distributed problem-solving and production model already in use by for-profit organizations such as Threadless, iStockphoto, and InnoCentive. Speculation in Weblogs and wisdom-of-crowds theory assumes a diverse crowd engaged in crowdsourcing labor. Crowdsourcing is the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call. Furthermore, because crowdsourcing is in some ways similar to open-source software production, prior research suggests that individuals in the crowd likely participate in crowdsourcing ventures to gain peer recognition and to develop creative skills. However, there has been limited research on the most effective ways to apply crowdsourcing techniques to foster a collaborative environment between call center employees and customers. The goal of this study was to assess the effect that crowdsourcing techniques can have on aligning call center business strategies with functional area operational practices, allowing for the identification, socialization, and alignment of customer-focused business strategies that create value for both the customer and the organization.
The Power of the Crowd: A Study of Applying Crowdsourcing Techniques in Developing Co-Value between Call Center Customers, Call Center Employees and the Overall Organization
Chapter 1: Introduction
Statement of the Problem
Call centers are critically important as they are vibrant parts of the American business culture (Dawson, 2006). Since the opening of the first call centers by the aviation industry in the late 1960s, call centers have become a basic business requirement for customer support, service, and marketing for businesses large and small (Hillmer, Hillmer & McRoberts, 2004). Indeed, in both the United States and Europe, call centers are growing in importance as employers, currently accounting for between 1 and 3% of the workforce, and these percentages are expected to increase in the future (Wiley & Legge, 2006). The introduction of computer-based information technology has further fueled unprecedented growth in the number and size of call centers in recent years (Wiley & Legge, 2006). While a number of services are clearly dependent on a local presence to support warranty service or other product support, other services, such as call centers, need not be located domestically; however, in recent years, some U.S. call centers that had outsourced operations to Asia and elsewhere have brought them back to the United States, finding that domestic operations provide a better customer experience (Kopitzke, 2008). The importance of call centers stems in large part from the fact that they are at the center of an organization’s relationship with its customers. Case in point: call centers are the front door to a business; further, according to Dawson (2006), the call center’s front-line position is even more important in today’s global economy. In this regard, Griffin (2002) emphasizes that “The key to growing a loyal customer rests first in creating an effective frontline employee. Increasingly, for many enterprises, the employee front line is a customer contact center where agents interact with customers” (p. 112).
For many organizations, the front-line employees, frequently referred to as customer service representatives, are the employees with the most direct knowledge of customers. They are familiar with the questions, concerns, and desires of their customers long before others in the organization are. Often, the call center representative is the sole personal contact available to customers and thus plays a significant role in shaping the customer’s perception of the organization (Hillmer et al., 2004).
As with people, companies have only one chance to make a good first impression, but call centers are also essential to maintaining customer loyalty. The performance of frontline employees shapes judgments of the entire company and determines whether future sales are made or lost. Indeed, many firms have traditionally considered call centers little more than a tactical, reactive point of contact for the customer. More visionary companies, however, are now looking at inbound calls as an all-important servicing function that retains existing customers, cross-sells new services, and helps increase the company’s overall share of a customer’s budget (Griffin, 2002). However, the link between how well call centers perform their mission and translating that performance into actionable plans for improving other business areas has not been fully capitalized on. This gap threatens an organization’s competitive advantage and decreases efficiencies in both the call centers and the business functional areas.
Purpose Statement
A very promising phenomenon that lends itself to call centers’ ability to improve their own and their other business units’ efficiency is the employment of crowdsourcing. However, there has been limited research on the most effective ways to apply crowdsourcing techniques to foster a collaborative environment between call center employees and customers. According to Cole (2009), “Crowdsourcing is a new buzzword spawned by social media. It recognizes that useful ideas aren’t confined to positional leaders or experts. Wikipedia is a powerful success story, showing how millions of contributors can build a world-class institution, crushing every hierarchical rival” (p. 8). Crowdsourcing is the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call (Condron, 2010).
Crowdsourcing represents a potentially valuable addition to the manner in which call centers operate because it can provide a venue in which customers can offer their insights, views, and opinions concerning what is important to them as well as the quality of their experience. These are important issues for the vast majority of call centers operating today given the stressful environment that characterizes many of them, resulting in inordinately high levels of turnover and the enormous costs associated with unplanned employee attrition. Indeed, service environments such as telephone call centers can feature never-ending queues of customers and relentless pressure to handle calls (Kossek & Lambert, 2005). To the extent that technology controls the pace of work and is combined with discretion-reducing managerial practices, it can diminish workers’ ability to engage, both physically and psychologically, in other life activities (Kossek & Lambert, 2005). The main purpose of the study is to assess the effect that crowdsourcing techniques can have on aligning call center business strategies with functional area operational practices, allowing for the identification, socialization, and alignment of customer-focused business strategies that create value for both the customer and the organization.
Significance of the Study
According to Doan (2008), crowdsourcing is “an innovative business trend taking collaborative project management online — and to a whole new level. Around the world, individuals are using online communities to identify people with similar experiences or interests who can share ideas, offer feedback and collectively identify which projects hold the most promise” (p. 46). Although many people have never heard of crowdsourcing, consumers who have commented on an industry standard or test-run beta software have taken part in a crowdsourcing initiative (Doan, 2008). Using the technique, an organization can tap into the collective intelligence of the public at large to complete tasks it would normally either perform itself or outsource to a third-party provider. Crowdsourcing can include anything from gathering feedback on a new idea to asking for help to solve a product problem to looking for contractors, investors or new employees interested in participating in a project (Doan, 2008). According to Cooper and Edgett (2008), “The advent of communities of users combined with the widespread availability of high-speed Internet has enabled some companies to tap into the creative abilities of their customer base. They seek input, ideas and, in some cases, partially completed product designs. Whether you are a T-shirt maker in Chicago, a furniture manufacturer (such as Muji in Japan), or a household products company (e.g., P&G with its Connect & Develop system), opening your doors to external inputs and your customers’ wishes via company hosted webpage and the Internet is an increasingly popular route in this trend toward open innovation” (p. 48).
In an article published in Wired magazine entitled “The Rise of Crowdsourcing,” Howe (2006) notes that, “Just as distributed computing projects like UC Berkeley’s SETI@home have tapped the unused processing power of millions of individual computers, so distributed labor networks are using the Internet to exploit the spare processing power of millions of human brains” (p. 37). According to McCluskey and Korobow (2009), “Viewing an institution from the perspective of networks is a key component of successfully managing modern, mission-driven organizations. The traditional hierarchical view of an organization fails to capture how information and knowledge are created and used in executing the organization’s objectives. The network approach has the potential to more deeply inform decision making and outcomes. Moreover, the advent of social software tools embraces and complements the network view of an organization. Social software helps make existing networks explicit while creating avenues for forming new networks around mission exigencies and crowd-sourcing, thus creating shared solutions to difficult challenges” (p. 66).
Furthermore, social software will only increase in importance in helping organizations maintain and manage their domains of knowledge and information. When networks are enabled and flourish, their value to all users and to the organization increases as well. That increase in value is typically nonlinear, where some additions yield more than proportionate values to the organization (McCluskey & Korobow, 2009). Some of the key characteristics of social software applications as they apply to crowdsourcing techniques are listed below.
Personal profile information (“personal brand”) to build user-driven skills taxonomy;
Knowledge sharing, creation, organization, storage, and retrieval;
Really simple syndication (RSS) content feeds replace or restructure the e-mail in-box;
Content rating to rank value of contributions by others;
Practitioners can self-identify as subject matter experts;
Social and professional interaction around hard problems, such as through communities of practice and interest;
Business intelligence (BI) dashboard or analytics portal captures the interactions and transactions taking place across a social network;
Extranet capabilities: glean content (wikis, blogs, and micro-blogs) and selectively make it available to the public or to clients;
Team coordination and communication, including people search (expertise location);
Enterprise skilling to track individual and collective knowledge and skills;
Metadata tagging;
Reputation management and alerting (McCluskey & Korobow, 2009).
The open-source software movement proved that a network of passionate, geeky volunteers could write code just as well as the highly paid developers at Microsoft or Sun Microsystems. Wikipedia showed that the model could be used to create a sprawling and surprisingly comprehensive online encyclopedia. And companies like eBay and MySpace have built profitable businesses that couldn’t exist without the contributions of users (Howe, 2006, p. 37). In his Wired article, Howe coined the term “crowdsourcing,” which he defines as “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call” (p. 37). Howe (2006) further clarified that “It’s only crowdsourcing once a company takes that design, fabricates [it] in mass quantity and sell[s] it.” Companies such as Threadless (https://www.threadless.com), iStockphoto, and InnoCentive (http://www.innocentive.com), as well as user-generated advertising contests, are examples of the crowdsourcing model in action (Brabham, 2008).
Current trends indicate that the use of crowdsourcing will continue to increase in the future, particularly in view of the exponential growth of social networking and open source software initiatives. For instance, West (2008) advises, “Technological progress and the evolution of virtual networks and social vetting — that is, using networks such as LinkedIn to establish trust and research people’s backgrounds — will increase workplace flexibility. The trends will increase the use of emerging work structures that involve engaging professional and social networks through means such as ‘crowd-sourcing’ — when an organization invites the public to help solve a problem” (p. 21).
Likewise, Quart (2007) reports that easy information sharing has led to the birth of “crowd-sourcing,” a new pool of cheap labor in which ordinary people use their “spare cycles to create content, solve problems, even do corporate R&D,” a phenomenon also jargonized as “decentralized horizontal media,” “convergence culture,” or “commons-based peer production” (p. 73).
Crowdsourcing provides a key framework for organizations to capitalize on the wisdom of the crowd, that is, the average of diverse, independent, and decentralized crowds (Surowiecki, 2004). Surowiecki is a proponent of collective intelligence and the ability of a group to solve concrete, well-defined problems and to make decisions that will be intellectually better than those of the isolated individual over time. He makes a solid case for this argument by discussing different types of problems (cognition, coordination, and cooperation) and the necessary conditions for the crowd to be wise (diversity, independence, and decentralization) (Shiu, 2007). The groundswell is a social development in which people use modern technologies to get the things they need from one another (Li & Bernoff, 2008). Specifically, the impact of well-informed crowds on an organization’s attempt to develop business strategies and operational efficiencies that allow the organization and its customers to co-develop and co-create value is very promising in the business area of call centers. That said, it is not known to what extent crowdsourcing techniques can be effectively applied in call centers to increase call center performance as measured by established key performance indicators, ultimately resulting in operating efficiencies that foster an environment where the organization and its customers co-develop value.
Research Questions
The intention of this study is to illuminate and explain the factors that enable call centers to more effectively assist their organizations’ main business units in increasing operational efficiencies through the use of crowdsourcing techniques. With this goal in mind, the following research question will be addressed: “What is the relationship between the application of crowdsourcing techniques and call center performance as measured by normal call center key performance indicators and an organization’s functional business areas’ operational efficiencies?”
Hypotheses
1. The effective application of crowdsourcing techniques leads to increased call center performance.
a. Crowdsourcing techniques are related to an increase in first call resolution in call centers.
b. Crowdsourcing techniques are related to decreased average call handle time in call centers.
c. Crowdsourcing techniques are related to decreased cost per call in call centers.
d. Crowdsourcing techniques are related to decreased abandonment rates in call centers.
e. Crowdsourcing techniques help to optimize call center agent utilization.
2. The increased performance of call centers, which results from the application of crowdsourcing techniques, is associated with increased operational efficiencies in an organization’s major business functional areas.
3. Operational efficiencies, which result from increased call center performance due to the effective application of crowdsourcing techniques, help foster a business environment where both the organization and its customers co-develop value.
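The key performance indicators referenced in these hypotheses are standard call center metrics. As a minimal illustrative sketch (the record fields, formulas, and numbers below are common textbook definitions and illustrative assumptions, not data or code from any particular call center system), they can be computed from a simple call log:

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    answered: bool             # False if the caller abandoned the queue
    resolved_first_call: bool  # resolved with no follow-up call needed
    handle_seconds: int        # talk + hold + after-call work time
    cost: float                # fully loaded cost attributed to the call

def kpis(calls, agent_seconds_staffed):
    """Compute the KPIs named in Hypothesis 1 from a list of call records."""
    offered = len(calls)
    answered = [c for c in calls if c.answered]
    handled_seconds = sum(c.handle_seconds for c in answered)
    return {
        # H1a: share of answered calls resolved on first contact
        "first_call_resolution": sum(c.resolved_first_call for c in answered) / len(answered),
        # H1b: average handle time across answered calls
        "avg_handle_time_sec": handled_seconds / len(answered),
        # H1c: total cost (including abandoned calls) per answered call
        "cost_per_call": sum(c.cost for c in calls) / len(answered),
        # H1d: share of offered calls abandoned before an agent answered
        "abandonment_rate": (offered - len(answered)) / offered,
        # H1e: fraction of staffed agent time spent handling calls
        "agent_utilization": handled_seconds / agent_seconds_staffed,
    }

example_calls = [
    CallRecord(True, True, 300, 4.00),
    CallRecord(True, False, 600, 6.00),
    CallRecord(False, False, 0, 0.50),   # abandoned in queue
    CallRecord(True, True, 300, 4.50),
]
metrics = kpis(example_calls, agent_seconds_staffed=2400)
```

A study testing the hypotheses would compare such metrics before and after crowdsourcing techniques are introduced.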
Scope of Study
Rationale of Study
According to Robinson and Morley (2007) “Call center research has, to date, relied heavily on case studies. The advantages gained from case study research need to be supplemented by well conducted survey research” (p. 250). Likewise, Brabham (2008) emphasizes that, “Coined in the June 2006 issue of Wired, the term crowdsourcing describes a new Web-based business model that harnesses the creative solutions of a distributed network of individuals through what amounts to an open call for proposals” (p. 3). In other words, a company posts a problem online, a vast number of individuals offer solutions to the problem, the winning ideas are awarded some form of a bounty, and the company mass produces the idea for its own gain. While crowdsourcing has proven its worth in for-profit contexts, some have hopes for crowdsourcing as a far-reaching problem-solving model that can harness the collective intelligence of the crowd to benefit government and non-profit projects. No matter the purpose — for business or for the public good — the potential for the crowdsourcing model needs to be tapped (Brabham, 2008).
Overview of Study
This study uses a five-chapter format to achieve the above-stated research purpose. Chapter one introduces the topics under consideration and provides a statement of the problem, the purpose and importance of the study, as well as its scope and rationale. Chapter two provides a critical review of the relevant peer-reviewed and popular literature concerning crowdsourcing and call centers, and chapter three more fully describes the study’s methodology, including a description of the study approach, the data-gathering method, and the databases consulted. Chapter four comprises an analysis of the data developed during the research process, and chapter five presents the study’s conclusions, a summary of the research, and salient recommendations.
Chapter 2: Review of Related Literature
Crowdsourcing: What is It?
Most likely everyone has heard of outsourcing, but many consumers may not have heard about crowdsourcing. Anthony Williams, co-author of Wikinomics: How Mass Collaboration Changes Everything, says examples of crowdsourcing are ubiquitous. Likewise, de Castella (2010) recently observed that, “Open source computer operating systems, such as Linux and Google’s Android, the big rival to Apple’s iPhone, are written and refined by members of the public. Another good example of such collaboration is Wikipedia, which allows users to write and edit entries for its online encyclopaedia” (para. 2). According to Howe (2006), “Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. Hobbyists, part-timers, and dabblers suddenly have a market for their efforts, as smart companies in industries as disparate as pharmaceuticals and television discover ways to tap the latent talent of the crowd. The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing” (p. 37). In this regard, Williams adds that, “For the first time millions of people can aggregate their talent and expertise” (para. 3). Crowdsourcing is already being used extensively – the World Bank used it in Haiti, the firm InnoCentive now has hundreds of thousands of scientists on tap to solve problems for a fee, and New Zealand and the United Kingdom have experimented with it for crafting legislation (de Castella, 2010). In their book, Groundswell: Winning in a World Transformed by Social Technologies, Charlene Li and Josh Bernoff expand on the Forrester Report (2006). They describe how the business environment has been changed by the emergence of powerful social media technologies. However, they note that the relationships that spring from the new technologies are more important than the actual technology.
Li and Bernoff (2008) define these relationships as the “Groundswell.”
The book very effectively defines and explains the implications of the groundswell technologies; that is, blogs, social networks, wikis, forums, really simple syndication (RSS), and widgets are characterized, and details are provided on how best to employ them. Furthermore, the authors delve into how these technologies threaten institutional power and what organizations can do about the threat. Several strategies for leveraging the groundswell are discussed in the book and illuminated through the use of case studies. The final section of the book describes how connecting with the groundswell transforms an organization.
This book is a must-read for any organization wanting to learn how to position itself to exploit the new social technologies that are already available or soon to come.
Another book that is very insightful in detailing the phenomenon of crowdsourcing is The Wisdom of Crowds. In the book, Surowiecki puts forth that informed group judgments can be more valuable in reaching business and investment decisions than even the most brilliant individual’s conclusion. The key, according to Surowiecki (2004), is that the group (crowd) must be diverse, independent, and decentralized. Surowiecki briefly describes the seminal research in group dynamics when he touches on the initial group experiments conducted by sociologist Hazel Knight in the 1920s; additionally, he mentions several other sociologists’ research on the crowd’s wisdom. However, he offers the caveat that the majority of the early research on the “larger the group, the better the decision” dynamic remained largely within the academic world.
Surowiecki uses multiple examples to illustrate his ideas. For instance, he writes about the popular TV show Who Wants to Be a Millionaire. In the show, contestants are given three lifelines to use if they are unable to answer a question:
1. They can ask a single smart friend or family member;
2. They can use 50/50 to eliminate two incorrect answers, and,
3. They can ask the audience (crowd).
According to Surowiecki, the audience picked the correct answer 91% of the time, as opposed to the smart friend choosing the correct answer only 65% of the time. This, as noted by Surowiecki, is not scientific proof of the possibilities of group intelligence; however, it does provide a powerful, if unproven, illustration of the crowd’s potential. The principal message of the book’s author is that the average of independent, well-informed decisions on a particular subject can be more useful than the determination of one individual, regardless of that individual’s qualifications. This theory has wide applicability for market research, business, and investment decisions.
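The audience's advantage in the ask-the-audience lifeline can be sketched with a small simulation of plurality voting. Assuming, purely for illustration (these probabilities are not Surowiecki's data), that each audience member independently knows the right answer with only modest probability and otherwise guesses, the crowd's plurality choice is nonetheless correct far more often than a single better-informed friend:

```python
import random

def plurality_accuracy(n_voters, p_correct, n_options=4, trials=2000, seed=42):
    """Estimate how often a crowd's plurality vote picks the right answer.

    Each voter is independently right with probability p_correct and
    otherwise guesses uniformly among the wrong options.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes = [0] * n_options
        for _ in range(n_voters):
            if rng.random() < p_correct:
                votes[0] += 1                        # option 0 is "correct"
            else:
                votes[rng.randrange(1, n_options)] += 1
        if votes[0] == max(votes):
            wins += 1
    return wins / trials

# A 100-person audience of weak guessers (48% each) versus a single
# "smart friend" who is right 65% of the time.
audience = plurality_accuracy(n_voters=100, p_correct=0.48)
friend = plurality_accuracy(n_voters=1, p_correct=0.65)
```

Because wrong votes scatter across the remaining options while correct votes concentrate on one, the crowd's plurality is right far more often than any single voter, which is the statistical mechanism behind the audience's performance on the show.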
Major corporations such as Procter & Gamble often post research problems on the InnoCentive website, which runs them by a global network of 140,000 scientists who take on the challenges for the fun of solving them. Recently, Intel and Asus launched a project at www.WePC.com asking the public to help them design a better computer (Kim, 2008). On the face of it, this is a win-win situation. Businesses tap into a vast resource of knowledge and creativity, and the crowd gets the enjoyment of working on something a bit different from their daily grind, not to mention the kudos (and possibly payment) if their ideas are used (Kim, 2008, p. 28).
In a crowdsourcing application, the crowd is the collective of users who participate in the problem-solving process. Since crowdsourcing takes place through the Web, the crowd is necessarily comprised of Web users. The crowd consists of individuals who posit solutions in a crowdsourcing application, though the crowd may also consist of firms that put forth solutions on behalf of a company. Thus, though it may be simpler to conceptualize the crowd as a composite of individual Web users, a more precise concept for the crowd is a composite of ideas put forth by solo or group entities.
It is in this composite or aggregate of ideas, rather than in a collaboration of ideas, where strength lies. Based on his investigation of numerous case studies, from futures markets to cattle estimating, Surowiecki (2004) found that “under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them.” [3] This “wisdom of crowds” is derived not from averaging solutions, but from aggregating them:
After all, think about what happens if you ask a hundred people to run a 100-meter race, and then average their times. The average time will not be better than the time of the fastest runners. It will be worse. It will be a mediocre time. But ask a hundred people to answer a question or solve a problem, and the average answer will often be at least as good as the answer of the smartest member. With most things, the average is mediocrity. With decision making, it’s often excellence. You could say it’s as if we’ve been programmed to be collectively smart. [4]
The Internet is that perfect technology capable of aggregating millions of disparate, independent ideas in the way markets and intelligent voting systems do, without the dangers of “too much communication” and compromise [5]. Crowdsourcing applications are ventures that harness and aggregate this wisdom of the crowd to produce solutions and products superior to those of collaborative groups or solo geniuses. Thus, understanding more about this powerful crowd is important.
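The aggregation argument can be made concrete with a brief simulation in the spirit of Surowiecki's examples (the quantities and the error model below are illustrative assumptions, not data from any study): when many independent, unbiased guesses about an unknown quantity are averaged, the aggregate's error shrinks well below the typical individual's error, and only a small fraction of individuals outguess the crowd:

```python
import random
import statistics

def crowd_vs_individuals(truth=1200.0, n_guessers=800, spread=150.0, seed=7):
    """Compare the crowd average's error with individual guessers' errors."""
    rng = random.Random(seed)
    # Each guess is the true value plus independent, zero-mean noise.
    guesses = [rng.gauss(truth, spread) for _ in range(n_guessers)]
    crowd_error = abs(statistics.fmean(guesses) - truth)
    individual_errors = [abs(g - truth) for g in guesses]
    typical_error = statistics.fmean(individual_errors)
    # Fraction of individuals whose guess beats the crowd average
    share_beating_crowd = sum(e < crowd_error for e in individual_errors) / n_guessers
    return crowd_error, typical_error, share_beating_crowd

crowd_error, typical_error, share_beating_crowd = crowd_vs_individuals()
```

The simulation depends on the independence and unbiasedness assumptions that Surowiecki himself emphasizes: if the guessers share a common bias, averaging cannot remove it.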
If it is to be at all innovative, an aggregate of ideas must be diverse. “[D]iversity and independence are important because the best collective decisions are the product of disagreement and contest, not consensus and compromise.” [6] Brabham (2007b) asserted that diversity — in terms of gender, sexuality, race, nationality, economic class, (dis)ability, religion, etc. — is important because each person’s unique identity shapes their worldview. Thus, we can assume that differing worldviews might produce differing solutions to a problem, some of which might be superior solutions because the ideas might consider the unique needs of diverse constituencies. [7]
Despite the need for diversity in the crowd to develop effective solutions, research shows substantial gaps in access even to the technology needed to participate in crowdsourcing (Fox, 2005). This “digital divide” means that the crowd currently is likely to be white, middle- or upper-class, English-speaking, highly educated, with high-speed Internet connections in the home. Moreover, the most productive individuals in the crowd are also likely young in age, certainly under 30 years of age and more likely to be under 25 years of age (Lenhart, et al., 2004; Lenhart and Madden, 2005), as this age group is most active in the so-called Web 2.0 environment of massive content creation, such as through blogging (Rainie, 2005; Madden, 2005; Madden and Fox, 2006).
According to Thompson (2009), “The term ‘Web 2.0’ refers to the next generation of Internet applications that allow (even encourage) the average Internet user to collaborate and share information online. It signals a major change in Internet use, since in the computer world ‘2.0’ indicates a major upgrade to an original program. Web 2.0 sites allow anyone to contribute content and to participate with other users in editing and even combining or remixing existing content with other material to repurpose it for additional uses. Thus content on the Internet is no longer static; it is changing and dynamic. A distinguishing Web 2.0 feature is the increasing significance of the individual user, as anybody (even a fifth-grader) can create and upload text, as well as audio and video, to the Internet. Another characteristic is the reliance on user participation, often referred to as the ‘wisdom of the crowd’ and the ‘architecture of participation.’ Web 2.0 has an inherent trust in people and what they can contribute when working together toward a common goal for the greater good” (p. 711).
According to Tim O’Reilly of O’Reilly Media, the concept of “Web 2.0” began with a conference brainstorming session between O’Reilly and MediaLive International. Dale Dougherty, web pioneer and O’Reilly VP, noted that far from having “crashed,” the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. O’Reilly adds that, “What’s more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as ‘Web 2.0’ might make sense? We agreed that it did, and so the Web 2.0 Conference was born” (2005, para. 2). By 2006, the term “Web 2.0” had clearly taken hold, with more than 9.5 million citations in Google; there are currently more than 34,100,000 such citations, a clear reflection of the growing popularity of this collaborative approach. Nevertheless, there remains some disagreement concerning precisely what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword and others accepting it as the new conventional wisdom (O’Reilly, 2005).
Figure __. “Meme Map” of Web 2.0
Source: O’Reilly, 2005
In sharp contrast to “Web 1.0,” today’s Web 2.0 is read/write. The Internet’s first era of mass use required users with programming skills to contribute (upload) material to the Internet. Early Internet users found that material in a manner similar to going to the library to find and take home a book. Also in contrast, Web 2.0 users still go to the library (i.e., the Internet), but instead of figuratively just taking home a book to read, they now enjoy other possible uses, including contributing comments, changing the contents, and having others simultaneously read the material in real time (Thompson, 2009).
Web 2.0 is “shifting the focus from individualized work to collaborative efforts, from individual learning to collective knowledge, from passive reception to active creation.” Kathy Schrock, a technology administrator in the Nauset Public Schools in Orleans, Massachusetts, and keeper of the Kathy Schrock’s Guide for Educators website, relates that the “ability to add to the body of knowledge about a topic, offer additional information, or state an opinion via public commenting on a blog or social networking site allows students to understand the importance of producing information for an authentic audience.”
Several thousand Web 2.0 applications have become available in the last few years. These applications are generally free to individuals. One suite of online applications that promotes creating, sharing, and collaborating is Zoho (http://zoho.com), which offers a word processor, spreadsheet, presentation tool, and note taker, among other services. Another increasingly popular and diverse online productivity and collaboration application is Google Docs (http://docs.google.com), which requires a free Google account. Google is increasingly becoming more than just a search engine. Google Docs is a suite of applications that allow you to import existing documents or create new documents, spreadsheets, and presentations. As with other Web 2.0 applications, it is Web-based so you can create, edit, and store your material online. Using online applications instead of programs installed on your desktop or laptop computer is a hallmark of Web 2.0 applications, so much so that it has acquired its own name: “cloud computing.” The all-encompassing Internet is the “cloud.”
However, Google recently announced that it will begin permitting word processor users to store files on their personal computers in addition to using Google’s online storage, thereby giving you access to your work when the Internet is not available, such as when you are on a plane. In this way, you will be able to edit your work on your computer and then synchronize it with what is stored online when you have Internet access again. Similar functions with spreadsheets and presentation software will be phased in over time. More information can be found at the Google Docs Blog (http://googledocs.blogspot.com) or at the Google Docs Community Channel (http://youtube.com/googledocscommunity). Of course, you can also combine Google applications such as Picasa (store/edit/share photos), Blogger (create and share blogs), Calendar (coordinate meetings and events with shareable calendars), and Earth (blend satellite images, maps, and even 3D structures to display global geographic information). Google is fast becoming a one-stop shopping center for Web 2.0 applications.
Social bookmarking sites are another Web 2.0 category. Instead of saving Internet bookmarks to your computer’s hard drive, you save the addresses at a website. They are then available to you from any computer with Internet access, anywhere in the world. Social bookmarking gives you greater capabilities than the traditional method of bookmarking. You decide who has access to your links — they can be confidential and only for you, limited to certain individuals (e.g., students) who have password access, or available to the general public. Most social bookmarking sites, such as del.icio.us (http://del.icio.us), encourage users to assign “tags” (think keywords) to their saved sites. The result is called a tag cloud, a group of tags displayed in different sizes to indicate relative popularity. Michelle Bourgeois of Pensacola Catholic High School in Pensacola, Florida, relates that her science department is “beginning to share Web resources by creating a network of del.icio.us users so that they can easily tag and collect curriculum-related bookmarks in a common place.” Other popular social bookmarking sites include www.blinklist.com and www.stumbleupon.com.
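The relative sizing in a tag cloud can be sketched in a few lines of code. This is an illustrative sketch only; the sample tags and the linear scaling rule below are assumptions, not del.icio.us’s actual algorithm.

```python
from collections import Counter

def tag_cloud_sizes(tags, min_pt=10, max_pt=32):
    """Map each tag to a font size scaled linearly by its frequency."""
    counts = Counter(tags)
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts match
    return {
        tag: round(min_pt + (n - lo) * (max_pt - min_pt) / span)
        for tag, n in counts.items()
    }

# Hypothetical bookmarks tagged by a science department
bookmarks = ["biology"] * 8 + ["chemistry"] * 3 + ["physics"]
sizes = tag_cloud_sizes(bookmarks)
# "biology" (most frequent) renders largest; "physics" smallest
```

A real site would additionally cap how many tags are shown and render each tag at its computed size.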
In response to the numbers of educators and students using PowerPoint, SlideShare (www.slideshare.net) features storage of presentations online. This enables students to show their work to a larger audience, for example. Or administrators can upload presentations from professional development sessions so participants have access afterward. However, SlideShare is not just a place to upload a presentation. Your slideshows can be public or private. You can synchronize audio with your slides, and you can join a community of SlideShare groups who share your interests. The opportunity to participate in a community of users is a major attribute of Web 2.0 applications.
There are many more types of Web 2.0 applications. For example, users who want to create their own blog can use free host sites such as Blogger (www.blogger.com), or teachers can set up a classroom blog and screen entries before they are posted at Class Blogmeister (http://classblogmeister.com). Creating a podcast (an audio or video recording posted to the Internet for downloading to an iPod or other digital device) is simple. Use a telephone to record directly to GabCast (www.gabcast.com) or Gcast (www.gcast.com). Students can then listen to teachers’ audio commentary about their papers. Set up an RSS (really simple syndication) feed using Bloglines (www.bloglines.com) to get timely information from various blogs, news sources, and podcasts. The information arrives in the form of “feeds” to a single site, which means you don’t have to canvass each individual site for updates. Still other Web 2.0 applications let users brainstorm (http://bubbl.us), diagram (www.gliffy.com), personalize a homepage (www.pageflakes.com), interact in live video broadcasting (www.ustream.tv), and share various media (http://flickr.com) (Thompson, 2009).
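The “feeds” described above are simply structured XML documents that an aggregator such as Bloglines reads on the user’s behalf. A minimal sketch of that reading step, using a tiny RSS 2.0 feed invented for illustration:

```python
import xml.etree.ElementTree as ET

# A tiny, invented RSS 2.0 feed of the kind an aggregator would poll
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Classroom Blog</title>
    <item><title>Field trip photos posted</title><link>http://example.com/1</link></item>
    <item><title>New podcast episode</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(rss)
# Collect the headline of every <item> so they can be shown in one place
headlines = [item.findtext("title") for item in root.iter("item")]
# headlines -> ['Field trip photos posted', 'New podcast episode']
```

An aggregator repeats this step across every subscribed feed, which is why the reader never has to visit each site individually.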
For instance, consumers can use del.icio.us to store applicable sites for colleagues or to preselect sites for student projects. Use Flickr to store student-taken photos and to locate photos for projects. Use Google Docs to create and edit presentations online. Transfer them to SlideShare and add accompanying audio. Wikis (such as Wetpaint, www.wetpaint.com) allow users to collectively create, add to, and edit content and make real-time access to committee documents available 24/7. Turn to bubbl.us for online brainstorming sessions with committee colleagues.
Educators have suggestions on how to begin using Web 2.0. David Widener, who is IT and curriculum director for the Woodward Academy outside Atlanta, advises that “anyone wishing to get started with Web 2.0 needs to take the approach of our students. Go online and click unbridled. You have to explore and experiment.” Elaine Talbert, who manages the Web Filter Unit for the New South Wales Department of Education and Training in Australia, uses Web 2.0 tools for teacher professional learning and says to “take some time to explore all the tools and widgets. Apply fun and function for use in teaching as the criteria.”
Web 2.0 offers educators new teaching and learning tools. Web 2.0 can change the way teachers interact with students and how students interact among themselves. When you start exploring the information and sites presented here, try not to get overwhelmed by the ever-growing number of Web 2.0 applications. In addition to the sites mentioned here, you should also visit sites that have lists of other sites, such as www.go2web20.net and www.classroom20.com. Start exploring. Get acquainted with what Web 2.0 has to offer. Take your time. Take baby steps — one small application at a time. Web 2.0 offers powerful applications with great potential, but you have to use them to experience their benefits.
The ways in which individuals use and are gratified by new media technologies, such as the Internet, differ from the patterns identified in studies of individuals’ use of “older” media technologies, such as newspapers and television. The primary categories of uses and gratifications that emerged from the many early individual and collaborative efforts of Blumler and Katz (Blumler, 1979; Blumler and Katz, 1974; Blumler, et al., 1985; Katz, et al., 1973a, 1973b), for example, are necessarily limited by the fact that the media of the time of those studies did not offer nearly as many interactive possibilities and user-productive modes as the digital technology of the Web era. Today, audiences do not merely use and seek pleasure from content. Audiences are both producers and consumers of media content, what futurist Alvin Toffler (1980) called “prosumers.” This is not to say Blumler, Katz, and other behaviorist researchers of the “old” media era do not still have some relevance. After all, their findings were important in that they discovered an audience that was not merely a passive receptacle for media content, but was instead fundamentally interactive. Early uses and gratifications research prophesied a moment when the pleasures of media interactivity would amplify if users were given media technologies that truly enabled production. The Internet — specifically given the recent Web 2.0 trend toward massive user-generated online content — is the vehicle for distributed, mass, pleasurable production.
To adapt to the new character of digital media, more recent studies into audience motivations for online media use have focused on the curious practice of open source software production. In this production, users essentially work for free to create software (Coar, 2006), which in itself undermines the power of simple extrinsic motivators such as money and also complicates intrinsic motivators. Several studies on motivation in open source participation (Bonaccorsi and Rossi, 2004; Hars and Ou, 2002; Hertel, et al., 2003; Lakhani and Wolf, 2005) support what open source pioneer and founder of Linux, Linus Torvalds, predicted would be the primary motivator: the pleasure found in doing hobbies. As Torvalds stated, “most of the good programmers do programming not because they expect to get paid or get adulation by the public, but because it is fun to program.” [8]
Lakhani, et al. (2007) measured motivations of winners of crowdsourced scientific problems at InnoCentive.com, another exemplar crowdsourcing application. For a number of reasons, however, I respectfully avoid most of Lakhani, et al.’s (2007) survey instrument for motivations. First, the crowd at InnoCentive is incredibly specialized and educated, the “majority (65.8%) holding a Ph.D.,” many of them in scientific fields [9]. The crowd at iStockphoto is surely not entirely composed of professionally trained graphic designers and photographers holding MFAs in their fields. The problem at iStockphoto requires far less specialized, problem-specific skills, presumably skills a large portion of the population might have — at least a larger portion than have Ph.D.s. Second, Lakhani, et al.’s (2007) study found the possibility of monetary reward to be a strong indicator of success in winning InnoCentive challenges, along with intrinsic motivations (e.g., the joy of solving scientific problems) and simply having free time to fill (Lakhani, et al., 2007, pp. 10-11).
Opportunities to gain new skills or propel one’s career were not strong motivators [11]. This is problematic for the present study because monetary reward for individuals at iStockphoto (about U.S.$0.20 per download) is low compared to InnoCentive, where awards offered by “seeker” companies range from U.S.$10,000 to U.S.$100,000 for winning solutions [12]. A bounty that steep understandably makes the desire for financial gain a strong motivator for participation. Also, the opportunities for gaining new skills and possibly advancing one’s career are low for InnoCentive members, probably because so many are indeed Ph.D.s with established careers in industry, corporate research and development, or the academy. Many of the biographies of winning solvers at InnoCentive, for instance, state that they are (likely happily) employed in scholarly or high-technology institutions and are thus already in careers suited to their goals. The same does not appear to hold for iStockphoto, at least judging from the narratives emerging from iStockphoto members and crowds at other art and design applications (Brabham, 2008).
Crowdsourcing isn’t without flaws, however. Pitfalls include the added costs of bringing a project to an acceptable conclusion, the difficulty of maintaining a working relationship with a crowd, and the challenge that smaller companies, with less visibility, face in using it at all. Nevertheless, with many more projects in the pipeline, you can expect to see much more of it. When William Bernbach introduced what he called creative teams at Grey Advertising in the 1940s, he transformed the way the ad industry came up with ideas. Now crowdsourcing has the potential to do the same for virtually any firm (Kim, 2008).
According to Yahr (2007), crowdsourcing techniques have even redefined how journalists go about researching and reporting events. For instance, in the hours after the Interstate 35W bridge collapsed in Minneapolis on August 1, 2007, staffers at the Des Moines Register posted a message on the paper’s Web site asking readers to send details from the scene. Though Des Moines is about three hours to the south and the incident occurred in the early evening, the next morning’s paper contained a story filled with vignettes from eyewitnesses: a 30-year-old woman coming home from the gym who drove over the bridge mere seconds before it fell; a man on the scene who described seeing a truck “crunched like an accordion.” (Yahr, 2007, p. 9).
Asking readers for help to broaden a story is a typical reporting technique. But a strategy known as crowdsourcing has breathed new life into the concept, allowing journalists to blend an age-old tactic with new technology and resources. Yes, multiple editors and media veterans agree — at its core, crowdsourcing is similar to what journalists have always done: elicit the assistance of readers while gathering material for a story. Wired magazine contributing editor Jeff Howe says crowdsourcing is the natural outgrowth of reporters adding, “Do you have tips for me? E-mail or call” at the end of a story.
Crowdsourcing, which can take that notion much further, has become an integral part of the far-reaching restructuring at the nation’s largest newspaper chain, Gannett, which on May 1 transformed its newsrooms into “information centers.” Dividing newspapers into seven departments that rely heavily on multimedia and hyperlocal news, the new approach also seeks to heighten reader involvement in the newsgathering process. The goal is to recruit readers at the beginning stage of stories, publishing inquiries on the papers’ Web sites and in their print editions, and ultimately using citizen contributions to help produce high-quality content. (Yahr, 2007, p. 9).
A recent article by Howe (2008) reports that the use of crowdsourcing has extended to the political sphere as well. The last presidential election was indeed historic, Howe notes, but not just for the reasons emblazoned in headlines throughout the world. It was also the most closely monitored election in U.S. history, as everyone from CNN to The Huffington Post to Harvard University asked people to document their voting experience and provide instant reports on problems at the polls. Thousands responded, sending in text messages, photographs, videos and even voice mails. The resulting data were aggregated and displayed — in real time — on maps, in charts, and over RSS feeds.
All of this activity signaled a small but significant advance in the use of crowdsourcing as a new tool in digital journalism. While crowdsourcing, or citizen journalism, has been widely embraced by all manner of news operations over the past several years, its track record has been decidedly spotty. In theory, crowdsourcing offers outlets like newspapers, newscasts, and Web sites an opportunity to improve their reporting, bind their audiences closer to their brands, and reduce newsroom overhead. In reality, relying on readers to produce news content has proved to be a nettlesome — and costly — practice. Howe coined the word “crowdsourcing” in a Wired magazine article published in June 2006, though at that time he did not focus on its use in journalism. It was — and is — defined as the act of taking a job once performed by employees and outsourcing it to a large, undefined group of people via an open call, generally over the Internet. Back then Howe explored the ways TV networks, photo agencies, and corporate R&D departments were harnessing the efforts of amateurs. He had wanted to include journalism in the piece, but there was a dearth of examples (Howe, 2008).
That quickly changed. Not long after Wired published this article the term began to seep into the pop cultural lexicon, and news organizations started to experiment with reader-generated content. Around this time, some of the more memorable moments in journalism had been brought to us not by a handful of intrepid reporters, but by a legion of amateur photographers, bloggers and videographers. When a massive tsunami swept across the resort beaches of Thailand and Indonesia, those “amateurs” who were witness to it sent words and images by any means they could. When homegrown terrorists set off a series of bombs on buses and subways in London, those at the scene used their cell phone cameras to transmit horrifying images. Hurricane Katrina reinforced this trend: As water rose and then receded, journalists — to say nothing of the victims’ families — relied on information and images supplied by those whose journalistic accreditation started and ended with the accident of their geographical location (Howe, 2008).
With these events, the news media’s primary contribution was to provide the dependable Web forum on which people gathered to distribute information. By late 2006, the stage seemed set for the entrance of “citizen journalism,” in which inspired and thoughtful amateurs would provide a palliative for the perceived abuses of the so-called mainstream media. These were heady times, and a spirit of optimism — what can’t the crowd do? — seemed to pervade newsrooms as well as the culture at large. At Wired, we were no less susceptible to the zeitgeist. In January 2007, we teamed up with Jay Rosen’s NewAssignment.Net to launch Assignment Zero. We anticipated gathering hundreds of Web-connected volunteers to discuss, report and eventually write 80 feature articles about a specified topic. At about the same time, Gannett was re-engineering its newsrooms with the ambition of putting readers at the center of its new business strategy. I had a close-up view of both efforts. At Assignment Zero, I was trying to help apply the crowdsourcing principles, while in 2006 I broke the news of Gannett’s retooling — the most significant change since it launched USA Today in 1982 — after spending several months reporting on the sea change at the company for Wired Magazine (2) and for my book, “Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business.” (Howe, 2008).
It would be easy to say that the original optimism was simply naivete, but that wouldn’t be exactly correct. As it turns out, there’s a lot that the crowd can’t do or, at least, isn’t interested in doing. Recently I spent time talking to sources at Gannett as well as some of my Assignment Zero alumni (3) to revisit what went right, what didn’t, and to pull from them valuable lessons for others to put to good use. What I’ve learned has reinforced my belief that crowdsourcing has limited applicability to journalism — it’s a spice, not a main ingredient in our fourth estate. I’ve also come to fear that news organizations will rely more and more on reader-generated content at the expense of traditional journalism. But what’s also clear is that the animating idea — our readers know more than we do — is evolving into something that, if used wisely, will be far more efficient and useful than our first, early attempts at this new form of journalism. At any rate, crowdsourcing isn’t going away, so it behooves all of us to make sure it improves journalism but does not replace it (Howe, 2008, p. 48).
Assignment Zero was intended to demonstrate, as I wrote in a Wired.com piece on the occasion of the project’s launch in March 2007, that “… A team of professionals, working with scores of citizen journalists, is capable of completing an investigative project of far greater scope than a team of two or three professionals ever could.” In this case, the first topic of investigation by the crowd would be “… The crowd itself — its wisdom, creativity, power and potential.” Dozens of “subject pages” were constructed, ranging from open source car design to architecture. There was even a subject page called “the crowdsourced novel.” Within each topic, there were up to 10 assignments, in which contributors could report, brainstorm or “write the feature.” It was an ideal format for a newsroom. But then, we weren’t soliciting journalists. (Howe, 2008, p. 48).
The initiative started out strong. The New York Times published a column devoted to Assignment Zero, and the effort received lots of positive attention from the blogosphere. Within the first week, hundreds of volunteers had signed up. But just as quickly, these enthusiastic volunteers drifted away. Six weeks later, most of our topic pages were ghost towns. What had we done wrong? Here are a few lessons learned:
1. Using the crowd to study crowd-sourcing proved far too wonky and bewildering for most of our would-be citizen journalists.
2. We failed to anticipate that while building a community can be difficult, maintaining it is much harder. We didn’t have a tier of organizers ready to answer questions and guide people in the right direction. With their earnest e-mails unanswered, quite naturally most volunteers drifted away.
3. We expected the crowd would fall all over themselves for the opportunity to produce all the artifacts of the journalistic practice — reporter’s notes, inverted pyramid articles, and long-form features. It turned out that asking people to write a feature proved about as appealing as asking them to rewrite their college thesis. And so our contributors spoke with their feet. (Howe, 2008, p. 48).
Six weeks in, we turned things around. We scrapped most of the feature stories; instead people were asked to conduct Q&As. Critically, we shifted our tone. Instead of dictating assignments to people, we let the crowd select whom they wanted to interview or suggest new subjects entirely. In the end, about 80 interviews made it to the Web site as published pieces, and the majority were insightful and provocative. What their interviews made clear is these volunteer contributors tackled topics about which they were passionate and knowledgeable, giving their content a considerable advantage over that of professional journalists, who often must conduct interviews on short notice, without time for preparation or passion for the subject. (Howe, 2008, p. 48).
Gannett, too, found itself experimenting with crowdsourcing in some of its newsrooms but did so for different reasons and in different ways than Assignment Zero. The company conceived a wholesale reinvention of the newsroom — rechristened the “information center” — with readers now residing at the heart of the two planks in its strategy. After a successful initial foray into crowdsourced reporting — at The (Fort Myers) News-Press, in which a citizen-engaged investigation unearthed corruption in a sewage utility in a town in Florida — Gannett decided to export this model to its other newspapers. (4) Readers (a.k.a. community members) would also play a significant newsroom role in the renamed “community desk,” which would oversee everything from blogs to news articles written by readers. (Howe, 2008, p. 48).
In reporting on Gannett’s strategy, Howe elected to focus on how the changes were being implemented at one paper, The Cincinnati Enquirer. One indication of how the newsroom was changing was the shift in job responsibilities. A longtime metro reporter, Linda Parker, had recently been reassigned as “online communities editor.” Every Enquirer Web page prominently featured the words “Get Published” as a way of eliciting stories, comments and anything else Cincinnatians might feel compelled to submit. It all landed in Parker’s queue; perhaps not surprisingly, these words and videos have never resembled anything commonly considered journalism. Even figuring out how best to prompt contributors has revealed valuable lessons to those at the Enquirer — ones that other news organizations can learn from. “It used to read, ‘Be a Citizen Journalist,’” Parker told me. “And no one ever clicked on it. Then we said, ‘Tell Us Your Story,’ and still nothing. For some reason, ‘Get Published’ were the magic words.” (Howe, 2008, p. 48).
Now, nearly two years into the experiment, the Enquirer considers this feature to be an unequivocal success. I sat with Parker, a cheerful woman in her mid-50s, in April of last year as she pored over several dozen submissions she had received that day. There was one written by a local custom car builder trumpeting his upcoming appearance on a BET show, and another, written with an emotional intensity befitting the circumstances, announcing a play being held to raise funds for a fifth-grader’s bone marrow transplant. Parker almost never rejects anything she receives, though she scans each one for “the F-word,” and then posts it to the site. “A few years ago these would have come across the transom as press releases and been ignored,” she says. This observation points to a central problem with Gannett’s strategy — indeed, with both the hyperlocal and crowdsourcing movements in general. Readers are content to leave the gritty aspects of reporting to journalists; they prefer to focus on content and storytelling that Nicholas Lemann, dean of the Graduate School of Journalism at Columbia University, once characterized in The New Yorker as being the equivalent of the contents of a church newsletter. As it turns out, Tom Callinan, the Enquirer’s editor, observed a while into the project that “even ‘Get Published’ was too newspaperlike in its sound. People don’t want to get published. They want to ‘share.’” And so this is what the Web site’s button now encourages its readers to do. The results continue, as Callinan says, to tend toward “pretty fluffy stuff.” (Howe, 2008, p. 48).
So what are we to take away from these experiments? Readers are very interested in playing a role in the creation of their local media. They don’t necessarily want to write the news; what they want is to engage in a conversation. This doesn’t mean, however, that they don’t have valuable contributions to make. This fall, Callinan told me, readers shared with others on the Enquirer Web site news about a stabbing at a local strip club and a photograph of a theater fire. “We were able to confirm the stabbing,” he said. “We would have never known about it without the tip.” It might not be grist for a Pulitzer, but it fills the copy hole. (Howe, 2008, p. 48).
Nor were these key lessons lost on those of us involved in Assignment Zero. In fact, Assignment Zero’s community manager, Amanda Michel, adeptly applied the lessons of what didn’t work at her next venture, directing The Huffington Post’s effort, Off the Bus, with its citizen-generated coverage of the presidential campaign. Rather than duplicate what journalists were doing, Off the Bus leveraged its strength — namely, the size of its network of 12,000 “reporters.” With citizen correspondents spread across the nation and ready to attend smaller rallies, fundraisers and get-out-the-vote events that the national press ignored, Off the Bus found its niche. (Howe, 2008, p. 48).
Off the Bus became arguably the first truly successful example of crowd-sourced journalism with some of its citizen reporters breaking national stories. Perhaps its most significant story was about the moment when Barack Obama, at a nonpress event fundraiser in San Francisco, made his famous comment about how rural Americans “cling to guns or religion” as an expression of their frustration. However, this reporting by Mayhill Fowler, the citizen journalist who broke this story, actually drew attention away from Off the Bus’s broader achievement. Toward the end of the campaign, Off the Bus was publishing some 50 stories a day, and Michel — with the help of her crowd — was able to write profiles of every superdelegate, perform investigations into dubious financial contributions to the campaigns, and publish compelling firsthand reports from the frontlines in the battleground states. The national press took note — and sent its kudos — but more importantly, readers noticed. Off the Bus drew 3.5 million unique visitors to its site in the month of September. (Howe, 2008, p. 48).
Michel achieved this because she took away valuable information from the failures of the experimentation at Assignment Zero. Rather than dictate to her contributors, she forged a new kind of journalism based on playing to their strengths. The result: Some contributors wrote op-eds, while others provided reporting that journalists at the Web site then used in weaving together investigative features, including one that explored an increase in the prescribing of hypertension medicine to African-American women during the campaign. They also contributed “distributed reporting,” in which the network of contributors performed tasks such as analyzing how local affiliates summed up the vice presidential debate. “We received reports from more than 100 media markets,” Michel said. “We really got to see how the debate was perceived in different regions.” (Howe, 2008, p. 48).
Is Off the Bus the future of journalism? Hardly, Michel contends, and I agree wholeheartedly. She regards Off the Bus as complementary, not competitive, with the work done by traditional news organizations. “We didn’t want to be the AP. We think the AP does a good job. The question was what information and perspective can citizens, not reporters on the trail, offer to the public?” Nor does she claim the Off the Bus method would work with all stories. It’s easy to build such a massive network of volunteer reporters when the story is so compelling. But what happens when the topic generates far less passion, even if it is no less important — say, for example, the nutritional content in public school lunches? (Howe, 2008, p. 48).
The take-away message for journalists should be this: Adapt to these changes and do so quickly. “The future of content is conversation,” says Michael Maness, the Gannett executive who helped craft the company’s recent newsroom overhaul. Worth noting is that one of Gannett’s unqualified successes is the so-called “mom sites,” launched in some 80 markets. Each is overseen and operated online by a single journalist with the assignment of facilitating conversation while also providing information. “We’re moving away from mass media and moving to mass experience,” says Maness. “How we do that? We don’t know.” (Howe, 2008, p. 48).
The concept of the wisdom of crowds is not new. The possibilities of group intelligence, at least when it came to judging questions of fact, were demonstrated by a host of experiments conducted by American sociologists and psychologists between 1920 and the mid-1950s, the heyday of research into group dynamics. Although in general, as we’ll see, the bigger the crowd the better, the groups in most of these early experiments — which for some reason remained relatively unknown outside of academia — were relatively small. Yet they nonetheless performed very well. The Columbia sociologist Hazel Knight kicked things off with a series of studies in the early 1920s, the first of which had the virtue of simplicity. In that study Knight asked the students in her class to estimate the room’s temperature, and then took a simple average of the estimates. The group guessed 72.4 degrees, while the actual temperature was 72 degrees. This was not, to be sure, the most auspicious beginning, since classroom temperatures are so stable that it’s hard to imagine a class’s estimate being too far off base. But in the years that followed, far more convincing evidence emerged, as students and soldiers across America were subjected to a barrage of puzzles, intelligence tests, and word games. The sociologist Kate H. Gordon asked two hundred students to rank items by weight, and found that the group’s “estimate” was 94% accurate, which was better than all but five of the individual guesses. In another experiment students were asked to look at ten piles of buckshot — each a slightly different size than the rest — that had been glued to a piece of white cardboard, and rank them by size. This time, the group’s guess was 94.5% accurate. According to the author, “A classic demonstration of group intelligence is the jelly-beans-in-the-jar experiment, in which invariably the group’s estimate is superior to the vast majority of the individual guesses. 
When finance professor Jack Treynor ran the experiment in his class with a jar that held 850 beans, the group estimate was 871. Only one of the fifty-six people in the class made a better guess” (Surowiecki, 2004, p. 1).
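The statistical intuition behind these experiments, that averaging many independent and unbiased guesses cancels out individual error, can be sketched with a short simulation. The guess distribution below is an assumption chosen for illustration, not data from Treynor’s class.

```python
import random
import statistics

random.seed(42)
TRUE_COUNT = 850  # jelly beans in the jar

# Simulate 56 students whose guesses are noisy but centered on the truth
guesses = [random.gauss(TRUE_COUNT, 200) for _ in range(56)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)

# Count how many individuals out-guessed the crowd's collective estimate
better_individuals = sum(abs(g - TRUE_COUNT) < crowd_error for g in guesses)
```

Because individual errors point in both directions, they largely cancel in the mean; in simulations of this kind, typically only a handful of the 56 simulated students beat the group estimate, mirroring Treynor’s classroom result.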
Likewise, stock markets aggregate input from buyers and sellers to determine prices, and as noted above, the average of all estimates in certain contests, such as guessing the number of jellybeans in a jar, often yields remarkably accurate results. The downside of crowdsourcing is that groups tend to blunt the originality of people who have new ideas. Nevertheless, recent data show that individuals between the ages of 14 and 40 now spend more time on social networks than surfing the Web, and the level of their commitment to social networking is high. The Internet is more interesting to them than traditional information sources such as newspapers; they also place an enormous emphasis on mobility (cell phones) and connectivity (Hawkins, 2007, p. 33). Indeed, an early form of crowdsourcing was even used by the curators and other experts of the Smithsonian Institution to help identify a motley collection of mysterious tools, artifacts and other implements that had become part of the “nation’s attic’s” collection during the mid-20th century. These articles were placed in a series of glass cabinets marked with a large question mark and a sign that encouraged visitors to the museum to speculate concerning their identity. The exhibit, which would become one of the most popular in the Smithsonian’s National Museum of History and Technology, was proof-positive of the power and potential of crowdsourcing to help enterprises of all sizes and types achieve a competitive advantage. According to Bedini (1977), “Day after day, for over two years, the public enthusiastically deposited hundreds of suggestions. About half of the objects that stumped the experts were correctly identified by the public” (p. 96).
While engaging the visiting public in a meaningful way was perhaps the most important aspect of this exercise by the Smithsonian staff, a more pragmatic aspect was its cost effectiveness, a feature that has only been amplified by the introduction of computer-assisted technologies, which have also facilitated the operation of virtually all types of call centers.
A wide array of software solutions has been introduced to facilitate crowdsourcing, including the following, which have been used by the motion picture industry to help critique and even write movie scripts.
1. Tangler.com is an excellent way to create a media-rich forum around a project that enables audio, video, stills and provides an RSS feed for easy notification. There is even an embeddable forum version that can be placed on your site or blog.
2. Real-time communication is one of the foundations of mobilizing an audience base. Meebo.com is a free real-time chat tool that you can embed within your site or blog. There are two embeddable versions of Meebo: a single chat client that allows chatting with site visitors when you are online and Meebo Rooms, which provides a group chat room.
3. Freeconferencecall.com provides a dedicated conference line, which is a great way to hold Q&As, record interviews or mobilize your audience. The free line supports unlimited calls and a maximum of 96 callers, gives upwards of 6 hours of recording time and can output a .wav file of the call. The service even provides an RSS feed if you want to create a podcast.
4. Live streaming provides a simple way for filmmakers to communicate with their audiences. Sometimes a simple video podcast will not do and you may want to build more of an interactive experience around your message. Justin TV (justin.tv), Blog TV (blogtv.com) and Ustream (ustream.tv) all offer free services that allow users to broadcast live streaming channels across the Web while providing viewers the ability to chat in real time. All three services are free and in most cases can support more than one camera.
5. Ning.com is a free service that enables you to easily create your own social network that includes many of the features found in the most popular social networking sites such as MySpace and Facebook.
6. Twitter.com is a fun way to communicate with your audience in 140 characters or less. Audience members can subscribe or follow your tweets via their Web browser or a mobile phone (Weiler, 2008, p. 86).
According to the editors of the Library Administrator’s Digest, Twitter is a short message routing service – messages are limited to a maximum of 140 characters. This length restriction makes “tweets” (as Twitter messages are called) equivalent to cell phone “texts” but with a difference: text messages are essentially one-to-one, whereas tweets are one-to-many. The core of social networking is that there’s a commons, a shared area, wherein people communicate. Normal e-mail has no commons. Blogs have localized commons and there’s usually a specific focus to the hierarchical discussion, the post’s topic, and editorial control over the thread. Twitter has a global commons and there’s no restriction (other than on length) to what is posted and no enforced hierarchy (To tweet or not to tweet, 2009, p. 35).
All of these forms of communication are almost like a poor man’s e-mail. Here’s the way to view these different forms of communication: e-mail is like person-to-person phone calls, while blogs are like lectures with follow-up questions and discussions. But social media, such as Twitter, are like a cocktail party. So why Twitter? According to a blog entry on Compete.com in February, Twitter ranks as the third largest social network with six million users and fifty-five million monthly visitors (Facebook is number one and MySpace number two).
Some of the ways that Twitter can be used in crowdsourcing applications include the following:
1. Twitness – which involves a few hundred Twitterers watching something, such as the Academy Awards show, and tweeting about the show with all sorts of funny remarks about the clothes, etc. It turns something solitary like TV watching into an interactive experience.
2. Breaking News – when a major news event happens, often a Twitterer will be there and share their experience faster than the cops can cordon off the spot.
3. Communications – a great way to broadcast a quick message to other co-workers working out in the field without having to contact them each individually.
4. Feedback – an instantaneous way to respond to a question or comment vs. using e-mail.
5. Crowd-sourcing and Information Polling – users can poll people who are Twitter followers about a question you have or conduct an informal survey.
6. Public Address System – Twitter can be used to announce the start of something or to promote a product or service to a user’s followers (To tweet or not to tweet, 2009, p. 35).
Other industries are also taking advantage of crowdsourcing techniques to help achieve their organizational goals. For instance, Weiler (2009) reports that, “The MakerBot is a boxlike unit that prints thin plastic, laying it down layer by layer similar to a glue gun. Over time the layers build and become physical objects” (p. 18). MakerBot does an excellent job of understanding the value of a niche community and providing tools and resources to enable them to share and “make.” Case in point: MakerBot is experimenting with crowd-sourcing manufacturing. Parts for the actual MakerBot are being printed by those in the community, thus eliminating the need for outside manufacturing. The goal is to eventually have an army of MakerBots making themselves. By giving the community an active role in literally and figuratively building the MakerBot they have tapped a loyal user base and in the process have energized a whole community (Weiler, 2009).
The success enjoyed by a T-shirt company called Threadless prompted other firms to explore how crowdsourcing might work for them. One of those companies is RYZ, a tiny, high-end sneaker company in Portland, Oregon. Like other companies relying on community design, RYZ doesn’t need a large marketing or design staff. It uses potential customers for that. Would-be designers use a template from the company’s Web site to create a pair of high-rise sneakers. The sneaker designs are posted online, and viewers vote on which ones they like. Winning designs are produced, and the designer gets $1,000 plus 1% royalties. There’s practically no overhead involved. (Kaufman, 2008).
Marketing costs? Practically nothing. The business model relies largely on the Internet, hoping that online voting and buzz on sites such as MySpace and Facebook will create demand for specific products. MIT professor Eric von Hippel, an expert in innovation management, says online design is becoming a substitute for in-house research and development while voting takes the place of conventional market research. “This is really the biggest paradigm shift in innovation since the Industrial Revolution,” von Hippel says. “For a couple hundred years or so, manufacturers have been really imperfect at understanding people’s needs. Now people get to decide what they want for themselves.” (Kaufman, 2008).
Moreover, relying on customers for design and market research allows his company to move much more quickly, says Rob Langstaff, the founder and CEO of RYZ. Langstaff, who used to head Adidas North America, says in a traditional footwear company, it might take 12 months — and a substantial investment — to get a new design to market. “What we’ve done is compressed this time using the Internet,” Langstaff says. Design to final product: about 6 weeks.
The community-based model will be easier to adopt in some industries than in others. For example, designing T-shirts and sneakers is quite different from engineering complex industrial equipment; however, von Hippel says lots of firms see the handwriting on the wall; many are turning to the Internet for customer feedback and ideas, but they are not yet comfortable with the idea that customers want manufacturers to listen to them (Kaufman, 2008). RYZ’s Langstaff acknowledges that giving consumers control is humbling — and risky. But, he says, “it’s almost less risky to have the most talented actors on your stage than to try to do it yourself.” Still, he admits that for his business to take off, consumers will have to take a leap of faith — from voting and talking about a shoe design to actually plunking down a sizable chunk of change to buy a pair of sneakers (Kaufman, 2008).
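At bottom, the RYZ workflow is a vote tally plus a payout rule. The following hypothetical sketch shows the arithmetic; the $1,000 prize and 1% royalty come from the article, while the design names, vote counts, and sales figure are invented for illustration:

```python
# Hypothetical sketch of a community-design contest in the style of RYZ:
# designs are voted on, the winner is produced, and the designer earns a
# flat prize plus a royalty on sales. The prize ($1,000) and royalty rate
# (1%) are from the article; votes and sales revenue are invented.
PRIZE = 1_000.00      # flat payment to the winning designer
ROYALTY_RATE = 0.01   # 1% of sales revenue

votes = {"design_a": 412, "design_b": 978, "design_c": 655}  # invented counts

winner = max(votes, key=votes.get)  # the community's choice goes to production

sales_revenue = 250_000.00  # invented first-run revenue for the winning shoe
designer_payout = PRIZE + ROYALTY_RATE * sales_revenue

print(f"winning design: {winner}")
print(f"designer earns: ${designer_payout:,.2f}")  # $1,000 + 1% of sales
```

The point of the sketch is how little infrastructure the model needs: the “design staff” is the voting community, and the company’s obligation reduces to one flat prize and a royalty stream.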
Some of the companies that facilitate crowdsourcing have been around for years but are just now building markets large enough and liquid enough to achieve critical mass. OnForce – founded as ComputerRepair.com in 2003 – has focused narrowly on system setups, equipment repairs, network wiring and other onsite IT tasks. Two OnForce competitors, Guru.com and Elance (elance.com), offer a broader set of services; in addition to an IT repair guy, they can set you up with lawyers, Web designers, and free-lance writers, many of whom work from home. InnoCentive (innocentive.com) functions like a virtual R&D department, allowing companies to post laboratory “challenges” to scientists and inventors all over the world and offer cash prizes for their completion. (Whitford, 2008)
Then there’s Mechanical Turk (mturk.com), one of several fresh offerings from Amazon’s new B2B division, Amazon Web Services. The original Mechanical Turk was an 18th-century chess-playing automaton that fooled spectators and opponents who didn’t realize there was a flesh-and-blood chess master hiding inside the box. Similarly, Amazon’s Mechanical Turk lets companies enlist human beings to perform tasks computers can’t – identifying details in photographs, for instance, or reading handwritten information on forms – and pay piece rates upon completion. (Amazon collects a 10% commission from the client.) Amazon describes the service as “artificial intelligence,” but other companies view it simply as a virtual hiring hall, much like OnForce. (Whitford, 2008)
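The economics of such a piece-rate market are simple to compute. In this simplified sketch, the 10% commission comes from the article, while the task count and per-task rate are invented; it also assumes the commission is charged to the client on top of worker pay, as the article’s wording suggests:

```python
# Simplified piece-rate accounting for a Mechanical Turk-style task market.
# The 10% commission figure is from the article; the task count and rate
# are invented, and charging the commission on top of worker pay is an
# assumption based on "Amazon collects a 10% commission from the client."
COMMISSION = 0.10

def task_batch_cost(num_tasks: int, rate_per_task: float) -> tuple[float, float]:
    """Return (total client cost, total worker earnings) for one batch."""
    worker_earnings = num_tasks * rate_per_task
    client_cost = worker_earnings * (1 + COMMISSION)
    return client_cost, worker_earnings

# Example: 5,000 photo-tagging tasks at 5 cents each.
cost, earnings = task_batch_cost(num_tasks=5_000, rate_per_task=0.05)
print(f"client pays ${cost:,.2f}; workers earn ${earnings:,.2f}")
```

Seen this way, the marketplace is just a broker: the client’s spend and the crowd’s earnings differ only by the platform’s cut, which is what makes the model attractive for large volumes of small, self-contained tasks.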
Mechanical Turk customer iConclude is a two-year-old enterprise-software company in Bellevue, Washington, that sells scripts that automate troubleshooting and routine repairs on IT networks. “How are we going to build this library of repairs?” asked CEO Sunny Gupta. “We were doing a lot of it in-house, but we felt there was a big [labor] market out there we weren’t tapping.” To test the waters, iConclude posted a request on Mechanical Turk for one simple procedure; 300 programmers responded from all over the world, 80 of whom iConclude deemed qualified. Gupta was thrilled, especially when he discovered that he could get the job done for one-tenth the cost of doing it in-house. iConclude is building a library of 10,000 automated procedures. Gupta hopes to source about 10% of that work through Mechanical Turk. “Will I have to pay more [as the market develops]?” he wonders. “I think the answer is yes. And I don’t think it’s suitable for projects that require lots of supervision. But we think this model has a lot of potential for self-contained tasks. It could actually change the way a lot of companies do business” (Whitford, 2008, para. 3).
Technology startup Waze is tapping into the collective knowledge of road warriors in order to make life more pleasant for drivers while creating reliable street maps. A free Waze: Way to Go service that proved its worth in Israel is making its U.S. debut, inviting motorists to use smart phones to keep one another in the know about speed traps, short cuts, hazards, accidents and more. “It seems like a silly thing, but it is addictive,” Waze chief executive Noam Bardin said while demonstrating the service for AFP. “There is this feeling that you are not alone… Some people just like knowing someone else is out there.” Satellite tracking technology commonly built into smart phones lets Waze automatically measure traffic flow while simultaneously verifying or modifying public street information in its database. Motorists “teach” Waze computers where roads are and how best to maneuver about simply by driving. Drivers can upload comments, along with pictures, from along their routes to alert fellow “wazers” to anything from accidents or detours to a favorite place to grab a cup of coffee. Waze also provides users with turn-by-turn directions. While Waze acts as a handy, free navigation tool for drivers, it is, at its core, a “wiki” style approach to map making: Waze users are essentially feeding updated street information to the service every time they drive. The crowdsourcing approach is expected to produce up-to-date street maps that will compete with offerings from the leading mapping data firms Navteq and Tele Atlas, which dispatch fleets of specially equipped trucks to gather data they in turn sell to firms that provide Internet mapping and in-car navigation services. “We plan to come out with cheaper, better maps built from scratch,” Bardin said. “We are very much about folks driving their daily commutes or local routes. You might know a better way to get someplace; fine, drive it and you’ve taught us.”
Waze features include outlining routes considered fastest, shortest, most popular, or most environmentally friendly. “We’ve found that people have started using our application to build maps all over the world,” Bardin said. “By year’s end, we plan to open up internationally.” The project was named “Freemap” when it was launched in Israel in 2006, and data collection for a “live” map of that country is almost complete. Waze plans to make money by eventually charging for map data and premium navigation services (Waze turning road warrior, 2009).
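At its core, this wiki-style mapping is sensor aggregation: each phone’s satellite trace contributes speed observations for a road segment, and averaging them per segment yields a live traffic picture. A minimal sketch of that aggregation step, with invented segment names and speed readings (Waze’s actual pipeline is of course far more elaborate):

```python
# Minimal sketch of crowd-sensed traffic aggregation in the spirit of Waze:
# each driver's phone reports (road_segment, observed_speed_mph), and
# averaging the reports per segment yields a live traffic estimate.
# Segment names, speeds, and the 25 mph threshold are invented.
from collections import defaultdict

reports = [
    ("main_st", 12.0), ("main_st", 9.5), ("main_st", 14.0),  # congested
    ("highway_101", 61.0), ("highway_101", 58.5),            # free-flowing
]

speeds_by_segment: dict[str, list[float]] = defaultdict(list)
for segment, speed in reports:
    speeds_by_segment[segment].append(speed)

avg_speed = {seg: sum(s) / len(s) for seg, s in speeds_by_segment.items()}

for seg, spd in avg_speed.items():
    status = "congested" if spd < 25 else "flowing"
    print(f"{seg}: {spd:.1f} mph ({status})")
```

Every commute adds more reports, so the estimates sharpen as the crowd grows, which is exactly the property that lets a crowdsourced map challenge fleets of survey trucks.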
Even search engine giant Yahoo! is executing what modern marketers call a “crowd sourcing” or “wisdom of the crowd” strategy. Writing in the Manila Bulletin, Ong (2007) reports, “What better way than to set up a command post in this city, the epicenter of software development in Asia. And to host Yahoo! Hack Day also in this country, symbolically the first in the continent. Following the successful events in Sunnyvale (Yahoo!’s headquarters) and London this year, the developer community event was brought here by the Yahoo! Developer Network with none other than technologist David Filo, co-founder and Chief Yahoo!, leading company executives” (p. 37).
Asked why Yahoo! opened Hack Day to the public, Filo remarked, “It makes us prioritize….(facing the challenge of) How to take all the Yahoo! assets and get the most of what we have.” He further explained their belief that “there is a fair relation between India and emerging markets. We are very focused on the emerging markets.” Speaking to foreign media observers of the India edition of Yahoo! Hack Day, Pranesh Anthapur, COO, Yahoo! India Research and Development, said, “The challenge is on understanding [that] the consumer needs more than technology. Today, when you build a product, it is for millions of people in different languages.”
Another challenge that confronts his India-based team every second is how to manage data for millions of people. Sharad Sharma, CEO, Yahoo! India, interjected, “The next 20 million, for example, will be of a different demographic profile… For you to reach a mass market, then you use Yahoo!” The Yahoo! R&D India organization, 1,000 strong and growing very fast in its newly built five-story building, is presented with over 500 million unique users using the Yahoo! Network every month, over 3.5 billion pages viewed every day, and 10 terabytes of access log data to mine through every day.
Some of the notable work at Yahoo! Bangalore includes audio search, a podcasts portal, desktop search, anti-spam technology, Web developer platforms, deep crawl technology, information extraction, content management, data research, image/audio/video analysis, etc. Overall, as a global Internet media and product company, Filo said, Yahoo! has three focus areas, namely consumers, publishers and advertisers in cyberspace. “All these three feed on each other,” explained Filo, underscoring that “Publishers are waiting to access the next 500 million consumers (on the Internet).” He urged online media publishers to “get into the advertising platform.”
Another Yahoo! top gun, Bradley Horowitz, VP of Yahoo!’s Advanced Development Division, said that for two years now they have been collecting the best breed of social media. Filo and Horowitz noted that there are “People with page to sell.” This signals opportunities for Yahoo!, which has become one of the world’s most highly trafficked Web sites and one of the Internet’s most recognized brands. “We have an opportunity to deliver,” Filo exclaimed. Horowitz added, “Yahoo! has smart ads which are tailored for the user/consumer. Smart ads are more relevant.”
As a leader in display advertising and search, he underscored, Yahoo! brings relevant content ads. Added Filo, “We make sure the ads are of high quality.” Horowitz said advertisers are transforming how they buy ad spaces. “Now we can measure conversions. Publishers want to have an ad network that monetizes,” he stated (Ong, 2007).
Notwithstanding the success stories to date, though, crowdsourcing is not without its detractors. For example, an article in Forbes by Woods (2009) notes that, “The recent coverage of the $1 million Netflix prize was rightly heralded as a victory for crowdsourcing. The competition was designed to create a better algorithm for recommending films. But in the popular press, and in the minds of millions of people, the word crowdsourcing has created an illusion that there is a crowd that solves problems better than individuals. For the past 10 years, the buzz around open source has created a similar false impression. The notion of crowds creating solutions appeals to our desire to believe that working together we can do anything, but in terms of innovation it is just ridiculous” (para. 2).
In fact, Woods (2009) cautions that, “There is no crowd in crowdsourcing. There are only virtuosos, usually uniquely talented, highly trained people who have worked for decades in a field. Frequently, these innovators have been funded through failure after failure. From their fervent brains spring new ideas. The crowd has nothing to do with it. The crowd solves nothing, creates nothing” (para. 3). From Woods’s perspective, what really happens in crowdsourcing as it is practiced in a wide variety of contexts, from Wikipedia to open source to scientific research, is that a problem is broadcast to a large number of people with varying forms of expertise. Then individuals motivated by obsession, competition, money or all three apply their individual talent to creating a solution (Woods, 2009).
Just look at the successes of crowdsourcing to see how the crowd is an illusion. Wikipedia seems like a good example of a crowd of people who have created a great resource. But at a conference last year I asked Wikipedia founder Jimmy Wales about how articles were created. He said that the vast majority are the product of a motivated individual. After articles are created, they are curated — corrected, improved and extended — by many different people. Some articles are indeed group creations that evolved out of a sentence or two. But if you took away all of the articles that were individual creations, Wikipedia would have very little left. Open-source developers are often mentioned as a crowd of motivated programmers ready to meet the world’s software needs. A lot of wishful thinkers love to put forth the notion that all large software companies should be quaking in their boots because a crowd of open-source developers is ready to eat their lunch and create software for any purpose. (Woods, 2009).
There is no crowd of open-source developers ready to attack every problem. In fact, most open-source projects are the product of one obsessed individual who wrote the software to meet his own needs. Often this individual was joined by other programmers who shared the founder’s vision and, under his direction, created great software. Yes, there are large teams of developers on open-source projects, but without the virtuoso contribution at the outset, they would achieve nothing. In a clear indication of the lasting importance of the role of the founding virtuoso, Linus Torvalds’ absence from a list of the Linux kernel contributors made headlines in April. (Woods, 2009).
Virtually all large projects, especially Linux, are dominated by programmers who are paid to work on the project because it benefits their corporate employers. Such structures are consortiums like the impromptu assembly of Netflix innovators as explained below, not crowds. There are vast areas of the software business, such as accounting, where no open source exists or for which the open-source offerings offer just a fragment of the functionality of commercial solutions. It turns out most people with deep expertise do not spend their time writing software to give away. Increasingly, what is offered as open source is simply commercial software using open source as a marketing technique. Alfresco has done this beautifully in the enterprise content management market. (Woods, 2009).
The Netflix contest is a prime example of individual virtuosity at work. One team was clearly in the lead and then a consortium of teams that had worse performance joined together and combined their innovations to create an algorithm that won the contest. For most of the contest, individuals toiled to figure out a solution. At the end, a consortium was formed. None of the invention happened through a crowd. (Woods, 2009).
Taken together, the foregoing indicates that crowdsourcing represents an idea whose time has come; however, Woods asks, “So what’s my problem? Why does it bug me that people think crowdsourcing is something it is not? Why do I care that people think a crowd is capable of individual virtuosity? What bugs me is that misplaced faith in the crowd is a blow to the image of the heroic inventor. We need to nurture and fund inventors and give them time to explore, play and fail. A false idea of the crowd reduces the motivation for this investment, with the supposition that companies can tap the minds of inventors on the cheap” (Woods, 2009, para. 2).
Does crowdsourcing exist as it is popularly conceived? Yes, it does, but it doesn’t have anything to do with innovation. Jigsaw, the community-created database of 16 million business contacts, is crowdsourcing. Tens of thousands of people have added business contacts to Jigsaw’s database so they can earn points and get access to business contacts entered by others. Jigsaw sells this data to companies, generating millions in revenue. Jigsaw is the only true crowdsourced business I know of. The other businesses mentioned in the crowdsourcing category, InnoCentive, Threadless, Spreadshirt, iStockphoto, are really versions of Wikipedia, that is, aggregations of the inventions of individual virtuosos. Other large projects, like Linux, Apache and GIMP, are virtuoso creations around which consortiums of experts have gathered. (Woods, 2009, para. 5).
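Jigsaw’s incentive structure is a simple points economy: contributing a contact earns points, and retrieving someone else’s contact spends them. The source does not give Jigsaw’s actual point values, so the figures in this hypothetical sketch are invented purely to show the mechanism:

```python
# Hypothetical points economy in the style of Jigsaw: members earn points
# by contributing business contacts and spend points to look up contacts
# added by others. All point values below are invented for illustration.
EARN_PER_CONTACT = 10  # invented reward for each contributed contact
COST_PER_LOOKUP = 5    # invented price of retrieving someone else's contact

class Member:
    def __init__(self) -> None:
        self.points = 0

    def contribute(self, num_contacts: int) -> None:
        """Earn points for adding contacts to the shared database."""
        self.points += num_contacts * EARN_PER_CONTACT

    def lookup(self, num_lookups: int) -> bool:
        """Spend points to retrieve contacts; fail if the balance is short."""
        cost = num_lookups * COST_PER_LOOKUP
        if cost > self.points:
            return False  # must contribute more before consuming more
        self.points -= cost
        return True

m = Member()
m.contribute(3)       # add 3 contacts
assert m.lookup(4)    # 4 lookups succeed against the earned balance
print(f"points remaining: {m.points}")
```

The design choice worth noting is that consumption is gated on contribution, so the database grows as a side effect of people wanting to use it, which is what distinguishes this model from the virtuoso-driven projects Woods describes.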
Other critics suggest that open source initiatives such as Wikipedia invite “group-think” in ways that diminish the usefulness of the millions of articles already published, as well as what new offerings emerge in the future. For instance, Cox (2009) cautions that, “Today, [Wikipedia] offers more than 10 million articles in more than 270 languages and is consulted daily by millions of readers all over the world. Never has so much knowledge been so freely provided to so many. This miracle is made possible by a peculiar and unlikely mode of production. All of Wikipedia’s articles are composed and endlessly revised by unpaid, self-appointed enthusiasts, many of them unqualified and anonymous, who are themselves numbered in millions. And there’s the rub. How can we trust the efforts of such a motley multitude? We cannot, according to the world’s accredited intellectual elite, who see Wikipedia’s triumph as the cutting edge of a generalised onslaught on their mastery of our minds. Our grasp of truth, they warn, depends on the expertise of the authorised. If we let the lunatics take over the asylum, we shall be sorry. On the face of it, you would think they must be right” (p. 33).
Surely an encyclopaedia written by anyone who happened to fancy contributing must quickly founder in a sea of ignorance, negligence, malevolence, bigotry, hawking, puffery and propaganda? Wikipedia, according to its often distinguished detractors, must always be unreliable and therefore useless. Yet its global following continues to grow. Traditional encyclopaedias are collapsing in the face of the onward march of their awesome rival. Already, Wikipedia is the world’s default provider of knowledge. The future of what we can know therefore turns on the validity of its pronouncements. As an arbiter on this matter, Andrew Dalby is not wholly disinterested. He is an ex-librarian who currently makes cider in France but is also a Wikipedia contributor (Cox, 2009, p. 33).
Still, he is careful to keep his enthusiasm in check. To enable us to evaluate the monumental resource confronting us, he provides a meticulous and judicious examination of the way it is put together. An extraordinary world is unveiled, in which doggedness and obsession are locked in an endless struggle with falsehood. So vast is the army of vigilant Wikipedians that no edit is ever far from scrutiny. Over a two-year period, the entry on Calvin Coolidge was vandalised 10 times with obscenities and the like; yet the damage was repaired on average within three minutes. As it turns out, the good drives out the bad. To impose your own version of almost anything, you must be prepared to spend long hours arguing repeatedly with all-comers over a single word, and ready to do this for ever. What happens in practice is that the wisdom of the crowd produces a consensus that prevails. It is this process, not the short-lived acts of sabotage, that poses the real question. The emergent consensus may not be the truth; it may not be accurate. It is simply the lowest common denominator of what a limitless number of well-disposed people believe to be the truth (Cox, 2009). In reality, though, this is how history books are written, with history frequently representing lies, prevarications and half-truths that humans have simply agreed upon over time. In this regard, Cox notes that, “Wikipedia therefore provides a particular kind of data. Once recognised for what it is, it is as worth citing as any other source, in spite of the widespread warnings against doing so. After all, even highly qualified Britannica authors can make mistakes and fall prey to bias. When they do, no army of invigilators is available to put things right” (2009, p. 33).
More recently, Technorati announced the launch of a new service called WTF (Where’s the Fire), which encourages users to write explanations about why a given search term is hot right now. And because folks can also vote for the WTFs they think are most helpful, the best WTFs can be highlighted for the community. It’s Live Web search meets wiki meets voting (Hane, 2007). Many compelling sites exist for users to communicate, participate, and leverage social networking, especially when they involve close, personal, or professional vertical communities of interest, and we’re likely to see many more emerge. The key element is support for vertical or special interests, not general ones.
A good example is the new release of Engineering Village. Indexers at Elsevier Engineering Information (Ei) have been tagging free-language terms for years for Compendex and said it’s now time to test and add to this a “bottom up indexing from users.” Rafael Sidi, vice president of product development at Ei, wrote in his blog, “Our new release with [the] Tags and Groups feature is live and we might be the first abstract & index database (subscription) [that] is taking the ‘indexing’ to the next level.” Users are urged to tag any record in Engineering Village – for public or private use, for their institutions, and for their groups.
The social news arena is another area that can leverage user participation to a real advantage. MSN has created MSN Reporter, a social news site similar to digg and reddit. Available in beta since October 2006, MSN Reporter has now launched in only three markets: the Netherlands, Belgium, and Norway, according to the Microsoft LiveSide blog. MSN Reporter is “an ongoing part of MSN’s efforts to increase the amount of user generated content on its network.” Users can submit links from anywhere on the Web. Other users can then vote stories up or down and leave comments.
Wikis and Other User Tools
Amazon.com introduced Amapedia, a new community for sharing information about the products users like the most. Amapedia introduced a new way of organizing products it calls “collaborative structured tagging.” This makes it easy for users to tag products with what they are and with their most important facts and for others to search, discover, filter, and compare products by those tags. Amapedia is the next generation of Amazon.com’s ProductWiki feature; all of users’ previous ProductWiki contributions were preserved. (Hane, 2007).
The power of mashups (sites or applications that combine content from more than one source) is impressive, especially with all the useful applications developed during Hurricane Katrina. Now there’s a free tool to create your own. MapBuilder.net is a Web 2.0 service or rapid mashup development tool to build custom Google and Yahoo! maps without any knowledge of the Google/Yahoo! Maps API or JavaScript. MapBuilder.net provides a visual interface for the map-building process with geocoding and import features. It lets users tag locations on their maps and then publish the map either on MapBuilder.net or on their own Web sites. Librarian Marylaine Block, who alerted me to MapBuilder.net, commented that this is “a great way to provide local information to library users in a highly usable format.” She said quikmaps and Wayfaring offer similar services (Hane, 2007).
The trend toward crowd sourcing and citizen media recently received a boost from a global news source. The Associated Press (AP) announced it is now collaborating with NowPublic.com, a company based in Vancouver, British Columbia, that claims to be the world’s largest participatory news network with more than 60,000 contributors from 140 countries. The initiative is designed to bring citizen content into AP news gathering and to explore ways to involve NowPublic’s on-the-ground network of news contributors in AP’s breaking news coverage. Another project is NewAssignment.Net, which calls itself “an experiment in open-source reporting.” It is partially funded by Reuters. Reuters also has a partnership with Yahoo! News to showcase photos and videos submitted by the public. And the newspaper chain Gannett is now incorporating elements of reader-created citizen journalism. (Hane, 2007).
At least one recent survey indicates consumer support for these new crowdsourcing initiatives. A majority of Americans (55%) reported in an online survey that bloggers are important to the future of American journalism, and 74% reported that citizen journalism will play a vital role, according to a We Media-Zogby Interactive poll. The We Media survey results were released by iFOCOS and pollster John Zogby as part of an iFOCOS conference on media innovation hosted by the School of Communication at the University of Miami. (Hane, 2007).
Call Center Administration
The rapid development of information and communication techniques and technologies as new basic technologies produced a change from the industrial society towards the information society. The resulting fast storage, processing and transmission of information and data in miniaturized units (computers) dramatically reduced the distance between the customer’s wishes and the delivery of the product. One of the consequences is an increase in individual demands and requirements on the customer’s side, which inevitably will bring about a more individual advisory and assistance service. This individualized direct marketing, from the simple telephone information service up to a demanding product information service, will increasingly be provided via so-called “call centers” (Bullinger & Ziegler, 1999). High information density and processing speed, but also an enormous responsibility of the call center agent with regard to the customer, are the characteristic features of these new tasks. In general, these call centers are legally and economically independent service entities. What they have in common is the direct connection and closeness to the customer (Bullinger & Ziegler, 1999).
Call centers are installed to sell or buy information, services, products, or procedures. They may be established as a company unit for improving customer service, although there is a rising tendency to install them as independent service companies. The job requirements of call center agents may be derived from the corresponding call center profile, where the depth of the job plays an important part in addition to the market segment to be considered; the more complex the job, the higher the requirements. A major aspect of the stress to which the call center agent is exposed is the assignment to the so-called inbound or outbound area. Inbound means that calls are accepted and processed “toward the inside”; outbound means activities directed outward, in the sense of telephone shopping, marketing, and so forth. For extending or enriching the content of the work, a combination of the two jobs would be best.
A major requirement for making a call center work is the integration of telecommunication and computer systems (CTI). Working in a call center normally means sitting, highly concentrated and under time pressure, in front of a video display unit (VDU). However, as shown in Table __ below, there are some differences between traditional VDU jobs and call center work stations.
Table
Differences between traditional VDU workplaces and call center work

Traditional VDU work | Call center work station
Discontinuous work at the VDU | Continuous work at the VDU
Flexible planning of work | Permanent contact with customers
Communication possible | Extreme isolation, acoustic stress
Normal working hours | Frequent shift work
Electronic and non-electronic means of work | Electronic means of work only

Source: Bullinger & Ziegler, 1999, p. 1314
By and large, call centers are unique workplaces and organizational cultures because they belong to multiple geographical spaces (e.g., North Atlantic and Asian, domestic and overseas, high and low technology, and particular country, city, organizational, and workplace spaces) (Pal & Buzzanell, 2008). According to Pal and Buzzanell, “[Call center] spaces and cultures offer arrays of possible structural positions (i.e., locations within work and nonwork networks) and discursive as well as sociocultural resources (i.e., linguistic, historical, and cultural devices that guide individuals’ interpretations of events and action and influence their representations of self) on which employees can draw when they choose their different identifications and (re)position their identities” (p. 32).
Call center employees make telemarketing calls and cater to customers on insurance claims, credit cards, computer hardware, network connections, banking, and financial plans. So cost effective and productive are these centers that the call center industry grew 59% to $2.3 billion between 2002 and 2003, and the number of foreign companies outsourcing to India increased from 60 in 2000 to 800 by the end of 2003, an increase of more than 1200% (Mirchandani, 2004). In fact, Dell Computers alone had a 30-site call center network located in four major Indian cities employing more than 15,000 workers by 2008 (Ribeiro, 2006). With its high growth potential, total industry employment was expected to reach 600,000 by 2007 (Pal & Buzzanell, 2008). The integration of call centers has important functions for many organizations that seek increasing contributions to productivity and rationalization. The primary precondition for a functioning call center is integrated information systems, which promote short handling times as well as the currency, quality, and usability of the dialogs (Bullinger & Ziegler, 1999).
According to Robinson and Morley (2007), there are two conflicting perspectives concerning call centers today. These authors report that, “Promoters of call centers present an extremely positive view of call centers to the various stakeholders. For prospective staff, call centers are promoted as exciting places to work where teamwork is encouraged. For business, call centers are promoted as the gateway to improved customer service whilst driving down unit labour costs through economies of scale and better management practices” (p. 249). By contrast, the opposing perspective is that, “Call centers are places where electronic surveillance has reached the level of sophistication whereby the call center supervisor’s power has been rendered almost perfect” (Robinson & Morley, 2007, p. 250). Likewise, Kossek and Lambert (2005) point out that, “Call centers have been alternatively characterized as the ‘dark Satanic mills’ of the New Economy and as a setting for a variety of approaches to the organization of work. In either case, the point is that the organizing logic of the workplace is neither dictated by the environment nor fixed by design; rather, technologies are deployed by managers (and this deployment may be contested by workers)” (p. 69).
The negative aspects of call centers are due in some part to the pervasiveness of performance measures and the “fishbowl” qualities characteristic of many call centers today. For instance, Robinson and Morley (2007) point out that, “Many call centers have developed a sophisticated array of electronic monitoring where almost every variable in a call center’s operation can be measured and directly monitored” (p. 250). These measurement and monitoring activities can be conducted at a macro level involving the entire center as a distinct unit, at a micro level down to individual operators within the call center, or at any level between these extremes. Variables such as calls taken, call wait time, call abandonment rates, call talk times, call wrap-up times, and many others can be measured. Individual calls can be monitored (listened in on) with or without the knowledge of the call center employee. Measurement and monitoring are designed to achieve behavioral outcomes whether or not the particular method of measurement or monitoring is active at any given time (Robinson & Morley, 2007, p. 250).
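The variables described above are straightforward to compute from call logs, which helps explain how pervasive such monitoring has become. The following is a minimal illustrative sketch, not drawn from any of the cited studies; the record fields and metric names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    agent_id: str
    wait_secs: float      # time the caller spent queued
    talk_secs: float      # agent talk time
    wrap_up_secs: float   # after-call work time
    abandoned: bool       # caller hung up before reaching an agent

def center_metrics(calls: list[CallRecord]) -> dict:
    """Macro-level metrics for the center as a whole."""
    answered = [c for c in calls if not c.abandoned]
    return {
        "calls_offered": len(calls),
        "calls_taken": len(answered),
        "abandonment_rate": sum(c.abandoned for c in calls) / len(calls),
        "avg_wait_secs": sum(c.wait_secs for c in calls) / len(calls),
        "avg_talk_secs": sum(c.talk_secs for c in answered) / len(answered),
        "avg_wrap_up_secs": sum(c.wrap_up_secs for c in answered) / len(answered),
    }

calls = [
    CallRecord("a1", 12.0, 180.0, 30.0, False),
    CallRecord("a2", 45.0, 0.0, 0.0, True),
    CallRecord("a1", 20.0, 240.0, 20.0, False),
]
m = center_metrics(calls)
print(m["calls_taken"], round(m["abandonment_rate"], 2))  # 2 0.33
```

The same aggregation, grouped by `agent_id` instead of computed over the whole list, yields the micro-level monitoring of individual operators that the literature describes.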
Many call centers have deployed sophisticated technology to automate the greeting and delivery of the call, integrate computer records with the caller’s identification, and provide caller history / demographics regarding the customer. All this has been deployed to improve the representative’s access to customer- and product/service-specific information, increase revenues, reduce costs, and improve customer handling times (Hillmer et al., 2004). According to Bernett, Masi and Fischer (2006), the use of call centers is particularly important for e-businesses. Although e-commerce and online customer service continue to grow, many customers remain reluctant to complete a Web purchase without first talking to a live agent. Surveys indicate that between 25 and 75% of online shoppers abandon their shopping carts before completing a purchase, primarily because the website lacks real-time customer service to answer questions or resolve problems. Thus, it’s not surprising that a Purdue University survey of call center managers showed that the most significant initiative started in 2000 was website integration with their call center. The idea is to take advantage of agent pools who can assist Web customers, by installing a “Customer Service” button on a Web page that visitors can press to request assistance. Some studies have shown that corporations will spend more to upgrade their online customer-service capabilities than on any other information-technology effort in the next two years, with the bulk of that investment going into front-end call-center systems, back-end Customer Relationship Management (CRM) applications and website enhancements. (Bernett et al., 2006).
On-line customer service is provided by call center agents who communicate with Web visitors via email, agent callback, text chat, and/or Internet telephony. These tools, however, are not equally or ubiquitously available: most Web-enabled call centers can handle email and provide callbacks, and the deployment of text chat is rapidly increasing, but Internet telephony is just beginning to appear in pilots and trials. Email, currently the most utilized Web-integration technology, is implemented by putting a “Send Questions” or “Contact Us” button on the Web page. When clicked, a form is displayed; the visitor fills it out and sends it back to the call center, where it is answered either by an automated process or routed to an agent. Email, however, is not a real-time interactive process, and response times can average between 10 and 50 hours depending on the industry segment. (Bernett et al., 2006).
In agent callback, a Web visitor fills out an online form requesting an agent to call back on a certain phone number and at a particular time. An agent then places a normal telephone call to the customer through the Public Switched Telephone Network (PSTN). This, however, requires that customers have a separate phone line to accept the callback while maintaining their Internet connection; otherwise, the customer must disconnect from the Web session and wait for the agent to call back without the benefit of concurrently viewing the website. (Bernett et al., 2006).
With text chat, the agent and caller exchange typed messages, which is very similar to a chat-room environment or instant messaging. When a Web visitor selects a “Chat” button, a form is typically displayed requesting pertinent information. The information, along with the chat request, is then sent to the call center where it is analyzed to determine the best agent to handle the request. A Java applet is sent to the visitor’s browser to set up the chat window, or the chat can be performed in a pure HTML page. (Bernett et al., 2006).
Internet telephony requires that the caller have a multimedia PC that includes a speaker, microphone and an Internet phone client, such as Microsoft’s NetMeeting, that accepts Voice over Internet Protocol (VOIP) calls. When a Web visitor clicks on a “Talk to Agent” button, a form is displayed requesting information about the call and about the caller’s modem and software. The completed form is then assessed to properly set up the call and to select the best agent to handle the request. With text chat and Internet telephony, an agent can actively push Web pages to the caller; with some products the caller also can push Web pages to the agent. This process of pushing pages, called “Web collaboration,” is a key capability of providing online customer service. (Bernett et al., 2006).
There are two primary technology alternatives that a call center can implement for Web integration. The first enables a traditional call center that has circuit-switched-based systems to support online customer service. The second is the implementation of an all-IP call-center infrastructure. Most established call centers will enhance their existing traditional infrastructure to enable integration with the Web. This typically means providing bandwidth access to the Internet, installing an Internet call manager application, adding software to the existing ACD systems, CTI applications and agent stations, and connecting a VOIP gateway to the ACD. The Internet call manager provides the call-control function between the Web callers and the existing call center systems. The VOIP gateway converts VOIP calls from the Internet into time-division-multiplex (TDM) format and routes them to an ACD, where they are queued for an agent. (Bernett et al., 2006).
With an IP-based call center, the architecture is primarily a software solution that uses standards-based computer hardware, the TCP/IP protocol, and WAN/LAN infrastructures. In this type of call center, an ACD server replaces the telephony-based ACD. The server performs the functions of a traditional ACD, but it does not physically switch calls; instead, IP packets are addressed to the appropriate terminating device. The ACD server also provides universal queue control across all the contact channels and media types, and enables centralized administration and management reporting capabilities. (Bernett et al., 2006).
A call manager process is added, which establishes the connection to the caller and communicates with the ACD server to identify the first available agent who has the appropriate skills to serve the customer. Based on the instructions from the ACD server, the call manager connects the call to the assigned agent; since the call is not switched through the ACD server, the call manager creates the link between the caller and the agent. To allow the call center to handle traditional phone calls, a PSTN gateway is utilized that converts TDM calls from the PSTN to an IP format. Since a PSTN caller does not go through the website, this gateway also performs the functions of an Interactive Voice Response (IVR) system to request information from the caller. The IVR responses, as well as any other call-based information, such as DNIS and ANI, are passed to the ACD server to determine which agent should handle the call. (Bernett et al., 2006).
In mid-1999, Mitretek Systems, a not-for-profit public interest corporation, performed a market survey of Web-enabled and IP-based call-center technologies, which found that while a number of vendors had software in early stages of release, there were no enterprise-capable products or VOIP implementations for mid-size or large call centers. Based on the results of the survey, and as part of its internal research and development program, Mitretek established a Call Center Laboratory to investigate and evaluate IP-based, Web-enabled call-center technologies. The laboratory utilizes vendor-provided software running on Mitretek servers and desktop computers (Bernett et al., 2006).
The primary mission of the laboratory is to evaluate the features and technology of Web-enabled products in order to determine the maturity of the offerings as they relate to the call-center market; the laboratory does not typically perform product-versus-product comparisons. The results of the evaluations are documented in published papers and live demonstrations are presented at conferences, such as Next Generation Networks and VoiceCon. The laboratory also provides a neutral environment where government and corporate call-center users can operate and assess the vendors’ products without the pressure of a sales and marketing setting (Bernett et al., 2006).
During 2000, the Call Center Laboratory received first-generation products that concurrently supported email, callback, chat, Web collaboration, and VOIP. Although vendors were already shipping these products, there were very few actual cases of Web-enabled call centers. Indeed, a survey performed in 2000 by Utenberg Towbin found that only 10% of surveyed e-commerce websites offered text chat, only 1% had agent callback, and none had VOIP. Today, Web-enabled call-center deployments have significantly increased. In a Forrester Research survey of 50 call-center managers performed in 2001, 70% said that a Web-based call-center strategy was critical to their companies, and 26% had implemented Web call-center applications. To meet this demand, the vendors of traditional telephony-based products are providing Web-integration feature upgrades and delivering IP line- and trunk-side interfaces for their products. (Bernett et al., 2006).
Increasingly, new-generation, IP-based call-center suites are becoming available from established vendors as well as from recently formed companies. These all-IP products include most of the features and functions found in traditional call center systems, and provide capabilities that were either very expensive, difficult to implement, or unobtainable with traditional TDM-based products. Some of these include:
1. The call center and agent locations can be independent — an agent can be placed anywhere in the world and be connected to the call center through the Internet.
2. The agent desktop can support all media types — there is no need for a separate telephone set and computer.
3. Straightforward integration with external and back-end applications (elimination of CTI) using Java, ActiveX, APIs, ODBC, etc. (Bernett et al., 2006).
Accompanying the increased technological and skill requirements of today’s call centers, management of the typical center is highly structured, with close surveillance and work controls of the CSR population. Work times are precisely managed, with breaks and meals carefully scheduled. Frequently, the pace of the job is extremely fast, with little time between calls. In many call centers, the agents may deal with upset, angry, or frustrated individuals and may have to endure verbal abuse without reacting negatively. Often, the flexibility to respond to customers based upon their own judgment or discretion is severely limited. All of these factors combine to create a highly structured and stressful work environment, resulting in turnover ratios in the industry frequently as high as 60% to 80% annually (Hillmer et al., 2004).
Many call centers find this highly structured environment incompatible with the needs and desires of much of the candidate pool. Many of these individuals are looking for environments that value their independence, commitment, and creativity. They appreciate the autonomy to make decisions and exercise their discretion and judgment, and are put off by rigid rules, schedules, and measurements. Creativity, innovation, and commitment are the characteristics required to deal with the more complex and challenging demands of today’s organizations. Recent research supports this conclusion, finding that call centers that employ HR practices that take advantage of employees’ skills and ideas and involve them in decision making have lower turnover rates and better financial outcomes (Batt, 2002).
In more recent years, many call center operations have been augmented with computer-based information systems. Once considered a potential replacement for call centers, the Internet has actually increased the need for “real-time” sales and service support as found in the call center. E-commerce and web sites have enabled additional channels into the organization, channels that also terminate in the call center. This is particularly true as call centers increasingly shift from outbound telemarketing to in-bound status with the implementation of the nationwide “Do-Not-Call” list in the fall of 2003 (Richtel, 2003).
On-line businesses are increasing the use of call centers to serve as their support and fulfillment mechanism. This avenue is available 24/7, thus posing another challenge for call centers that operate in more traditional daily timeframes. In addition to contributing to an increased number of call centers, the rapid expansion of the Internet has created a need for customer service representatives (CSRs) with a wider range of skill sets and an overall higher level of skill. A pleasant and friendly manner is no longer sufficient: CSRs are increasingly expected to have excellent writing skills to respond effectively to email and web inquiries, and the technical know-how and wide range of product knowledge required of representatives is escalating rapidly (Hillmer et al., 2004).
Call centers, by design, are the primary point of entry for customers. With this mission-critical role, enormous demands are placed on CSRs and their management. CSRs must handle elevated customer expectations, understand complex products and services, explain creative pricing strategies, navigate sophisticated technology, operate within regulatory limitations, and meet or exceed challenging individual performance expectations for variables such as talk time and sales quota (Hillmer et al., 2004).
The negative characterization of call centers is described further by Robinson and Morley: “Whilst the industry is still in its relative infancy it has already attracted much criticism in respect to what has been described as the new age sweatshop and slave galleons of the twenty first century” (p. 70). Not surprisingly, agent burnout is a common problem in many call centers (Griffin, 2002). Likewise, Bullinger and Ziegler (1999) emphasize that, “call centers accomplish simple communicative tasks like hotline and information services, ticket-booking, reception of orders, sale of goods etc. The integration of computer and telephony in call centers offers the best possibilities for a rapid and efficient execution of these tasks. But in the same time the working activities of the agents are characterized by an extreme division of labor, by automatic distribution of calls and by a technical performance control. The discrepancies between the competencies of the agents and their possibilities to apply these competencies may provoke a dequalification and a loss of motivation. If the agent has to realize simple activities repeatedly, and additionally, under conditions like time pressure, shift-work and without breaks, the risk of psychic complaints will rise” (p. 1321).
Despite these criticisms, employment in call centers continues to increase and their use by major corporations is on the rise because of the inherent advantages that accrue to their use (Wiley & Legge, 2006). For instance, by funneling phone calls to representatives armed with the appropriate information, it is possible to adopt employee self-service as a corporate strategy, but also offer human assistance, when necessary (Greengard, 1999). Today, computer telephony integration (CTI) can create a seamless and efficient way to provide accurate information, track cases, spot problems and provide a higher level of service. It can replace tedious manual processes with a high level of automation. In fact, in recent years, call center technology has matured into a highly sophisticated solution (Greengard, 1999).
Nevertheless, attracting and retaining top talent within call centers is a challenge, particularly given the large number of employees required and the industry’s limited career advancement opportunities. Added to the challenge is the fact that engaging and retaining a large part-time population of any kind has proven difficult (Wiley & Legge, 2006). Attrition is expensive and detrimental to any organization: an estimate of the cost of turnover for an employee at a call center is as much as one year’s salary (Hillmer et al., 2004). Research shows, however, that the higher the level of employee engagement, the less likely that the employee will look for a new job. Because call centers generally experience high levels of turnover, even small improvements in employee engagement can produce substantial improvements in the bottom line (Brooks, Wiley & Hause, 2006). Managing a call center staff usually involves multiple shifts, part-time staffing, and high turnover rates. According to U.S. norms from the WorkTrends survey, only 52% of call center employees intend to stay with their organizations, compared to 61% of WorkTrends employees overall. Job satisfaction, quality issues, and feelings of accomplishment are also much lower in call centers (Wiley & Legge, 2006). There are some steps that employers can take, though, to help minimize employee turnover at call centers, as shown in Table __ below.
Table
Call center employee training needs

Training need: Recognize the dangers of poor writing
Description/rationale: In addition to being a risk to a corporation’s image, poor writing can also be a liability risk. Misstating company policies and the like in a customer e-mail, chat text, and so on can have severe legal consequences for a company.

Training need: Train all front liners to write effectively
Description/rationale: Until relatively recently, a firm’s “official” letter writing was reserved for management. Now frontline agents are expected to respond. In today’s call center, every frontline person needs effective writing skills. Attempting to reserve e-mail writing only for agents who are already good writers does not work in the long run; a company’s system will be plagued with bottlenecks. Arming all front liners with adequate writing skills is the wiser approach.

Training need: Revise corporate style guides
Description/rationale: Many corporations already have a style guide illustrating acceptable formats for internal memos, customer proposals, and so on. The style guide should now be updated to include professional guidelines for composing customer e-mail and the like.

Training need: Make a writing test part of the hiring process
Description/rationale: Test a prospective agent’s writing skills. For example, devise a writing exercise in which the prospective agent writes an e-mail message in response to a simulated e-mail from a customer. Evaluate the sample for thought process, clarity, and grammar.

Training need: Build boilerplate text that is easy to customize
Description/rationale: Perhaps the only thing worse than an e-mail full of ambiguous sentences and grammatical errors is a canned response that fails to answer the customer’s real question in the first place. Make sure suggested response templates and prescripted chat phrases allow appropriate customizing. Keeping call center reps focused and productive in a multichannel environment requires more than effective writing skills; attention to multichannel skills as well as morale and culture issues is also critical.

Training need: Train agents to excel in multiple channels
Description/rationale: Agents working in a multichannel contact center have to switch gears from channel to channel effectively and possess a good grasp of the Web, in particular the nuances of their own company site. This is why coaching call center agents on how to efficiently access resources from the Cisco website is a key training focus for Cisco’s frontline employees. For example, Cisco’s Customer Response Center’s escalation team uses the Cisco Collaboration product, which provides Web-sharing functionality, to enable an escalation agent to help a frontline agent remotely navigate to a specific contact on the Cisco site.

Training need: Keep an agent’s workload balanced
Description/rationale: Agent burnout is a common problem in many call centers. To help combat agent overload, many companies are using departmental or workgroup e-mail addresses rather than individual ones. This allows a dedicated group of representatives to monitor inbound activity and balance its distribution.

Training need: Have agents contribute to online knowledge bases
Description/rationale: A knowledge base (also known as a knowledge portal) is a single point of access to multiple information sources. Because of their customer contact and direct experience, frontline agents should be key contributors to the knowledge base. Pharmaceutical giant GlaxoSmithKline answers ten thousand customer queries per month, primarily by phone and e-mail, with only thirteen agents; an efficient online search index and file folder system simplifies information retrieval for the company’s agents.

Training need: Carefully measure an agent’s performance
Description/rationale: Self-serve websites and knowledge bases are designed to let customers resolve for themselves questions that have simple answers. That is why building a knowledge base with the sole intention of reducing agent talk time can backfire; in fact, knowledge management can actually cause talk time between customer and agent to rise. Yet many companies discourage the agent from spending adequate time with a customer by rewarding agents who have the shortest call cycle. A customer survey that measures the agent’s performance (Was the issue fully resolved for the customer? Was the customer pleased with the outcome?) is a better gauge. Measuring the rate at which an agent refers questions to others is another meaningful metric; companies want the agent answering as many questions without referral as possible.
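The referral metric just described can be computed directly from contact logs. The sketch below is illustrative only; the log fields and agent names are hypothetical, not taken from any cited source:

```python
def referral_rate(contacts: list[dict]) -> dict[str, float]:
    """Share of each agent's contacts referred onward rather than resolved.

    Each contact is a dict with an 'agent' name and a 'referred' flag;
    a lower rate means the agent answers more questions without referral.
    """
    totals: dict[str, int] = {}
    referred: dict[str, int] = {}
    for c in contacts:
        totals[c["agent"]] = totals.get(c["agent"], 0) + 1
        referred[c["agent"]] = referred.get(c["agent"], 0) + int(c["referred"])
    return {a: referred[a] / totals[a] for a in totals}

log = [
    {"agent": "a1", "referred": False},
    {"agent": "a1", "referred": True},
    {"agent": "a2", "referred": False},
]
print(referral_rate(log))  # {'a1': 0.5, 'a2': 0.0}
```

As the literature cautions, such a metric should complement, not replace, customer-survey measures of whether the issue was actually resolved.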
As noted in Table __ above, some of the steps that companies can take to help improve the operation of their call centers, such as measuring an agent’s performance through the use of customer surveys and having call center agents contribute to a knowledge base, involve crowdsourcing, even if it is not referred to in that context.
Besides the person-to-job fit and training considerations outlined in Table __ above, another step employers can take to reduce inordinately high levels of attrition in call centers is to make the working environment conducive to the type of work being performed. For instance, in their study, “Redesigning computer call center work: A longitudinal field experiment,” Workman and Bommer (2004) provide a relevant problem statement outlining how the simultaneous demand for technical and customer service skills places strain on call center employees and frequently leads to poor employee job attitudes. The article reports a quantitative experimental study that utilized a field design with randomly assigned pretest-posttest and control groups to compare three interventions’ effectiveness on employee job attitudes in a computer technology call center. The following hypotheses were clearly stated in the article:
1. Alignment job design will increase employee job satisfaction.
2. Alignment job design will increase employee commitment.
3. High involvement work process will increase employee job satisfaction.
4. High involvement work process will increase employee commitment.
5. Autonomous work team will increase employee job satisfaction.
6. Autonomous work team will increase employee commitment.
The purpose statement and hypotheses fit the experimental research design well. The researchers used randomly assigned subjects for both the experimental and the control group and administered the pretest and posttest to each, while administering the intervention to only one of the groups. This design aligns with the positivist tradition; that is, it is evident that the authors viewed technology call centers as independent and measurable when developing the hypotheses and purpose statement listed above.
The authors identified possible cross-group contamination as a threat to validity, and noted that the short interval (six months) between pretest and posttest may not have allowed enough time to fully assess group and novelty effects. In the area of external validity, the authors raised the question for further research as to whether the study could be generalized to call centers other than computer technology centers.
Another article dealing with this topic, “The application of knowledge management (KM) in call centers,” by Koh and Gunasekaran (2005), evaluates the need for knowledge management in a help desk for improving the level of customer service by addressing the issues surrounding information KM. The following research questions were derived from the article:
1. Is it useful to know whether a formal KM effort would improve the quality of customer service in a call center, and at what price?
2. Can KM be achieved by effectively managing the five roles of knowledge; that is, knowledge acquisition, utilization, adaptation, distribution, and generation?
Evidence-based management is the practice of using research to acquire evidence (facts) concerning a business situation or problem for the purpose of making the best decision on how to resolve the concern or develop the soundest principles for the issue. Evidence-based research is almost always used to gather the facts surrounding the problem. That is exactly what Biggs and Swailes did in their study.
Another relevant article is “The role of knowledge repositories in technical support environments: Speed vs. learning in user performance” (Gray & Durcikova, 2006), which details a quantitative investigation of why technical support analysts prefer specific sources of information over others. In particular, technical support analysts chose among their colleagues, official company documents, and solutions available in technical support knowledge repositories. The authors theorize that technical analysts with a stronger learning orientation would engage in higher levels of knowledge sourcing by seeking knowledge directly from their colleagues, official company documents, and technical knowledge repositories. Additionally, the authors presume that technical analysts who face higher perceived intellectual demands or higher levels of work-related time pressure, as well as analysts who are risk averse, would all engage in more knowledge-sourcing behavior; consequently, they too would source more knowledge from all three knowledge sources identified earlier. The authors developed a cross-sectional survey to measure how the subjects’ learning orientation, intellectual demands, risk aversion, and reaction to time pressure would affect their preference for sourcing specific information. The results were mostly in line with what knowledge sourcing theory would predict when it came to sourcing knowledge from colleagues. One notable exception occurred when time pressure was introduced into the equation: when analysts were under time pressure, they did not consult their colleagues for information. There were also some noted exceptions when it came to sourcing knowledge from company documents and repositories. For example, neither time pressure nor risk aversion predicted sourcing from company manuals. On the other hand, risk aversion and intellectual demand (as theorized) both significantly predicted sourcing from repositories, one positively and one negatively.
Potential Applications of Crowdsourcing Techniques for Soliciting Feedback from Customers
“One of the sure signs of a bad or declining relationship is the absence of complaints from the customer,” says Harvard Professor Theodore Levitt, writing in the Harvard Business Review. “Nobody is ever that satisfied, especially not over an extended period of time. The customer either is not being candid or is not being contacted.” If companies are not receiving complaints from customers, something is wrong. Don’t be fooled into thinking there are no unhappy customers. Rather than complaining, customers are probably leaving or, at best, reducing the amount of business they do with you. Moreover, the “iceberg effect” is alive and well when it comes to complaints. According to the Consumer Affairs Department, if one customer complains to a business, there are usually twenty-five additional customers with the same complaint who haven’t been heard from. Therefore, one of the most profitable activities a business can engage in is to seek out customer complaints, making it easy for the customer to give feedback. Ask customers regularly about their most recent purchase. Did it meet their needs? Was it what they expected? How could it be improved? (Griffin, 2002, p. 180). There are other relatively cost-effective methods available for implementing crowdsourcing techniques for soliciting feedback from customers concerning a company’s products and services, as well as the quality of their experiences with a call center, which are described further in Table __ below.
Table
Techniques for facilitating consumer feedback
Technique
Description
Surveys
Whether in writing, face-to-face, or by phone, a survey can be an excellent way to get customer feedback. Each year, Whirlpool mails its Standardized Appliance Measurement Survey (SAMS) to 180,000 households, requesting that people rate all their appliances on a variety of attributes. If consumers rank a competitor’s products higher, Whirlpool engineers go to work (literally ripping the competitor’s product apart) to understand why. The Web offers exciting new tools for surveying customers. For example, real-time surveys that pop up online following a specific customer transaction can quickly produce valuable customer feedback.
Order forms
American Supply International gets customer feedback from a comment section incorporated directly into its order form. The mail-order company, located in Bryans Road, Maryland, helps overseas Americans find hard-to-get U.S. products. Among its biggest sellers are 9 Lives Cat Food and canned chili. The comment form, according to cofounder Steve Reed, has given the company new service ideas and contributed to an impressive 85% customer retention rate.
Newsletters
Printing letters from readers motivates customer feedback through a newsletter. That’s the word from Paul de Benedictis, communications director for Opcode Systems. Using newsletters has enabled the company to create a more personal, one-to-one rapport with its more than thirty thousand users.
Focus groups
When Tyler Phillips founded Partnership Groups, a child care and elder care referral service in Lansing, Pennsylvania, he created an information kit to explain the range of his company’s services. Phillips sold the kits to corporations and counted on his corporate clients to promote them to their employees. But when clients’ employees were interviewed in focus groups, they said they wanted their questions answered by a person, not just a kit. That was all Phillips needed to hear to shift his company away from the kits and into more of a consulting service. Customers were given unlimited access to Partnership staff in getting answers to child care and elder care referral questions. Thanks to this increased interaction with employees, new options, such as “FirstNest” for infant care, were soon created. Ten years after focus groups helped redirect the company’s service offerings, Partnership Group reports that the majority of its 109 corporate contracts are for three years and that the company is profitable, with sales of $9 million.
User groups and advisory boards
SunWave Manufacturing of Leander, Texas, a maker of portable spas, uses a customer advisory board to stay in touch with customer needs. The advisory council is composed of SunWave spa dealers, who serve as a voice for other dealers in their region. SunWave coordinates these meetings to coincide with industry events and trade shows, thereby reducing cost.
Voice mail
The Beef Box is an electronic mailbox that Homes and Land Publishing of Tallahassee uses to get feedback from its franchisees, which publish magazines containing real estate listings for a specific region. “Anything they want senior management to hear” is the way Ron Sauls, executive vice president, describes the comments or complaints franchisees call in with on the Beef Box. Sauls’s assistant transcribes the voice mail messages and then passes them along to company staffers for quick follow-up.
Chat rooms and message boards
An online community offers a company a real opportunity to gather customer feedback. Monitoring chat rooms, message boards, and the like on your own community site as well as other industry sites your customers frequently visit yields invaluable insight on customer opinions, problems, needs, and wants.
Source: Griffin, 2002, p. 183
Once a company is in receipt of feedback from a customer, it is important to act quickly. If a consumer calls with a complaint, companies must respond immediately, preferably by fixing the problem, but at least by affirming their intention to fix the problem as quickly as possible. If consumers are required to contact a company more than once with a problem, they will be much more likely to be dissatisfied, even if the second call results in a fix. A TARP (Technical Assistance Research Programs) study conducted among the 800-number customers of 460 companies found that the number of customers reporting complete satisfaction after one call was dramatically higher than when two or more phone calls were made as shown in Figure __ below.
Figure __. Customer Satisfaction Comparison, One Call vs. Two
Source: Based on graph in Griffin, 2002 at p. 183
The TARP study reinforces another 800-number study, which revealed that customer dissatisfaction does not increase linearly; after the first period of delay, a customer’s dissatisfaction appears to increase sharply (Griffin, 2002). Clearly, call center customers value certain outcomes, and organizations that are able to provide these desired outcomes obtain a competitive advantage. Generally speaking, customers want the following during their contacts with call centers:
1. Solutions to their problems;
2. Help in a timely fashion;
3. Efficient and effective issue resolution; and,
4. Clear commitments that are kept.
When a call center’s output fails to satisfy the wants of its customers, it faces additional cost consequences, such as the cost to rework a customer’s request, to appease or retain a dissatisfied customer, to replace a lost customer, and to neutralize the impact of dissatisfied customers on the general public’s perception of the business’ quality of service. Thus, the benchmark call center provides the help customers desire at the lowest total long-term cost to the organization. In the Hillmer et al. (2004) model, the intangible costs of turnover are derived by computing what it would cost to return to the same level of coverage and service as in the benchmark call center. This implies that when turnover occurs, experienced CSRs will provide coverage until a replacement is hired and trained and will make up for the reduced productivity and increased errors until the replacement becomes fully proficient. Centers that do not maintain the expected level of service will incur even greater long-term costs because of the negative impact on customers discussed previously. The method used to compute intangible costs is conservative because, if the call center does not spend the resources to maintain the same service as in the benchmark center, the ultimate cost to the organization will be even greater than that computed in the model. There are six categories of intangible costs associated with maintaining the level of service in the benchmark center during the time a replacement is hired, trained, and gains full proficiency:
1. The cost of lost productivity for a new CSR. The model assumes that a new agent will not be as skilled and thus will not be as productive as an experienced agent; however, the difference in lost productivity compared to that of an experienced agent will decrease in a linear manner until the new agent is fully proficient. The model allows for customizing both the proportion of initial lost productivity for a new agent and the time it takes for this difference to vanish.
2. The cost of rework for increased errors made by new agents. A new agent is expected not only to be less productive than an experienced agent, but also to make more errors. This cost reflects the time to rework the calls for which the error rate exceeds that of an experienced agent. The model allows customization in the initial proportion of calls handled by a new CSR requiring rework above that of an experienced agent and the time it takes for this difference to vanish. The model assumes that the rework difference decreases in a linear manner until it disappears.
3. The cost of increased supervision to coach the new agent. An important part of the supervisor’s role in the benchmark call center is coaching and providing feedback and help to agents. This includes identifying agents needing additional training on new products or services, as well as coaching and mentoring agents on how to deal effectively with customer requests. An inexperienced agent will not have encountered as many unusual circumstances, will not be as experienced in dealing with difficult customers, and will not be as confident in his or her abilities as an experienced agent; therefore, when an inexperienced agent begins working, he or she will require, on average, more coaching and mentoring than a typical experienced agent will. The model assumes that the amount of additional coaching time required will decrease in a linear fashion until the new agent becomes fully proficient. In order to compute the total number of extra hours of supervision required for a new hire, the average amount of time a supervisor spends coaching an experienced agent must be calculated. To perform this calculation, the model assumes the call center has staffed the correct number of supervisors to perform the level of coaching and mentoring required to handle the difficulty level of the work being performed by the agents; thus, if all the agents are experienced, each supervisor would spend one-fourth of his or her time (2 hours a day on average) coaching and mentoring.
4. The cost to pay an experienced agent to take over during the interim period after a vacancy and before a new agent begins work. This work will have to be done with agents working overtime because the call center is understaffed during this period. If the extra work is not handled through overtime, it has a negative effect on customers, such as reduced accessibility, longer hold times, and rushed interactions leading to errors. Because these events lead to increased long-term organizational costs, it is less costly to pay the overtime costs of an experienced agent to avoid these negative consequences.
5. The cost of lost productivity from stress on remaining call center agents after an agent departs. When turnover occurs, the call center will be understaffed because agents cannot be replaced immediately. In addition, the new agents will require greater attention from the call center supervisors. One outcome of this is a slight increase in the stress level of current agents because they will either need to work harder during regular hours (at an unsustainable pace) or will need to work overtime. The model allows for computing different effects upon the immediate work group of the departing agent and upon agents not included in the departing agent’s immediate work group. The immediate work group is the set of CSRs who share the same supervisor as the departing agent does. Normally the reduced productivity will be larger for the immediate work group than for other agents in the call center.
6. The cost of the reduced performance of an agent before he or she terminates employment with the call center. When an agent decides to leave a call center, on average, his/her productivity will decrease as a result of reduced motivation and the distractions arising from preparing to leave (Hillmer et al., 2004, p. 35).
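The linear-ramp assumptions underlying cost categories 1 and 2 above can be sketched numerically. This is a minimal illustration, not the actual Hillmer et al. (2004) computation: all parameter values (wage, ramp length, initial deficits) are hypothetical assumptions chosen for demonstration.

```python
# Illustrative sketch of intangible turnover-cost categories 1 and 2
# (lost productivity and extra rework), assuming the deficit starts at
# some initial fraction and declines linearly to zero over the ramp-up
# period. All parameter values below are hypothetical.

HOURLY_WAGE = 15.0                 # assumed fully loaded CSR wage ($/hour)
HOURS_PER_WEEK = 40
RAMP_WEEKS = 26                    # assumed weeks until full proficiency
INITIAL_LOST_PRODUCTIVITY = 0.50   # new agent starts 50% below experienced output
INITIAL_EXTRA_REWORK = 0.10        # 10% more calls initially require rework

def linear_ramp_cost(initial_fraction, ramp_weeks, hours_per_week, wage):
    """Cost of a deficit that starts at initial_fraction of output and
    declines linearly to zero over ramp_weeks (area of a triangle, so
    the average deficit is half the starting deficit)."""
    total_hours = ramp_weeks * hours_per_week
    average_deficit = initial_fraction / 2.0
    return total_hours * average_deficit * wage

lost_productivity = linear_ramp_cost(
    INITIAL_LOST_PRODUCTIVITY, RAMP_WEEKS, HOURS_PER_WEEK, HOURLY_WAGE)
rework = linear_ramp_cost(
    INITIAL_EXTRA_REWORK, RAMP_WEEKS, HOURS_PER_WEEK, HOURLY_WAGE)

print(f"Lost productivity cost: ${lost_productivity:,.2f}")
print(f"Extra rework cost:      ${rework:,.2f}")
```

Because both deficits are assumed to vanish linearly, each cost reduces to the triangle area (half the initial deficit times the ramp length); the remaining four categories would layer supervision, overtime, stress, and pre-departure effects on top of this base.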
Taken together, the foregoing makes it clear that organizations of all types and sizes that use call centers need to develop customer feedback techniques that can better engage call center employees while providing improved levels of customer service.
Chapter 3: Methodology
Participants
The participants will be call center managers, who will specify what they believe is important to effectively running their call centers. Additionally, customers’ value inputs will be analyzed.
Design
The study will employ survey research, applying questionnaires to the population using a between-participants approach; that is, the variation needed for the study comes from differences among participants over a specific period of time. This approach is congruent with the guidance provided by Grinnell and Unrau (2005), who note, “Surveys can be designed to achieve a variety of ends, but they all seek to collect data from many individuals in order to understand something about them as a whole. It is essential, therefore, that survey research procedures produce data that is accurate, reliable, and representative so that findings can be generalized from a sample to the larger population or to different research situations” (p. 272). The use of surveys in crowdsourcing applications to gauge the perceptions of call center customers is also highly congruent with the guidance provided by Griffin (2002), who emphasizes that surveys represent a highly effective approach to gathering customer feedback.
To accomplish this, the researcher will employ a quasi-experimental research design in an attempt to determine a correlation between the application of crowdsourcing techniques and increased efficiencies in call centers and their supported major business functional areas. The variables, according to Swanson & Holton (2005), are the phenomena which vary depending on the circumstances affecting them.
The dependent variables in this study are the effective application of specific crowdsourcing techniques; the independent variables are call center key performance indicators and customer value inputs.
Procedures
The survey development process will follow the guidance provided by Grinnell and Unrau (2005) as described in Table __ below.
Table
Initial Steps and Factors to be Considered in Survey Instrument Development
Steps in Survey Research
Major Tasks/Factors to be Considered
Planning
1. Definition of the research problem area;
2. Definition of research questions and/or hypotheses;
3. Operational definition of variables;
4. Development of the survey design.
Development and Application of Sampling Plan
1. Definition of the population;
2. Identification of subpopulations;
3. Detailed sampling procedures;
4. Selection of the sample.
Construction of Interview Schedule or Questionnaire
1. Development of questions or selection of measuring instrument;
2. Development of anticipated analysis procedures;
3. Pretest of instrument;
4. Revision of questions (as often and to the extent necessary).
Data Collection
1. Implementation of interviews, questionnaires, inventories, tests, or observation schedules;
2. Follow-ups;
3. Initial tabulation and coding.
Translation of Data
1. Construction of category systems as necessary;
2. Technical preparation of data for analysis.
Analysis
1. Separate analyses of questions, individually or in groups;
2. Synthesis, interpretation of results.
Conclusions, Reporting, Etc.
Source: Based on figure in Grinnell & Unrau at p. 273.
In addition, Proctor and Vu (2005) caution that while effective and reliable survey instruments are not necessarily complicated or difficult to develop, there are several factors that should be taken into consideration during the design phase to ensure that the survey collects the type of data desired and that the questions used do not elicit misleading or ambiguous responses. In this regard, Proctor and Vu provide some useful guidance concerning survey design that can be used as a guide for researchers wanting to develop a custom survey instrument; these principles are shown in Table __ below.
Table
Survey Design Principles
Design Principle
Description
Is the language simple?
Write the questions so they will be easily understood by the target users. For example, “use” instead of “utilize.” This is the case for both language and sentence structure.
Is the question clear?
Avoid using words that are ambiguous. Also, it is important to ask only one question at a time. If the item contains “and” or “or,” there is a good chance that the researcher has inadvertently asked more than one question.
Is it short?
Long sentences are more likely to contain complex phrases and sentence structure. Furthermore, long questions are sometimes difficult to follow and increase the workload on the respondent.
Is there any bias present in the question or the response choices?
Do not bias the users’ potential response by using leading language in the question. Do not introduce the user to new facts, mention only one side of a semantic differential scale, or lead users through your choice of response categories.
Does the question have the right level of specificity?
Response choices should not be so general that the user cannot possibly determine the answer; however, they should be specific enough to be useful for the study.
Is the question objectionable?
Each item should be reviewed for the possibility of either inappropriate tone or content. This is of particular concern when a survey is cross-cultural where the questions, sentence structure, and language may be perfectly acceptable in one culture but offensive in others.
Source: Proctor & Vu, 2005, p. 311.
Finally, De Vaus (1996) emphasizes that, “The process of focusing a research question requires a knowledge of the field, an understanding of previous research, an awareness of research gaps and knowledge of how other research in the area has been conducted” (p. 25). To help craft effective survey questions, De Vaus recommends that researchers take into account the issues described in Table __ below.
Table
Issues to Consider in Developing Valid Survey Questions
Issue
Key Points
Is the language simple?
Avoid jargon and technical terms. Look for simple words without making questions sound condescending. Use simple writing guides or a thesaurus to help. A question such as ‘Is your household run on matriarchal or patriarchal lines?’ will not do!
Can the question be shortened?
The shorter the question the less confusing and ambiguous it will be. Avoid questions such as: ‘Has it happened to you that over a long period of time, when you neither practised abstinence nor used birth control, you did not conceive?’
Is the question double-barrelled?
Double-barrelled questions are those which ask more than one question. The question ‘how often do you visit your parents?’ is double-barrelled. Separate questions about a person’s mother and father should be asked.
Is the question leading?
A leading question is one where either the question structure or wording pushes people to provide a response that they would not have given had the question been asked in a more neutral way. Questions such as ‘Do you oppose or favor cutting defense spending even if cuts turn the country over to communists?’ are obviously leading. Leading questions give respondents the impression that there is a ‘correct’ response. Avoid linking an attitude position, policy or whatever with a prestigious person. Avoid phrases such as ‘Do you agree that… ‘ or ‘Does this seem like a good idea to you?’ The particular terminology you use can be leading. Think of the different impact of the choice of words ‘abortion’, ‘killing unborn babies’ or ‘ending a pregnancy.’
Is the question negative?
Questions which use ‘not’ can be difficult to understand, especially when asking someone to indicate whether they agree or disagree. The following question could be confusing: Marijuana should not be decriminalized:
-Agree
-Disagree
Rewording the question to ‘Marijuana use should remain illegal’ avoids the confusion caused by using ‘not.’
Is the respondent likely to have the necessary knowledge?
When asking about certain issues it is important that respondents are likely to have knowledge about the issue. A question which asks ‘Do you agree or disagree with the government’s policy on legalizing drug injecting rooms?’ would be unsatisfactory. For issues where there is doubt, we might first ask a filter question to see if people are aware of the government’s policy on drug injecting rooms and then ask the substantive question only if people answered ‘yes’ to the filter question. Alternatively, we should offer the respondent the opportunity to say that they are not sure what the government’s policy is.
Will the words have the same meaning for everyone?
Depending on factors such as age group, subcultural group and region, the meaning of some words will vary, so care must be taken either to avoid such words or to make your meaning clear. People also vary in how they define certain terms. For example, the answers people give to a question that asks them if they have been a victim of a crime in the last five years will depend on what they include in their definition of crime. For example, despite its illegality, some people may exclude domestic violence from their definitions of crime, thus leading to its under-reporting.
Is there a prestige bias?
When an opinion is attached to the name of a prestigious person and the respondent is then asked to express their own view on the same matter, the question can suffer from prestige bias. That is, the prestige of the person who holds the view may influence the way respondents answer the question. For example, ‘What is your view about the Pope’s policy on birth control?’ could suffer from prestige bias. Effectively the question is double-barrelled: the answer may reflect an attitude about the Pope or about birth control; we cannot be sure which.
Is the question ambiguous?
Ambiguity can arise from poor sentence structure, using words with several different meanings, use of negatives and double negatives, and using double-barrelled questions. The best way to avoid ambiguity is to use short, crisp, simple questions.
Is the question too precise?
While we need to avoid questions which invite vague and highly imprecise responses we also need to avoid requiring answers that need more precision than people are likely to be able to provide reliably. Precise answers are not necessarily accurate answers. Asking for too precise an answer can produce unreliable responses and add nothing useful to the study. For example, asking people ‘How many times in the last year did any member of your household visit a doctor?’ may yield precise figures but they are likely to be both inaccurate and unreliable.
Is the frame of reference for the question sufficiently clear?
If you ask ‘How often do you see your mother?’, establish within what time frame: within the last year? The last month? If you mean the frequency within the last year, ask ‘Within the last year how often would you have seen your mother on average?’ and then provide alternatives such as ‘daily’ through to ‘never’ to help further specify the meaning of the question.
Does the question artificially create opinions?
On certain issues people will have no opinion. You should therefore offer people the option of responding ‘don’t know’ or ‘no opinion’. This can lead to some people giving these responses to most questions, which can create its own problems, but not including these alternatives will produce highly unreliable, and therefore useless, responses.
Is personal or impersonal wording preferable?
Personal wording asks respondents to indicate how ‘they’ feel about something, whereas the impersonal approach asks respondents to indicate how ‘people’ feel about something. The approach you use depends on what you want to do with the answers. The impersonal approach does not provide a measure of someone’s attitudes but rather the respondent’s perception of other people’s attitudes.
Is the question wording unnecessarily detailed or objectionable?
Questions about precise age or income can create problems. Since we normally do not need precise data on these issues we can diffuse this problem by asking people to put themselves in categories such as age or income groups.
Does the question have dangling alternatives?
A question such as ‘Would you say that it is frequently, sometimes, rarely or never that… ‘ is an awkward construction. The alternative answers are provided before the respondent has any subject matter to anchor them to. The subject matter should come before alternative answers are listed.
Does the question contain gratuitous qualifiers?
The italicized qualifiers in the following examples would clearly affect the way people answer the question; they effectively present an argument for a particular response: ‘Do you oppose or favor cutting defense expenditure even if it endangers our national security?’ and ‘Do you favor or oppose increasing the number of university places for students even if it leads to a decline in standards?’
Is the question a ‘dead giveaway’?
Absolute, all-inclusive, or exclusive words are normally best avoided. Examples of such ‘dead giveaway’ words are: all, always, each, every, everybody, never, nobody, none, nothing. Since these words allow no exceptions, few people will agree with a statement that includes them, and this in turn will result in low variance and poor question discrimination.
Source: De Vaus, 1996, pp. 98-99.
Following development of the questionnaires, they will be pilot tested with a small sample of experienced call center managers to assess the validity and reliability of the survey questions as a whole. Additionally, the validity and reliability of the surveys will be assessed qualitatively by interviewing a small number of the sample’s respondents to provide additional insight into the questionnaire answers.
I will send an introductory letter to the sample frame explaining the purpose of the survey. I will use a mail survey as the tool to deliver my questionnaire to the managers and customers. Fowler (2009) suggests that anything that can be done to make a mail questionnaire appear more professional, personalized, and/or attractive to potential respondents usually has a positive effect on response rates. Therefore, work will be done to make the questionnaire as attractive to the participants as possible; for example, the survey’s layout will be clear, easy to read, and easy to follow. Additionally, Fowler (2009) suggests the instrument be easy to complete. The questionnaire will use closed-ended questions with check-box or similar answers. My questionnaire will be self-administered and mailed to the sample frame.
The returned survey questionnaires will be converted into data files so they can be analyzed on a computer. Each respondent will receive a serial identifier to allow for organization and tracking. Data will be coded in the order it is presented in the questionnaire to allow for ease of coding, data entry, and programming tasks (Fowler, 2009). The data will be coded with numeric codes by answer; additionally, I will provide a missing-answer code to account for questions that are not answered. I do not plan to provide any monetary or other tangible form of motivation to respondents; however, I will provide them the results of the survey if they desire to see them. Again, any results provided will be sanitized to ensure confidentiality of all respondents’ identifiable information.
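The coding scheme just described can be illustrated with a short sketch. The question name, answer categories, and the value reserved for missing answers below are all hypothetical assumptions for demonstration, not the study's actual codebook.

```python
# Hypothetical sketch of the numeric coding scheme: each respondent
# receives a serial identifier, each closed-ended answer maps to a
# numeric code, and unanswered questions receive a missing-answer code.

MISSING_CODE = 9  # assumed code reserved for unanswered questions

# Assumed answer categories for one illustrative closed-ended question.
SATISFACTION_CODES = {
    "very satisfied": 1,
    "satisfied": 2,
    "dissatisfied": 3,
    "very dissatisfied": 4,
}

def code_response(serial_id, raw_answers, codebook):
    """Convert one returned questionnaire into a coded numeric record,
    substituting MISSING_CODE for blank or unrecognized answers."""
    record = {"serial_id": serial_id}
    for question, codes in codebook.items():
        answer = raw_answers.get(question)
        record[question] = codes.get(answer, MISSING_CODE) if answer else MISSING_CODE
    return record

codebook = {"q1_satisfaction": SATISFACTION_CODES}
record = code_response(101, {"q1_satisfaction": "satisfied"}, codebook)
print(record)  # {'serial_id': 101, 'q1_satisfaction': 2}
```

Coding every answer (including non-responses) to a numeric value in questionnaire order keeps the data entry uniform and makes the resulting file straightforward to load into a statistical package.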
The research will be theory testing and will be conducted as a quantitative methodology, testing hypotheses quantitatively and investigating and assessing them thoroughly in accordance with traditional research practices and procedures. Examining this topic through an interpretative lens would also add to the body of knowledge in this area in a productive manner by helping to understand the meaning of the situations. Examining call centers through observation and face-to-face communication would bring understanding of the meaning apparatus that individuals bring to, and develop from, a dynamic stream of events (Swanson & Holton, 2005).
Analysis of Data
The statistical procedure that will be used for this study is regression analysis. The response, or dependent, variable (organizational productivity) will be analyzed through regression testing to see the effect that the independent variables of (1) key performance indicators and (2) customer value inputs have on it. The initial step in the procedure will be to develop a scatterplot of the variables to see if there is any readily apparent relationship between them. According to Albright, Winston, and Zappe (2006), a scatterplot is an excellent way to determine if there is a relationship between variables. If a relationship is observed between organizational productivity and both independent variables, a multiple regression analysis will be performed to determine if a correlation exists. If a relationship is seen with only one of the variables, a simple regression will be performed between organizational productivity and the independent variable that shows a relationship. The statistical analysis will be completed using SPSS Version 11.0 for Windows (Student Version), and the results will be presented in tabular and graphic form, as well as being interpreted narratively.
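Although the analysis itself will be run in SPSS, the multiple regression step can be sketched with ordinary least squares. The data below are synthetic values generated purely for demonstration; the variable names (KPI score, customer value input, productivity) mirror the study's variables but carry no real results.

```python
# Sketch of the planned multiple regression via ordinary least squares.
# All data here are made up for illustration: x1 stands in for a key
# performance indicator, x2 for a customer value input, and y for
# organizational productivity.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.uniform(0, 10, n)              # hypothetical KPI scores
x2 = rng.uniform(0, 5, n)               # hypothetical customer value inputs
y = 2.0 + 0.8 * x1 + 1.5 * x2 + rng.normal(0, 0.5, n)  # synthetic outcome

# Design matrix with an intercept column, then the least-squares fit.
X = np.column_stack([np.ones(n), x1, x2])
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
intercept, b1, b2 = coef

# R^2 as a simple summary of fit quality.
y_hat = X @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"intercept={intercept:.2f}, b1={b1:.2f}, b2={b2:.2f}, R^2={r2:.2f}")
```

A scatterplot of y against each predictor (the planned first step) corresponds to inspecting x1 and x2 individually before deciding between simple and multiple regression; the fitted coefficients recover the relationships in the synthetic data because both predictors were built into y.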
Ethical Considerations
Research ethics are governed by the National Research Act of July 1974 (Swanson & Holton, 2005). The Act created a commission chartered to protect the interests of human subjects in research. The commission produced The Belmont Report, which defined practice and research so the boundaries of the two could be established (Swanson & Holton, 2005). According to Swanson and Holton (2005), the commission defined practice as interventions intended to improve the well-being of a patient or client, and research as activity designed to evaluate hypotheses and add to the generalized body of knowledge concerning a topic. Swanson and Holton (2005) also noted that The Belmont Report identified three principles that should guide research: (1) respect for persons, where persons are identified as autonomous individuals able to make independent decisions; (2) beneficence, which concerns the researcher’s obligation to protect human subjects; and (3) justice, which requires that parity be at hand in determining who will bear the burden of human subject research.
Creswell (2003) notes that the identification of the problem to be researched is one of the initial decisions that require ethical consideration; that is, the problem studied should benefit the individuals being studied. According to Creswell (2003), a pilot test is an excellent way to gain trust and respect from participants because the pilot test allows for the discovery of marginalization before the study is developed and conducted. Additionally, Creswell (2003) identifies ethical considerations in data collection during research. First, research plans for school projects must be reviewed by the school’s institutional review board (IRB). The IRB is chartered with upholding the established research standards. Second, an informed consent form (ICF) must be signed by both the researcher and the participants. The ICF should contain the following elements per Creswell (2003):
1. The right of the participants to participate voluntarily and to withdraw at any time.
2. The purpose of the study, clearly identified.
3. The procedures to be used in the study, clearly identified.
4. The right of the participants to ask questions and to obtain a copy of the results of the study.
5. Signatures of both the researcher and the participants, signifying that both agree to the terms of the research.
Creswell (2003) also wrote about ethical issues in data analysis and interpretation; specifically, he noted that researchers should consider how their study will protect the anonymity of the individuals in it. Finally, Creswell explained that data should be kept for 5 to 10 years, that ownership of the data should be clearly outlined, and that the proven accuracy of the information extracted from the data should be considered.
Chapter 4: Data Analysis
Chapter 5: Summary, Conclusions and Recommendations
References
The wisdom of the crowd resides in how the crowd is used. (2008, Winter). Nieman Reports, 62(4), 47-48.
Albright, S.C., Winston, W.L., & Zappe, C. (2006). Data analysis and decision making with Microsoft Excel (3rd ed.). Mason, OH: Thomson South-Western.
Batt, R. (2002). Managing customer services: Human resources practices, quit rates and sales growth. Academy of Management Journal, 45(3), 587-597.
Bedini, S.A. (1977). The Smithsonian experience. New York: W.W. Norton & Company.
Bernett, H.G., Masi, D.M. & Fischer, M.J. (2006, July). Web-enabled call centers — A progress report. Business Communications Review, 32(7), 38-39.
Biggs, D., & Swailes, S. (2006). Relations, commitment and satisfaction in agency workers and permanent workers. Employee Relations, 28, 130-143.
Brabham, D.C. (2008, June 2). Moving the crowd at iStockphoto: The composition of the crowd and motivations for participation in a crowdsourcing application. First Monday, 13(6), 3-4.
Brooks, S., Wiley, J.W., & Hause, E.L. (2006). Using employee and customer perspectives to improve organizational performance. In L. Fogli (Ed.), Customer service delivery: Research and best practices. Jossey-Bass.
Bullinger, H.J. & Ziegler, J. (1999). Human-computer interaction: Communication, cooperation, and application design. Mahwah, NJ: Lawrence Erlbaum Associates.
Condron, S. (2010). Crowdsourcing. Retrieved from http://crowdsourcing.typepad.com/.
Cooper, R.G. & Edgett, S.J. (2008). Maximizing productivity in product innovation. Research Technology Management, 51(2), 47-48.
Cox, D. (2009, October 22). The truth according to Wikipedia. The Evening Standard, 33.
Creswell, J.W. (2003). Research design: Qualitative, quantitative and mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage Publications.
Yahr, E. (2007, October-November). Crowded house: News organizations turn to crowdsourcing to get readers more involved in the newsgathering process. American Journalism Review, 29(5), 8+.
Weiler, L. (2009, Fall). Culture hacker. Filmmaker, 18(1), 18+.
Dawson, K. (2006). ACCE/Special preview: The state of the call center industry. Retrieved February 11, 2009 from http://www.callcentermagazine.com/shared/article/showArticle.jhtml?articleId=192202464.
de Castella, T. (2010, July 5). Should we trust the wisdom of crowds? BBC News Magazine. Retrieved from http://news.bbc.co.uk/2/hi/uk_news/magazine/8788780.stm.
De Vaus, D. (1996). Surveys in social research. London: UCL Press.
Wiley, J.W., & Legge, M. (2006). Disciplined action planning drives employee engagement. Human Resource Planning, 29(4), 8+.
Doan, A. (2008, January). A closer look: Film riot. PM Network, 22(1), 46-47.
Thompson, J. (2008). Don’t be afraid to explore Web 2.0. Phi Delta Kappan, 89(10), 711+.
Quart, A. (2007, January/February). For love or money. Mother Jones, 32(1), 73+.
Fowler, F.J. (2009). Survey research methods (4th ed.). Thousand Oaks, CA: Sage Publications, Inc.
Gray, P.H., & Durcikova, A. (2006). The role of knowledge repositories in technical support environments: Speed vs. learning in user performance. Journal of Management Information Systems, 22(3), 159-190.
Greengard, S. (1999, June). HR call centers: A smart business strategy. Workforce, 78(6), 116-
Grinnell, R.M. Jr. & Unrau, Y.A. (2005). Social work research and evaluation: Quantitative and qualitative approaches. New York: Oxford University Press.
Hawkins, D.T. (2007, November). Trends, tactics, and truth in the information industry. Information Today, 24(10), 33-34.
Hillmer, S., Hillmer, B., & McRoberts, G. (2004). The real costs of turnover: Lessons from a call center. Human Resource Planning, 27(3), 34-35.
Howe, J. (2006, June). The rise of crowdsourcing. Wired, 37.
Lakhani, K.R., Jeppesen, L.B., Lohse, P.A., & Panetta, J.A. (2007). The value of openness in scientific problem solving (Harvard Business School Working Paper No. 07-050). Retrieved April 6, 2008 from http://www.hbs.edu/research/pdf/07-050.pdf.
Kaufman, W. (2008, August 20). Crowd sourcing turns business on its head. National Public Radio. Retrieved from http://www.npr.org/templates/story/story.php?storyId=93495217.
Kim, H. (2008, November 13). Working for nothing is popular. The Journal, 28.
Koh, S.C.L., Gunasekaran, A., Thomas, A., & Arunachalam, S. (2005). The application of knowledge management in call centers. Journal of Knowledge Management, 9(4), 56-69.
Kossek, E.E. & Lambert, S.J. (2005). Work and life integration: Organizational, cultural, and individual perspectives. Mahwah, NJ: Lawrence Erlbaum Associates.
Kumar, S. & Kopitzke, K.K. (2008). A practitioner’s decision model for the total cost of outsourcing and application to China, Mexico and the United States. Journal of Business Logistics, 29(2), 107-108.
McCluskey, T., & Korobow, A. (2009). Leveraging networks and social software for mission success: Web 2.0 tools help dynamically assess contributions, grasp organizational sentiment, and identify key human capital assets. The Public Manager, 38(2), 66+.
Li, C., & Bernoff, J. (2008). Groundswell: Winning in a world transformed by social technologies. Boston: Harvard Business Press.
Mirchandani, K. (2004). Practices of global capital: Gaps, cracks and ironies in transnational call centers in India. Global Networks, 4, 355-374.
O’Reilly, T. (2005, September 30). What is Web 2.0? O’Reilly Media. Retrieved from http://oreilly.com/web2/archive/what-is-web-20.html.
Ong, E.D. (2007, November 6). Yahoo! paving ad platform for next generation netizens. Manila Bulletin.
Proctor, R.W. & Vu, K.P. (2005). Handbook of human factors in Web design. Mahwah, NJ: Lawrence Erlbaum Associates.
Ribeiro, J. (2006, February 6). Dell to add 5000 call center workers in India. Computerworld, p. 15.
Robinson, G., & Morley, C. (2007). Running the electronic sweatshop: Call center managers’ views on call centers. Journal of Management and Organization, 13(3), 249+.
Shiu, A. (2007). Surowiecki, J. Roeper Review, 29(5), 57.
Swanson, R.A., & Holton, E.F., III. (Eds.). (2005). Research in organizations: Foundations and methods of inquiry. San Francisco: Berrett-Koehler.
Pal, M., & Buzzanell, P. (2008). The Indian call center experience: A case study in changing discourses of identity, identification, and career in a global context. The Journal of Business Communication, 45(1), 31+.
To tweet or not to tweet? (2009, May). Library Administrator’s Digest, 44(5), 35.
Waze turning road warriors into map builders. (2009, September 25). Manila Bulletin.
West, A. (2008, December 15). Top predictions. Workforce Management, 87(20), 20+.
Weiler, L. (2008, Summer). When the audience takes control. Filmmaker, 16(4), 86+.
Whitford, D. (2008, January 3). Hired guns on the cheap. Fortune: Small Business. Retrieved from http://money.cnn.com/magazines/fsb/fsb_archive/2007/03/01/8402019/index.htm.
Woods, D. (2009, September 29). The myth of crowdsourcing. Forbes. Retrieved from http://www.forbes.com/2009/09/28/crowdsourcing-enterprise-innovation-technology-cio-network-jargonspy.html.
Workman, M., & Bommer, W. (2004). Redesigning computer call center work: A longitudinal field experiment. Journal of Organizational Behavior, 25(3), 317-337.