== Building Alliances ==

Good communication resources and contacts are critical for promotional ideas and concepts to flow from you to the outside world. If you are going to build some great buzz, you need to know how to communicate effectively in different channels. This requires two skills, which must be mastered separately for each channel:

 * Find the opportunity to make use of that channel. As an example, if you want to be featured in a particular magazine, you need to create an opportunity in which you can get your content there. This almost always involves building contacts. You should get to know the editors of the magazine and build a relationship that could allow you to feature some content in their channel.
 * Ensure that your buzz in that channel is appropriate. The norms of communication between different mediums vary, but the differences can often be subtle and unwritten. If you act outside the expected boundaries (particularly in volunteer channels), it can reflect negatively on your community.

Buzz is designed to be consumed by lots of people. You want as much focus and attention on your community as possible. The more relevant eyeballs, the better. As part of your planning stage for a buzz campaign, you should weigh the amount of effort involved against the number of relevant eyeballs. You want to ensure that your time and effort preparing materials and content is worthwhile and that a reasonable number of people see your work.

Much of this boils down to readership, and readership varies tremendously. Sometimes you can ask for this information, such as with magazines, but sometimes it is more of a guess. There will be some resources that you will assume have a large audience (such as a very popular website) and some a smaller one (such as a blog). An important consideration in this area is how the growth of the Internet has changed audience figures. It used to be general wisdom that paper publications were always the source of high audience figures. This is often no longer the case, as many websites, and even a number of blogs, have hundreds of thousands of regular visitors.

== The Professional Press ==

The professional press is a large and extensive channel. It encompasses magazines, websites, journals, videos, multimedia content, and more. Each of these publications has professional paid staff who have a responsibility to publish quality content. The professional press has three primary concerns at the forefront of its mind:

'''Quality content'''

First and foremost, the professional press wants to produce leading content. It wants well-produced content that is of interest to its audience. Great content drives an increased...

'''Readership/audience'''

Professional publications rely on readership numbers. It is these numbers that largely justify the continuation of the publication. Having high audience numbers depends on getting the previous item in place: quality content.

'''Advertising opportunities'''

Most publications make a significant chunk of their revenue from advertising, and advertising does have an impact on content. Although many publications would deny it, advertising deals are often agreed based upon relationships between the publication and the company. These relationships need to be maintained to continue to bring in revenue. In many cases the content in a publication may be heavily critical of a company, product, or initiative.
Although this should never matter, for many publications it does, and the producer of the content is either advised to change the content or focus on other topics.

You should factor these attributes into your plan for building buzz. You want to target the most appropriate publications that are relevant to your community. You will need to provide them with quality content that is of interest to their audience and consider any potential advertising conflict. You will need to build a relationship with the publication. With these publications largely staffed by paid personnel, it is entirely reasonable to formally contact them via email or phone and ask them if you can contribute some content. A great first step would be to ask if they could feature your community in their news section. In some cases you may have the chance to build some relationships that you can return to when opportunity strikes later.

The first time I experienced this was years back at the start of my career. It was my first time at the Linux Expo in London, and I was there running an exhibition stand for the KDE project. While there, I went to an after-show party, and the editors from Linux Format magazine were there. I got chatting to them, had more than a few drinks, and a little while later asked them if they would consider publishing something from me. Nick Veitch, editor at the time, responded with, “Sure, write something, but if it is rubbish we won’t publish it.” I wrote my article, it got published, and so started my journalism career. Linux Format opened many doors for me, but most importantly, it gave me a platform to talk about things that I considered interesting. It opened up a set of opportunities that have since helped with building buzz and promotion in the open source projects that I have been involved in.

Though it’s been some years since my days writing for Linux Format, I got in touch with then staff writer and now editor-in-chief Paul Hudson to gather some insight from the perspective of an editor to share with you all. Paul is a firm believer in the have-a-go approach to getting content in:

Both of us got into the world of free software journalism by saying, “hey, why don’t I write for you?” and I think that same situation occurs a lot—people don’t realise how much they can contribute until they just ask. I think people imagine some sort of incredible vetting process must take place in order to write for magazines—as if only people in smoking jackets with PhDs from the school of ignorant snobbery are able to get stuck into writing, but that’s simply not true. Well, not always true, at least! Technical magazines and websites are crying out for people to get involved and just share what’s cool and what’s new in their world.

Paul regularly handles a slew of wannabe writers and passionate community members keen to get their projects featured in the magazine. With this in mind, he offers some useful guidance for improving the likelihood of getting coverage in magazines:

Don’t use email. We get stacks of emails, and most of them remain unread. The reason for this is that PR agencies blast us with all sorts of emails about things whether they are relevant to the magazine or not, so inevitably some important emails get lost in the mess. Instead, call first, ask to speak to the news editor or someone else on the team, and just have a chat to them.
They want good contacts as much as you do, so if you’re someone who represents a project that’s on their radar, they would love to be in touch with you. They are also much more likely to read your emails if you’ve already made contact by phone.

When you write release announcements, make it really clear what’s new. This is something the GNOME project, as one such example, does well (http://library.gnome.org/misc/release-notes/2.24#rnusers). They list the new features with pictures, so that someone can decide at a glance whether it’s worth looking into. If you are a software project, provide at least one screenshot that shows off the best feature you’ve got to offer. Remember, these guys are looking for “wow” things to print, and if you can send them a shot of your software looking awesome, they are much more inclined to run it as a news story.

Remember that even in technical magazines, some people are still journalists first and geeks second. Put your documentation online and link to all the technical information you like, but when you’re trying to get a journalist interested in what you have to say, it’s much more important to say “MyProject 2.0 uses 25% less RAM than MyProject 1.0” than to say “The switch to the xyz toolkit blah blah blah please send me straight to your Trash folder.” Sure, drop in all the technical information you want later on, but you need to win them over in the first two sentences by focusing on what really kicks ass in your software.

If you’re not producing software, getting into magazines is slightly trickier, because magazines rarely want to print a story if it’s similar to something else they ran recently. So if your user group wants to get featured, you need to step outside the installfest (unless it’s big) and do something pretty darn special. Whatever you do, take a photo and make it available under a Creative Commons license that allows commercial use.

The rules change with nontechnical magazines, because once you enter the mainstream, you need to focus more on people. The New York Times won’t find the Gecko web rendering engine interesting, but it will find Spread Firefox interesting, because grassroots marketing really is changing the browser landscape.

While Paul offers some useful advice on the best-practice methods of getting content into the hands of editors, he is keen to emphasize that many communities simply don’t get out and try, and this makes for a huge opportunity for printed nirvana:

Let me try to make this a bit clearer with a specific example from Linux Format. We run a page of LUG information every month, and we have to email people to try to get content to fill those pages, despite printing an open plea every issue asking people to get in touch. So it’s not that community members are struggling to get their information in—it’s more that many of them just aren’t trying. Perhaps they think we’re not interested. Perhaps they think we won’t print it. But as they so rarely try, most of them will never know. Maybe they’re just targeting magazines that are just a little bit out of their reach, but that’s another schoolboy error—Editor X is much more likely to print an article about your community if Editors Y and Z already have. So start small; find a magazine that fits your niche closely and get yourself covered in there. Then use that to help get coverage in other places, building it up bit by bit.

The professional press can seem a bit unnerving.
Professional journalists can come across as a live-by-the-seat-of-their-pants collection of hard-working, focused, and unrelenting writers. Don’t let this worry you. Journalists are good people, and they get asked for content opportunities all the time. Just go out there and ask.

When I started doing this, I would ask everyone. I would email 10 or 15 magazines to see if I could contribute content. I would not spam them: each email would be focused on that specific publication, and each would be relevant to my topic. I would recommend that you email over a list of topics that you can write about and ask if you could write something about those topics. Alternatively, write an article and submit it. The benefit of the latter is that the journo has direct access to content, which is often an attractive proposition. Just go out there and ask; there really is no harm.

== The Amateur Press ==

In the last five years, the amateur press world has exploded. The Internet has provided an incredible medium in which anyone can write about anything and have the chance to grow an audience. Technology and open access to information have provided an incredible opportunity to be heard, and many have built new reputations out of these opportunities. Consequently, millions of blogs and thousands of podcasts have sprung up around the world.

The amateur press is a world largely fueled by volunteers. The authors write their words not to claim a paycheck, but to share their ideas, perspectives, and opinions. Although populated by amateur scribes, this does not necessarily equate to a lack of quality. Some of the greatest work I have ever read has shown up on a blog. This could be the musings of Lessig on the copyright wars (http://www.lessig.org/blog/) or the deeply amusing yet incredibly well-written and inspired political blather of Flyingrodent (http://flyingrodent.blogspot.com/). Both are inspired works, yet very different in content and presentation.

The timeline of the amateur press revolution largely mirrored that of the professional press revolution many years earlier. The publishing world exploded onto the scene, and thousands of books, newspapers, and journals sprung up. Each of these publications had its own perspective on its respective topic, and it became very difficult for readers to identify where the real quality was. The solution was the launch of other publications that read, reviewed, and collated this content (a great example being The Week at http://www.theweek.com/). We have started to see much of this in recent years with blogs and podcasts. Websites such as Technorati (http://technorati.com/) have been sifting through the blogosphere and identifying the most popular and interesting blogs. Well-respected members of given communities will also often provide their “blogroll,” which lists the blogs that they enjoy reading. These resources all provide an excellent opportunity to identify which blogs you should be focusing on.

The amateur press is hugely important for buzz. The professional press is far more complicated and restricted in terms of getting heard, whereas often a few emails exchanged with the amateur press will net great results. It’s not surprising why:

 * The amateur press has a shared appreciation for volunteer community. They understand your reasons and intentions and will often want to promote them.
 * There is no advertising conflict in most of the amateur press: they can write about whatever they want.
 * There is typically no limitation on content.
On a blog you can write as many posts as you like. This opens up more opportunities for you to get in on the content.

The most significant mediums in the amateur press arena are blogs, podcasts, and videos. Let’s take a quick look at all three and explore their cultures.

== Blogs ==

Weblogs (typically known as blogs) started out life as online diaries. In them people would share what they were doing, what they were thinking, and what interested them. When blogging started, there were few blogs, and most were devoted to deeply technical topics.

Alan Cox was one of the earliest bloggers that I am aware of. Living in Swansea in Wales, Cox developed his celebrity among early Linux fans due to his work on the Linux kernel. Cox worked on incredibly low-level, deep and dirty programming. It was about as unrelentingly hardcore as you could get. When I first started reading his diary, I was fascinated. This was not the work of Alan Cox communicated through a journalist’s eyes. What I was reading were the direct thoughts of the man himself. Without wishing to sound like an overenthusiastic psychology major, I felt like I was actually closer to the person I was reading about. It gave a direct line to his world, and it pretty much rocked mine.

Since then blogging has expanded somewhat. In addition to blogs being used as personal diaries, many are now referred to as personal publishing systems. Many people, myself included, instead use blogs as a means of writing articles that are of interest to them. I use my blog to write about community, music, technology, usability, and more. I also use it as a medium to express achievements, goals, and more.

It is entirely conceivable that both your existing community members and the people you want to have as friends have blogs. With this in mind, blogging should be a critical component in your buzz-building. The first task is knowing which blogs to build buzz with. Look for relevant blogs and strike up a relationship with the authors. Explain what your community is doing and what your goals are. Try to get the author on board with your mission. You can then ask whether the author would be interested in sharing your work on his blog. If you have your own blog, you could offer to provide a link to his blog in exchange.

'''Blog wars'''

Although blogging has had a hugely positive impact on how people can articulate and share opinions and perspectives, there has been a dark side. Blogging has also become a medium in which much overzealous opinion can sometimes be expressed a little too quickly. Unfortunately, I have a rather embarrassing example of someone who fell into this trap: yours truly.

First, a bit of background. There used to be a company called Lindows that made a version of Linux that shared many visual and operational similarities with Windows. Microsoft frowned at the name “Lindows,” and a fight started to change the name. Lindows initially resisted, but after mounting pressure, changed their name to Linspire.

Now to the issue. Let me take the liberty to explain in the words of the article itself:

Recently a chap named Andrew Betts decided to take the non-free elements out of Linspire and release the free parts as another Linspire-derived distribution called Freespire. This act of rereleasing distributions or code is certainly nothing new and is fully within the ethos of open source. In fact, many of the distributions we use today were derived from existing tools. Unfortunately, Linspire saw this as a problem and asked for the Freespire name to be changed.
Reading through the notice of the change, the language and flow of the words screams marketing to me. I am certainly not insinuating that Betts has been forced into writing the page, or that the Linspire marketing drones have written it and appended his name, but it certainly doesn’t sound quite right to me. I would have expected something along the lines of “Freespire has been changed to Squiggle to avoid confusion with the Linspire product”, but this is not the case. Instead we are treated to choice marketing cuts such as “To help alleviate any confusion, I contacted Linspire and they made an extremely generous offer to us all”. Wow. What is this one-chance-in-a-lifetime-not-sold-in-stores offer? Luckily, he continues, “they want everyone who has been following my project to experience ‘the real’ Linspire, FOR FREE!!!”. Now, pray tell, how do we get this “real” version of the software “FOR FREE!!!”? “For a limited time, they are making available a coupon code called ‘FREESPIRE’ that will give you a free digital copy of Linspire! Please visit http://linspire.com/freespire for details”. Oh...thanks.

I gave Linspire a pretty full-throated kick in the wedding vegetables in my blog entry. I told the story, objected to what I considered hypocrisy given their own battle with similar-sounding trademarks, and vented. I wish Guitar Hero had existed back then: it would have been a better use of my time. I was wrong. My article was never going to achieve anything.

Shortly after the article was published, then-CEO Kevin Carmony emailed me. He was not a happy bunny. His objection, and it was valid, was that I flew off the handle without checking in with him first. My blog entry was my first reaction. The happy conclusion to this story is that I apologized to Kevin, admitted to being a bit of an arse, and we have remained friends. In fact, a little while later I joined the Linspire Advisory Board, shortly before I joined Canonical to work on Ubuntu. It’s funny how things work out.

== PRACTICE WHAT YOU PREACH ==

In this chapter we have discussed the important attributes in setting up a website and blog for your project and also how to build buzz using other people’s blogs. Importantly, you personally should have a blog. Use it as an opportunity to discuss your own personal interests and also to talk about your community.

== Podcasts ==

Podcasts are audio shows that are distributed on the Internet. They typically have between one and four presenters, and they are often based around fairly specific topics. Many listeners use a special piece of software called a podcatcher to subscribe to a podcast so that when new episodes are released, they are automatically downloaded to a media player such as an iPod. This is a fantastic way to keep listeners updated with new content.

A significant reason behind the success of podcasts is that they deliver interesting specialist content to the listener to fill those dull minutes traveling to work. Many podcasts include interviews, reviews, features, debates, and other content. They vary hugely in both audio and content quality, and some podcasts have netted thousands of listeners.

As I mentioned earlier in this book, I cofounded a podcast with some friends called LugRadio. The show was very specifically focused on open source and digital rights. It took a lighthearted and irreverent approach to the content, and we deliberately focused on making the content social, fun, and amusing.
Each show presented a range of topics for discussion, and each of us would weigh in and share our thoughts, often resulting in raucous debate and discussion.

Podcasts are always looking for pointers to interesting content and announcements. You should email the presenters, explain what you are working on, and see if they would be interested in featuring your community on the show. If you manage to get a spot on a popular podcast, it could bring a wealth of new blood to your community. Although you may feel a little funny about emailing the presenters out of the blue, go ahead! If you don’t ask, you don’t get.

When pitching to a podcast, the most important tip is that your tone should match that of the podcast. When we were doing LugRadio, we would often get offers for interviews and features, but often the tone would be right out of a Marketing 101 textbook. This not only demonstrated that the person making the offer had not listened to the show, but it was a red flag for boring, emotionless content that had no place on LugRadio. On the other hand, we also got offers of content that was fun, loose, and insightful, and these were snapped up instantly.

If you get accepted for an interview or to have your community featured, listen to a number of episodes of the podcast to get a feel for the tone. Use it as a guide, but don’t be afraid to share your own personality: you have the opportunity to inspire people to join your community, so just be yourself within the context of the podcast.

Finally, always ensure you have a web address to point the listeners to. This will provide an option to feed them more information, and the link can be listed in the podcast’s show notes. Ensure the website that the link points to is packed with content that’s ready when the episode of the podcast is published.

----
== Hooks ’n’ Data ==

So far we’ve discussed the importance of gathering feedback and measurements from your community, and that the focal point is the goals that we decided on in our strategic plan. The next step is to build into each goal a feedback loop that can deliver information about our progress on the goal. This feedback loop is composed of two components, hooks and data:

'''Hooks'''

A hook is a medium or resource from which we can slurp out useful information about our goal. As an example, if our goal was to reduce crime in a neighborhood, a hook could be local crime reports from the police. The reason I call them hooks is that they are the protruding access points from which we can extract interesting information.

'''Data'''

If a hook is the medium that provides useful information, data is the information itself. Using our previous example of a goal to reduce crime in a neighborhood, the hook (local crime reports) could provide data such as “10 crimes this month.” The data is composed of two attributes: the value itself and the measurement unit. The kind of unit also determines how the data can be presented (e.g., numerical units are great for graphs).

To help understand this further, let’s look at an example. In the Ubuntu community, my team has worked to help increase the number of people who become new developers. In our strategic plan we created an objective to increase the number of community developers and fleshed it out with goals for improving developer documentation, awareness, and education. Each goal had the expected set of actions. For us to effectively track progress on the objective, we needed data about developer growth. Fortunately, we have access to a system called Launchpad (http://www.launchpad.net), which is where all Ubuntu developers do their work. This system was an enormous hook that we could use to extract data. To do this we gathered a range of types of data:

 * The current number of developers (e.g., 50 developers).
 * How long new contributions from prospective developers took to be mentored by existing developers (e.g., 1.4 weeks).
 * How many of these new contributions are outstanding for mentoring (e.g., 23 contributions).

Launchpad had all of this information available. Using some computer programs created by Daniel Holbach, we could extract the data. This allowed us to track not only the current number of developers but also how quickly progress was being made: we knew that if the number of developers was regularly growing, we were making progress. We could also use this data to assess the primary tool that new developers use to participate in Ubuntu: the queue of new contributions to be mentored. When a new developer wants to contribute, she adds her contribution to this queue. Our existing developers then review the item, provide feedback, and if it is suitable, commit it. By having data on the average time something sits on that queue as well as the number of outstanding items, we could (a) set reasonable expectations, and (b) ensure that the facility was working as well as possible.

In this example, Launchpad was a hook. Using it required some specialist knowledge of how to physically grab the data we needed from it: a script was written in Python that used the Launchpad API to gather the data, and the results were then formatted in HTML to be viewed. Launchpad was an obvious hook, but not the only one.
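To make the hook-to-data pattern concrete, here is a minimal sketch of this kind of script. It is not Daniel’s actual program: it assumes the launchpadlib Python package is installed, the team name is purely illustrative, and all it does is append a dated member count to a CSV file so the number can be graphed later.

{{{
# A sketch only: count the members of a Launchpad team and record a
# dated snapshot for later graphing. The team name is hypothetical.
import csv
from datetime import date

from launchpadlib.launchpad import Launchpad

TEAM_NAME = 'ubuntu-dev'  # illustrative; substitute the team you track

def count_developers():
    lp = Launchpad.login_anonymously('hook-example', 'production')
    # Teams live in the same 'people' collection as individual users.
    return len(lp.people[TEAM_NAME].members)

with open('developers.csv', 'a', newline='') as f:
    csv.writer(f).writerow([date.today().isoformat(), count_developers()])
}}}

Run regularly (for example from cron), this produces exactly the kind of dated series of numbers that the graphing discussion later in this chapter relies on.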
Although Launchpad could provide excellent numbers, it could not give us personal perspectives and opinions. What were the thoughts, praise, concerns, and other views about our developer processes and how well they worked? More specifically, how easy was it to get approved as an Ubuntu developer? To gather this feedback, our hook was a developer survey designed for prospective and new developers. We could direct this survey to another hook: the list of the most recently approved developers and their contact details. This group of people would be an excellent source of feedback, as they had just been through the developer approval process and it would be fresh in their minds.

With so many hooks available to communities, I obviously cannot cover the specific details of how to use them. This would turn The Art of Community into War and Peace—complete with tragic outcome (at least for the author). Fortunately, the specifics are not of interest, as all hooks can be broadly divided into three categories:

'''Statistics and automated data'''

Hooks in this category primarily deal with numbers, and numbers can be automatically manipulated into statistics.

'''Surveys and structured feedback'''

These hooks primarily deal with words and sentences and methods of gathering them.

'''Observational tests'''

These hooks are visual observations that can provide insight into how people use things.

Let’s take a walk through the neighborhood of each of these hooks and learn a little more about them.

== Statistics and Automated Data ==

People have a love/hate relationship with statistics. Gregg Easterbrook in The New Republic said, “Torture numbers, and they’ll confess to anything.” Despite the cynicism that surrounds statistics, they turn up insistently on television, in newspapers, on websites, and even in general pub and restaurant chitchat. The problem with the general presentation of statistics is that the numbers are often used to make the point itself instead of being an indicator of a wider conclusion.

Statistics are merely indicators. They are the metaphorical equivalent of the numbers and gauges on the dashboard of a car: no single reading can advise on the health of the car. The gauges, along with the sound of the car itself, the handling, look and feel, and smell of burning rubber, all combine to give you an indication that your beloved motor may be under the weather.

Despite the butchered reputation of statistics, they can offer us valuable insight into the status quo of our community. Statistics can provide hard evidence of how aspects of your community are functioning. Many hooks can deliver numerical data. A few examples:

 * Forums and mailing lists can deliver the number of posts and number of members.
 * Your website can deliver the number of visitors and downloads.
 * Your meeting notes can deliver the number of participants and number of topics discussed.
 * Your development tools can deliver the number of lines of code written, number of commits made to the source repository, and number of developers.
 * Your wiki can deliver the number of users and number of pages.

For us to get the most out of statistics, we need to understand the mechanics of our community and which hooks can deliver data from those mechanics. We will discuss how to find hooks from these mechanics later in this chapter.
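As a concrete illustration of the first hook in the list above, here is a minimal sketch that tallies posts per sender in a mailing list archive. It uses only the Python standard library; the archive filename is illustrative.

{{{
# A sketch only: tally posts per sender in an mbox mailing list archive.
import mailbox
from collections import Counter

# The filename is hypothetical; most list managers can export an mbox file.
posts = Counter(msg['from'] for msg in mailbox.mbox('list-archive.mbox'))

# Print the ten most prolific posters, most active first.
for sender, count in posts.most_common(10):
    print(count, sender)
}}}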
== The risks of interpretation ==

Although statistics can provide compelling documentation of the current status quo of your community, they require skill to be interpreted properly. A great example of this is forum posts. Many online communities use discussion forums, the online message boards in which you can post messages on a common topic (known in forum parlance as a thread). Within most forums there is one statistic that everyone seems to have something of a love affair with: the total number of posts made by each user.

It’s easy to see how people draw this conclusion. If you have three users, one with 2 posts, one with 200 posts, and one with 2,000 posts, it’s tempting to believe that the user with 2,000 posts has more insight, experience, and wisdom. Many forums leap aboard this perspective and provide labels based upon the number of posts. As an example, a forum could have these labels:

 * 0–100 posts: New to the Forum
 * 101–500 posts: On the Road to Greatness
 * 501–1,500 posts: Regular Hero
 * 1,501–3,000 posts: Dependable Legend
 * 3,001+ posts: Expert Ninja

So if I had 493 posts, this would give me the “On the Road to Greatness” label, but if I had 2,101 posts, I would have the “Dependable Legend” label. These labels and the number-of-posts statistic are great for pumping up the members, but they offer little insight in terms of quality. Quantity is rarely an indicator of quality; if it were, spammers would be the definition of email quality. When you are gathering statistics, you will regularly be faced with a quantity versus quality issue, but always bear in mind that quality is determined by the specifics of an individual contribution as opposed to the amalgamated set of contributions. What quantity really teaches us is experience. No one can deny that someone with 1,000 forum posts has keen experience of the forum, but it doesn’t necessarily reflect on the quality of his opinion and insight.

== Plugging your stats into graphs ==

Stats with no presentation are merely a list of numbers. When articulated effectively, though, statistics can exhibit the meaning that we strive for. This is where graphs come into play. Graphs are an excellent method of displaying lots of numerical information without boring the pants off either (a) yourself or (b) other people.

Let’s look at an example. Earlier we talked about a project to increase the number of community developers in Ubuntu, and one piece of data we gathered was the current number of community developers who had been approved. This is of course a useful piece of information, and as the number climbs it helps indicate that we are achieving our goals. What that single number does not teach us, though, is how quickly we are achieving our goal. Imagine that we had 50 developers right now and we wanted to increase that figure by 20% a year. This would mean we would need to find five developers in the next six months, which works out at approximately one developer per month. If we want to encourage this consistency of growth, we need not only to look at the number of current developers once, but also to track it over time so we can see if we are on track to achieve our 20% target. Using this example in the Ubuntu world, we could use Launchpad to take a regular snapshot of the number of current developers, plot it on a graph, and draw a line between the dots. This could give us a growth curve of new developers joining the project.
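Continuing the earlier sketch, the dated snapshots in the CSV file can be turned into exactly this kind of growth curve. A minimal sketch, assuming the matplotlib library is installed and the CSV file produced earlier:

{{{
# A sketch only: plot the developer-count snapshots recorded earlier
# (one "date,count" row per snapshot) as a growth curve.
import csv
from datetime import date

import matplotlib.pyplot as plt

dates, counts = [], []
with open('developers.csv') as f:
    for day, count in csv.reader(f):
        dates.append(date.fromisoformat(day))
        counts.append(int(count))

plt.plot(dates, counts, marker='o')
plt.title('Approved community developers over time')
plt.ylabel('Developers')
plt.savefig('developer-growth.png')
}}}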
Another handy benefit of graphs is showing the impact of specific campaigns on your community. On my team at Canonical we have a graph that shows the current number of bugs in Ubuntu, with a line plotting the number of bugs for each week. As you can imagine, the line that connects these numbers shows a general curve of our bug performance, and it is generally fairly consistent. Each cycle, we have a special event called the Ubuntu Global Bug Jam (http://wiki.ubuntu.com/UbuntuGlobalJam) in which our community comes together to work on bugs. Our local user groups organize bug-squashing parties, and there are online events and other activities that are all based around fixing bugs. Interestingly, each time we do the event, we see a drop in the number of bugs on our graph for the days that the Global Bug Jam happens. This is an excellent method of assessing the impact of the event on our bug numbers.

== TECHNICAL TIP ==

You may be wondering how you can gather data from various hooks and display it in a graph automatically. I just want to share a few tips. If this seems like rocket science to you, I recommend that you seek advice from someone who is familiar with these technologies.

Gathering data from hooks is hugely dependent on the hook. Fortunately, many online services offer an application programming interface (API) that can be used by a program to gather the data. This will require knowledge of programming. Many programming languages, such as Python and Perl, make it simple to get data through the API.

Another approach with hooks is to screen scrape. This is the act of downloading a web page and figuring out what text on the page has the data. This is useful if an API is not available (a short sketch appears below).

For graphing, there are many tools available that can ease graphing if the data is available. These include Cricket (http://cricket.sourceforge.net/), and of course you could load data into a spreadsheet with a comma-separated value (CSV) file if required.

== Surveys and Structured Feedback ==

Surveys are an excellent method of taking the pulse of your community. For us, they are simple to set up, and for our audience, they are simple to use. I have used surveys extensively in my career, and each time they have provided me and my teams with excellent feedback. Over the next few pages I want to share some of the experience I have picked up in delivering effective surveys.

The first step is to determine the purpose of the survey. What do you want to achieve with it? What do you need to know? Every survey needs to have a purpose, and it is this purpose that will help you craft a useful set of questions that should generate an even more useful set of data.
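Referring back to the technical tip above: where no API is available, a screen scrape can stand in. A minimal sketch using only the Python standard library; the URL and the pattern being searched for are hypothetical, so adapt both to the page you are scraping.

{{{
# A sketch only: pull a number out of a web page when no API exists.
import re
import urllib.request

URL = 'http://example.com/stats'  # hypothetical statistics page

html = urllib.request.urlopen(URL).read().decode('utf-8', errors='replace')

# Suppose the page contains markup such as "<b>1234 members</b>".
match = re.search(r'(\d+) members', html)
if match:
    print('members:', int(match.group(1)))
}}}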
As a gesture to the makers of the podcast, it is highly recommended that you spread the word about the podcast episode that your community is featured in. You could do this on your website, in your community’s communication channels, and on blogs. This will help build a strong relationship with the podcast, leaving the door open for future content and interviews.

----

== Videos ==

Online video has become increasingly popular as the Internet has become faster and more accessible. Although a hefty Internet connection is required to suck said videos down onto your computer for viewing, the sheer popularity of services such as YouTube (http://www.youtube.com) and blip.tv (http://blip.tv/) has demonstrated that many do indulge in such audiovisual delights on the Internet.

While some of us may reminisce about the dark days of dial-up Internet access, it is important to remember that many parts of the world still rely on slower dial-up connections. For these folks, videos are simply not an option. As such, before you get too excited to step into the shoes of Steven Spielberg, you should consider how accessible videos are for your community. As an example, if you are reaching out to a community in a remote part of Africa, you may want to rely on another, lower-bandwidth medium. In general, my recommendation is to make use of video, but not as a primary medium. Instead, use it to complement your other, more widely accessible resources.

By far the most popular video service at the time of writing is YouTube. The idea is simple: anyone can upload a video, and anyone with a web browser equipped with Macromedia Flash can view it. YouTube opened the doors for anyone with a webcam or a cheap video camera to be able to create and publish online video. This has resulted in thousands of hours of freely accessible video hitting the Internet.

This is only part of the value of YouTube, though. Videos on YouTube are hugely discoverable: it is possible to upload a video and have thousands of people stumble across it. This happens because each video on YouTube also displays a list of videos that are related to the one being viewed. This feature alone hugely increases the likelihood of people finding your videos. To benefit from it, you need to ensure that you name your video and add keywords describing its content in a way that enhances the chances of a certain demographic of user being able to find it. As an example, if you are part of a mapping community, you might want to tag your video with the words “map,” “geography,” “geo,” “location,” and any specific regions that were featured in the video. It is stunning how many people will find your videos, and this is further bolstered by word of mouth and the simplicity of embedding videos in web pages.

Another hugely useful feature of YouTube is channels. These are home pages on YouTube that contain videos from a certain provider (such as an artist, actor, or your community). There are different types of channels on YouTube designed for different types of provider, with additional facilities such as custom logos, blog entries, and tour dates. A huge benefit of a channel is that people can subscribe to it and will be notified when you add a new video. This is an excellent way to keep people hooked into your videos. YouTube channels are something we have used extensively in the Ubuntu community.
As part of our ongoing efforts to educate and train developers in how to contribute to Ubuntu, we created the Ubuntu Developer Channel on YouTube at http://www.youtube.com/ubuntudevelopers. On the site, we uploaded tuition videos, developer interviews, and more. At each Ubuntu Developer Summit, we would interview attendees to get updates about their work on the next release and perform question and answer sessions with key community members. These videos were hugely successful, and many of them gathered thousands of views within weeks of going online. The channel has over 1,800 subscribers at the time of writing. YouTube is an excellent resource for delivering education and best practice, and I highly recommend you make use of it if you have the resources and time.

Another interesting option for video is live streaming. This is where you produce a live videocast that people can view as it is being recorded at a scheduled time. Traditionally, live streaming has been off-limits for most of us: the bandwidth requirements are so epic that it is too costly and impractical. Fortunately there is another option in the form of Ustream (http://www.ustream.tv). The concept is neat: the video you record on your computer with your lower-bandwidth Internet connection is streamed to the Ustream server, and your viewers then connect there, with Ustream’s oodles of bandwidth serving your video. This means that your viewers don’t hammer your own Internet connection, and it puts streaming in the hands of us all.

Ustream not only provides a simple means of streaming video, but also includes other features, such as a live chat channel for the show, and recording, tagging, and syndication facilities. The live chat channel is particularly interesting: it provides an opportunity for viewers to interact with the presenter as the broadcast is happening. This means that a viewer could tune in and comment on the content, and the presenter can read the comment and repeat it in the broadcast. This is something I first tried around the time I was wrapping up this book. While experimenting with Ustream, I tested it by broadcasting live from my living room and posting the link to the videocast on Twitter and identi.ca. Within minutes I had 24 people viewing my entirely ad hoc and off-the-cuff broadcast. With my interest piqued, I decided to start performing a regular show called At Home with Jono Bacon (http://www.ustream.tv/channel/at-home-with-jono-bacon).

Whether you make use of prerecorded or live video, there are some nuggets of best practice that will help keep your viewers engaged in your content:

 * Do your best to keep production values high. As an example, if you are recording the video with your laptop’s webcam, consider buying an external microphone. Many of the built-in mics in laptops sound awful and distort easily. Ensure that the location the camera points at looks clean, uncluttered, and professional, and wear clothes that don’t distract the viewer.
 * Before you produce your video, make some notes about what you will discuss. The easiest way of doing this is to make a series of bullet points with the topics you want to feature. If you are nervous, you may want to write a script, but I would highly recommend that you don’t: unscripted content that is well delivered is far more natural and engaging.
 * If possible, have more than one presenter. Multiple presenters always make for more interesting shows, because there is an opportunity to bounce off each other in conversation, spark up debate, or play specific roles (e.g., the teacher and the learner).
 * When creating an educational program (such as a tuition video), consider embedding in the video the focal point of the tuition (e.g., the computer screen in a programming video) or slides. There are many free tools that can capture computer screen content to video to help with this, such as Screencast-o-matic (http://www.screencast-o-matic.com/), Wink (http://www.debugmode.com/wink/), and recordMyDesktop (http://recordmydesktop.sourceforge.net/).
 * YouTube and Ustream allow you to put notes next to your video. This is an excellent place to list the topics you are covering in the video, provide links to websites, and credit those involved in the content and creation of the video.
 * Consider the licensing of your content before you release it. I would always recommend that you license your video under a Creative Commons license (more information is at http://www.creativecommons.org/). You should also consider the license of third-party content. As an example, if you want to use the latest U2 tune in your video, you might not be able to legally use it, or if you can, you may need to cough up some royalties. Be very careful here: although it is tempting to just go ahead and use the song, many online video producers have been busted for copyright infringement. I would always recommend that you play it safe and only use properly licensed content for your needs.
 * Finally, you should be aware that at the time of writing, the Macromedia Flash plug-in that many video websites use (including YouTube and Ustream) is closed source. Some proponents of software freedom and open source may refuse to view those videos for this reason. If this is likely to be problematic, it is recommended you also provide access to your videos in an entirely free format, such as Ogg Theora (http://theora.org/).
You should avoid creating a survey just for the sake of having one. Only ever create a survey if there is a question in your head that is unanswered. Surveys are tools to help you understand your community better: use them only when there is a purpose. Examples could include understanding the perception of a specific process, identifying common patterns of behavior in communication channels, and learning which resources are used more than others.

Again, the goals from your strategic plan are a key source of purpose for your surveys. As an example, if your goal is “increase the number of contributors in the community,” you should break down the workflow of how people join your community and produce a set of questions that test each step in this workflow. You can use the feedback from the answers to gauge whether your workflow is effective and use the data as a basis for improvements.

'''Choosing questions'''

When deciding on questions, you should be conscious of one simple fact: everyone hates filling in surveys. When someone has decided to participate in your survey, you need to gather that person’s feedback as quickly and easily as possible. This should take no longer than five minutes. As such, I recommend you use no more than 10 questions, which gives the respondent an average of 30 seconds to answer each question.

The vast majority of surveys have questions with multiple-choice ratings for satisfaction. Most of you will be familiar with these: we are provided with a satisfaction scale between 1 (awful) and 5 (excellent) and are expected to select the appropriate satisfaction grade for each question. Surveys like this are simple and effective.

== THE VARIANCE OF THE VOTE ==

Ratings are a funny beast, and everyone interprets them differently. A great example of this is the employee performance reviews that so many of us are familiar with. In one organization I worked at, the scale ranged from 1 (unacceptable) to 5 (outstanding). I did a small straw poll of how different people interpreted the grading system, and the views varied tremendously:

 * Some felt that if 1 is unacceptable and 5 is outstanding, then 3 would be considered acceptable, and if staff completed their work as contractually expected, a 3 would be a reasonable score.
 * Others felt that meeting contractually agreed-upon standards would merit a 5 on the scale, and that 3 would indicate significant, if tolerable, lapses.
 * Interestingly, some people informed me that they would never give a 5, as they felt there was always room for improvement.

When people fill in your survey, you will get an equally varied set of expectations around the ratings. You should factor this variation of responses into your assessment of the results. One way to do this is to add up the responses from each person and increase or reduce them proportionally so each person’s total adds up to the same points. But this may not be valid if someone legitimately had a wonderful or horrific experience across the board.
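This proportional rescaling is easy to automate. A minimal sketch, with hypothetical ratings data:

{{{
# A sketch only: rescale each respondent's ratings so every respondent's
# total carries the same weight, damping differences in how people grade.
responses = {
    'respondent_a': [3, 4, 5, 4],
    'respondent_b': [1, 2, 2, 1],  # harsh grader, or genuinely bad experience?
}

TARGET_TOTAL = 12.0  # every respondent's ratings are scaled to sum to this

for who, ratings in responses.items():
    scale = TARGET_TOTAL / sum(ratings)
    print(who, [round(r * scale, 2) for r in ratings])
}}}

As the caveat above notes, apply this with judgment: it deliberately erases across-the-board enthusiasm or dissatisfaction, which is sometimes the very signal you want.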
When writing your questions, you need to ensure that they are simple, short, and specific enough that your audience will not have any uncertainty about what you are asking. When people are confronted with unclear questions in surveys, they tend to simply give up or pick a random answer. Obviously both of these are less-than-stellar outcomes. Let’s look at an example of a bad question:

Do you like our community?

Wow, how incredibly unspecific. Which aspect of the community are we asking about? What exactly does “like” mean? Here is an example of a much better question:

Did you receive enough help and assistance from the mailing list to help you join the community successfully?

This is more detailed, easier to understand, and therefore easier to answer. It’s no coincidence that the results are more immediately applicable to making useful changes in the community. Using the previous example of a survey to track progress on the goal of increasing the number of contributors, here are some additional example questions:

 * How clear was the New Contributor Process to you?
 * How suitable do you feel the requirements are to join the community?
 * How useful was the available documentation for joining the community?
 * How efficiently do you feel your application was tended to?

Each of these asks a specific question about your community and the different processes involved.

'''Showing off your survey reports'''

Earlier, when we talked about statistics, we explored the benefits of using graphs for plotting numerical feedback. We could feed the data directly into the graph, and the findings are generated automatically. This makes the entire process of gathering statistics easy: we can automate both the collection of the data from the hook (such as regularly sucking out the data) and the presentation of the data (regularly generating the graph). Unfortunately, this is impossible when dealing with feedback provided in words, sentences, and paragraphs. A person has to read and assess the findings and then present them in a report. It is this report that we can present to our community as a source for improving how we work.

Readers have priorities when picking up your report. No one wants to read through reams and reams of text to find a conclusion: they want to read the conclusion up front and optionally read the details later. I recommend that you structure your survey findings reports as follows:

1. Present a broad conclusion: a paragraph that outlines the primary revelation that we can take away from the entire survey. For example, this could be “developer growth is slower than expected and needs to improve.” It is this broad conclusion that will inspire people to read the survey. Do bear one important thing in mind, though: don’t turn the conclusion into an inaccurate, feisty headline just for the purposes of encouraging people to read the survey. That will just annoy your readers and could lead to inaccurate buzz that spirals out of your control, both within and outside your community.

2. Document the primary findings as a series of bullet points. These findings don’t necessarily need to be the findings for each question, but instead the primary lessons to be learned from the entire survey. It is these findings that your community will take as the meat of the survey. They should be clear, accurate, and concise.

3. Present a list of recommended actions that will improve on each of the findings. Each of these actions should have a clear correlation with the findings that your survey presented. The reader should be able to clearly identify how an action will improve the current situation. One caveat, though: not all reports can present action items. Sometimes a factual finding does not automatically suggest an action item; it may take negotiation and discussion for leaders to figure out the right action.
4. Finally, in the interest of completeness, present the entire set of data that you received in the survey. This is often useful as an addendum or appendix to the preceding information, and it is a particularly useful place to present non-multiple-choice answers (written responses).

When you have completed your survey and documented these results, you should ensure they are available to the rest of your community. Sharing these results with the community is (a) a valuable exercise in transparency, (b) a way of sharing the current status quo of the community with everyone, and (c) an opportunity to encourage others to fix the problems or seek the opportunities that the survey uncovers. To do this, you should put the report on your website. Ensure you clearly label the date on which the results were taken. This will make it clear to your readers that the results are a snapshot of that point in the history of your community. If you don’t put a date, your community will assume the results are from today. When you put the results online, you should notify your community through whatever communication channels are in place, such as mailing lists, online chat channels, forums, websites, and more.

== Documented Results Are Forever ==

Before we move on, I just want to ensure we are on the same page (pun intended) about documenting your results. When you put the results of your survey online, you should never go back and change them. Even if you work hard to improve the community, the results should be seen as a snapshot of your community. You should ensure that you include with the results the date that they were taken so this is clear.

== Observational Tests ==

When trying to measure the effectiveness of a process, an observational test can be one of the most valuable approaches. This is where you simply sit down and watch someone interact with something and make notes on what the person does. Often this can uncover nuances that can be improved or refined.

This is something that my team at Canonical has engaged in a number of times. As part of our work in refining how the community can connect bugs in Ubuntu to bugs that live upstream, I wanted to get a firm idea of the mechanics of how a user links one bug to another. I was specifically keen to learn if there were any quirks in the process that we could ease. If we could flatten the process out a little, we could make it easier for the community to participate. To do this, we sat down and watched a contributor working with bugs. We noted how he interacted with the bug tracker, what content he added, where he made mistakes, and other elements. This data gave us a solid idea of areas of redundancy in how he interacted with a community facility.

What Jorge on my team did here was user-based testing, more commonly known as usability testing. This is a user-centered design method that helps evaluate software by having real people use it and provide feedback. By simply sitting a few people in front of your software and having them try it out, usability testing can provide valuable feedback on a design before too much is invested in coding a bad solution.

Usability testing is important for two reasons. The most obvious is that it gets us feedback from a lot of real users, all doing the same thing. Even though we aren’t necessarily looking for statistical significance, recognizing usage patterns can help the designer or developer begin thinking about how to solve the problem in a more usable way. The second reason is that usability testing, when done early in the development cycle, can save a lot of community resources.
Catching usability problems in the design phase can save development time normally lost to rewriting a bad component. Catching usability problems early in a release cycle can preempt bug submissions and save time triaging. This is on top of the added benefit that many users may never experience such usability issues, because they are caught and fixed so early.

Open source is a naturally user-centered community. We rely on user feedback to help test software and influence future development directions. A weakness of traditional usability testing is that it takes a lot of time to plan and conduct a formal laboratory test. With the highly iterative and aggressive release cycles some open source projects follow, it is sometimes difficult to provide a timely report on usability testing results. Some examples of projects that overcame problems in timing and cost appear in the accompanying sidebar (“Examples of Low-Budget, Rigorous Usability Tests”) by Celeste Lyn Paul, a senior interaction architect at User-Centered Design, Inc. She helps make software easier to use by understanding users’ work processes and designing interactive systems to fit their needs. She is also involved in open source software and leads the KDE Usability Project, mentors for the OpenUsability Season of Usability program, and serves on the Kubuntu Council.

== Examples of Low-Budget, Rigorous Usability Tests ==

There are some ways you can make usability testing work in the open source community. Throughout my career in open source, I have run a number of usability tests, and not all have been the conventional laboratory-based testing you often think of when you hear “usability test.” These three examples help describe the different ways usability testing can be conducted and how it can fit into the open source community.

My first example is the usability testing of the Kubuntu version of Ubiquity, the Ubuntu installer. This usability test was organized as a graduate class activity at the University of Baltimore. I worked with the students to design a research plan, recruit participants, run the test, and analyze the results. Finally, all of the project reports were collated into a single report, which was presented to the Ubuntu community. The timing of the test was aligned with a recent release and development summit, and so even though the logistics of the usability test spanned several weeks, the results provided to the Ubuntu community were timely and relevant.

Although this is the rarer case of how to organize open source usability testing, involving university students provides three key benefits. The open source project benefits from a more formal usability test, which is otherwise difficult to obtain; the university students get experience testing a real product, which looks good on a curriculum vitae; and the university students get exposure to open source, which could potentially lead to interest in further contribution in the future.

My second example involves guerrilla-style usability testing over IRC. I was working with Konstantinos Smanis on the design and development of KGRUBEditor. Unlike most software, which is usually in the maintenance phase, we had the opportunity to design the application from scratch. While we were designing certain interactive components, we were unsure which of two design options was the most intuitive.
Konstantinos coded and packaged dummy prototypes of the two interactive methods while I recruited and interviewed several people on IRC, guiding them through the test scenario and recording their actions and feedback. The results we gathered from the impromptu testing helped us make a decision about which design to use. The IRC testing provided a quick and dirty way of testing interface design ideas in an interactive prototype. However, this method was limited in the type of testing we could do and the amount of feedback we could collect. Remote usability testing provides the benefit of anytime, anywhere, anyone at the cost of high-bandwidth communication with the participant and control over the testing environment. My final example is the case of usability testing with the DC Ubuntu Local Community (LoCo). I developed a short usability testing plan that had participants complete a small task that would take approximately 15 minutes to complete. LoCo members brought a friend or family member to the LoCo’s Ubuntu lab at a local library. Before the testing sessions, I worked with the LoCo members and gave them some tips on how to take their guest through the test scenario. Then, each LoCo member led their guest through the scenario while I took notes about what the participant said and did. Afterward, the LoCo members discussed what they saw in testing, and with assistance, came up with a few key problems they found in the software. The LoCo-based usability test was a great way to involve nontechnical members of the Ubuntu community and provide them an avenue to directly contribute. The drawback to this method is that it takes a lot of planning and coordination: I had to develop a testing plan that was short but provided enough task to get useful data, find a place to test (we were lucky enough to already have an Ubuntu lab), and get enough LoCo members involved to make testing worthwhile. —Celeste Lyn Paul Senior Interaction Architect User-Centered Design, Inc. Although Celeste was largely testing end-user software, the approach that she took was very community-focused. The heart of her approach involved community collaboration, not only to highlight problems in the interface but also to identify better ways of approaching the same task. These same tests should be made against your own community facilities. Consider some of the following topics for these kinds of observational tests: * Ask a member to find something on your website. * Ask a prospective contributor to join the community and find the resources they need. * Ask a member to find a piece of information, such as a bug, message on a mailing list, or another resource. * Ask a member to escalate an issue to a governance council. Each of these different tasks will be interpreted and executed in different ways. By sitting down and watching your community performing these tasks, you will invariably find areas of improvement. == Measuring Mechanics == The lifeblood of communities, and particularly collaborative ones, is communication. It is the flow of conversation that builds healthy communities, but these conversations can and do stretch well beyond mere words and sentences. All communities have collaborative mechanics that define how people do things together. An example of this in software development communities is bugs. Bugs are the defects, problems, and other it-really-shouldn’t-work-thatway annoyances that tend to infiltrate the software development process. 
Every mechanic (method of collaborating) in your community is like a conveyor belt. There is a set of steps and elements that comprise the conversation. When we understand these steps in the conversation, we can often identify hooks that we can use to get data. With this data we can then make improvements to optimize the flow of conversation. Let’s look at our example of bugs to illustrate this. Every bug has a lifeline, and that lifeline is broadly divided into three areas: reporting, triaging, and fixing. Each of these three areas has a series of steps involved. Let’s look at reporting as an example. These are the steps: 1. The user experiences a problem with a piece of software. 2. The user visits a bug tracker in her web browser to report that problem. 3. The user enters a number of pieces of information: a summary, description, name of the software product, and other criteria. 4. When the bug is filed, the user can subscribe to the bug report and be notified of the changes to the bug. Now let’s look at each step again, see which hooks are available and what data we could pull out: 1. There are no hooks in this step. 2. When the user visits the bug tracker in her web browser, the bug tracker could provide data about the number of visitors, what browsers they are using, which operating systems they are on, and other web statistics. 3. We could query the bug tracker for anything that is present in a bug report: how many bugs are in the tracker, how many bugs are in each product, how many bugs are new, etc. 4. We could gather statistics about the number of subscribers for each and which bugs have the most subscribers. So there’s a huge range of possible hooks in just the bug-reporting part of the bug conveyor belt. Let’s now follow the example through with the remaining two areas and their steps and hooks: The following are the triaging steps: 1. A triager looks at a bug and changes the bug status. 2. The triager may need to ask for additional information about the bug. 3. Other triagers add their comments and additional information to help identify the cause of the bug. Triaging hooks: 1. We could use the bug tracker to tell us how many bugs fall into each type of status. This could give us an excellent idea of not only how many bugs need fixing, but also, when we plot these figures on a graph, how quickly bugs are being fixed. 2. Here we can see how often triagers need to ask for further details. We could also perform a search of what kind of information is typically missing from bug reports so we can improve our bug reporting documentation. 3. The bug tracker can tell us many things here: how many typical responses are needed to fix a bug, which people are involved in the bug, and which organizations they are from (often shown in the email address, e.g., billg@microsoft.com). Fixing steps: 1. A community member chooses a bug report in the system and fixes it. This involves changing and testing the code and generating a patch. 2. If the contributor has direct access to the source repository, he commits the patch. Otherwise, the patch is attached to the bug report. 3. The status of the bug is set to FIXED. Fixing hooks: 1. There are no hooks in this step. 2. A useful data point is to count the number of patches either committed or attached to bug reports. Having the delta between these two figures is also useful: if you have many more attached patches, there may be a problem with how easily contributors can get commit access to the source repository. 3. 
When the status is changed, we can again assess the number changes and plot them on a timeline to identify the rate of bug fixes that are occurring. In your community, you should sit down and break down the conveyor belt of each of the mechanics that forms your community. These could be bugs, patches, document collaboration, or otherwise. When you break down the process and identify the steps in the process and the hooks, this helps you take a peek inside your community. Gathering General Perceptions Psychologically speaking, perception is the process in which awareness is generated as a result of sensory information. When you walk into a room and your nose tells you something, your ears tell you something else, and your eyes tell still more, your brain puts the evidence together to produce a perception. Perception occurs in community, too, but instead of physical senses providing the evidence, the day-to-day happenings of the community provide the input. When this evidence is gathered together, it can have a dramatic impact on how engaged and enabled people feel in that community. Even in the closed and frightening world of a prison community, with its constant threat of random violence and tyranny, there are shared perceptions, interestingly between staff and prisoners. Professor Alison Liebling, a world expert on prisons, discovered common cause between staff and prisoners in her Measuring the Quality of Prison Life study, which took place between 2000 and 2001. Liebling invited staff and prisoners to reflect on their best rather than worst experiences and identified broad agreement between staff and prisoners on “what matters” in prison life. She discovered that “staff and prisoners produced the same set of dimensions, suggesting a moral consensus or shared vision of social order and how it might be achieved.” Her work provided a model that described and monitored that which previously MEASURING COMMUNITY 203 appeared impossible to measure: “respect, humanity, support, relationships, trust, and fairness,” which had remained hidden under the traditional radar of government accountability. Perception plays a role in many communities, particularly those online. Some years back I was playing with a piece of software (that shall remain nameless). I spent quite some time setting it up and was more than aware of some of the quirks that were involved in its installation. In the interest of being a good citizen, I thought it could be useful to keep a notepad and scribble down some of the quirks, what I expected, and how the software did and did not meet my expectations. I thought that this would provide some useful real-world feedback about a genuine user installing and using the software. I carefully gathered my notes and when I was done I wrote an email to the software community’s mailing list with my notes. I strived to be as constructive and proactive in my comments as possible: my aim here was not to annoy or insult, but to share and suggest. And thus the onslaught began.... Email after email of short-tempered, antagonistic, and impatient responses came flowing in my general direction. It seemed that I struck a nerve. I was criticized for providing feedback on the most recent stable release and not the unreleased development code in the repository(!), many of my proposed solutions were shot down because they would “make the software too easy” (like that is a bad thing!), and the tone was generally defensive. 
Strangely, I was not perturbed, and I still took an interest in the software and community, but as I dug deeper I found more issues. The developer source repository was very restrictive; the comments in bug reports were equally defensive and antagonistic; the website provided limited (and overtly terse information),;and the documentation had statements such as “if you don’t understand this, maybe you should go somewhere else.” Well, I did. When each of these pieces of evidence combined in my brain, I developed a somewhat negative perception of the community. I felt it was rude, restrictive, cliquey, and unable to handle reasonably communicated constructive criticism. It was perception that drove me to this conclusion, and it was perception that caused me to focus on another community in which my contributions would be more welcome and my life there would be generally happier. Throughout the entire experience there was no explicit statement that the community was “rude, restrictive, cliquey, and unable to handle reasonably communicated constructive criticism.” This was never written, spoken of, or otherwise shared. Measuring perception involves two focus points. On one hand you want to understand the perception of the people inside your community, but you also want to explore the perception of your community from the outside. This is particularly important for attracting new contributors. To measure both kinds of perception, our hooks are people, and we need to have a series of conversations with different people inside and outside our projects to really understand how they feel. As an example, imagine you are a small software project and you have a development team, a documentation team, and a user community. You should spend some time having a social chitchat with a few members in each of those teams. This will help paint a picture for you. Some of the most valuable feedback about perception can happen with so-called “corridor conversations.” These are informal, social, ad hoc conversations that often happen in bars, restaurants, and the corridors of conferences. These conversations typically have no agenda, there are no meeting notes, and they are not recorded. The informal nature of the conversation helps the community member to relax and share her thoughts with you. Perception of you Another important measurement criterion is the perception of you as a person. As a leader you are there to work with and represent your community. Your community will have a perception of you that will be shared among its members. You want to understand that perception and ensure it fairly reflects your efforts. Perception of community leaders is complex, particularly when a leader works for a company to lead the community. As an example, as part of my current role at Canonical as the Ubuntu community manager I work extensively with our community in public, running public projects. There are, however, some internal activities that I focus on. I help the wider company work with the community. I work on Canonical projects that are currently under a NonDisclosure Agreement (NDA). There is also the work I do with my own team, such as building strategy, reviewing objectives, conducting performance reviews, making weekly calls, and more. Many of these internal activities are never seen by the wider community, and as such the community may not be privy to the genuine work that helps the community but is not publicized. Gathering feedback about your performance is hard work. 
It is difficult to gather constructive, honest, and frank feedback, because most people find it impossible to deliver that content to someone directly. Even if you are entirely open to feedback, you need to ensure that the people who are speaking to you feel there will be no repercussions if they offer criticism. You need to work hard to foster an atmosphere of “I welcome your thoughts on how I can improve.” Due to the difficulty of gathering frank feedback, you may want to rely on email to gather it. When we have physical conversations or even discussions on the phone, body language, vocal tone, and enunciation make those conversations feel much more personal. The visceral connection may make it intimidating for your respondent to provide frank and honest feedback (particularly if that involves criticism). Email removes these attributes in the conversation, and this can make gathering this feedback easier. TRANSPARENCY IN PERSONAL FEEDBACK In the continuing interest of building transparency, an excellent method is to be entirely public in letting your community share their feedback about you. As an example, you could write a blog entry asking for feedback and encouraging people to leave comments on the entry, and allow anonymous comments. This is a tremendously open gesture toward your community. It could also be viewed as a tremendously risky gesture. There is a reasonable likelihood that someone could share some negative thoughts about you there, and others may agree. (But that’s also feedback you need to collect!) |
From ''The Art of Community'' by Jono Bacon (O'Reilly; http://www.artofcommunityonline.org)
== Hooks ’n’ Data ==
So far we’ve discussed the importance of gathering feedback and measurements from your community, and that the focal point is the goals that we decided on in our strategic plan. The next step is to build into each goal a feedback loop that can deliver information about our progress on the goal.
This feedback loop is composed of two components—hooks and data:
'''Hooks'''
A hook is a medium or resource from which we can pull useful information about our goal. As an example, if our goal was to reduce crime in a neighborhood, a hook could be local crime reports from the police. The reason I call them hooks is that they are the protruding access points from which we can extract interesting information.
'''Data'''
If a hook is the medium that provides useful information, data is the information itself. Using our previous example of a goal to reduce crime in a neighborhood, the hook (local crime reports) could provide data such as “10 crimes this month.” Data has two attributes: the value itself and its unit of measurement. The kind of unit determines how the data can be presented (e.g., numerical units are great for graphs).
To help understand this further, let’s look at an example. In the Ubuntu community, my team has worked to help increase the number of people who become new developers. In our strategic plan we created an objective to increase the number of community developers and fleshed it out with goals for improving developer documentation, awareness, and education. Each goal had the expected set of actions. For us to effectively track progress on the objective, we needed data about developer growth.
Fortunately, we have access to a system called Launchpad (http://www.launchpad.net), which is where all Ubuntu developers do their work. This system was an enormous hook that we could use to extract data. To do this we gathered a range of types of data:
- The current number of developers (e.g., 50 developers).
- How long new contributions from prospective developers took to be mentored by existing developers (e.g., 1.4 weeks).
- How many of these new contributions are outstanding for mentoring (e.g., 23 contributions).
Launchpad had all of this information available. Using some computer programs created by Daniel Holbach, we could extract the data. This allowed us to track not only the current number of developers but also how quickly progress was being made: we knew that if the number of developers was regularly growing, we were making progress. We could also use this data to assess the primary tool that new developers use to participate in Ubuntu: the queue of new contributions to be mentored. When a new developer wants to contribute, she adds her contribution to this queue. Our existing developers then review the item, provide feedback, and if it is suitable, commit it.
By having data on the average time something sits on that queue as well as the number of outstanding items, we could (a) set reasonable expectations, and (b) ensure that that facility was working as well as possible.
In this example, Launchpad was a hook. Using it required specialist knowledge of how to physically grab the data we needed: a script was written in Python that used the Launchpad API to gather the data, which was then formatted as HTML for viewing.
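As an illustration, here is a minimal sketch of that kind of script. It is not the actual script; the team name and the participants attribute are assumptions based on the public Launchpad API:

{{{
# A minimal sketch (not the actual script): log in to Launchpad
# anonymously, count the members of a developer team, and write
# the data point out as a fragment of HTML.
# "ubuntu-dev" is a hypothetical team name used for illustration.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously("dev-stats", "production")
team = lp.people["ubuntu-dev"]

# "participants" is assumed to list direct and indirect team members.
count = len(team.participants)

with open("developer-count.html", "w") as report:
    report.write("<p>Current number of developers: <b>%d</b></p>\n" % count)
}}}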
Launchpad was an obvious hook, but not the only one. Although Launchpad could provide excellent numbers, it could not give us personal perspectives and opinions. What were the thoughts, praise, concerns, and other views about our developer processes and how well they worked? More specifically, how easy was it to get approved as an Ubuntu developer? To gather this feedback, our hook was a developer survey designed for prospective and new developers. We could direct this survey to another hook: the list of the most recently approved developers and their contact details. This group of people would be an excellent source of feedback, as they had just been through the developer approval process and it would be fresh in their minds.
With so many hooks available to communities, I obviously cannot cover the specific details of how to use them. This would turn The Art of Community into War and Peace—complete with tragic outcome (at least for the author). Fortunately, the specifics are not of interest, as all hooks can be broadly divided into three categories:
'''Statistics and automated data'''
Hooks in this category primarily deal with numbers, and numbers can be automatically manipulated into statistics.
'''Surveys and structured feedback'''
These hooks primarily deal with words and sentences and methods of gathering them.
'''Observational tests'''
These hooks are visual observations that can provide insight into how people use things.

Let’s take a walk through the neighborhood of each of these hooks and learn a little more about them.
== Statistics and Automated Data ==
People have a love/hate relationship with statistics. Gregg Easterbrook in The New Republic said, “Torture numbers, and they’ll confess to anything.” Despite the cynicism that surrounds statistics, they turn up insistently on television, in newspapers, on websites, and even in general pub and restaurant chitchat. The problem with the general presentation of statistics is that the numbers are often used to make the point itself instead of being an indicator of a wider conclusion.
Statistics are merely indicators. They are the metaphorical equivalent to the numbers and gauges on the dashboard of a car: no single reading can advise on the health of the car. The gauges, along with the sound of the car itself, the handling, look and feel, and smell of burning rubber all combine to give you an indication that your beloved motor may be under the weather. Despite the butchered reputation of statistics, they can offer us valuable insight into the status quo of our community. Statistics can provide hard evidence of how aspects of your community are functioning.
Many hooks can deliver numerical data. A few examples:
- Forums and mailing lists can deliver the number of posts and number of members (see the sketch after this list).
- Your website can deliver the number of visitors and downloads.
- Your meeting notes can deliver the number of participants and number of topics discussed.
- Your development tools can deliver the number of lines of code written, number of commits made to the source repository, and number of developers.
- Your wiki can deliver the number of users and number of pages.
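To illustrate the first item, here is a minimal sketch that mines a mailing list archive in mbox format; the filename is hypothetical:

{{{
# Count posts and distinct posters in a mailing list archive.
# "discuss.mbox" is a hypothetical export from your list manager.
import mailbox

archive = mailbox.mbox("discuss.mbox")
posters = {message["From"] for message in archive}

print("Total posts:", len(archive))
print("Distinct posters:", len(posters))
}}}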
For us to get the most out of statistics, we need to understand the mechanics of our community and which hooks can deliver data from those mechanics. We will discuss how to find hooks from these mechanics later in this chapter.
=== The risks of interpretation ===
Although statistics can provide compelling documentation of the current status quo of your community, they require skill to be interpreted properly. A great example of this is forum posts. Many online communities use discussion forums, the online message boards in which you can post messages to a common topic (known in forums parlance as a thread). Within most forums there is one statistic that everyone seems to have something of a love affair with: the total number of posts made by each user.
It’s easy to see how people draw this conclusion. If you have three users, one with 2 posts, one with 200 posts, and one with 2,000 posts, it’s tempting to believe that the user with 2,000 posts has more insight, experience, and wisdom. Many forums leap aboard this perspective and provide labels based upon the number of posts. As an example, a forum could have these labels:
- 0–100 posts: New to the Forum
- 101–500 posts: On the Road to Greatness
- 501–1,500 posts: Regular Hero
- 1,501–3,000 posts: Dependable Legend
- 3,001+ posts: Expert Ninja
As an example, if I had 493 posts, this would give me the “On the Road to Greatness” label, but if I had 2,101 posts, I would have the “Dependable Legend” label. These labels and the post-count statistic are great for pumping up the members, but they offer little insight in terms of quality.
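For illustration, here is a minimal sketch of how those label thresholds boil down to a simple lookup, assuming the ranges listed above:

{{{
# Map a post count to the example forum labels above.
def forum_label(posts):
    if posts <= 100:
        return "New to the Forum"
    if posts <= 500:
        return "On the Road to Greatness"
    if posts <= 1500:
        return "Regular Hero"
    if posts <= 3000:
        return "Dependable Legend"
    return "Expert Ninja"

print(forum_label(493))   # On the Road to Greatness
print(forum_label(2101))  # Dependable Legend
}}}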
Quantity is rarely an indicator of quality; if it were, spammers would be the definition of email quality. When you are gathering statistics, you will be regularly faced with a quantity versus quality issue, but always bear in mind that quality is determined by the specifics of an individual contribution as opposed to the amalgamated set of contributions. What quantity really teaches us is experience. No one can deny that someone with 1,000 forum posts has keen experience of the forum, but it doesn’t necessarily reflect on the quality of his opinion and insight.
=== Plugging your stats into graphs ===
Stats with no presentation are merely a list of numbers. When articulated effectively, though, statistics can exhibit the meaning that we strive for. This is where graphs come into play. Graphs are an excellent method of displaying lots of numerical information and avoiding boring the pants off either (a) yourself, or (b) other people.

Let’s look at an example. Earlier we talked about a project to increase the number of community developers in Ubuntu, and one piece of data we gathered was the current number of community developers who had been approved. This is of course a useful piece of information, and as the number climbs it helps indicate that we are achieving our goals. What that single number does not teach us, though, is how quickly we are achieving our goal. Imagine that we had 50 developers right now and we wanted to increase that figure by 20% a year. This would mean we would need to find five developers in the next six months, which works out at approximately one developer per month. If we want to encourage this consistency of growth, we need not only to look at the number of current developers once, but also track it over time so we can see if we are on track to achieve our 20% target.

Using this example in the Ubuntu world, we could use Launchpad to take a regular snapshot of the number of current developers, plot it on a graph, and draw a line between the dots. This could give us a growth curve of new developers joining the project.
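Here is a minimal sketch of how such a growth curve might be plotted; the snapshot data is made up, and matplotlib is just one of many graphing options:

{{{
# Plot monthly snapshots of the developer count as a growth curve,
# with the six-month target (50 + 5 = 55 developers) as a reference.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
developers = [50, 51, 51, 52, 54, 55]  # made-up sample snapshots

plt.plot(months, developers, marker="o", label="Approved developers")
plt.axhline(55, linestyle="--", color="gray", label="Six-month target")
plt.ylabel("Developers")
plt.title("Community developer growth")
plt.legend()
plt.savefig("developer-growth.png")
}}}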
Another handy benefit of graphs is to show the impact of specific campaigns on your community. On my team at Canonical we have a graph that shows the current number of bugs in Ubuntu. On the graph is a line that shows the current number of bugs for each week. As you can imagine, the line that connects these numbers shows a general curve of our bug performance. This line is generally fairly consistent. Each cycle, we have a special event called the Ubuntu Global Bug Jam (http://wiki.ubuntu.com/UbuntuGlobalJam) in which our community comes together to work on bugs. Our local user groups organize bug-squashing parties, and there are online events and other activities that are all based around fixing bugs. Interestingly, each time we do the event, we see a drop in the number of bugs on our graph for the days that the Global Bug Jam happens. This is an excellent method of assessing the impact of the event on our bug numbers.
'''TECHNICAL TIP'''
You may be wondering how you can gather data from various hooks and display them in a graph automatically. I just wanted to share a few tips. If this seems like rocket science to you, I recommend that you seek advice from someone who is familiar with these technologies. Gathering data from hooks is hugely dependent on the hook. Fortunately, many online services offer an application programming interface (API) that can be used by a program to gather the data. This will require knowledge of programming. Many programming languages, such as Python and Perl, make it simple to get data through the API.
Another approach with hooks is to screen scrape. This is the act of downloading a web page and figuring out which text on the page holds the data. This is useful if an API is not available. For graphing, many tools can ease the work once the data is available, including Cricket (http://cricket.sourceforge.net/), and of course you could load the data into a spreadsheet via a comma-separated value (CSV) file if required.
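Here is a minimal sketch of the screen-scraping approach; the URL and markup pattern are hypothetical, so adjust the regular expression to match whatever page you actually scrape:

{{{
# Screen scrape a hypothetical forum statistics page and append
# today's post count to a CSV file for graphing or spreadsheets.
import csv
import datetime
import re
import urllib.request

page = urllib.request.urlopen("http://forum.example.org/stats")
html = page.read().decode("utf-8")

# Hypothetical markup: the page contains "Total posts: 12,345".
match = re.search(r"Total posts:\s*([\d,]+)", html)
posts = int(match.group(1).replace(",", ""))

with open("forum-posts.csv", "a", newline="") as f:
    csv.writer(f).writerow([datetime.date.today().isoformat(), posts])
}}}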
== Surveys and Structured Feedback ==
Surveys are an excellent method of taking the pulse of your community. For us, they are simple to set up, and for our audience, they are simple to use. I have used surveys extensively in my career, and each time they have provided me and my teams with excellent feedback. Over the next few pages I want to share some of the experience I have picked up in delivering effective surveys.
The first step is to determine the purpose of the survey. What do you want to achieve with it? What do you need to know? Every survey needs to have a purpose, and it is this purpose that will help you craft a useful set of questions that should generate an even more useful set of data.
'''NOTE'''
Avoid creating a survey just for the sake of having a survey. Only create one if there is an unanswered question in your head. Surveys are tools to help you understand your community better: use them only when there is a purpose. Examples could include understanding the perception of a specific process, identifying common patterns of behavior in communication channels, and learning which resources are used more than others.
Again, your goals from your strategic plan are a key source of purpose for your surveys. As an example, if your goal is “increase the number of contributors in the community,” you should break down the workflow of how people join your community, and produce a set of questions that test each step in this workflow. You can use the feedback from the answers to gauge whether your workflow is effective and use the data as a basis for improvements.
=== Choosing questions ===
When deciding on questions, you should be conscious of one simple fact: everyone hates filling in surveys. When someone does agree to participate in your survey, you need to gather that person’s feedback as quickly and easily as possible. This should take no longer than five minutes, so I recommend you use no more than 10 questions, giving the respondent an average of 30 seconds to answer each question.

The vast majority of surveys have questions with multiple-choice ratings for satisfaction. Most of us are familiar with these: you are given a satisfaction scale from 1 (awful) to 5 (excellent) and are expected to select the appropriate satisfaction grade for each question. Surveys like this are simple and effective.
'''THE VARIANCE OF THE VOTE'''
Ratings are a funny beast, and everyone interprets them differently. A great example of this is the employee performance reviews that so many of us are familiar with. In one organization I have worked at, the scale ranged from 1 (unacceptable) to 5 (outstanding). I did a small straw poll of how different people interpreted the grading system, and the views varied tremendously:
- Some felt that if 1 is unacceptable and 5 is outstanding, then 3 would be considered acceptable, and if staff completed their work as contractually expected, a 3 would be a reasonable score.
- Some others felt that meeting contractually agreed upon standards would merit a 5 on the scale, and that 3 would indicate significant, if tolerable, lapses.
- Interestingly, some people informed me that they would never provide a 5, as they felt there was always room for improvement.
When people fill in your survey, you will get an equally varied set of expectations around the ratings. You should factor this variation of responses into your assessment of the results. One way to do this is to add up the responses from each person and scale them proportionally so that each person’s total adds up to the same number of points, as sketched below. But this may not be valid if someone legitimately had a wonderful or horrific experience across the board.
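Here is a minimal sketch of that proportional adjustment, using made-up ratings on a 1–5 scale:

{{{
# Scale each respondent's ratings so every respondent's total is the
# same, damping habitual high or low scorers. Sample data is made up.
responses = {
    "alice": [3, 4, 3, 5],
    "bob":   [5, 5, 4, 5],  # habitual high scorer
    "carol": [2, 3, 2, 3],  # habitual low scorer
}

target_total = 16.0

for person, scores in responses.items():
    factor = target_total / sum(scores)
    adjusted = [round(score * factor, 2) for score in scores]
    print(person, adjusted)
}}}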
When writing your questions, you need to ensure that they are simple, short, and specific enough that your audience will not have any uncertainty about what you are asking. When people are confronted with unclear questions in surveys, they tend to simply give up or pick a random answer. Obviously both of these are less-than-stellar outcomes. Let’s look at an example of a bad question:
Do you like our community?
Wow, how incredibly unspecific. Which aspect of the community are we asking about? What exactly does “like” mean? Here is an example of a much better question:

Did you receive enough help and assistance from the mailing list to help you join the community successfully?
This is more detailed, easier to understand, and therefore easier to answer. It’s no coincidence that the results are more immediately applicable to making useful changes in the community. Using the previous example of a survey to track progress on the goal of increasing the number of contributors, here are some additional example questions:
How clear was the New Contributor Process to you?
How suitable do you feel the requirements are to join the community?
How useful was the available documentation for joining the community?
How efficiently do you feel your application was tended to?
Each of these asks a specific question about your community and the different processes involved.
=== Showing off your survey reports ===
Earlier, when we talked about statistics, we also explored the benefits of using graphs for plotting numerical feedback. We could feed the data directly into the graph, and the findings are automatically generated. This makes the entire process of gathering statistics easy: we can automate the collection of the data from the hook (such as regularly sucking out the data) and then the presentation of the data (regularly generating the graph). Unfortunately, this is impossible when dealing with feedback provided in words, sentences, and paragraphs. A person has to read and assess the findings and then present them in a report. It is this report that we can present to our community as a source for improving how we work.

Readers have priorities when picking up your report. No one wants to read through reams and reams of text to find a conclusion: they want to read the conclusion up front and optionally read the details later. I recommend that you structure your survey findings reports as follows:
1. Present a broad conclusion, a paragraph that outlines the primary revelation that we can take away from the entire survey. For example, this could be “developer growth is slower than expected and needs to improve.” It is this broad conclusion that will inspire people to read the survey. Do bear one important thing in mind, though: don’t turn the conclusion into an inaccurate, feisty headline just for the purposes of encouraging people to read the survey. That will just annoy your readers and could lead to inaccurate buzz that spirals out of your control, both within and outside your community.
2. Document the primary findings as a series of bullet points. These findings don’t necessarily need to be the findings for each question, but instead the primary lessons to be learned from the entire survey. It is these findings that your community will take as the meat of the survey. They should be clear, accurate, and concise.
3. You should present a list of recommended actions that will improve on each of the findings. Each of these actions should have a clear correlation with the findings that your survey presented. The reader should be able to clearly identify how an action will improve the current situation. One caveat, though: not all reports can present action items. Sometimes a factual finding does not automatically suggest an action item; it may take negotiation and discussion for leaders to figure out the right action.
4. Finally, in the interest of completeness, you should present the entire set of data that you received in the survey. This is often useful as an addendum or appendix to the preceding information, and it is a particularly good place to present non-multiple-choice answers (written responses).
When you have completed your survey and documented these results, you should ensure they are available to the rest of your community. Sharing these results with the community is (a) a valuable engagement in transparency, (b) a way of sharing the current status quo of the community with everyone, and (c) an opportunity to encourage others to fix the problems or seek the opportunities that the survey uncovers.
To do this, you should put the report on your website. Ensure you clearly label the date on which the results were taken. This will make it clear to your readers that the results were a snapshot of that point in the history of your community. If you don’t put a date, your community will assume the results are from today. When you put the results online, you should notify your community through whatever communication channels are in place, such as mailing lists, online chat channels, forums, websites, and more.
=== Documented Results are Forever ===
Before we move on, I just want to ensure we are on the same page (pun intended) about documenting your results. When you put the results of your survey online, you should never go back and change them. Even if you work hard to improve the community, the results should be seen as a snapshot of your community. You should ensure that you include with the results the date that they were taken so this is clear.
== Observational Tests ==
When trying to measure the effectiveness of a process, an observational test can be one of the most valuable approaches. This is where you simply sit down and watch someone interact with something and make notes on what the person does. Often this can uncover nuances that can be improved or refined. This is something that my team at Canonical has engaged in a number of times. As part of our work in refining how the community can connect bugs in Ubuntu to bugs that live upstream, I wanted to get a firm idea of the mechanics of how a user links one bug to another. I was specifically keen to learn if there were any quirks in the process that we could ease. If we could flatten the process out a little, we could make it easier for the community to participate.
To do this, we sat down and watched a contributor working with bugs. We noted how he interacted with the bug tracker, what content he added, where he made mistakes, and other elements. This data gave us a solid idea of areas of redundancy in how he interacted with a community facility.
What Jorge on my team did here was user-based testing, more commonly known as usability testing. This is a user-centered design method that helps evaluate software by having real people use it and provide feedback. By simply sitting a few people in front of your software and having them try it out, usability testing can provide valuable feedback for a design before too much is invested in coding a bad solution.
Usability testing is important for two reasons.
The most obvious is that it gets us feedback from a lot of real users, all doing the same thing. Even though we aren’t necessarily looking for statistical significance, recognizing usage patterns can help the designer or developer begin thinking about how to solve the problem in a more usable way.
The second reason is that usability testing, when done early in the development cycle, can save a lot of community resources. Catching usability problems in the design phase can save development time normally lost to rewriting a bad component. Catching usability problems early in a release cycle can preempt bug submissions and save time triaging. This is on top of the added benefit that many users may never experience such usability issues, because they are caught and fixed so early.
Open source is a naturally user-centered community. We rely on user feedback to help test software and influence future development directions. A weakness of traditional usability testing is that it takes a lot of time to plan and conduct a formal laboratory test. With the highly iterative and aggressive release cycles some open source projects follow, it is sometimes difficult to provide a timely report on usability testing results. Some examples of projects that overcame problems in timing and cost appear in the accompanying sidebar (“Examples of Low-Budget, Rigorous Usability Tests”) by Celeste Lyn Paul, a senior interaction architect at User-Centered Design, Inc. She helps make software easier to use by understanding the user’s work processes and designing interactive systems to fit their needs. She is also involved in open source software and leads the KDE Usability Project, mentors for the OpenUsability Season of Usability program, and serves on the Kubuntu Council.
'''Examples of Low-Budget, Rigorous Usability Tests'''
There are some ways you can make usability testing work in the open source community. Throughout my career in open source, I have run a number of usability tests, and not all have been the conventional laboratory-based testing you often think of when you hear “usability test.” These three examples help describe the different ways usability testing can be conducted and how it can fit into the open source community.
My first example is the usability testing of the Kubuntu version of Ubiquity, the Ubuntu installer. This usability test was organized as a graduate class activity at the University of Baltimore. I worked with the students to design a research plan, recruit participants, run the test, and analyze the results. Finally, all of the project reports were collated into a single report, which was presented to the Ubuntu community. The timing of the test was aligned with a recent release and development summit, and so even though the logistics of the usability test spanned several weeks, the results provided to the Ubuntu community were timely and relevant.
Although this is the rarer way to organize open source usability testing, involving university students provides three key benefits. The open source project benefits from a more formal usability test, which is otherwise difficult to obtain; the university students get experience testing a real product, which looks good on a curriculum vitae; and the university students get exposure to open source, which could potentially lead to interest in further contribution in the future.
My second example involves guerilla-style usability testing over IRC. I was working with Konstantinos Smanis on the design and development of KGRUBEditor. Unlike most software, which is usually already in the maintenance phase, we had the opportunity to design the application from scratch. While we were designing certain interactive components, we were unsure which of two design options was the more intuitive. Konstantinos coded and packaged dummy prototypes of the two interactive methods while I recruited and interviewed several people on IRC, guiding them through the test scenario and recording their actions and feedback. The results we gathered from the impromptu testing helped us make a decision about which design to use. The IRC testing provided a quick and dirty way of testing interface design ideas in an interactive prototype. However, this method was limited in the type of testing we could do and the amount of feedback we could collect. Remote usability testing provides the benefit of anytime, anywhere, anyone at the cost of high-bandwidth communication with the participant and control over the testing environment.
My final example is the case of usability testing with the DC Ubuntu Local Community (LoCo). I developed a short usability testing plan that had participants complete a small task taking approximately 15 minutes. LoCo members brought a friend or family member to the LoCo’s Ubuntu lab at a local library. Before the testing sessions, I worked with the LoCo members and gave them some tips on how to take their guest through the test scenario. Then, each LoCo member led their guest through the scenario while I took notes about what the participant said and did. Afterward, the LoCo members discussed what they saw in testing, and with assistance, came up with a few key problems they found in the software.
The LoCo-based usability test was a great way to involve nontechnical members of the Ubuntu community and provide them an avenue to directly contribute. The drawback to this method is that it takes a lot of planning and coordination: I had to develop a testing plan that was short but provided enough tasks to get useful data, find a place to test (we were lucky enough to already have an Ubuntu lab), and get enough LoCo members involved to make testing worthwhile.

—Celeste Lyn Paul, Senior Interaction Architect, User-Centered Design, Inc.
Although Celeste was largely testing end-user software, the approach that she took was very community-focused. The heart of her approach involved community collaboration, not only to highlight problems in the interface but also to identify better ways of approaching the same task. These same tests should be made against your own community facilities. Consider some of the following topics for these kinds of observational tests:
- Ask a member to find something on your website.
- Ask a prospective contributor to join the community and find the resources they need.
- Ask a member to find a piece of information, such as a bug, message on a mailing list, or another resource.
- Ask a member to escalate an issue to a governance council.
Each of these different tasks will be interpreted and executed in different ways. By sitting down and watching your community performing these tasks, you will invariably find areas of improvement.
== Measuring Mechanics ==
The lifeblood of communities, and particularly collaborative ones, is communication. It is the flow of conversation that builds healthy communities, but these conversations can and do stretch well beyond mere words and sentences. All communities have collaborative mechanics that define how people do things together. An example of this in software development communities is bugs. Bugs are the defects, problems, and other it-really-shouldn’t-work-that-way annoyances that tend to infiltrate the software development process.
Every mechanic (method of collaborating) in your community is like a conveyor belt. There is a set of steps and elements that comprise the conversation. When we understand these steps in the conversation, we can often identify hooks that we can use to get data. With this data we can then make improvements to optimize the flow of conversation.
Let’s look at our example of bugs to illustrate this. Every bug has a lifeline, and that lifeline is broadly divided into three areas: reporting, triaging, and fixing. Each of these three areas has a series of steps involved. Let’s look at reporting as an example. These are the steps:
1. The user experiences a problem with a piece of software.
2. The user visits a bug tracker in her web browser to report that problem.
3. The user enters a number of pieces of information: a summary, description, name of the software product, and other criteria.
4. When the bug is filed, the user can subscribe to the bug report and be notified of the changes to the bug.
Now let’s look at each step again to see which hooks are available and what data we could pull out:
1. There are no hooks in this step.
2. When the user visits the bug tracker in her web browser, the bug tracker could provide data about the number of visitors, what browsers they are using, which operating systems they are on, and other web statistics.
3. We could query the bug tracker for anything that is present in a bug report: how many bugs are in the tracker, how many bugs are in each product, how many bugs are new, etc.
4. We could gather statistics about the number of subscribers for each bug and which bugs have the most subscribers.

So there’s a huge range of possible hooks in just the bug-reporting part of the bug conveyor belt. Let’s now follow the example through with the remaining two areas and their steps and hooks:
The following are the triaging steps:
1. A triager looks at a bug and changes the bug status.
2. The triager may need to ask for additional information about the bug.
3. Other triagers add their comments and additional information to help identify the cause of the bug.
Triaging hooks:
1. We could use the bug tracker to tell us how many bugs fall into each type of status. This could give us an excellent idea of not only how many bugs need fixing, but also, when we plot these figures on a graph, how quickly bugs are being fixed.
2. Here we can see how often triagers need to ask for further details. We could also perform a search of what kind of information is typically missing from bug reports so we can improve our bug reporting documentation.
3. The bug tracker can tell us many things here: how many typical responses are needed to fix a bug, which people are involved in the bug, and which organizations they are from (often shown in the email address, e.g., billg@microsoft.com).
Fixing steps:
1. A community member chooses a bug report in the system and fixes it. This involves changing and testing the code and generating a patch.
2. If the contributor has direct access to the source repository, he commits the patch. Otherwise, the patch is attached to the bug report.
3. The status of the bug is set to FIXED.

Fixing hooks:

1. There are no hooks in this step.
2. A useful data point is to count the number of patches either committed or attached to bug reports. The delta between these two figures is also useful: if you have many more attached patches, there may be a problem with how easily contributors can get commit access to the source repository.
3. When the status is changed, we can again assess the number of changes and plot them on a timeline to identify the rate of bug fixes that are occurring.

In your community, you should sit down and break down the conveyor belt of each of the mechanics that forms your community. These could be bugs, patches, document collaboration, or otherwise. Breaking down each process and identifying its steps and hooks helps you take a peek inside your community.
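As a small sketch of the status hook described above, assuming your tracker can export bugs as a CSV file (the filename and column name are hypothetical):

{{{
# Tally bugs by status from a hypothetical CSV export of the tracker.
# Run regularly and plotted over time, these counts show how quickly
# bugs move through the conveyor belt.
import csv
from collections import Counter

with open("bugs.csv", newline="") as f:
    statuses = Counter(row["status"] for row in csv.DictReader(f))

for status, count in statuses.most_common():
    print(status, count)
}}}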
== Gathering General Perceptions ==

Psychologically speaking, perception is the process in which awareness is generated as a result of sensory information. When you walk into a room and your nose tells you something, your ears tell you something else, and your eyes tell still more, your brain puts the evidence together to produce a perception. Perception occurs in community, too, but instead of physical senses providing the evidence, the day-to-day happenings of the community provide the input. When this evidence is gathered together, it can have a dramatic impact on how engaged and enabled people feel in that community.

Even in the closed and frightening world of a prison community, with its constant threat of random violence and tyranny, there are shared perceptions, interestingly between staff and prisoners. Professor Alison Liebling, a world expert on prisons, discovered common cause between staff and prisoners in her Measuring the Quality of Prison Life study, which took place between 2000 and 2001. Liebling invited staff and prisoners to reflect on their best rather than worst experiences and identified broad agreement between staff and prisoners on “what matters” in prison life. She discovered that “staff and prisoners produced the same set of dimensions, suggesting a moral consensus or shared vision of social order and how it might be achieved.” Her work provided a model that described and monitored what previously appeared impossible to measure: “respect, humanity, support, relationships, trust, and fairness,” which had remained hidden under the traditional radar of government accountability.

Perception plays a role in many communities, particularly those online. Some years back I was playing with a piece of software (that shall remain nameless). I spent quite some time setting it up and was more than aware of some of the quirks that were involved in its installation. In the interest of being a good citizen, I thought it could be useful to keep a notepad and scribble down some of the quirks, what I expected, and how the software did and did not meet my expectations. I thought that this would provide some useful real-world feedback about a genuine user installing and using the software. I carefully gathered my notes, and when I was done I wrote an email to the software community’s mailing list with my notes. I strived to be as constructive and proactive in my comments as possible: my aim here was not to annoy or insult, but to share and suggest.

And thus the onslaught began.... Email after email of short-tempered, antagonistic, and impatient responses came flowing in my general direction. It seemed that I had struck a nerve. I was criticized for providing feedback on the most recent stable release and not the unreleased development code in the repository(!), many of my proposed solutions were shot down because they would “make the software too easy” (like that is a bad thing!), and the tone was generally defensive.

Strangely, I was not perturbed, and I still took an interest in the software and community, but as I dug deeper I found more issues. The developer source repository was very restrictive; the comments in bug reports were equally defensive and antagonistic; the website provided limited (and overly terse) information; and the documentation had statements such as “if you don’t understand this, maybe you should go somewhere else.” Well, I did.

When each of these pieces of evidence combined in my brain, I developed a somewhat negative perception of the community. I felt it was rude, restrictive, cliquey, and unable to handle reasonably communicated constructive criticism. It was perception that drove me to this conclusion, and it was perception that caused me to focus on another community in which my contributions would be more welcome and my life there would be generally happier. Throughout the entire experience there was no explicit statement that the community was “rude, restrictive, cliquey, and unable to handle reasonably communicated constructive criticism.” This was never written, spoken of, or otherwise shared.

Measuring perception involves two focus points. On one hand you want to understand the perception of the people inside your community, but you also want to explore the perception of your community from the outside. This is particularly important for attracting new contributors.
To measure both kinds of perception, our hooks are people, and we need to have a series of conversations with different people inside and outside our projects to really understand how they feel. As an example, imagine you are a small software project and you have a development team, a documentation team, and a user community. You should spend some time having a social chitchat with a few members in each of those teams. This will help paint a picture for you.

Some of the most valuable feedback about perception can happen with so-called “corridor conversations.” These are informal, social, ad hoc conversations that often happen in bars, restaurants, and the corridors of conferences. These conversations typically have no agenda, there are no meeting notes, and they are not recorded. The informal nature of the conversation helps the community member to relax and share her thoughts with you.
=== Perception of you ===

Another important measurement criterion is the perception of you as a person. As a leader you are there to work with and represent your community. Your community will have a perception of you that will be shared among its members. You want to understand that perception and ensure it fairly reflects your efforts.

Perception of community leaders is complex, particularly when a leader works for a company to lead the community. As an example, as part of my current role at Canonical as the Ubuntu community manager, I work extensively with our community in public, running public projects. There are, however, some internal activities that I focus on. I help the wider company work with the community. I work on Canonical projects that are currently under a Non-Disclosure Agreement (NDA). There is also the work I do with my own team, such as building strategy, reviewing objectives, conducting performance reviews, making weekly calls, and more. Many of these internal activities are never seen by the wider community, and as such the community may not be privy to the genuine work that helps the community but is not publicized.

Gathering feedback about your performance is hard work. It is difficult to gather constructive, honest, and frank feedback, because most people find it impossible to deliver that content to someone directly. Even if you are entirely open to feedback, you need to ensure that the people who are speaking to you feel there will be no repercussions if they offer criticism. You need to work hard to foster an atmosphere of “I welcome your thoughts on how I can improve.” Due to the difficulty of gathering frank feedback, you may want to rely on email to gather it. When we have physical conversations or even discussions on the phone, body language, vocal tone, and enunciation make those conversations feel much more personal. The visceral connection may make it intimidating for your respondent to provide frank and honest feedback (particularly if that involves criticism). Email removes these attributes in the conversation, and this can make gathering this feedback easier.
'''TRANSPARENCY IN PERSONAL FEEDBACK'''
In the continuing interest of building transparency, an excellent method is to be entirely public in letting your community share their feedback about you. As an example, you could write a blog entry asking for feedback and encouraging people to leave comments on the entry, and allow anonymous comments. This is a tremendously open gesture toward your community. It could also be viewed as a tremendously risky gesture. There is a reasonable likelihood that someone could share some negative thoughts about you there, and others may agree. (But that’s also feedback you need to collect!)