Idealware Metrics Part Two: Hunting Down the Data
Someone once advised me, "If you're going hunting for ducks, you had better know what a duck looks like." Similarly, before you go collecting and organizing data, make sure you know what sort of data you ought to be looking for--data that aligns with your metrics. If you haven't figured that out yet, back up and visit Part One of this series. If you have figured that out already, go on and read this post. Here I will air some of Idealware's dirty laundry with no apologies.
Challenges in Gathering Our Data
After developing our theory of change and key metrics, Idealware was ready to gather some data. It wasn't easy. Here were some of the challenges:
- The data we wanted was in multiple systems.
- Some of it was unstructured.
- We haven't been 100 percent consistent in data entry or taxonomy.
- I am not proficient at using one of our key data management systems.
I'll use the examples from last time to illustrate how this played out.
Example One: Audience Data
First, remember we wanted to use budget size and type of organization to measure whether we are reaching the people our theory of change says we set out to serve. That information was collected in two places: our annual audience survey (anonymously), and on our online registration pages. I still haven't figured out how to reconcile the two. We also have selection bias issues, since we only have this information on our more engaged audience members, and not on the hundreds of thousands of people who read our blog posts and articles without registering. That's a potential weakness. What's more, we have not asked the questions consistently over time. Still, we had some useful data already on hand. Once I got help building a report, I had it at my fingertips any time I wanted it.
The results: Idealware is reaching our target population and also providing some benefit to our service population. (That's good!)
Example Two: Training Results Data
Second, we wanted to use percent improvement on a knowledge scale to measure whether our training program results in people gaining knowledge and confidence. This data all came from our survey tool. I created a spreadsheet file with a sheet for each course and dumped the data in. I looked at how people rated their knowledge and confidence with key learning goals in a self-assessment before and after the course. Then I did some Excel gymnastics. I calculated the percent change on each learning goal, the overall average improvement for the course, and the overall average improvement for the year.
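For readers who want to replicate the spreadsheet math, here is a minimal Python sketch of the same calculation. The course names, scores, and rating scale below are hypothetical placeholders, not Idealware's actual data:

```python
# Hypothetical pre/post self-assessment scores (1-5 scale),
# one entry per learning goal for each course.
course_scores = {
    "Course A": {"pre": [2.8, 3.1, 2.5], "post": [3.5, 3.7, 3.1]},
    "Course B": {"pre": [3.0, 2.6], "post": [3.4, 3.2]},
}

def percent_change(pre, post):
    """Percent improvement from the pre-course to the post-course rating."""
    return (post - pre) / pre * 100

# Average improvement across all learning goals, per course.
course_averages = {}
for course, scores in course_scores.items():
    changes = [percent_change(pre, post)
               for pre, post in zip(scores["pre"], scores["post"])]
    course_averages[course] = sum(changes) / len(changes)

# Overall average improvement for the year, across courses.
year_average = sum(course_averages.values()) / len(course_averages)
```

The same three numbers come out of this as out of the spreadsheet: percent change per goal, an average per course, and an average for the year.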
The results: On average, participants improve 20% on the learning goals after completing an Idealware online course. (Given that webinars are notoriously boring and ineffective, I think that's pretty amazing!)
Example Three: Audience Perception Data
Third, we wanted to use audience opinions to measure whether we are delivering on our promise of impartial, research-based, accessible resources. We get a lot of emails, phone calls, and survey comments--unstructured data that we don't have a good way to analyze (yet). But setting that aside, this metric was the easiest one. We asked a set of questions about perceptions of Idealware on our annual audience survey. We made sure that the 2016 and 2017 questions were identical, so that we could see trends over time.
The results: In 2017, 79% of respondents agreed or strongly agreed that Idealware resources are impartial, 63% that they are accurate, and 80% that they are easy to understand.
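Tallying this kind of result from raw survey exports is straightforward. Here is a small sketch, assuming a column of Likert-style responses for one question (the responses shown are made up for illustration):

```python
from collections import Counter

# Hypothetical responses to one perception question,
# e.g. "Idealware resources are impartial."
responses = [
    "Strongly agree", "Agree", "Neutral",
    "Agree", "Disagree", "Strongly agree",
]

counts = Counter(responses)

# Percent who agreed or strongly agreed -- the figure reported above.
agree_pct = (counts["Agree"] + counts["Strongly agree"]) / len(responses) * 100
```

Running the same tally on identical questions from two survey years makes the year-over-year comparison a simple subtraction.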
I would like to see even stronger results on this third metric. What can we do about that? I'll address that, and other ways to make use of the data, in Part Three.