When thinking about the effectiveness of your digital communications, where do you look to answer the question “is it working?”
Do you focus on your database, which can provide a comprehensive picture of someone’s lifetime engagement with your organization? Or do you turn to a web analytics app that traces specific clicks and visits through to different conversions?
In an ideal scenario—and in the vision sold by CRM and marketing automation platforms—we can trace an entire journey from initial touchpoints as an unidentified website visitor all the way to quantified lifetime value (or as we prefer to frame it at Pedal Lucid, lifetime engagement).
The reality is that it’s hard to do this comprehensively and trying to do so can become resource-intensive. There are absolutely scenarios where this can work consistently; the risk is putting too much emphasis on the value of one unbroken view and overlooking other dimensions that are lower effort to build and maintain.
Another way to think about this: unpack the big “is it working” question into more specific, digestible chunks. It’s tempting to push for a single, “very good” answer, but we are often better served by putting our energy into better understanding the data we already have or can easily access. If we can piece together a couple of smaller questions and come up with a “pretty good” answer at a fraction of the effort, that leaves us with a lot more resources available to answer the next question.
Start by framing the question as a hypothesis and breaking it down into more specific questions.
Hypothesis: “Our blog is an effective tool for attracting new people to our organization who go on to engage with us in a meaningful way. Some may volunteer or donate immediately but it’s more likely they sign up for the newsletter first, then take additional action down the road.”
This hypothesis contains a number of implicit questions, especially once we start comparing it to other initiatives, and we can start to map out what system can best answer each question.
We have one general question in mind—whether to invest resources into building out a certain type of website content. There are several component questions that can help us draw a conclusion, two of which are significantly lower effort to answer in a precise way than the third.
Rather than focusing on a single report, we can get a “pretty good” understanding of overall effectiveness by looking at how many website conversions (like newsletter signups) seem to be driven by viewing the blog, and how effective newsletter signups (in general) are at generating future engagement. If the blog attracts a thousand monthly website visitors and leads to 20 signups (a 2% conversion rate to known contacts), and 50% of people who enter your database as newsletter subscribers go on to become donors, we can extrapolate a “pretty good” answer that perhaps 1% of blog visitors will convert to donors.
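The back-of-the-envelope math above amounts to chaining two independently measured rates together. As a sketch (using the illustrative numbers from this example, not real data), it might look like:

```python
def estimated_donor_rate(visitors, signups, signup_to_donor_rate):
    """Chain two independently measured rates into a rough
    visitor-to-donor estimate. Both inputs come from separate,
    easy-to-run reports rather than one end-to-end report."""
    signup_rate = signups / visitors           # e.g. 20 / 1000 = 2%
    return signup_rate * signup_to_donor_rate  # e.g. 2% * 50% = 1%

# Illustrative figures from the blog example above
rate = estimated_donor_rate(visitors=1000, signups=20, signup_to_donor_rate=0.5)
print(f"Estimated blog-visitor-to-donor rate: {rate:.1%}")  # prints 1.0%
```

The key design choice is that each input is something a single tool can report precisely on its own—web analytics for the first rate, the CRM for the second—so no cross-system identity stitching is required.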
It’s an imperfect number, but it’s a much faster and cheaper way to get a baseline than starting with the most difficult report first. Even with the inherent uncertainty, the rough numbers might be enough to confirm you’re on the right track or make you pause to consider alternatives. You might even decide to invest in building out report #3 because you’ve gained additional context and decided the effort to do so will be justified.
The challenge of deciding when and how to combine data from different sources certainly isn’t unique to the website-vs-CRM analytics discussion. There is absolutely value in linking together this kind of data, but it’s also important to recognize the strengths of each tool and think critically about how to make the best use of each. The allure of a single pretty chart with The Answer is strong, but we should remember that the right questions can lead to “pretty good” answers in a fraction of the time—and we are frequently better served by having pretty good answers to a number of questions than by putting all our energy into a very good answer to just one granular question.