Actionable Metrics – Something all publishers ought to pay attention to

Many sites are geared towards getting customers to download something, and some attempts are made to get them to log in and personalize the site for themselves. As you build your solutions, it's important to think about what the user is getting out of all this, how they got to your site, and what actions led them to download from it. By paying attention to that, you can increase repeat visits and downloads.

I recommend reading this article – Actionable Metrics – Say Hello to Cohort Analysis.

Here Ash Maurya talks about the difference between actionable metrics and vanity metrics. The latter, per Mr. Maurya –

“only serve to document the current state of the product but offers no insight into how we got here or what to do next”

Couldn’t agree more.

So let me summarize some of the stuff he states, because a) it helps me internalize these nuggets and b) hopefully it gives anybody reading my post an overview. Definitely read his post!! Further below, I've tried to apply this to my own area of interest.

He offers three rules for actionable metrics –

1. Measure the right macro – The key things to know: How do users find you? Do they have a great first experience? Do they come back? How do you monetize? Do they refer you? Most importantly, don't waste effort simply driving signups when customer retention is already a problem. Focus on giving people a great first experience (getting what they want) and then getting them to come back. To me this means: find out what your users are looking for and what they always download, make sure they can get it fast, and at the point where they are converting, make their experience so good that they will want to do more or come back for more. Sounds obvious, but how often do we pay attention to what users really want and how good a first experience they are really having?

2. Create simple reports – We're all impressed by the various charts, graphs, and pages of information that many analytics tools produce, or by reports created on spreadsheets. But honestly, how many of us really digest every bit of that information? I always save these off for later reading but never get to them. Now compare that to a simple one-page report (like an executive summary) or a report that speaks to a specific problem; that simplifies everything, and I will consume it immediately. So Mr. Maurya suggests looking at funnel reports based on cohort analysis (cohorts being groups of people who share similar characteristics), for example a funnel report of users in different market segments or of different user types (guest vs. logged in). Why do this? It tells you how any event or activity you did affects each cohort. Simple!

3. Make these metrics actionable by tying them back to a specific user or users. That way you can find out who your activity worked for and who it didn't, contact those specific users for follow-ups and feedback, and get specific insights you can act upon.

This feels a lot like things we already do, but the simplicity is the key here.

OK, so let's try an example specific to the publishing industry. You have content that readers want to consume, and they pay for it. More than likely, users come from Google or some other search engine. When they arrive at your landing page, are they able to consume the content quickly? What is their first experience? Do they actually spend time with the information on your site, or do they just download the PDF and go away? You probably want them to stay and read more. So focus on improving the first experience: simple reading, fast page loads, easy navigation, easy-to-understand functionality, and so on. What do you measure when trying to improve the experience? User engagement, perhaps. Additional page visits per session? Reuse of functionality? Revisits? Anything else?
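
Just to make that concrete for myself, here's a minimal sketch of how a couple of those numbers might be computed. It assumes a hypothetical page-view log (pageviews.csv, with user_id, session_id, page, and timestamp columns – all names I made up for illustration) and uses pandas:

```python
import pandas as pd

# Hypothetical page-view log: one row per page view.
# Assumed columns: user_id, session_id, page, timestamp.
events = pd.read_csv("pageviews.csv", parse_dates=["timestamp"])

# Pages viewed per session -- a rough proxy for engagement.
pages_per_session = events.groupby("session_id")["page"].count().mean()

# Revisit rate: share of users with more than one session.
sessions_per_user = events.groupby("user_id")["session_id"].nunique()
revisit_rate = (sessions_per_user > 1).mean()

print(f"Avg pages per session: {pages_per_session:.2f}")
print(f"Share of users who come back: {revisit_rate:.1%}")
```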

OK, once you figure out what to measure, think about what kind of reports you want to see. To me, the cohorts here are market segments – academic, corporate, government – and then guest users vs. subscribers. What is in the funnel report? Something like: # of users who visited the page -> # of users who downloaded content -> # of users who engaged with the page -> # of users who signed up for alerts (or some other action), or alternatively # of users who visited other pages.
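
Here's a rough sketch of what that report could look like in code. The event log (events.csv, with user_id, segment, and event columns) and the event names are again my own assumptions, just to show the shape of a cohort funnel report:

```python
import pandas as pd

# Hypothetical event log: one row per user action.
# Assumed columns: user_id, segment (academic / corporate / government,
# or guest / subscriber), event (visited_page, downloaded_content, ...).
events = pd.read_csv("events.csv")

# Funnel steps in order, matching the report described above.
funnel = ["visited_page", "downloaded_content",
          "engaged_with_page", "signed_up_for_alerts"]

# Count distinct users per cohort who performed each step at least once.
# (A strict funnel would also require the earlier steps; this is a
# simple approximation to show the shape of the report.)
report = (
    events[events["event"].isin(funnel)]
    .groupby(["segment", "event"])["user_id"]
    .nunique()
    .unstack("event")
    .reindex(columns=funnel)
)
print(report)
```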

Alright, once you have that in place and start getting reports, you then have to find specific users within each cohort whom you can survey, email, or talk to.
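
Continuing the same made-up example, pulling out the individual users in a cohort who got partway down the funnel but dropped off is just another filter, and that becomes your follow-up list:

```python
import pandas as pd

# Same hypothetical event log as in the previous sketch.
events = pd.read_csv("events.csv")

# Users in the "academic" cohort who downloaded content but never
# signed up for alerts -- a concrete list to hand to whoever does
# the follow-up surveys or emails.
academic = events[events["segment"] == "academic"]
downloaded = set(academic.loc[academic["event"] == "downloaded_content", "user_id"])
signed_up = set(academic.loc[academic["event"] == "signed_up_for_alerts", "user_id"])
follow_up_list = sorted(downloaded - signed_up)
print(follow_up_list[:20])  # first 20 user ids to reach out to
```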

I think I got this. Any comments? Or other thoughts?

A/B testing and pressing questions

Before you realize it, it's been more than a year since I last posted. It's been mostly Twitter activity (@shiv17674), so it's not like I've been completely holding back my thoughts.

Anyway, I've been so excited about my recent escapades with A/B testing that I felt they would make for a nice post. Lately I've been experimenting with Optimizely a lot, and it's been eye-opening to say the least. From small UI changes to complex functionality changes, it sure looks like we have come a long way since relying on simple user tests and surveys to tell us what a customer or user would prefer.

It all started after reading Avinash Kaushik's book Web Analytics 2.0 and the Always Be Testing book (on Amazon). Let's just say I have read about the top of the mountain and I'd like to go there. If you want a crash course on A/B testing, I highly recommend this article from Smashing Magazine.

So when the opportunity presented itself, I jumped on it, and now I'm like a kid in a candy store (do I go with candy A or candy B?). Nerdy ecstasy aside, there are some really important organizational and business decisions that will present themselves once we see the results of our experiments:

– Once we have the capability in place, everybody's going to want to run these experiments. Organizational processes will have to be put in place without burdening everyone with red tape.

– How do you prioritize which tests to run first?

– How do you make sure one test does not interfere with another?

– What if we can’t test everything? Will some changes remain untested?

– How long do we need to run a test, and how does that influence our planning? (A rough sizing sketch follows this list.)

– I assume there are always things that just aren't testable. Some elements on a page are purely emotional or visual; they're meant to make the customer comfortable with your product or to add credibility. Whether to place those elements on the top, bottom, left, or right will probably not be measurable. Or is there a way to measure that?

– What if the test reveals that a popular feature driving a sizeable chunk of your KPIs is actually hindering your customers' ability to fully engage with your product?
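
On the "how long do we run a test" question, here's the back-of-the-envelope sketch I keep coming back to. It uses the standard two-proportion z-test approximation with completely made-up traffic and conversion numbers; Optimizely and the various online calculators may do the math a bit differently, so treat it as a rough sizing exercise only:

```python
from statistics import NormalDist

def sample_size_per_variant(p_baseline, min_detectable_lift,
                            alpha=0.05, power=0.8):
    """Rough sample size per variant for comparing two conversion rates.

    Standard two-proportion z-test approximation; real testing tools
    may use slightly different formulas.
    """
    p1 = p_baseline
    p2 = p_baseline * (1 + min_detectable_lift)  # relative lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

# Example (all numbers made up): 3% baseline conversion, hoping to
# detect a 10% relative lift, with 5,000 visitors per day split 50/50.
n = sample_size_per_variant(0.03, 0.10)
daily_visitors_per_variant = 5000 / 2
print(f"~{n:,.0f} visitors per variant, "
      f"roughly {n / daily_visitors_per_variant:.0f} days")
```

The takeaway for planning: small lifts on low conversion rates can easily take weeks of traffic to detect, which feeds straight back into the prioritization question above.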

All these questions do have answers, or at least I believe they do. But the answers will need to be evaluated and revised over time based on the experience we collect. I will try to post those thoughts here when possible, along with lessons learned. Certainly exciting times.