
Plumbing the Analytics Pipeline


This is part of an ongoing series of conversations with an eCommerce amateur (who shall remain nameless) and one of GPC’s most knowledgeable (and patient) pros.

Today, Tim Shuvaloff (T) talks about everything you’ve ever wondered – or were ignorant of – when it comes to eCommerce analytics.

Q: When we talk about analytics in eCommerce, what specifically are we talking about?  

T: At the risk of sounding cliché – what aren’t we talking about? Conversion Rate Optimization (CRO), engagement, customer satisfaction, employee performance assessment, inventory and logistical routing – the list goes on!

That being said, analytics in eCommerce is intended to deliver insight-driven decisions and outcome measurement. Put simply – what are your issues, what data do you have, what analytics can you run, what insights can you gather, what actions can you take, and what are your outcomes?

At the end of the day, analytics allows businesses to identify issues, determine solutions and measure/evaluate conclusions. Intrinsically, analytics is the codification and unification of technology and business practices.

Q: What platforms do we use to analyze web properties? How do they work?

T: Each platform we use is one piece of a bigger pie, which we refer to as our analytics pipeline. It’s exactly what it sounds like: each tool feeds data into the pipeline, where it’s transformed and contextualized, and business outputs come out the other end.

Some tools we use:

  • AWS – Amazon Web Services runs our technical infrastructure: hosting, security and pipeline transformations. This is also where any machine learning pipeline items “live”.
  • Tableau – A self-service business intelligence tool used to visualize outputs from our analytics pipeline for business users. The result is automated reporting on a regular cadence.
  • Google Suite – Specifically, Google Analytics (GA), Google Tag Manager (GTM), and Google Optimize. These all have different functions: GA and GTM are used to create a customized data gathering system based on unique page elements. They enhance decision making and feedback loops for our traffic teams. Google Optimize is used for split-testing and combined with the above integrations for reporting.
  • Matillion ETL – This “Extract-Transform-Load” tool helps combine and transform data sources into the format we need in order to conduct any reporting or modelling tasks (a rough sketch of this kind of transform follows the list).
  • HotJar – A heatmap, scroll-map, recording and survey tool used to gather qualitative data. It helps contextualize and enhance our understanding of quantitative data.
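
Matillion itself is configured through its own interface rather than hand-written code, but the kind of “Extract-Transform-Load” step it orchestrates is easy to sketch. Below is a minimal, hypothetical Python/pandas version; the file names, columns and output table are invented for illustration only, not our actual pipeline:

    import pandas as pd

    # Extract: pull raw exports from two hypothetical sources.
    ga_sessions = pd.read_csv("ga_sessions_export.csv")   # date, source, sessions, transactions
    orders = pd.read_csv("order_system_export.csv")       # date, order_id, revenue

    # Transform: aggregate orders by day and join onto the session data.
    daily_orders = (
        orders.groupby("date", as_index=False)
              .agg(orders=("order_id", "count"), revenue=("revenue", "sum"))
    )
    daily = ga_sessions.merge(daily_orders, on="date", how="left").fillna(0)

    # Derive the metrics the reporting layer needs.
    daily["conversion_rate"] = daily["transactions"] / daily["sessions"]
    daily["revenue_per_session"] = daily["revenue"] / daily["sessions"]

    # Load: write to the table the BI tool (e.g. Tableau) reads from.
    daily.to_csv("warehouse/daily_ecommerce_metrics.csv", index=False)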

Q: You run GPC’s analytics function. What’s the most challenging part of your job?

T: Developing the analytics pipeline. The big question is: how do we create relationships between our data sources such that they’re automated?

This is the point where we bust out our creative hats. It involves a lot of cooperation between stakeholders, constant monitoring and iteration, and a ton of headaches! But it’s worth it in the end, because once you have a result, your automation handles the rest.

Q: What is a split-test? Can you give an example?

T:  Split-testing refers to the execution of a general function we call Conversion Rate Optimization (CRO).

Despite the narrow-sounding name, CRO/split-testing refers to testing a variety of hypotheses on live web traffic to accomplish a certain business goal. This involves generating variations of certain page elements in a controlled environment in order to find the optimal result. In the eCommerce world, these goals are often related to conversion rate, retention rate, customer lifetime value, usage rates, engagement, etc.

We split-test because it follows a proven statistical method. The biggest mistake isn’t just “not testing”, it’s testing incorrectly. Split-testing gives us benchmark KPIs, controls for a variety of external variables and allows us to measure results against expectations post-test, which greatly improves decision making.

Q: How do we use these kinds of tests to optimize pages for clients?

T: GPC and our clients have a surplus of marketing knowledge and experience! However, there are lots of idioms, unique situations and clichés in our business, which is why testing is so important. We need a burden of proof: actual results produced by a proven statistical method, not assumptions.

Typically, there are two major parts to split-testing: descriptive analytics, which helps identify where KPIs are dropping off and why, and marketing knowledge and interpretation of said business issues.

From there, the process goes like this:

  1. Identify your problem (via descriptive analytics or some other source)
  2. Identify how you want to solve your problem
  3. Identify how you can control the variables that are not related to your problem, and the KPIs you will use to measure success or failure
  4. Set up your test with your tool of choice (using a QA process, of course)
  5. Launch and monitor your test until you hit your desired statistical significance
  6. Assess the results of your tests and generate a historical record of testing

Remember that this process helps spawn new ideas. There is no such thing as a “test failure” – every outcome teaches you something new.
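
To make steps 5 and 6 a bit more concrete, here is a minimal, hypothetical sketch of how a finished split-test might be checked for statistical significance using a two-proportion z-test in Python. The visitor and conversion counts are invented, and a real program would also fix its sample size and significance threshold before launch:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical results: control (A) vs. variant (B).
    conversions = [310, 362]     # converted visitors in each arm
    visitors = [10_000, 10_000]  # total visitors in each arm

    z_stat, p_value = proportions_ztest(conversions, visitors)

    print(f"Control conversion rate: {conversions[0] / visitors[0]:.2%}")
    print(f"Variant conversion rate: {conversions[1] / visitors[1]:.2%}")
    print(f"p-value: {p_value:.4f}")

    # With a pre-agreed alpha of 0.05, p < 0.05 is treated as significant.
    if p_value < 0.05:
        print("Statistically significant at the 5% level.")
    else:
        print("Not significant: keep the test running or record it as inconclusive.")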

Q: What can analytics do for a brand’s bottom line?

T: The value of analytics, at a very high level, lies in corporate governance and efficiency.

Remember, its purpose is to enhance decision making. But what does that mean? Firstly, it means understanding your business better – how every cog performs, both independently and as part of a system. This means less time spent determining where issues are, why they’re happening and what’s needed to fix them. It creates focus and, most importantly, a way to measure and assess every aspect of your business.

This typically starts with your senior executive team and trickles down to the rest of your staff, where the end goal is a federated structure with data driving all business decisions.

This results in a lessened reliance on “gut instincts”, less departmental waste in terms of time and resources, and cost-savings via renewed focus on high-ROI projects (not to mention the ability to constantly iterate and improve processes and procedures).

Q: Where are the analytics gaps that most brands are missing? How can they bridge them?

T: In no particular order:

  • Not enough split-testing rules – Many brands don’t have formal split-testing rules, which is to their detriment. There’s a concept in statistics called statistical significance, which tells you how confident you can be that your result came from the specific variables you changed rather than from random chance. Following this rule (among others) is critical; otherwise you risk creating false positives and negatives, making your split-testing no better than guessing (a short sample-size sketch follows this list).

  • Copying the competitor – I want to preface this by being very clear that I don’t mean everyone should constantly “reinvent the wheel,” but fancy blog posts saying “X will increase your conversion rate by Y” are being dishonest with their data. All websites and demographic targets are different; expecting those kinds of results to replicate on your site is a fallacy. That being said, brands should always try things they read or see, while making sure to test and keep their expectations grounded.
  • Being afraid to take early losses – The adage that “you must take risks to make money” applies here; you need sales coming in before you can improve the rest of your brand’s priorities. Many brands are justifiably hesitant to shell out money on marketing and advertising, but with analytics as a guide, paid traffic gives you an opportunity to test, learn your strengths and weaknesses, and then target them. A union of your analytics and finance functions can also generate projections and cash flow estimations, which let you see beyond bare profitability.
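
As a hypothetical illustration of the kind of formal split-testing rule mentioned above, a brand might require a minimum sample size per variation before a test can be called. For example:

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Hypothetical rule: be able to detect a lift from a 3.0% to a 3.6%
    # conversion rate with 80% power at a 5% significance level.
    effect = proportion_effectsize(0.030, 0.036)
    n_per_variation = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80, ratio=1.0
    )

    print(f"Visitors needed per variation: {n_per_variation:,.0f}")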

Bridging the gap is a straightforward process. Start with conversion rates, work your way to initial cart values and then aggressively attack remarketing and lifetime value metrics. But remember that it needs to start at the top; management must establish strict testing rules and employ the burden of proof concept. Over time, the brand’s “rules” will evolve into something specific to its own needs.
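
On the finance side, a back-of-the-envelope projection built on those same metrics might look like the following hypothetical sketch (all figures are invented for illustration):

    # Hypothetical monthly projection from a few core eCommerce metrics.
    monthly_sessions = 120_000
    conversion_rate = 0.025        # 2.5% of sessions place an order
    average_order_value = 68.00    # initial cart value
    repeat_orders_per_year = 1.8   # driven by remarketing / retention
    gross_margin = 0.55

    monthly_orders = monthly_sessions * conversion_rate
    monthly_revenue = monthly_orders * average_order_value
    first_year_ltv = average_order_value * (1 + repeat_orders_per_year)

    print(f"Projected monthly orders:  {monthly_orders:,.0f}")
    print(f"Projected monthly revenue: ${monthly_revenue:,.0f}")
    print(f"Projected gross profit:    ${monthly_revenue * gross_margin:,.0f}")
    print(f"First-year customer LTV:   ${first_year_ltv:,.2f}")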

Q: Where is the analytics function headed?   

T: The end goal is to take a “descriptive” capability and turn it into a “predictive” capability, one that projects how decisions will affect your business in the future. From there, the goal is to codify the sum of your corporate and analytics knowledge into something “prescriptive”: a capability that recommends decisions and automates business functions to run at peak efficiency. This frees up corporate resources for other tasks and results in better, more measurable decisions.

This sort of automated learning is typically called machine learning, and it will continue to evolve and uncover greater and more robust applications for artificial intelligence in eCommerce.
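
As a toy example of that shift from descriptive to predictive, a team might train a simple model on historical session data to score how likely each visitor is to convert. This is only a hypothetical sketch; the data file, feature names and model choice are assumptions for illustration:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical historical data: one row per session, with a 0/1 "converted" label.
    sessions = pd.read_csv("historical_sessions.csv")
    features = ["pages_viewed", "time_on_site_sec", "is_returning_visitor", "cart_value"]

    X_train, X_test, y_train, y_test = train_test_split(
        sessions[features], sessions["converted"], test_size=0.2, random_state=42
    )

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # How well does the model separate converters from non-converters?
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Hold-out ROC AUC: {auc:.3f}")

    # In a "prescriptive" setup, these scores could feed automated decisions,
    # e.g. which visitors a remarketing campaign should target.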

 

POST WRITTEN BY

Tim Shuvaloff
Vice President, Operations & Analytics
tim.shuvaloff@dfo.global
 