How To Optimize Email Send Times: Part 1

Posted in Behavioral Marketing, Industry Insights

We recently wrote about optimizing send frequency for email marketing and got a lot of positive feedback from our customers and blog readers. To build on that, we wanted to answer one of the top questions on many email marketers’ minds: When should I send my marketing emails?

The answer is actually pretty simple: best practices can help, but we don’t know, and most likely neither do you.

The problem with answering this question for individual companies is that, as we’ve mentioned before, companies vary so greatly that making an across-the-board pronouncement is as ineffective as, or worse than, offering no advice at all.

What we can answer, however, is how to find your optimal email sending time. Our Strategic Services Group performs Time of Day statistical analysis for clients. For those interested in optimizing drop time further to achieve greater response rates, we’re currently producing an all-new whitepaper that shares an example from our work with a large retailer and explains how the results were achieved. While we won’t talk too much about the results here (since we don’t want to spoil the ending), we will take a look at the steps we took to design, run, and interpret our experimental approach to optimizing send times.

Doing this in a statistically valid way is much more involved than simply dropping whatever email you were otherwise sending at different times of the day. Since this promises to be a lengthy post, we’ll publish it in two parts.

1. Designs, Details, and Dropping-Off Points

There are three main design methodologies for marketing experiments. The first, standard hypothesis testing, is most similar to the experiments you may remember from science class: you set up a testable hypothesis, then set out to either support or disprove it.

If, after rigorous and methodical analysis, it hasn’t been disproven and you’re seeing evidence that supports the hypothesis, you can reasonably assume that it is valid and take action on it. The disadvantage of this methodology is that your testing options are limited to your hypothesis. If halfway through the test you realize that what you really want to be measuring is something else, you need to scrap the experiment and design a new one. The other major flaw is that if you do not properly account for confounding variables, that is, variables outside the scope of your experiment, the results can come back flawed and inconclusive. This happens surprisingly often, and it is a common mistake among junior analysts.
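To make that concrete, here is a minimal sketch, in Python, of what the hypothesis-testing route might look like. Every number in it is hypothetical, and the two-proportion z-test is simply one common way to compare click-through rates between two drop times; it is not the specific analysis our Strategic Services Group runs.

    # Hypothesis: a morning drop and an evening drop produce the same
    # click-through rate. We try to disprove that with a two-proportion z-test.
    import math

    def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
        """Return (z, two-sided p-value) for H0: the two click rates are equal."""
        rate_a = clicks_a / sent_a
        rate_b = clicks_b / sent_b
        # Pooled rate under the null hypothesis that both groups share one rate.
        pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
        std_err = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
        z = (rate_a - rate_b) / std_err
        # erfc gives the two-sided tail probability of the standard normal.
        p_value = math.erfc(abs(z) / math.sqrt(2))
        return z, p_value

    # Hypothetical counts: 9 a.m. drop vs. 6 p.m. drop, 10,000 recipients each.
    z, p = two_proportion_z_test(clicks_a=412, sent_a=10000, clicks_b=371, sent_b=10000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # act only if p clears your pre-set threshold

If the p-value clears whatever significance threshold you set before the test, you have evidence of a real difference between the two drop times; if it doesn’t, the honest conclusion is “no detectable difference,” not “they are the same.”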

The second, multivariate testing, is a lot more flexible, especially when it is difficult to pin down an exact hypothesis. As the name implies, multivariate testing allows you to test for multiple variables, and then use statistical analysis (usually through correlation testing) to rank the impact of each variable. The advantage over the standard design is that if you aren’t sure what you’re looking for, a controlled multivariate test lets you test “retroactively” and can often lead you to conclusions you might not have considered otherwise. The downside is that, due to the heavy reliance on statistical interpretation and extrapolation, the results are less reliable and more prone to statistical errors such as sampling bias.
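As a rough illustration of the ranking step, the sketch below correlates a handful of send-related variables against clicks using pandas. The response log, the column names, and the values are all invented for the example; a real multivariate analysis would use far more data and more careful statistics.

    # Hypothetical per-recipient log: one row per delivered email.
    import pandas as pd

    log = pd.DataFrame({
        "send_hour":      [9, 9, 13, 13, 18, 18, 21, 21],
        "weekend":        [0, 0, 0, 1, 0, 1, 1, 0],
        "subject_length": [38, 52, 41, 60, 35, 47, 55, 44],
        "clicked":        [1, 0, 1, 1, 0, 0, 0, 1],
    })

    # Correlate each candidate variable with the response to get a first,
    # rough ranking of which factors deserve a follow-up hypothesis test.
    ranking = (
        log.drop(columns="clicked")
           .corrwith(log["clicked"])
           .abs()
           .sort_values(ascending=False)
    )
    print(ranking)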

The last methodology is a hybrid approach. We always favor this approach when looking at new problems because the two testing methodologies cancel out each other’s weaknesses. The set-up for a hybrid approach involves first running a multivariate test, and then confirming its conclusions via standard hypothesis testing. In this manner, you get the spontaneity of results found in the multivariate test with the surety and rigor found in hypothesis testing.

2. Gathering your In-Group

We’ve already written about creating test and control groups in our previous article on testing email sending frequencies, but it’s worth another mention here since it’s frequently a point of confusion.

The most important consideration for creating a test segment, after deciding on the size of this segment, is to make sure that it is a valid representation of your overall email list.

Failure to account for an even and uniform distribution of subscriber types between the control and test groups can lead to sampling errors that will throw off results and can lead to poor marketing decisions based on faulty evidence.

This is especially true when dealing with multivariate testing, since correlation testing is only valid and useful if the test segment is very similar to the control segment.

One recommendation is to select your sample by segment rather than by list. What we mean here is that if you have decided your total test sample will be 15% of your total list, it can be beneficial to take a random 15% of users from each segment you have set up, rather than taking 15% of your total list at random. This helps ensure that the makeup of your test sample is proportionally similar to the makeup of your overall list, and it will provide more accurate and actionable data.
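In code, that segment-by-segment draw is simply a stratified sample. The sketch below shows one way to pull it with pandas; the segment names, list size, and the 15% rate are just the illustrative figures from the paragraph above.

    import pandas as pd

    # Hypothetical subscriber list with a segment label on each address.
    subscribers = pd.DataFrame({
        "email":   [f"user{i}@example.com" for i in range(1000)],
        "segment": ["frequent_buyer"] * 200 + ["lapsed"] * 300 + ["new_signup"] * 500,
    })

    TEST_FRACTION = 0.15

    # Sample 15% at random *within* each segment so the test group mirrors the
    # proportions of the overall list; everyone left over is the control group.
    test_group = subscribers.groupby("segment").sample(frac=TEST_FRACTION, random_state=42)
    control_group = subscribers.drop(test_group.index)

    # The segment mix of the test group should be ~20/30/50, same as the full list.
    print(test_group["segment"].value_counts(normalize=True))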

3. Success Metrics

Finally, once you have settled on a methodology and picked your test and control groups, it’s time to determine the most important criteria of any experiment: success metrics. How will you know what the results mean and which group “won” unless you set a clear set of KPIs by which you will judge the effectiveness of any hypothesis?

The recent optimization we ran for clients Newport News and Spiegel focused on revenue per email and click-through rates. Because they are e-commerce companies in the B2C space and use their emails as direct revenue drivers, these metrics made more sense to use than other variables, like open rate. Companies with different business models, however, might have very different goals. For a B2B business, we might have used lead-form signups, calls with a representative, or requests for product information.
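As a small illustration of how those two KPIs are computed and compared, here is a sketch over hypothetical per-drop-time totals. The figures are invented and are not the Newport News or Spiegel results.

    import pandas as pd

    # Hypothetical campaign totals for four candidate drop times.
    results = pd.DataFrame({
        "send_time": ["6am", "11am", "4pm", "9pm"],
        "delivered": [25000, 25000, 25000, 25000],
        "clicks":    [950, 1210, 1105, 880],
        "revenue":   [11800.0, 15400.0, 14100.0, 9700.0],
    })

    results["click_through_rate"] = results["clicks"] / results["delivered"]
    results["revenue_per_email"]  = results["revenue"] / results["delivered"]

    # Rank drop times by the primary KPI; a near-tie at the top would go back
    # into a follow-up hypothesis test before declaring a winner.
    print(results.sort_values("revenue_per_email", ascending=False))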

It’s also critical to consider your purpose in sending out emails in the first place. If your email marketing is intended to directly drive revenue, revenue makes sense as an experiment success metric. If, on the other hand, you use email marketing primarily for branding and engagement, it might make more sense to look at open rates, social shares, or share of voice. Consider your purpose, and pick the most logical set of metrics based on that.

In our next installment we’ll give you an in-depth look and “how-to” insights on the actual production of a marketing experiment like this. We’ll guide you from launch, through collecting results, and finally to interpreting your findings. Until then… keep ‘em clicking!


