In a session at SMX East on testing in paid search accounts, speakers Amalia Fowler, account director at Snaptech Marketing, and Aaron Levy, director of paid search at Elite SEM, approached the topic from opposite poles: low volume accounts and enterprise-scale accounts. The juxtaposition made for an engaging discussion.
Amalia Fowler on testing in low volume accounts
In discussing testing risks and challenges for low conversion volume accounts, Fowler stressed the need to be extra selective and strategic about what you test. She provided a template for a “What if” testing ideas spreadsheet in which teams can collaborate to capture what has been tested in the past, the results of those tests and ideas for future tests.
“We need to consider, what would happen if [the test] failed? Is the business going to be okay? Will stakeholders be okay with failure?” said Fowler. Importantly, she added, “We need a hypothesis for every test. That’s the guiding force for the entire testing process.”
Particularly for low volume accounts, it may be necessary to test across multiple campaigns or ad groups to accumulate enough data. Fowler also said she sometimes lowers the statistical confidence level for a test from 95 percent to 90 percent; Google Ads’ drafts and experiments feature defaults to a 95 percent confidence level, she noted. “Define your minimum necessary data. And prepare other people to wait for tests to complete when you have low volume accounts,” she advised.
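To make the confidence-level trade-off concrete, here is a minimal sketch of a two-proportion z-test in Python. The click and conversion counts are invented for illustration; the only idea taken from the talk is checking results against a 95 percent versus a 90 percent threshold.

```python
# Minimal two-proportion z-test sketch for an A/B ad test.
# All counts below are hypothetical, not from the presentation.
from scipy.stats import norm

def two_proportion_p_value(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)  # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Hypothetical low volume test: 40/1,000 conversions vs. 60/1,050.
p = two_proportion_p_value(40, 1000, 60, 1050)
print(f"p-value: {p:.3f}")               # ~0.072
print("significant at 95%:", p < 0.05)   # False
print("significant at 90%:", p < 0.10)   # True, at Fowler's relaxed bar
```

With these made-up numbers, the variant clears the 90 percent bar but not the 95 percent one, which is exactly the kind of judgment call a low volume account has to make deliberately.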
No matter the account volume, however, Fowler said, “Don’t wait until something is broken to start testing. Be proactive rather than reactive.” For more tips, see the full presentation below.
Aaron Levy on testing in high volume accounts
Levy discussed testing into the future, with a particular focus on high volume accounts. While Fowler stressed the need to test across multiple entities when volume is low, Levy presented several segmentation scenarios for accounts where keywords number in the millions, automation is a must and budgets are large. “Keywords are an old data level,” he said. “We have many more ways of targeting now. AdWords is now called Google Ads for a reason.”
When discussing Smart Bidding, Levy said it usually means “spend more,” but that’s not inherently a bad thing from a profit perspective. He said companies need to embrace a testing culture and referred to his “Now, next, new” budget strategy, in which clients allocate 70 percent of their paid search budgets to ongoing and proven efforts, 20 percent to evolving existing efforts and 10 percent to innovation and brand new tests.
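As a quick illustration of the split, here is how the “Now, next, new” allocation would break down on a hypothetical monthly budget (the dollar figure is invented, not from the talk).

```python
# Hypothetical "Now, next, new" split on a $50,000 monthly budget.
budget = 50_000
allocation = {"now (proven)": 0.70, "next (evolving)": 0.20, "new (tests)": 0.10}
for bucket, share in allocation.items():
    print(f"{bucket}: ${budget * share:,.0f}")
# now (proven): $35,000 | next (evolving): $10,000 | new (tests): $5,000
```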
“Making room for failure encourages experimentation,” said Levy. However, that doesn’t mean blindly experimenting. You should build tolerance forecasts to mitigate risk. “Learning periods cost. If a test works, you need to make up the cost of the experiment,” he said. How long will it take for a test to pay back its cost? Are you okay if a test never pays for itself? “That’s what the testing budget is for,” he said.
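The payback math Levy alludes to is simple enough to sketch. Every number below is a hypothetical assumption, used only to show how learning-period cost and expected lift translate into a payback period.

```python
# Rough payback-period sketch for an experiment.
learning_cost = 3_000      # hypothetical spend lost to the learning period ($)
baseline_profit = 10_000   # hypothetical monthly profit before the test ($)
lift = 0.04                # hypothetical relative profit lift if the test wins

extra_profit_per_month = baseline_profit * lift           # $400/month
payback_months = learning_cost / extra_profit_per_month   # 7.5 months
print(f"Payback period: {payback_months:.1f} months")
```

If that payback horizon is longer than the business can tolerate, the test either comes out of the dedicated testing budget or doesn’t run.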
With automation, Levy said, the robots only care about two things: expected conversion rate and average order value. “If old campaigns are not designed for new automation, it won’t work,” cautioned Levy. You need to “structure your campaigns for success.” That means adding every possible audience in observation mode (though you don’t need to segment by recency, since that is handled automatically). “The more you constrict the algorithms, the worse they will perform,” said Levy. “You’re adding in your own bias by adding restrictions. Err on the side of broad.”
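Those two inputs combine into a single number an automated bidder can act on: expected revenue per click. Here is a minimal sketch; the conversion rate, order value and ROAS target are all invented for illustration.

```python
# The two quantities Levy says the "robots" care about, and how a bidder
# might turn them into a bid. All inputs are hypothetical.
expected_cvr = 0.03      # predicted conversion rate for an auction
avg_order_value = 120.0  # average order value ($)
target_roas = 4.0        # hypothetical return-on-ad-spend target

expected_value_per_click = expected_cvr * avg_order_value  # $3.60
max_cpc_bid = expected_value_per_click / target_roas       # $0.90
print(f"Expected value per click: ${expected_value_per_click:.2f}")
print(f"Max CPC at a {target_roas:.0f}x ROAS target: ${max_cpc_bid:.2f}")
```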
That said, Levy noted that to structure for success you need to eliminate what you know is not performing and pull outliers, such as a much higher-converting keyword, out of an ad group. Then give the machines the freedom to learn for a couple of weeks.
Levy also discussed the need to let go of match types when letting the machines run a test with Google Ads’ drafts and experiments. Check out his full presentation below, including recommended testing thresholds.