The Mad Scientists of Paid Search – SMX Advanced Session Recap


In this lively paid search session, each of the Mad Scientist speakers brought fresh ideas, data and insight to our fast-paced world of paid search and consumer intent. Here is an overview of what they shared.

Andy Taylor, Merkle

Andy made five main points in his presentation:

1. Changes to exact match close variants

Initially, Merkle didn’t see a huge impact from the change, but click volume from close variants increased by the end of 2017.

Here are some of his findings:

  • Andy estimates that, for the median advertiser studied, 20 percent of exact match traffic came from close variants on desktop by the end of 2017.
  • Close variants converted at a 20-25 percent lower rate than true exact matches.
  • That amounted to a 3-6 percent drag on non-brand exact match performance.

You don’t want to fall off the first page because of close variant changes, so be sure to review your search query report (SQR) for close variants that deserve to be negative keywords and filter them out where appropriate; a rough sketch of that workflow follows below.
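Andy didn’t walk through an implementation, but the basic idea can be sketched in a few lines: pull the queries matched as close variants out of an exported SQR and shortlist the ones that get clicks without converting. The file name, column headers and match-type label below are assumptions about your export format and may need adjusting.

```python
import csv
from collections import defaultdict

# Assumed export format for a search query report (SQR); adjust the file name,
# column headers and match-type label to whatever your account actually exports.
SQR_FILE = "search_query_report.csv"

stats = defaultdict(lambda: {"clicks": 0, "conversions": 0.0})
with open(SQR_FILE, newline="") as f:
    for row in csv.DictReader(f):
        # Close variants are queries that triggered an exact match keyword
        # without being identical to it.
        if "close variant" in row["Match type"].lower() and row["Query"] != row["Keyword"]:
            stats[row["Query"]]["clicks"] += int(float(row["Clicks"]))
            stats[row["Query"]]["conversions"] += float(row["Conversions"])

# Shortlist variants that get clicks but never convert as negative keyword candidates.
for query, s in sorted(stats.items(), key=lambda kv: -kv[1]["clicks"]):
    if s["clicks"] >= 20 and s["conversions"] == 0:
        print(f"review as negative: {query}  (clicks={s['clicks']})")
```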

Changes can affect multiple ad groups. For example, the query [homemade pop tart] could get pulled into an [artisan pop tart] ad group. He noted that the close variant changes corresponded to a recent change to ad rank, in which Google put more emphasis on the meaning of the query and weighed bids more heavily than quality score.

Andy also talked about phrase and broad match and doesn’t think there is a big difference there; the biggest shift he saw was between singular and plural terms. Close variants will continue to grow, and true exact match is not coming back.

2. Need for speed

People now look for near-immediate fulfillment when shopping online. Andy noted the growth of “same day” and “fast shipping” queries relative to “free shipping,” which he attributed to the fact that we already expect free shipping in some way, shape or form.

3. Another trend is voice search and search assistants

Google strips the phrase “OK Google” from queries, so it’s unlikely we’ll need to add “OK Google” to keyword lists in the future. If you do see it in your search query report (SQR), it can give you a directional sense of how voice search is growing.
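That directional check is easy to automate. The sketch below is my own illustration, not something from Andy’s deck: it counts queries in an exported SQR that still begin with “ok google” and tracks their share by month. The file and column names are assumptions.

```python
import csv
from collections import Counter

# Assumed export: an SQR with "Month" and "Query" columns; rename to match your report.
voice, total = Counter(), Counter()
with open("sqr_by_month.csv", newline="") as f:
    for row in csv.DictReader(f):
        total[row["Month"]] += 1
        if row["Query"].lower().startswith("ok google"):
            voice[row["Month"]] += 1

# Directional only: Google strips the wake phrase from most voice queries,
# so this undercounts voice search, but the month-over-month trend is still informative.
for month in sorted(total):
    print(f"{month}: {voice[month]} 'ok google' queries ({voice[month] / total[month]:.2%} of all queries)")
```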

In many ways, search queries haven’t changed: voice query length is similar to typed query length. He doesn’t expect that to change, and we will use many of the same terms whether we search by voice or by typing.

4. SQR can give you an idea of competition

Amazon is a big competitor for many retailers, and it’s kind of a big deal. Query share for searches mentioning either Amazon or Walmart shows Amazon well ahead. It may make sense to bid on competitor terms, since Amazon cannot effectively optimize every product it sells.

5. SQR also shows how local search is growing

“Near me” queries have increased faster than location-specific queries. The share of Google paid search clicks tracked to the ZIP code level has gone up significantly over the past couple of years, indicating that Google is getting better at assigning users to granular locations. That is good news in light of the increase in “near me” queries, since Google has to identify the user’s location in order to serve up relevant nearby businesses.


Presentation deck: Query Trends to Know for 2018 and Beyond 

Andreas Reiffen, Crealytics

Andreas suggested we could go down the wrong path if we rely too much on data: data can be deceptive, and reality is often very different from what we are seeing. He suggested that remarketing lists for search ads (RLSA) skew our data so that we see a better return on ad spend (ROAS) than we should. The question is to what extent RLSA drives incremental sales.

In one example, he showed an increase in ROAS alongside a 48 percent decrease in new customer acquisitions. In general, RLSA traffic drives far fewer new customers than any type of prospecting.

Andreas asked, “What happens if we start boosting RLSA?” He suggested the resulting conversions could stem from:

  • Previously free traffic that was pulled into paid search.
  • Truly incremental sales.

The better the numbers you see, the less incremental they might be.

This is a difficult situation: the impact we measure is not necessarily incremental.
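Andreas didn’t share a formula, but the incrementality question can be framed with a simple holdout comparison: withhold retargeting from part of the eligible audience and compare outcomes. The function and the numbers below are purely illustrative.

```python
def incremental_share(exposed_customers: int, exposed_users: int,
                      holdout_customers: int, holdout_users: int) -> float:
    """Estimate the share of the exposed group's new customers that would
    not have converted anyway, using the holdout group's baseline rate."""
    exposed_rate = exposed_customers / exposed_users
    baseline_rate = holdout_customers / holdout_users
    return (exposed_rate - baseline_rate) / exposed_rate

# Hypothetical numbers: 500 new customers from 50,000 retargeted users versus
# 45 new customers from a 5,000-user holdout. Only ~10% of the measured
# conversions turn out to be incremental, even though the RLSA ROAS looks great.
print(f"incremental share: {incremental_share(500, 50_000, 45, 5_000):.0%}")
```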

Andreas suggested retargeting is highly addictive and asked, “Is there a way back?” He suggested that no one in corporate management or Google has the incentive to answer this question honestly. A better question may be “Should we bid higher or lower?”

Engagement and recency define buying propensity. If someone has put a dozen products into a basket, viewed 30 pages, spent a significant amount of time on the website, and did all of this just a few seconds ago, it’s very likely that person will buy — with or without a retargeting ad. The question then is whether we should bid high or bid low for people showing this behavior. Bidding high will show great numbers, but it might just be a waste of money.

He took us on a journey of his research, and he tested this by:

  • Starting with large groups and filtering down: he segmented by user in Google Analytics and exported the data to Google AdWords.
  • Triggering events based on what people actually do, using Google Tag Manager.

He found this to be a very effective way to test audiences based on a single criterion like cart abandonment. Building and testing audiences across multiple factors like engagement and time would require a more complex approach; a toy version of that idea is sketched below.
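To make the multi-factor idea concrete, here is a toy sketch (my illustration, not Andreas’ implementation) that buckets users by engagement and recency; the signal names and thresholds are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    cart_items: int                    # products added to the basket
    pages_viewed: int                  # pages seen in the current session
    seconds_since_last_activity: int   # recency of the last interaction

def propensity_bucket(u: UserSignals) -> str:
    """Combine engagement and recency into a coarse buying-propensity bucket."""
    engaged = u.cart_items >= 3 or u.pages_viewed >= 20
    recent = u.seconds_since_last_activity <= 600   # active within the last 10 minutes
    if engaged and recent:
        return "hot"    # likely to buy with or without an ad: the group to test lower bids on
    if engaged or recent:
        return "warm"
    return "cold"

# The shopper from Andreas' example: a full basket, 30 pages viewed, active seconds ago.
print(propensity_bucket(UserSignals(cart_items=12, pages_viewed=30,
                                    seconds_since_last_activity=30)))  # -> hot
```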

He suggested companies build data science capabilities: operational teams alone are not enough, and data scientists are needed to solve these problems.


Presentation deck: Retargeting, Incrementality and Beyond: Data Insights from Behind the Scenes 

Andrew Goodman, Page Zero Media

Andrew said the value of mad science cannot be overstated! He suggested using tools and resources that Google has to offer, since they can help you make progress and answer tough questions instead of just making assertions.

He also talked about the importance of statistical significance. In the interface, the more green arrows you see, the higher the statistical significance.
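For readers who want to check significance outside the interface, a campaign experiment’s conversion-rate difference can be evaluated with a standard two-proportion z-test. The sketch and the numbers below are illustrative and not from Andrew’s deck.

```python
from statistics import NormalDist

def experiment_p_value(conv_a: int, clicks_a: int, conv_b: int, clicks_b: int) -> float:
    """Two-tailed p-value for the difference in conversion rate between the
    original campaign (a) and its experiment (b), via a two-proportion z-test."""
    rate_a, rate_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = (pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b)) ** 0.5
    z = (rate_b - rate_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: 210 conversions from 9,800 clicks (control) versus
# 260 conversions from 10,100 clicks (experiment).
p = experiment_p_value(210, 9_800, 260, 10_100)
print(f"p = {p:.3f} -> {'significant' if p < 0.05 else 'not significant'} at the 95% level")
```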

He gave suggestions on using campaign experiments and shared results from tests he ran.

He also talked about enhanced cost per click (ECPC), which uses Google machine learning and predictive power. In the most recent test, he looked at a lot of data over a 6-8-week period. The results were a big win for ECPC, and Andrew recommends running your own tests.

Andrew also ran a “high bid” experiment. It was not successful: costs rose, cost per action (CPA) went up and conversions went down. He suggested one reason was that the ads already had good positioning. He also tried Smart Display campaigns; ROAS worked out to 0.11 (11 cents of revenue per dollar spent), which is a very bad return on investment.

To close his presentation, Andrew suggested that all paid search marketers rigorously test everything, including the following:

  • Remove all negatives in a campaign and find out what the true key performance indicator (KPI) impact will be.
  • Test a version of the campaign with a large number of internet protocol (IP) exclusions in it, and then run another test version without the exclusions.
  • Run a version of your campaign without dayparts, and then run it again with dayparts. Was there any impact?


Presentation deck: Mad World: Tears, Fears and… (No, just some tests we ran etc.)


Want more info on Paid Search? Check out our comprehensive PPC Guide – 9 chapters covering everything from account setup to automation and bid adjustments!


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


About The Author

Mona Elesseily writes extensively and speaks internationally on search & online marketing. She is the Vice President of Online Marketing Strategy at Page Zero Media, where she focuses on search engine marketing strategy, landing page optimization (LPO) and conversion rate optimization (CRO).




