Many in the SEM world are concerned that artificial intelligence and machine learning will make their jobs obsolete — but columnist Frederick Vallaeys believes that humans and machines working together will ultimately create the most value for clients.
Earlier this year, I wrote about how artificial intelligence (AI) and machine learning are driving automation in PPC and then again about how Google’s latest wave of AdWords innovations is driven largely by these same technologies.
As the move towards automation accelerates, how should agencies and PPC managers update their strategy? What processes will they need to remain competitive? And what can they really expect from automation tools in the market today? I’ll cover all these topics in a series of upcoming posts, so I’d love to hear your ideas. But today, let’s begin by looking at what roles humans and agencies will play in PPC.
1. Agencies will teach machines to learn
Now that machines can learn, they certainly will surpass humans, right? The reality is that machine learning is still very dependent on humans. We program the algorithms, we provide the training data, we even manipulate the training data to help the machine get it right.
Machine learning often requires structured data to learn from, and it needs a very well-defined problem to solve. We as humans will play a role for some time to define the problem and help shape the desired outcome by manipulating how the machine can “learn.”
For now, the machines need us to be their teachers. AdWords Quality Score only works because the wisdom of the crowds provides a massive set of data about queries and clicks that the machine can use to learn from.
Tesla’s autopilot works because thousands of drivers control their cars manually through tricky situations. Because the cars are all networked, each of those manual interventions helps the next Tesla drive itself through that same spot.
In PPC, what we have learned from years of manually managing campaigns can be the basis for teaching computers how to respond in similar situations.
Teachers can’t teach everything, so a large part of what they do is help students ask better questions. As teachers to the computers, we should allow ourselves to ask more questions, because synthetic intellect doesn’t have the same human constraints for how quickly it can find answers.
Take Quality Score, for example — it is a machine learning system that can analyze hundreds of factors related to a search and find patterns of things that have a meaningful impact on CTR. Because it can analyze data so much faster, we can feed it seemingly random and unconnected data and let it tell us if this makes a difference.
Here’s a crazy question we once asked the Quality Score system: Does the lunar cycle impact CTR? While the answer isn’t what’s important (no, there was no correlation), what is important is that we were able to ask entirely new questions and quickly get an answer that helped make the system better.
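To make this concrete, here is a minimal sketch of how you might test whether an arbitrary factor correlates with CTR. All of the data below is fabricated for illustration (the "lunar phase" values are a crude sine wave, and the CTR series is pure noise); a real test would pull daily CTR from your reporting data.

```python
# Sketch: testing whether an arbitrary factor (a synthetic "lunar phase"
# value per day) correlates with daily CTR. All data here is made up.
import math
import random

random.seed(42)

days = 90
lunar_phase = [math.sin(2 * math.pi * d / 29.5) for d in range(days)]  # ~29.5-day cycle
ctr = [0.05 + random.gauss(0, 0.005) for _ in range(days)]             # CTR: pure noise

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(lunar_phase, ctr)
print(f"correlation between lunar phase and CTR: {r:+.3f}")
# A value near zero says the factor adds nothing: drop the question and move on.
```

The point is not the statistic itself but how cheap the question becomes: once the data pipeline exists, asking one more "crazy" question costs a few lines, not a research project.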
But we should also prioritize the questions we ask based on human intuition. We don’t want to waste machine power by asking everything when we already know with a high probability that some answers won’t help us improve. Consider the following example: Ask Google Maps to calculate the best route from San Francisco to New York. Calculating every possible backroad will take a long time, and considering that we know highways tend to be faster than local roads, that calculation will almost certainly not yield a better result — so we can safely ignore that question.
2. Agencies will provide the creativity machines lack
The biggest value of an agency will be the ability of its employees to work collaboratively with automation.
Chess grandmaster Garry Kasparov notes that when it comes to chess, teams of humans assisted by machines dominate even the strongest computers. In a 2005 experiment, Playchess.com launched a chess tournament in which participants could play in teams with other players and/or computers. According to Kasparov:
The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.
Humans are still good at creative strategy — putting old ideas together in new ways and testing the results. The reason we don’t have Google’s computers writing all the ads for AdWords is that they all would end up looking the same — and then they would stop evolving because the machine would no longer have any variations to test.
Evolutionary algorithms, a subset of AI, are based on biological evolution, and they need access to variations to work well. And while they can create their own mutations, humans often still know the right shortcuts to come up with better ideas.
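Here is a toy sketch of that idea: humans seed the starting variations, and the machine mutates and selects. The word list, seed headlines and fitness function are all invented for illustration; in practice, fitness would be the measured CTR of each variant, not a formula.

```python
# Toy evolutionary loop over ad headlines. Humans supply the seed variations;
# the machine mutates, scores and selects. Everything here is illustrative.
import random

random.seed(0)

WORDS = ["Free", "Fast", "Shipping", "Today", "Save", "Now", "Deals", "Shoes"]

def fitness(headline):
    # Stand-in for real CTR data: rewards short headlines containing "Free".
    return ("Free" in headline) * 2 + 1 / len(headline.split())

def mutate(headline):
    words = headline.split()
    words[random.randrange(len(words))] = random.choice(WORDS)
    return " ".join(words)

# Human-written seeds: the variations the machine starts from.
population = ["Buy Shoes Online", "Free Shipping Today", "Big Shoe Deals Now"]

for generation in range(20):
    population += [mutate(h) for h in population]                         # add mutations
    population = sorted(set(population), key=fitness, reverse=True)[:3]   # keep the fittest

print(population[0])
```

Note that if the seed population were a single headline, the loop would have far less raw material to recombine, which is exactly the point of the paragraph above: the human-supplied variations are what give the algorithm somewhere to go.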
An advertiser on Facebook once submitted an image ad that was static except for a slight shaking effect. It had a far better CTR than the same ad when it was completely still. It’s kind of a silly way to produce better CTR, but it’s a great example of humans trying something new that the machine probably wouldn’t have thought of, because nothing like it existed in the data the machine had access to.
3. Agencies will be the pilot who averts disaster
Self-driving cars are not “driverless” cars because there’s still a human behind the wheel to monitor the machine. That makes sense because not killing its passengers or others on the road is valuable enough to deserve some human resources.
In PPC, we’re fortunately not dealing with life-or-death scenarios; but we can still put a pilot in place to monitor the most important areas of automation. The trick is to apply the 80/20 rule and reserve human involvement for the automations with the biggest potential impact.
I once audited an account that had completely tanked because the bid automation had correctly reduced bids after the launch of a terribly performing landing page. But while the landing page was quickly fixed by humans, nobody remembered to reset the bids, and the account spent months with subpar performance because its best keywords were lingering on page two of the search results.
The problem with many systems built today is that they have narrow goals that can fail due to self-reinforcing feedback loops that can cause a downward spiral:
bad performance → bid down a bit → even worse performance → bid down some more → doom!
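One way to guard against that spiral is to build limits into the bid rule itself. The sketch below is a simplified illustration, not any vendor’s actual algorithm; the target CPA, bid floor and step size are invented numbers.

```python
# Sketch of a bid rule with guardrails against the downward spiral above:
# a bid floor, a capped step size per cycle, and a hold when data is missing.
def next_bid(current_bid, actual_cpa, target_cpa=50.0,
             bid_floor=0.50, max_step=0.15):
    """Nudge the bid toward the CPA target, but never below a floor and
    never by more than max_step per cycle, so a bad streak can't zero it out."""
    if actual_cpa <= 0:          # no conversion data: hold steady, don't spiral
        return current_bid
    ratio = target_cpa / actual_cpa
    ratio = max(1 - max_step, min(1 + max_step, ratio))  # clamp the adjustment
    return max(bid_floor, current_bid * ratio)

bid = 2.00
for cpa in [80, 90, 120, 150]:   # a streak of bad performance
    bid = next_bid(bid, cpa)
print(f"bid after four bad cycles: {bid:.2f}")  # clamped, not driven to zero
```

The clamps don’t fix the underlying problem (in the story above, the broken landing page), but they buy the human pilot time to notice and intervene before the account falls off page one entirely.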
We can also look beyond what our own automations are doing to find weaknesses to exploit in our competitors’ algorithms. Remember that many automations are doing tasks that are well-defined, and this makes them predictable. For example, I once had to cross four lanes of traffic on my bike and was going to wait to let a car pass me first. But when I noticed it was a Google self-driving car, I went for the turn anyway because I knew the car had perfect vision and was programmed not to hit bicyclists. And since I’m sharing this story, things went well for me in that scenario.
Sometimes, we can learn from what the machine does. Lee Sedol, the world-champion Go player who was beaten by DeepMind’s AlphaGo computer, became a better player from the experience of losing to a machine. He, as well as many others watching the game, was perplexed by the computer’s move 37. It was simply not a move any human would have played. But it was the move that set the computer up for the win, and now humans have added it to their own repertoire.
And sometimes your job as copilot is to see something that’s not there but that should have been. The book “How Not To Be Wrong” by Jordan Ellenberg tells the story of mathematician Abram Wald, who figured out what part of an airplane should be made stronger to resist being shot down by enemy fire during World War II. The data from planes that returned with bullet holes showed more holes in the fuel system than in the engine, so the scientists concluded they should reinforce the fuel system. But Wald argued that planes hit in the engine probably crashed and never returned, skewing the data: the armor belonged where the surviving planes showed no holes.
Let’s put that into a PPC example. When you look at what leads to a conversion because you want to do more of that, you should also ask what doesn’t lead to a conversion and do less of that. For example, high shipping fees may tank your conversion rate, but you’d never discover that by studying only the sessions that converted.
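As a concrete sketch, the snippet below compares conversion rates across all sessions, converted or not, segmented by a hypothetical shipping-fee field. The session data is fabricated purely to illustrate the shape of the analysis.

```python
# Sketch: instead of profiling only converting sessions (Wald's surviving
# planes), compare conversion rates across a factor for ALL sessions.
# The session records below are fabricated for illustration.
sessions = [
    {"shipping": 0,  "converted": True},
    {"shipping": 0,  "converted": True},
    {"shipping": 0,  "converted": False},
    {"shipping": 25, "converted": False},
    {"shipping": 25, "converted": False},
    {"shipping": 25, "converted": True},
    {"shipping": 25, "converted": False},
]

def conversion_rate(rows):
    return sum(r["converted"] for r in rows) / len(rows)

free = [s for s in sessions if s["shipping"] == 0]
paid = [s for s in sessions if s["shipping"] > 0]

print(f"free shipping: {conversion_rate(free):.0%}")   # 67%
print(f"paid shipping: {conversion_rate(paid):.0%}")   # 25%
```

If you had looked only at the converted sessions, shipping fees would appear in both groups and the pattern would vanish, which is exactly the survivorship trap Wald identified.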
4. Agencies will have the empathy machines lack
Even when computers are doing every part of PPC management, they still won’t have the human connection that you have with your clients. Understanding the nuances of your client’s business (which will help you come up with new ideas to test), understanding their fears about PPC, understanding their frustrations with the last account manager: all of this will help you have a more productive relationship with them.
One surprising profession leveraging AI is medicine. Doctors simply can’t read as much of the existing research as Watson can, so IBM’s supercomputer can be a magnificent diagnostician. But Watson may not be able to explain conditions to a patient, and it certainly will not have the empathy of a human when sharing potentially devastating news. There is still a place for doctors even when they have a supercomputer to help them.
And as PPC experts, a large part of our role will be to know which expert automations to test in an account. For bid management alone, there is an overwhelming number of options, ranging from Google’s free Portfolio Bid Strategies to upstart bid management companies that charge thousands of dollars for the promise of a slightly better result. Knowing what is available, what is worth testing and how to calculate the trade-offs is certain to be a large part of the value agencies provide.
Automation is taking over many of the tasks humans have historically done in PPC, but as this shift continues, there will be plenty of new opportunities for PPC experts and agencies to provide value to their clients.
Next time, I’ll cover new strategies and processes that will help bridge the gap between humans and artificially intelligent PPC machines.