November 6, 2024 | 11 min read
Search engine optimization (SEO) just got more difficult, but where there’s change, there’s opportunity. Comms strategist Aidan Muller talks us through how AI may throw a spanner into the works for brands.
Generative AI is fast being integrated into search (Google, Bing), and in some cases will replace search altogether. And since around two-thirds of online experiences start with a search, according to a 2019 report, there will be great rewards for those companies and organizations whose ideas, products and services are featured in those AI results.
Just as SEO grew up around search, a new professional industry is starting to emerge around the optimization of assets to shape AI results. However, unlike search, generative AI is expected to provide a synthesis and, as such, will be significantly more competitive than search. As the stakes inevitably get higher, I expect this will become a significant battleground for brands.
Some have called it AI Optimization (AIO), others have called it Generative Engine Optimization (GEO) – but I tend to refer to it as AI Results Optimization (AIRO) to distinguish it from optimization of the AI models themselves. Time will tell which acronym sticks!
Influencing AI results requires understanding how generative engines work. While we don’t all need to become AI engineers, it’s worth understanding the mechanics so we’re talking the same language.
Basically, there are three levels at which influence can be exerted:
The training data
The algorithm
Reinforcement learning from human feedback (RLHF)
Influencing the AI’s training data
Large language models (LLMs) are generally trained on large collections of texts, called corpora (e.g. Common Crawl, C4, BooksCorpus). Since these are aggregates, they are quite hard to influence directly. Marketers are better off thinking about the largest single data sources: Wikipedia, for example, is one of the largest, as are GitHub, arXiv, Quora and Reddit.
The good news is that the recent(ish) implementation of retrieval-augmented generation (RAG) – which allows AIs to fetch up-to-date information from a search engine or another data source – has made it easier and quicker to influence AI results. The bad news is that I expect AI models will become more discerning in time.
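Mechanically, RAG is simpler than it sounds: retrieved documents are ranked against the query and prepended to the model's prompt. The toy Python sketch below illustrates the idea – the corpus, the naive word-overlap scoring and the prompt format are all invented for illustration, not any real search engine or vendor API:

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The corpus, overlap scoring and prompt format are illustrative
# placeholders, not any real search engine or model API.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Prepend the retrieved documents as context for the model."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Acme Snacks launched a low-sugar granola bar in 2024.",
    "Hybrid cars combine a petrol engine with an electric motor.",
    "Acme Snacks is headquartered in Leeds.",
]

query = "What has Acme Snacks launched?"
prompt = build_prompt(query, retrieve(query, corpus))
# Whichever documents rank highest end up in the model's context window.
```

Whatever the retriever ranks highest is what the model sees – which is why crawlable, well-structured content on your own properties can surface in AI answers far faster than anything baked into the training corpus.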
There have been interesting experiments, for example, to try and game AI results. Kevin Roose mentioned two of them in a recent New York Times piece: furnishing the data source with “strategic text sequences” or invisible keywords in white text (previously known as ‘keyword stuffing’).
We will hear of many more hacks of this type in the coming year. But much like the black-hat techniques that tried to game search algorithms in the early years of SEO, these will eventually be phased out after a few algorithm updates. And if the Google experience is anything to go by, we may even see the deployers of black-hat techniques penalized.
For the time being, the most responsible and coherent avenues to shape training data are drawn from best practice in content development, SEO, web development and traditional PR, with the addition of a new focus on large data-rich platforms.
Comms directors should be prioritizing:
Creating good-quality content on owned properties (websites, microsites and, to a lesser extent, social media)
Answering concrete questions in a helpful, well-sourced way
Making sure your online properties are crawlable and your data is structured
Cultivating your domain’s authority so your content gets found
Getting credible, authoritative news outlets and publishers to say nice things about your brand (although many of these platforms have blocked AI access, I expect they will eventually reach commercial agreements with the key models)
Working with a specialist to optimize relevant content or conversations on specialized data-rich platforms (e.g. Wikipedia, GitHub, arXiv, Reddit)
These may take longer and be harder than black-hat techniques, but they are the most ethical and least damaging to your reputation.
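On the structured-data point above: the usual approach is to embed schema.org markup as JSON-LD in a page's head, so crawlers can read facts about the brand without parsing the prose. A minimal Python sketch follows – the brand name and URLs are invented for illustration:

```python
import json

# Hypothetical schema.org Organization markup serialized as JSON-LD.
# The brand name and URLs are invented for illustration.
org_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Snacks",
    "url": "https://example.com",
    "sameAs": ["https://en.wikipedia.org/wiki/Example"],
}

# Embed this <script> tag in the page's <head> so crawlers and
# retrieval pipelines can pick up structured facts about the brand.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(org_markup, indent=2)
    + "\n</script>"
)
print(snippet)
```

The `sameAs` links are worth noting: they tie your pages to the large data-rich platforms (Wikipedia and the like) that, as discussed above, carry outsized weight in training corpora.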
As we saw earlier, we can also look to influence the algorithm or the reinforcement learning from human feedback (RLHF) process. These, however, operate at the level of the AI companies themselves – and those companies will probably not take kindly to external forces looking to shape their product.
This option is an even heavier lift than shaping the training data, but it will be particularly attractive where the stakes are high, for organizations that don’t want to leave anything to chance. The variety of competing interests means this is likely to become a fierce battleground for brands, products and ideas.
I foresee that these organizations will do this in a few different ways.
Commercial arrangements with the AI company/ies will undoubtedly play a big part. There will almost certainly be a space for a new advertising model, and there may be more specific sponsorship agreements (to feature one product or message rather than another).
But there will also be room to influence the rules of the game. Larger brands may want to shape the policy or regulatory framework, in a way that favors their products, services or ideas. In many ways, this battle is already underway with the debate around safety.
These activities are more likely to influence generative AI results in the negative than in the positive. In other words, they might not lead to the AI promoting your particular healthy snack brand, but they may eventually downgrade high-sugar alternatives as a matter of policy. They may not feature your specific hybrid car brand, but they might promote hybrid cars over inefficient petrol-powered cars.
Intervention in this area would be no different from the bans on advertising tobacco or alcohol in some countries or the watershed on TV content and advertising.
While AI will not completely replace search – some users will still want to see the source material – there is no doubt that AI results will replace a significant share of searches, especially where the output is synthesized information.
The process of influencing is hard-coded into our DNA. The shift from a handful of above-the-fold search results to a single AI result will make for a more competitive – and possibly more adversarial – environment.
In the short term, there will be a significant competitive advantage for the organizations at the top of AI results and for the professionals who master AIRO. As an industry around influencing AI results develops and professionalizes – and standards get defined – the stakes are likely to be raised, and organizations will go to ever greater lengths to influence them.
For citizens and consumers, the breadth of results may – at least initially – narrow, though this might favor newcomers. AI-generated results will increasingly be weighted towards the product, brand or idea with the highest bid. In some instances results will be manipulated by black-hat marketing professionals, unscrupulous political campaigners and ill-intentioned international actors.
We will need mechanisms to ensure trust and transparency (in the same way that search and social media have to signpost ad content). And we will need to become more discerning and even more distrustful of online information (with unknown societal consequences in the long run).
This is a call for professionals to understand the mechanics of AIRO and become better equipped. It’s inevitable that this will become an arms race. The onus is on us to do this ethically.
Aidan Muller is the director of Daimon Communications, and co-founder of the Appraise Network.
© Carnyx Group Ltd 2024 | The Drum is a Registered Trademark and property of Carnyx Group Limited. All rights reserved.