
Could your next campaign creator be something other than a human?


 

The Rise of AI Avatars

AI creators are becoming a real campaign option, not just hype. For brands, the appeal is obvious: more control, faster production, and easier scaling across markets. However, adoption remains cautious. In a 2025 WFA survey on AI influencers, only 15% of major brands surveyed said they had already tested them, while 60% said they had no plans to use them. This gap is more telling than the hype itself. It shows that the technology is advancing faster than the trust needed to handle it confidently.

When the authenticity of creators is no longer obvious

Influencer marketing has always relied on more than just visibility. It works because people feel there is a real person behind the content. This assumption is becoming increasingly difficult to maintain.

The Coachella AI influencer wave, reported on by The Verge, has made this clear. Synthetic influencer accounts invaded one of the internet's most famous creator moments, and in several cases they looked so convincing that the difference wasn't obvious at first glance. What made the story significant wasn't simply the existence of AI-generated profiles. It was the fact that they could take on the signals of real creator culture without being anchored in the same lived presence.

This is the key pressure point: when synthetic creators can replicate the appearance of authenticity, influencer teams are no longer judging content alone. They are judging what kind of trust the content still deserves.

This tension is deepened by Vanity Fair's reporting on creator clones and licensed likenesses. The issue is no longer limited to obviously fictional avatars. It now includes systems that can reuse a creator's face, voice and identity beyond the original act of creation. What is being scaled is not just the output. It's the credibility itself.

This is one of the reasons why the issue cannot be dismissed as marginal. According to the IAB 2026 study on AI-generated ads, 83% of advertising executives say their company already uses AI in the creative process, and 85% say they use AI for ads on social media. This doesn't mean that AI avatars are already mainstream in influencer marketing. However, it does mean that synthetic production is becoming the norm in the wider marketing environment where creator campaigns take place.

 

Why brands are interested in AI avatars

The appeal isn't hard to understand. AI avatars speak directly to the operational pressure points that influencer teams are already feeling.

Campaigns increasingly need to operate across markets, adapt to multiple formats and deliver more output without giving teams more time. In this context, synthetic creators can feel less like an experiment and more like a production shortcut.

This is also the finding of the WFA in their 2025 survey. The main perceived benefits were cost efficiency at 77%, followed by reduced risk of influencer scandals at 58% and scalability at 58%. The exact figures may have changed since then, but the underlying logic is still relevant. Brands are not attracted to AI avatars because they appear more human. They are attracted because they promise more control over time, execution and variation.

This logic is particularly evident in Europe, where campaigns often have to overcome linguistic and market-specific boundaries without losing relevance. Human creators remain central to this work, but also come with obvious limitations. They need schedules, approvals, localization, reshoots and time. Synthetic formats promise to reduce some of this friction.

This is where the IROIN® by Stellar Tech 2026 Influencer Marketing Trend Guide offers readers a more practical perspective. It shows that brands are exploring synthetic creator formats where they need tighter message control, multilingual scaling and faster creative testing before investing in more expensive production. It also points to use cases such as digital twins for multilingual output and AI-generated UGC for faster concept testing.

In other words, the appeal is less about novelty and more about operational convenience.

 

The trust problem begins where the efficiency story ends

This is where the logic behind AI avatars becomes harder to defend. What feels efficient to a brand team is not automatically credible to an audience.

The IAB 2026 study clearly shows this gap. It found that 82% of advertising executives believed Gen-Z and Millennial consumers viewed AI-generated ads favorably, while only 45% of consumers actually did. It also found that 39% of Gen-Z respondents viewed AI ads negatively, compared to 20% of Millennials. For influencer marketing, this gap is even more significant than in traditional advertising, as creator campaigns are based on perceived proximity, familiarity and trust.

The same tension is evident in our IROIN® by Stellar Tech 2026 Influencer Marketing Trend Guide, where we sharpen the point further. It argues that virtual creators create a conflict between brand control and audience authenticity: according to Forbes, only 12% of consumers trust virtual influencers as much as human influencers, while 15% say they never trust them. It also highlights a broader set of risks around lack of clarity, the uncanny valley and the control paradox, where highly controlled content can be easier to manage but less emotionally compelling.

This is the point at which synthetic creators cease to be a production story and become a credibility story. A brand can gain speed, consistency and cleaner execution, but still lose something more valuable if the content feels contrived rather than believable. The real friction is not technical. It is relational in nature.

 

Brand safety starts now, before the content is published

Once synthetic creators enter the workflow, brand safety can no longer be treated as a final content check. The real risk often starts much earlier - the moment a brand decides who or what to work with.

This is particularly important with AI avatars, cloned likenesses or AI-generated UGC, as the content itself can look polished, compliant and visually innocuous, while still creating deeper issues around transparency, consent, context or audience trust. A synthetic creator doesn't have to post anything obviously controversial to become a brand risk. The problem can be more subtle. It may be that the format seems misleading, that the identity behind it is unclear, or that the campaign creates discomfort precisely because it seems too frictionless to be fully credible.

This is where brand safety needs to move from a narrow screening exercise to a broader suitability decision. The question is no longer just whether the content contains a risk signal. The question is whether the campaign structure itself creates one.

This shift is already implicit in the wider European discussion around AI-generated content. The European Commission's work on labeling and tagging AI-generated content makes it clear that transparency is becoming a more formal expectation - rather than something that brands can treat as optional or cosmetic. At the same time, older but still useful WFA research from 2025 showed that only 22% of respondents had internal guidelines for the use of AI influencers.

For influencer teams, it's no longer enough to review the final deliverable and ask if it fits the brand. Teams also need to know if the creator identity is clearly understood, if the use of synthetic elements is appropriately disclosed, if the audience context fits the format, and if the campaign relies on a kind of credibility that it hasn't fully earned.

 

What influencer teams should check before working with AI creators

Once brand safety is understood as an upstream decision, the next step is practical. Not every campaign is a suitable candidate for synthetic creators, even if the format looks efficient on paper.

The first question is whether the campaign is based on control or on lived credibility. A synthetic creator can work for high-volume testing, multilingual customization or highly scripted product presentations. However, in campaigns where trust is based on personal experience, vulnerability or a recognizable human point of view, it is much harder to justify. This distinction matters more than the novelty of the format itself.

The second question is whether the disclosure will be visible enough to shape the audience's understanding in real time. The Coachella examples reported on by The Verge are useful because they show how easily synthetic content can infiltrate creator culture while remaining only partially legible to the viewer. If the audience can't recognize what they're seeing, the campaign may be creating friction before performance is even measured.

The third question is whether rights and consents are specific enough for the format used. As Vanity Fair reported, the market is moving towards creator clones, licensed likenesses and reusable synthetic identities. This places higher demands on contracts. Brands need to know not only that a creator has agreed to participate, but also what exactly can be reused, for how long, in what languages and in what future contexts.

The fourth question is whether performance is properly evaluated. Faster production and lower costs are easy to measure. Loss of credibility is more difficult to quantify. Therefore, campaigns with synthetic creators should not be measured solely on output. A campaign that delivers more assets but weakens trust can be operationally more efficient and commercially less effective.

This is also the point at which qualitative signals become important. In addition to reach, clicks or conversions, teams need to understand how the audience responds to the content itself. Using tools such as sentiment analysis from IROIN® by Stellar Tech can help brands go beyond purely quantitative reporting and assess whether a synthetic creator activation is perceived as credible, confusing, off-putting or truly compelling.
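As a concrete illustration of what "beyond output metrics" can look like, here is a minimal, hypothetical sketch in Python. The function name, labels and thresholds are assumptions for illustration only; they are not real campaign data and not the IROIN® by Stellar Tech API, whose sentiment labels would come from its own analysis.

```python
from collections import Counter

# Hypothetical sketch: labels and thresholds are illustrative assumptions,
# not real campaign data or any vendor's API.
def credibility_flags(comment_labels, max_negative_share=0.25, max_confused_share=0.15):
    """Summarize qualitative audience signals for a synthetic-creator post.

    comment_labels: per-comment labels such as "positive", "negative" or
    "confused" (e.g. from a sentiment model or manual review).
    Returns the label shares plus simple warning flags.
    """
    total = len(comment_labels)
    counts = Counter(comment_labels)
    shares = {label: count / total for label, count in counts.items()}
    return {
        "shares": shares,
        "too_negative": shares.get("negative", 0.0) > max_negative_share,
        "too_confused": shares.get("confused", 0.0) > max_confused_share,
    }

# Example: ten comments on an AI-avatar post
labels = ["positive"] * 5 + ["negative"] * 3 + ["confused"] * 2
report = credibility_flags(labels)
print(report["too_negative"], report["too_confused"])  # prints: True True
```

The thresholds here are a judgment call, not a standard: the point is simply that a campaign can hit its output targets while the qualitative signals warn that the format is eroding trust.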

 

The future of creator marketing will require no less human judgment

AI avatars, creator clones and synthetic creator formats will almost certainly become more common. The commercial logic behind them is too real to be ignored. They offer control, speed and scalability that many influencer teams are already under pressure to deliver.

But influencer marketing has never been valuable because it produces content efficiently. It is valuable because it turns visibility into credibility. That's why the next phase of creator marketing won't be defined by automation alone. It will be defined by whether brands know when synthetic formats reinforce a campaign and when they quietly undermine the trust on which that campaign is based.

So the most useful conclusion is not that AI should be rejected, nor that human creators are about to disappear. Rather, it's that the easier it becomes to simulate a person, the more carefully brands need to define what kind of trust they actually want to gain. AI can improve execution. It can support discovery, testing, localization and scaling. But it can't decide where credibility comes from or how much of it a brand is willing to risk. That remains a human judgment - and in influencer marketing, it's becoming an increasingly important one.