PwC researchers published a piece in Harvard Business Review this month with a recommendation that should worry every brand still running a traditional SEO playbook: structure your content for machines. Not for Google's crawler. For AI agents that are starting to make purchasing decisions on behalf of consumers.
The article, written by four PwC partners, identifies five risks brands face as AI agents become shoppers. Risk number one? Product misunderstanding, where agents guess when data isn't machine-structured. Their top recommendation: adopt generative engine optimization and make your product attributes machine-readable.
Here's what we've learned about what AI search actually cites, how to create it at scale, and how to track whether it's working.
The Scale of the Shift
If you're wondering whether AI search is big enough to care about, here are the numbers.
ChatGPT hit 900 million weekly active users by December 2025, according to Sam Altman. It processes over 2 billion queries per day. Google AI Overviews reach 2 billion monthly users globally. Perplexity handles 780 million queries per month and is growing fast.
Meanwhile, traditional search is shrinking. Gartner projects traditional search engine volume will drop 25% by 2026 due to AI chatbots. Zero-click searches, where users get their answer without visiting any website, now account for 58.5% of U.S. Google searches according to Click Vision data. In Google's AI Mode, that number climbs to 93% (Semrush).
But here's the part most people miss. Brands that get cited in AI responses see significantly more organic traffic than competitors who don't appear. The new "page one" isn't a list of ten blue links. It's being the source an AI model trusts enough to reference.
What Content AI Actually Cites
This is where most SEO firms get it wrong. They treat AI visibility like traditional SEO with a fresh coat of paint. More keywords, more backlinks, more volume. But AI models evaluate content fundamentally differently than a search crawler does.
A study by researchers from Princeton, Georgia Tech, and the Allen Institute for AI (published at KDD 2024, one of the top data science conferences) found that three specific tactics improved visibility in AI responses by up to 40%: adding citations to credible sources, including relevant statistics, and using direct quotations from authoritative figures.
An SE Ranking analysis of 129,000 domains, published in Search Engine Journal in November 2025, put finer numbers on it:
- Pages with expert quotes averaged 4.1 ChatGPT citations vs. 2.4 without
- Content with 19+ statistical data points averaged 5.4 citations vs. 2.8 for data-light pages
- Articles over 2,900 words averaged 5.1 citations vs. 3.2 for those under 800 words
- 44% of ChatGPT citations come from the first third of the content (Search Engine Land)
Google and Microsoft both confirmed in March 2025 that they use Schema Markup for their generative AI features. ChatGPT confirmed it uses structured data to determine which products appear in results. Both called structured data "critical for modern search features." If your product data isn't structured, AI systems are guessing. And when they guess, they pick competitors who made it easy.
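What "structured product data" means in practice is a JSON-LD block embedded in the page using schema.org's Product vocabulary. Here's a minimal sketch in Python that builds and serializes one; the product name, SKU, brand, and price are illustrative placeholders, not real data:

```python
import json

def product_jsonld(name, sku, brand, price, currency="USD"):
    """Build a minimal schema.org Product object as a Python dict."""
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "brand": {"@type": "Brand", "name": brand},
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock",
        },
    }

# Serialize for embedding in the page head inside
# <script type="application/ld+json"> ... </script>
snippet = json.dumps(
    product_jsonld("Trail Runner X", "TRX-001", "Acme", 129.0), indent=2
)
print(snippet)
```

The same attributes a human reads off a product page (name, brand, price, availability) become unambiguous fields a machine can parse, which is exactly what removes the guesswork described above.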
This is what we've been building at Texin.ai. Our glossary and guides are designed from the ground up for AI citability: JSON-LD schema on every page, cited statistics with named sources, structured FAQ sections, and content thick enough to actually answer questions. Not 300-word keyword-stuffed stubs. Real substance.
The SEO Agency Problem
I've been talking with brands across CPG, healthcare, and professional services. They all tell me the same thing: they pay an SEO firm thousands of dollars a month and get almost nothing in return.
The data backs this up. A Backlinko survey of 1,200 U.S. small business owners found that only 30% would recommend their current SEO provider. The number one reason for switching: 44% left because of dissatisfaction with results. Mid-market businesses typically spend $2,100 to $5,000 per month on these services (OuterBox), with quality agencies charging $3,000 to $7,500 (Siege Media).
The agencies producing thin, templated content aren't just wasting money. They're creating a liability. Google's March 2024 core update deindexed over 800 websites and specifically targeted what Google calls "scaled content abuse": mass-produced pages created to game rankings, whether by automation, humans, or both. Google's stated goal was to reduce low-quality content in search results by 40%.
Some of the brands I've spoken with have been penalized because their SEO firms created exactly this type of content. Thin templates across hundreds of city pages. AI-generated articles with no fact-checking. Content that Google's quality raters now flag as "fake E-E-A-T," a category Google added to its Search Quality Rater Guidelines in May 2025 specifically to catch AI content dressed up with fabricated author profiles and false expertise claims (reported by Search Engine Journal).
And here's the kicker: Ahrefs data shows that 82% of articles cited by ChatGPT and Perplexity were written by humans. Templated AI content doesn't get cited. It gets ignored.
How to Create It Autonomously (With a Human in the Loop)
Here's the tension. According to HubSpot's 2026 State of Marketing report, 94% of marketers plan to use AI in content creation. But 71% of employees say they prefer that AI-generated content be reviewed by a human before use (Index.dev).
That gap between automation and quality control is exactly what autonomous content pipelines solve. Not "AI replaces writers." AI handles the research, drafting, and optimization. A human handles judgment, fact-checking, and approval.
We run our content operation on OpenClaw, an open-source AI agent platform that connects AI models to messaging apps and automation tools. Here's our actual workflow:
- Every weekday morning, the agent reads our site inventory, search console data, keyword gaps, and competitor activity
- It generates a full 1,000-to-2,000-word article optimized for both Google organic and AI search citability
- The article hits my phone via Telegram. I read it, then reply APPROVE, REVISE, or SKIP
- On approval, the system publishes to the website automatically, creates a newsletter draft in our email platform, and logs the publication
The entire cycle from generation to live publication takes about two minutes after I hit APPROVE. No copy-pasting into a CMS. No waiting for a developer. No forgetting to send the newsletter.
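The approval step in that workflow reduces to a small piece of routing logic. Here's a hedged sketch of how a reply might be handled; the `Draft` structure and the comments about downstream calls are illustrative assumptions, not OpenClaw's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """Hypothetical representation of a pending article."""
    title: str
    body: str
    status: str = "pending"
    log: list = field(default_factory=list)

def handle_reply(draft: Draft, reply: str) -> Draft:
    """Route a messaging-app reply to the matching pipeline action."""
    action = reply.strip().upper()
    if action == "APPROVE":
        draft.status = "published"   # would trigger CMS publish + newsletter draft
        draft.log.append("published")
    elif action == "REVISE":
        draft.status = "revising"    # would re-prompt the drafting agent
        draft.log.append("revision requested")
    elif action == "SKIP":
        draft.status = "skipped"     # discard without publishing
        draft.log.append("skipped")
    else:
        draft.log.append(f"unrecognized reply: {reply!r}")  # stays pending
    return draft
```

The point of the design is that every path except APPROVE is a no-op on the live site: nothing publishes without an explicit human yes.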
The human approval step isn't optional. It's critical, especially in regulated industries. Healthcare organizations face HIPAA requirements governing any content involving patient information. Financial advisors are bound by the SEC Marketing Rule (206(4)-1) on advertisement content. Law firms in states like Florida must file advertisements for review 20 days before publication. AI can generate the content, but a qualified human needs to sign off before it goes live.
How to Monitor It
Creating content is half the equation. You also need to know how your brand shows up when someone asks ChatGPT, Perplexity, or Google a question about your category.
PwC's Harvard Business Review article calls this "monitoring brand presence in agent ecosystems." They list it as recommendation four out of five. This is what AI Radar does.
The challenge is simple: your brand is either appearing in AI-generated answers or it isn't. And you can't manage what you can't measure. AI Radar tracks how brands show up across AI search platforms, what gets cited, what gets recommended, and where the gaps are. Think of it as the analytics layer for a channel that handles billions of daily queries.
This isn't a nice-to-have. When Semrush analyzed 10 million keywords, they found only 20-26% overlap between AI Overview citations and traditional organic top-10 results. Being on page one of Google doesn't mean you're being cited by AI. They're different systems with different selection criteria. You need visibility into both.
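You can run the same overlap check on your own keywords: collect the domains an AI answer cites, collect the organic top ten for the same query, and measure the intersection. A minimal sketch (the domain lists are illustrative, not real query results):

```python
def citation_overlap(ai_citations, organic_top10):
    """Share of AI-cited domains that also appear in the organic top 10."""
    ai, organic = set(ai_citations), set(organic_top10)
    if not ai:
        return 0.0
    return len(ai & organic) / len(ai)

# Illustrative domain lists for one query.
ai_cited = ["exampleblog.com", "vendor-a.com", "wiki.org", "vendor-b.com"]
organic  = ["vendor-a.com", "newssite.com", "wiki.org", "retailer.com"]

print(f"{citation_overlap(ai_cited, organic):.0%}")  # 2 of 4 AI citations also rank organically
```

Averaged across a keyword set, a low score tells you the same thing the Semrush study found: your organic rankings are not carrying over into AI answers, and the two channels need separate attention.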
The Bigger Picture: One System, Not Three Vendors
What we're building at Texin.ai connects these layers into a single system. Autonomous content creation feeds your organic AI visibility. AI Radar monitors how your brand appears across AI platforms. Paid ad optimization, including the new ChatGPT ad inventory ($60 CPM, $200K minimum spend, per ALM Corp reporting), runs alongside it. The data from each layer informs the others.
Content marketing costs 62% less than traditional outbound marketing and generates 3x as many leads, according to DemandMetric research. Companies that publish regularly see 13x higher ROI than those that don't. But the bar for "regular publishing" has gone up. AI search rewards depth, structure, accuracy, and recency. Not volume for its own sake.
The brands that figure this out now, while AI search is still growing, will have entrenched advantages by the time their competitors catch up. PwC sees it. Google's own guidance confirms it. The research supports it.
The question is whether you'll build the pipeline or keep paying an agency to produce content that AI models ignore.
Texin.ai helps brands build AI-optimized content strategies with autonomous creation, monitoring, and paid media management. Get in touch to discuss how this applies to your business.
