We Audited Our Own SEO and Here Is What We Found
6 min read


By Brent W. Peterson


If you run a website in 2026 and you are not auditing your SEO regularly, you are falling behind. Not slowly. Fast.

We know this because we just audited Talk Commerce and found issues we did not expect. Structured data gaps that blocked rich results on 674 blog posts. Missing AI crawler directives. A homepage that AI systems could not cite properly. These are not edge cases. These are the basics, and we missed them.

Here is what happened and what we learned.

The Tool We Used

We ran the audit using a Claude Code SEO skill that orchestrates 13 specialized sub-skills and spawns parallel agents to analyze a site across seven categories: technical SEO, content quality, schema markup, sitemap structure, performance, image optimization, and AI search readiness (what some call Generative Engine Optimization or GEO).

One command triggers the full audit. The agents run in parallel, each focused on its own category, then report back with a unified score and prioritized action plan. It checks everything from robots.txt configuration to E-E-A-T signals to whether your content is structured in a way that AI systems can actually quote.

The important part is not the tool itself. The important part is what it found.

What We Found

Our overall SEO health score came back at 68 out of 100. Not terrible. Not good enough.

674 Blog Posts Missing Publisher Schema

Every blog post on Talk Commerce was missing the publisher field in its BlogPosting structured data. Google requires this field for Article rich results. Without it, none of our 674 posts were eligible for rich snippets in search. One field, omitted from the template, and therefore missing from every post.

The fix took 30 seconds. Adding the publisher object to the BlogPost layout template fixed all 674 posts at once.
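For context, here is roughly what the corrected markup looks like. This is a minimal sketch, not our exact template; the logo URL and date are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "We Audited Our Own SEO and Here Is What We Found",
  "author": {
    "@type": "Person",
    "name": "Brent W. Peterson"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Talk Commerce",
    "logo": {
      "@type": "ImageObject",
      "url": "https://example.com/logo.png"
    }
  },
  "datePublished": "2026-01-01"
}
```

Because the publisher object lives in the layout template, adding it once propagates to every post that uses that layout.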

AI Crawlers Were Partially Blocked

Our robots.txt allowed GPTBot (OpenAI) and Google-Extended (Gemini) but was missing directives for ClaudeBot, PerplexityBot, Bytespider, and Applebot-Extended. That means when someone asked Perplexity or Apple Intelligence about e-commerce podcasts, our content was harder to surface.

We added explicit Allow directives for all major AI crawlers. This is a two-minute change that affects whether your content appears in AI-generated answers.
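The resulting robots.txt entries look something like this (a sketch of the pattern, not our full file):

```text
# Explicitly allow major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Applebot-Extended
Allow: /

User-agent: Bytespider
Allow: /
```

Crawlers with no matching entry typically fall back to the wildcard rules, so an explicit Allow removes any ambiguity.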

The Homepage Was Not Citable

This one surprised us. Our homepage had “The Commerce Media Network” as a hero headline with stat cards (410+ episodes, 30K LinkedIn followers). It looked great. But when an AI system asked “What is Talk Commerce?” there was no clean definitional sentence to extract.

Marketing language is not the same as citable content. We added a clear paragraph: “Talk Commerce is an e-commerce podcast and media platform hosted by Brent Peterson. Since 2020, the show has produced over 410 episodes featuring interviews with founders, developers, agency leaders, and merchants.”

That sentence is what AI systems need. Subject, verb, object. What the thing is, who runs it, what it does.

No Person Schema for the Host

The About page talked about Brent Peterson but had no Person schema in the structured data. This matters for E-E-A-T (Experience, Expertise, Authoritativeness, Trust) and for Knowledge Panel eligibility. If Google does not know who your people are as entities, your expertise signals are weaker.

We added Person schema with jobTitle, worksFor, sameAs links (LinkedIn, Twitter), and knowsAbout fields. This connects the person to the organization in a way search engines understand.
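A minimal sketch of that Person markup follows; the sameAs URLs are placeholders, and the knowsAbout topics are illustrative:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Brent W. Peterson",
  "jobTitle": "Podcast Host",
  "worksFor": {
    "@type": "Organization",
    "name": "Talk Commerce"
  },
  "sameAs": [
    "https://www.linkedin.com/in/your-profile",
    "https://twitter.com/your-handle"
  ],
  "knowsAbout": ["e-commerce", "podcasting"]
}
```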

Sitemap Was Incomplete

The sitemap index file only referenced 26 static pages. The full sitemap with all 706 URLs (blog posts, podcast episodes, category pages) existed at a different path. If a crawler followed the index, it missed 680 content pages.
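The fix is to make sure the index file references every child sitemap. A sketch of a complete sitemap index, with illustrative file names and domain:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://example.com/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://example.com/sitemap-posts.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://example.com/sitemap-episodes.xml</loc>
  </sitemap>
</sitemapindex>
```

Crawlers that start at the index will only discover what the index points to, so every content sitemap has to be listed there.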

LLM Discovery Is Not Optional Anymore

This is the part that matters most. Traditional SEO (titles, meta descriptions, backlinks) is still important. But LLM discovery, making sure your content is findable and citable by AI systems, is now equally critical.

When someone asks ChatGPT, Perplexity, or Google’s AI Overview about e-commerce podcasts, your site needs to be in that answer. That requires:

  1. AI crawler access. Your robots.txt needs to explicitly allow GPTBot, ClaudeBot, PerplexityBot, and others. If they cannot crawl your site, they cannot cite it.
  2. An llms.txt file. This is a plain text file at your site root that tells AI systems what your site is about, what topics you cover, and how your content is organized. Think of it as a README for AI crawlers.
  3. Citable content. Your key pages need clear, factual statements that an AI can extract and quote. “The Commerce Media Network” is a branding phrase. “Talk Commerce is an e-commerce podcast hosted by Brent Peterson” is a citable fact.
  4. Structured data. Schema markup tells search engines and AI systems what type of content is on each page. Person, Organization, Product, Event, PodcastSeries, BlogPosting. Each type has required and recommended fields. Missing fields mean missing opportunities.
  5. FAQ sections with FAQPage schema. These are prime targets for AI Overviews. Clear question-and-answer pairs that an AI can surface directly in search results.
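To make the llms.txt idea concrete, here is a sketch following the proposed convention (an H1 title, a blockquote summary, then sections of links); the paths are illustrative:

```text
# Talk Commerce

> Talk Commerce is an e-commerce podcast and media platform hosted by
> Brent Peterson. Since 2020, the show has produced over 410 episodes.

## Podcast
- [Episodes](https://example.com/episodes): interviews with founders,
  developers, agency leaders, and merchants

## Blog
- [Articles](https://example.com/blog): commentary on e-commerce,
  SEO, and AI search
```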

How Often Should You Audit?

Industry best practice is a full technical audit quarterly with lighter checks monthly. But that was the old world. In 2026, with AI search changing how content gets discovered, I recommend:

Weekly: Check your key pages in Google Search Console for structured data errors, crawl issues, and new queries driving traffic. This takes 10 minutes.

Monthly: Run a focused audit on your highest-value pages. Check schema validation, content freshness, and AI crawler access. Look at whether your content is appearing in AI-generated answers.

Quarterly: Full site audit covering all categories. Technical, content, schema, sitemap, performance, and GEO readiness. This is what we just did on Talk Commerce.

After every major deploy: If you add new pages, change navigation, or restructure content, audit the affected areas immediately. We added 10+ new pages to Talk Commerce this week (services, events, PR landing page) and ran the audit right after to catch issues before they compounded.
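Some of these checks are easy to script. A sketch of the crawler-access check using Python's standard-library robotparser; the robots.txt content and URL here are illustrative, and in practice you would fetch your live file:

```python
from urllib.robotparser import RobotFileParser


def crawler_access(robots_txt: str, url: str, agents: list[str]) -> dict[str, bool]:
    """Report whether each AI crawler user agent may fetch the given URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, url) for agent in agents}


# Illustrative robots.txt: GPTBot allowed, Bytespider blocked,
# no rule at all for ClaudeBot (which defaults to allowed)
robots_txt = """\
User-agent: GPTBot
Allow: /

User-agent: Bytespider
Disallow: /
"""

report = crawler_access(
    robots_txt,
    "https://example.com/blog/",
    ["GPTBot", "ClaudeBot", "PerplexityBot", "Bytespider"],
)
for agent, allowed in report.items():
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Run monthly against your live robots.txt, this catches the silent gap we had: a crawler you never mentioned, quietly falling through to defaults you never chose.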

The cost of not checking is invisible until it is not. We had 674 posts without publisher schema. That could have been affecting our rich result eligibility for months without us noticing.

What We Fixed in One Session

In a single session, we:

  • Added publisher schema to all blog posts (one template fix, 674 posts updated)
  • Added Person schema for Brent Peterson on the About page
  • Added a definitional paragraph and FAQ to the homepage
  • Updated robots.txt with 5 additional AI crawler directives
  • Added all new pages to the sitemap
  • Fixed trailing slash inconsistency between canonical tags and sitemap
  • Moved our SEO health score from 68 to an estimated 80+
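The homepage FAQ from that list is marked up with FAQPage schema along these lines (a minimal sketch with one question; our live version has several):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Talk Commerce?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Talk Commerce is an e-commerce podcast and media platform hosted by Brent Peterson, with over 410 episodes since 2020."
      }
    }
  ]
}
```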

None of these fixes were difficult. Most took under a minute each. The hard part was knowing they needed to be done. That is what the audit is for.

The Bottom Line

SEO in 2026 is not just about ranking in Google’s traditional blue links. It is about being discoverable by AI systems that are increasingly how people find information. If your site is not optimized for both traditional search and AI-powered discovery, you are only doing half the job.

Run an audit. Fix what it finds. Check again next month. The sites that treat SEO as an ongoing practice rather than a one-time project are the ones that show up when it matters.