
The Unscripted SEO Podcast

Entity Optimization and AI with Jason Barnard

In this episode of the Unscripted SEO podcast, we dive deep into entity optimization, AI assistive engines, and the future of search with Jason Barnard of KaliCube. This conversation explores practical strategies for controlling your digital footprint and optimizing for the algorithmic trinity of modern search systems.


Related Episodes You’ll Love

🎙️ Similar Deep Dives: If you enjoyed this conversation about entities and AI, you’ll want to check out our interview with Mark Williams-Cook on technical SEO innovations, where we explored how machine learning is reshaping technical optimization strategies. Mark’s insights on data-driven SEO complement Jason’s entity-focused approach perfectly.

🎙️ Foundational Concepts: For more on the evolution of search and brand building, don’t miss our conversation with Rand Fishkin on the future of SEO. Rand’s perspective on building sustainable organic growth strategies pairs brilliantly with Jason’s entity optimization methodology.


Best Quotes from the Interview

“Which part of the web do you control? Your own digital footprint. We come back to KaliCube 2015 when I started.” – Jason Barnard

“Truth becomes reality due to people repeating it. What you realize is that people sometimes just repeat what they’ve heard and they don’t actually have an opinion.” – Jason Barnard

“If you can organize your data source to be logical (because machines are logical), to be meaningful and valuable, and make sure you’re connecting out to the proof that what you’re saying is true… then you’re onto a winning mindset.” – Jason Barnard

“The L in LLMs is learning and you can influence that learning process.” – Jeremy Rivera

“Understandability is the foundation. If you don’t have that, you’re not even in the game.” – Jason Barnard


Key Takeaways

  • The Algorithmic Trinity: All modern AI systems (Google, ChatGPT, Perplexity) are built on three core technologies – LLM chatbots, knowledge graphs, and search results – all fed from the same data source: the web. This means controlling your digital footprint impacts all three simultaneously.
  • Entity Optimization Timeline: Search results update within a week, knowledge graphs take about three months to reflect changes, while LLM training data requires up to a year. Understanding these timelines helps set realistic expectations for entity optimization campaigns.
  • Industry-Specific Authority Matters: Authority isn’t universal – IMDb dominates for movies, Crunchbase for business, legal directories for law. The key is identifying which platforms algorithms trust within your specific industry rather than chasing generic high-authority domains.
  • The Claim-Frame-Prove Method: Establishing expertise requires making a claim, framing it within existing knowledge structures, then getting others to corroborate it. This creates “truth” through repetition and consensus, which AI systems then recognize and amplify.

Valuable Insights Table

| Concept | Traditional SEO Approach | Modern Entity Approach | Time to Impact |
| --- | --- | --- | --- |
| Authority Building | High DA backlinks | Industry-specific entity mentions | 3-12 months |
| Content Strategy | Keyword optimization | Information gain and new perspectives | 1-4 weeks |
| Trust Signals | PageRank and links | Corroborative digital footprint | 1-6 months |
| Brand Recognition | Branded search volume | Entity understanding across platforms | 3-12 months |
| Competitive Analysis | SERP positioning | Entity relationship mapping | Ongoing |
| Foundation Element | Homepage optimization | About page as entity canonical | Immediate |

Introduction and Background

Jeremy Rivera: Hello, I’m Jeremy Rivera, your Unscripted SEO podcast host. Welcome to another episode where we explore the cutting edge of search marketing. Today I’m here with Jason Barnard of KaliCube. I am really looking forward to this interview because I have so many questions about entities and Google, entities and LLMs, and the subjective nature of truth.

For those who may not be as familiar with you as I am, could you give them an introduction to a little bit of your history? I always do this – focus on what you’ve done or where you’ve been that makes you a source of truth for your industry.

Jason Barnard: My industry is search engine optimization, or generative engine optimization, or AI assistive engine optimization – whichever one you want to call it. I’m calling it AI assistive engine optimization because we’re optimizing today for AI assistive engines like ChatGPT, Perplexity, Google AI Mode, and Microsoft Copilot.

Why am I an expert? Why can I claim to be an expert? Because I started in 1998, the year Google was incorporated, with a website for children and grew it to one billion page views in 2007 – 60 million children visited the website that year. We signed deals with huge companies like Tata in India, Samsung, Disney, and ITV Studios. It was a huge success because of the success on the web, and we were competing with the BBC and PBS.

A lot of that was down to my ability to, let’s say, manipulate Google – get to the top of the results for all the different keywords at the time – but also to market my platform as a brand. The brands were UpToTen (the company, where I was CEO and founder) and Boowa and Kwala (the characters that my wife and I created).

Then in 2012 I pivoted my career, having realized that I could change the way Google perceived my personal brand. It saw me as a cartoon blue dog, and I wanted it to see me as an entrepreneur and digital marketer. So I figured out how to change its perception – instead of saying Jason Barnard is a cartoon blue dog, it said Jason Barnard is a respected entrepreneur and digital marketer.

Now at KaliCube, I’ve built the company to offer that service to entrepreneurs around the world who want to make sure that Google and now AI are saying exactly what they want about them and make sure that Google and AI, like ChatGPT, prioritize them above the competition within their niche.


The Evolution of SEO: From Webmaster to Entity Optimization

Jeremy Rivera: I love that intro, and it leads us down a couple of paths. Since you came from 1998, I’m floating this concept: let’s forget the acronyms. The problem isn’t that SEO is dead or that SEO is now GEO – it’s actually our label as experts. You came up in the age when a person of your talents was known as a webmaster.

I’m going to try to bring webmaster back as an actual title because it’s a much more fitting description of what it is that we’re doing. We’re integrating the available signals – whether that’s through email, organic search, creating entities, connecting to other websites through our site and through our business – to change how we’re perceived and hopefully drive traffic based off of that perception.

Let’s talk about how you made that change from a blue cartoon dog. What became some of the pillar pieces of research or processes that you experimented with to uncover how it was that Google understood entities?

Jason Barnard: Back in 2012, it was very much about manipulating very simplistic algorithms with keywords and links. But the difference that I found was that it wasn’t just my website, it was my entire digital footprint that I needed to manipulate.

When you search for somebody’s name or a company name, it fills the results, even back in 2012, with Facebook, review sites, articles about you, your social media profiles. So whereas SEO would at that time have been focusing purely on the website, I was focusing on the digital footprint already. That was a huge difference – bringing my SEO skills to a wider range of sites.

I realized this digital footprint is the foundation of everything that I am online. It drives traffic directly from Facebook, from LinkedIn, from YouTube, from articles about me, articles I write on Search Engine Land. But I can also use that to change the results about me so that when somebody has seen me and they search my name, I look like a superstar.

That was in 2012, so I wasn’t really looking at entities yet. Then in 2015, after Hummingbird and Google’s shift “from strings to things,” I thought, “Oh wow, I’ve hit the jackpot,” and I created KaliCube.


Brand Search and Click Behavior

Jeremy Rivera: I think the appearance of the Knowledge Panel as a regular SERP feature really cemented entity optimization as something that entered a lot of people’s minds. But I’m going to throw a challenge onto that – we were always guessing that click behavior was a ranking factor. Now we have the lawsuits to finally prove that we were being gaslit by Google all through 2010 to 2020 about whether click behavior had any impact – it absolutely does.

Following along how brands are searched organically and click behavior, how do you see those two things being connected? I see a direct correlation between the increase in branded search being a positive signal about your entity. Would you agree with that assessment?

Jason Barnard: Absolutely. Increased brand searches is definitely a powerful signal to Google. Google were gaslighting – you have to remember that people like John Mueller and Gary Illyes and Danny Sullivan are actually just there as public relations people to try to keep the SEO community from cheating too much. So they are going to deny a lot of stuff because they don’t want people gaming the system more than they currently are.

What I noticed with Gary and with John and with Danny is that they say things that if you read exactly what they said, they don’t say exactly what the SEO community thinks they’re saying. They would say, for example, “clicks don’t change rankings directly.” It’s the word “directly” that gets them out – it’s their get out clause.

I did a series of interviews with the team leads at Bing – Fabrice Canal, Nathan Chalmers, Frédéric Dubut – and they have nothing to lose. So they shared a lot of the secrets that Google weren’t sharing. When I published the articles, people said, “That’s Bing and not Google.” But it’s the same technology, the same audience, the same aim. They are not reinventing the wheel.

One thing that Nathan Chalmers explained to me at Bing was the whole page algorithm – the overriding algorithm that reorganizes the SERP. Even if you have a place on that SERP theoretically, the whole page algorithm is building what it believes to be the best result for the user as opposed to the best result from the algorithms.

After three or four years, Gary Illyes said, “Yeah, we’ve got that and it’s called the Magic Mixer.” I knew it and I was sharing it, but people didn’t believe me until Gary admitted it four years later.


The Algorithmic Trinity: LLMs, Knowledge Graphs, and Search Results

Jeremy Rivera: Now there are indications that maybe there’s a backdoor through which ChatGPT and others are able to access Google rankings in some way. From your understanding, what are the implications of how machine-learning and LLM tools understand entities, versus the pathways and processes you reverse-engineered with Google?

Jason Barnard: An LLM is designed to have a conversation. So I call it the algorithmic trinity. All of these engines are built on the same three technologies: LLM chatbots, knowledge graphs, and search results. That’s how they function.

  • The LLM, which allows them to hold a conversation
  • Search results, for up-to-date information or niche information not already in the training data
  • Knowledge graphs, for fact-checking

Whether it’s an LLM chatbot training data, whether it’s a knowledge graph or whether it’s search results, they all use the web as the data source. So at the end of the day, you’re feeding all three of the algorithmic trinity from the same data source, which is the web.

Which part of the web do you control? Your own digital footprint. We come back to KaliCube 2015 when I started.

Jeremy Rivera: So you’re saying with these three technologies, the other two are fed out of the one trough. Since you can control the one trough, which is your personal data stream, that’s where your focus lies because you can’t directly control the other two.

Jason Barnard: Exactly. And then you come to the point of controlling your digital footprint and your relationship to your competitors and your relationship to your audience. You need to look at: my audience, me, my competitors. Who am I and what is my relationship with my competitors and with my audience? How can I communicate that to all three of the algorithmic trinity?

The search results are going to be relatively reactive – a week, maybe. A knowledge graph is going to take three months. An LLM’s training data is going to take you a year. That’s the way it works today. You’re using the same data source to feed all three of the algorithmic trinity, with different time lapses before the effect is felt.
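To make those time lapses concrete, here’s a minimal Python sketch (an editorial illustration, not a KaliCube tool; the lags are the ballpark figures Jason gives above, not published constants) that estimates when a single change to your digital footprint should surface in each leg of the trinity:

```python
from datetime import date, timedelta

# Rough propagation lags for the "algorithmic trinity", as described in
# this episode. Ballpark figures only, not published constants.
PROPAGATION_LAG = {
    "search_results": timedelta(weeks=1),      # reactive: re-crawled within days
    "knowledge_graph": timedelta(days=90),     # ~three months to reconcile
    "llm_training_data": timedelta(days=365),  # ~a year until the next training run
}

def expected_impact_dates(change_published: date) -> dict[str, date]:
    """Estimate when a digital-footprint change shows up in each system."""
    return {system: change_published + lag for system, lag in PROPAGATION_LAG.items()}

# Hypothetical publication date, purely for illustration.
for system, eta in expected_impact_dates(date(2025, 1, 15)).items():
    print(f"{system}: expect impact around {eta.isoformat()}")
```

The point of the toy model is simply that one publishing event feeds three systems running on three very different clocks.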


Industry-Specific Authority and Trust Signals

Jeremy Rivera: When you’re thinking about optimizing an entity for a client, is there a differentiation between what people consider to be social media sites versus forums versus directories versus news outlets?

Jason Barnard: It actually depends on the industry. In the legal industry, the legal directories have huge power. So you have to look at which sources the algorithms trust, and that’s dependent on the industry.

  • Wikipedia and Wikidata apart, it’s all industry-dependent
  • IMDb is very powerful for movies
  • MusicBrainz is very powerful for music
  • Spotify is very powerful for music and for podcasts, but not for business
  • Crunchbase is powerful for business

You need to look at the relevancy of the platform to your industry. What we found at KaliCube is that a domain that’s incredibly powerful for one specific entity won’t be the same even within a single industry.

For example, I created my website, and it now has huge power over how one of our clients, Jonathan Cronstedt, is represented, because I’ve integrated him into my peer group and established that relationship on my website. Google, ChatGPT, and Perplexity all extract information about him from my website. The relevancy and the authority that my website has on this specific person is incredibly important. You can’t take it as read that a website you haven’t heard of has no authority.
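As an editorial aside, the industry-dependent trust Jason describes can be sketched as a simple lookup. The platform lists below echo the examples from this conversation; the legal entry is a placeholder domain, since no specific directory is named:

```python
# Platforms the algorithms tend to trust, per industry. Entries echo the
# examples given in this conversation; the legal entry is a placeholder.
UNIVERSAL_SOURCES = ["wikipedia.org", "wikidata.org"]  # trusted in every niche

TRUSTED_SOURCES_BY_INDUSTRY = {
    "movies": ["imdb.com"],
    "music": ["musicbrainz.org", "spotify.com"],
    "podcasts": ["spotify.com"],
    "business": ["crunchbase.com"],
    "legal": ["example-legal-directory.com"],  # hypothetical placeholder
}

def corroboration_targets(industry: str) -> list[str]:
    """Platforms to prioritize when building an entity's digital footprint."""
    return UNIVERSAL_SOURCES + TRUSTED_SOURCES_BY_INDUSTRY.get(industry, [])

print(corroboration_targets("music"))
# ['wikipedia.org', 'wikidata.org', 'musicbrainz.org', 'spotify.com']
```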


The Medic Update and Distance from Seed Sites

Jeremy Rivera: That ties into a concept I discussed with Darth Autocrat about the Medic update back in 2018. We had an ecosystem where chiropractors were outranking Medline, mostly because they hired great writers and had 20,000-word essays on turmeric where WebMD had a sentence. Overnight we saw that flip.

We posited that distance from seed, as a component of a TrustRank-style factor, might explain this – meaning there is a seed set of sites out there, and distance from that seed set could have been the key signal. Josh Axe might be a great chiropractor with great writers, but he’s not getting a link from the Mayo Clinic. WebMD is.

Jason Barnard: We had data on where the knowledge in Google Knowledge Panels in particular comes from. There are four different tiers of sources:

  1. Google Knowledge Verticals (like Google Maps)
  2. Wikipedia and other highly trusted seed sources
  3. Second-generation sources (one step away from that seed set)
  4. Everything further out

What we managed to identify was the seed set, and then the sources one step away from the seed set. Around 2020 or 2021, we started seeing sources that just don’t make sense – that you simply wouldn’t trust by looking at them.

That was when Google said, “OK, the Knowledge Graph has got its seed sources and we’ve stuck to it religiously for the last six, seven years. Now we’re going to let it go.” It started pulling up different sites.

That’s when my site became the authority for the TV series my wife and I made. Up until that point, my website didn’t have any authority on that. It was Wikipedia, Wikidata, IMDb, and Fandom. Then all of a sudden, the knowledge panel started to cite my website for all the characters we created.

My website, even though it doesn’t have a high domain authority, has a very high entity authority for very specific entities that are related to me. That’s hugely powerful and it’s even more so today.


The Subjective Nature of Truth in AI Systems

Jeremy Rivera: Let’s talk about the impact of ChatGPT’s seeming reliance on citations from press sources, journalists, the concept of subjective truth, and how SEOs have been remiss in separating themselves from the previous generation of PR people.

Jason Barnard: The question of truth – we were talking about claim, frame, and prove. This gets us into a philosophical discussion. I claim that I was born in 1966, and that’s pretty easy to prove. I claim that I’m a world-leading expert in generative engine optimization. That’s much more difficult to prove because it’s subjective.

But I can make the claim on this show, framing it to say: because I’m an expert in answer engine optimization, therefore I am an expert in AI assistive engine optimization. Generative engine optimization is the precursor to AI assistive engine optimization, which is the next step. That’s a framing.

And then to prove it, I get people like you to say, “I agree, Jason. You are an expert,” and it becomes truth. Is it fact? No, but it’s truth.

Jeremy Rivera: So I say Jason Barnard is an expert in GEO, in AISEO, and I add that to the show notes of this episode. I’ll write an article about it. I’ll give you an article to post to your site as well. So there will be three concurring sources with my statement. Now those words exist in multiple places online, so when a query comes into an LLM tool, those concurrent occurrences lead to a claim becoming a nearly provable truth.

Jason Barnard: Exactly. It may sound like cheating, but if you think about human behavior, that’s exactly how it all works anyway. If somebody says “that’s the best boulangerie,” I believe them. Somebody else says it, I believe them. You end up with the truth that this is the best baker’s because people repeat it.

Truth becomes reality due to people repeating it. What you realize is that people sometimes just repeat what they’ve heard and they don’t actually have an opinion. The more that happens, the more it becomes proven and hardened down into the truth of humanity, even though a lot of the people saying it have absolutely no idea what they’re talking about.


The Value of Unlinked Citations and Mentions

Jeremy Rivera: This is something Rand Fishkin came up with – the concept of unlinked citations – back in 2013. There was value in the mentions, but because we lived in a Google-centric economy where links were 10 times more effective, most SEOs would either pooh-pooh or ignore the concept unless they needed a Wikipedia page.

I find it fascinating that these concepts have existed, but now we’re not in the golden age of Google dominant SEO focus. It’s opening up the playing field, which reveals that it’s been this way all along.

Jason Barnard: Whoever 10 years ago was saying “don’t care about mentions” should be kicking themselves today. In this claim, frame, prove system, I see a lot of people talking about answer engine optimization.

If you had searched “who is an expert in answer engine optimization” in January of this year, I didn’t even appear on the list. But I had started a series on SEMrush called “SEO is AEO” (answer engine optimization), done a podcast with Trustpilot, and written an article on Search Engine Watch in 2018. Lots of mentions, but not very many links.

I could explain to the AI, to the LLMs and to the search results: I started this in 2018 and here’s the proof. All of those mentions suddenly come in very useful because now if you ask any of these machines who are the world’s experts in answer engine optimization, six months down the line: Jason Barnard. He coined the term.

There’s an obsession with Wikipedia, but you have to think about scale. Wikipedia has around 6 million articles; Google’s Knowledge Graph has 54 billion entities – roughly 10,000 times the scale. Google doesn’t need Wikipedia above and beyond it being a seed source.

If you look at an entity in the knowledge graph, it’s a thing. Little by little, you can reinforce the presence of that entity and its relationship to other entities. If you think of an LLM, the equivalent is a parameter. In an LLM, I’m a parameter. You can look at both an LLM and a knowledge graph in a very similar way if you accept that a parameter and an entity are essentially the same thing.
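Jason’s “little by little” reinforcement can be sketched as a toy triple store in which every independent page repeating a (subject, relation, object) claim hardens that edge. This is an editorial simplification: real knowledge graphs weigh source authority, they don’t just count repetitions:

```python
from collections import Counter

class ToyKnowledgeGraph:
    """Toy model: repetition from independent sources reinforces an edge."""

    def __init__(self) -> None:
        self.edges: Counter = Counter()

    def observe(self, subject: str, relation: str, obj: str) -> None:
        """Record one more independent source asserting this triple."""
        self.edges[(subject, relation, obj)] += 1

    def reinforcement(self, subject: str, relation: str, obj: str) -> int:
        """How 'hardened' the relationship is: more repetition, more trust."""
        return self.edges[(subject, relation, obj)]

kg = ToyKnowledgeGraph()
# Three corroborating sources, as in the claim-frame-prove example above.
for _source in ("show notes", "guest article", "host article"):
    kg.observe("Jason Barnard", "expert_in", "answer engine optimization")

print(kg.reinforcement("Jason Barnard", "expert_in", "answer engine optimization"))  # 3
```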


Connecting Old Technology to New: Strategic Framework

Jeremy Rivera: Does that mean a marketing team faced with disruptive technology is looking at an adjustment to the knowledge graph understanding of that old technology and the relationship to that new technology? The job is to put articles out there to craft the semantic connections from old A to new B?

Jason Barnard: Yes. It doesn’t know what it doesn’t know. Like with a knowledge graph, it can’t understand something if you don’t relate it to something it already understands. In a knowledge graph, it’s a question of understanding. With an LLM, it’s a question of having that parameter to hook the new parameter onto.

If there are no parameters to hook onto, it can’t include it in any meaningful way in its dataset. It won’t be accessible. For a knowledge graph and an LLM, I would build the same way. You say: this is the old world, this is the new world, and this is how they connect. And this is why we are important in that connection. Then you become part of that connection.

Jeremy Rivera: I think that’s a transformative way to view the transitional effect of this new technology. I’ve been seeking a newer framework to approach my SEO work with. Taking on the webmaster idea – if I’m looking at these connections and no longer narrowly focused, because the golden age of Google is gone, I need to look broader. The L in LLMs is learning, and you can influence that learning process.

Jason Barnard: We’re feeding these machines – whether it’s the LLM, the Knowledge Graph, or the search results – through the web index. That web index is being sucked up by Google, Bing, and ChatGPT, sometimes in real time, and by Common Crawl. You’re feeding all of these machines with exactly the same data source.

If you can organize your data source to be logical (because machines are logical), to be meaningful and valuable, and make sure you’re connecting out to the proof that what you’re saying is true – or at least supported by people and companies and entities within your industry – then you’re onto a winning mindset.


Query Fan Out vs. Cascading Queries

Jeremy Rivera: Mike King talks about the concept of query fan out within Google and using that as a framework to adjust content marketing. Do you agree with his approach?

Jason Barnard: I talked a couple of years ago about cascading queries. So what he calls fan out, I call cascading queries. It’s Fabrice Canal from Bing who explained that to me two years ago when Copilot was launched.

Basically, what we do is take the first query and find the other queries that make sense around that query to make the final query that allows us to produce the results. It makes total sense, and I wrote an article on Search Engine Land about micro-AEO (micro-answer engine optimization), with the idea that if you can optimize for each of those cascading queries, and you can become the answer for multiple ones, you will be included in the LLM output.
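Here’s a minimal sketch of that cascading-query idea in Python. Both functions are stand-ins for the engine’s internal models, and the expansion patterns are invented for illustration; the takeaway is that a site answering several of the sub-queries appears repeatedly in the merged context – micro-AEO in miniature:

```python
def fan_out(seed_query: str) -> list[str]:
    """Stand-in for the engine's query-expansion step (patterns invented)."""
    return [
        seed_query,
        f"{seed_query} examples",
        f"best tools for {seed_query}",
        f"who are the experts in {seed_query}",
    ]

def retrieve(sub_query: str, index: dict[str, str]) -> str | None:
    """Stand-in retrieval: look each sub-query up in a toy content index."""
    return index.get(sub_query)

def answer_with_fan_out(seed_query: str, index: dict[str, str]) -> list[str]:
    """Answer every cascading query, then merge the hits into one result."""
    return [hit for q in fan_out(seed_query) if (hit := retrieve(q, index))]

# A site that answers more than one cascading query gets cited more often.
toy_index = {
    "entity optimization": "kalicube.com: what entity optimization is...",
    "entity optimization examples": "kalicube.com: case studies...",
}
print(answer_with_fan_out("entity optimization", toy_index))
```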

Jeremy Rivera: A chunk of the first sites hit by the helpful content update (HCU) were notably sites that reverse-engineered “people also ask” entries for particular niches and created programmatic answers. That got slapped by the HCU. Is this indicative of a problem with reverse engineering as a primary content strategy?

Jason Barnard: I would argue that reverse engineering Google is a futile task. You’ll chase it the rest of your life and you’ll lose.

I would say: build your brand, build a decent marketing strategy, stand where your audience is looking, offer them the right solution in the right format at the right place, invite them down the funnel on LinkedIn, on YouTube, wherever it might be, and package that so the machines can understand it. It’s not just Google – it’s Google, Perplexity, Microsoft, ChatGPT.

When you’re thinking “I’m answering something that’s already been answered,” that’s pointless. What you need to do is add new data, add a new perspective. Take it to a new place. Move the discussion forwards. Move the world forwards. Then the LLM and the Knowledge Graph and the search results will always be your friend.


Information Gain and Moving Knowledge Forward

Jeremy Rivera: That’s the only escape route out of the snake eating its own tail of LLM training material. The only way that changes is if we as humans make distinct efforts toward information gain. There are only so many plumber articles on how to fix a leaky faucet. But a conversation with a 30-year plumbing expert will have information gain: unique insights, different ways of looking at how plumbing connects to other functions.

Jason Barnard: The downside is that every time you give the algorithms new information, they store it and they don’t need it again. So you need to keep moving forwards. But as a human being who believes in perpetual learning throughout life, that suits me fine because I want to keep pushing knowledge forwards. I want to keep adding value, adding additional knowledge to the human race.

This is a very positive thing but it does put a lot of pressure on everybody to keep moving the needle forward.


Practical Implementation: The KaliCube Method

Jeremy Rivera: As we wrap up, if you could give a tangible, specific, completable task an SEO can do to start doing the KaliCube method – what would be those practical hands-on steps?

Jason Barnard: The very first thing to do is look at your About page on your company website or personal website and make sure it states clearly:

  • Who you are
  • What you do
  • Who you serve
  • Why you’re important within your industry

Make sure it links out to all of the corroborative sources – all your social media profiles, all of the articles that talk about you – and make sure they all say the same thing. If possible, they link back, so you end up with an infinite cycle of self-corroboration that the machine understands.

If it doesn’t understand who you are, it can’t attach credibility signals – E-E-A-T, or as I call it, N-E-E-A-T-T – because it can’t attach those signals to something it doesn’t understand. Understandability is the foundation. If you don’t have that, you’re not even in the game.

This is called the entity home. Google calls it the point of reconciliation. You could call it the canonical URL for the entity, but you need to own it. If you don’t own it, they’re going to attach it to LinkedIn or Instagram. I’ve seen pop stars whose entity home is Instagram – you don’t own that. That’s rented space.

Look at your about page as the entity canonical for the entity itself, your company. Then look at the website as the representation of that entity and its digital footprint, and you’re on the right path.
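The episode doesn’t prescribe specific markup, but one common way to package an entity home so machines can parse it is schema.org JSON-LD on the About page. Here’s a minimal, hypothetical sketch (every name and URL is a placeholder), generated in Python for consistency with the other examples:

```python
import json

entity_home = {
    "@context": "https://schema.org",
    "@type": "Person",
    # The entity home: the canonical URL for the entity, which you own.
    "@id": "https://example.com/about/#person",
    "name": "Jane Example",         # who you are
    "jobTitle": "Founder and CEO",  # what you do
    "description": "Helps B2B SaaS founders be understood by search and AI.",  # who you serve
    "worksFor": {"@type": "Organization", "name": "Example Co"},
    # Corroborative sources that repeat the same facts and, ideally, link back.
    "sameAs": [
        "https://www.linkedin.com/in/janeexample",
        "https://x.com/janeexample",
        "https://www.crunchbase.com/person/jane-example",
    ],
}

print(json.dumps(entity_home, indent=2))
```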

Jeremy Rivera: That definitely makes sense. I worked with my friend Michael McDougald, who switched from personal branding to trying to launch an agency, and he was getting nowhere – the agency he wanted to build just wasn’t showing up at all in the SERPs. He didn’t have an About page, so we fixed that. I’m glad to hear an echo of that basic advice: if you want Google to know about you, maybe you should tell it about you.

The same thing happened with another brand I worked with, Save Fry Oil. Searches for the brand just kept surfacing methods for how to save fry oil. They thought they needed links, but they also needed entity mentions and directory listings. They had social media profiles, but no anchor text profile and very few mentions of their brand elsewhere. Sorting that out started with getting the About page added.

I love that advice. It’s very tangible, very tactical, and within the reach of nearly every SEO – though some SEOs might have to get permission to edit the About page. I once spent four hours in a meeting with seven CEOs to add a single word to the homepage H1. But hopefully not everybody is bound up in that much red tape. And companies do love talking about themselves, so if you can frame it as “we want to surface your expertise as much as possible – let’s develop your About page and surface everywhere you’re an authority,” it resonates well with CEOs.

Jason Barnard: 100%. The KaliCube process – you can download the free guides at KaliCube.com/guides – is all based on:

  1. Understandability – Does the machine understand who you are, what you do, who you serve?
  2. Credibility – Does it believe you to be the most credible solution in market?
  3. Deliverability – Does it have the content from you that allows it to deliver you to the subset of its users who are your audience?

Understandability, credibility, deliverability. If you can nail that, you’re going to win the game that’s coming.

Jeremy Rivera: Thank you so much for your time, Jason. This has been an incredibly insightful conversation.

For our listeners who want to dive deeper into these concepts, I highly recommend checking out our other episodes on entity-based SEO strategies, AI optimization techniques, digital authority building, knowledge graph optimization, modern link building approaches, and future-proofing your SEO strategy.

Jason Barnard: Thank you so much Jeremy, that was delightful.



Meet The Hosts

Jeremy Rivera

Keith Bresee

With a combined 2.5 billion SEO clicks and 25+ years in the trenches, Keith Bresee and Jeremy Rivera aren’t your average podcast hosts – they’re seasoned SEO veterans who’ve scaled brands to millions of visitors, driven millions in revenue, and navigated every algorithm shift along the way. On the Unscripted SEO Podcast, they’re peeling back the curtain, sharing battle-tested strategies, real-world experiences, and hard-earned lessons directly from the front lines of SEO.
