All About AI: Mike King on AI, Google Leaks, and How the DOJ Helped Digital Marketers
Hosted by Aaron Burnett with Special Guest Mike King
Mike King, CEO of iPullRank and acclaimed conference speaker, pulls back the curtain for an inside view on how AI, search, and digital marketing are evolving in our third “All About AI” episode. Mike sits down with Aaron to dive into how AI is transforming digital marketing and the search landscape in general, what Google’s antitrust testimony reveals about how search really works, and why leaked Google documents are a treasure trove of information for digital marketers.
About Mike King
Mike King is the CEO of iPullRank, an enterprise SEO agency based in New York. If you’ve been to a digital marketing conference in the past decade, you already know Mike. You know he’s a gifted public speaker who can be electric on stage. You also know he’s insightful, thoughtful, and deeply technical in his knowledge of digital marketing.
Over the past year, Mike has been at the center of three very important developments:
- A detailed review of Google’s testimony in their multiple antitrust trials. This is interesting and important in search marketing because it highlights the difference between what Google has publicly disclosed about reviewing, evaluating, and surfacing content in their search results versus what they’re actually doing.
- A technical analysis of the Google Docs leak – a review of internal Google documentation that surfaced 8,000 signals that Google uses to decide what content to rank and how high to rank it. Mike has conducted an amazing analysis of these Google documents and what they mean for search marketing.
- His role as Chief Marketing Officer for an AI prompt management library, which gives him an insider’s view on the practical application of AI, particularly in digital marketing.
Early Years and Career Path
Aaron: Tell me a little bit about your personal and professional background.
Mike: I was a bored kid who discovered computers and the internet at 12, back in the early nineties. I wanted to make computer games, so I kept going to Barnes & Noble and buying books on coding. I started with QBasic, learning from help files. Then I found out about Pascal, learned that, then C, and it just kept going.
Parallel to that, I got heavily into hip hop. It’s funny – I used the Fresh Prince of Bel Air in my talk yesterday. I very much have a Fresh Prince of Bel Air story in that I’m from Philadelphia. My parents moved me to Connecticut to attend a private school because they thought I would get into too much trouble in Philly. Being in Connecticut, I wanted ways to stay connected to my Philly roots, so I got really into all things hip hop: graffiti, breakdancing, DJing, and rapping.
In high school, I had an internship at Microsoft, building websites for the OLE DB and ODBC teams. I also built a couple of sites for the external Microsoft.com stuff. I was really into hacking back then. The first money I ever earned was because I kept hacking this site that provided web space. The guy came on the Unix chat one day and said, “Hey, could you please just stop hacking me? I’ll pay you to do something.” At this point, I was 14 or 15, and he paid me about $150 to write guestbook tutorials and CGI script tutorials. I used that money to buy a pair of Jason Kidd shoes, and I thought, “Yes, I can make money on the internet!”
I went to Howard University for computer science, but I just wanted to make music instead. I had a couple of internships because my stepmom was a VC at IDG Ventures, and she was getting me internships at companies they were bringing into the fund. But I was having a lot more fun making music, so I decided to pursue that. I did that for a living for eight years, playing all over the States, Canada, Europe, Africa once, and Australia a couple of times. I had this indie rap career going.
Transition to SEO
In 2006, I got into a bicycle accident. I didn’t have health insurance because it was pre-Obamacare, so I needed to get a job to pay my medical bills. The first place to hire me was an SEO agency because of my development background. They said, “Yeah, you can figure this out.” But it turned out they had no idea what they were doing – it was very much an SEO chop shop. They had a room of 30 people robo-calling people and selling, and a room of five people doing the work. We had a hundred accounts.
I work really hard, so I thought, “Okay, how does this work? You don’t know how this works? Let me figure it out.” No one was teaching me things that worked. They were saying things like, “Just bold some text on these pages,” and for link building, “Just put in your query and append ‘add URL’ to the end of it so you can build links on guestbook sites.”
I knew this wasn’t what we were supposed to be doing. When the SEO production manager wasn’t answering the phone one day, I just answered it and started handling issues. I became the person staying until 10 at night, just getting things done. This wasn’t consulting – we had the keys to the websites and just optimized them. Eventually, the owner realized the production manager wasn’t doing anything, fired him, and made me the production manager.
Growth and Evolution
I only planned to keep the job for three months because I was about to go on my first European tour. When I went to put in my notice, the production manager said, “Wait, what do you mean you’re leaving? You’re the only one doing things here.” He suggested I could work from home while on tour. I had this giant Dell laptop that probably needed a hand truck to carry it, but I brought it with me on tours, doing work on the train and uploading it once I got to WiFi, since WiFi wasn’t everywhere at that point.
I eventually ended up at Razorfish in Philly, and I really enjoyed the work because we were working with big brands. I was heavy into Polo Ralph Lauren – that was one of my main clients, and we got them ranking number one for the keyword “sweaters.” This was great; it wasn’t like that low-level stuff I was doing before. I was working on brands that I buy and making significant impacts.
Professional Development and Community Involvement
At Razorfish, I was just a contractor with 24 hours a week, but I was still overworking because I really wanted a full-time job there. I was building software for them – this was around 2010, 2011, when there wasn’t a lot of SEO software available. I didn’t even know about the SEO community at that point. I was just reinventing wheels; I had come up with article spinning myself, without knowing that other people were doing it.
When I got to Razorfish, I discovered the SEO community, and Screaming Frog SEO Spider had just come out. I thought, “Oh, other people are making software. This is great,” because all we had before was WebPosition Gold and Xenu Link Sleuth, which were both terrible.
With the 24-hour schedule, I worked Tuesday through Thursday, which allowed me to still do shows. One of my good friends, Dan, who I was touring with – we were on a train in Sweden one day when he said, “Mike, I know you’re doing this SEO thing. My cousin owns an SEO software company.” I asked who his cousin was, and he said “Rand Fishkin.” My mind was blown that my worlds were colliding this way.
When I was at Razorfish, I emailed Rand saying, “Hey, I know your cousin, I’m in this space, I’ve read your book.” He was super cool from day one. I asked him what he’d recommend for someone looking to get involved in the space, like blogging, and he said, “Yeah, don’t blog – it’s already too crowded.” I thought, “Cool, nice to meet you.”
But then they had a Moz meetup in New York where Rand and Geraldine were present. I met them in person, and a few months later, I started writing blog posts for their UGC section. My first post was really popular, so I kept doing it. That summer, I knew I had to go to MozCon because I was connecting with all these people in the community.
Career Progression
I was at Publicis Modem at this point – it doesn’t exist anymore. Since Razorfish wouldn’t offer me the full-time job, I thought, “I’ll go to another Publicis agency, get a job offer, bring it back here, and then they’ll have to match it.” But they still wouldn’t match the offer, so I moved to New York.
I was blogging and asked my manager if they’d send me to MozCon. They gave me the runaround for months before saying no. I decided to spend the money myself and take vacation days. It was the best investment I made because I connected with all these people in person after having a popular series of blog posts. I reconnected with Jamie Steven, who I had worked with during my internship at Microsoft.
That quickly turned into them asking if I wanted to be more involved with Moz. They had their associates program. But also, that was the first time I saw Will Reynolds speak. While he was on stage, I had this moment thinking, “Oh wow, they let black people do this. Cool.” So I immediately pitched for SMX East, and they accepted me. I did a great job there, and that snowballed into speaking everywhere and caused a bidding war for people looking to hire me for SEO roles in New York.
Moz even tried to hire me, but I had just moved to New York and wasn’t moving to Seattle. I met my future wife at that SMX East where I spoke. MozCon 2011 and the events directly after it set me up for what I am now. I ended up working at an agency called iAcquire where I became the director of marketing, and they were putting me on the road even more.
Founding iPullRank
After leaving iAcquire and going to another agency, I thought, “Why am I doing this? I should just start my own shop.” At that point, my thought leadership was so solidified that it was easy to turn that into a business. Here we are 10 years later.
Aaron: The name of your agency is iPullRank?
Mike: Yes, we’re a digital marketing agency based in New York City. We do SEO, content strategy, and generative AI. We work in pretty much every vertical, but we have a concentration in e-commerce, financial services, and media and publishing – basically all things that are very much New York-based.
We’ve had a history of doing innovative work for clients, especially for SEO teams that are highly technical and don’t know what else to do. They come to us because we always have those next-level ideas. We’ve been doing work that I’m very proud of.
It’s weird to think of the impact you have on the internet because it’s easy to minimize it like, “Oh yeah, you guys are just an SEO shop.” But the way we’ve impacted content for such big organizations and the downstream impact of that on American society – we’ve had a significant impact. I really appreciate who people have become working in our company.
Building Team Success
There’s a guy, Gaetano DiNardi – he’s a consultant now, but he started at iPullRank. I’d be shocked if you haven’t seen him on LinkedIn; he’s one of those personalities. When he came to us, he had no experience – he was an R&B singer with random jobs. When he showed up, and I always tell this story while he laughs at it, he was a very typical New York Italian guy who came in wearing a pinstripe shirt, looking like he stepped off the set of The Sopranos.
But he was just so earnest in what he was explaining to us during his presentation and interview process. I knew that if you’re someone who can make it in music, even if it’s a small amount of success, that’s incredibly difficult when there’s no money involved. I thought, if I put you in front of a big brand where there are resources, and if I train you up in how to do these things, you’re going to be really effective because you’re going to try harder than someone who just came from a marketing environment.
That was exactly what happened. This kid was sleeping in the office – and I don’t encourage people to sleep in the office – but he was doing that because he knew that’s what he needed to do to ramp up to where he needed to be. Now he’s literally someone that B2B SaaS companies looking for search or growth marketing expertise put at the top of their list.
Impact and Legacy
Aaron: You mentioned some of the massive impacts that you’ve been able to have on the Internet and on people’s lives. Can you give a couple of examples?
Mike: One of the biggest things we did was for JPMorgan Chase. It wasn’t just about helping them sell more mortgages; we also supported a program they call Advancing Black Pathways, which was all about diversity and inclusion. We helped them really understand those audiences and how they could approach different audiences through their core marketing.
For example, how do we reach more Hispanic people? How do we reach more Black people who are trying to be first-time home buyers? Yes, we helped them grow generally, but we also helped them grow in those segments that banks have historically not been good at reaching. That sort of work is really important to me – that we can do more than just make a business money, we can come in and support initiatives that matter.
Google’s Antitrust Cases and Internal Documentation
Aaron: I’d love to talk with you about a couple of things that have happened over the last 12 months or so. Google’s testimony, DOJ testimony – take me back to what happened with Google, their testimony, why it was important, and what you did about it.
Mike: Google has been going through a series of antitrust cases – they’re making a career of it. But this main one that’s been talked about a lot has to do with search distribution, really about the deals they do with Apple and Firefox where Google is the default search engine. Because the reality is that most people don’t even know how to change their search engine, so they want to be the primary choice so they can maintain the search volume and sell ads.
The complaint is that this is monopolistic behavior because you’re squeezing out Bing, DuckDuckGo, and other search engines. But as part of that, there’s been a very deep dive into how Google works. Various people in leadership at Google gave testimony, but most importantly, a gentleman named Pandu Nayak, who used to be the VP of search, gave testimony about how Google uses user signals to reinforce and inform what ranks.
He talks about two different systems: one called NavBoost and another called Glue. NavBoost is for the core “10 blue links” or the core organic stuff, and Glue is for all the other SERP features around it. The simplest way to understand it is that whatever gets clicked will be the thing that maintains the higher positions. If something is number one but users skip it because it isn’t the right thing, and they click number three instead, that becomes number one.
This is really important because there’s always been a question about whether Google uses those signals. But in information retrieval, these are all best practices – measuring session success as a function of what users click on, how long they stay there, whether they come back to the SERPs. These are all things that every search engine uses, but Google has denied it repeatedly.
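To make the concept concrete, here is a toy sketch of the kind of click-informed re-ranking described in that testimony: results that users consistently skip drift down, and results that earn “good” clicks drift up. This is purely illustrative; the blend weight and field names are assumptions, not anything Google has disclosed.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float   # baseline relevance score from the ranker
    impressions: int   # times shown on the SERP
    good_clicks: int   # clicks where the searcher stayed (a toy proxy for a "long click")

def rerank_with_click_signals(results: list[Result], click_weight: float = 0.5) -> list[Result]:
    """Blend a baseline relevance score with an observed good-click rate, then re-sort.
    A toy illustration of the idea, not Google's algorithm."""
    def score(r: Result) -> float:
        click_rate = r.good_clicks / r.impressions if r.impressions else 0.0
        return (1 - click_weight) * r.relevance + click_weight * click_rate
    return sorted(results, key=score, reverse=True)

serp = [
    Result("https://example.com/a", relevance=0.92, impressions=1000, good_clicks=80),
    Result("https://example.com/b", relevance=0.85, impressions=1000, good_clicks=450),
]
print([r.url for r in rerank_with_click_signals(serp)])  # /b overtakes /a on click behavior
```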
Google’s Historical Stance and the Leaked Documentation
Even when people like Rand Fishkin got on stages and said, “Hey, everybody take out your phone, let’s all search for the same thing, and we’ll watch in real time as it becomes the number one result,” Google would try to discredit him. They’d say, “Oh, he’s making up metrics, this doesn’t work,” sometimes in very personal ways – calling him out and saying he doesn’t know what he’s talking about.
There are pockets of the SEO community that just don’t like Rand for whatever reason. For me, he’s always been someone whose heart has been in the right place, even if he’s been wrong about something. He’s not someone you should hate on just for being wrong. He’s always trying to uncover the right information.
Coming into this year, I gave a presentation called “Everything Google Lied to Us About.” It centered on a lot of the things revealed through that testimony – things Google now frames as, “Oh yeah, this is a new thing in our guidelines,” when it was actually an old thing. They’re effectively saying, now that this has come out, that the guidelines are aspirational and that they’re trying to make their technology live up to them. But there are so many holes in that.
Then that led into what happened with the leaked documentation. Basically, one of Google’s internal systems accidentally pushed their documentation live on GitHub. There are various things you can plug into GitHub to automate, like publishing documentation – there’s a system called HexDocs that does that. And that documentation now lives publicly forever because of it.
A couple of people found it, and a gentleman named Erfan Azimi handed it off to Rand. Rand reached out to me saying, “Hey, I think this is real. You should check it out.” And it was real – it was a whole lot of telemetry, for lack of a better term, that Google is looking for in pages when they’re thinking about how to store them, how to rank them, score them, things like that.
Aaron: And to clarify or add context, I think you’ve described this well – this is the list of ingredients, but not the recipe.
Mike: Exactly. Super detailed list of ingredients – 14,000 attributes, 8,000 of which pertain to search. We now have a very internal view of what Google is doing, and we get even more information around all this click activity. They’ve got a series of measures like bad clicks, good clicks, long clicks, last longest click – all these sorts of things that are measures of user behavior when they’ve seen this page in search.
Implications for SEO and UX
What this ultimately tells us is that SEO and UX need to work closer together because we’ve always looked at those disciplines as completely separate. Maybe you bring a UX person in after you’ve done the SEO, but really they need to work together because your ultimate goal is keeping people on your page as long as possible – that’s the best signal you can send back to Google.
It changes the complexion of search and how we do it. At the very least, we now need clickstream data to understand how often people are bouncing from our competitors’ pages and what aspects of those pages we can learn from to improve our user experience to keep people on the pages.
Even with clickstream data, that’s typically a separate set of tools – it’s like Similarweb versus Screaming Frog or something like that. SEMrush has a tool called .Trends where they’re collecting data from Datos to provide that sort of stuff, but it still lives separate from the core SEO functionality.
Technical Insights from the Leak
Beyond the clicks, there’s just a lot of information about what Google is doing. For example, one of the really interesting findings was about something called “source type” related to links. The index is stratified into four different buckets: high quality, medium quality, low quality, and then fresh docs, which is like news.
The high-quality stuff is stored in memory because they know they need to access it quickly. Fresh docs are probably stored in memory as well. The next layer down is stored on solid-state drives – not as fast as memory, but still fast. And then the low-quality stuff is stored in what’s called TeraGoogle, which is like the graveyard. It’s on standard HDDs because they know they don’t have to access that regularly.
Understanding Google’s Index Structure
There’s a sliding scale of value applied to links based on where the linking page lives in the index. That’s not something we could ever really know about before. I knew the index was stratified because there are patents that talk about this multi-tier index system, but now we know we can use a proxy: how well does this page rank, and how much traffic does it get? If it ranks well for many things and gets a lot of traffic, it’s in the higher end of the index. But there are no metrics for that in SEO tools; you’d have to put all that together yourself, which is what I’m doing.
I’m working on a project where we are trying to replicate as many of those attributes that Google is storing as possible so we can have a more comprehensive view of how they’re looking at the content. That way we can do things in alignment with what they’re measuring and we can better optimize.
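As a rough sketch of the proxy Mike describes, you could bucket a page by how broadly it ranks and how much traffic it earns. The thresholds below are arbitrary placeholders for illustration; nothing in the leak specifies them.

```python
def estimate_index_tier(ranking_keywords: int, monthly_organic_visits: int) -> str:
    """Guess which tier of a stratified index a page might sit in, using rank
    breadth and traffic as proxies. Thresholds are invented for illustration."""
    if ranking_keywords > 500 and monthly_organic_visits > 10_000:
        return "high quality (likely served from memory)"
    if ranking_keywords > 50 and monthly_organic_visits > 500:
        return "medium quality (likely solid-state storage)"
    return "low quality (likely standard disk)"

print(estimate_index_tier(ranking_keywords=1200, monthly_organic_visits=45_000))
```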
Aaron: So you reviewed all this testimony, you reviewed the Google Docs leak. What have you changed as an agency as a result?
Mike: It’s more that we’ve doubled down on things because I already had a series of hypotheses from all my research, but we have a better understanding of how Google is doing it. As an example, I’ve talked a lot about vector embeddings in the last year because they underpin everything in generative AI, everything in search – that’s how relevance is scored. Google vectorizes your pages, they vectorize the queries, and then they perform distance measures, particularly cosine similarity. So whatever page is closest in multi-dimensional space to the query is the most relevant page.
We were already doing things like that, but we didn’t really have an understanding that Google creates vector embeddings on the site level, and then they compare each page to the vector embeddings for the site. Now that we know that, we have that scoring in our content audits and our keyword research and all that sort of stuff. So we can say through the lens of how Google is looking at it, this page scores this way.
It’s really us zeroing in better on what Google is doing and then using that to enhance what we’re putting in front of our clients because there are levels to it. I can’t say “Oh yeah, cosine similarity” to a client, but I can say this is scoring a 50 versus the 100 that you’re comparing against for what’s ranking number one. And so that simplification of these hardcore engineering concepts is what’s allowed us to do things that clients haven’t seen elsewhere.
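A minimal sketch of this kind of scoring, using an off-the-shelf embedding model to vectorize a query, a page, and a handful of site pages, then reporting cosine similarity on a 0 to 100 scale. The model choice and the idea of averaging page vectors into a site-level vector are assumptions for illustration, not Google’s internals.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works for the sketch

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "best winter sweaters for men"
page_text = "Our guide to men's winter sweaters covers wool, cashmere, and fit."
site_pages = [
    "Men's outerwear: parkas, coats, and jackets.",
    "How to care for wool and cashmere garments.",
    "Holiday gift guide: knitwear and accessories.",
]

q_vec = model.encode(query)
p_vec = model.encode(page_text)
site_vec = model.encode(site_pages).mean(axis=0)  # site vector as the mean of its page vectors

print(f"query-to-page relevance: {cosine(q_vec, p_vec) * 100:.0f}/100")
print(f"page-to-site focus:      {cosine(p_vec, site_vec) * 100:.0f}/100")
```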
Client Communication and Implementation
It also allows us to get a lot of buy-in because clients see that we really understand this. It’s not just “Oh yeah, it depends” and “It’s 200 signals, who knows?” We’re saying, “No, it’s many signals, but these are the most important ones, and here’s how you score side by side with your competitors.”
They’re a lot more likely to take action because when we’re doing projections and all that, it’s all centered around this data that no one else has. And we’re able to call our shots a lot better than what they’ve seen before.
Aaron: The distillation to a score is also really powerful.
Mike: Yeah, and there’s a lot of that happening in the SEO space, but I think that chasing that simplicity has abstracted away a lot of the things we need to be doing to get better. Here’s what I mean: there are a lot of tools out there that are like SEO content editors where you put in a keyword, you put in your content, and it gives you a score. It’ll say, “Oh, your page is a 70, use all these keywords and you can get to a 90 or 100.”
Those tools are not doing what Google is doing. Google isn’t just going to look at the one target keyword. They’re going to look at keywords across a graph, and then they’re going to look at the pages across that graph, extract features, and compare your page against what’s in the graph. Whereas these tools will just take the keyword, perform the search, grab the pages in the top 20, do TF-IDF on them, and then compare your page against that. So that approach will get you closer, but it won’t get you all the way to what Google is actually looking at.
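For contrast, here is roughly the lexical-only approach Mike says those content editors take: build a TF-IDF model over the top-ranking pages for one keyword and compare your page against it. A minimal scikit-learn sketch with placeholder text.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

top_ranking_pages = [
    "Winter sweaters buying guide: wool, cashmere, merino, fit, and care.",
    "The 20 best men's sweaters this winter, tested and reviewed.",
    "How to choose a sweater: materials, weight, and layering.",
]
your_page = "Shop our collection of men's sweaters in wool and cotton."

vectorizer = TfidfVectorizer(stop_words="english")
corpus_matrix = vectorizer.fit_transform(top_ranking_pages)   # the "top results" corpus
your_vector = vectorizer.transform([your_page])

# Score your page against the centroid of the top-ranking pages
centroid = np.asarray(corpus_matrix.mean(axis=0))
score = cosine_similarity(your_vector, centroid)[0][0]
print(f"lexical similarity to top results: {score * 100:.0f}/100")
```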
Evolution of Search Technology
Being able to understand that, communicate that to clients, and tell them where the gaps are has been really powerful. Here’s another example: Google moved from what’s called lexical retrieval to semantic retrieval to hybrid retrieval, which is a combination of the two. When I talk about TF-IDF or even BM25 – these different ways of scoring pages versus keywords – that’s lexical retrieval. But when you’re doing things where you’re comparing the vector embeddings, you’re doing semantic retrieval. Like I said, Google is combining the two approaches to generate rankings, but almost all SEO tools still operate on the lexical model. Again, doing that will get you close, but it’s not going to get you to 100 percent optimum. And I’m trying to get to 100 percent optimum.
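Here is a hedged sketch of what hybrid retrieval means in practice: score documents lexically with BM25 and semantically with embedding similarity, then blend the two. The libraries (rank_bm25, sentence-transformers) and the 50/50 weighting are my assumptions for illustration, not anything Google has published.

```python
import numpy as np
from rank_bm25 import BM25Okapi                      # pip install rank-bm25
from sentence_transformers import SentenceTransformer

docs = [
    "Guide to men's wool sweaters and how to layer them.",
    "Cashmere care: washing, storing, and de-pilling.",
    "Our favorite winter jackets and parkas this year.",
]
query = "how to wash a cashmere sweater"

# Lexical scores (BM25)
bm25 = BM25Okapi([d.lower().split() for d in docs])
lexical = np.array(bm25.get_scores(query.lower().split()))

# Semantic scores (cosine similarity of normalized embeddings)
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)
q_vec = model.encode(query, normalize_embeddings=True)
semantic = doc_vecs @ q_vec

# Normalize each signal to 0..1 and blend; the weights are arbitrary
def norm(x: np.ndarray) -> np.ndarray:
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

hybrid = 0.5 * norm(lexical) + 0.5 * norm(semantic)
print(docs[int(hybrid.argmax())])  # the cashmere-care doc should win
```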
AI and Digital Marketing
Aaron: You’ve been close up to a lot of interesting developments in the AI community. You’ve seen a lot of custom GPTs. You’ve seen a marketplace for those GPTs. You have a deep understanding of the technology. What’s your assessment of AI and its implications for digital marketing?
Mike: We’re in a space where there’s just so much going on, and the definition of AI is getting smaller when it should be getting bigger. When people are talking about AI now, they’re generally talking about generative AI, but AI is a lot of things like computer vision, predictive analytics, all these sorts of things, which are super valuable in a variety of ways.
For generative AI specifically, I think there’s a lot of opportunity to eliminate busy work, or at least make it easier to do, and a lot of opportunity for us to be more creative and also more strategic. But here’s the thing: generative AI has been good for a year, and you have the ability to make movies, music, really interesting imagery. Despite that, our feeds are still the same boring stuff that they’ve been.
I think the analog here is to think about YouTube versus Instagram. We’ve had the ability to do streaming video in the way that people are doing it on TikTok for 15 years, but it’s only been in the last five to eight years that you have people making a living off of making skits and really interesting things using that same streaming technology.
What I believe is that things are not going to change substantially as far as the content output being better until a generation grows up with generative AI where they’re like, “Okay, this is just a tool I use every day – I’ll make a movie today just because I want to make a movie and that’s what I put on Twitter.”
Current State of AI Implementation
In the meantime, it’s really people just replicating their busy work through these tools – whether it’s doing keyword research or writing a blog post. They are going to look for ways to replace themselves or scale themselves for these tasks that no one should really have to be doing themselves anyway at this point.
Perhaps blog posts aren’t a good example because there’s still a lot of great knowledge, information, and perspective you can get by creating your own blog posts. But I suspect that in the near future, it’s really just going to be about optimizing for scaling yourself.
A lot of that is what we’re doing. We’re plugging it into various aspects of our deliverables because we were already automating things to some degree, using things like Apps Script or whatever and plugging in the APIs. But now with generative AI – as an example, we do a lot of persona modeling, and then we align that with keywords and the need states of keywords. That has been a very manual process historically.
Now it’s cool – we’ve got the persona, plug it in, and then you have basically a classifier through a generative AI platform where it determines which of these personas aligns with this keyword and what need state that persona is in when they’re searching for that keyword. So that goes from taking 10 hours to two minutes.
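A minimal sketch of that kind of classifier, assuming the OpenAI Python client. The model name, the personas, and the need states are placeholders; in practice the persona definitions would come from the persona-modeling work Mike describes.

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = {
    "First-Time Buyer Fiona": "Researching mortgages for the first time; anxious about jargon and process.",
    "Upgrader Omar": "Already owns a home; comparing rates and refinancing options.",
}
NEED_STATES = ["learn", "compare", "buy"]

def classify_keyword(keyword: str) -> dict:
    """Ask the model which persona a keyword maps to and what need state that
    persona is in when searching it. A sketch, not a production pipeline."""
    prompt = (
        "Given these personas:\n"
        + "\n".join(f"- {name}: {desc}" for name, desc in PERSONAS.items())
        + f"\n\nClassify the keyword '{keyword}' by persona and need state "
        + f"({', '.join(NEED_STATES)}). Reply as JSON with keys 'persona' and 'need_state'."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

print(classify_keyword("how much house can I afford"))
```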
Practical Applications of AI
There’s a lot of value in that sort of automation in that we can get the busy work out of the way and get back to more strategic discussions. A lot of that automation work is what we’ve been focused on. For example, with reporting, if you’re using Looker Data Studio or Looker, whatever it’s called, you can use the API to grab the right template and then have it do all the data connections. It can also compare the data points and give you a first draft of that report. Then you go in, look at it strategically, and determine, “Okay, these are the things that are most important here and what the next steps need to be.”
So again, we’re cutting down that time of putting together a report from probably four or five hours to 15-20 minutes. There’s a lot of value in us doing that because it gives us the space and time to think about how we can improve this, where the opportunities are that we wouldn’t have had the time to look for before, and how we capitalize on them.
I think that’s what everyone is attempting, but people are still defaulting to two or three tools – ChatGPT, Midjourney, Copilot, things like that. But we’re in a space where we now have open source versions of this stuff that are also state of the art. Like Llama 3.2 is fantastic, and I can run it on my laptop. I don’t have an RTX 4090 on my machine – I have some GPU from three or four years ago, and it runs just fine.
So I’m excited for people to have this power to really play with and not be so concerned with “Am I using too many tokens?” or that sort of thing. Because I think the creativity is just going to be exponential once everyone has this stuff where they can play with it to whatever level they want.
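For anyone who wants to try the local-model route Mike mentions, here is a minimal sketch that assumes Llama 3.2 is being served locally with Ollama on its default port; the endpoint and model tag reflect that assumption about your setup.

```python
import requests

def ask_local_llama(prompt: str) -> str:
    """Send one prompt to a locally running Ollama server and return the text.
    Assumes the llama3.2 model has already been pulled."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local_llama("Explain hybrid retrieval in two sentences."))
```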
Framework for AI Implementation
Aaron: What’s interesting about your experience is that you’ve seen hundreds, maybe thousands of different GPT use cases in particular. Based on that experience, based on running iPullRank, do you have a mental framework that you use to consider when and how AI should be employed and when not?
Mike: I think it’s one of those power tools that people are just overusing in a lot of ways. I’ll see people do things with it where it’s like – that could have just been a regex statement. There are just a lot of situations where it’s being overused. And I think that it’s difficult because people don’t know what they don’t know. They’re just like, “I have a problem, I want to solve it, I don’t care if running this prompt is the equivalent of taking three flights or whatever – I just want the output that I want.”
For us, we’re primarily using retrieval augmented generation (RAG) for a lot of what we’re doing. We’re building RAG pipelines, and for any viewer that doesn’t know what that is, it’s basically when you combine a search engine with a large language model. What we do is we build these custom indexes of content and data for our clients and use that for generating whatever it is that we’re trying to generate. So there’s a lower likelihood of hallucination and a higher likelihood of something that’s actually usable.
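A minimal RAG sketch along those lines: build a small custom index of client content, retrieve the passages closest to a question, and hand them to a language model as grounded context. The embedding model, the retrieval depth, and the example documents are assumptions for illustration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# 1. Build a small custom index of client content (the "search engine" half of RAG)
client_docs = [
    "Brand voice: plain language, no jargon, always cite the source of a statistic.",
    "Product page template requires an FAQ block and a comparison table.",
    "Compliance: never promise specific investment returns in copy.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(client_docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages closest to the question in embedding space."""
    q_vec = model.encode(question, normalize_embeddings=True)
    top = np.argsort(doc_vecs @ q_vec)[::-1][:k]
    return [client_docs[i] for i in top]

# 2. Ground the generation step with the retrieved passages (the "LLM" half of RAG)
question = "Can this landing page promise a 7% return?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would go to whichever model you use (local Llama, ChatGPT, etc.)
```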
Practical Applications in Business
Almost all of our use cases start from there. For example, when we create content for a client, we’ll create a custom GPT (custom GPTs are basically RAG pipelines), upload the brand style guide, and then say, “Okay, we wrote this piece of content. How does this align or misalign with the style guide?” And it’ll give us recommendations.
We can do a lot of reviews like that, so we cut down on the back and forth between us and the client. We can also do that with legal review – I’m not saying replace lawyers with it, but it’s like a first pass. So if there’s anything glaring, we can pick that up pretty easily.
Aaron: I find it’s pretty effective in a legal context for omission – what am I missing? What risks am I not considering in this?
AI Implementation Strategy
Mike: That sort of stuff is just creating feedback loops that are really effective for what we’re trying to do. As far as a framework of when to use it and when not to use it, I leave my team to their own devices to some degree. I’m like, “How do you want to solve this?” And then if they bring me something and I think they didn’t need to use AI, that’s an education opportunity.
Dan Petrovic, one of the guys who was involved in the leak stuff, has been training his own small models. So I’m wanting us to go more in that direction where we have these small expert models for different things. But there’s a bit of a learning curve to training models, so I don’t know if I’m in a space where I can get my team to do it just yet. I’m trying to get to a space where I’ve mastered it enough that I can say, “Okay, here’s how we simplify this process.”
I think a lot of it is just trial and error. Everyone is doing trial and error, all the way up to the computer scientists that are building these things. They’re like, “What works, what doesn’t?” And we’re just taking a lot of notes as a team. We have a whole generative AI channel where we share what we did and compare learnings.
Generative AI is still a content strategy problem. People just look at it as a technical problem. But really, you want consistency of output. Part of that is having shared prompts, part of it is making sure everyone’s working from the same platform, set of models, set of tools. So we’ve done a lot of documenting what’s working for us and what’s not. I think ultimately we will land on a stronger framework for that, but right now it’s really just trial and error.
Model Selection and Performance
Aaron: So lots of different technical contexts and environments, lots of different models.
Mike: Yeah, and there are certain things that work better for certain tasks. Like I said, when we’re doing most of our RAG stuff, we’re using Llama 3.2 rather than ChatGPT because you’re dealing with a lot of tokens, and that’s going to be pretty expensive when you’re doing these RAG pipelines.
I know Gemini has a much bigger context window – I think they’re going towards 2 million tokens. They definitely have a million right now, and they’ve also launched prompt caching, so you end up saving money for these big contexts. But I’ve just not been impressed by Gemini’s outputs. It’s so funny to me because Google invented this technology. Why isn’t theirs the best?
Aaron: I have that same question. I look at Gemini and think, surely this is not the best of what you have to offer. You’re using something else internally that’s better.
Mike: I think they are. Yeah, they must be.
Future of Agencies
Aaron: So a few months ago, Sam Altman, in I’m sure a deliberately provocative statement, claimed that in three or five years, AI would be doing 95 percent of what an agency does. What’s your sense of that?
Mike: I don’t think that’s wrong. Because when I think about it, most of what we do in SEO is building a series of ETL pipelines. ETL stands for extract, transform, load. So think of it like this: when you do rank tracking, you’re extracting the data from Google, then you take it out, put it in Excel and transform it into something else, and it might be a series of charts, which you then load into a PowerPoint.
There’s a series of heuristics for how you’re transforming the data, but almost all of that can be done programmatically at this point. So yes, I agree that if we think of most of what we do as an ETL pipeline, it can be replaced by generative AI right now.
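As a concrete sketch of that rank-tracking ETL, here is one way the extract, transform, and load steps could look with pandas and python-pptx. The CSV layout and file names are invented for illustration.

```python
import pandas as pd
from pptx import Presentation                      # pip install python-pptx
from pptx.chart.data import CategoryChartData
from pptx.enum.chart import XL_CHART_TYPE
from pptx.util import Inches

# Extract: rankings exported from your rank tracker (invented layout: date, keyword, position)
df = pd.read_csv("rankings.csv")

# Transform: average position per date, the kind of pivot you would do in Excel
trend = df.groupby("date")["position"].mean().round(1)

# Load: drop the trend into a PowerPoint slide as a line chart
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[5])
chart_data = CategoryChartData()
chart_data.categories = list(trend.index)
chart_data.add_series("Avg. position", [float(v) for v in trend.values])
slide.shapes.add_chart(XL_CHART_TYPE.LINE, Inches(1), Inches(1.5), Inches(8), Inches(5), chart_data)
prs.save("rank_report.pptx")
```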
The special sauce of your creativity, your strategy, or whatever – that’s what we’re all banking on, but creativity is just synthesis. We talk about it like we’re thinking of things out of thin air, and we have these fantastic ideas. No, you have a series of experiences that you synthesize into this idea. What do computers do really well? Synthesize.
The Future of AI and Creativity
Aaron: I agree. Pretty much everything’s derivative. LLMs are derivative engines.
Mike: Absolutely. And if we get to a point where those have encoded enough of human knowledge – we’re not there yet, because we’re just training it on the internet and books and things like that. There’s so much more that we have in our heads. If there is some way for them to capture more of that collective knowledge and then be able to synthesize it, then yeah, it can replace what we do.
Aaron: But then how do you think about the future? How do you evolve your agency and agency models to deliver value in the future? What matters now? What muscles should we be building?
Mike: I think the creativity thing, the synthesis for creativity thing is a long way off. We keep talking about reasoning and all that these models are doing. That’s the wrong word for it – it’s not reasoning. It’s not even guessing. It’s generating a series of outputs, comparing that to other outputs.
Aaron: It’s predictive outputs.
Mike: Exactly. And so I don’t think we are going to get that level of synthesis in the next five years. I think we’re going to get some other things that are interesting. I think we are likely to see another architecture like the transformer that shifts things in the near future, but I don’t know that what we think of as creativity is going to pop out of these machines in the next five years.
So it’s really about how do we clear the way by having it do all the menial tasks to be more creative and more strategic? Those are the things that we can add here. And I think to some degree we will also evolve in a way where we’re doing something that we’re not doing now that is more valuable. Once all the menial tasks are taken away, like something I’m not even thinking about – we may find that there’s more value in this one thing that people just weren’t doing because it took too long. But now we have all this time on our hands, let’s do that thing. I think it’s just going to change as the technology changes.
Aaron: If you can automate everything that can be defined through process documentation, all the menial stuff, then what’s left is deep expertise, deep understanding of strategy and orchestration. So you can automate things within given disciplines and channels, but that doesn’t necessarily replace strategic orchestration to understand when to invoke one in complement with another.
Mike: I agree with that. And the other thing is that I don’t think it’s ever been the shortage of ideas that have stopped things.
Aaron: That’s a good point.
Mike: You know what I mean? It’s been navigating the organization to make those ideas actually happen. And if we get to a space where so much of deployment is automated and we can say, “Here’s the idea, let’s get it going” – that speed to deploy could change everything.
Societal Implications
Think of the inefficiencies that we have collectively in aggregate as the human species. Think about politics and how inefficient that is. Think about just anything where we could say, “This should take five minutes,” but instead it took us five months. The ability to get more things out faster will have such a ripple effect across everything that it’s not so much about “What is the creativity?” It’s more about how do we keep up with what we’re able to do now?
And I think to some degree that’s always been true. We’ve had technology that could do things, but we’ve had people who have stood in the way of it. But with generative AI, we can have things be so plugged in where it’s “I have an idea” and then five minutes later, it’s done.
There are a lot of things that have become much more feasible that were just science fiction before. Think about the Star Trek holodeck – we could probably have that now with generative AI, where it’s like, “Cool, generate me the scene in this room as I’ve described it.” That was science fiction. Now you just need a bunch of panels and 3D projectors, and you could do all of this.
I think that once we have these different modalities that become real, like we’re trying to do with VR and AR and all that sort of stuff – and I don’t feel like any of that has really ever stuck – but we’re quickly approaching a space where we can do our wildest dreams. And I think what happens then is we have a different set of things that we want to do as people. If you’re in a space where your computers do everything for you, we get to a space of – I don’t know, universal basic income because nobody has to work. And then we’re just playing with technology all day, creating the worlds that we want.
It’s like a shift in what people want to get up and do every day. And I think that’s the danger of this technology because we’re not going to have to do so many things that have been core to the human experience.
Aaron: I would almost guarantee that there are things that people would very quickly offload as trivial, menial stuff – “I don’t want to do that” – that turn out to be, on a soulful level, actually really important to your well-being.
Mike: Exactly. There’s a – I’m hesitant to go heavy into sci-fi stuff, but there was a movie, I think it’s called “Surrogates.” It starred Bruce Willis. And basically everyone was by themselves in their home, in these machines, and they lived the entirety of their life online. No one went outside and things like that.
I feel like we’re – everyone is always on their phone, right? We’re all addicted to the internet. And I see a world where we slide into that where everyone is just having a very individual experience, but doing whatever they wanted at any given moment. I think that sounds so bad for society, but we could so easily slide into that.
Aaron: Yeah, this is the first circumstance in my professional life where I feel like I struggle, and I think that most other people do too, to fully understand the implications.
Mike: Yeah, I feel like my thinking isn’t fast enough to keep up with how quickly the technology is changing and what that means, not next week, but a year from now, two years from now. When ChatGPT first came out, there was such an explosion – it’s still happening right now, but like that initial explosion, it was like every day there was a new paper, a new open source library, and it was so anxiety-inducing because you’re like, “I’m behind, but I read everything from yesterday.”
Keeping Up with AI Advancement
Even now there’s still every week, if you go to arXiv.org, there’s so many new papers. I’m at the point where it’s like – I don’t go on TikTok, I just wait till the best of TikTok ends up on Instagram. So now I’m like, I don’t go to arXiv directly. I wait for one of the papers to be in my feed on Twitter a number of times, and I’m like, “Okay, that’s the one I need to read.” Because otherwise it’s whiplash. It’s just so much stuff coming out.
There’s part of me that’s like – I want to stop doing SEO and go back and get my PhD in AI, really contribute to this world. But I also believe that there needs to be marketers like you and me who can understand this stuff and make it real and accessible to the brands that we’re working with, because it’s just very difficult for them to do it right.
Like in a lot of these organizations, especially these big ones, they’ll have a whole AI team. And I’m like, “What are y’all doing?” Because they’re just so wrapped up in the red tape of the organization that they can’t really innovate at the pace that an agency can. And so I think there’s a lot of value in us continuing to educate ourselves, play with things, and help these companies ramp up and find all these opportunities.
Aaron: Yeah, I agree. What do you think the digital marketing agency of the future looks like?
Mike: It’s smaller. It’s very technology enabled. I think you need data scientists on staff if you don’t already have them; we’ve historically had them. I think data engineering needs to be a huge part of it.
And the people who are doing the work, whether they’re strategists or whatever people are calling them – they need to have stronger technical skill sets. I think in the agency world, people have gotten away with just being deep in their one discipline, like UX, SEO, whatever. You really have to be that T-shaped marketer or M-shaped marketer, where you have a deep understanding of how generative AI works, you have a deep understanding of analytics, you’ve got a deep understanding, obviously, of your core thing, because the interplay is just becoming so deep now. You can’t just do the one thing.
So you’re going to have deeper – I don’t want to call them generalists – it’s more like a combination of specialists. They’re strategists who really understand the channels and the technology and have the ability to synthesize where the opportunities are. You can’t just be the one thing anymore.
Aaron: I agree. I think interestingly, that’s one of the big implications of all three of the things we talked about, right? The DOJ testimony and Google Docs leak and what’s happening with AI. They all point to: you actually have to be a marketer, broadly capable marketer.
Mike: I agree. And for me, that’s what I’ve always been because I’ve always had broad interests. It’s always been weird to me because when I talk about these first jobs I had, I was a webmaster, and webmaster splintered into 30 different jobs. You’ve got front-end developer, back-end developer, network administrator, DevOps – I was doing all those things, and so it’s always made sense for me to know as much as I can about everything so I can speak intelligently to those people and work with them. And if for whatever reason, we don’t have those roles, I can step in and do at least a decent job on it.
The over-specialization to me has always been a weakness for people that we’ve worked with. I’ve always thought of it as like people speaking multiple languages. You may be someone who speaks SEO, but now you want to learn how to do copywriting. I’ve always encouraged people that have worked on our teams – what language do you want to learn? Let’s support that so you can do better work when you interface with those sorts of people.
“The Science of SEO”
Aaron: So tell me about the book you’ve been writing.
Mike: Yeah, it’s called “The Science of SEO,” and it’s very much an information retrieval book for SEOs. I walk through Google’s historic innovations from a technical perspective to explain what they did and how it operated. I’m effectively stitching together a bunch of information from patents, white papers, and things I know from my own experience: the core information retrieval advancements, how Google uses them, and what they’ve contributed.
I walk through how the internet works, how the web works, how browsers work. I teach people how to make a simple search engine so they can really understand, on a granular level, what lexical search is versus semantic, how you do hybrid retrieval, all of that. And then I take you through, okay, here’s how you do SEO with all that information.
So it’s basically taking everything that’s in my head and putting it in a book in hopes that it helps our whole industry level up in a variety of ways. The way I landed on wanting to write this book was I got tired of saying “it depends.” I wanted to know what it depends on.
I want to have more intelligent conversations with my clients. Every time I run into something – there was a point where we were doing page speed audits and I felt like I was just parroting what it says in PageSpeed Insights. I wanted to really understand how performance works on the internet. I went down to understanding that speed on the internet is a function of the speed of light traveling through cables. I got that in-depth on it. And then that allowed me to restructure how we did our page speed audits so that they’re a lot more actionable and valuable to our clients.
I want everyone to be able to do that because I see a lot in our space where people are just exporting PDFs from SEMrush and saying “here’s your audit.” And that’s not – why is anyone paying you for that? That hurts everybody.
Aaron: The thing is, a tool like SEMrush will say everything’s on fire anyway.
Mike: There’s no context for what the business is doing and why they’re doing it. So I want everyone to have the ability to have those conversations. And also just want a capstone on my SEO career – here’s this artifact that represents what I was able to accomplish, and I just want to share that information with everyone else.
Aaron: That’s fantastic. When does the book come out?
Mike: It should be early 2025.
Aaron: You’ve got a landing page for the book as well, right?
Mike: Yeah, it’s on Amazon and all that already.
Aaron: All right. So we’ll include links in the show notes as well. Really enjoyed the conversation.
Mike: Yeah, thanks for having me.