E33: How to get AI to recommend your brand with FERMAT
In this episode of the SaaS Operators Podcast, we talk about how FERMAT is building AI search analytics and optimization into an acquisition channel. Rishabh explains how a free AI search product is compatible with a premium SaaS product that starts at $500 a month, why monitoring should be priced at zero, and how the real value is in workflows that generate shoppable, LLM-friendly content. We get into how AI search outcompetes gray and black hat SEO tactics, how models like ChatGPT, Gemini, Claude, and Grok actually rank products, and what it means to win with truthful “white hat” content. The episode closes with the end of chat-first AI: FERMAT’s agentic AI search product, which feels more like a hot-or-not UI than a prompt box, and what operators should be building now if they want their SaaS products to stay competitive.
On this episode of the SaaS Operators, Zach, Jeremiah, Rishabh and I went deep on something that keeps coming up on the show but rarely gets executed well in the wild: AI search as a real acquisition channel.
We talked about how Rishabh and the team at FERMAT went from a $500-a-month minimum price point to launching a free AI search product, why they decided monitoring should be priced at zero, and why the real product is the workflow that turns AI visibility into shoppable surfaces that actually convert.
The conversation zoomed out into a bigger question too: what happens when most AI is not chat based at all, and your product has to plug into that world as an agent that just goes and does the work for you?
Here is how I am thinking about it after that episode.
AI search, free products, and the end of “chat first” AI
AI search keeps coming up on the podcast. Every operator I know is thinking about it. Nobody is really doing anything meaningful with it.
Except Rishabh.
On the episode he walked us through how they launched an AI search product at FERMAT, why they made monitoring free, and what he thinks this whole space is going to look like. It turned into a bigger conversation about LLM arbitrage, truth, and a future where most AI is not chat at all.
I want to lay out the parts that matter if you run a brand or a SaaS company and you are trying to figure out how to show up in AI search in a way that leads to actual revenue, not vanity metrics.
Why they made AI search monitoring free
FERMAT used to have no free product. The cheapest thing they sold was a funnel builder at $500 a month. Everything was paid, everything was premium.
Then AI search hit.
Rishabh had a simple realization: the end state of what they are trying to accomplish is not “know how you show up in AI search.” The end state is “turn AI search into a real acquisition channel.”
Monitoring is just “knowing.” The hard part is what you do after you know.
He also realized AI search monitoring is not technically that hard. The real work is:
- Building the integrations
- Understanding how people shop
- Turning that into shoppable stores and content that LLMs can actually use
Once they wired up all the integrations and workflows, they started testing with alpha customers. Those brands did not just see impressions when users were redirected from ChatGPT or AI Overviews. They saw conversions.
At that point the strategy is obvious. Your job is to just get people started.
So they made a simple call:
- Monitoring: priced at zero
- Content workflows: priced where the real value is
Free monitoring, plus a generous “starter” tier of generated content, is enough for a brand to see movement. Once you see that this new channel can drive revenue, you want more. That is where the paid product kicks in.
Monitoring is a commodity. Workflows are the product.
Everyone is spinning up “AI search dashboards” right now. That layer is not where the defensible value is.
FERMAT has a structural advantage because they already sit on top of a lot of shopping behavior. For their customers they are plugged into:
- Ad and marketing platforms like Meta, Klaviyo, and others
- The backend, so they see order data
- First party behavioral data on the store
- Third party sources like Reddit and social
That means they do not just see what people are asking. They see what people ask right before they buy.
When a new brand plugs into their AI search product, they use all of that data to figure out:
- What are the hundreds of queries that actually matter for this brand
- Who else shows up for those queries
- How people currently buy from the brand
Then, because they are already connected to the brand’s content and have the brand voice and product catalog, they can automatically generate shoppable content mapped to those queries.
You hit one approve button. It spins up collections, narratives, and product groupings that LLMs can ingest and use to answer those exact queries.
That is the important distinction. Monitoring tells you where you stand. Workflow turns that knowledge into shoppable surfaces.
Monitoring should be free. Workflows are the product.
How the free tier is structured
The free tier is not just a toy.
Right now it covers:
- Monitoring up to around 100 queries
- Generating up to 20 pieces of shoppable content
That is enough to see:
- Your brand starting to get cited in AI overviews and chat answers
- Real traffic and conversions coming from those surfaces
If you want to keep scaling beyond those first 20 pieces of content, that is when you pay.
The psychology is deliberate. You should not be paying to “find out” if this channel works. You should be paying to scale something that is already working.
Will AI search become the new SEO content farm?
Jeremiah asked the obvious question: does this just become another arms race where everyone floods the internet with useless content, like SEO in its worst years, except now at LLM scale?
My view is that the game rhymes with SEO, but the mechanics are different.
LLMs consume a ridiculous amount of information compared to old school search. They can cross reference your claims against both first party and third party sources in one shot.
In SEO you could get away with a lot of garbage if you knew how to do internal links, backlinks, domain authority and all the usual tricks. You could game “world’s most handsome man” and get some random Irish SEO guy’s face ranking beside Brad Pitt for a while.
Try that now. That kind of cheap arbitrage gets shut down fast.
The new game looks more like this:
- You produce content that accurately and usefully describes what your product can do
- You cover the long tail of “but also” capabilities that never make it into your main landing page copy
- You make sure genuine signals from real users are visible and accessible
Think about restaurant reviews.
If a restaurant has five thousand five-star reviews from people who were actually at that physical location, that is an insane signal. It probably matters more to an LLM than some critic at a major newspaper. The same thing will apply to products, apps, and services.
In traditional SEO we talked about white hat, gray hat, and black hat tactics. In the LLM world, gray and black get crushed much faster because the system can see so many more perspectives at once.
So what survives looks a lot more like white hat:
- Honest descriptions
- Real usage data
- Verified reviews and third party context
You can still “play the game,” but you are doing it by exposing the truth in the most useful way, not by inventing a story that cannot survive a cross check.
How LLMs are actually recommending products right now
There is also a weird gap between what these models could do and what they are doing today.
Jeremiah told a story about shopping for a heater. Category level recommendations from ChatGPT were decent. The specific product suggestions were not. He had to go to Amazon and dig for better options manually.
Rishabh pointed out you can literally ask the model how it is ranking products. One of the top factors is price, by a mile. Then convenience. Then variety.
That means if you want good recommendations, you have to explicitly tell it:
- I am not optimizing for price
- Here is what I care about instead
Only then do you get closer to the “best” product for your use case.
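To make that concrete, here is a minimal sketch of what “explicitly tell it” can look like as a reusable prompt builder. The function name and the criteria list are my own illustration, not anything FERMAT ships:

```python
# A minimal sketch of a shopping prompt that overrides the default
# price-first ranking. The function and criteria are illustrative.
def build_shopping_prompt(product, criteria):
    """Compose a prompt that tells the model what to rank on instead of price."""
    lines = [
        f"Recommend the best {product} for me.",
        "Do not optimize for price.",
        "Rank candidates on these criteria, in order:",
    ]
    lines += [f"{i}. {c}" for i, c in enumerate(criteria, start=1)]
    lines.append("For each pick, explain which criterion it wins on.")
    return "\n".join(lines)

prompt = build_shopping_prompt(
    "space heater",
    ["safety certifications", "noise level", "verified owner reviews"],
)
print(prompt)
```

The point is not the code, it is the habit: state your ranking function out loud, because otherwise the model falls back to price, then convenience, then variety.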
On the other side, my girlfriend is building a baby registry and she has given up on ChatGPT for product decisions. She uses Gemini because it can fetch Google reviews natively and fuse that into the answer. ChatGPT keeps telling her it cannot access those reviews, which means Gemini feels more useful for that specific job.
Zach has had his own version of that frustration and has basically moved his daily usage over to Claude and Grok. His argument is simple. Over a long enough time horizon the only thing that matters for an LLM is truth, and he thinks the leadership around Grok is more obsessed with truth than anything else.
You can agree or disagree with his bet, but the point stands. As a brand or a SaaS company, you should assume:
- The ranking functions are still shifting
- Some models overweight price
- Some are better at pulling third party reviews
- Some will be more conservative or more aggressive in what they surface
Your job is to feed each of them the best possible structured, accurate view of your product so that when they fix their ranking functions, you are already in position.
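One widely supported way to hand every model the same structured, accurate view is schema.org Product markup embedded as JSON-LD in your pages. A minimal sketch, with placeholder values throughout:

```python
import json

# Minimal schema.org Product markup as JSON-LD, one common way to expose
# a machine-readable view of a product. All values are placeholders.
def product_jsonld(name, description, price, rating, review_count):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,  # honest copy, not keyword stuffing
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": "USD",
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": rating,
            "reviewCount": review_count,  # real, verifiable reviews only
        },
    }, indent=2)

print(product_jsonld(
    "Example Heater",
    "1500W ceramic heater with tip-over shutoff",
    79.99, 4.7, 5132,
))
```

Notice this is pure white hat: the markup only wins if the description and the review counts are true and cross-checkable.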
The future is not chat
One of the parts of this conversation I liked most was about interface.
Right now, most of our interaction with AI is still a chat box. It is already starting to feel wrong.
Zach dictates prompts through Whisperflow all day and then hits a bottleneck reading the responses. Thought speed is outpacing text speed. It is annoying.
I have been using tools like Cursor that plug into your entire dev environment so you can hand a job over and let it run for a while without babysitting it. It already feels better than sitting in a chat box.
Rishabh was clear about it: most AI products in the future will not be text-based interfaces.
His AI search product is a good example. It is fully agentic.
It goes and gets the data.
It generates the content.
It builds the shoppable stores.
It comes back with a visual feed of proposed “cards” and asks you one question: approve or not.
The interface is basically Tinder for content. Swipe left, swipe right, ship.
No prompt engineering. No wall of text.
He gave another example that feels like the future bleeding into the present. There is a tool they use that plugs into customer feedback inside Slack. It:
- Reads the feedback
- Clusters and prioritizes it
- Creates tickets
- Writes the code to solve the issue
- Opens a pull request for an engineer to review
Without anyone talking to anyone.
Customer writes a message. A little while later there is a PR sitting there ready to ship.
That is “hands free AI” in a very literal sense. You are not chatting with it, you are working around it while it handles a bunch of execution in the background.
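We did not name the tool on air, so here is only a hypothetical skeleton of that pipeline. Every function below is a stub I made up to show the shape of the flow, not a real API:

```python
# Hypothetical skeleton of the Slack-feedback-to-PR pipeline described above.
# Every function here is a made-up stub illustrating the shape of the flow.
from collections import defaultdict

def cluster_feedback(messages):
    """Group raw feedback by a crude first-word key; real tools use embeddings."""
    clusters = defaultdict(list)
    for msg in messages:
        key = msg.split()[0].lower()  # placeholder for semantic clustering
        clusters[key].append(msg)
    return clusters

def run_pipeline(messages):
    clusters = cluster_feedback(messages)
    # Prioritize by cluster size: biggest pain point first.
    ranked = sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)
    tickets = [{"title": key, "reports": msgs} for key, msgs in ranked]
    # In the real flow an agent would write the fix and open a PR here;
    # this sketch just returns the ticket queue an engineer would review.
    return tickets

tickets = run_pipeline([
    "checkout button unresponsive on mobile",
    "checkout flow loses cart on refresh",
    "search results load slowly",
])
print(tickets[0]["title"])  # → checkout (the biggest cluster lands on top)
```

The human only shows up at the review step, which is exactly the approve-or-not pattern from the content feed.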
From thought to execution
Once you start thinking like that, it becomes obvious where this is going.
Zach joked that he wants a Fathom-style summary of his walks. If your best ideas show up when you are outside, why is there no system automatically capturing, clustering, and executing on them?
I pushed that one step further. The only reason you need a summary is because you still have to turn the idea into a plan and then into work.
Imagine a world where there is no summary at all.
You think. The machine listens. The system routes that idea to the right context and it gets executed without you ever needing to “retrieve” it again.
We are not there yet, but you can see the pieces.
Starlink gives you connectivity anywhere.
Neuralink or similar tech gives you a direct brain interface.
Agentic systems like the ones we talked about give you an execution layer that does not need to be micromanaged in a chat log.
Put those together and the distance between thought and execution collapses.
What to do with all of this if you run a brand or SaaS
If you strip this whole conversation down to operator level takeaways, it looks like this:
- Do not stop at AI search monitoring. That is table stakes and will quickly become a commodity.
- Invest in workflows that turn AI search insight into shoppable, structured content at scale.
- Focus on white hat strategies: honest descriptions, rich “but also” details, and real user signals.
- Understand that models are still biased toward price and convenience. Prompt around that if you want quality.
- Expect the most valuable AI products to feel more like agents and less like chat.
Most people are still looking at AI search like a dashboard problem.
Rishabh is treating it like an execution problem.
That feels like the right way to think about it.
