Use case · May 2026
How AI gets your brand wrong.
When ChatGPT or Gemini describes your products incorrectly, the instinct is to blame the model. The model isn't the problem. The problem is that your product truth lives in five internal systems — and almost none of it has been published to the layer the model can actually read.
When a customer asks ChatGPT "what's a good wool overcoat under $1,500," the model answers with confidence. It names brands. It describes materials. It quotes prices. Some of those answers are right. Many are not.
The instinct is to blame the model. That's the wrong instinct.
The model didn't make this up. It found it.
A modern language model isn't inventing your brand from thin air. It's stitching together fragments — old reviews, archived press releases, cached product pages, third-party retailers, marketplace listings, Reddit threads, runway recaps, a six-year-old Vogue article about your founder. Whatever it can find, ranked by signal strength.
The problem is that most of those fragments are stale, contradictory, or written by someone other than your brand. The model doesn't know that. To the model, every source is a vote.
If your authoritative product description lives in five internal systems and nowhere on the open web in machine-legible form, the model has no choice but to vote with what it can find.
The five places your product actually lives.
Walk through your own organization and ask where the truth about a single SKU lives. The honest answer is usually:
- SAP / ERP. Material composition, supplier metadata, cost, country of origin. Authoritative. Not designed for narrative.
- PIM. Product copy, attributes, basic taxonomy. Designed for ecommerce, but written by a different team than the one that owns the brand voice.
- DAM. Campaign imagery, lookbook references, runway photography. Often versioned, often disconnected from product IDs.
- Spreadsheets and shared drives. Sustainability documentation, regional translations, certifications, materials provenance, photographer notes.
- Email and creative directors' heads. The story. Why the wool. Why the cut. What the collection is responding to.
A model doesn't get to read any of those. The model reads what's been published outward — your website, marketplaces, third-party retailers, archived content. Whatever leaked, in whatever shape.
This is the gap. Internal truth is rich. External signal is thin and fragmented. The model fills the difference with whatever sounds plausible from the rest of the web.
Three failure modes you can audit today.
You can see this in fifteen minutes without buying anything. Open ChatGPT and ask three questions about your own brand:
1. Material accuracy.
"What is the [your hero product] made of?"
If the model answers with the wrong fiber, the wrong blend percentage, or a generic "high-quality wool" hedge — that's a sign your material composition lives in SAP and was never published to the open web in a structured form a model could trust.
2. Voice authenticity.
"How would [your brand] describe their fall collection?"
If the answer reads like a generic luxury brand — restrained, timeless, elevated — your brand voice has been flattened by averaging. The model has no distinctive corpus to draw from, so it pattern-matches to category.
3. Price and tier accuracy.
"Is [your brand] in the same tier as [a comparable house]?"
If the model places you next to brands you do not consider peers, that's third-party retailer pricing and resale signals winning the vote against your own positioning.
Most brands fail at least two of the three. The pattern across them is the same: there is no canonical, machine-legible description of the brand that the model can ground its answer in.
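The three questions above are just templates with your brand's names dropped in. A minimal sketch of filling them in, so the audit can be repeated each season (the brand, product, and peer names here are placeholders, not real data):

```python
# The three audit questions as reusable templates. Swap in your own
# brand, hero product, and peer house before pasting into an assistant.
AUDIT_PROMPTS = {
    "material_accuracy": "What is the {product} made of?",
    "voice_authenticity": "How would {brand} describe their fall collection?",
    "tier_accuracy": "Is {brand} in the same tier as {peer}?",
}

def build_audit(brand: str, product: str, peer: str) -> dict[str, str]:
    """Fill every template for one brand; str.format ignores
    placeholders a given template does not use."""
    return {
        name: template.format(brand=brand, product=product, peer=peer)
        for name, template in AUDIT_PROMPTS.items()
    }

# Hypothetical brand used purely for illustration.
questions = build_audit(
    brand="Acme Atelier",
    product="Acme wool overcoat",
    peer="a comparable house",
)
for name, question in questions.items():
    print(f"{name}: {question}")
```

Paste each generated question into ChatGPT, Gemini, or whichever assistant your customers use, and compare the answers against your own product data.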
Reconciliation, not retraining.
The instinct of "we should fine-tune a model on our brand" misreads the problem. Fine-tuning teaches a single model your brand voice for your own use. It does nothing for the dozen other models a customer might use to ask about you, and it doesn't solve the fragmentation underneath — it just adds another inconsistent layer on top.
The actual fix is reconciliation. You need a single, canonical, machine-legible representation of every product — pulled from SAP, PIM, DAM, spreadsheets, and creative archives, resolved into one source of truth, and made available to the surfaces models actually retrieve from.
Reconciliation is unglamorous. It involves matching SKU IDs across systems that disagree on identifiers, deduplicating attribute names, resolving translations, picking authoritative imagery, attaching provenance to claims, and keeping all of it current as inventory and seasons turn over.
It is also the only thing that fixes the underlying problem. Every other intervention — feed pushes, visibility tracking, prompt engineering, GEO content — is downstream of whether your data is reconciled or not.
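The matching and precedence steps can be sketched concretely. This is a toy pass over already-exported records, assuming hypothetical system names and a hand-written precedence table, not a real SAP or PIM schema:

```python
import re

# Which system wins when two sources disagree on an attribute.
# Illustrative only: composition is authoritative in the ERP,
# narrative copy in the PIM, certifications in shared drives.
PRECEDENCE = {
    "material": ["erp", "pim"],
    "copy": ["pim"],
    "certification": ["drive", "erp"],
}

def normalize_sku(raw: str) -> str:
    """Collapse 'WC-1042', 'wc 1042', and 'WC1042' to one key."""
    return re.sub(r"[^A-Z0-9]", "", raw.upper())

def reconcile(sources: dict[str, list[dict]]) -> dict[str, dict]:
    # Index every record by normalized SKU, tagged with its source.
    by_sku: dict[str, dict[str, dict]] = {}
    for system, records in sources.items():
        for rec in records:
            by_sku.setdefault(normalize_sku(rec["sku"]), {})[system] = rec
    # Resolve each attribute by walking the precedence list.
    canonical = {}
    for sku, per_system in by_sku.items():
        merged = {"sku": sku}
        for attr, order in PRECEDENCE.items():
            for system in order:
                if attr in per_system.get(system, {}):
                    merged[attr] = per_system[system][attr]
                    break
        canonical[sku] = merged
    return canonical

# Three systems describe the same coat with different IDs and
# a stale material claim in the PIM; the ERP wins on material.
products = reconcile({
    "erp": [{"sku": "WC-1042", "material": "100% merino wool"}],
    "pim": [{"sku": "wc 1042", "material": "wool blend",
             "copy": "A structured overcoat in double-faced merino."}],
    "drive": [{"sku": "WC1042", "certification": "RWS certified"}],
})
print(products["WC1042"])
```

The real work is in the parts this sketch waves away: precedence rules per attribute per category, translation resolution, imagery selection, and keeping the merge running as systems change underneath it.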
A working definition of canonical.
A canonical product representation has four properties:
Single source.
One record per product, not five.
Authored by the brand.
The voice and the facts come from inside, not from third-party averaging.
Machine-legible.
Structured, attribute-rich, and consumable by retrieval systems and protocol-based agents.
Current.
Updated as the brand updates — same day, not next quarter.
This is what Trevise builds. We pull the fragments out of SAP, PIM, DAM, spreadsheets, and creative archives, reconcile them in real time, and serve a unified, brand-authored, machine-legible representation to every AI surface that will describe the brand.
The model still answers the customer. But now it has a source it can ground itself in, written by you, kept current. The hallucination space collapses.
When the model has somewhere to look.
The brands AI gets right are not the ones with the loudest marketing. They are the ones that have already done the unglamorous work of resolving their own product truth into a layer the model can find.
That's the whole shift. Not better prompting. Not better SEO. A canonical source the model is allowed to trust.
Trevise is that source.
To run the three-question audit on your own brand and see what AI is currently saying about you, reach out.