Manifesto · May 2026
What the model does not know.
The case for brand authorship in the AI commerce era. Human-in-the-loop is not a workflow constraint — it is the architectural commitment that lets a brand keep speaking for itself when models are answering for everyone else.
In every commerce era before this one, the brand had a say in how it appeared. The runway, the magazine, the storefront, the mailer, the homepage, the marketplace listing — each surface had a hand on it. The brand chose the words. The brand chose the imagery. The brand chose what to leave out.
The AI era is the first one in which that authorship is being silently transferred — not by takeover, not by malice, but by signal scarcity. A model's job is to answer. When a customer asks ChatGPT what to buy and the brand's authoritative description of itself isn't anywhere a model can retrieve it, the model improvises. It uses what it can find. Old press releases. Retired campaign images. Resale-site blurbs. A six-year-old paragraph in a magazine archive. The model assembles a description. The customer receives an answer. The transaction proceeds.
Every step of that loop happens without the brand in the room.
This is the quiet violence of AI commerce. Most brands haven't noticed it yet. By the time they do, the model has already learned a wrong story about them.
We refuse the trade.
The disappearing brand.
A brand is a series of choices the brand has made about itself: this fabric, this last, this fit, this language, this photograph, this colorway, not that one, never that one. The choice is the brand. Strip the choice away and you do not have a smaller brand or a quieter brand — you have a generic one. A luxury house with a formal evening collection. A heritage label with a French sensibility. A premium handbag in the $4,000 range.
These descriptions are not wrong. They are also not anyone in particular. The model offers them because that is what it can defend from the public web. Each is the linguistic average of the category, and the brand has been averaged into it.
What is being lost in the average is the choice. The atelier where every clasp is hand-finished. The reference drawn from a particular archive year. The woman the founder had in mind, decades ago, when the studio was still a single room. None of that survives the model's averaging — not because the model rejects it, but because the model never saw it. The proprietary truth lives in SAP, in PIM, in the creative director's email, in spreadsheets the marketing team curates by hand.
The proprietary truth is the brand. The averaged version is what the customer hears.
This is the gap Trevise was built for.
The honest truth about what AI is doing.
We do not think models are villains. They are computational instruments, and they do what they can with what they can reach. The fault, such as it is, is architectural. The brand has built its truth in places models cannot retrieve from. The model has built its answers from places the brand never spoke.
This is fixable. It does not require slowing AI down. It does not require fighting AI. It requires giving the brand a place to speak that the model can hear.
That place is what we call a canonical source. A single, authoritative, machine-legible representation of every product the brand sells — assembled from the brand's own systems, reconciled into one record per product, expressed in the formats AI surfaces actually consume.
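One way to picture a canonical source is as a small, versioned record per product, where every field remembers which system it came from. This is an illustrative sketch only, not Trevise's actual schema; the field and system names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SourceField:
    value: str
    source_system: str  # e.g. "PIM", "ERP", "DAM" — where this value was ingested from

@dataclass
class CanonicalRecord:
    """One authoritative, machine-legible record per product."""
    sku: str
    version: int = 0
    fields: dict[str, SourceField] = field(default_factory=dict)

    def set(self, name: str, value: str, source_system: str) -> None:
        # Record the value alongside its originating system, and bump the
        # version so downstream surfaces can tell which truth they consumed.
        self.fields[name] = SourceField(value, source_system)
        self.version += 1
```

The point of the structure is not the code; it is that "one record per product" and "knows where each value came from" are properties a brand can verify.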
But a canonical source is the easy part. The hard part is what we say next.
Authorship is a right, not a feature.
The brand must be the author.
Not the model. Not Trevise. Not a third party. The brand.
This is the principle that decides what kind of company Trevise becomes. Without it, we are an enrichment vendor — a tool that generates AI-friendly descriptions and pushes them to the open web. With it, we are something different: an infrastructure layer that lets the brand keep authoring itself in an era that would otherwise quietly take that authoring away.
The difference is everything.
We have watched, from inside the houses, how the older infrastructure of commerce treats authorship. A line sheet is not signed by a software vendor. A campaign image is not approved by an ad platform. A product description is not pushed to retailers without review. There is always, somewhere in the operation, a human looking at the work and saying yes, this is us; ship it.
The era of agentic commerce is the first era in which someone — the model, the agent, the third-party feed — proposes to write that final sentence on the brand's behalf, at the speed of inference, on every surface a customer might use to ask about the brand. To accept that proposal is to accept that the brand has agreed to be authored by something else.
We do not accept it. Neither does any brand we have worked with.
What human-in-the-loop actually means.
Human-in-the-loop is a phrase the AI industry has used in many shapes. We use it in one specific way.
It does not mean humans rubber-stamping AI output for compliance theater. It does not mean adding a review queue that everyone routes around when speed matters. It does not mean having a button that says "approve" so a brand can tell its lawyers there is a button.
It means three contracts the brand can hold against us, in writing, every day:
The data contract.
What you give us stays where you put it. Future-season designs, unreleased lookbooks, regional exclusives — none of it leaks to a model, an open feed, or another customer. The brand's internal truth is the brand's internal truth. Trevise's job is to make the current and past-season truth legible to AI surfaces — never to expose what has not shipped.
The claims contract.
Every AI-generated claim about a brand-sensitive attribute — material composition, sustainability provenance, regional positioning, comparison to a peer brand, best-for or not-best-for tags — is staged for review. The brand's team approves what goes live. Auto-approval is reserved for purely operational fields where a wrong answer is a small commerce error, not a brand error: price, inventory level, available variants. Anything that touches the brand's voice or its claims is touched by a human first.
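The routing rule in the claims contract can be stated in a few lines. The sketch below is illustrative, with assumed field names; the essential design choice is that it is default-closed, so any field not explicitly listed as operational waits for a human.

```python
# Hypothetical field lists, drawn from the examples above — not a real schema.
BRAND_SENSITIVE = {
    "material_composition", "sustainability_provenance",
    "regional_positioning", "peer_comparison", "best_for", "not_best_for",
}
OPERATIONAL = {"price", "inventory_level", "available_variants"}

def route_claim(field_name: str) -> str:
    """Decide whether a generated claim ships automatically or is staged for review."""
    if field_name in OPERATIONAL:
        # A wrong answer here is a small commerce error, not a brand error.
        return "auto_approve"
    # Default-closed: anything that might touch the brand's voice or its
    # claims — including fields we have never seen before — waits for a human.
    return "stage_for_review"
```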
The evidence contract.
Every claim Trevise generates carries a provenance trail back to the source data. Where the material composition came from. Which campaign image is the source of the visual reference. Which season the price reflects. The brand can audit every line. The brand can revoke any line. The brand can show its lawyer what it has shipped, in what version, to which surface, on which date.
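A provenance trail of this kind amounts to a record attached to every published claim. The sketch below is an assumption about shape, not Trevise's implementation; it shows only that each of the auditable facts named above has a field, and that revocation is a first-class operation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Provenance:
    source_system: str     # e.g. "DAM" — where the underlying data lives
    source_record_id: str  # pointer back to the originating record
    as_of: date            # which season or snapshot the value reflects

@dataclass
class PublishedClaim:
    sku: str
    field_name: str
    value: str
    version: int
    surface: str           # which AI surface received this version
    published_on: date
    provenance: Provenance
    revoked: bool = False

    def revoke(self) -> None:
        # A revoked claim is withheld from the next publication cycle.
        self.revoked = True
```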
Three contracts. Together, they are the difference between a brand willingly extending its voice into AI commerce and a brand discovering, six months later, that AI has been speaking for it without permission.
What this looks like in practice.
A brand connects to Trevise. The connection ingests product master data from SAP and PIM, imagery and lookbook references from the DAM, inventory and pricing from the ERP and commerce platform, customer reviews from the review system, and the spreadsheets where regional translations and certifications live.
The data is reconciled into a canonical record per product. Where the systems disagree — and they always do — the brand's team picks the authoritative source. This is not done by Trevise alone. The reconciliation surface is shared. The brand sees every conflict and every resolution.
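Detecting where the systems disagree is mechanical even though resolving the disagreement is not. A minimal sketch, assuming field values keyed by source system; the resolution itself is deliberately absent, because it is the brand's choice, not the software's.

```python
def find_conflicts(values_by_system: dict[str, dict[str, str]]) -> dict[str, dict[str, str]]:
    """Return the fields where source systems disagree.

    `values_by_system` maps field name -> {system: value}. No automatic
    merge is attempted: every conflict surfaces for a human resolution.
    """
    conflicts = {}
    for field_name, by_system in values_by_system.items():
        if len(set(by_system.values())) > 1:
            conflicts[field_name] = by_system
    return conflicts
```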
On top of the canonical record, Trevise generates the enrichments AI shopping agents actually consume: intent matching, scenario tags, comparison anchors, best-for and not-best-for, FAQs, summaries of customer reviews. Each enrichment is generated as a draft. The brand's team reviews the drafts. Approved drafts go live. Rejected drafts are retired. Every action is logged.
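The draft lifecycle described above is a small state machine: drafts become live or retired, live enrichments can be revoked, and nothing moves without a logged action. A sketch under assumed state names, not Trevise's actual workflow engine:

```python
class Draft:
    """Minimal sketch of the enrichment draft lifecycle; states are assumptions."""

    TRANSITIONS = {
        ("draft", "approve"): "live",     # approved drafts go live
        ("draft", "reject"): "retired",   # rejected drafts are retired
        ("live", "revoke"): "retired",    # live enrichments can be pulled back
    }

    def __init__(self, sku: str, enrichment: str):
        self.sku = sku
        self.enrichment = enrichment
        self.state = "draft"
        self.log: list[tuple[str, str, str]] = []  # every action is logged

    def act(self, action: str, actor: str) -> None:
        next_state = self.TRANSITIONS.get((self.state, action))
        if next_state is None:
            raise ValueError(f"{action!r} is not allowed from state {self.state!r}")
        self.log.append((self.state, action, actor))
        self.state = next_state
```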
The approved record publishes to OpenAI's ChatGPT Product Feed and the standards emerging across Gemini, Claude, and Perplexity. A daily catalog snapshot keeps the structural truth current; intraday updates flow for price, inventory, and availability where the surfaces support them.
The customer asks AI a question. The model retrieves what the brand has authored. The brand appears in the answer — in the brand's own voice, with its current product, its current price, its current language localizations.
A measurement layer feeds the loop back: AI-referred sessions, conversion, revenue, SKU-level performance, feed-version impact. The brand sees what AI is selling, and which version of its authored truth produced the result.
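Feed-version impact, in its simplest form, is an aggregation of AI-referred revenue by the version of the authored record that produced each session. The sketch below assumes a flat list of session dicts; the key names are illustrative.

```python
def revenue_by_feed_version(sessions: list[dict]) -> dict[str, float]:
    """Aggregate AI-referred revenue by the feed version behind each session.

    Each session is assumed to carry 'feed_version' and 'revenue' keys, so
    the brand can see which version of its authored truth produced the result.
    """
    totals: dict[str, float] = {}
    for s in sessions:
        totals[s["feed_version"]] = totals.get(s["feed_version"], 0.0) + s["revenue"]
    return totals
```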
This is brand authorship as an operating system — the answer to what the model does not know.
The brand that stays.
If we are right about the era — if AI commerce becomes, in five years, the surface where most product discovery actually happens — then the brands that survive it intact will be the ones that authored themselves through it. The brands that did not will not vanish. They will simply become harder to recognize, in their own answers, as themselves. The clasp drawn from the archive will become a metallic closure. The Florentine satin will become a high-quality silk blend. The woman in the founder's mind will become the modern luxury consumer. Every paraphrase will be defensible. None of them will be the brand.
We are not building Trevise to make AI smarter. We are building Trevise to make the brand louder.
In the layer the model can hear, in the brand's own voice, with the brand's hand still on the work.
When someone asks an AI for a recommendation, the brand finally shows up like itself.
That is the whole work. Everything else is engineering.
To see how Trevise reconciles, distributes, and measures what your brand says about itself — with your team approving every brand-sensitive claim before it ships — reach out.