From traditional search to generative synthesis: how public judgment about companies and entrepreneurs is evolving
Marianna Valletta, Founder, Valletta PR Advisory
The evolution of artificial intelligence is structurally transforming the relationship between information and reputation. This is not merely a technological issue, but a shift that affects how a company’s name is searched, interpreted, and represented in the public space.
For years, online visibility rested on a relatively stable model: users typed the name of a company or entrepreneur, accessed a range of results, and compared different sources. Reputation was shaped through a layered reading process, in which the hierarchy of content did not preclude deeper exploration. Today, that model is gradually changing.
According to a Gartner forecast, by 2026 the volume of searches on traditional search engines could decrease by 25%, with a significant shift toward AI-based chatbots and virtual assistants. The so-called “click era,” built on navigating multiple links, is giving way to concise answers generated by language models.
The difference is substantial: tools such as Google AI, ChatGPT, or Perplexity do not simply return a list of sources. They generate a synthesis: they aggregate available online information, assign it weight, and construct an overall representation of the searched name.
Even if this is not an opinion in the human sense, in practice that synthesis functions as one.
When a system selects certain elements and omits others, presenting them as a summary, it contributes to shaping an image. Yet synthesis, by its nature, simplifies: it may be accurate in individual data points while still being incomplete, isolating an episode from its broader history or giving prominence to an event that has lost relevance over time.
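The distortion described above can be made concrete with a deliberately naive toy model. Everything here is invented for illustration (the weights, the ranking rule, the example claims); no real system works this simply, but the failure mode is the same: a synthesis that ranks only by source authority can foreground a dated episode while omitting more recent, more representative information.

```python
from dataclasses import dataclass

@dataclass
class Source:
    text: str         # the claim a source makes about the company
    authority: float  # hypothetical weight assigned to the source
    year: int         # when the claim was published

def synthesize(sources: list[Source], k: int = 2) -> list[str]:
    """Naive synthesis: keep only the k highest-weighted claims,
    ignoring how recent or relevant they still are."""
    ranked = sorted(sources, key=lambda s: s.authority, reverse=True)
    return [s.text for s in ranked[:k]]

sources = [
    Source("Firm fined for compliance breach", authority=0.9, year=2015),
    Source("Firm wins industry award", authority=0.6, year=2024),
    Source("Firm expands into new market", authority=0.5, year=2024),
]

summary = synthesize(sources)
# Each retained claim is accurate, yet the 2015 episode dominates the
# summary and the 2024 expansion is dropped: accurate in individual
# data points, incomplete as a representation.
```

The point of the sketch is not the ranking function itself but the omission it produces: whatever falls below the cutoff simply does not exist in the synthesized image.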
In this context, digital memory does not disappear—it stratifies and is continuously reprocessed.
Companies understand that reputation is not merely symbolic capital. It is an asset that impacts credibility, financial relationships, market access, and attractiveness to talent and partners. It produces measurable economic effects and, for this reason, requires protection.
Companies are the primary custodians of their own name. They cannot limit themselves to reacting when issues arise or to seeking favorable visibility sporadically. They must continuously monitor the quality of information about them, verify how they are represented in generative search systems, and build, over time, a coherent and up-to-date body of information.
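The monitoring step above can be sketched as a simple audit routine. The company name, facts, and claims below are hypothetical, and in a real workflow the summary would come from querying generative tools rather than a hard-coded string; the sketch only shows the shape of the check: which key facts a generated synthesis omits, and which outdated claims it still repeats.

```python
def audit_representation(generated_summary: str,
                         required_facts: list[str],
                         outdated_claims: list[str]) -> dict:
    """Compare a generated summary against a maintained fact sheet:
    flag key facts the synthesis omits and stale claims it repeats."""
    text = generated_summary.lower()
    return {
        "missing": [f for f in required_facts if f.lower() not in text],
        "stale":   [c for c in outdated_claims if c.lower() in text],
    }

# Hypothetical example: a summary that repeats an old fine and
# omits a more recent certification.
summary = "Acme was fined in 2015 and operates in retail."
report = audit_representation(
    summary,
    required_facts=["operates in retail", "ISO certified since 2022"],
    outdated_claims=["fined in 2015"],
)
# report lists the omitted certification under "missing"
# and the dated fine reference under "stale"
```

Run periodically against fresh outputs from generative systems, a check like this turns "verify how you are represented" from an intention into a repeatable routine.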
However, responsibility cannot be unilateral.
The information ecosystem feeding artificial intelligence consists of editorial content, digital archives, platforms, and databases. Every piece of information put into circulation can become a source for systems that will synthesize and re-present it over time.
The press retains a central role and, at this stage, becomes even more decisive: journalistic sources are considered authoritative by AI models and significantly contribute to the formation of synthesized representations.
Publishing today means introducing content into a system that tends toward permanence, replicability, and automated synthesis. This implies greater responsibility: not to limit freedom of information, but to acknowledge that the systemic impact of information has changed.
For this reason, a more mature dialogue is needed among companies, the media ecosystem, and reputation experts. A dialogue not aimed at controlling narratives, but at understanding consequences. The quality of information and the protection of reputation are not opposing interests—they are elements that must be balanced within an evolved digital ecosystem.
In today’s economy, reputation is not an ancillary variable but a legally relevant and strategically decisive asset.
And in a context where artificial intelligence selects and synthesizes what deserves to be remembered, protecting it is not a defensive choice but a systemic responsibility.