We are entering the era of the ‘synthetic word’

ChatGPT is part of a paradigm shift that we cannot underestimate in our industry, says Enrico Gallorini, CEO of GRS Research & Strategy, president of Info Salons ME, and considered a forward-thinker and expert on the Experience Industry. Here he takes us on a deep dive into the topic and its likely influence on the exhibition industry, and offers some insights on how we might harness it for the common good.

ChatGPT's ability to simulate language is not just a technical advance in natural language processing (NLP); it is a civilisational passage that will shake many economies, businesses and markets. For this reason, the exhibition industry must also investigate how and why this innovative technology will enter our sector, and what its effects will be in the industries where our collective events represent the shared moment of exchange and evolution for our target niches.

By now everyone is talking about ChatGPT, and many are keen to try the chatbot. It is free, for the moment, and therefore often overloaded by the sheer volume of requests.

At the time of writing it has reached 200m users, an incredible number compared with any other innovative technology at market entry, and its speed of growth is simply unprecedented.


But, as always happens with the introduction of new technologies, we must try to go beyond the curiosity and ‘hype’ of the moment and understand more deeply the meaning of this radical change in the management of language.

I will try to offer an interpretation of the phenomenon, from a perspective connected to our sector, looking at how this new technology can change many of its dynamics.

 

ChatGPT and the issue of synthetic languages

Facing the question of synthetic and simulation languages like ChatGPT today means facing an epochal, not an episodic, passage of civilisation (Cosimo Accoto, 2022). My reasoning therefore starts from the assumption that we are at the beginning of a change: we can imagine trajectories, but we should observe and study how they evolve in the coming months and years, with a specific focus on the exhibition industry.

The use of ChatGPT is much commented on at the moment, but perhaps little explored and understood in its scope (technologically and socially). What it can mean for the world of exhibitions and large events has been studied even less, and certainly not in depth.

From a technical point of view, we are talking about an extremely refined algorithm built around a large language model (LLM): a generative socio-technical assemblage made up of different skills related to multiple computational architectures and information resources.

The ability to simulate language in its textual form, to adjust it to context, to archive knowledge and information, to execute linguistic instructions and tasks, to synthesise topics with scalar refinement, to originate sequences of arguments and step-by-step reasoning attempts, to articulate answers and build dialogues… all of this is the result of a complex orchestration of software programs, data and information archives, deep learning algorithms, human reinforcement and mathematical-stochastic models of language.

Therefore, it is a set of intertwined engineering-computational techniques and operations (training on code, transformers, pre-training modelling, instruction tuning, word tokenisation, reinforcement learning with human feedback…) capable of statistically sequencing natural human language.

But we need to remember that all of this happens without any meaningful relationship with reality.

That is to say, without the synthetic language actually knowing anything about the world, and without it having any understanding of meanings.

 

What is ChatGPT?

As always in a strategic path, we must try to start from the ‘beginning’, and for this I use the ‘Cosimo Accoto way’ of presenting what ChatGPT means.

What is an LLM, large language model?

It is a mathematical model of the probability distribution of words in a written language which strives to minimise cross-entropy (i.e. the gap between two probability distributions over word frequencies), thereby maximising its performative capacity as a text predictor.
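To make the definition concrete, here is a minimal numerical sketch of what ‘minimising cross-entropy’ means. The word probabilities below are invented purely for illustration and say nothing about how a real model works internally:

```python
import math

# Illustrative only: the "true" corpus frequencies of the next word after
# some context, and two candidate models that try to approximate them.
corpus_dist = {"moon": 0.90, "stage": 0.07, "beach": 0.03}

model_a = {"moon": 0.85, "stage": 0.10, "beach": 0.05}   # close to the corpus
model_b = {"moon": 0.40, "stage": 0.30, "beach": 0.30}   # far from the corpus

def cross_entropy(p_true, q_model):
    """Average number of bits 'wasted' by the model relative to the true distribution."""
    return -sum(p * math.log2(q_model[word]) for word, p in p_true.items())

print(cross_entropy(corpus_dist, model_a))  # lower: the better text predictor
print(cross_entropy(corpus_dist, model_b))  # higher: the worse text predictor
```

The model whose predicted distribution sits closer to the real frequencies of the corpus scores a lower cross-entropy, which is exactly what training pushes towards.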

This approach is the result of a long journey (Li, 2022) in the modern history of natural language processing (NLP). It starts from the Markov chains and Turing models of the beginning of the 20th century, applied to literature (the sequence of vowels and consonants in a novel), passes through the work of Shannon and Weaver in the mid-fifties on the measurement of entropy and the distribution of probabilities (n-grams and probabilistic sequences of words in language), and arrives at the beginning of the 2000s, when Bengio and colleagues (Bengio, 2002) applied artificial neural networks to natural language processing (neural NLP). Important recent developments include transformers, capable of incorporating the contextual dimension of words in sentences into the probabilistic analysis of language.

For this reason, as Shanahan (2022) writes: “It is very important to keep in mind that this is what large language models actually do”.

Suppose we give an LLM the request “the first person to walk on the moon was…” and suppose the reply is “Neil Armstrong”.

What are we actually asking for?

To a significant extent, we are not asking who was the first to walk on the moon.

What we are asking the model is the following question: given the statistical distribution of words in the vast public corpus of texts (for now, mostly in English), which words are most likely to follow the sequence “The first person to walk on the Moon was…”?

Statistically, according to the system and its articulated algorithm, an excellent answer to this question is “Neil Armstrong”.
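The mechanism can be shown with a toy sketch: a minimal frequency-based predictor over an invented mini-corpus. This assumes nothing about how GPT itself is built; it only illustrates the spirit of ‘which word is most likely to follow?’:

```python
from collections import Counter, defaultdict

# A tiny invented "corpus", purely for illustration.
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "neil armstrong was the first person to walk on the moon . "
    "the first man on the moon was neil armstrong . "
    "the first person to fly solo across the atlantic was charles lindbergh ."
).split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent continuation seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("was"))   # prints "neil": the most frequent follower of "was"
print(predict_next("neil"))  # prints "armstrong"
```

A real LLM conditions on the whole preceding sequence rather than a single word, and on billions of documents rather than four sentences, but the question it answers is of the same statistical kind.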

 

The need to imagine new dimensions of exhibitions

We have rediscovered the importance of ‘live’ events and how much the centrality of the ‘human’ experience is at the basis of the very existence of collective events, which we generally call ‘exhibitions’.

This ‘human effect’ is not only the keystone on which events are based, but it is also the reason why today there is a rediscovery of the human element throughout the customer journey: the greeting of a human when you arrive at the venue, the welcoming side of the onsite registration, the handshake on the exhibition floor, the smiles, looking into each other's eyes, the dialogue made of physical interactions...

The example of ChatGPT, or rather of an artificial intelligence that simulates human language, could on the one hand seem like a new and important challenge to the world of the media (of which exhibitions are a part), which is built on content… on the other hand, it is a huge opportunity to rediscover the complexity of the true ‘intelligentia’ that underlies the production of a collective event…

I would like to go deeper into this point.

ChatGPT, in its current standard use, does what it is supposed to do. And it does it very well. It has an extraordinary ability to synthesise, putting the main things together. But if you ask it for a minimum of reasoning on less usual things, it does not work.

Take a ‘simple’ question such as: “Laura’s mother has two children: is one of them called Laura?” Its answer is that there are not enough elements in the question to answer correctly.

This highlights that there is no understanding of the text.

Today this technology is like the spelling-book that does not understand words, or the calculator that does not understand numbers, however precise its calculations.

That is, it behaves intelligently, even though it is not intelligent. Tools like ChatGPT emphasise the separation between:

- acting successfully, like an artificial intelligence does

- acting intelligently to arrive at that success, as a person generally does.

ChatGPT has an enormous ability to act, but without ‘intelligence’.

This is closely connected to the experience of virtual events, hybrid platforms and all the more or less refined ‘matchmaking’ tools.

They are tools.

They are essential to make the experience of the collective event more complete and to enhance it, but to make them work you need commitment, creativity, a deep understanding of the needs of the beneficiaries (visitors and exhibitors), precise structuring of requests... a profound use of human intelligence. Then the tools do their job very well.

I have believed for years now that maybe we should stop talking about ‘intelligence’ when referring to all these tools, starting with ChatGPT.

In fact, to do something successfully a human being needs intelligence, even a minimal amount. Today, several processes can be done by machines with zero intelligence; in fact, they can do them even better than we can. But they do it computationally, not intelligently.

Today its answers are trivial. Perhaps tomorrow they will be less so, but we and our industry must focus on distinguishing ourselves through the ‘Human Touch’ of the exhibition experience.

In particular, we need to focus on the best way to make the Exhibition Experience as ‘Individual and Social’ as possible, starting from the welcome at registration and the support for the visit, and enhancing the intelligence that lies behind the tools.

There is nothing more pleasant at an event than receiving a warm ‘welcome’, with a beautiful ‘human’ smile. We started from there ... and we must stay there for the good of that incredible ‘human experience’ that is in all our industry business models.

Having machines that welcome and smile just doesn't work; from a neurological point of view it pushes you away and creates anxiety and stress!

From this simple ‘physical’ element, as important as it is irreplaceable, and returning to the value of the chatbot, we understand more and more often that there really is a need to include more ‘philosophers’ (thinkers) and fewer ‘technicians/analysts’ within our organisations. The tools are improving and, although there will always be technical work, it will be less and less value-added. What changes completely is understanding what to do, and what to have done, with these tools, starting from the ‘why’.

We must ask ourselves how far this structured evolutionary form of tools can go, and we must imagine that to date there is no theoretical limit to their improvement.

But there is a limit on resources: financial, computational, investment and industrial interest. When I think of very useful and fundamental technologies that then disappeared, I think of Concorde, for example: a technology that exists, but is not convenient, and that is why it is no longer used.

Returning to our industry, there are things that AI can easily replace. There are programs that know how to write articles; simple ones, perhaps. And tomorrow, for a content provider, there will be no reason for a journalist to do the ‘simple’ part. But no program can replace the depth of analysis of an article, for example one by an expert: the ability to go beyond the mere publication of words, into what they imply and entail for the reference industry. Even if I am starting to suspect that, maybe not immediately but tomorrow, an artificial intelligence will be able to do that too. Will it be able to do all the tasks, all the jobs, that a person carries out today? The important thing is to understand that intuition, the connection of the various pieces of the puzzle and strategic vision are very complex activities that require:

- Experience

- Intuition

- Passion

At the output level of a process, we must imagine that in the short term we may not be able to distinguish the output produced by a machine from that produced by a human. But the input and, above all, the process will remain different.

Some things are made by humans and have value precisely because humans made them. The ‘artistic’ component of choice, of those who know how to imagine the future and undermine the status quo, becomes unique.

The human being will have to focus more and more on the ‘unicum’ of their thought: why things are done, and the process involved in doing them.

For this reason, our exhibition system also needs to think about developing ‘new’ professional figures and skills that know how to get the most out of these tools, introducing new disciplinary practices such as prompt engineering and design. Expertly crafted queries, instructions, data and examples are the inputs normally used to prompt the machine to produce, through a mathematical model optimised on linguistic tokens, the desired output (a conversation, a text, a summary...).
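As an illustration of what such prompt-design skills look like in practice, here is a minimal sketch of a structured prompt: a role, constraints, a few worked examples, and then the real task. The assistant role, the wording and the example question are all invented for this sketch; any real deployment would depend on the specific tool and API being used.

```python
# A minimal sketch of "prompt engineering": instead of a bare question,
# the input combines a role, constraints, examples and the actual task.
# All names and wording here are invented for illustration.

def build_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    parts = [
        "You are a content assistant for a trade-exhibition organiser.",
        "Answer in at most three sentences, in a professional tone.",
        "",
    ]
    for question, answer in examples:   # few-shot examples steer style and format
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {task}\nA:")      # the real task comes last
    return "\n".join(parts)

prompt = build_prompt(
    "Summarise the benefits of exhibiting at a packaging trade show.",
    [("What is a trade exhibition?",
      "A collective event where an industry's buyers and sellers meet in person.")],
)
print(prompt)
```

The point of the discipline is exactly this: the quality of the output depends heavily on how intelligently a human has structured the input.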

Observing these tools that simulate reality, Plato's text on the imitative arts comes to mind: ‘The imitator knows nothing of the imitated thing that is worth anything’, he wrote in The Republic.

In its contemporary version, the term ‘stochastic parrot’ has been introduced.

From time to time, humans greet the word taken up by the machine either with clear condescension (there is no understanding of meaning) or with easy enthusiasm (a turning point in the generation of language). However, these are weak philosophical visions of the moment and of the transition we are experiencing, because they try to weaken or trivialise the disorienting cultural impact of the arrival of synthetic languages. That impact does not concern the question of assigning, or not, intelligence, consciousness or sentience to machines. Rather, in perspective, the arrival of the ‘synthetic language’ (Bratton) deeply undermines and deconstructs (Gunkel) the apparatuses, domains and institutional devices of discourse, of the word and the speaker, as well as of writing and authorship.

The speaking of the machine will be a more profound and disorienting operation in the long run.

This is perfectly connected to the Metaverse, the synthetic immersive worlds currently produced mainly for playful purposes, where after the initial hype there is a strong ‘negationist’ vision. But they are here, and they are evolving quickly. Just look at Gen Alpha (children born from 2010 to 2024), who use these tools, to understand that they are here to stay; as such, it is up to us to study them and put them at the service of humanity (for this reason I am creating a series of articles called “Building Blocks for the Exhibition of the Future” that will cover all these innovations).

This makes the role of the exhibition of the future even stronger: as the collective event where industries meet and discuss hyper-specialised dynamics, it brings with it the real and intelligent ‘profiling’ of the value of participation.

It will not be the algorithm that participates in an exhibition, but the person, with their intelligence and their ability to create value.

More and more, events will have to work on the value of this collective moment, going in the direction that best preserves humanity: keeping that ‘sacred’ place (and not only for its revenues) as human as possible, like an island of pure humanity where the tools are just a reinforcement of the physical experience. Where humans gather, meet and have that valuable experience live, let us keep the ‘machines’ away, for now.

This step will also require a strong investment in all the people who support the exhibition throughout its duration. There could be no greater mistake, in my view, than making the exhibition experience predominantly one of tools and machines (turnstiles, screens, computer bots, etc.), when the strength of the collective event lies in speaking with real people, people who will probably always have to be oriented towards ‘solving problems’ and being informed, rather than simply reporting a piece of data (in which case the machine would win, but would destroy the human serendipity effect that makes our industry unique).

The central point is that we must prepare now for what will happen in the next 10-15 years. The evolution of these technologies is accelerating, and the impact they can have on exhibitions can be partly contained if the event organiser is able to keep the HUMAN orientation of the collective-event experience clear and strong.

But the tool keeps accelerating. Therefore, exhibitions must understand more and more what impact ChatGPT can have, for example in their reference industry, because it is inevitable that there will be, as there was with the Internet and will be with the Metaverse, technologies that arrive to stay.

True intelligence comes back into play: the role of the organiser is to understand and appreciate what will become of their own reference niche in the near future.

In this regard, we will experience a ‘thump’ of disillusionment with these tools in a few months, vis-a-vis what this new tool can really do. But in my opinion ChatGPT must not be banned from our industry, nor must we pretend that it will not touch it; it must be studied, taught and enriched within our organisations.

Companies should be given the tools to understand and use these new technologies.

Many years ago, at the 2018 UFI conference, I spoke about blockchain as a foundational technology: in other words, one that generates new economies. Here we are talking about artificial intelligence, and we must take a different approach: we have to start thinking about a non-utopian world of pre-retired people, or of really different skills. We are already heading towards a world where people work less, and if they work more it is because they work badly.

They work less because they use their intelligence starting from the ‘why’ of things.

With Covid, I believe we have all understood that, at least in our context, the quality of human relationships, in its form of socialisation, is linked to quality of life, and is no longer tied to how much one works.