AI, almost love at first [ENTER]

In the last few weeks, the world has been swept up by the AI wave, with ChatGPT from OpenAI at the forefront of the news. Launched in November 2022, it became, within two months, the fastest-growing app in history, reaching almost 100 million users. As a former PhD student in AI, I know its technological development is remarkable. The software and the magnitude of data ingested to build the models are insanely complex and vast. I can see industries changing in ways we could not have believed possible a few months ago. Let's dive in and look at it from a different perspective.

What is GPT? What does it stand for? The abbreviation comes from Generative Pre-trained Transformer, representing the latest generation of AI algorithms (large language models). Why ChatGPT? Simply because it reached the "conversational" level. How did it get so "smart"? Through human-supervised training and reinforcement learning techniques applied to a massive tagged (known) corpus (Wikipedia and the like).
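To make the "reinforcement" part a bit more concrete, here is a toy sketch in Python of the idea behind reinforcement learning from human feedback: a reward model, standing in for one trained on human preference rankings, scores candidate answers, and the system favors the higher-scoring ones. This is purely illustrative; the scoring function and candidates are invented, and OpenAI's actual pipeline is far more involved (it updates the model's weights, e.g. with PPO, rather than merely re-ranking outputs).

```python
# Toy illustration of the RLHF idea behind ChatGPT: a "reward model"
# (a stand-in for one trained on human preference rankings) scores
# candidate answers, and the system favors the highest-scoring one.
# NOT OpenAI's pipeline: real RLHF updates the model's weights
# (e.g. with PPO) instead of merely re-ranking sampled outputs.

def reward_model(prompt: str, answer: str) -> float:
    """Hypothetical reward: prefer on-topic, reasonably concise answers."""
    on_topic = sum(word in answer.lower() for word in prompt.lower().split())
    brevity_penalty = abs(len(answer.split()) - 10) * 0.1
    return on_topic - brevity_penalty

def pick_best(prompt: str, candidates: list[str]) -> str:
    """Re-rank sampled candidates by reward and keep the winner."""
    return max(candidates, key=lambda a: reward_model(prompt, a))

candidates = [
    "The transformer is a neural architecture based on attention.",
    "I like turtles.",
    "Transformers use self-attention to process whole sequences at once.",
]
print(pick_best("what is a transformer", candidates))
```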

I will not waste your time explaining why it is good or how to use it (there are thousands of resources on the internet that explain absolutely everything, including the multitude of .ai solutions that have surfaced in recent weeks). Instead, I invite you to look at this matter differently from the mainstream.


The critical point worth analyzing is how much we trust, and should trust, the machines, and the potential impact on our lives.


Let us start with our endless love for technology. Until now, technology was very obedient and gave us the comfort of being "in control" (in the next blog post, I will do an exercise and count how many devices we interact with, on average, in 24 hours). Why is that? Because its deterministic behavior (until now) created a specific trust factor: we turn on the light, and it happens; we start the car, and it happens; we play music, and it happens. Every device we use executes our commands immediately: no delays and no deviations from a predictable result. Simply put, we knew what to expect.

On this "bedframe" comes into place ChatGPT (and any other last-generation transfer learning / reinforced learning algorithms), which no longer has a similar/accessible, predictable output. It is almost the same as speaking with another super-highly advanced human. The multivalence of AI algorithms available for public use enables the convergence of our methods of interacting with technology: text, audio, and video. What until now was almost exclusively in human hands, today, algorithms are getting insanely faster than us: text to image, video, audio, and all the mixtures possible. The novelty comes from mixing existing digital "knowledge" and producing new outputs (see Midjourney or Dall-E). This "novelty" will become the input in our work, life, and existence and the "trained" information for the same algorithms. It is a cycle that will streamline and smooth the integration of AI into our daily life, affecting every aspect of our existence.


If so, how will it affect us? It will make our lives easier and more comfortable. Still, it could also amplify "digital amnesia," a cognitive bias caused by our trust in technology (in its ubiquitous availability). We will no longer put effort into storing information in our brains (because we know it is easy to get from the nearby AI assistant). Instead, we will shift our attention and interest toward something new (easy to consume, fast to "expire"). However, human innovation, and implicitly evolution, depends on how we "connect the dots"; I dare to ask what will happen when fewer dots are available. Is it safe to "delegate" the responsibility for innovation to some algorithm-driven machine? Moreover, if so, how can we know the real direction of the innovation process?

For the first time, technology will not only help us with repetitive, computation-heavy, or mechanical problems; it has reached an exceptional place: our reasoning mechanism. This is scary and impressive at the same time. It is the beginning of a new technological revolution, one that can optimize our existence but also bring existential threats we cannot yet comprehend.


Let's imagine for a second that we can separate the AI technology itself from its training data (that part is super easy), combine custom data sources relevant to a specific domain (like intelligence, security, biology, medicine, etc., meaning data sources not publicly available), and kick-start a "research mode" (an interesting post, I hope, on how innovation works will follow soon). Can we use it to discover new things? (It already happens in medicine, and I am convinced not only in that domain.) If so, how fast can we accelerate the evolution of our society? And for how long can we stay in control of this process? Who will be the beneficiary of these discoveries? To whom will those discoveries be attributed? And how about hacking? I foresee a different threat landscape, where the main objective will no longer be to steal data but to interconnect data models or to alter the training dataset; in that case, the AI "itself" could become a spy or asset for an adverse party, while the hacker no longer steals information but actually adds data.
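For the curious, here is roughly what "separating the technology from its data" looks like in practice today: take a publicly available pre-trained model and continue training it on a private corpus. Below is a minimal sketch using the Hugging Face transformers and datasets libraries, where private_corpus.txt is a hypothetical file standing in for non-public domain data; note, per the hacking point above, that whoever controls that file quietly controls what the model will later assert.

```python
# Minimal sketch: continue training a public pre-trained model on a
# private, domain-specific corpus. Assumes the Hugging Face
# `transformers` and `datasets` libraries; `private_corpus.txt` is a
# hypothetical file standing in for non-public domain data.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The "custom data source": altering this file alters the model.
corpus = load_dataset("text", data_files={"train": "private_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```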

With the right volume of data and the proper ongoing feeding mechanism, these algorithms will change almost every aspect of our lives in the next five years. We will see industries and domains transformed in ways we will perceive as convenient: making our lives easier, solving medical issues, and letting us enjoy more experiences. (Drop me a line if you are interested in talking about how AI will, or could, affect your industry.)


However, the pace of evolution toward such a society will differ wildly across the world. A new division will form: society's digital/AI divide. And that will be the moment when the fundamental mechanisms change radically, because how a society and its members reach the next level will no longer be a "political" decision; it will be a "technical plan" perfectly elaborated by different AIs with a deep understanding of all aspects. How will AI-based political parties (like the Synthetic Party, with its Leader Lars, already registered in Denmark) fight each other? In such a scenario, will we say that AI is leading us, or helping us? Will we see AI-based political parties fighting over algorithms or objectives? And how will those "political parties" be "elected"? How can humans run negotiations in a common-law system (based on precedents) against an AI trained on absolutely all cases? Indeed, a new type of "Clash of Civilizations" is ahead of us.


Until then, let's prepare for the next major evolutionary event: the intersection of AI systems with automation robots (already developed in recent years), aggregated into some kind of Intelligent Digital Workers or Services…


Meanwhile, a few tech-based wars are already underway (mentioning just a couple with significant potential impact):

The search engine war

Today, we have the feeling that access to information is free just because we do not have to pay for it; in fact, our every search is "reverse paid" by the advertisers, and each query becomes part of our digital profile, used to fine-tune the digital ad engines for higher accuracy. Posing a potential threat to classical search engines, the new AI-based conversational tools will shift from this model to pay-per-use (number of queries, words generated, etc.). How will this change fuel society's digital/AI divide? Will access to AI "wisdom" be limited to only those who can afford it?
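To put pay-per-use into perspective, here is a back-of-the-envelope calculation in Python. The per-token rate is purely hypothetical, not any vendor's actual price list; the point is that, unlike ad-funded search, every question now has a visible marginal cost:

```python
# Hypothetical pay-per-use pricing for a conversational AI.
PRICE_PER_1K_TOKENS = 0.002  # illustrative rate in USD, not a real price

def monthly_cost(queries_per_day: int, tokens_per_query: int, days: int = 30) -> float:
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# A heavy user: 50 queries a day, ~500 tokens per answer.
print(f"${monthly_cost(50, 500):.2f} per month")  # -> $1.50 per month
```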

Video/audio collaboration platforms (Zoom, Meet, Teams, etc.)

An AI assistant that transforms a conversation (regardless of the language) into actionable information is pretty close. This will be the start of the AI "wars," in which these platforms (which changed how we do business over the last three years) must secure their partnerships with the right technology providers and data suppliers. Exciting times are ahead of us.
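The building blocks for such an assistant already exist in the open. As a sketch of what the pipeline could look like, the snippet below chains OpenAI's open-source whisper model (transcription, many languages) with a Hugging Face summarization pipeline; the audio file name is hypothetical, and a real product would add chunking for long meetings, speaker identification, and action-item extraction:

```python
# Sketch of a meeting assistant: speech -> transcript -> summary.
# Assumes the open-source `openai-whisper` and `transformers` packages;
# `weekly_meeting.wav` is a hypothetical recording.
import whisper
from transformers import pipeline

# 1. Transcribe the meeting; Whisper handles dozens of languages.
stt = whisper.load_model("base")
transcript = stt.transcribe("weekly_meeting.wav")["text"]

# 2. Condense the transcript into something actionable.
# (A real product would chunk long transcripts to fit the model's limit.)
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(transcript, max_length=120, min_length=30)[0]["summary_text"]

print("Meeting summary:", summary)
```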

 

Last but not least, I will raise some ethical issues worth thinking about:

- Who will be the author of an article written by AI (or of a Master's or PhD thesis)? Can we have a "signature" for each output? (A toy sketch of one such signature scheme follows after this list.) If we determine that AI generated a piece of text in a thesis, will that be considered plagiarism?

- How can we know which sources were used for a specific output? How can we be sure that the answer given by AI is correct? What happens with sensitive issues, those over which people are dying today in wars and conflicts, where there are two "correct" versions and no worldwide-agreed unique "correct" answer?

- How can we "trust" a computed opinion (what are the "arguments" sources, mechanics)? What if we do not agree with one/few of the sources used by AI to compute the answer? What is the "level of trust" we can assign to an AI-generated answer? Can it be the same regardless of domain/subject?

- How easily can AI be influenced (by intentionally adapting the training data, or in other ways)? How will we be informed when the AI's "opinion" on a specific matter changes, and why?

- How should we be informed when AI generates a piece of digital content? Will content be more or less valuable because AI generated it?
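On the "signature" question above: one family of proposals is statistical watermarking, where the generator quietly favors a pseudo-random subset of words and a verifier who knows the secret seed can test for that bias. A deliberately simplified toy version in Python (not any deployed scheme):

```python
import hashlib

# Toy statistical watermark: the generator secretly favors "green"
# words chosen by a seeded hash; a verifier with the seed checks
# whether a text is suspiciously green. Purely illustrative, not a
# deployed scheme.

def is_green(word: str, seed: str = "model-secret") -> bool:
    digest = hashlib.sha256((seed + word.lower()).encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words are "green"

def green_fraction(text: str) -> float:
    words = text.split()
    return sum(is_green(w) for w in words) / max(len(words), 1)

# Ordinary human text should score near 0.5; text from a watermarking
# generator would score well above it.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```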

 

Therefore, the conclusion: AI has come and is here to stay; we must understand how to live with this kind of power so close to us and shape the safest way of doing so. Over the next five years, AI will massively influence every aspect of our lives. Now, as humans, we must adapt… once again.

 

 

Crafting memories of the future,


Dan

PS: Waiting for Google's answer to ChatGPT (my bets are still on them)
