
Humans, machines and NFXs: The fabric of collective intelligence

February 15, 2024

Radu Poclitari, Copywriter

Language is the ‘functionality’ that connects our society. When machines master it, a powerful connection between humans and technology will emerge.

Generated by DALL-E

Introduction

Our perception of intelligence echoes the voice we hear and the written signs we read.

Language is the medium through which we share ideas, express emotions, and construct the narrative of our collective existence. As we edge ever closer to the day when machines master this complex communication system, an unprecedented fusion of human cognition with technology is on the horizon. This convergence promises to reshape our societal fabric, stimulating potent network effects (NFX) that could radically alter how we perceive and interact with the world around us.

Humanity has feared new technologies before, with the steam engine, electricity, and internal combustion, so why should Generative AI (GAI) be different? Like any technological advancement, Large Language Models (LLMs), built on generative pre-trained transformers (GPT), cascade waves of complementary innovations. They may not yet qualify to catalyse a new industrial revolution, but they are impressive enough to stir sentiments of both excitement and trepidation.

Painted by DALL-E

Symbiotic Existence of Humans and Machines

What can machines do better than humans, and what can humans do better than machines?

Humans have been using machines to process large amounts of data and perform computations since the advent of modern computing in the mid-20th century. The Manhattan Project in the 1940s led to the development of the atomic bomb; the complex calculations required for this project were performed using some of the earliest digital computing equipment, including IBM punch-card machines. ENIAC (Electronic Numerical Integrator and Computer), one of the earliest general-purpose computers, was built to calculate artillery firing tables for the U.S. Army during World War II. In the 1960s and 70s, the Apollo moon missions relied heavily on computers for navigation and system management. The Apollo Guidance Computer, though less powerful than a modern smartphone, was critical in guiding the spacecraft to the Moon, landing, and returning safely to Earth.

In more recent times, humans have used machines to simulate complex climate patterns and even to decode the human genome. All these advancements are possible because of machines’ capacity to process, analyse, and interpret vast amounts of data, much faster than any human. And machines keep going further: Google’s quantum computer, Sycamore, performed a computation in 200 seconds that would have taken the world’s most powerful supercomputer 10,000 years to complete, achieving so-called “quantum supremacy”. Meanwhile, over the last few years, LLMs like GPT have become powerful tools for humans to interact with machines.

But the ground truths remain. Any machine needs a large amount of data to identify patterns, make predictions, and produce insights. Machines still require massive data and hard-to-imagine computational power to generalise from context, and they fall short of human abstract thinking, not to mention empathy and emotional intelligence. Yet we are impressed enough to fear the technological breakthrough of LLMs, from Google’s Transformer architecture in 2017 to OpenAI’s ChatGPT in 2022.

Collective Intelligence Powered by NFX

We can enhance the symbiosis to upgrade our Collective Intelligence.

We can leverage a machine’s AI through the fabric of NFX: new users join because of the utility they receive from an AI model, and with each new user, the AI model improves and reciprocates more value to existing users.

AI and NFX are intricately interwoven. As more users engage with an AI model — be it a language model, recommendation engine, or a complex system for forecasting — each interaction serves as a data point that can be used to fine-tune and enhance the AI’s capabilities. In turn, these improvements can generate additional value for existing users, which may attract even more users.

This dynamic creates a virtuous cycle: As the user base grows, the AI model becomes more sophisticated and effective, and as the model improves, it becomes more attractive to potential users. This augments the AI's utility and intensifies the network effects, making the system progressively more robust and beneficial.
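To make the loop concrete, here is a minimal sketch in Python that simulates the cycle: users contribute interaction data, the data lifts model quality, and higher quality attracts more users. All constants (BASE_GROWTH, ATTRACTION, LEARNING_RATE, DATA_PER_USER) are illustrative assumptions, not empirical values.

```python
# Toy simulation of the AI/NFX virtuous cycle.
# All constants are illustrative assumptions, not measured values.

BASE_GROWTH = 0.02      # organic user growth per step, independent of the model
ATTRACTION = 0.10       # extra growth per unit of model quality
LEARNING_RATE = 0.5     # how strongly new data improves quality
DATA_PER_USER = 1.0     # interaction data contributed by each user per step

users = 1_000.0
quality = 0.1           # model quality on an arbitrary 0..1 scale
data = 0.0

for step in range(1, 11):
    # Each user contributes interaction data this step.
    data += users * DATA_PER_USER
    # Quality improves with data, with diminishing returns (stays below 1.0).
    quality = 1.0 - (1.0 - quality) / (1.0 + LEARNING_RATE * data / 1e6)
    # Better quality attracts more users on top of organic growth.
    users *= 1.0 + BASE_GROWTH + ATTRACTION * quality
    print(f"step {step:2d}: users={users:10.0f}  quality={quality:.3f}")
```

Running it shows the compounding at work: each step, more users mean more data, higher quality, and therefore faster user growth in the next step.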

The genuinely transformative potential of AI will likely emerge from this symbiosis between ever-improving AI models and their growing user bases. In harnessing these network effects, we can unlock unprecedented opportunities for innovation and problem-solving, enhancing our ability to understand and navigate an increasingly complex world — enhancing our Collective Intelligence.

Building the Virtuous Cycle of Intelligence

We, humans, have our intuition!

Embarking on this collective journey of human-machine symbiosis, three propositions are critical pathways for integrating AI models into our societal fabric. Each proposition offers a unique perspective on how we can leverage the strengths of both humans and machines to maximise the potential of our collective intelligence:

Proposition 1: input-Human Intuition to output-Artificial Intelligence

It is impossible for machines to solve intuitive tasks on their own. But if those tasks are repetitive, they become predictable once a significant amount of data has been collected. Mechanical Turk and Timeworx.io, by leveraging the power of crowdsourcing, provide labelled data that teaches machines how to solve repetitive tasks. Taking it further, Index.dev trained its copilot to assist software engineers in writing unit tests from functionality code. The idea stays the same; the application can be open-ended. The more labelled data humans provide, the better the AI models respond to their users.
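As a rough illustration of this proposition, the sketch below uses scikit-learn on synthetic data (standing in for crowdsourced labels; the dataset sizes and model choice are my own assumptions) to show how the same classifier improves as more labelled examples arrive.

```python
# Sketch: more human-labelled data -> better model responses.
# Synthetic data stands in for crowdsourced labels (e.g. from Mechanical Turk);
# the dataset sizes and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)

for n_labels in (100, 500, 2000, len(X_pool)):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_pool[:n_labels], y_pool[:n_labels])  # "crowdsourced" labels so far
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{n_labels:5d} labelled examples -> test accuracy {acc:.3f}")
```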

Proposition 2: input-Artificial Intelligence to output-Collective Intelligence

The better the AI models respond to their users, the more humans will use those AI models. The NFX of Netflix and Spotify build on machine learning algorithms that offer personalised recommendations based on user behaviour. The AI model learns and improves with each interaction, tailoring its responses to individual users. Consequently, the system becomes more valuable, attracting more users and contributing to greater collective intelligence. Similarly, Waze, the navigation app, leverages user-reported data and machine learning to provide real-time traffic updates, creating a collective intelligence of current road conditions.
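A minimal sketch of that feedback loop, assuming a toy item-to-item co-occurrence recommender (nothing like Netflix’s or Spotify’s actual systems, and with made-up items): every new session updates shared counts, so later users get recommendations informed by everyone who came before.

```python
# Toy item-to-item co-occurrence recommender: each interaction improves
# recommendations for everyone. Item names and sessions are made up.
from collections import defaultdict
from itertools import combinations

co_counts = defaultdict(lambda: defaultdict(int))  # co_counts[a][b] = times a and b liked together

def record_session(liked_items):
    """Update co-occurrence counts from one user's liked items."""
    for a, b in combinations(set(liked_items), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item, k=3):
    """Recommend the k items most often liked alongside `item`."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Each new user's session adds data that sharpens future recommendations.
record_session(["jazz", "blues", "soul"])
record_session(["jazz", "blues", "funk"])
record_session(["jazz", "soul", "funk"])
print(recommend("jazz"))  # e.g. ['blues', 'soul', 'funk']
```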

Proposition 3: Factor in LLM

Frameworks like Langchain can integrate an LLM with existing software, providing a valuable interface between humans and AI models. The LLM is a translator, converting complex AI-driven data into plain language that users can easily understand and interact with. This makes the system more accessible and user-friendly, encouraging more engagement. This increased engagement leads to more data being fed into the AI model, improving the model’s performance and the system’s overall value. This creates a virtuous cycle of intelligence, where the more users engage with the system, the more valuable it becomes.
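LangChain’s APIs evolve quickly, so the sketch below keeps the pattern generic: a placeholder call_llm function stands in for whatever LLM client or chain you wire in, and the surrounding glue code only turns structured model output into a plain-language prompt, which is the “translator” role described above.

```python
# Sketch of the "LLM as interface" pattern. `call_llm` is a placeholder for a
# real client (an OpenAI SDK call, a LangChain chain, etc.); everything else is
# ordinary glue code translating structured data into plain language.
import json

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your LLM client of choice here.
    raise NotImplementedError("wire up a real LLM call")

def explain_prediction(prediction: dict, user_question: str) -> str:
    """Turn structured model output into a plain-language answer."""
    prompt = (
        "You are an assistant sitting between a forecasting model and its users.\n"
        f"Model output (JSON): {json.dumps(prediction)}\n"
        f"User's question: {user_question}\n"
        "Answer in two plain sentences, without jargon."
    )
    return call_llm(prompt)

# Hypothetical structured output from an underlying AI model (values made up):
prediction = {"region": "north", "demand_forecast": 0.82, "confidence": "high"}
# explain_prediction(prediction, "Should I increase stock this week?")
```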

The Hypothesis for Humans, Machines and NFX

Let us take an industry, deploy AI enabled through NFX, and factor in an LLM to enhance adoption.

I will take a fascinating example of AI adoption in the agriculture industry. Farming has traditionally been a non-digital industry, even though it was humanity’s first step towards an intelligent society.

Painted by DALL-E

With the rise of “precision farming” or “smart farming,” AI is beginning to transform this age-old sector. Farmers are now using AI-based systems to optimise crop yields and resource usage. For example, AI algorithms can analyse satellite and drone imagery data to detect plant health, soil conditions, pest infestations, or crop diseases. With machine learning, these systems can adapt and improve their predictive capabilities, providing increasingly accurate guidance to farmers. The more data these systems collect from different farms (users), the more effective their predictions and recommendations become.

In this scenario, the network effect is palpable. Each new farm that adopts such AI technology contributes more data to the collective pool, improving the system’s effectiveness for new and existing farms alike. Moreover, as the system improves and word spreads about its benefits, more farms are incentivised to join. The AI system thus stimulates potent network effects that amplify the collective intelligence of the entire agricultural sector.

Now factor in the LLM. The LLM’s role is twofold: it makes adopting and using AI technology easier for farmers, and it collects and processes farmers’ feedback, enhancing the AI model’s learning efficiency. As more farmers use the LLM, the value generated by their feedback and interaction grows, amplifying the network effects and enhancing the collective intelligence of the platform.

Now let us turn to the numbers. You can skip this part if you are not a fan of models and hypotheses.

We can express the value creation of precision farming using an AI-enabled network through Network Effects (NFX) that incorporates a Large Language Model (LLM) as an interface:

V = Σk αk * n_k^βk + λ(LLM) * Σf ψf * n_f^γf, for all k ∈ user groups, f ∈ farmer group

Value Creation (V): This is the base value created across the network, considering all user groups (k) — including AI providers, consumers, environmental agencies, etc. The value for each group is calculated as αk * n_k^βk, where:

  • n_k is the number of users in each group.
  • αk represents the relative importance or influence of each group on the network.
  • βk represents the effect of the network’s size within each group on its total value.

Farmer Interaction (Σf ψf * n_f^γf): This element captures the network effect within the farmer user group. The value for the farmer group is calculated as ψf * n_f^γf, where:

  • n_f represents the number of farmers in the network.
  • ψf shows the relative importance of farmers in the system.
  • γf shows the effect of the network’s size on its overall value.

LLM as an Interface (λ(LLM)): This term represents the value added by integrating an LLM as an interface for farmers. The LLM translates complex AI-driven data into plain language, making it easier for farmers to interact and understand the AI system. The strength of this effect is represented by λ.

Total Value (V_total): The total value of the network is calculated by adding the value across the platform and the additional value created by having an efficient LLM interface that facilitates farmer-AI interactions. This is represented as: V_total = Σk αk * n_k^βk + λ(LLM) * Σf ψf * n_f^γf

The term λ(LLM) * Σf ψf * n_f^γf reflects the improved efficiency and value derived from using an LLM as an interface for farmers to interact with the AI model. The impact of the LLM is twofold: (1) it eases AI technology adoption by farmers, increasing n_f, and (2) it collects feedback and enhances the learning efficiency, captured by the factor λ(LLM).

By adding the LLM and farmer interaction term to the network’s original value, we can estimate the total value of a precision farming network that incorporates an LLM as a user-friendly interface. As per our hypothesis, the more user-friendly and efficient the LLM becomes (λ(LLM) increases), and the more farmers interact with the AI system through the LLM (n_f increases), the more valuable the platform becomes.
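To see the formula move, here is a small sketch that evaluates V_total for made-up parameter values (every αk, βk, ψf, γf and λ below is an illustrative assumption, not a calibrated estimate): raising either the LLM efficiency λ or the number of farmers n_f raises the total value, as the hypothesis predicts.

```python
# Toy evaluation of V_total = Σk αk * n_k^βk + λ(LLM) * Σf ψf * n_f^γf.
# All parameter values are illustrative assumptions, not real estimates.

def total_value(user_groups, farmer_groups, lam):
    """user_groups / farmer_groups: lists of (weight, n, exponent) tuples."""
    base = sum(alpha * n ** beta for alpha, n, beta in user_groups)
    farmer = sum(psi * n ** gamma for psi, n, gamma in farmer_groups)
    return base + lam * farmer

user_groups = [
    (1.0, 50, 0.8),      # AI providers
    (0.5, 10_000, 0.6),  # consumers
    (0.2, 20, 0.9),      # environmental agencies
]

for lam, n_f in [(0.5, 1_000), (1.5, 1_000), (1.5, 5_000)]:
    v = total_value(user_groups, [(2.0, n_f, 0.7)], lam)
    print(f"lambda={lam:.1f}, n_f={n_f:5d} -> V_total ~ {v:,.0f}")
```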

Limitations

The world is more complex than my model. I have omitted the computational power and hardware requirements for AI long-term memory. Many fun innovations are happening in vector databases and in parallel and distributed computing; check out Pinecone, for example.

Conclusions from an LLM

The LLM provided a conclusion that sounds good; I will leave it below for your own thoughts and conclusions.

This model attempts to quantify the value created by integrating an LLM into a precision farming network. It considers the network effects within different user groups, the additional value created by the LLM’s role as a user-friendly interface, and the impact of farmer interactions on the network’s value. By adjusting the parameters αk, βk, ψf, γf, and λ(LLM) based on real-world data, this model can provide valuable insights into how to maximise the network’s value and effectively promote AI technology adoption in the farming sector.

Contact us to learn how the Index.dev tech recruitment platform can help you accelerate your hiring, reduce bias, and put the right AI talent in the right role.