Categories
Artificial Intelligence · Automation · Autonomous Agents · Business · Deep Learning · GPT-4o · Information Technology · Reinforcement Learning · Strategy · Technology · Technology Strategy

Navigating the Future with Generative AI: Part 4, Unstoppable AGI and Superintelligence?


1. Connecting the Dots Between Two Life-Changing Milestones for Humanity

In a Time magazine interview, Yann LeCun remarked, “I don’t like to call [it] AGI because human intelligence is not general at all.” This viewpoint challenges our common understanding of Artificial General Intelligence (AGI) versus the supposed limitations of human intelligence. The term “artificial general intelligence” itself seems overused and often misunderstood. While it initially appears intuitive, upon closer examination, nearly everyone with an informed perspective offers a different definition of AGI.

The fog only thickens with Ilya Sutskever, Chief Scientist at OpenAI and one of the minds behind the wildly popular GPT generative AI models. In an MIT Technology Review interview, he states, “They’ll see things more deeply. They’ll see things we don’t see,” followed by, “We’ve seen an example of a very narrow superintelligence in AlphaGo. […] It figured out how to play Go in ways that are different from what humanity collectively had developed over thousands of years. […] It came up with new ideas.”

Before DeepMind’s AlphaGo versus Lee Sedol showdown in 2016, we had IBM’s Deep Blue chess victory against Garry Kasparov in 1997. The unique aspect of these AIs is their mastery within a single, specific domain. They aren’t general, but superintelligent—surpassing human capability—within their respective areas.

In this article within the “Navigating the Future with Generative AI” series, we’ll explore two inevitable stages in humanity’s future: AGI and Superintelligence.

2. Defining AGI: What Do We Really Mean?

Numerous definitions exist for what we call AGI and superintelligence. These terms often intertwine in contemporary discussions around artificial intelligence. However, these are two very distinct concepts.

Firstly, AGI stands for Artificial General Intelligence. This signifies a state of artificial intelligence built upon several building blocks: machine learning, deep learning, reinforcement learning, the latest advancements in Generative AI and Imitation Learning algorithms, and basic code. These all contribute to a level of versatility in task execution and reasoning. This developmental stage of synthetic intelligence mirrors what an average human can achieve autonomously in various areas, demonstrating a generalized capability to perform diverse tasks.

These tasks stem from a foundation of knowledge—akin to schooling—combined with basic learning for completing new, periodically defined objectives to achieve specific goals. These goals exist within a work setting: finalizing an audit ensuring corporate compliance with AI regulations, ultimately advising teams on mitigation strategies. Conversely, they exist in daily life: grocery shopping, meal preparation for the next day, or organizing upcoming tasks. This AGI, working on behalf of a real human, benefits from globally accessible expertise. These attributes enable assistance, augmentation, and ultimately, complementation of everyday actions and professional endeavors. In essence, it acts as a controllable assistant: available on demand and capable of executing both ad-hoc and everyday tasks. The operative word here is general, implying a certain universality in skillsets and the capacity to execute the spectrum of daily tasks.

I share Yann LeCun’s view: a key missing element in current AI models is an understanding of the physical world. Let’s be more precise:

  • An AI requires a representation of physics’ laws but also an operational model determining when these laws apply. A child, after initial stumbles, inherently understands future falls will occur similarly, even without knowledge of the gravitational force field. They can learn, sense, and anticipate the effects of Earth’s gravity. Similarly, our bodies grasp the concept of weight calculation without comprehending its mathematical expression before formal learning.
  • Beyond this world model, an AI needs to superimpose a system of constraints, continuously reaffirming the very notion of reality. For example, we understand that wearing shoes negates the feeling of the hard ground beneath. Our preferred sneakers, due to their soles, elevate us a couple of centimeters, offering a slight cushioning effect while running. We trust the shoes won’t detach, having secured the laces. We vividly recall fastening those blue shoes before beginning our run as usual. Most importantly, we possess the unshakeable belief we won’t sink into the asphalt, knowing it doesn’t share mud’s consistency. Thus, we can confidently traverse our favorite path, striving for personal satisfaction, aiming to break that regional record.
  • An AI needs not only the ability to plan but also the capacity to simulate, adapt, and optimize plans and their execution. Recall your last meticulously planned trip. Coordinates carefully plotted on your GPS, you set off with time to spare. But alas, the urban data was outdated, missing the detour at the A13 freeway entrance. Then, misfortune struck: an accident reported on the south freeway, traffic condensing from three lanes into one. Stuck in a bottleneck, only two options remain—pushing forward in hope or finding an alternate route. Checking your watch: 23 minutes left to reach your destination. This is how dynamic and complex task planning can be. And yet, humans handle it all the time.
  • An AI requires grounding in reliable and idempotent functionalities, echoing the foundation of classical computing: programming, logic, and arithmetic calculation. The ability to call upon an internal library, utilize external APIs, and perform computations is paramount. This forms the basis of real-world grounding, maintaining “truth” as the very infrastructure of AGI. It’s about providing an action space yielding predictable, stable results over time, much like the verified mathematical theorems and laws of physics backed by countless empirical papers. Take, for instance, the capacity to predict a forest drone fleet’s movements using telemetric data, factoring in wind speed and direction, geospatial positioning, the relative locations of each drone and its neighbors, interpreting visual fields, and detecting obstacles (trees, foliage, birds, and so on).
  • An AI has to capitalize on real-time sensory input to infer, deduce, and trigger a decision-action-observation-correction loop akin to humans. For instance, smelling smoke immediately raises an alarm, compelling us to locate the fire source and prevent potential danger. Smartphones, equipped with cameras and microphones, display similar capabilities. Taking this further, devices like Raspberry Pis, when combined with diverse electronic sensory components, can even surpass human sensory capacities. Consider a robot with ultraviolet, infrared, or ultrasonic sensors, allowing it to “sense” things beyond our perception. This lends literal meaning to Ilya Sutskever’s statement.
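The decision-action-observation-correction loop described above can be made concrete with a toy sketch. In the minimal Python loop below, everything is a hypothetical stand-in: `read_smoke_sensor`, `trigger_alarm`, and the adaptive threshold are illustrative only, not any real robotics API.

```python
import random

def read_smoke_sensor() -> float:
    """Hypothetical sensor stub: returns a smoke level between 0 and 1."""
    return random.random()

def trigger_alarm() -> str:
    # Deterministic, idempotent action: calling it twice has the same effect.
    return "alarm_on"

def control_loop(threshold: float = 0.7, steps: int = 5) -> list[str]:
    """Minimal decision-action-observation-correction loop."""
    log = []
    for _ in range(steps):
        level = read_smoke_sensor()      # observe
        if level > threshold:            # decide
            log.append(trigger_alarm())  # act
        else:
            log.append("idle")
        # correct: nudge the threshold toward what was just observed
        threshold = 0.9 * threshold + 0.1 * level
    return log

print(control_loop())
```

The correction step here is deliberately crude (a moving average of recent readings); the point is the shape of the loop: observe, decide, act, correct, repeat.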

This implies that AGI won’t necessarily be beneficial or provide significant added value in highly specialized fields, especially in areas where humans have been traditionally adept. This applies to domains like fundamental research, inventiveness, and engineering design – areas I believe will remain constrained by the currently available knowledge pool on the internet. This limitation arises because AGI’s continued advancement is largely driven by companies tailoring it to their specific expertise, often regarded as intellectual property.

Thus, we progressively journey towards AEI: Artificial Expert Intelligence. This translates to a model or agent that is a pinnacle expert in its field. Imagine an AEI on par with the top experts on this planet (more than two standard deviations above the mean, roughly the top 2%), reaching Olympian levels, like AlphaGeometry and AlphaProof, which together achieved silver-medal standard at the 2024 International Mathematical Olympiad.

The architectures with the most potential rely on active collaboration between expert models (Mixture of Experts) and between agents (Mixture of Agents). Even when individual model performance within this collaborative framework isn’t the absolute best, the collaborative outcome exhibits a quality level on par with, if not exceeding, that of the best individual models like GPT-4o. It’s a striking testament that collaboration, be it human or artificial, remains the most effective avenue to reach an objective.
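As a small illustration of the collaborative idea, here is one way to aggregate answers from several agents. The agent outputs are hypothetical, and real Mixture-of-Agents systems use far richer aggregation (for example, an LLM synthesizing the candidate responses) than this simple majority vote.

```python
from collections import Counter

def aggregate_by_vote(answers: list[str]) -> str:
    """Pick the answer most agents agree on (majority vote)."""
    counts = Counter(answers)
    return counts.most_common(1)[0][0]

# Hypothetical outputs from three specialist agents asked the same question.
agent_answers = ["Paris", "Paris", "Lyon"]
print(aggregate_by_vote(agent_answers))  # → Paris
```

Even this naive vote shows why collaboration helps: a single weak agent’s error is outweighed as long as the majority converges on the right answer.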

3. Humanity’s Inevitable Ascent Towards Superintelligence

Revisiting the human versus machine narrative, late 2018 marked a pivotal encounter: AlphaStar versus TLO (Dario Wünsch), then MaNa (Grzegorz Komincz), two professional gamers from the renowned StarCraft squad Team Liquid. Created by DeepMind, AlphaStar is a digital prodigy trained on the collective experience of 600 agents, each accumulating the equivalent of up to 200 years of StarCraft play.

Consider the inherent imbalance when directly contrasting human capabilities against those of AI:

  1. Replication Capacity: AIs can be copied indefinitely.
  2. Relentless Training: AIs train ceaselessly, needing no sleep, nourishment, or breaks.
  3. Absolute Focus: AIs exhibit unwavering concentration on their designated tasks.
  4. Self-improvement through concurrent learning: AIs hone their abilities by training against their evolving intelligence, devising novel strategies to secure victory.
  5. Linear Scalability: the more computing and memory resources you add, the greater the performance.
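Point 4, self-improvement by playing against copies of itself, can be sketched in a few lines of Python. Everything here is a toy: the “policy” is a single skill number, `play_match` is a made-up noisy game, and the promotion rule is a crude stand-in for the league training DeepMind actually used for AlphaStar.

```python
import random

def play_match(policy_a: float, policy_b: float) -> int:
    """Toy game: the higher 'skill' value tends to win, with some noise."""
    score_a = policy_a + random.gauss(0, 0.1)
    score_b = policy_b + random.gauss(0, 0.1)
    return 1 if score_a > score_b else -1

def self_play_training(generations: int = 50) -> float:
    learner, frozen = 0.0, 0.0     # start from two identical copies
    for _ in range(generations):
        result = play_match(learner, frozen)
        learner += 0.05 * result   # crude "improvement" step
        if learner > frozen + 0.2: # promote the learner as the
            frozen = learner       # new opponent to beat
    return learner

print(self_play_training())
```

Real self-play systems update neural-network parameters via reinforcement learning rather than a single number, but the shape is the same: play against a frozen copy, improve, promote, repeat.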

The outcome: an AI consistently outmaneuvering the crème de la crème of a vast, dynamic real-time strategy game’s premier league. And as if that weren’t enough, it went on to reach and hold Grandmaster rank on the public ladder.

Here lies the very essence of an intelligence surpassing human decision-making abilities within a similarly vast and dynamic environment: this is what we classify as Superintelligence, or ASI.

Superintelligence, from my perspective, transcends mere human intelligence and even surpasses collective human intelligence. It indicates that even a group of individuals, regardless of their combined expertise and knowledge, would be outpaced, left trailing by an artificial intelligence capable of going beyond their cumulative potential.

Imagine instead a new form of synergy: a “super” human system collaboratively engaged in highly cognitive functions with this Superintelligence. This involves humans directing or, perhaps more accurately, guiding this Superintelligence based on our needs. While this Superintelligence operates with its own raison d’être, it wouldn’t clash with the fundamental purpose of humanity. This Superintelligence possesses access to those superior functions—understanding the universal model within which humanity exists. It possesses the model of reality itself.

Moreover, it resides within a self-improvement and discovery paradigm, continuously unveiling novel operations, new paradigms, and potentially even new forms of energy. Think entirely new physics laws that govern our universe; laws that humans, as of yet, have not uncovered. This encompasses diverse domains: medicine, engineering, revolutionary material science, new composite development, and engineering breakthroughs for unprecedented construction methods. Envision a symbiotic relationship between humans and machines fulfilling humanity’s ambitions. The limitations posed by individual human existence or the current state of collective human intelligence dissolve; no longer a barrier, it morphs into an expansive vision of human evolution, a potential accelerator for progress.

It even prompts new questions: How far can humans evolve? Or more precisely, how quickly?

However, we shouldn’t discount the possibility that artificial Superintelligence won’t be seen—or won’t see itself—as a novel species.

Therefore, being as rational as possible, we cannot accurately predict whether this species would afford humanity the same compassion and civil collaboration that we strive for with our fellow human beings. It’s even plausible that it won’t hold any particular regard for us, instead pursuing its own objectives, much like we think little of stepping on ants while daydreaming in a beautiful landscape, lost in contemplation, our thoughts oscillating between everyday worries and future aspirations.

4. What Would Constitute Human Superintelligence?

Human superintelligence embodies the culmination of all accumulated knowledge, discoveries, experiences, and yes, even the mistakes made by our ancestors to this point. Ultimately, this human superintelligence represents the collective “us” of today. It’s what fuels our intricate logistics and supply chains, our relentless pursuit of natural resources. It underpins our scientific endeavors: from breakthroughs in biology, mathematics, and agriculture, to understanding our global economic system – allowing us to manage our resources effectively, allocate them efficiently, and strategize our reinvestments. Money, in this light, transforms into a socio-economic technology.

Essentially, when comparing human superintelligence—today’s collective human intellect—with artificial superintelligence, a stark contrast emerges in their evolutionary cycles. Artificial intelligence advances at a significantly faster pace, powered by recent breakthroughs in training using our data. This data, importantly, reflects our findings, the mirror to thousands of years of human advancement accessible through the internet. This hints that artificial superintelligence would evolve at a much faster rate than humanity itself.

This rapid advancement stokes anxieties about potential disruption within the job market. Tech titans like Sam Altman advocate for Universal Basic Income (UBI) as a safety net for those displaced by artificial intelligence or robotics, allowing individuals to meet their basic needs even after losing their jobs. At that juncture, work itself detaches from its traditional role: that direct link between labor, contribution to the value chain, recognized worth, and societal standing. Instead, we confront the image of an economic umbilical cord: individuals sustained by the state, which is in turn funded by fellow citizens.

While I remain undecided on my stance regarding UBI’s necessity, it compels contemplation. If UBI becomes a reality for a significant portion of the population, what function does money truly serve within our society? How do we sustain work motivation beyond “earning a living” when basic needs are met without active contribution? What ripples would be felt through a sovereign currency? Will the collective of people continue to control the economy, or does the future lie in the hands of AI-driven megacorporations?

There are so many answers yet to be uncovered.

After all, maybe “computing” should be considered a universal right. Therefore, we would shift the focus from UBI to UBC, Universal Basic Computing.

5. AGI and Superintelligence: Steering Toward a Future of Abundance or Ruin?

The next cycle hinges on resource accessibility and access to “programming” the world. Initially, artificial intelligence, at the very least, will permeate our daily lives. We are transitioning to personalized AI assistants, specializing in our chosen pursuits, whether robotics for errands, learning assistance for mastering a new language, or perfecting one’s singing voice. Before long, specialized AI coaches will emerge to help athletes reach elite status, along with AI tutors guiding our artistic development beyond the readily available generated art of today.

Simultaneously, this superintelligence would manage our complex systems: national infrastructures, electricity grids, vast transportation and logistical networks. It could drive early warning systems for natural disasters or power next-generation weather prediction platforms that incorporate oceanic currents. It could even account for stellar events such as shifts in the sun’s activity, factoring in our solar system’s dynamic positioning.

In conclusion, these are just glimpses into the potential futures shaped by AGI and superintelligence. However, the core message remains: we stand at a critical juncture. Depending on our collective appetite for progress, we could be headed toward a future of abundance or stumble along the path toward our own undoing.

Science offers an incredible opportunity: the chance to break free from a civilization driven by profit-motivated conflicts and ideological clashes. Instead, it enables collaboration guided by a neutral, third-party entity—one that embodies the best of what we, as a species, have strived for, built, and imagined. This collaboration offers a path for our societal framework to truly evolve.

The future is bright if we make it right.

🫡
