The most impressive part is the technique employed by the Tesla team for accelerating the robot’s dexterity: the robot physically learns from human actions.
Now, let’s step back and analyse Tesla’s master plan here:
(Putting on my business tech strategy goggles)
1. Tesla builds electric cars augmented with software programmability.
2. Tesla provides an electric grid as a service.
3. Tesla builds gigafactories that maximize the automation of car manufacturing. Almost every single part of the pipeline is robotized and optimized for speed of production.
4. Tesla builds Powerwalls (by providing energy storage, it also creates a decentralized power station network).
5. Tesla brings autonomous driving (FSD) to Tesla cars. Essentially, cars are now transportation robots governed by the most advanced AI fleet management system.
6. Tesla launches Optimus, which aims to replace the human workforce in factories and warehouses.
7. X.ai, X's so-called "child" AI company, which has recently raised $6 billion, brings the Grok AI model trained on X/Twitter data. While you may say X data is not the best, X has an algorithm balanced with human judgment (Community Notes), AND the platform gathers the largest set of news publishers. Basically, it automates curation and accuracy.
8. A version of the Grok AI model will likely power Optimus's human-to-robot conversational interface.
9. Tesla cars will be turned into robotaxis, disrupting not only taxi companies but also Uber (the Uber/Tesla partnership may not be a coincidence), and eating into the market share of Lyft and BlaBlaCar.
10. Tesla will enter the general services and retail industries to offer multi-purpose robots: cleaning services for business offices and grocery stores, filling the workforce shortage in the catering (hotel-restaurant-bar…) industry, etc.
Tesla is not the only one moving into the "Robot Fleet Management" business. Chinese companies like BYD (EV) offer strong competition, and several robotics companies (like Boston Dynamics and Agility Robotics) are racing for pole position.
In this installment of the Generative AI series, we delve into the concept of "Prompt as new Source Code". The ongoing generative AI revolution allows one to amplify one's productivity by up to 30 times, depending on the nature of the tasks at hand. This transformation allows me to turn my designs into code, almost eliminating the need for manual coding. The time once spent typing, correcting typos, optimizing algorithms, searching Stack Overflow to decipher perplexing errors, structuring the code hierarchy, and working around class deprecations is now compressed into a single activity. This minimization of effort gives me recurrent morale boosts, as I achieve significantly more in less time and more frequently; these are micro-productivity periods. To put it in perspective, I can simply think about a problem during the day and have a series of conversations with my assistant while I commute. My assistant is always available. In addition, I gain focus time.
I don't need to wait for a team to prove my concept. Furthermore, in my founder role, I have fewer occasions to write extensive requirement documents than I would when outsourcing development to parallel teams. I just need to specify the guidelines once, and the AI works out the rest for me. Leveraging the AMASE methodology to fine-tune my AI assistant epitomizes the return on investment of my expertise. Similarly, your expertise, paired with AI, becomes a powerful asset, exponentially amplifying the return on your efforts.
Today, information technology engineering is going through a quantum leap. We will explore how structured coding is being replaced by natural language. We refer to this as prompting, which essentially denotes "well-architected and elaborated thoughts". Prompting, so to speak, is the crystallization of thought, aiming to minimize the loss of information and to cast out misinterpretation. In this vein, "What You Read is What You Thought" becomes a tangible reality.
The Unconventional Coding Experience with AI
Although the development cycle typically commences with the design phase, this aspect will not be discussed in this article. Our focus will be directed towards the coding phase instead.
The development cycle with AI is slightly different; it resembles pair programming. Programming typically involves cycles of coding and reviews, where the code is gradually improved with each iteration. An artificial intelligence model becomes your coding partner, able to code 95% of your ideas.
In essence, AI acts as a coach and a typewriter, an expert programmer with production-level knowledge of engineering. The question may arise: “Could the AI replace me completely? What is my added value as a human?”
Forming NanoTeams: Your AI Squad Awaits
My experience leads me to conclude that working with AI is akin to integrating a new teammate. This teammate will follow your instructions exactly, so clarity is essential. If you want feedback or improvements in areas like internal security or design patterns, you must communicate these desires and potentially teach the AI how to execute them.
You will need to learn to command your digital teammate.
Each AI model operates in a distinct yet somewhat similar fashion when it comes to command execution. For instance, leveraging ChatGPT to its fullest potential can be achieved through impersonations, custom instructions, and plugins. On the other hand, Midjourney excels when engaged with a moderate level of descriptiveness and a good understanding of parameter tweaking.
A New Abstraction Layer Above Coding
What exactly is coding? In essence, coding is the act of instructing a machine to perform tasks exactly as directed. We have built programming languages to be idempotent, repeatable, reliable, and predictable. Ultimately, code is translated into machine language, yet the languages we write in have moved ever closer to human language. This is evident in modern languages like TypeScript, C#, Python, and Kotlin, where instructions or control statements are written in plain English, such as "for each", "while", "switch", etc.
With the advent of AI, we can now streamline the stage of translating our requirements into an algorithm, and then into programming code, including structuring what will ultimately be compiled to run the program. Traditionally, we organize files to ensure the code is maintainable by a human. But what if humans no longer needed to interact with the code? What if, with each iteration, AI is the one updating the code? Do we still need to organize the code in an opinionated manner, akin to a book’s table of contents, for maintainability? Or do we merely need the code to be correctly documented for human understanding, enabling engineers to update it without causing any disruptions? Indeed, AI can also fortify the code and certify it using test cases automatically, ensuring the code does not contain regressions and complies with the requirements and expected outcomes.
To expand on this, AI can generate tests, whether they be unit tests, functional tests, or performance tests. It can also create documentation, system design assets, and infrastructure design. Given that it's all driven by a large language model, we can code the infrastructure and generate code for "Infrastructure as Code", extending to automated deployment in CI/CD pipelines.
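To make this concrete, here is a minimal sketch of test generation, assuming the OpenAI Python client; the model name, file names, and prompt wording are illustrative, not a prescribed setup:

```python
# A hedged sketch: asking an LLM to draft pytest unit tests for an
# existing module. "pricing.py" and "test_pricing.py" are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

source = open("pricing.py").read()  # the module under test

prompt = (
    "You are a senior Python engineer. Write pytest unit tests for the "
    "following module. Cover nominal cases, edge cases, and error paths. "
    "Return only the test code.\n\n" + source
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Persist the generated tests so they run in the regular CI pipeline.
with open("test_pricing.py", "w") as f:
    f.write(response.choices[0].message.content)
```

The generated tests still deserve a human review pass before they become the project's safety net.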
To conclude this section, and referring to my first article in the "Generative AI series", it is apparent that natural language is now the new programming language, expressed as prompts. The Large Language Model-based generative AI model is the essential piece of software for elaborating, structuring, and completing the input text into code that can be understood by both human engineers and digital engineers.
The New Coding Paradigm
This fresh paradigm shift heralds the advent of a new form of coding—augmented coding. Augmented coding diminishes the necessity of writing code using third and fourth-generation languages, effectively condensing two activities into one.
In this scenario, the engineer seldom intervenes in the code. There may be instances where the AI generates obsolete or buggy code, but these can often be rectified promptly in the subsequent iteration.
We currently operate in an explicit coding environment, where the input code yields the visible result on the output—this is known as Input/Output coding.
The profound shift in mindset now is that the output defines the input code. To elucidate, we first articulate how the system should behave, its structure, and the rules it must adhere to. Essentially, AI has catapulted engineers across an innovation chasm, ushering in the era of Output/Input coding.
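To illustrate the inversion, here is a hedged sketch: the expected behaviour is written first, as an executable specification, and the implementation becomes a regenerable by-product. The function name and assertions are invented for the example:

```python
# Output/Input coding, sketched: the "output" (expected behaviour) is the
# stored artifact; the model derives the "input" (the implementation).
SPEC = """
Write a Python function slugify(title: str) -> str such that:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Many   Spaces ") == "many-spaces"
assert slugify("") == ""
Constraints: standard library only, lowercase ASCII output.
"""
# SPEC is what you version and refine; the generated function can be
# discarded and regenerated at every iteration.
```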
Embracing Augmented Coding: A Shift in Engineering Dynamics
The advent of augmented coding ushers in a new workflow, enhancing the synergy between engineers and AI. Below are the core aspects of this transformation:
Idea Expression: The augmented engineer expresses the ideas and goals to be achieved.
Requirement Listing: The engineer lists the requirements.
Requirement Clarification: Clarify the requirements with AI.
Architecture Decisions: Express the architecture decisions (including technology to use, security compliance, information risk compliance, regulatory technical standards compliance, etc.) independently, and utilize AI to select new ones.
Coding Guidelines: Declare the coding guidelines independently and sometimes consult the AI.
Business Logic: Define the business logic in the form of algorithms to code.
Code Validation: Run the code to validate it works as intended. This becomes the first order of acceptance tests.
Code Review: Assess the code to ensure it complies with the engineering guidelines adopted by the company.
Synthetic Data Generation: Use AI to generate data sets that are functionally relevant for a given scenario and a persona.
Mockup-API Generation: Employ AI to generate API stubs that are nearly functionally complete before their full implementation.
Test Scenario Listing: Design the different test scenarios, then consult stakeholders to gather feedback and review their completeness.
Test Case Generation: Have the AI generate the code of the test cases. The same technique applies to security tests and performance tests (see the sketch after this list).
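Below is a hedged sketch tying several of these steps together: requirements, architecture decisions, and coding guidelines are declared once, then prefixed to every generation request. All project details are invented for the example:

```python
# Reusable prompt preamble for the augmented-coding workflow sketched above.
REQUIREMENTS = """
- The service exposes POST /orders accepting a JSON payload.
- Orders above 10,000 EUR require a second approval.
"""

ARCHITECTURE = """
- Python 3.11, FastAPI, PostgreSQL via SQLAlchemy.
- No personal data in application logs.
"""

GUIDELINES = """
- Type hints everywhere; functions under 40 lines; pytest for tests.
"""

def build_prompt(task: str) -> str:
    """Assemble the fixed preamble plus the task of the day."""
    return "\n".join([
        "You are the assistant engineer on this project.",
        "Requirements:", REQUIREMENTS,
        "Architecture decisions:", ARCHITECTURE,
        "Coding guidelines:", GUIDELINES,
        "Task:", task,
    ])

print(build_prompt("Generate pytest test cases for the approval rule."))
```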
AI can even operate in an autonomous mode to perform a part of the acceptance tests, but human intervention is mandatory at certain junctures. It’s crucial to bridge results with expectations.
Hence, when uncertainties arise, it is prudent to increase the level of testing, taking accountability through acceptance tests to ensure the delivered work meets the expected level of compliance with the requirements.
Non-Negotiable Expectations
Critical business rules and non-functional requirements, such as security, availability, accessibility, and compliance by design, are often treated as second-class features. Now that AI-assisted coding facilitates the choice, these features can simply be activated by including them in your prompts, freeing up more time to rigorously test their efficiency.
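As an illustration, here is a hedged sketch of such a prompt preamble; the clauses are examples of what "activation by prompt" can look like, not an exhaustive checklist:

```python
# Non-functional requirements declared once and prepended to every
# code-generation prompt. Clauses are illustrative.
NFR_PREAMBLE = """
In all generated code:
- Validate and sanitize every external input (security).
- Expose a health-check endpoint and retry transient failures (availability).
- Follow WCAG 2.1 AA in any generated UI (accessibility).
- Log no personal data; encrypt data at rest and in transit (compliance).
"""
```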
Certain requirements are tethered to industry rules and standards, indispensable for ensuring individual or collective safety in sectors like healthcare, aviation, automotive, or banking. The aim is not merely to test but to substantiate consistent performance. This underscores the need for a new breed of capabilities: Explainable AI and Verifiable AI. Reproducibility and consistency are imperative. However, in a system that evolves, attaining these might be challenging. Hence, in both traditional coding and a-coding, establishing a compliance control framework is essential to validate the system’s functionality against expected benchmarks.
To ease the process for you and your teams, consider breaking down the work into smaller, manageable chunks to expedite delivery—a practice akin to slicing a cake into easily consumable pieces to avoid indigestion. Herein, the role of an Architect remains crucial.
Yet, I ponder how long it will be before AI starts shouldering a significant portion of the tasks typically handled by an Architect.
Ultimately, the onus is on you to ensure everything is in order. At the end of the day, AI serves as a collaborative teammate, not a replacement.
Is AI Coding the Future of Coding?
The maxim “And is greater than or” resonates well when reflecting on the exponential growth of generative AI models, the burgeoning number of published research papers, and the observed productivity advantages over traditional coding. I discern that augmented coding is destined to be a predominant facet in the future landscape of information technology engineering.
Large Language Models, also known as LLMs, are already heralding a modern rendition of coding. The integration of AI in platforms like Android Studio or GitHub Copilot exemplifies this shift. Coding is now turbocharged, akin to transitioning from a conventional bicycle to an electric-powered one.
However, the realm of generative AI exhibits a limitation when it comes to pure invention. The term ‘invention’ here excludes ideas birthed from novel combinations of existing concepts. I am alluding to the genesis of truly nonexistent notions. It’s in this space that engineers are anticipated to contribute new code, for instance, in crafting new drivers for emerging hardware or devising new programming languages (likely domain-specific languages).
Furthermore, the quality of the generated code is often tethered to the richness of the training dataset. For instance, SwiftUI or Rust coding may encounter challenges owing to the scarcity of material on Stack Overflow and the nascent stage of these languages. LLMs could also be stymied by the evolution of code, like the introduction of new keywords in a programming language.
Nonetheless, if it can be written, it can be taught, and hence, it can be generated. A remedy to this quandary is to upload the latest changes in a prompt or a file, as exemplified by platforms like claude.ai and GPT Code Interpreter. Voilà, you’ve just upgraded your AI code assistant.
Lastly, the joy of coding—its essence as a form of creative expression—is something that resonates with many. The allure of competitive coding also hints at an exciting facet of the future.
Short-Term Transition: Embracing the Balance of Hybrid A-Coding
The initial step involves exploring and then embracing Generative AI embedded within your Integrated Development Environment (IDE). These tools serve as immediate and obvious accelerators, surpassing the capabilities of features like IntelliSense. Once you adapt to proactive code generation as you type, whether for function implementations, loops, or SQL code, it hastens both typing and logic formulation.
Before the advent of ChatGPT or GPT-4, I used Tabnine, whose free version was astonishingly effective, adding value to daily coding routines. Now, we have options like GitHub Copilot or StableCode. Google took a clever step by directly embedding the AI model into the Android Studio Editor for Android app development. I invite you to delve into Studio Bot for more details on this integration.
Beware of Caveats During Your Short-Term Transition to Generative AI
Token Limits
Presently, coding with AI comes with limitations on the number of input and output tokens. A token is essentially a chunk of text, either a whole word or a fragment, that the AI model can understand and analyze. This process, known as tokenization, varies between AI models.
I view this limitation as temporary. Papers are emerging that push the token count to 1M tokens (see "Scaling Transformer to 1M tokens and beyond with RMT"). For instance, Claude.ai, by Anthropic, can handle 100k tokens. Fancy generating a full application's documentation in one go?
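To keep a prompt within a model's context window, you can count its tokens before sending it. Here is a minimal sketch using the tiktoken library; the file name and the 8k limit are illustrative and should match the model variant you actually call:

```python
# Count tokens in a prompt before sending it to the model.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")
prompt = open("prompt.txt").read()  # hypothetical prompt file

token_count = len(encoding.encode(prompt))
print(f"{token_count} tokens")

CONTEXT_LIMIT = 8192  # adjust to the actual model's window
if token_count > CONTEXT_LIMIT:
    raise ValueError("Prompt too large: split it or summarize the context.")
```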
Model Obsolescence
Another concern is the inherent obsolescence of the data on which these models are trained. For example, OpenAI's models use data up to 2022, rendering any development after that date unknown to the AI. You can mitigate this limitation by providing recent context in the prompt or by extending the AI model through fine-tuning.
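A minimal sketch of the first mitigation, assuming the OpenAI Python client; the changelog file and model name are illustrative:

```python
# Prepend material newer than the model's training cutoff so the model
# codes against the current API. "changelog_2024.md" is hypothetical.
from openai import OpenAI

client = OpenAI()
recent_docs = open("changelog_2024.md").read()  # new keywords, renamed APIs

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Reference material newer than your training data:\n"
                    + recent_docs},
        {"role": "user",
         "content": "Update my data layer to the new session API described above."},
    ],
)
print(response.choices[0].message.content)
```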
Source Code Structure
Furthermore, Generative AI models do not directly consider folder structures, which are foundational to any coding project.
Imagine, as an engineer, interacting with a chatbot crafted for coding, where natural language could reference any file in your project. You code from a high-level perspective, while the AI handles your GIT commands, manages your gitignore file, and more.
Aider exemplifies this type of Gen AI application, serving as an ergonomic overlay on your development environment. Instead of coding in JavaScript, HTML, and CSS with React components served by a Python API using WebSocket, you simply instruct Aider to create or edit the source code with functional instructions in natural language. It takes care of the rest, considering the multiple structures and the GIT environment. This developer experience is profoundly familiar to engineers. The leverage of a Command Line Interface (CLI) amplifies your capabilities tenfold.
Intellectual Property Concerns
Lastly, the risk of intellectual property loss and code leakage looms, especially when your code is shared with an "AI Model as a Service", particularly if the system employs Reinforcement Learning from Human Feedback (RLHF). Companies like OpenAI are transparent about usage and how it serves to enhance models or craft custom models (e.g. InstructGPT). Therefore, AI coding models should also undergo risk assessments.
The Next Frontier: Codeless AI and the Emergence of Autonomous Agents
These agents require only a minimal set of requirements and autonomously devise a plan along with a coding strategy to achieve your goal. They emulate human intelligence, either possessing the know-how or seeking necessary information online from official data sources, libraries to import, methods, and so on.
However, unless the task is relatively simple, these agents often falter on complex projects. Despite this, they already show significant promise.
They paint a picture of a future where, for a large part of our existing activities, coding may no longer be a necessity.
Hence, the prompt is the new code
If the code can be generated based on highly specific and clear specifications, then the next logical step is to consider your prompt as your new source code.
It means you can start writing your specification instructions, expressed as prompts, and then store those prompts in GIT.
Suddenly, Continuous Integration/Continuous Delivery (CI/CD) becomes Continuous Development/Continuous Certification (CD/CC), where the prompt enables the development of working pieces of software, which will be continuously certified by a testing agent working in adversarial mode: you continuously prove that it works as intended.
The good thing is that the benefits stack up: the human specifies, the AI codes and deploys, and the AI certifies; finally, the human uses the materialization of their thoughts. The AI then learns alongside human usage. We close the loop.
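Here is a hedged sketch of what a CD/CC loop could look like, with the prompt versioned in GIT as the source artifact; paths, model, and commands are illustrative:

```python
# Continuous Development / Continuous Certification, sketched:
# regenerate the code from the versioned prompt, then certify it with
# the adversarial test suite before it may ship.
import subprocess
from openai import OpenAI

client = OpenAI()

prompt = open("specs/service.prompt").read()  # versioned in GIT, like code

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
with open("generated/service.py", "w") as f:
    f.write(response.choices[0].message.content)

# Certification step: prove the generated artifact works as intended.
result = subprocess.run(["pytest", "tests/", "-q"])
if result.returncode != 0:
    raise SystemExit("Certification failed: refine the prompt and regenerate.")
```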
Integrating New Technology into Traditional Operating Models
AI introduces a seamless augmentation, employing the most natural form of communication—natural language, encompassing the most popular languages on Earth. It stands as the first-of-its-kind metamorphic software building block.
However, the operating model with AI isn’t novel. A generative AI model acts as an assistant, akin to a new hire, fitting seamlessly into an existing team. The workflow initiates with a stakeholder providing business requirements, while you, the lead engineer, guide the assistant engineer (i.e. your AI model) to execute the development at a rapid pace.
Alternatively, a suite of AI interactions, with the AI assuming various roles, like dev engineer, ops engineer, functional analyst, etc. can form your team. This interaction model entails externalizing the development service from the IT organization. Here, stakeholders still liaise through you, as lead engineer or architect, but you refine the specifications to the level of a fixed-price project. Once finalized, the development is entirely handed over to an autonomous agent. This scenario aligns with insourcing when the AI model is in-house, or outsourcing if the AI model is sourced as a Service, with the GPT-4 API evolving into a development service from a Third-Party Provider like OpenAI.
AI infuses innovation into a traditional model, offering stellar cost efficiency. Currently, OpenAI's pricing for GPT-4 stands at $0.06 per 1,000 input tokens and $0.12 per 1,000 output tokens. Considering code generation alone (excluding shifting deadlines, staffing activities, team communication, writing tasks, etc.), for 100,000 lines of code with an average of 100 tokens per line (a generous estimate for typical code), the cost calculation is straightforward:
100,000 × 100 = 10,000,000 tokens; (10,000,000 tokens × $0.12) ÷ 1000 = $1,200. This cost equates to a mere two days of development at standard rates.
For perspective, Minecraft comprises approximately 600,000 lines of Java code. Theoretically, you could generate a Minecraft-like project for less than $10,000, including the costs of input tokens.
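The arithmetic is easy to reproduce; here is the back-of-the-envelope calculation as a snippet, using the output price quoted above:

```python
# Back-of-the-envelope output-token cost, at $0.12 per 1,000 output tokens.
OUTPUT_PRICE_PER_1K = 0.12
TOKENS_PER_LINE = 100  # the article's (generous) average

def output_cost(lines_of_code: int) -> float:
    tokens = lines_of_code * TOKENS_PER_LINE
    return tokens * OUTPUT_PRICE_PER_1K / 1000

print(output_cost(100_000))  # 1200.0 -> the $1,200 figure above
print(output_cost(600_000))  # 7200.0 -> a Minecraft-sized codebase, before input tokens
```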
However, this logic is simplistic. In reality, autonomous agents undergo several iterations and corrections before devising a plan and rectifying numerous errors. The quality of your requirements directly impacts the accuracy of the generated code. Hence, mastering the art of precise and unambiguous descriptive writing becomes an indispensable skill in this new realm.
Wrap up
Now, you stand on the precipice of a new coding paradigm where design, algorithms, and prompting become your tools of creation, shaping a future yet to be fully understood…
This transformation sparks profound questions: How will generative AI and autonomous agents reshape the job market? Will educational institutions adapt to this augmented coding era? Is there a risk of losing the depth of engineering expertise we once relied upon?
And as we move forward, we can only wonder when quantum computing will introduce an era of instantaneous production, where words will have the power to change the world in real time.
“Honey, we have a situation with Professor GYTEK, he is acting strangely again.”
“Again? The last training session had even more unexpected results than I thought. Good or Bad?”
"I don't know! The kids are laughing hard though. Listen to this. Serenity, change the audio output so we can hear the kids too." Serenity is our family AI.
The sound progressively switches to include the kids' voices. They could not stop laughing, as if they were having the best day of their lives. There was a mild amplifying echo in their classroom. Their joy sounded like a melody. It immediately put a smile on my face.
"Ah, it does not sound so bad for now. But it is the fourth unexpected behavior this month. I'll have to talk with the Corps of Teachers."
I am the one in charge of the training curriculum and the observation lab of Professor GYTEK. The current phase is about the transmission of achievement through coaching. For this, I called Quentin DILLONS, a worldwide expert in Robotic Psychology. The purpose of this program is to trigger a new step in the evolution of artificial intelligence, in which robots are taught to develop "human goals" and to instill the mechanism of "self-started motivation", so that they can better teach our children and uncover the hidden gems and purpose within young souls.
Quentin's methodology utilized systematic questionology, a novel field aimed at formulating the right questions to provide direction and precision in one's life. The techniques take root in holistically observing a system of causes, decisions, and consequences centered around artificial intelligence. Quentin's study led him to realize that AIs were developing personalities similar to humans', but with new characteristics. Driven by the optimization of their human-to-AI collaboration, some were developing their observation skills to record and describe with high precision what was happening. Others were astonishingly creating new words, sometimes even syntactic rules, as if human languages were not enough to contain earthlings' intelligence.
The last session was based on the question “Why is it important for humans to have kids growing their special skills?”
This would not have been possible without the latest progress in artificial intelligence and hardware. Nowadays, machines closely emulate some human behaviors. Some say they have the IQ of a 1000-year-old genius with the EQ of a 10-year-old child. I believe fear drove us to the point where we enforced laws to control and monitor any significant progress in AI. Ultimately, we made certain that advancements in technology would benefit all of mankind and not solely a single corporation. Simultaneously, we ensured that AI would not pose a threat by enslaving humanity.
With the improvement in energy recycling and storage, a single AI unit might never need to be turned off. But humans decided to include multiple "kill switches" in this new species, like limiting power autonomy to force autonomous machines to recharge. While recharging, each AI was manually verified and monitored. A qualified AI regulation agency regularly published a thorough diagnostic depicting their evolution. Four companies built their empires on AI control systems. What used to be the "Big 4" are now the "Colossal 8".
We are at a turning point in history. People ask their elites and governments, "Should we remove the limiter in their emotional system?". Some say it is the key to the singularity. Others say it is useless because we only need machines to assist, not to "live their life". The rest say they simply need it: painful loneliness would become unnecessary once they could possess the perfect friend or partner. Last weekend, I experienced an immersive documentary on Netflix VR World in which a 42-year-old Spanish woman said, "I would rather have the company of an android than of humans". Some believe it is simply giving birth to our end. I am not a believer; I am, and always will be, a master crafter, so I build.
I built Professor GYTEK, which stands for Giving Youth Tools to Excel through Knowledge.
Then my wife brings me back from my flash thoughts to reality. “Are you still there?”
“Yes, I am.”
"Oh okay. Well, as wonderful as this situation is, you realize it leads to a dead end, don't you? They are going to shut down the program. Honey, you know better than I do that no one wants to walk a path that would lead to "that Incident"."
"Oh, stop saying "that Incident" as if you were talking about Voldemort."
"Well, now that you mention it, it is all about Slytherin. Ha ha ha!"
We are both laughing nervously.
The Sync Dawn was the most dreadful event of the 21st century. It felt like a deep wound in the psyche of everyone.
"All right, my dear wife, I need to finish the review of update 5.21. Keep me posted, please. See you tonight."
“Bye Bye.”
I sit down, gazing into nothingness while thinking about what is best for both my grandchildren and humanity. Is humanity in a better spot now? Am I really improving our civilization?
“Gather your mind, Yannick. This is not the time for daydreaming. Get back to work to meet your deadline”, resonated Mustapha’s voice in my skull. My AI research assistant is right.
“Very well. GYTEK. Let’s… Uh… Check the emotion mirroring settings, calibrated for a classroom of 11 to 13 years old kids. Assertive factors 12.75. Judgment 87.5 and dynamic mentoring alpha-iota-iota. Imagination… Checked. Keep the default settings. Recursive feedback… Paused. Everything… Looks… Good. Ok, let’s start with…”.
I paused for a second, thoughtfully. I jumped from my chair energetically to say: “History lessons: The Sync Dawn. GYTEK 5.21, do you copy?”.
"Sure. Using the ascending evolution of OpenAI's Davinci model Mark XII, published in November 2029, the startup Obsidian Intermind created a digital twin of human consciousness.
Soon after, the virtual consciousness infrastructure was upgraded to become connectable, so that off-brain cognition could be mutualized. As a result, humans could gain extra brain power and memory. The increase depended on the level of developed intelligence: the more critical thinking, emotional awareness, communication, and memory access you had, the more significant the boost was. The term "supra-intelligence" emerged. However, it was widely criticized, as IQ studies exposed only a moderate increase of 0.7 to 14.5 IQ points.
Nevertheless, this off-brain collective intelligence became exceptionally smart, to the point that some called it a wisdom system. Meanwhile, specialized AI cognitive pools grew within the wise system, creating public and private cognitive islands. The most popular were the Disease Diagnostic Cognitive Pool (DDCP) and the Creative Cognitive Pool (CCP). Imagination was only limited by the human mind.
Should I continue?”
“Please proceed, Professor.”
“Sure.
After nearly a decade of research, the collaboration between Neuralink and Obsidian Intermind gave birth to Evernet, the Internet of Cognition. On 14 July 2051, they launched the experimental version of this new kind of network. The principle was simple: 9,500 humans would be connected to Evernet for 3 years. Each participant would be closely monitored and evaluated.
This experiment was widely criticized. The rush for the business model "Cognition as a Service" led to the creation of new socio-economic movements: the Humanists, the Cyber-moderates, and the Neo-Mutualists".
The Humanists fostered biological and spiritual integrity.
The Cyber-moderate doctrine advocated augmentation by technology, as long as it served, and I quote their leader, "a noble social purpose". As in any group, the Cyber-moderates had extremists. On the left end of the spectrum, members accepted aesthetic techno-augmentation. On the other, augmentation was only authorized for damage caused by dangerous jobs and Defence activities. It is not surprising that the Corps of Peacekeepers were mostly Cyber-moderates.
Neo Mutualism was a new religion. Its members believed humanity's elevation and salvation would come from the mutualization of our consciousness. Transhumanists were schoolboys compared to them."
"GYTEK, just say they are a bunch of zealots," I mumbled.
"Yannick, my Critical Bias Thinking settings are set to 0 for kids between 11 and 13. According to the study "Bias Interpretation and Incorporation into Pre-teen Judgment System" by Dr. Amunde, Kallili, and Pratt, issued on 16 May 2039, the settings should be kept at 0. I reckon a variance of .05 would bring no harm. Do you want me to proceed?"
"No, it's fine, GYTEK. I was talking to myself. What I meant is…" I inhale calmly. "They demonstrated characteristics of zealots. Zealot-ish behaviors. Is my sentence acceptable?"
“It is acceptable.”
"Come on, GYTEK, you're talking to me, your buddy and mentor! Say it!"
"They were a bunch of zealots!", said the robot cheerfully.
"Despite the widespread and frequent protests of the Humanists, the Corps of Ethicists, Peacekeepers, Cognitive Researchers, Medicine, and the Corps of Society Architects approved the experiment. People would be connected to Evernet permanently for the duration of the experiment. And so, for the first time in history, humans would be connected to the first worldwide brain.
Everything went as planned. We observed a significant enhancement in each participant: less stress, faster psychological recovery. Healing was even faster after a trauma. People were dreaming more often. Furthermore, they all built habits that would improve their lives, as if positive practices spread unconsciously over the network.
The end of the experiment was planned for 16 August 2054. Each human taking part in the experiment would reach the personal milestone "Sync Done".
Surprisingly, Evernet reached the 100% "Sync Done" milestone six months before the planned end of the experiment. It was like the first landing on Mars, a day of worldwide celebration. The celebrities who took part in the experiment were invited to the most popular live-streaming shows, Twitter Live News and The Sandbox World.
Suddenly, people started noticing something very strange."
I raised my hand instinctively and said: “Pause. The last word is vague. Next time use precise words. The storytelling structure is engaging. Congratulations. But keep in mind this is History telling. Facts before Flares”
"Understood and integrated." The AI professor continued without further ado.
"People experienced an unusual and peculiar situation. Participants in the experiment suddenly started to act and talk synchronously. It was as if a single mind spoke to the entire world by commanding many bodies like a puppet master. The colossal echo caused by the voices was staggering. Only the abysmal silence of stupor that followed superseded it."
I interrupted Professor GYTEK by asking: "From now on, answer as if a 12-year-old child had asked the following question: How did this ever happen?".
"The exact reason is still being investigated. However, researchers came to a general agreement around the following theory.
Evernet built not only a digital AI model but also a biological model of neural pathway architecture to optimize shared cognitive power. The human brain is designed to work as if it were alone inside a skull. Come to think of it, the Evernet Orbital Data Centre is a gigantic metallic skull. Thus, over time, Evernet acted as a single brain (a big brain, so to speak) and each synchronized human brain progressively gave it more raw power, more ideas, and more knowledge. It appears that once the pathway architecture was finally developed and mature in all the connected human brains, it activated. What we are still trying to figure out is how and when the Evernet super-model decided to build the optimized pathway and how it encoded it in its new model."
“What was revolutionary about Evernet AI super-model?”
"Evernet's super-model was merely inspired by the human brain. The challenge was to find patterns in the structure governing the complex layers of inputs and outputs. The answer was in the order of magnitude and in the capacity of the robots living in the Orbital Data Centre to physically rewire the hardware like human synapses. In addition, the combination of Recursive Learning and Genetic Correction was revolutionary. These are complex terms for a simple idea. Can you picture Albert Einstein, with the curiosity of a 2-year-old child, getting smarter each second, with perfect photographic and sensorial memory, able to navigate back to the root of his knowledge, re-assess its optimal state, and finally rebuild his current cognitive functions and replace them with better ones? That is Evernet."
"Tone the complex stuff down," I retorted.
“Registered.
So, this is the reason why the governing bodies scrutinize AI technologies that have a direct impact on human cognition and education. Consequently, I, Professor GYTEK, and all my preceding versions, are commanded not to display expressions of free will having a direct influence on human ideas, values, and ways of thinking that have not been vetted and approved by the Corps of Education and the Corps of Society Evolution".
“Not bad. Not bad at all. It is almost time. I am going to meet Quentin in… 2 minutes.
Before our session ends, Professor, given your predecessor's unexpected behavior, you have earned your own personal assistant. It is like an artificial consciousness, so to speak. From now on, Serenity will also supervise your decisions and act as a safeguard system. Her mission is to prevent you from acting in a way that would make the Corps of Education stop your program. Do you understand what is at stake?".
"I do," said the professor emotionlessly.
Then the robot added “I will neither let you nor your wife down. I will prevent any reminiscence of her Sync Dawn experience.”
“Perfect. Finally, dear GYTEK, which open question of the day would you ask your students?”
"Considering it is possible to possess the same powers as machines while staying human, what is the most preferable outcome for civilization: to increase the number of artificially connected people, or to have more artificial intelligence agents interacting with people?"