
The $1 Trillion Agent Factory: How Generative AI Is Printing Power (Not Just Productivity) – Navigating the Future With Generative AI, Part 5

1. Why AI Agents Are the New Currency of Power

What if your company, or your entire nation, possessed the ability to print productivity by harnessing intelligence at scale?

This concept took concrete form with the announcement of an unprecedented $500 billion investment in artificial intelligence infrastructure, excluding the additional €331 billion independently committed by industry titans such as Meta, Microsoft, Amazon, xAI, and Apple. The agreement, spearheaded by President Trump, is a partnership between OpenAI, SoftBank, and Oracle, led by their CEOs Sam Altman, Masayoshi Son, and Larry Ellison. Their joint venture, named “Stargate”, aims to build “the physical and virtual infrastructure to power the next generation of advancements in AI,” creating “colossal data centers” across the United States and promising to yield over 100,000 jobs.

France and Europe, not to be outdone, responded swiftly. At the AI Action Summit in Paris in February 2025, French President Emmanuel Macron announced a commitment of €109 billion in AI projects for France alone, highlighting a significant moment for European AI ambition. This was followed by European Commission President Ursula von der Leyen’s launch of InvestAI, an initiative to mobilize a staggering €200 billion for investment in AI across Europe, including a specific €20 billion fund for “AI gigafactories.” These massive investments on both sides of the Atlantic share one clear objective, and the commitment at the highest levels reflects a common understanding: being the civilization left behind is simply out of the question.

But the stakes in this global AI game are constantly rising. If the US and Europe thought they were holding strong hands, China, arguably the most mature AI nation, has just raised the pot. China is setting up a national venture capital guidance fund of 1 trillion yuan (approximately €126.7 billion), as announced by the National Development and Reform Commission (NDRC) on March 6, 2025. This fund aims to nurture strategic emerging industries and futuristic technologies, a clear signal that China intends to further solidify its position in the AI race, focusing, among other priorities, on boosting its chip industry.

The implicit “call” to the other players is clear: Are you in, or are you out?

Therefore, my opening question isn’t lifted from a sci-fi movie; it’s not some fantastical tale ripped from the green-and-black screen of The Matrix, where programs possessed purpose, life, and a face.

This is about proactively avoiding the Kodak moment within our respective industries.

This is about your nation avoiding the declining slope of Ray Dalio’s Big Cycle, where clinging to outdated models in the face of transformative technology is a path toward obsolescence.

[Picture: Ray Dalio’s Big Cycle]

This is the endgame: AI Agents are not just changing today—they are architecting the future of nations.

2. Architecting Sovereignty: How Nations Are Industrializing Intelligence

In November 2024, I had the privilege of delivering a second course on Digital Sovereignty, focused on Artificial Intelligence, at the University of Luxembourg, thanks to Roger Tafotie. I emphasized that the current shift toward advanced AI, especially Generative AI, represents a paradigm shift unlike any before. Why? Because, for the first time in history, humanity has gained access to the near-infinite scalability of human-level intelligence. Coupled with the rapid advancements in robotics, this same scalability is now within reach for physical jobs.

[Picture: The architecture of Digital Sovereignty]

Digital Sovereignty in the age of AI is a battle for access to the industrialization of productivity. Consider the architecture of AI Digital Sovereignty as a scaffolding built of core capabilities, much like interconnected pillars supporting a grand structure:

  • Cloud Computing: The foundational infrastructure, the bedrock upon which all AI operations rest.
  • AI Foundation Model Training: This is where the raw intelligence is refined, like a rigorous academy shaping the minds of future digital workers.
  • Talent Pools: The irreplaceable human capital, the architects, engineers, data scientists, and strategists who drive innovation. These are the skilled individuals, the master craftspeople, forging the tools and directing the symphony of progress.
  • Chip Manufacturing: The ability to produce advanced CPUs, GPUs, and specially designed AI chips, such as TPUs and LPUs, guarantees independence.
  • Systems of Funding and Investments: The ability to finance a long-term, consistent, and high-level commitment toward AI capabilities.

If you need a tangible example of how critical resources like rare earth metals and cheap energy are to this race, look no further than President Donald Trump’s negotiations with Volodymyr Zelenskyy. The proposed deal? $500 billion in profits, centered on Ukraine’s rare earth metals and energy reserves. Let’s not forget: 70% of U.S. rare earth imports currently come from China. Control over these resources is the lifeblood of AI infrastructure.

It’s not merely about isolated components but how these elements interconnect and reinforce each other. This interconnectedness is not accidental; it’s the key to true sovereignty – the ability to use AI and control its creation, deployment, and evolution. It’s about building a self-sustaining ecosystem, a virtuous cycle where each element strengthens the others.

Europe initially lagged, but the competition has only just begun. The geopolitical landscape will be a major, unmastered factor.

The current market “game” consists of finding the critical mass between the hundreds of billions invested in R&D, the availability of “synthetic intelligence”, and the unlocking of a new era of sustainable growth. The race to discover the philosopher’s stone – to transmute matter (transistors and electricity) into gold (mind) – and to achieve AGI and then ASI is on.

Sam Altman knows it; his strategy is “Usain Bolt” speed: the competition cannot keep up if you move at a very fast pace with AI product innovation. Larry Page and Elon Musk intuitively grasped this first. OpenAI was not only created to bring “open artificial intelligence” to each and every human, but also serves as a deliberate counterweight to Google DeepMind. Now, Sundar Pichai feels the urgency to regain leadership in this space, particularly now that Google Search – the golden goose – is threatened by emerging “Deep Search” challengers such as Perplexity, ChatGPT, and Grok.

As of March 2025, Hugging Face, the premier open-source AI model repository, hosts 194,001 text-generation models (+24.1% since November 2024) out of a total of 1,494,540 AI models (+56.8% since November 2024). Even though these numbers include different versions of the same base models, think of them as distinct blocks of intelligence. We are already in an era of abundance. Anyone possessing the necessary skills and computational resources can now build intelligent systems from these models. In short, the raw materials for a revolution are available today.
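To make this abundance concrete, here is a minimal sketch, assuming the Hugging Face transformers library and a text-generation checkpoint you have access to (the model name below is purely illustrative): a few lines are enough to pull one of these blocks of intelligence and start generating locally.

```python
# Minimal sketch: loading one open "block of intelligence" from the Hugging Face Hub.
# The checkpoint name is illustrative; any text-generation model you can run will do.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.3",  # hypothetical pick among ~194k options
)

result = generator(
    "Explain in two sentences why open foundation models matter for enterprises:",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```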

The stage is set: The convergence of human-level intelligence scalability and robotics marks a profound moment in technological history, paving the way for a new era of productivity.

3. The Revolution of the Agentic Enterprise

In October 2024, during the Atlassian Worldwide conference Team 24 in Barcelona, I had the privilege of seeing their integrated “System of Work” firsthand.

Arsène Wenger, former manager of Arsenal FC and current Chief of Global Football Development at FIFA, was invited to share his life experiences in a fireside chat. It was a true blessing from an ultimate expert in team building and the power of consistency.

[Picture: Arsène Wenger]

Mr. Wenger articulated that the conditions and framework for achieving champion-level performances rely on a progressive and incremental journey. While talent can confer a slight edge, that edge remains marginal in the realm of performance. The key differentiator resides in consistent effort, with a resolute commitment to surpassing established thresholds. Regularly implementing extra work and consistently reaching your capacity is what separates a champion from the rest.

Thus, Atlassian pushed their boundaries, unveiling Rovo AI, an agentic platform native to their environment. Rovo is positioned at the core of the “System of Work,” bridging knowledge management with Confluence and workflow mastery with Jira. To my surprise, Atlassian announced they have 500 active Agents! This is a real-world example of printing productivity by deploying purpose-driven digital workers to the existing platform.

But the true brilliance lies in making this digital factory available to their customers. This type of technology should be on every CEO and CIO’s roadmap. How you integrate this capability into your business and technology strategy is the only variable – the fundamental need for it is not.

The Agentic Enterprise is the cornerstone of this change: creating autonomous computer programs (agents) that can handle tasks with a broad range of language-based intelligence.

We’ve transitioned from task-focused programming to goal-driven prompted actions. Programming still has a role to play, as it guarantees deterministic execution, but the cognitive capabilities of large reasoning models like OpenAI o3 and DeepSeek R1 lift many of its limitations. Moreover, in IT, development is now about generated code. While engineering itself does not necessarily become easier – you still need to care about the algorithm, the data structure, and the sequence of your tasks – the programming part of the process is drastically simplified.

After years of prompt engineering since the GPT3 beta release at the end of 2020, I concluded that prompt engineering is not a job per se but a critical skill.

The Agentic Enterprise is not a distant dream but a present reality, fundamentally changing how organizations construct and scale their work.

4. Klarna’s AI-Powered Customer Service Revolution: How AI Assumes 2/3 of the Workload

In February 2024, the Swedish fintech Klarna announced that their AI contact center agent was handling two-thirds of their customer service chats, performing the equivalent work of 700 full-time human agents. It operates across 23 markets, speaks 35 languages, and provides 24/7 availability.

With full speech and listening capabilities now available in models like Gemini 2 Realtime Multimodal Live API, OpenAI Realtime API, or VAPI, the automation opportunities are virtually limitless.

What made this rapid advancement possible, suddenly?

Technically, the emergence of multimodality in AI models, robust APIs, and the decisive capability of Function Calling form the foundation. But more importantly, this transformation is primarily a matter of business vision: adopting transformative technology as the main driving power and using innovation as your key distinguishing factor, just as Google and OpenAI do. When this approach acts as the central nervous system of business strategy, adoption is perceived not as a fundamental disruption but rather as gradual and consistent reinforcement.

The crucial capability here is Function Calling, which allows AI models to tap into skills and data beyond their inherent capabilities. Think of getting the current time or converting a price from Euros to Swiss Francs using a live exchange rate – things the model can’t do on its own. In a nutshell, Function Calling lets the AI interact directly with APIs. It’s like giving the AI a set of specialized tools or instantly teaching it new skills.
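As a minimal sketch of the mechanics (using the OpenAI Python SDK purely as an illustration; the convert_price function and its schema are hypothetical), note that the model never executes anything itself: it only returns the name and arguments of the tool it wants called, and your own code does the work.

```python
# Function Calling sketch: the model decides *when* to call the tool;
# our code performs the actual conversion (here with a fixed example rate).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def convert_price(amount_eur: float) -> float:
    """Hypothetical helper: convert EUR to CHF (a real version would fetch a live rate)."""
    return round(amount_eur * 0.95, 2)

tools = [{
    "type": "function",
    "function": {
        "name": "convert_price",
        "description": "Convert a price from Euros to Swiss Francs",
        "parameters": {
            "type": "object",
            "properties": {"amount_eur": {"type": "number"}},
            "required": ["amount_eur"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "How much is 120 EUR in CHF?"}],
    tools=tools,
)

# If the model chose the tool, run it with the arguments it produced.
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
print(convert_price(**args))
```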

APIs are the foundation of an AI Agent’s relevance and of its ability to use existing and new features based on user intent. In contrast to prior generations of chatbots, which needed explicit intent definitions for each conversation flow, LLMs now provide the intelligence and the knowledge “out of the box.” To further understand the fundamental importance of APIs for any modern business, I invite you to read my article or listen to my podcast episode titled “Why APIs are fundamental to your business“.

This has led to the rise of startups like Bland.ai that offer products like Call Agent as a service. You can programmatically automate an agent to respond over the phone, even customizing its voice and conversation style – effectively creating your own C-3PO.

ElevenLabs, the AI voice company that I use for my podcast, has also launched a digital factory for voice-enabled agents.

Then, on December 18, 2024, OpenAI introduced Parloa, a dedicated Call Center Agent Factory. This platform represents the first of its kind, specializing in digital workers for a specific industry vertical.

[Picture: Parloa]

The promise? To transform every call center officer into an AI Agent Team Leader. As a “Chief of AI Staff”, your objective will be to manage your agents efficiently, handling the flow of demands, intervening only when necessary, and reserving human-to-human interactions for exceptional client experiences or complex issues.
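To illustrate the “intervene only when necessary” principle, here is a minimal triage sketch; the confidence score, the topic tags, and the escalate_to_human function are hypothetical placeholders rather than features of any specific platform.

```python
# Hypothetical triage rule for an AI Agent Team Leader: the agent answers routine
# requests on its own and hands over to a human on sensitive topics or low confidence.
from dataclasses import dataclass

@dataclass
class AgentReply:
    text: str
    confidence: float  # 0.0-1.0, assumed to be reported by the agent platform

SENSITIVE_TOPICS = {"complaint", "fraud", "legal"}

def escalate_to_human(customer_message: str, reply: AgentReply) -> str:
    # Placeholder: push the conversation into a human queue with the agent's draft attached.
    return f"[escalated to a human agent] draft: {reply.text}"

def route(customer_message: str, reply: AgentReply, topic: str) -> str:
    """Send the agent's reply, or escalate when the case needs a human touch."""
    if topic in SENSITIVE_TOPICS or reply.confidence < 0.75:
        return escalate_to_human(customer_message, reply)
    return reply.text

print(route("Where is my refund?", AgentReply("It was sent yesterday.", 0.62), "complaint"))
```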

The revolution in customer service is already here, driven by new AI-based call center solutions. This is a sneak peek of the AI-driven future.

5. The Building Blocks: AI Development Platforms

To establish a clear vision for an AI strategy and augment my CTO practice, I meticulously tested several AI technologies. My goal is to validate the technological maturity empirically, assess the productivity gains, and, more importantly, define the optimal AI Engineering stack and workflow. These findings have been documented within my AI Strategy Map, a dynamic instrument of vision. As a result, my day-to-day habits have completely changed and reflect my emergence as a full-time “AI native”. My engineering practice is reborn in the “Age of Augmentation”.

I changed my stack to Cursor for IT development, V0.dev for design prototyping, and ChatGPT o3 for brainstorming and review. The results achieved so far are highly enlightening and transformational.

Thus, the next quantum leap for engineering teams is the arrival of Agentic IDEs, which facilitate an agent-assisted development experience. The developer can seamlessly install the IDE, create or import a project, input a prompt describing a feature, and observe a series of iterative loops leading to the complete implementation of the task. The feature implementation succeeds in roughly 75% of cases. In the remaining 25%, the developer issues a corrective prompt to reach 100% implementation, which indicates the need to supervise such technology when it is used as an independent digital worker.

Leading today’s innovative Agentic IDE market are:

Then, the landscape of mature AI frameworks gives companies a great array of enterprise-ready solutions. Automated agent technologies, specifically, have emerged as critical tools:

Finally, DEVaaS platforms are completely redefining the approach to IT delivery:

These technologies allow you to build applications from the ground up that are fully operational right from the start, without any additional setup time. Yet, it is not only about developing an application from scratch and then gradually adding features by using prompts as your instruction; it’s also about application hosting. These solutions now offer a complete DevOps and Fullstack experience.

Although they currently yield simple to moderately complex applications and may not be entirely mature, these technologies are demonstrably improving at a rapid pace. It is only a matter of weeks (not months) before we see full system decomposition and interconnection that meet corporate standards.
This is exactly what the corporate environment has been waiting for.

How will this impact the IT organization? It comes down to these three outcomes:

First, for companies where the IT organization is a strategic differentiator, the internal IT workforce will increase productivity by delegating development tasks directly to internal AI Agents. Sovereign infrastructure is most likely required in heavily regulated or secretive industries.

Second, companies where IT is a primary activity but not a core organizational driver can outsource IT development to specialized companies that also make use of Agentic development IDEs or platforms.

Finally, an alternative to outsourcing is the reconfiguration of jobs: empowering knowledge workers – individuals with no formal IT background – to move into IT positions through AI training. This would be the full return of the Citizen Developer vision previously promised by the Low Code/No Code trend.

Innovation within the AI domain occurs several times per day, which requires a proactive mindset to stay technically up to date and to turn that awareness into actionable technology perspectives.

6. The Power of Uni-Teams: The Daily Collaboration Between Man and Machine

What does it feel like to lead your own AI team?

From experience, it makes you feel like Jarod from the TV show “The Pretender”, a true one-man band capable of handling multiple roles.

Think of it this way: you are now capable of:

  • Writing code like a developer
  • Creating comprehensive documentation like a technical writer
  • Offering intricate explanations like an analyst
  • Establishing policy frameworks like a compliance officer
  • Originating engaging content like a content writer
  • Data storytelling like a data analyst
  • Building stunning presentations like a graphic designer

From my experience, reaching mastery in any discipline often reveals an observable truth within the corporate world: a significant portion of our tasks are inherently repetitive. The added value is not in running through the same scaffolding process many, many times, if you are no longer learning (except perhaps to reinforce previous knowledge). The real value is in the outcome. If I can speed up the process to focus on learning new domain knowledge, multiplying experiences, and spending time with my human colleagues, it is a win-win.

My “Relax Publication Style”, a guided practice for anyone starting in content creation, has evolved into a more productive and enjoyable method: combining iterative human/AI feedback cycles, structured ideas for strategic insight, ongoing AI reviews, personal updates, and content enriched through human curation combined with in-depth AI search tools. I will explore this process in a future article.

One of the most surprising shifts in my workflow has been the emergence of “background processing” orchestrated by the AI Agent itself. It’s a newfound ability to reclaim fragments of time, little pockets of productivity that were previously lost. It unfolds something like this (a minimal sketch follows the list):

  1. The Prompt: I issue a clear directive, the starting point for the AI’s work.
  2. The Delegation: I offload the task to the AI, entrusting it to this silent, tireless digital worker.
  3. The Productivity Surge: I’m suddenly free. My capacity expands, and my productivity nearly doubles. I can tackle other projects, collaborate with another AI agent, or even (and I’m completely serious) indulge in a bit of gaming.
  4. The Harvest: I gather the results, reaping the rewards of the AI’s efforts. Sometimes, it’s spot-on; other times, a refining prompt is needed.
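Here is a minimal sketch of that delegate-and-harvest rhythm in plain Python asyncio; run_agent_task is a hypothetical stand-in for whatever agent or API you actually delegate to.

```python
# Sketch of the "prompt, delegate, work on something else, harvest" loop.
import asyncio

async def run_agent_task(prompt: str) -> str:
    """Hypothetical long-running agent job (stubbed here with a short sleep)."""
    await asyncio.sleep(5)  # stands in for minutes of autonomous agent work
    return f"Result for: {prompt}"

async def main() -> None:
    # 1. The Prompt + 2. The Delegation: fire the task off in the background.
    task = asyncio.create_task(run_agent_task("Draft the compliance policy outline"))

    # 3. The Productivity Surge: reclaim the waiting time for other work.
    print("Working on something else while the agent runs...")

    # 4. The Harvest: collect the result, review it, refine the prompt if needed.
    print(await task)

asyncio.run(main())
```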

In my opinion, this is a deep redefinition of “teamwork.” It’s no longer just about human collaboration; it’s about orchestrating a symphony of human and artificial intelligence. This is the definition of “Ubiquity.” I work anytime and truly anywhere – during my commute, in a waiting room, even while strolling through the park (thanks to the marvel of voice-to-text). It’s a constant state of potential productivity, a blurring of the lines between work and, well… everything else.

The next stage begins with gaining awareness and using AI. From there, it progresses to actively building your AI teams. The goal of collaborating with agents – for a product designer or a system architect, for instance – is to build an AI team in which the human becomes, in effect, the team leader of their AI agents. Let’s see how and why.

The rise of AI teammates promises greater productivity and a fundamental shift in the way we approach problem-solving and work.

[Picture: The AI uni-team]

7. Evolving as a Knowledge Worker in the Age of AI

How can a worker adapt to this expanding capability of printing productivity?

It starts with understanding the current capabilities of AI, exploring them through training or by testing systems, and seeing how they can be applied in your own workflow. Specifically, with your existing tasks, determine which ones can be delegated to AI, even though, at the moment, it is mostly a human prompting and the AI executing a small, specific task. Progressively, these tasks will be chained, increasing in complexity toward higher-level tasks. This integration of increasingly complex tasks is the very purpose of agents: to have a defined set of skills handled by AI.

For functional roles such as product designers or business analysts, there is an opportunity to transition toward a product focus: understanding customer journeys, psychology, behaviors, needs, and emotions. This can result in an experience-driven (UX) approach, where the satisfaction of fulfilling needs and solving problems is paramount, while leveraging data insights to enhance the customer’s experience.

Indeed, this technology is already showcasing its potential to bring us together. Within my organization, my colleagues openly share a feeling of relief at how Generative AI empowers them by reducing the workload of some specifically tedious tasks from days to mere minutes. But it is not all about saving time: the progress that really moved me is that, freed from a few tedious operations, they now have the capacity to explore their current struggles, identify past pain points, and articulate new business requirements. There is also the “thank you” that directly acknowledges my teams’ efforts in bringing this new means to reclaim precious time and comfort. What is even more compelling, and truly inspirational from my point of view, is their ability to formulate and then resolve this new set of challenges using the capabilities of this recent AI ecosystem. Witnessing this emerging transformation gives me tangible joy and concrete hope for our collaborative future.

So, the key observation is that it’s up to early adopters and leaders to drive this change. They need to build a culture where people aren’t afraid to reimagine their jobs around AI, to learn how to use these tools effectively, and to keep learning as the technology evolves. The time has come to strategize how AI reshapes internal processes, to master the inevitable industry restructuring, and to position your organization as a demonstrative leader for others to follow. To build that next-generation workforce, you need tools and specific, actionable strategies; what are the core components of your next plan?

Ultimately, this era of augmentation is a strategic opportunity – one that requires everyone involved, including users and top executives, to actively foster continuous understanding, ongoing discovery, and strategic adaptation, thus contributing at multiple levels to building high-performing teams.

Disrupt Yourself, Now.

8. The Digital Worker Factory: A Practical Example in Banking

Let’s consider a practical example. Imagine delivering a project to create an innovative online platform that sells a new class of dynamic loans—loans with rates that vary based on market conditions and the borrower’s repayment capacity. This platform would be fully online, SaaS-based, and built as a marketplace where individuals can lend and borrow, with a bank acting as a guarantor. That is the start of our story. Now it’s about delivering this product.

What if you only needed a Loan Product Manager, a System Architect, and a team of agents to bring this digital platform to life?

Here’s how the workflow looks.

  1. As the product manager, you specify the feature set and map the customer journey from the borrower’s perspective. You define the various personas – a lender, a borrower, a bank, and even a regulator.
  2. The system architect then sets up the technical specifications for the IT applications and LLMs, covering deployment to the cloud, integrations such as APIs, data streams, and more.
  3. You initiate the iterative loop by defining a feature. The AI Agent then plans and generates the code, after which you test the feature. Based on your feedback, the Agent troubleshoots and corrects the program accordingly. This loop continues iteratively until the platform fully takes shape. In this workflow, the product isn’t merely coded; it’s molded. The prompt itself becomes the new code (a minimal sketch of this loop follows below).
[Picture: The agentic team]
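To ground this loop, here is a minimal sketch; generate_code and run_tests are hypothetical stand-ins for the agent call and the test harness, and real agentic stacks wrap essentially this cycle.

```python
# Minimal sketch of the prompt -> generate -> test -> feedback loop described above.

def generate_code(feature_prompt: str, feedback: str | None = None) -> str:
    """Stand-in for a coding-agent call; a real version would call your agent or LLM API."""
    return f"# code for: {feature_prompt}\n# revised with: {feedback or 'initial prompt'}"

def run_tests(code: str) -> tuple[bool, str]:
    """Stand-in for the feature's acceptance tests."""
    passed = "revised with: " in code and "initial prompt" not in code  # toy criterion
    return passed, "" if passed else "rate calculation off by 0.5% for variable-rate loans"

def build_feature(feature_prompt: str, max_iterations: int = 5) -> str:
    feedback = None
    for _ in range(max_iterations):
        code = generate_code(feature_prompt, feedback)   # the agent plans and generates
        passed, report = run_tests(code)                 # you (or CI) test the feature
        if passed:
            return code                                  # the feature takes shape
        feedback = report                                # the agent troubleshoots from feedback
    raise RuntimeError("Feature did not converge; human intervention needed")

print(build_feature("Borrower dashboard showing each loan's dynamic rate history"))
```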

With a clear vision and the right framework, the path to production is not as complicated as it once was.

The Loan Expert Augmented: AI in Action

Consider the Loan Product Manager. They use AI to simulate loan profitability, examining various customer types and market variations. But, just as importantly, they use AI to refine pitches, sales materials, and regulatory documentation. This streamlines compliance and ensures alignment with the existing framework.

Generative AI is also used to revise internal and external processes. The templates for product sheets are optimized and iteratively improved. Marketing materials, such as a webpage explaining the product, equally take advantage of Artificial Intelligence to reach the best clarity and impact.

Finally, personalized communication with each specific client also relies upon Generative AI automation and data contextualization. If a customer needs a loan for a car or another tangible asset, the communication is perfectly tailored to that specific context.

This is personal banking at scale.

The focus is on the active role workers play in orchestrating, managing, and continuously evolving the AI systems they rely on in their daily work.

Hence, we’re effectively printing productivity now – a rare paradigm shift. Every professional needs to be proactive to seize this opportunity, not just react to it. Start by exploring AI tools relevant to your field, experiment with their capabilities, and consider how they can be integrated into your daily workflows. Whether you are a software engineer, product designer, or loan expert, the time to adapt is now.

Bear with me: the way you’ve operated up to this point – data entry in applications, scrupulously following procedures, writing lengthy reports in document processing software – is now directly challenged by individuals adopting the “automatician” mindset, evolving their skills from basic Excel macros into sophisticated, full-fledged applications. But remember: the future is not something you passively face. This world is yours to design, using the new technologies available along with your pragmatic actions.

9. Integrating AI Engineering in your System of Delivery, The Two Paths Forward: New vs Existing Systems

Now that Generative AI has entered your work and you are integrating the different aspects of digital workers – through pilot projects or other internal activities – this awareness converges toward one strategic decision.

This decision leads to two clear paths: either build a completely new application and workflow designed from the ground up around AI-augmented technologies, or modernize existing complex systems to align with AI-powered delivery.

Your choice requires careful consideration, even though both paths must lead toward the same goal: a fully modern and flexible digital workforce, printed from your Enterprise Agent Factory. Thus, ensure you keep a strategic direction aimed at a significantly better system.

Path 1, Building from Scratch: The AI Native Approach

The first and straightforward path involves building completely new systems. Here, the software specifications are essentially the prompts within a prompt flow. Think of this prompt flow as the blueprint for the code, all directly created within an Agentic IT delivery stack. The advantage of this approach is that the entire system is designed from the ground up to work seamlessly with AI agents. It’s like building a house from scratch with an AAA energy pass and all the home automation technologies included.

The prerequisite for this stage is to constitute a core team of pioneers who have successfully gone into production with at least one product used by internal or external clients. In the process, they earned their battle scars, gained experience, selected their foundation technologies, established architectural patterns, and built a list of dos and don’ts, which will ultimately turn into AI Engineering guidelines and best practices.

Next – and this decisive break from the old system is non-negotiable – this group must devise a brand-new method of work: building strategic and actionable steps into all operational components so that AI is present from day one, without the limitations of legacy infrastructure that no longer fits. All of this experience then becomes a unique point of reference, a key “baseline” from which you can design the process that enables the full transformation from current to future operations.

Path 2, Evolving From Within: Growing AI Integration in the Existing Enterprise Application Landscape

The second path involves evolving existing systems. This is a more intricate process, as it requires navigating the complexities of the current infrastructure. Engineers accustomed to the predictability and consistency of traditional coding methods now need to adapt to the probabilistic nature of AI-driven processes. They must deal with the fact that AI outputs, while powerful, are not always exactly the same from one run to the next.

Initially, this can be unsettling because it disrupts your established practices, but with tools such as Cursor or GitHub Copilot, you can quickly become accustomed to this new approach.

This shift requires that software engineers move from the specific syntax of languages like Python or TypeScript to communicate in everyday language with the AI, bringing skills that were previously specific to them into the reach of other knowledge workers. Furthermore, it is not easy to introduce powerful LLMs in a piece of software that has an established code structure, architecture, and history. It’s like renovating an old house – you are forced to work with existing structures while introducing AI elements. This requires a deep understanding of the current code and the implications of architectural choices, such as why you would use Event Streaming instead of Synchronous Communication or a Neo4J (graph database) instead of PostgreSQL (relational database) for a specific task.

Accessing and integrating with legacy systems adds another layer of friction because the code is outdated or uses a proprietary language. While AI facilitates code and data migration, the increased efficiency of AI-native platforms often makes rewriting applications from scratch the most optimal strategy.

In summary, creating AI-native applications from scratch is easier, with an incredible speed of development, but it implies a bold decision. Transitioning an existing application is more difficult, as it has inherent architectural, data, or technological constraints, but it is the most accessible path for many companies.

The increasing power of LLMs to handle ubiquitous tasks that were previously exclusively human implies a compression of tasks and skills within an AI. This shift moves some coordination, data management, and explanatory work from humans to machines. For human professionals, this will reduce these types of tasks, freeing them to focus on higher-level work.

The duality of paths ahead is a call for a pragmatic approach to transition; it’s about moving forward without disrupting too much of the familiar workplace.

10. The Metamorphosis: From Data Factories to Digital Workforce Factories

[Picture: A symbolic image representing a butterfly emerging from a chrysalis]

Today, we are gradually exploiting the full potential of Generative AI, with text being the medium to translate, think, plan, and create. These capabilities are expanding to media of all kinds – audio, music, 3D models, and video. Consider what Kling AI, RunwayML, Hailuo, and OpenAI Sora are capable of; these are just the beginning, the building blocks of what is possible.

These capabilities, originally for individual tasks, are now transforming entire industries – architecture, finance, health care, construction, and even space exploration, to name just a few.

If you can automate aspects of your life, you can automate parts of your work. You can now dictate entire workflows, methods, and habits. You can delegate. What’s the next stage?

So far, we have created automatons, programs designed to execute predefined tasks to fulfill a part of a value chain. These are digital factories comparable to factories in the physical world that have built computers, cars, and robots. And now, if you combine factories and robotics with software AI, the result is the ultimate idea: the digital worker.

The key is that it is no longer just about using or building existing programs but more about building specific agents. These agents represent specialized versions of the human worker and include roles such as software engineers, content creators, industrial designers, customer service providers, and sales managers. Digital workers have no limits in scaling their actions to multiple clients and languages at the same time.

The new paradigm consists of creating a new workforce. We used to construct data factories with IT systems; now, with Generative AI, we are building Digital Workforce Factories. A Foundation Model is the digital worker’s brain. Prompts define its job function within the enterprise. APIs and streams are its nervous system and limbs, allowing it to act upon the real world and use existing code from legacy systems.
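Read as a rough sketch (the class and the tool function below are hypothetical illustrations, not a specific framework), this anatomy maps onto code quite directly: the model is the brain, the system prompt is the job description, and the registered tools are the limbs reaching into existing systems.

```python
# Hypothetical anatomy of a digital worker: brain (model), job function (prompt), limbs (tools/APIs).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DigitalWorker:
    model: str                        # the foundation model acting as the brain
    job_prompt: str                   # the system prompt defining the job function
    tools: dict[str, Callable] = field(default_factory=dict)  # APIs and streams it can act through

    def register_tool(self, name: str, fn: Callable) -> None:
        self.tools[name] = fn

def fetch_loan_balance(customer_id: str) -> float:
    """Illustrative limb: in a real system this would call the core banking API."""
    return 12_500.0

loan_servicer = DigitalWorker(
    model="any-capable-foundation-model",
    job_prompt="You are a loan servicing assistant. Answer customers precisely and escalate edge cases.",
)
loan_servicer.register_tool("fetch_loan_balance", fetch_loan_balance)
print(loan_servicer.tools["fetch_loan_balance"]("C-001"))
```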

The extensive time once required to cultivate skilled human expertise – roughly eighteen years of formal education, followed by years of specialization and reinforced through real, tangible work – is now radically compressed by LLM technology. As I detailed previously in “Navigating the Future with Generative AI: Part 1, Digital Augmentation“, these models compress centuries of structured knowledge: methodical research, systematic problem-solving frameworks, and countless cycles of innovation and implementation best practices. Still, key expertise now resides in both the method and its application. New competencies should prioritize mastery of AI foundation models, skilled fine-tuning for specific domain applications, and the ability to leverage creative prompts to produce tangible output from such systems, even in unexpected new scenarios.

It is paramount to fully grasp that what we experience now with AI transformation is more than just a set of groundbreaking techniques. It reveals a new structure—for better and for worse—impacting both knowledge workers assisted by AI agents and manual workers augmented by robotics. But remember, ‘and’ is more powerful than ‘or’: it is precisely the combination and convergence of these roles — human and digital working together — that creates true scalability and transformative potential.

The real path forward isn’t merely augmentation—it’s about fostering a genuinely hybrid model, emerging naturally from a chrysalis stage into a mature form. This new human capability is seamlessly amplified by digital extensions and built upon robust foundations, meticulously refined over time. Moreover, humans are destined to master Contextual Computing, where intuitive interactions with a smart environment—through voice, gesture, and even beyond—become second nature. This isn’t about replacing humans; it’s about elevating them to orchestrate a richer, more integrated digital reality.

Perhaps artificial intelligence is the philosopher’s stone—the alchemist’s ultimate ambition—transmuting the lead of raw data into the gold of actionable intelligence, shaping our environment, one prompt at a time.

Dear fearless Doers, the future is yours.



Update on Tesla’s Optimus #Robot – it is progressing fast

Tesla’s Optimus Robot learning from humans

The most impressive part is the technique employed by the Tesla team for accelerating the robot’s dexterity: the robot physically learns from human actions. 

Now, let’s step back and analyse Tesla’s master plan here:

(Putting on my business tech strategy goggles) 

1. Tesla builds electric cars augmented with software programmability.

2. Tesla provides an electric grid as a service.

3. Tesla builds gigafactories that maximize the automation of car manufacturing. Almost every single part of the pipeline is robotized and optimized for speed of production.

4. Tesla builds Powerwalls (by providing energy storage, it also creates a decentralized power station network).

5. Tesla brings autonomous driving (FSD) to Tesla cars. Essentially, cars are now transportation robots governed by the most advanced AI fleet management system.

6. Tesla builds its own chips (FSD Chip and Dojo Chip)

7. Tesla builds its own supercomputers.

8. Tesla launches Optimus, which aims to replace the human workforce in factories and warehouses.

9. xAI, which has recently raised $6 billion and is supposedly X’s “child” AI company, brings the Grok AI model trained on X/Twitter data. While you may say X data is not the best, X has an algorithm balanced with human judgment (Community Notes), AND the platform regroups the largest set of news publishing companies. Basically, it automates curation and accuracy.

10. A version of the Grok AI model will likely power Optimus’s human-to-robot conversational interface.

11. Tesla cars will be turned into robotaxis, disrupting not only taxi companies but also Uber (the Uber/Tesla partnership may not be a coincidence), and eating into the shares of Lyft and BlaBlaCar.

12. Tesla will enter the general services and retail industries to offer multi-purpose robots – cleaning services for business offices, grocery stores, filling the workforce shortage in the catering (hotel-restaurant-bar) industry, etc.

Tesla is not the only one moving in the “Robot Fleet Management” business. Chinese companies like BYD (EV) offer strong competition, and there are several robot startups (like Boston Dynamics and Agility Robotics) racing for the pole position.

#AI #artificialintelligence #Robotics #Optimus #EV #software #EnergyStorage #Automation #powerwall #AutonomousVehicles #FSD #chips #HighPerformanceComputing #Robots #GrokAI #NLP #robotaxis #innovation #WorkforceAutomation


Apple Vision Pro – I Thought I Knew What The Metaverse Would Feel Like. I Couldn’t Be Further From The Truth.

A couple of weeks ago, I received an unusual meeting invite. It said “Test Apple Vision Pro.” I read it twice and jumped at the opportunity. I had been longing to get my hands on an AR/VR device that could make my dream idea – an augmented world (project Vmess platform) – a reality. That day was finally coming.

What better way to cap off an amazing work week at Banque Internationale à Luxembourg (BIL) than by getting up close and personal with Apple’s groundbreaking #mixedreality marvel – the #VisionPro? Last Friday, I had the immense privilege of taking this pioneering device for a spin.

Let me be blunt: before trying the Vision Pro, I thought I had a decent idea of what the metaverse experience would be like. But I couldn’t have been more wrong. This isn’t just the future – it’s a portal to parallel universes that shattered my expectations.

The Vision Pro isn’t a smartphone replacement; it represents an entirely new frontier, a mind-bending window into the so-called #metaverse. Furthermore, everything is at hand: you pinch to interact. Like every Apple creation, it exudes sophistication down to the finest detail.

The display resolution? Words fail to capture its otherworldly crispness and depth. And we’re not merely talking apps here; these are full-fledged, multi-sensory experiences that transport you to realms you thought only existed in science fiction.

Mark Zuckerberg was certainly onto something with his metaverse vision, but Apple seems poised to leapfrog everyone with this staggering delivery that must be witnessed firsthand. 

My rendezvous with the Vision Pro was more than a tech spectacle, though. It was also a heartwarming reunion with the brilliant minds at Virtual Rangers. Their #VR app portfolio is impressive, but what moved me most was “Roudy’s World” – an experience lovingly crafted to inspire hope and joy in children facing unimaginable adversity.

Immense gratitude to Matthieu Bracchetti and the entire Virtual Rangers crew, along with François Giotto, for making this future-altering experience possible. The metaverse future we yearned for? It’s already here, and it’s far grander than we ever conceived.

#augmentedreality #virtualreality #artificialintelligence #ai #digital #innovation #tech2check #digitalaugmentation


Bioengineering in 2060

“Nasir, check this out. Didn’t I tell you PSG would win against New Manchester? Two bitcoins, baby. Who is the soccer king?”

“Stop bragging, man. Gee, I don’t know how you do it. Hey, Betmania, tell me how this regular human with his XXL ears looking like satellite dishes can beat your prediction. Thank God, you were a freebie AI.”

The bet bot replied, “I am not qualified to review Dr Anoli’s performances. Yet, his 97.26% accuracy is…”

Nasir interrupted the hologram: “Ahh, shut it! It was a rhetorical question. My man! You are good, you gooood.”

He lifted his hand, nodding repetitively to perform his most vigorous handshake.

The upper deck lit up like a lighthouse, and then a deafening sound followed the illumination.
Beeeeeepp. “Emergency. Purple alert. All medical engineers on deck. I repeat, Purple Alert. All medical engineers on deck. This is not a drill.”

My heart was pounding. The message was still resonating in my head.

The sudden drop in temperature, produced by the arch-reactor of the medical drone transporter, announced the arrival of an unusual patient. This is the first time I have seen a flying one. Usually, these vehicles were stored in the Corps of Peacekeepers‘ R&D facilities.

The temperature drop caused a fog to rise like a curtain. I saw a blurry figure approaching slowly.

My colleague interrupted my stupor: “I haven’t seen a purple alert for 7 years now, it’s serious. Purple means death.”

The flying ambulance emerged at the heliport located at the center of the critical emergency service. Work is pretty easy on platform seventeen, actually. Nowadays, bioengineering is solving almost all critical injuries as they occur. An accident at a construction site? Any multifunction robot worker medic cauterizes your wound with accelerated healing enzymes, bypassing block surgery. As long as you’ve subscribed to the right medical service, the AI health model can be downloaded, and the tier two healing kit purchased for printing.

“What on earth could have happened?”, exclaimed Nasir.

The hologram of the AI diagnostician, Dr. Ernesto, popped up from the ambient hospital Phygital network while the paramedics were transporting the patient out of the medical drone transporter. Then, she said:

“This is unprecedented, we are losing it. We’ve tried traditional medicine and printed the generic assistance bacteria. Nothing works. The patient’s vitals fluctuate over time at an unconventional rate. Scanning the available knowledge from the current corpus does not provide any satisfying answers. The highest probable disease is the Genova IV virus with a probability of 31%”.

I replied “Too much uncertainty… What do you suggest, Nasir?”

“It looks like the emerging meta-virus. The… guy on B-Hacker YouTube Entertainment System… Sorry, I can’t remember the name of that journalist… said it was an open-source public CASPRed experiment.”

I could not believe it at first. But when I saw his wound mutating in front of me, there was no doubt that the potential disease was human-made.

“Arrrghhhhh. It burns… Please get it off me. I am begging you. Just cut it!” shouted the patient.

I was stunned. I cannot remember the last time I saw a real person suffering that much. Dr. Krovariv told me about it. This was surreal. So much pain.

“Hey! Gather yourself, buddy. There’s no time to waste. We might be facing a level 5 threat here,” said Nasir.

He was such a great guy. He was the embodiment of coolness. Not only was he a great bioengineer, but he always kept his cool on the field. I was more of a lab genius. I wish I could be more like him.

“Huh, yeah. Sorry, I froze,” I said with a coating of stupor, disappointment, and anger. I continued without hesitation, “What’s the status?”

While he was unplugging the portable bio-scanner from the trembling body, Nasir replied: “Body temperature 39.9. Heartbeats – 137. Adrenaline level 50.5. Oxygen level 92.12%. Stress level is increasing by 2.4% per minute. If we don’t act quickly, he is going to have a seizure. The stem cell differentiation rate… 272%! This is too high. It looks like he is growing a supernumerary limb, but I don’t know what kind.”

“Pick up the nano-extractors and verify traces of xeno-mRNA.”
It was a Sanofi Nano-extractors SNX43, a state-of-the-art fleet of nano-medical machines with a portable management system. Current extractors were all managed centrally by the hospital drone intervention unit.

I quickly put my face next to the patient’s ear and asked, “Sir, I am going to need your authorization to perform an intracellular analysis. We don’t know what you have, this might be your last chance…”

He grabbed my head and screamed, “What the hell are you asking for? Do it! Arghhhh!”

I felt foolish. Yet, there was no way I could bypass the authorization because the device had a legal lock encoded in it.

“That will do,” I whispered. The nano-extractors became operational, switching from blue ‘stand by’ to green ‘proceed.’ The extractors looked like a stick, no longer than a large pen. I placed it next to the wound, and it opened, deploying four branches on each side, and anchoring onto his skin. I heard a sequence of small noises, like pistons. The extractor fleet deployed into his body.

Then a ballet of lights came up. The SNX43 beamed a 3D representation of the arm’s inner part. The system rendered the location of the fleet in real-time.

They moved surprisingly slowly. I was disappointed. I was used to contemplating much livelier robots with the previous version.

Nasir said deductively: “There’s something wrong. They appear to be sucked in or blocked. Zoom in, Nicolas”.

I pinched out the holo-display to enter the zoom command and tapped on the max button. The illusion of slowness came from the fact that the machines were moving at such a speed that it was hard to follow with the naked eye. They looked like bees on steroids, fighting a wall of flesh. But the alien cells rebuilt the wall as soon as it was destroyed.

“I get it; the cellular growth must be faster than the robots can clear the path. It is not good. Not good at all.”

“Have you ever seen something like this before?”

“No…” said Nasir. It was the first time I saw fear in his eyes.

Then he jumped. “It could be contagious. I want all containment units on the platform now. I request a total lockdown of platform seventeen. Now!”

Nasir stopped, then looked at me insistently. I immediately thought something was happening to me.

Then, Nasir froze and fell to the ground.

I jumped immediately to grab him, but something inside me pulled back. My bioengineer spinal safety device prevented me from approaching the new pathogens. Whenever it triggered, it reminded me of the auto-braking mechanisms in my grandparents’ cars. I do not like the idea of having a machine tweaking my neural system. Yet, once more, it saved my life.

“This is not the time,” I whispered. I looked at the sky and said, “Dr. Ernesto. Launch orbital containment in ten. Authorization Kappa-Sigma-Omega-1337.”

Dr. Ernesto answered, “Authorization confirmed. Launching orbital containment in 10, 9, 8…”

“Meanwhile, call the World Health Emergency.”

“Opening channel.”

The dial ended almost instantly.

“Dr. Anoli, this is Dr. Krovariv – replica number 3. State the nature of your emergency.”

“I need to speak to the real Dr. Krovariv. This is a purple priority request.”

“Accepted. Please stand by…”

Time stood still while the platform was flying to space. The quietness of linear magnetic propulsion was staggering. My god. Is Nasir going to make it?

“The patient! I am a monst…”

“Dr. Anoli. What is it about?” Her question interrupted my thoughts.

“Dr. Krovariv. I am sorry, but Nasir has been contaminated by an unknown biological agent. I am heading toward space containment. Furthermore, the patient at the source of the contamination is dying. This agent is too dangerous. Please advise on the protocol,” I finished with a frail voice.

“In this situation, the protocols are clear. Once in space containment, you may use the ‘yellow horizon’ protocol. But you know what will happen if it fails.”

“Yes… I know.”

“I am sorry, son. Since CASPR got open-sourced, we are overwhelmed by these idiots… I am waiting for your decision.”

I had a deep look at Nasir. I knew it would probably be the last time I would see him. It was probably the last time I would see anything else.
I exhaled loudly and said, “Permission to print a new species as a countermeasure.”

“Permission granted. Good luck.”

Asking was the easy part. I was her protégé.

As I was climbing space, the platform changed its path toward a new direction. The Athena Life Engineering Lab, or ALEL, was lighting up like a giant light bulb. It was my dream to one day be worthy of visiting the lab. It was proclaimed as a symbol of hope. But not like this. Not on the verge of my very own death.

ALEL was the only accredited location where humans were allowed to engineer and print new life forms. No one was allowed to penetrate the labs.
Akin to the Athena Orbital Data Centre, it was solely operated by robots and machines following a strict chain of command.

Soon after, the platform slowed down, then stopped near the lab, and the docking system engaged to secure the system. A muffled noise gave me the chills.

“Welcome, Dr. Anoli. My name is Marie Curie. You have been granted the usage of the Athena Life Engineering System by Archdoctor Krovariv of the World Health Organization on the date of 30th December 2060. Any inquiry going against the principles of life and Human Civil Rights will result in your immediate termination. Do you want to hear these principles?”

“No, thank you. I already vowed to fight for life when I became a biomedical engineer.”

“Understood. Please state your prompt, please.”

I thought to myself, “What? Just like that?”

“Marie Curie, I need you to engineer a biological agent to counter Dr. Nasir’s condition. Focus uniquely on alien gene alteration and accelerated growth features. Do not augment the agent with inorganic material. Do not add metamorphic adaptation capabilities. Encode the maximum life duration to be 72 hours. Add automatic degeneration from 70 hours as a failsafe mechanism. No reproduction. No replication. I need you to confirm the supernumerary growth first before proceeding.”

“Supernumerary growth confirmed. Your prompt is acceptable. Proceeding to the registration of the new specimen identified by Anoli-alpha-31122060-001. Beginning prototyping phase.”

As I reached the point of no return, I could not help but wonder if our life would end here. ALEL was a universal robotic womb. What would happen if I asked the Life Factory to… recreate Nasir… Is rebirth possible?



Apple wants to HUGS you

Apple unveiled an innovative #AI method for creating animated human avatars in #3D from real humans named #HUGS.

The name stands for Human Gaussian Splats.
It uses 3D Gaussian Splatting (reconstruction from multiple points of view).

The features are:
🔹Recreates human avatars in 3D from video and scenes.
🔹Separates humans from static scenes in videos.
🔹Uses the SMPL body model for human representation. SMPL = Skinned Multi-Person Linear model; in essence, a way to render a realistic 3D model of the human body.
🔹Generates animations
🔹Achieves high rendering quality at 60 FPS.

Why does this publication matter?

First, it is a clear signal that Apple is also in the AI models race.

Then, interestingly, Apple announced the Vision Pro on the 5th of June 2023, with the promise to provide a #Metaverse experience never seen before.

With HUGS, Apple pushes a foundational building block for making the #AR/#VR experience feel more like real life: the dematerialization of your avatar to increase the sentiment of intimacy and immersion.

Also, it pushes further the seamless continuum from digital to physical and vice-versa. It makes the #Phygital Experience.
Digitally generated media is essential to the future of the “Metaverse”.

Links: https://machinelearning.apple.com/research/hugs

🫡


AI in 2060

My wife is calling me.

“Honey, we have a situation with Professor GYTEK, he is acting strangely again.”

“Again? The last training session had even more unexpected results than I thought. Good or Bad?”

“I don’t know! Kids are laughing hard though. Hear this. Serenity, change the audio output to hear the kids too”. Serenity is our family AI.

The sound progressively switches to include the kids’ voices. They could not stop laughing as if they were having the best day of their life. There was a mild amplifying echo in their classroom. Their joy sounded like a melody. It immediately put a smile on my face.

“Ah, it does not sound so bad for now. But it is the fourth unexpected behavior this month, I’ll have to talk with the Corps of Teachers”.

I am the one in charge of the training curriculum and observation lab of Professor GYTEK. The current phase is about the transmission of achievement by coaching. And for this, I called Quentin DILLONS, a worldwide expert in Robotic Psychology. The purpose of this program is to trigger a new step in the evolution of artificial intelligence, in which robots are taught to develop “human goals” and to instill the mechanism of “self-started motivation”, so that they can teach in a better way to our children, to uncover the hidden gems and purpose from the young souls.

Quentin’s methodology utilized systematic questionology, a novel field aimed at formulating the right questions to provide direction and precision in one’s life. The techniques take root in holistically observing a system of causes, decisions, and consequences centered around artificial intelligence. Quentin’s study led him to realize that AIs were developing personalities similar to humans’, but with new characteristics: to optimize their human-to-AI collaboration, some were developing their observation skills to record and describe with high precision what was happening. Others were astonishingly creating new words, and sometimes even syntactic rules, as if human languages were not enough to contain earthlings’ intelligence.

The last session was based on the question “Why is it important for humans to have kids growing their special skills?”

This would not have been possible without the latest progress in artificial intelligence and hardware. Nowadays, machines closely emulate some human behaviors. Some say they have the IQ of a 1000-year-old genius with the EQ of a 10-year-old child. I believe fear drove us to the point where we enforced laws to control and monitor any significant progress in AI. Ultimately, we made certain that advancements in technology would benefit all of mankind and not solely a single corporation. Simultaneously, we ensured that AI would not pose a threat by enslaving humanity.

With the improvement in energy recycling and storage, a single AI unit could potentially never be turned off. But humans decided to include multiple “kill switches” in this new species, like limiting power autonomy to force autonomous machines to recharge. While recharging, each AI was manually verified and monitored. A qualified AI regulation agency regularly published a thorough diagnostic depicting their evolution. Four companies built their empires on AI control systems. What used to be the “Big 4” are now the “Colossal 8”.

We are at a turning point in history. People ask their elites and government, “Should we remove the limiter in their emotional system?”. Some say it is the key to the singularity. Others say it is useless because we only need machines to assist, not to “live their life”. The remaining people say they just need it: painful loneliness is unnecessary when you can possess the perfect friend or partner. Last weekend, I experienced an immersive documentary on Netflix VR World in which a 42-year-old Spanish woman said, “I would rather have the company of an android than of humans”. Some believe it is simply giving birth to our end. I am not a believer; I am and will always be a master crafter, so I build.

I built Professor GYTEK, which stands for Giving Youth Tools to Excel through Knowledge.

Then my wife brings me back from my flash thoughts to reality. “Are you still there?”

“Yes, I am.”

“Oh okay. Well, as wonderful as this situation is, you realize it leads to a dead end, don’t you? They are going to shut down the program. Honey, you know better than I do that no one wants to walk a path that would lead to ‘that Incident’.”

“Oh, stop saying ‘that Incident’ as if you were talking about Voldemort.”

“Well, now that you mention it, it is all about Serpentar. Ha ha ha!”

We are both laughing nervously.

The Sync Dawn was the most dreadful event of the 21st century. It felt like a deep wound in the psyche of everyone.

“All right. My dear wife, I need to finish the review of update 5.21. Keep me posted, please. See you tonight.”.

“Bye Bye.”

I sit down, gazing into nothingness, while thinking about what is best for both my grandchildren and humanity. Is humanity in a better spot now? Am I really improving our civilization?

“Gather your mind, Yannick. This is not the time for daydreaming. Get back to work to meet your deadline”, resonated Mustapha’s voice in my skull. My AI research assistant is right.

“Very well. GYTEK. Let’s… Uh… Check the emotion mirroring settings, calibrated for a classroom of 11-to-13-year-old kids. Assertive factors 12.75. Judgment 87.5 and dynamic mentoring alpha-iota-iota. Imagination… Checked. Keep the default settings. Recursive feedback… Paused. Everything… Looks… Good. Ok, let’s start with…”

I paused for a second, thoughtfully. I jumped from my chair energetically to say: “History lessons: The Sync Dawn. GYTEK 5.21, do you copy?”.

“Sure. Using the ascending evolution of OpenAI’s Davinci model Mark XII, published in November 2029, the startup Obsidian Intermind created a digital twin of human consciousness.

Soon after, the virtual consciousness infrastructure was upgraded to become connectable, so that off-brain cognition could be mutualized. As a result, humans could gain extra brain power and memory. The increase depended on the level of developed intelligence: the more critical thinking, emotional awareness, communication, and memory access you had, the more significant the boost was. The term “supra-intelligence” emerged. However, it was widely criticized, as IQ studies exposed only a moderate increase of 0.7 to 14.5 IQ points.

However, this off-brain collective intelligence became exceptionally smart, to the point that some said it was a wisdom system. Over time, specialized AI cognitive pools grew within the wise system, creating public and private cognitive islands. The most popular were the Disease Diagnostic Cognitive Pool (DDCP) and the Creative Cognitive Pool (CCP). Imagination was only limited by the human mind.

 Should I continue?”

“Please proceed, Professor.”

“Sure.

After nearly a decade of research, the collaboration between Neuralink and Obsidian Intermind gave birth to Evernet, the Internet of Cognition. On 14 July 2051, they launched the experimental version of this new kind of network. The principle was simple: 9,500 humans would be connected to Evernet for 3 years. Each participant would be closely monitored and evaluated.

This experiment was widely criticized. The rush for the business model “Cognition as a Service” led to the creation of new socio-economic movements: the Humanists, the Cyber-moderates, and the Neo Mutualists.

The Humanists fostered biological and spiritual integrity.

The Cyber-moderate doctrine advocated for augmentation by technology, as long as it served, and I quote their leader, “a noble social purpose”. As in any group, the Cyber-moderates had extremists. On the left end of the spectrum, members accepted aesthetic techno-augmentation; on the other side, augmentation was only authorized for damage caused by dangerous jobs and Defence activities. It is not surprising that the Corps of Peacekeepers were mostly Cyber-moderates.

Neo Mutualism was a new religion. Their members believed humanity’s elevation and salvation would come from the mutualization of our consciousness. Transhumanists were schoolboys compared to them.”

“GYTEK, just say they are a bunch of zealots,” I mumbled.

“Yannick, my Critical Bias Thinking settings are set to 0 for kids between 11 and 13. According to the study “Bias Interpretation and Incorporation into Pre-teen Judgment System” by Dr. Amunde, Kallili, and Pratt, issued on 16 May 2039, the settings should be kept at 0. I reckon a variance of .05 would bring no harm. Do you want me to proceed?”

“No, it’s fine GYTEK. I was talking to myself. What I meant is…”. I inhale calmly. “They demonstrated characteristics of zealots. Zealot-ish behaviors. Is my sentence acceptable?”

 “It is acceptable.”

“Come on, GYTEK, you’re talking to me, your buddy and mentor! Say it!”

“They were a bunch of zealots!” said the robot cheerfully.

“Voila! Ok, stop joking around, otherwise grumpy Mustapha won’t be happy. Please continue”.

“I hear you,” said Mustapha.

“It was on purpose… GYTEK. Please, go on.”

“Despite the widespread and frequent protests of the Humanists, the Corps of Ethicists, Peacekeepers, Cognitive Researchers, Medicine, and Society Architects approved the experiment. People would be connected to Evernet permanently during the experiment. And so, for the first time in history, humans would be connected to the first worldwide brain.

Everything went as planned. We observed a significant enhancement in each participant: less stress, faster psychological recovery. Healing was even faster after a trauma. People were dreaming more often. Furthermore, they all built habits that would improve their lives, as if positive practices spread unconsciously over the network.

The end of the experiment was planned for 16th August 2054. Each human taking part in the experiment would reach the personal milestone “Sync Done“.

Surprisingly, Evernet reached the 100% “Sync Done” milestone six months earlier than the planned end of the experiment. It was like the first landing on Mars, a day of worldwide celebration. The celebrities that took part in the experiment were invited to the most popular live-streaming shows, Twitter Live News and The Sandbox World.

Suddenly, people started noticing something very strange.”

I raised my hand instinctively and said: “Pause. The last word is vague. Next time use precise words. The storytelling structure is engaging. Congratulations. But keep in mind this is History telling. Facts before Flares”

“Understood and integrated.”. The AI professor continued without further ado.

“People experienced an unusual and peculiar situation. Participants in the experiment suddenly started to act and talk synchronously. It was as if a single mind spoke to the entire world by commanding many bodies like a puppet master. The colossal echo caused by the voices was staggering. Only the abysmal silence of stupor that followed superseded it.”

I interrupted Professor GYTEK by asking: “From now on, answer as if a 12-year-old child asked the following question: how did this ever happen?”

“The exact reason is still being investigated. However, researchers came to a general agreement on the following theory.

Evernet built not only a digital AI model but also a biological model of neural pathway architecture to optimize shared cognitive power. The human brain is designed to work as if it were alone inside a skull. Thinking about it, the Evernet Orbital Data Centre is a gigantic metallic skull. Thus, over time, Evernet acted as a single brain – a big brain, so to speak – and each synchronized human brain progressively gave it more raw power, more ideas, and more knowledge. It appears that once the pathway architecture was finally developed and mature in all the connected human brains, it activated. What we are still trying to figure out is how and when the Evernet super-model decided to build the optimized pathway and how it encoded it in its new model.”

“What was revolutionary about Evernet AI super-model?”

“Evernet’s super-model was merely inspired by the human brain. The challenge was to find patterns in the structure governing the complex layers of inputs and outputs. The answer lay in the order of magnitude and in the capacity of the robots living in the Orbital Data Centre to physically rewire the hardware like human synapses. In addition, the combination of Recursive Learning and Genetic Correction was revolutionary. These are complex terms for a simple idea. Can you picture Albert Einstein, with the curiosity of a 2-year-old child, getting smarter each second, with perfect photographic and sensorial memory, who can navigate back to the root of his knowledge, re-assess its optimal state, and finally rebuild his current cognitive functions and replace them with better ones? That is Evernet.”

“Tone the complex stuff down,” I retorted.

“Registered.

So, this is the reason why the governing bodies scrutinize AI technologies that have a direct impact on human cognition and education. Consequently, I, Professor GYTEK, and all my preceding versions, are commanded not to display expressions of free will that directly influence human ideas, values, and ways of thinking without being vetted and approved by the Corps of Education and the Corps of Society Evolution.”

“Not bad. Not bad at all. It is almost time. I am going to meet Quentin in… 2 minutes.

Before our session ends, Professor, given your predecessor’s unexpected behavior, you have earned your own personal assistant. It is like an artificial consciousness, so to speak. From now on, Serenity will also supervise your decisions and act as a safeguard system. Her mission is to prevent you from acting in a way that would make the Corps of Education stop your program. Do you understand what is at stake?”

“I do,” said the professor emotionlessly.

Then the robot added, “I will let neither you nor your wife down. I will prevent any reminiscence of her Sync Dawn experience.”

“Perfect. Finally, dear GYTEK, which open question of the day would you ask your students?”

“Considering that it is possible to possess the same powers as machines while staying human, what is the most preferable outcome for civilization: to increase the number of artificially connected people, or to have more artificial intelligence agents interacting with people?”


In 2060

Categories
Technology Artificial Intelligence Business ChatGPT Data Design GPT3 Information Technology

Navigating the Future with Generative AI: A Prompt Engineer Job Offer?

Looking through the lens of Generative AI, jobs are evolving rapidly in this age of Digital Augmentation. In the midst of all the artificial intelligence effervescence, I wonder what kind of new jobs will emerge soon.

One of them is the Prompt Engineer.

In this article, I imagined the job description of your business’ first Prompt Engineer.


SuperSleek Jeans fashion brand logo

The world is shifting rapidly. As a pioneer in generative AI and an advocate of productivity augmentation, we are excited to open the position of Prompt Engineer.

SuperSleek Jeans is a company providing tailored jeans to women and men. Our purpose is to make jeans like a second skin! Our values are sensorial audacity and durability leadership. We proudly employ 2700 talented souls dedicated to meeting people’s needs in a smart and compassionate manner. Technology plays a significant role in our way of working and exploring uncharted territories for the benefit of our employees and customers is part of our DNA.

We foster a dynamic and inclusive company culture that encourages growth, collaboration, and innovation. We offer competitive compensation packages, comprehensive benefits, and numerous opportunities for professional development.

Your Mission

Your mission is to establish and grow the practice of Prompt Engineering at SuperSleek Jeans.

Responsibilities

  1. Learn and teach how to build products faster by analyzing and modifying the chain of analysis-to-design, design-to-build, and build-to-supervise for augmentation in each domain.
  2. Lead the development of an Enterprise AI Spirit, a chat-based agent, sourcing its knowledge base from existing systems such as Wiki, Document Store, Databases, and Unstructured documents. Manage an up-to-date training data set.
  3. Build a corporate prompt catalog that provides workers with reusable productivity recipes (a minimal catalog entry is sketched after this list).
  4. Determine which parts of business processes can be entirely automated.
  5. Establish KPIs, a Steering Dashboard, and periodic reporting to measure the benefits of AI-augmented engineering and operations compared to current systems of work.
  6. Introduce and evangelize the concepts of Generative AI and Large Language Models (also known as LLMs).
  7. Build a legal and ethical framework to ensure risks pertaining to AI augmentation are addressed accordingly. Monitor the progress of domestic and international AI regulations.
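
To make responsibility 3 concrete, here is a minimal sketch of what a single prompt catalog entry could look like. It is an illustration only: the `PromptRecipe` structure, its fields, and the example recipe are hypothetical, not an existing SuperSleek Jeans asset.

```python
from dataclasses import dataclass, field


@dataclass
class PromptRecipe:
    """A reusable, versioned prompt that teams can share across the company."""
    name: str
    owner: str
    version: str
    template: str  # prompt text with {placeholders}
    tags: list[str] = field(default_factory=list)

    def render(self, **values: str) -> str:
        """Fill the placeholders with task-specific values."""
        return self.template.format(**values)


# Hypothetical catalog entry for the customer-care team.
summarize_ticket = PromptRecipe(
    name="summarize-customer-ticket",
    owner="customer-care",
    version="1.0",
    template=(
        "Summarize the following customer ticket in three bullet points, "
        "then suggest one next best action.\n\nTicket:\n{ticket_text}"
    ),
    tags=["support", "summarization"],
)

print(summarize_ticket.render(ticket_text="My tailored jeans arrived two sizes too small."))
```

The rendered prompt would then be sent to whichever model the company has approved; keeping recipes in such a structure makes them versioned, searchable, and reusable across teams.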

Your Skills

  1. Hands-on experience with Generative AI models and tools leveraging prompt engineering, such as ChatGPT, Midjourney, ElevenLabs, etc.
  2. Core background in IT engineering.
  3. Proven algorithmic skills and mastery of engineering practices.
  4. The ability to code in one of the most popular languages such as Python, JavaScript, Java, or C#. A basic understanding of SQL is a must.
  5. Data management proficiency.
  6. Excellent communication and ability to design stunning presentations with compelling storytelling.
  7. Critical thinking and root cause analysis capabilities.
  8. Conversational UX proficiency.

Soft Skills

  1. Autonomous leadership with the ability to identify and propose the next best actions for yourself and your colleagues.
  2. Effective change management and resistance handling.
  3. Leading by example and providing assistance to colleagues when needed.
  4. You walk the talk by advocating continuous augmentation and demonstrating how your productivity and quality increase with AI augmentation.

Benefits and Perks

  1. An 85k€ to 105k€ compensation package based on your experience in engineering and AI knowledge.
  2. Total health, dental, and vision insurance for all family members.
  3. Retirement savings plan according to the national compensation scheme.
  4. 30 holidays with a generous paid time off policy.
  5. Employee assistance program and wellness initiatives.
  6. Craft your own professional growth and development along with your manager
  7. Collaborative and inclusive company culture.
  8. Free cinema tickets for your team once per quarter.

Living Your First Days in our Company

  1. You start your onboarding with a treasure hunt that consists of meeting key people, visiting unusual places, and learning our way of working. Each step unlocks a new quest until the completion of your journey. Your manager, the employee experience officer, and your teammates assist you along your adventure.
  2. Receive training so that you can rapidly feel comfortable with internal tools.
  3. Enjoy a tour of the premises and surrounding environment, such as restaurants, shops, parks, etc.
  4. As you familiarize yourself with the work environment, your first responsibility will be establishing a plan for transitioning our organization from Digital Transformation to Digital Augmentation.

Join and become part of a team that shapes the future of SuperSleek Jeans. Apply now and embark on an exciting and fulfilling career journey with us.


Feel free to unapologetically copy and remix this potential job offer in your business transition to Digital Augmentation.

I might even use it in the future. Who knows!

🖖

Categories
Technology Bitcoin Blockchain Business Business Strategy Cardano Cryptocurrencies Ethereum How to Polkadot Strategy Technology Strategy Web 3.0

How to grasp the blockchain world and safely walk your first steps into Web 3.0


The following is a quick guide explaining how to become acquainted with the world of blockchain, crypto, and web 3.0:

  1. First, I invite you to start with these videos:
    1. What is a Blockchain: https://youtu.be/rYQgy8QDEBI
    2. The difference between Bitcoin and Ethereum blockchains: https://youtu.be/0UBk1e5qnr4
    3. What is a Smart Contract: https://youtu.be/ZE2HxTmxfrI
    4. What is a Stablecoin: https://youtu.be/pGzfexGmuVw
    5. What is an NFT: https://youtu.be/FkUn86bH34M
  2. Understand the key concepts of web 3.0 by googling them: Blockchain, Wallet, Cryptocurrency, (crypto) Tokens, Mining, PKI, Smart Contracts, Dapps, Decentralized Exchanges (DEX), Staking, ICO, ITO, Layer 1/2/3 protocols, transaction fees, consensus, etc.
  3. Know the major Web 3.0 technologies, their differences, and their value propositions, such as Bitcoin, Ethereum, Polkadot, Cardano, Cosmos, Polygon, Hyperledger, IPFS, Storj, Solana, Tether, etc. Consider not only the networks but also the development tooling and the distribution means.
  4. Understand what new business models, organization models (like DAOs), and features Web 3.0 brings compared to Web 2.0. Then research how Web 2.0 and Web 3.0 complement each other.
  5. Select one Blockchain technology and stick to it in the beginning, to understand how Dapps are built, distributed, and promoted in the ecosystem. Some of the most popular, depending on your areas of interest: Uniswap (DeFi), OpenSea (Digital Art, NFT), Axie Infinity (Gaming), …
  6. Understand token economics and how such huge valuations and market capitalizations are possible.
  7. Learn by doing!
    • Learn to use blockchain explorers like Etherscan (for Ethereum) and a Bitcoin explorer (for Bitcoin) to inspect on-chain transactions. Now is the time to look up your own wallet! (A minimal programmatic sketch follows this list.)
    • Then, you could fund your wallet using one of the most popular and safest cryptocurrency exchanges, like Kraken, Coinbase, or Crypto.com.
      Note that you can buy cryptocurrencies with PayPal, but you currently cannot transfer them to your own wallet: PayPal holds the bitcoin for you.
  8. Follow the various companies and foundations expanding Web 3.0 (through tech websites, Twitter) to grasp how the ecosystem is growing. Then, ask yourself how these companies are regulated.
  9. Interact on LinkedIn, Twitter, and Reddit with knowledgeable people and enthusiasts.
  10. If you are an IT engineer, start programming with Solidity. I find the Truffle Suite genuinely good for building Smart Contracts and NFTs in an easy way.
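
To make the “learn by doing” step concrete, here is a minimal sketch that reads a wallet’s ETH balance programmatically with the web3.py library (the same data you would see on Etherscan). The RPC endpoint and the wallet address are placeholders to replace with your own, and the snippet assumes web3.py v6, where the helpers are named `is_connected`, `to_checksum_address`, and `from_wei`.

```python
from web3 import Web3

# Any Ethereum JSON-RPC endpoint works here (Infura, Alchemy, your own node, ...).
RPC_URL = "https://mainnet.infura.io/v3/<your-project-id>"   # placeholder
WALLET = "0x0000000000000000000000000000000000000000"        # placeholder address

w3 = Web3(Web3.HTTPProvider(RPC_URL))

if not w3.is_connected():
    raise SystemExit("Could not reach the Ethereum node; check the RPC URL.")

# Balances are returned in wei (1 ETH = 10**18 wei).
balance_wei = w3.eth.get_balance(Web3.to_checksum_address(WALLET))
print(f"Balance: {Web3.from_wei(balance_wei, 'ether')} ETH")
```

Looking up your own address this way is a good first exercise: it forces you to understand nodes, addresses, and the wei-to-ether conversion before touching any smart contract code.
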
Categories
Technology AR/VR Artificial Intelligence

Here is how Meta is positioning its new AI and AR/VR services to support companies in developing their metaverse

Check the following video introducing the Builder Bot, which creates VR worlds with your voice:

I can only acknowledge that it is a clever move. The first versions of VMESS in 2015 had quite a similar goal in mind.

Meta intends to be a Metaverse Forge Platform: to host a Digital Multiverse. At the end of the day, it is about giving everyone the possibility to pioneer the Metaverses (with an “S”).

Overall, it is an even greater strategic milestone for the Meta Group, as Facebook needs to pivot to some degree in order not to face the same destiny as MySpace.

However, Mark Zuckerberg has two problems to solve before then (as illustrated by the recent 230 billion USD loss in market value):

  1. Even if the social value of Facebook, with its 2.9 billion users, has been proven, the company suffers from the negative image of being an “evil company”.

    Considering the amount of data gathered on each member of this social network, the influence on political opinions, the toxicity of Instagram for youngsters, etc., this image needs to change.
  2. Mark Zuckerberg himself. Yes, Mark is the face of Facebook, and it is not shining at the moment. Maybe it would be wiser for Meta to put forward a different public face (communication-wise) to complete its mutation.

The other big players

Microsoft Mixed Reality: https://docs.microsoft.com/en-us/windows/mixed-reality/

NVIDIA Omniverse: https://developer.nvidia.com/nvidia-omniverse-platform

Interesting challengers to follow

Niantic, Inc., the company that brought you Pokemon GO: https://nianticlabs.com/

Roblox, which is a social gaming platform where gamers can create their own games and let other players play them: https://www.roblox.com/

RenderNetwork on Solana blockchain: https://rendertoken.com/