
The $1 Trillion Agent Factory: How Generative AI Is Printing Power (Not Just Productivity) – Navigating the Future With Generative AI, Part 5

1. Why AI Agents Are the New Currency of Power

What if your company, or your entire nation, possessed the ability to print productivity by harnessing intelligence at scale?

This concept took concrete form with the announcement of a staggering $500 billion investment in artificial intelligence infrastructure – on top of the additional €331 billion independently committed by industry titans like Meta, Microsoft, Amazon, xAI, and Apple. The agreement, spearheaded by President Trump, is a partnership between OpenAI (Sam Altman), SoftBank (Masayoshi Son), and Oracle (Larry Ellison). Their joint venture, named “Stargate”, aims to build “the physical and virtual infrastructure to power the next generation of advancements in AI,” creating “colossal data centers” across the United States and promising to yield over 100,000 jobs.

France and Europe, not to be outdone, responded swiftly. At the AI Action Summit in Paris in February 2025, French President Emmanuel Macron announced a commitment of €109 billion in AI projects for France alone, a significant moment for European AI ambition. This was followed by European Commission President Ursula von der Leyen’s launch of InvestAI, an initiative to mobilize a staggering €200 billion for AI investment across Europe, including a specific €20 billion fund for “AI gigafactories.” These massive investments on both sides of the Atlantic share a single objective: at the highest level, leaders understand that being a civilization left behind is simply out of the question.

But the stakes in this global AI game are constantly rising. If the US and Europe thought they were holding strong hands, China, arguably the most mature AI nation, has just raised the pot. China is setting up a national venture capital guidance fund of 1 trillion yuan (approximately €126.7 billion), as announced by the National Development and Reform Commission (NDRC) on March 6, 2025. The fund aims to nurture strategic emerging industries and futuristic technologies – a clear signal that China intends to further solidify its position in the AI race, focusing, among other priorities, on boosting its chip industry.

The implicit “call” to the other players is clear: Are you in, or are you out?

Therefore, my opening question isn’t lifted from a sci-fi movie; it’s not some fantastical tale ripped from the green and black screen of The Matrix, where programs possessed purpose, life, and a face.

This is about proactively avoiding the Kodak moment within our respective industries.

This is about your nation avoiding the declining slope of Ray Dalio’s Big Cycle, where clinging to outdated models in the face of transformative technology is a path toward obsolescence.

[Picture: Ray Dalio’s Big Cycle]

This is the endgame: AI Agents are not just changing today—they are architecting the future of nations.

2. Architecting Sovereignty: How Nations Are Industrializing Intelligence

In November 2024, I had the privilege of delivering a second course on Digital Sovereignty, focused on Artificial Intelligence, at the University of Luxembourg, thanks to Roger Tafotie. I emphasized that the current shift toward advanced AI, especially Generative AI, represents a paradigm shift unlike any before. Why? Because, for the first time in history, humanity has gained access to the near-infinite scalability of human-level intelligence. Coupled with the rapid advancements in robotics, this same scalability is now within reach for physical jobs.

[Picture: Architecture of Digital Sovereignty]

Digital Sovereignty in the age of AI is a battle for access to the industrialization of productivity. Consider the architecture of AI Digital Sovereignty as a scaffolding built of core capabilities, much like interconnected pillars supporting a grand structure:

  • Cloud Computing: The foundational infrastructure, the bedrock upon which all AI operations rest.
  • AI Foundation Model Training: This is where the raw intelligence is refined, like a rigorous academy shaping the minds of future digital workers.
  • Talent Pools: The irreplaceable human capital, the architects, engineers, data scientists, and strategists who drive innovation. These are the skilled individuals, the master craftspeople, forging the tools and directing the symphony of progress.
  • Chip Manufacturing: The ability to produce advanced CPU, GPU, and specially designed AI chips, like TPU and LPU, guarantees independence.
  • Systems of Funding and Investments: The ability to finance a long-term, consistent, and high-level commitment toward AI capabilities.

If you need a tangible example of how critical resources like rare earth metals and cheap energy are to this race, look no further than President Donald Trump’s negotiations with Volodymyr Zelenskyy. The proposed deal? $500 billion in profits, centered on Ukraine’s rare earth metals and energy reserves. Let’s not forget: 70% of U.S. rare earth imports currently come from China. Control over these resources is the lifeblood of AI infrastructure.

It’s not merely about isolated components but how these elements interconnect and reinforce each other. This interconnectedness is not accidental; it’s the key to true sovereignty – the ability to use AI and control its creation, deployment, and evolution. It’s about building a self-sustaining ecosystem, a virtuous cycle where each element strengthens the others.

Europe initially lagged, but the competition has only just begun. The geopolitical landscape will remain a major factor that no one fully masters.

The current market “game” consists of finding the critical mass between the hundreds of billions invested in R&D, the availability of “synthetic intelligence”, and the unlocking of a new era of sustainable growth. The race to discover the philosopher’s stone – to transform matter (transistors and electricity) into gold (mind) – and to achieve AGI and then ASI is on.

Sam Altman knows it; his strategy is “Usain Bolt” speed: the competition cannot keep up if you move at a very fast pace with AI product innovation. Larry Page and Elon Musk intuitively grasped this first. OpenAI was not only created to bring “open artificial intelligence” to each and every human, but also serves as a deliberate counterweight to Google DeepMind. Now, Sundar Pichai feels the urgency to regain leadership in this space, particularly now that Google Search – the golden goose – is threatened by emerging “Deep Search” challengers such as Perplexity, ChatGPT, and Grok.

As of March 2025, HuggingFace, the premier open-source AI model repository, has more than 194,001 text generation models (+24.1% since November 2024) within a total of 1,494,540 AI models (+56.8 % since November 2024). Even though these numbers include different versions of the same base models, think of them as distinct blocks of intelligence. We are already in an era of abundance. Anyone possessing the necessary skills and computational resources can now build intelligent systems from these models. In short, the raw materials for a revolution are available today.

The stage is set: The convergence of human-level intelligence scalability and robotics marks a profound moment in technological history, paving the way for a new era of productivity.

3. The Revolution of the Agentic Enterprise

In October 2024, during Atlassian’s worldwide conference Team ’24 in Barcelona, I had the privilege of seeing their integrated “System of Work” firsthand.

Arsène Wenger, former football manager of Arsenal FC and FIFA’s current Chief of Global Football Development, was invited to share his life experiences in a fireside chat. It was a true blessing from an ultimate expert in team building and the power of consistency.

[Picture: Arsène Wenger]

Mr. Wenger articulated that the conditions and framework for achieving champion-level performances rely on a progressive and incremental journey. While talent can confer a slight edge, that edge remains marginal in the realm of performance. The key differentiator resides in consistent effort, with a resolute commitment to surpassing established thresholds. Regularly implementing extra work and consistently reaching your capacity is what separates a champion from the rest.

Thus, Atlassian pushed their boundaries, unveiling Rovo AI, an agentic platform native to their environment. Rovo is positioned at the core of the “System of Work,” bridging knowledge management with Confluence and workflow mastery with Jira. To my surprise, Atlassian announced they have 500 active Agents! This is a real-world example of printing productivity by deploying purpose-driven digital workers to the existing platform.

But the true brilliance lies in making this digital factory available to their customers. This type of technology should be on every CEO and CIO’s roadmap. How you integrate this capability into your business and technology strategy is the only variable – the fundamental need for it is not.

The Agentic Enterprise is the cornerstone of this change: creating autonomous computer programs (agents) that can handle tasks with a broad range of language-based intelligence.

We’ve transitioned from task-focused programming to goal-driven prompted actions. Programming still has a role to play, as it guarantees exact execution, but the cognitive capabilities of large reasoning models like OpenAI o3 and DeepSeek R1 lift many of these limitations. Moreover, in IT, development is now about generated code. While engineering itself does not necessarily become easier – you still need to care about the algorithm, the data structure, and the sequence of your tasks – the programming part of the process is drastically simplified.

After years of prompt engineering since the GPT3 beta release at the end of 2020, I concluded that prompt engineering is not a job per se but a critical skill.

The Agentic Enterprise is not a distant dream but a present reality, fundamentally changing how organizations construct and scale their work.

4. Klarna’s AI-Powered Customer Service Revolution: How AI Assumes 2/3 of the Workload

In February 2024, the Swedish fintech Klarna announced that their AI contact center agent was handling two-thirds of their customer service chats, performing the equivalent work of 700 full-time human agents. It operates across 23 markets, speaks 35 languages, and provides 24/7 availability.

With full speech and listening capabilities now available through the Gemini 2.0 Multimodal Live API, the OpenAI Realtime API, or platforms like VAPI, the automation opportunities are virtually limitless.

What made this rapid advancement suddenly possible?

Technically, the emergence of multimodality in AI models, robust APIs, and the decisive capability of Function Calling form the foundation. More importantly, though, this transformation is first a matter of business vision: adopting transformative technology as the main driving force and using innovation as your key differentiator, just as Google and OpenAI do. When this approach acts as the central nervous system of the business strategy, adoption is not perceived as a fundamental disruption but rather as a gradual and consistent reinforcement.

The crucial capability here is Function Calling, which allows AI models to tap into skills and data beyond their inherent capabilities. Think of getting the current time or converting a price from Euros to Swiss Francs using a live exchange rate – things the model can’t do on its own. In a nutshell, Function Calling lets the AI interact directly with APIs. It’s like giving the AI a set of specialized tools or instantly teaching it new skills.
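To make this concrete, here is a minimal, hedged sketch of Function Calling using the OpenAI Python SDK. The `convert_price` tool, its hard-coded exchange rate, and the `gpt-4o` model choice are illustrative assumptions, not part of any product mentioned above.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Describe a tool the model may call; the schema tells it which arguments to produce.
tools = [{
    "type": "function",
    "function": {
        "name": "convert_price",
        "description": "Convert an amount in euros to Swiss francs using a live exchange rate.",
        "parameters": {
            "type": "object",
            "properties": {"amount_eur": {"type": "number"}},
            "required": ["amount_eur"],
        },
    },
}]

def convert_price(amount_eur: float) -> float:
    # Illustrative placeholder: a real agent would call a live FX-rate API here.
    return round(amount_eur * 0.95, 2)

messages = [{"role": "user", "content": "How much is 120 EUR in CHF?"}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

call = response.choices[0].message.tool_calls[0]   # the model asks to use the tool
args = json.loads(call.function.arguments)
result = convert_price(**args)                     # your code executes the skill, not the model

# Feed the tool result back so the model can phrase the final answer.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```

The decisive point is the dispatch step: the model never executes anything itself; your code runs the function behind the API, then returns the result for the model to phrase.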

APIs are the foundation of an AI Agent’s relevance and of its ability to use existing and new features based on user intent. In contrast to prior generations of chatbots, which needed explicit intent definitions for each conversation flow, LLMs now provide the intelligence and the knowledge out of the box. To further understand the fundamental importance of APIs for any modern business, I invite you to read my article or listen to my podcast episode titled “Why APIs are fundamental to your business“.

This has led to the rise of startups like Bland.ai that offer products like Call Agent as a service. You can programmatically automate an agent to respond over the phone, even customizing its voice and conversation style – effectively creating your own C3PO.

ElevenLabs, the AI voice company that I use for my podcast, has also launched a digital factory for voice-enabled agents.

Then, in December 2024, came Parloa, a dedicated Call Center Agent Factory. This platform represents the first of its kind, specializing in digital workers for a specific industry vertical.

[Picture: Parloa]

The promise? To transform every call center officer into an AI Agent Team Leader. As a “Chief of AI Staff”, your objective will be to manage your agents efficiently, handling the flow of requests, intervening only when necessary, and reserving human-to-human interactions for exceptional client experiences or complex issues.

The revolution in customer service is already here, driven by new AI-based call center solutions. This is a sneak peek of the AI-driven future.

5. The Building Blocks: AI Development Platforms

To establish a clear vision for an AI strategy and augment my CTO practice, I meticulously tested several AI technologies. My goal is to validate the technological maturity empirically, assess the productivity gains, and, more importantly, define the optimal AI Engineering stack and workflow. These findings have been documented within my AI Strategy Map, a dynamic instrument of vision. As a result, my day-to-day habits have completely changed and reflect my emergence as a full-time “AI native”. My engineering practice is reborn in the “Age of Augmentation“.

I changed my stack to Cursor for IT development, V0.dev for design prototyping, and ChatGPT o3 for brainstorming and review. The results achieved so far are highly enlightening and transformational.

The next quantum leap for engineering teams is the arrival of Agentic IDEs, which enable an agent-assisted development experience. The developer installs the IDE, creates or imports a project, inputs a prompt describing a feature, and observes a series of iterative loops leading to the complete implementation of the task. In my tests, the feature implementation succeeds in roughly 75% of cases; in the remaining 25%, a corrective prompt secures 100% implementation – a sign that such technology still needs supervision when used as an independent digital worker.

Leading today’s innovative Agentic IDE market are:

Then, the landscape of mature AI frameworks gives companies a great array of enterprise-ready solutions. Automated agent technologies, specifically, have emerged as critical tools:

Finally, DEVaaS platforms are completely redefining the approach to IT delivery:

These technologies allow you to build applications from the ground up that are fully operational right from the start, without any additional setup time. Yet, it is not only about developing an application from scratch and then gradually adding features by using prompts as your instruction; it’s also about application hosting. These solutions now offer a complete DevOps and Fullstack experience.

Although they currently yield simple to moderately complex applications and may not be entirely mature, these technologies are improving at a demonstrably rapid pace. It is only a matter of weeks (not months) before we see full system decomposition and interconnection that reach corporate standards.
This is exactly what the corporate environment has been waiting for.

How will this impact the IT organization? It comes down to these three outcomes:

First, for companies where the IT organization is a strategic differentiator, the internal IT workforce will increase productivity by delegating development tasks directly to internal AI Agents. Sovereign infrastructure is most likely required in heavily regulated or secretive industries.

Second, companies where IT is a primary activity but not a core organizational driver can outsource IT development to specialized companies that also make use of Agentic development IDEs or platforms.

Finally, as an alternative to outsourcing, jobs can be reconfigured and knowledge workers – individuals with no formal IT background – can be empowered into IT positions through AI training. This would be the full return of the Citizen Developer vision previously promised by Low Code/No Code trends.

Innovation within the AI domain occurs several times per day, which requires a proactive mindset to stay technically up to date and to turn that knowledge into actionable technology perspectives.

6. The Power of Uni-Teams: The Daily Collaboration Between Man and Machine

What does it feel like to lead your own AI team?

From experience, it makes you feel like Jarod from the TV show “The Pretender“ – a true one-man band capable of handling multiple roles.

Think of it this way: you are now capable of:

  • Writing code like a developer
  • Creating comprehensive documentation like a technical writer
  • Offering intricate explanations like an analyst
  • Establishing policy frameworks like a compliance officer
  • Originating engaging content like a content writer
  • Data storytelling like a data analyst
  • Building stunning presentations like a graphic designer

From my experience, reaching mastery in any discipline often reveals an observable truth within the corporate world: a significant portion of our tasks are inherently repetitive. The added value is not in running through the same scaffolding process many, many times, if you are no longer learning (except perhaps to reinforce previous knowledge). The real value is in the outcome. If I can speed up the process to focus on learning new domain knowledge, multiplying experiences, and spending time with my human colleagues, it is a win-win.

My “Relax Publication Style“, a guided practice for anyone starting in content creation, has evolved into a more productive and enjoyable method – combining iterative human/AI feedback cycles, structured ideas for strategic insight, ongoing AI reviews, personal updates, and content enriched through human curation paired with in-depth AI search tools. I will explore this process in a future article.

One of the most surprising shifts in my workflow has been the emergence of “background processing” orchestrated by the AI Agent itself. It’s a newfound ability to reclaim fragments of time, little pockets of productivity that were previously lost. It unfolds something like this:

  1. The Prompt: I issue a clear directive, the starting point for the AI’s work.
  2. The Delegation: I offload the task to the AI, entrusting it to this silent, tireless digital worker.
  3. The Productivity Surge: I’m suddenly free. My capacity expands, almost as if my productivity had doubled. I can tackle other projects, collaborate with another AI agent, or even (and I’m completely serious) indulge in a bit of gaming.
  4. The Harvest: I gather the results, reaping the rewards of the AI’s efforts. Sometimes, it’s spot-on; other times, a refining prompt is needed.

In my opinion, this is a deep redefinition of “teamwork.” It’s no longer just about human collaboration; it’s about orchestrating a symphony of human and artificial intelligence. This is the definition of “Ubiquity.” I work anytime and truly anywhere – during my commute, in a waiting room, even while strolling through the park (thanks to the marvel of voice-to-text). It’s a constant state of potential productivity, a blurring of the lines between work and, well… everything else.

The next stage begins with gaining awareness and using AI. From there, it progresses to actively building your AI teams. The goal of collaborating with agents – for a product designer or a system architect, for instance – is to build an AI team in which the human becomes, in effect, the team leader of their AI agents. Let’s see how and why.

The rise of AI teammates promises greater productivity and a fundamental shift in the way we approach problem-solving and work.

[Picture: The AI uni-team]

7. Evolving as a Knowledge Worker in the Age of AI

How can a worker adapt to this expanding capability to print productivity?

It starts with understanding the current capabilities of AI, exploring them through training or by testing systems, and seeing how they can be applied in your own workflow. Specifically, look at your existing tasks and determine which ones can be delegated to AI. At the moment, this mostly means a human prompting and the AI executing a small, specific task. Progressively, these tasks will be chained, increasing in complexity toward higher-level work. This integration of increasingly complex tasks is the purpose of agents: to have a defined set of skills handled by AI.
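As a hedged illustration of this chaining, the sketch below strings two delegated micro-tasks together, the output of the first prompt feeding the second. The prompts, the meeting-notes example, and the `gpt-4o` model are assumptions for illustration only.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One delegated micro-task: a single prompt, a single answer."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Task 1: a small, specific delegation.
notes = "Raw meeting notes about the Q3 roadmap go here..."
summary = ask(f"Summarize these notes in five bullet points:\n{notes}")

# Task 2: chained on top of the first result, one step up the hierarchy.
email = ask(f"Draft a short status email to stakeholders based on this summary:\n{summary}")
print(email)
```

An agent is essentially this chain given a goal, a set of skills, and the autonomy to decide which step comes next.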

For functional roles such as product designers or business analysts, there is an opportunity to transition toward a product focus – toward understanding customer journeys, psychology, behaviors, needs, and emotions. This can result in an experience-driven (UX) approach, where the satisfaction of fulfilling needs and solving problems is paramount, while leveraging data insights to enhance the customer’s experience.

Indeed, this technology is already showcasing its potential to bring us together. Within my organization, colleagues openly share a feeling of relief at how Generative AI empowers them by reducing the workload of some especially tedious tasks from days to mere minutes. But it is not all about saving time: the progress that truly moved me is that, freed from a few tedious operations, they now have the capacity to explore their current struggles, identify past pain points, and articulate new business requirements. There is also the “thank you” that directly acknowledges my teams’ efforts in bringing this new means of reclaiming precious time and comfort. Even more compelling, and very inspirational from my point of view, is the ability to formulate and then resolve this new set of challenges using the capabilities of this recent AI ecosystem. Witnessing this emerging transformation gives me tangible joy and concrete hope for our collaborative future.

So, the key observation is that it’s up to early adopters and leaders to drive this change. They need to build a culture where people aren’t afraid to reimagine their jobs around AI, to learn how to use these tools effectively, and to keep learning as the technology evolves. The time has arrived to strategize how AI reshapes internal processes, to master the inevitable industry restructuring, and to position your organization as a leader for others to follow. To build that next-generation workforce, you need tools and specific, actionable strategies; what are the core components of your next plan?

Ultimately, this era of augmentation is a strategic opportunity – one that requires everyone involved, including users and top executives, to actively foster continuous understanding, ongoing discovery, and strategic adaptation, thus contributing at multiple levels to building high-performing teams.

Disrupt Yourself, Now.

8. The Digital Worker Factory: A Practical Example in Banking

Let’s consider a practical example. Imagine delivering a project to create an innovative online platform that sells a new class of dynamic loans—loans with rates that vary based on market conditions and the borrower’s repayment capacity. This platform would be fully online, SaaS-based, and built as a marketplace where individuals can lend and borrow, with a bank acting as a guarantor. That is the start of our story. Now it’s about delivering this product.

What if you only needed a Loan Product Manager, a System Architect, and a team of agents to bring this digital platform to life?

Here’s how the workflow looks.

  1. As the product manager, you specify the feature set and map the customer journey from the borrower’s perspective. You define the various personas – a lender, a borrower, a bank, and even a regulator.
  2. The system architect then sets up the technical specifications for the IT applications and LLMs, covering deployment to the cloud, integrations such as APIs, data streams, and more.
  3. You initiate the iterative loop by defining a feature. The AI Agent then plans and generates the code, after which you test the feature. Based on your feedback, the Agent troubleshoots and corrects the program accordingly. This loop continues iteratively until the platform fully takes shape. In this workflow, the product isn’t merely coded—it’s molded. The prompt itself becomes the new code.
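As a hedged sketch of step 3, the loop below treats a pytest suite as the product manager's acceptance feedback; `generate_code`, the file names, and the `gpt-4o` model stand in for whichever agentic IDE or platform actually drives the iteration.

```python
import subprocess
from openai import OpenAI

client = OpenAI()

def generate_code(prompt: str) -> str:
    # Stand-in for the agent: a real Agentic IDE plans, edits files surgically, and strips formatting.
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

feature_prompt = (
    "Implement the 'simulate dynamic loan rate' feature as Python code: given a base rate, "
    "a market index, and a borrower score, return the adjusted monthly rate."
)

for attempt in range(3):                               # the iterative loop: generate, test, refine
    with open("loan_rate.py", "w") as f:
        f.write(generate_code(feature_prompt))
    tests = subprocess.run(["pytest", "tests/test_loan_rate.py", "-q"],
                           capture_output=True, text=True)
    if tests.returncode == 0:
        print("Feature accepted.")
        break
    # The test output becomes the corrective prompt for the next iteration.
    feature_prompt = (
        feature_prompt
        + f"\n\nThe previous attempt failed these tests:\n{tests.stdout}\nFix the implementation."
    )
```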
[Picture: The agentic team]

With a clear vision and the right framework, the path to production is not as complicated as it once was.

The Loan Expert Augmented: AI in Action

Consider the Loan Product Manager. They use AI to simulate loan profitability, examining various customer types and market variations. Just as importantly, they use AI to refine pitches, sales materials, and regulatory documentation. This streamlines compliance and ensures alignment with the existing framework.

Generative AI is also used to revise internal and external processes. The templates for product sheets are optimized and iteratively improved. Marketing materials, such as a webpage explaining the product, equally take advantage of AI to maximize clarity and impact.

Finally, personalized communication with each specific client also relies upon Generative AI automation and data contextualization. If a customer needs a loan for a car or another tangible asset, the communication is perfectly tailored to that specific context.

This is personal banking at scale.

The focus is on the active role workers play in orchestrating, managing, and continuously evolving the AI systems they rely on in their daily work.

Hence, we’re effectively printing productivity now – a rare paradigm shift. Every professional needs to be proactive to seize this opportunity, not just react to it. Start by exploring AI tools relevant to your field, experiment with their capabilities, and consider how they can be integrated into your daily workflows. Whether you are a software engineer, product designer, or loan expert, the time to adapt is now.

Bear with me: the way you’ve operated up to this point – entering data into applications, scrupulously following procedures, and writing lengthy reports in document processing software – is now directly challenged by individuals adopting the “automatician” mindset, evolving their skills from basic Excel macros into sophisticated, full-fledged applications. But remember: the future is not something you passively face. This world is yours to design, using the new technologies available along with your pragmatic actions.

9. Integrating AI Engineering into Your System of Delivery – The Two Paths Forward: New vs. Existing Systems

Now that Generative AI has entered your work, and now that you are integrating the different facets of digital workers – through pilot projects or other internal activities – this awareness converges toward one strategic decision.

This decision leads to two clear paths: either build a completely new application and workflow designed from the ground up around AI-augmented technologies, or modernize existing complex systems to align with AI-powered delivery.

Your choice requires careful consideration, even though both paths must lead toward the same goal: a fully modern and flexible digital workforce, printed from your Enterprise Agent Factory. In either case, keep your strategic direction pointed toward a significantly better system.

Path 1, Building from Scratch: The AI Native Approach

The first and straightforward path involves building completely new systems. Here, the software specifications are essentially the prompts within a prompt flow. Think of this prompt flow as the blueprint for the code, all directly created within an Agentic IT delivery stack. The advantage of this approach is that the entire system is designed from the ground up to work seamlessly with AI agents. It’s like building a house from scratch with an AAA energy pass and all the home automation technologies included.

The prerequisite for this stage is to assemble a core team of pioneers who have successfully taken at least one product into production, used by internal or external clients. In the process, they earned their battle scars, gained experience, selected their foundation technologies, established architectural patterns, and built a list of dos and don’ts, which will ultimately turn into AI Engineering guidelines and best practices.

Next – and this decisive break from the old system is non-negotiable – the group must devise a brand-new method of work: defining strategic, actionable steps across all operational components so that AI is present from day one, free of legacy infrastructure that no longer fits. All this experience becomes a unique point of reference, a baseline from which you can design the process that enables the full transformation from current to future operations.

Path 2, Evolving From Within: Growing AI Integration in the Existing Enterprise Application Landscape

The second path involves evolving existing systems. This is a more intricate process, as it requires navigating the complexities of the current infrastructure. Engineers accustomed to the predictability and consistency of traditional coding methods now need to adapt to the probabilistic nature of AI-driven processes. They must deal with the fact that AI outputs, while powerful, are not always exactly the same from one run to the next.

Initially, this can be unsettling because it disrupts your established practices, but with tools such as Cursor or GitHub Copilot, you can quickly become accustomed to this new approach.

This shift requires that software engineers move from the specific syntax of languages like Python or TypeScript to communicate in everyday language with the AI, bringing skills that were previously specific to them into the reach of other knowledge workers. Furthermore, it is not easy to introduce powerful LLMs in a piece of software that has an established code structure, architecture, and history. It’s like renovating an old house – you are forced to work with existing structures while introducing AI elements. This requires a deep understanding of the current code and the implications of architectural choices, such as why you would use Event Streaming instead of Synchronous Communication or a Neo4J (graph database) instead of PostgreSQL (relational database) for a specific task.

Accessing and integrating with legacy systems adds another layer of friction because the code is outdated or uses a proprietary language. While AI facilitates code and data migration, the increased efficiency of AI-native platforms often makes rewriting applications from scratch the most optimal strategy.

In summary, creating AI-native applications from scratch is easier, with an incredible speed of development, but it implies a bold decision. Transitioning an existing application is more difficult, as it has inherent architectural, data, or technological constraints, but it is the most accessible path for many companies.

The increasing power of LLMs to handle ubiquitous tasks that were previously exclusively human implies a compression of tasks and skills within the AI. This shift moves some coordination, data management, and explanatory work from humans to machines. For human professionals, this will reduce these types of tasks, freeing them to focus on higher-level work.

The duality of paths ahead is a call for a pragmatic approach to transition; it’s about moving forward without disrupting too much of the familiar workplace.

10. The Metamorphosis: From Data Factories to Digital Workforce Factories

[Picture: A symbolic image representing a butterfly emerging from a chrysalis]

Today, we are gradually exploiting the full potential of Generative AI, with text as the medium to translate, think, plan, and create. These capabilities are expanding to media of all kinds – audio, music, 3D models, and video. Consider what Kling AI, RunwayML, Hailuo, and OpenAI Sora are capable of; these are just the beginning, the building blocks of what is possible.

These capabilities, originally for individual tasks, are now transforming entire industries – architecture, finance, health care, construction, and even space exploration, to name just a few.

If you can automate aspects of your life, you can automate parts of your work. You can now dictate entire workflows, methods, and habits. You can delegate. What’s the next stage?

So far, we have created automatons, programs designed to execute predefined tasks to fulfill a part of a value chain. These are digital factories comparable to factories in the physical world that have built computers, cars, and robots. And now, if you combine factories and robotics with software AI, the result is the ultimate idea: the digital worker.

The key is that it is no longer just about using or building existing programs but more about building specific agents. These agents represent specialized versions of the human worker and include roles such as software engineers, content creators, industrial designers, customer service providers, and sales managers. Digital workers have no limits in scaling their actions to multiple clients and languages at the same time.

The new paradigm consists of creating a new workforce. We used to construct data factories with IT systems; now, with Generative AI, we are building Digital Workforce Factories. A Foundation Model is the digital worker’s brain. Prompts define its job function within the enterprise. APIs and streams are its nervous system and limbs, letting it act upon the real world and use existing code from legacy systems.

The extensive time once required to cultivate skilled human expertise – roughly eighteen years of formal education, followed by years of specialization, reinforced through real, tangible work – is now radically compressed by LLM technology. As I detailed previously in “Navigating the Future with Generative AI: Part 1, Digital Augmentation“, this reflects our mastery in compressing centuries of structured knowledge: methodical research, systematic problem-solving frameworks, and countless cycles of innovation and implementation best practices. Still, key expertise now resides in both the method and its application. New competencies should prioritize mastery of AI foundation models, highly skilled fine-tuning for specific domain applications, and creative prompting that turns such systems into tangible output, even in unexpected new scenarios.

It is paramount to fully grasp that what we experience now with AI transformation is more than just a set of groundbreaking techniques. It reveals a new structure—for better and for worse—impacting both knowledge workers assisted by AI agents and manual workers augmented by robotics. But remember, ‘and’ is more powerful than ‘or’: it is precisely the combination and convergence of these roles — human and digital working together — that creates true scalability and transformative potential.

The real path forward isn’t merely augmentation—it’s about fostering a genuinely hybrid model, emerging naturally from a chrysalis stage into a mature form. This new human capability is seamlessly amplified by digital extensions and built upon robust foundations, meticulously refined over time. Moreover, humans are destined to master Contextual Computing, where intuitive interactions with a smart environment—through voice, gesture, and even beyond—become second nature. This isn’t about replacing humans; it’s about elevating them to orchestrate a richer, more integrated digital reality.

Perhaps artificial intelligence is the philosopher’s stone—the alchemist’s ultimate ambition—transmuting the lead of raw data into the gold of actionable intelligence, shaping our environment, one prompt at a time.

Dear fearless Doers, the future is yours.



ChatGPT Launches New Search Feature: OpenAI Challenges Perplexity and Google in AI Search

#ChatGPT just released the #search feature.

Nothing revolutionary here. OpenAI is catching up with Perplexity, the company that has pioneered #AI search.

There’s a new button (globe icon) on the left of the chat input. When you tap on it, it opens the “Browse mode,” similar to using a custom #GPT.

Search results clearly display the sources. You have the ability to go through them all.

It is clear: the battle over the new search experience is openly starting. This new breed is ready to take on Google Search.

🫡


✨ Llama 3.1, Meta and the EU AI Act – Where are the areas of synergy between innovation and regulation?

[Picture: Llama 3.1 AI model]

Llama 3.1, a 405-billion-parameter model, has just been released by Meta.

It comes with increased performance. Some early tests make it comparable to “GPT4o“.

A few perks:

  • Still #opensource
  • 128K token context window
  • Improved Multilingual Support. Meta is a leader in multilanguage models.
  • Comes with a new security and safety tool for advanced moderation and control mechanisms to ensure safe interactions.
  • Improved capabilities for creating synthetic data.

I find the partner ecosystem supporting Llama – including NVIDIA, Google Cloud, Microsoft, and Groq – already quite impressive (see picture).

But also…

While the EU AI Act was officially published in the EU Official Journal on July 12, 2024, entering into force on August 2, 2024, Meta made worrisome news for the #artificialintelligence open-source community.

In a nutshell, Meta will withhold the rollout of multimodal AI models in the EU region until the regulatory rules are clarified.

The EU AI Act contains explicit rules for foundation models, also known as “general-purpose AI models”, among them:

  • Article 51: Classification of general-purpose AI models as general-purpose #AI models with systemic risk
  • Article 53: Obligations for providers of general-purpose AI models
  • Article 55: Obligations for providers of general-purpose AI models with systemic risk
  • Article 56: Codes of practice

Let’s hope we will find a way to balance #innovation and #regulation.

🫡


“API Hero 🤖” – The #GPT That Codes the API for You 🙌

APIs are key to scaling your #business within the global ecosystem. Moreover, your API is a fundamental building block for augmenting universally accessible #AI services, like ChatGPT.

Building an #API, however, can be daunting for non-IT individuals and junior engineers, as it involves complex concepts like API schema, selecting libraries, defining endpoints, and implementing authentication, among others.
On the other hand, for an expert backend #engineer, constructing your fiftieth API may feel repetitive.

That’s where “API Hero” comes in, specifically designed to address these challenges.

Consider an API for managing an “#Agile Planning Poker”. Given a list of functions in plain English, such as “Create Planning Poker”, “Add Participants”, “Estimate User Story”, etc., (including AI-suggested ones), the GPT will generate:

  1. The public interface of the API (for engineers, this corresponds to the OpenAPI/Swagger spec).
  2. #Code in the chosen #programming language, with a focus on modularity and a GIT-friendly project structure (see the sketch after this list).
  3. Features like API security, configuration management, and log management.
  4. An option to download the complete code package (no more copy-pasting needed 💪).
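For illustration, here is a hedged sketch of the kind of endpoint such a generated package might contain for “Create Planning Poker”, assuming a Python/FastAPI stack with in-memory storage; the GPT’s actual output will vary with the language and options you pick.

```python
from uuid import uuid4
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Agile Planning Poker API")

class PlanningPokerCreate(BaseModel):
    name: str
    facilitator: str

# In-memory store for the sketch; a generated package would plug in real persistence.
sessions: dict[str, dict] = {}

@app.post("/planning-pokers", status_code=201)
def create_planning_poker(payload: PlanningPokerCreate) -> dict:
    """Create Planning Poker: the plain-English function becomes a documented endpoint."""
    session_id = str(uuid4())
    sessions[session_id] = {"id": session_id, **payload.model_dump(), "participants": []}
    return sessions[session_id]
```

FastAPI derives the OpenAPI/Swagger spec (point 1 above) from this code automatically, which is why it is a convenient target for generated APIs.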

And there’s more!

Search for “API Hero 🤖| AMASE.io” on #ChatGPT’s GPT store. Give it a try and send your feedback for further improvement.

By the way:

  1. Currently, GPTs are accessible only to ChatGPT Plus users.
  2. If you want to know more about the decisive nature of APIs for your business, check my article/podcast “Why APIs are Fundamental to your Business”.

Link to the GPT: https://chat.openai.com/g/g-a5yLRJA1J-api-hero-amase-io

🫡


Navigating the Future with Generative AI: Part 2, Prompt Over Code – The New Face of Coding

In this installment of the Generative AI series, we delve into the concept of “Prompt as new Source Code”. The ongoing revolution of generative AI allows one to amplify one’s task productivity by up to 30 times, depending on the nature of the tasks at hand. This transformation allows me to turn my designs into code, almost eliminating the need for manual coding. The time spent typing, correcting typos, optimizing algorithms, searching Stack Overflow to decipher perplexing errors, structuring the code hierarchy, and bypassing class deprecation, among other tasks, is now compressed into a single step. This minimization of effort gives me recurrent morale boosts, as I achieve significantly more in less time and more frequently; these instances are micro-productivity periods. To put it in perspective, I can simply think about a problem during the day and have a series of conversations with my assistant while I commute. My assistant is always available. In addition, I gain focus time.

I don’t need to wait for a team to prove my concept. Furthermore, in my founder role, I have fewer occasions to write extensive requirement documents than I would when outsourcing development during periods of parallelization. I just need to specify the guidelines once, and the AI works out the rest for me. Leveraging the AMASE methodology to fine-tune my AI assistant epitomizes the return on investment of my expertise. Similarly, your expertise, paired with AI, becomes a powerful asset, exponentially amplifying the return on your efforts.

Today, information technology engineering is going through a quantum leap. We will explore how structured coding is being replaced by natural language. We refer to this as prompting, which essentially denotes “well-architected and elaborated thoughts”. Prompting, so to speak, is the crystallization of thought, aiming to minimize the loss of information and cast out interpretation. In this vein, “What You Read is What You Thought” becomes a tangible reality.

The Unconventional Coding Experience with AI

Although the development cycle typically commences with the design phase, this aspect will not be discussed in this article. Our focus will be directed towards the coding phase instead.

The development cycle with AI is slightly different; it resembles pair programming. Programming typically involves cycles of coding and reviews, where the code is gradually improved with each iteration. An artificial intelligence model becomes your coding partner, able to code 95% of your ideas.

In essence, AI acts as a coach and a typewriter, an expert programmer with production-level knowledge of engineering. The question may arise: “Could the AI replace me completely? What is my added value as a human?”

Forming NanoTeams: Your AI Squad Awaits

My experience leads me to conclude that working with AI is akin to integrating a new teammate. This teammate will follow your instructions exactly, so clarity is essential. If you want feedback or improvements in areas like internal security or design patterns, you must communicate these desires and potentially teach the AI how to execute them.

You will need to learn to command your digital teammate.

Each AI model operates in a distinct yet somewhat similar fashion when it comes to command execution. For instance, leveraging ChatGPT to its fullest potential can be achieved through impersonations, custom instructions, and plugins. On the other hand, Midjourney excels when engaged with a moderate level of descriptiveness and a good understanding of parameter tweaking.

A New Abstraction Layer Above Coding

What exactly is coding? In essence, coding is the act of instructing a machine to perform tasks exactly as directed. We have built programming languages to be idempotent, repeatable, reliable, and predictable. Ultimately, code is translated into machine language, yet the languages we write in have evolved to closely resemble human language. This is evident in modern languages like TypeScript, C#, Python, and Kotlin, where instructions and control statements are written in plain English, such as “for each”, “while”, “switch”, etc.

With the advent of AI, we can now streamline the stage of translating our requirements into an algorithm, and then into programming code, including structuring what will ultimately be compiled to run the program. Traditionally, we organize files to ensure the code is maintainable by a human. But what if humans no longer needed to interact with the code? What if, with each iteration, AI is the one updating the code? Do we still need to organize the code in an opinionated manner, akin to a book’s table of contents, for maintainability? Or do we merely need the code to be correctly documented for human understanding, enabling engineers to update it without causing any disruptions? Indeed, AI can also fortify the code and certify it using test cases automatically, ensuring the code does not contain regressions and complies with the requirements and expected outcomes.

To expand on this, AI can generate tests, whether they be unit tests, functional tests, or performance tests. It can also create documentation, system design assets, and infrastructure design. Given that it’s all driven by a large language model, we can code the infrastructure and generate code for “Infrastructure as Code“, extending to automated deployment in CI/CD pipelines.

To conclude this paragraph, referring to my first article in the “Generative AI series”, it is apparent that Natural Language Processing is now the new programming language expressed as prompts. The Large Language Model-based generative AI model is the essential piece of software for elaborating, structuring, and completing the input text into code that can be understood both by human engineers and digital engineers.

The New Coding Paradigm

This fresh paradigm shift heralds the advent of a new form of coding—augmented coding. Augmented coding diminishes the necessity of writing code using third and fourth-generation languages, effectively condensing two activities into one.

In this scenario, the engineer seldom intervenes in the code. There may be instances where the AI generates obsolete or buggy code, but these can often be rectified promptly in the subsequent iteration.

We currently operate in an explicit coding environment, where the input code yields the visible result on the output—this is known as Input/Output coding.

The profound shift in mindset now is that the output defines the input code. To elucidate, we first articulate how the system should behave, its structure, and the rules it must adhere to. Essentially, AI has catapulted engineers across an innovation chasm, ushering in the era of Output/Input coding.

Embracing Augmented Coding: A Shift in Engineering Dynamics

The advent of augmented coding ushers in a new workflow, enhancing the synergy between engineers and AI. Below are the core aspects of this transformation:

  1. Idea Expression: The augmented engineer is prompted to express the ideas and goals to achieve.
  2. Requirement Listing: The engineer lists the requirements.
  3. Requirement Clarification: Clarify the requirements with AI.
  4. Architecture Decisions: Express the architecture decisions (including technology to use, security compliance, information risk compliance, regulatory technical standards compliance, etc.) independently, and utilize AI to select new ones.
  5. Coding Guidelines: Declare the coding guidelines independently and sometimes consult the AI.
  6. Business Logic: Define the business logic in the form of algorithms to code.
  7. Code Validation: Run the code to validate it works as intended. This becomes the first order of acceptance tests.
  8. Code Review: Assess the code to ensure it complies with the engineering guidelines adopted by the company.
  9. Synthetic Data Generation: Use AI to generate data sets that are functionally relevant for a given scenario and a persona (see the sketch after this list).
  10. Mockup-API Generation: Employ AI to generate API stubs that are nearly functionally complete before their full implementation.
  11. Test Scenario Listing: Design the different test scenarios, then consult stakeholders to gather feedback and review their completeness.
  12. Test Case Generation: Have AI generate the code for the test cases. The same technique applies to security tests and performance tests.
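As one concrete illustration of step 9 (Synthetic Data Generation), here is a hedged sketch that asks a model for functionally relevant records for a given persona. The JSON shape, the borrower persona, and the `gpt-4o` model are assumptions, not a prescribed format.

```python
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate synthetic test data for a consumer loan platform. "
    "Persona: first-time borrower, age 25-35, salaried employee. "
    "Return a JSON object with a 'profiles' key holding an array of 5 objects, "
    "each with keys: name, age, monthly_income_eur, requested_amount_eur."
)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for machine-readable output
)

profiles = json.loads(reply.choices[0].message.content)["profiles"]
for p in profiles:
    print(p["name"], p["age"], p["requested_amount_eur"])
```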

AI can even operate in an autonomous mode to perform a part of the acceptance tests, but human intervention is mandatory at certain junctures. It’s crucial to bridge results with expectations.

Hence, when uncertainties arise, increasing the level of testing is prudent – taking accountability through acceptance tests to ensure the delivered work aligns with the expected level of compliance with the requirements.

Non-Negotiable Expectations

Critical business rules and non-functional requirements – security, availability, accessibility, and compliance by design – are often treated as second-class features. Now that AI-assisted coding facilitates the choice, these features can simply be activated by including them in your prompts, freeing up more time to rigorously test their effectiveness.

Certain requirements are tethered to industry rules and standards, indispensable for ensuring individual or collective safety in sectors like healthcare, aviation, automotive, or banking. The aim is not merely to test but to substantiate consistent performance. This underscores the need for a new breed of capabilities: Explainable AI and Verifiable AI. Reproducibility and consistency are imperative. However, in a system that evolves, attaining these might be challenging. Hence, in both traditional coding and a-coding, establishing a compliance control framework is essential to validate the system’s functionality against expected benchmarks.

To ease the process for you and your teams, consider breaking down the work into smaller, manageable chunks to expedite delivery—a practice akin to slicing a cake into easily consumable pieces to avoid indigestion. Herein, the role of an Architect remains crucial.

Yet, I ponder how long it will be before AI starts shouldering a significant portion of the tasks typically handled by an Architect.

Ultimately, the onus is on you to ensure everything is in order. At the end of the day, AI serves as a collaborative teammate, not a replacement.

Is AI Coding the Future of Coding?

The maxim “And is greater than or” resonates well when reflecting on the exponential growth of generative AI models, the burgeoning number of published research papers, and the observed productivity advantages over traditional coding. I discern that augmented coding is destined to be a predominant facet in the future landscape of information technology engineering.

Large Language Models, also known as LLMs, are already heralding a modern rendition of coding. The integration of AI in platforms like Android Studio or GitHub Copilot exemplifies this shift. Coding is now turbocharged, akin to transitioning from a conventional bicycle to an electric-powered one.

However, the realm of generative AI exhibits a limitation when it comes to pure invention. The term ‘invention’ here excludes ideas birthed from novel combinations of existing concepts. I am alluding to the genesis of truly nonexistent notions. It’s in this space that engineers are anticipated to contribute new code, for instance, in crafting new drivers for emerging hardware or devising new programming languages (likely domain-specific languages).

Furthermore, the quality of the generated code is often tethered to the richness of the training dataset. For instance, SwiftUI or Rust coding may encounter challenges owing to the scarcity of material on Stack Overflow and the nascent stage of these technologies. LLMs can also be stymied by the evolution of code, like the introduction of new keywords in a programming language.

Nonetheless, if it can be written, it can be taught, and hence, it can be generated. A remedy to this quandary is to upload the latest changes in a prompt or a file, as exemplified by platforms like claude.ai and GPT Code Interpreter. Voilà, you’ve just upgraded your AI code assistant.

Lastly, the joy of coding—its essence as a form of creative expression—is something that resonates with many. The allure of competitive coding also hints at an exciting facet of the future.

Short-Term Transition: Embracing the Balance of Hybrid A-Coding

The initial step involves exploring, and then embracing, Generative AI embedded within your Integrated Development Environment (IDE). These tools serve as immediate and obvious accelerators, surpassing the capabilities of features like IntelliSense. Adapting to proactive code generation as you type – whether function implementations, loops, or SQL – can hasten both typing and logic formulation.

Before the advent of ChatGPT or GPT-4, I used Tabnine, whose free version was astonishingly effective, adding value to daily coding routines. Now, we have options like GitHub Copilot or StableCode. Google took a clever step by directly embedding the AI model into the Android Studio Editor for Android app development. I invite you to delve into Studio Bot for more details on this integration.

Beware of Caveats During Your Short-Term Transition to Generative AI

Token Limits

Presently, coding with AI comes with limits on the number of input and output tokens that can be processed. A token is essentially a chunk of text – either a whole word or a fragment – that the AI model can understand and analyze. This process, known as tokenization, varies between different AI models.
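To get a feel for what a token is in practice, here is a small sketch using OpenAI’s tiktoken library; as noted above, other model families tokenize differently, so the counts below apply only to this encoding.

```python
import tiktoken

# cl100k_base is the encoding used by GPT-4-era OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

snippet = "def fibonacci(n):\n    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)"
tokens = enc.encode(snippet)

print(len(snippet), "characters ->", len(tokens), "tokens")
print([enc.decode([t]) for t in tokens[:8]])  # the first few chunks the model actually "sees"
```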

I view this limitation as temporary. Papers are emerging that push the token count to 1M tokens (see Scaling Transformer to 1M tokens and beyond with RMT). For instance, Claude.ai, by Anthropic, can handle 100k tokens. Fancy generating full application documentation in one go?

Model Obsolescence

Another concern is the inherent obsolescence of the older data on which these models are trained. For example, OpenAI’s models use data up to 2022, rendering any development post that date unknown to the AI. You can mitigate this limitation by providing recent context or extending the AI model through fine-tuning.

Source Code Structure

Furthermore, Generative AI models do not directly consider folder structures, which are foundational to any coding project.

Imagine, as an engineer, interacting with a chatbot crafted for coding, where natural language could reference any file in your project. You code from a high-level perspective, while the AI handles your GIT commands, manages your gitignore file, and more.

Aider exemplifies this type of Gen AI application, serving as an ergonomic overlay in your development environment. Instead of coding in JavaScript, HTML, and CSS with React components served by a Python API using WebSocket, you simply instruct Aider to create or edit the source code with functional instructions in natural language. It takes care of the rest, taking into account the project structure and the GIT environment. This developer experience is profoundly familiar to engineers, and the leverage of a Command Line Interface (CLI) amplifies your capabilities tenfold.

Intellectual Property Concerns

Lastly, the risk of intellectual property loss and code leakage looms, especially when your code is shared with an “AI Model as a Service”, particularly if the system employs Reinforcement Learning from Human Feedback (RLHF). Companies like OpenAI are transparent about how usage data serves to enhance models or craft custom models (e.g. InstructGPT). Therefore, AI Coding Models should also undergo risk assessments.

The Next Frontier: Codeless AI and the Emergence of Autonomous Agents

Names like GPT Engineer, AutoGPT, BabyAGI, and MetaGPT herald a new branch in augmented coding: the era of auto-coding.

These agents require only a minimal set of requirements and autonomously devise a plan along with a coding strategy to achieve your goal. They emulate human intelligence, either possessing the know-how or seeking necessary information online from official data sources, libraries to import, methods, and so on.

However, these agents often falter on anything beyond relatively simple tasks. Despite this, they already show significant promise.

They paint a picture of a future where, for a large part of our existing activities, coding may no longer be a necessity.

Hence, the prompt is the new code

If the code can be generated based on highly specific and clear specifications, then the next logical step is to consider your prompt as your new source code.

It means you can start expressing your specification instructions as prompts, then store those prompts in GIT.
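A minimal sketch of what this could look like, assuming a hypothetical repository layout and any LLM call behind a generate() helper:

from pathlib import Path

PROMPT_FILE = Path("specs/invoice_service.prompt.md")  # versioned in GIT like any source file
TARGET_FILE = Path("src/invoice_service.py")           # the generated artifact

def build_from_prompt(generate) -> None:
    """Rebuild the implementation whenever the prompt (the real source) changes."""
    specification = PROMPT_FILE.read_text()
    TARGET_FILE.write_text(generate(specification))     # any LLM call can back generate()

A CI job watching the specs folder could then rebuild and re-test the implementation on every commit, which is exactly what the CD/CC idea below formalizes.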

Continuous Development/Continuous Certification (CD/CC) with Adversarial AI Agent

Suddenly, Continuous Integration/Continuous Delivery (CI/CD) becomes Continuous Development/Continuous Certification (CD/CC), where the prompt enables the development of working pieces of software, which will be continuously certified by a testing agent working in adversarial mode: you continuously prove that it works as intended.

The good thing is that the benefits stack up: the human specifies, the AI codes and deploys, the AI certifies, and the human finishes by using the result, the materialization of their thoughts. Finally, the AI learns from human usage. We close the loop.
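Here is a hedged sketch of that loop in Python, where build_code() and certify() stand in for the builder and the adversarial certifier agents; the names and the five-round budget are illustrative, not an existing framework:

MAX_ROUNDS = 5  # illustrative budget of build/certify iterations

def cd_cc_loop(prompt: str, build_code, certify) -> str:
    """Return code the adversarial certifier could not break, or raise after MAX_ROUNDS."""
    feedback = ""
    for _ in range(MAX_ROUNDS):
        code = build_code(prompt, feedback)       # Continuous Development: generate from the prompt
        passed, feedback = certify(prompt, code)  # Continuous Certification: adversarial tests
        if passed:
            return code                           # certified: ready to ship
    raise RuntimeError(f"Certification failed after {MAX_ROUNDS} rounds")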

Integrating New Technology into Traditional Operating Models

AI introduces a seamless augmentation, employing the most natural form of communication—natural language, encompassing the most popular languages on Earth. It stands as the first-of-its-kind metamorphic software building block.

However, the operating model with AI isn’t novel. A generative AI model acts as an assistant, akin to a new hire, fitting seamlessly into an existing team. The workflow initiates with a stakeholder providing business requirements, while you, the lead engineer, guide the assistant engineer (i.e. your AI model) to execute the development at a rapid pace.

Alternatively, a suite of AI interactions, with the AI assuming various roles, like dev engineer, ops engineer, functional analyst, etc. can form your team. This interaction model entails externalizing the development service from the IT organization. Here, stakeholders still liaise through you, as lead engineer or architect, but you refine the specifications to the level of a fixed-price project. Once finalized, the development is entirely handed over to an autonomous agent. This scenario aligns with insourcing when the AI model is in-house, or outsourcing if the AI model is sourced as a Service, with the GPT-4 API evolving into a development service from a Third-Party Provider like OpenAI.

AI infuses innovation into a traditional model, offering stellar cost efficiency. Currently, OpenAI’s pricing for GPT-4 stands at $0.06 per 1000 input tokens and $0.12 per 1000 output tokens. Considering code generation alone (excluding shifting deadlines, staffing activities, team communication, writing tasks, etc.), for 100,000 lines of code with an average of 100 tokens per line (which is generous for typical code), the cost calculation is straightforward:

100,000 × 100 = 10,000,000 tokens; (10,000,000 tokens × $0.12) ÷ 1000 = $1,200. This cost equates to a mere two days of development at standard rates.

For perspective, Minecraft comprises approximately 600,000 lines of Java code. Theoretically, you could generate a Minecraft-like project for less than $10,000, including the costs of input tokens.
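If you want to plug in your own numbers, the small estimator below reproduces the arithmetic above; the input-token volume used for the Minecraft-scale example is an illustrative assumption:

def generation_cost(lines: int, tokens_per_line: int = 100,
                    output_price: float = 0.12, input_price: float = 0.06,
                    input_tokens: int = 0) -> float:
    """Rough cost in dollars: output tokens for the generated code, plus optional input tokens."""
    output_tokens = lines * tokens_per_line
    return (output_tokens * output_price + input_tokens * input_price) / 1000

print(generation_cost(100_000))                           # $1,200.00, the figure above
print(generation_cost(600_000, input_tokens=20_000_000))  # Minecraft-scale: about $8,400 with some input context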

However, this logic is simplistic. In reality, autonomous agents undergo several iterations and corrections before devising a plan and rectifying numerous errors. The quality of your requirements directly impacts the accuracy of the generated code. Hence, mastering the art of precise and unambiguous descriptive writing becomes an indispensable skill in this new realm.

Wrap up

Now, you stand at the threshold of a new coding paradigm where design, algorithms, and prompting become your tools of creation, shaping a future yet to be fully understood…

This transformation sparks profound questions: How will generative AI and autonomous agents reshape the job market? Will educational institutions adapt to this augmented coding era? Is there a risk of losing the depth of engineering expertise we once relied upon?

And as we move forward, we can only wonder when quantum computing will introduce an era of instantaneous production, where words will have the power to change the world in real time.

🖖


Navigating the Future with Generative AI: A Prompt Engineer Job Offer?

Looking through the lens of Generative AI, jobs are evolving rapidly in this age of Digital Augmentation. In the midst of all the artificial intelligence effervescence, I wonder what kind of new jobs will emerge soon.

One of them is the Prompt Engineer.

In this article, I imagined the job description of your business’ first Prompt Engineer.


SuperSleek Jeans fashion brand logo

The world is shifting rapidly. As a pioneer in generative AI and an advocate of productivity augmentation, we are excited to open the position of Prompt Engineer.

SuperSleek Jeans is a company providing tailored jeans to women and men. Our purpose is to make jeans like a second skin! Our values are sensorial audacity and durability leadership. We proudly employ 2700 talented souls dedicated to meeting people’s needs in a smart and compassionate manner. Technology plays a significant role in our way of working and exploring uncharted territories for the benefit of our employees and customers is part of our DNA.

We foster a dynamic and inclusive company culture that encourages growth, collaboration, and innovation. We offer competitive compensation packages, comprehensive benefits, and numerous opportunities for professional development.

Your Mission

Your mission is to establish and grow the practice of Prompt Engineering at SuperSleek Jeans.

Responsibilities

  1. Learn and teach how to build products faster by analyzing and modifying the chain of analysis-to-design, design-to-build, and build-to-supervise for augmentation in each domain.
  2. Lead the development of an Enterprise AI Spirit, a chat-based agent, sourcing its knowledge base from existing systems such as Wiki, Document Store, Databases, and Unstructured documents. Manage an up-to-date training data set.
  3. Build a corporate prompt catalog for workers to provide reusable productivity recipes.
  4. Determine which parts of business processes can be entirely automated.
  5. Establish KPIs, a Steering Dashboard, and periodic reporting to measure the benefits of AI-augmented engineering and operations compared to current systems of work.
  6. Introduce and evangelize the concepts of Generative AI and Large Language Models (LLMs).
  7. Build a legal and ethical framework to ensure risks pertaining to AI augmentation are addressed accordingly. Monitor the progress of domestic and international AI regulations.

Your Skills

  1. Hands-on experience with Generative AI models and tools leveraging prompt engineering, such as ChatGPT, Midjourney, ElevenLabs, etc.
  2. Core background in IT engineering.
  3. Proven algorithmic skills and mastery of engineering practices.
  4. The ability to code in one of the most popular languages such as Python, JavaScript, Java, or C#. A basic understanding of SQL is a must.
  5. Data management proficiency.
  6. Excellent communication and ability to design stunning presentations with compelling storytelling.
  7. Critical thinking and root cause analysis capabilities.
  8. Conversational UX proficiency.

Soft Skills

  1. Autonomous leadership with the ability to identify and propose the next best actions for yourself and your colleagues.
  2. Effective change management and resistance handling.
  3. Leading by example and providing assistance to colleagues when needed.
  4. You walk the talk by advocating continuous augmentation and demonstrating how your productivity and quality increase with AI augmentation.

Benefits and Perks

  1. An 85k€ to 105k€ compensation package based on your experience in engineering and AI knowledge.
  2. Total health, dental, and vision insurance for all family members.
  3. Retirement savings plan according to the national compensation scheme.
  4. 30 holidays with a generous paid time off policy.
  5. Employee assistance program and wellness initiatives.
  6. Craft your own professional growth and development along with your manager.
  7. Collaborative and inclusive company culture.
  8. Free cinema tickets for your team once per quarter.

Living Your First Days in our Company

  1. You start your onboarding with a treasure hunt that consists of meeting key people, visiting unusual places, and learning our way of working. Each step unlocks a new quest until the completion of your journey. Your manager, the employee experience officer, and your teammates assist you along your adventure.
  2. Receive training so that you can rapidly feel comfortable with internal tools.
  3. Enjoy a tour of the premises and surrounding environment, such as restaurants, shops, parks, etc.
  4. As you familiarize yourself with the work environment, your first responsibility will be establishing a plan for transitioning our organization from Digital Transformation to Digital Augmentation.

Join and become part of a team that shapes the future of SuperSleek Jeans. Apply now and embark on an exciting and fulfilling career journey with us.


Feel free to unapologetically copy and remix this potential job offer in your business transition to Digital Augmentation.

I might even use it in the future. Who knows!

🖖


Navigating the Future with Generative AI: Part 1, Digital Augmentation

In this series of articles, I explore the fascinating realm of Generative AI, as models of concentrated intelligence, and their profound impact on our society.

By tapping into the vast collective mind, digitization has enabled us to access the accumulated knowledge of humanity since the invention of writing.

Join me as we explore this intriguing topic in greater detail and uncover the exciting possibilities it presents.

https://open.spotify.com/episode/3H976fAfFmNDif1zmTjNuT?si=bsUrGirpQ5iKWwy1Hkw0hg

A Glimpse of the Future

In 2060, David dreams of becoming the best defense attorney in the country. After losing his best friend under heart-breaking circumstances, he vowed to prevent any woman from enduring domestic violence under his watch. He is a fourth-year student, and today, he is taking his most important exam of the year.

There is only one supervisor in a room of 52 students. The senior shepherd devours her blue book, while the school’s AI monitor scrutinizes candidates.

David looks very confident. He is good at spotting case-solving patterns. Since he has an excellent visual memory, he also has a good toolbox of cases and amendments. However, deep inside, he is stressed by his average skills in evidence analysis and forensic correlation. To pass the exam, he has permission to use the Internet, the LegalGPT AI model, and the online state court database.

David articulates his dossier like a virtuoso. His first composition is made of brief sentences. Subsequently, he links these pieces of evidence to references and precedents from previous cases and legal decisions. Shortly after, the legal argument is a dense one-pager. In next to no time, using LegalGPT, he generates his entire lawsuit, a symphony of 27 pages written in perfect legal language. Finally, he makes a few adjustments, then generates a new batch of updates.

And voila.

Satisfaction and relief radiate from his face as he submits his paper. He stands up, packs his stuff, then stops briefly as the supervisor interrupts his focus. She looks at him and says:

“40 years ago, I had to write those 27 pages. Obviously, it is the end of an era”.

Dorine UWATIMINA, law professor (retired), grand supervisor.

Beginning the Era of Augmentation

The launch of GPT3 API in 2021 marked the beginning of a new era: the age of individual augmentation as a service. We are now living in an era of thought materialization, in which one can manifest their desires simply by articulating them. Ideas are designed, illustrated, musically composed, rendered in 3D, explained, or revealed by the AI.

Companies like Google (BERT), OpenAI (GPT-4), and Meta (LLaMA) are revolutionizing the domain of deep learning. They mark a significant advancement in natural language processing: Large Language Models (LLMs) are taking the spotlight on the world stage.

This means we are experiencing the transition from “programming” to “narrating”.

It is a paradigm shift in which artificial intelligence overwhelmingly simplifies and amplifies three-quarters of corporate work that relies upon Information Technology, such as development, user interface design, illustration, workflow, or reporting.

Generative AI is the digitized embodiment of our collective knowledge and expertise.
AI is us, collective knowledge in a single digitized mind

As a consequence, we are beginning the mass update of cognitive work that is convertible into algorithms and crystallized by pure logic. It leverages the most popular high-level programming languages: human languages.

From now on, spoken languages directly translate to machine language as if you could translate them using Google Translate, except you use ChatGPT.

As programming gets one step easier, your engineering thinking system matters more than your coding skills.

The burning question

I hear your question: Am I going to lose my job?

The answer will come further down this series of articles. Long story short: it depends on your ability to adapt by learning a practice that is new for everyone.

Unlike any other disruptive technology, it has changed the rules of the game forever: people using AI are going to replace you.

And who are these people using and building AI? The adventurous, the curious, the experimenters, the techies, the entrepreneurs, the hustlers, the bad guys, and the future AI natives, our kids.

Homo Sapiens Sapiens vs Homo Auctus

Science is offering you a choice. For your own benefit, I am asking you to take the leap, understand what it is like to work with a digitized copilot, and forge your own informed opinion.

Should you take the red pill of adaptation, I recommend the following:

  1. Start by trying ChatGPT or Bing Conversation at least once. The latter includes the GPT model and renews the search experience, taking googling to a whole new level.
  2. Get acquainted with a Generative AI that is useful in your industry. For example Midjourney for generating images for email marketing.
  3. Discover how you can be productive with this technology. It is not a silver bullet, but you can instantly acquire an arsenal of skills.
  4. Build new habits so that you start feeling accustomed to the tools, connect the dots, and keep improving your work until you reach over-productivity.
  5. Think about how someone else using some AIs can replace you, then be that person: replace yourself with the new you, your augmented version.

Or simply ignore all of it, swallow the blue pill of comfort, and undergo the first “Great Upgrade”.

Eat your own dog food

I have been experimenting with OpenAI technologies since 2020 and have been using Google Dialogflow since 2018. I released my first chatbot, which answered regulatory questions about GDPR and PSD2. Developing with Natural Language Processing (NLP) was an eye-opener: I concluded that chat provides the ultimate user experience for interacting with machines. It all sounds so obvious now, yet it was not back then, despite all the buzz around Siri, Google, and Alexa.

Since GPT-3 came out, I have done the exercise of working with AI augmentation in my experiments. Considering the hard skills, the conclusion is daunting: Generative AI can perform most of what I know and what I am mentally capable of. I can safely state I am outperformed in some areas.

In addition, AI is simply miles ahead in terms of depth of knowledge. Furthermore, it possesses infinitely better linguistic skills than mine when it comes to articulating ideas in languages other than French and English.

Yet the surprise comes from its ability to develop a simple idea and make it grow by putting words in concert. AI feels like the genius child of Humanity.

Words change the world

Generative AI comes with a new discipline: Prompt engineering. It consists in finding the right text and the right qualifiers that will narrate the desired output as closely as possible to what you have imagined.

For example, this prompt in Midjourney:

Prime Minister Xavier Bettel playing the finals of League of Legends world eSport championship at the Olympic games streaming on Twitch

generates the following picture:

AI has generated this photo of Prime Minister Xavier Bettel playing the League of Legends finals

Ultimately, prompt engineering uses natural language as a modeling interface to command the “commandable world”. The more smart systems and devices there are, the more words animate the world!

The widespread, innovative applications built upon Generative AI mark the end of the road for this generation and the beginning of a new breed of workers and creators.

Yet another finding is that we still need a “general assembly semantic”: a way to choreograph a fuzzy set of ideas so that a well-written thought accurately animates the world.

The assembly process, which can be summarized into the loop “decomposition-planning-action-correction”, will likely open the door to Artificial General Intelligence (AGI). Coupled with widespread natural language programming interfaces (NPI), this is the real end game. In that regard, we are already observing interesting experiments like AutoGPT as sparks of AGI.
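As a rough sketch of that loop, here it is expressed in Python, assuming plan(), act(), and evaluate() are backed by LLM calls and tool executions; they are illustrative placeholders rather than an existing API:

def run_agent(goal: str, plan, act, evaluate, max_steps: int = 10) -> list:
    """Decompose a goal into steps, act on each, and correct the plan from feedback."""
    results = []
    steps = plan(goal)                     # decomposition + planning
    for _ in range(max_steps):
        if not steps:
            break                          # the goal is considered reached
        outcome = act(steps.pop(0))        # action: tool call, code execution, web search...
        results.append(outcome)
        steps = evaluate(goal, results)    # correction: re-plan based on what actually happened
    return results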

Transitioning from the Digital Transformation to Digital Augmentation

Picture this familiar situation.

Your maturity in terms of digital adoption is high. You are developing a culture of digital awareness, offering mobile-first customer interaction, and your brand is fighting for its visibility on social media. You have the feeling of doing great.

Congratulations.

Yet, the market atmosphere is heavy. You feel the pressure as every week goes by. The competition is fierce, and you have been looking for an army of IT engineers and data analysts for the last six months. Furthermore, customers are getting pickier because the offering is abundant. Your analytics tell you a client can switch in the blink of an eye if your experience does not meet their rising standards. Then, just when you thought you nailed it with your latest Instagram reel, it receives negative feedback. Even worse, there is a relentless wave of new product offerings mimicking yours. These startups and VCs are constantly trying to uncover the mythical unicorn while pushing your visibility back to Google’s page 2. And you feel the moment when your industry will be shackled, disrupted, or crippled may happen at any time.

Who would have thought even Google’s dominance would be threatened?

Fortunately, there is a nascent vision. Transformation is not enough anymore. If you cannot obtain more skilled people now, why not acquire more skills for your people now?

AI is the key to unleashing your talents.

And, slowly, Augmented Work is the evolution of work, as we know it, characterized by these two elements:

  • A human is the sole team leader of their digital workers: they have the applications, automata, and specialized A.I. models for numerous parts of the job, such as programming, translation, video editing, illustration, design, and planning.
  • Teams, as we know them, will still exist, obviously, but they will be augmented by AI at the team level too. The team has the opportunity to exist as an independent entity, either in the company AI or as a single team companion if you need explicit segregation of duties. “Team spirit” takes on a whole new meaning with AI.
Evolution of the flow of work using AI, by Yannick HUCHARD

The flow of work evolves toward:

A. Human generates instructions using prompt engineering as explicit command requirements. The prompt is actually the evolution of the Command Line Interface (CLI), for a much greater general purpose.

B. AI generates a first draft

C. Human amends the draft with input, then adds detail with new commands

D. Once the AI-driven engineering cycles produce a change good enough to be released into the real world, you ship it for user acceptance, or straight to production if the risk is low.

  • The interaction with the AI becomes conversational, either by chat or by voice. AI is your new colleague.
  • AI starts having digital bodies, existing in the form of familiar avatars, and will be present in multiple places: in your phone, your mixed reality glasses, your Metaverse. Avatars could be Non-Player Characters (NPCs), digitized versions of yourself, or even the retired expert who used to be your mentor.

So, am I going to be replaced by Artificial Intelligence?

You vs AI: you (still) have the upper hand

Here is a bet: 80% of white-collar workers will keep their jobs. The other 20% of us will refuse to learn these new tools, either out of fear of overwhelming technological advancement or out of conviction. Eventually, this minority will rush toward retirement and use these AI-powered services anyway, buying recommended stuff on Amazon after being steered there by Google Bard from Google Search.

Why do I think that way? Because if we can produce much more with the same number of people, why would we deliver the same amount of products with fewer people?

Let’s take the example of Apple. The company entered the AI game in 2017 by introducing Core ML, an on-device AI framework embedded in iOS. The same year, it released the first generation of the Apple Neural Engine (ANE) in the iPhone X, as part of the A11 chip.

Apple’s immeasurable impact comes from its ability to create and materialize an idea that is at the intersection of beauty, function, storytelling, and branding. Do you think Apple will push its culture of product excellence with the same amount of people amplified by a myriad of AI models, or will the company prefer reducing its workforce by leveraging more AI?

Pause for a second and think about it.

The other side of the coin

Taking the employer perspective in the era of AI Augmentation: what constitutes the difference between you and another candidate?

Any individual with a team of AIs has the upper hand, as he or she will be digitally augmented with skills and experience that usually take years to acquire. What remains to develop are the skills to get used to these new abilities and use them at their best, like an orchestra conductor.

You become the manager of AI teammates.

Hence, from the employer’s perspective, it results in hiring a virtual team vs an individual.

It raises the responsibility of managers and the Human Resources department in the whole equation. Colleagues need to be upskilled to stay ahead, not only for the sake of the company but also to help them keep building their personal value with respect to the market. Thus, leaders and HR have to set things in motion by organizing the next steps, while their own jobs are being reshaped and augmented…

Unlock the Future of Office Jobs Now

First, let’s admit once and for all you cannot win a 1 on 1 battle against AI, as much as you cannot win a nailing contest against a hammer.

The battle is long lost.

The battle doesn’t even make sense.

Because AI is the cumulative result of all humans’ knowledge, born from successful and failed experiments. To put it another way, as a sole individual, you cannot win against all of us and our ancestors combined!

Thinking in terms of a battle is simply the incorrect mindset.

Hence, you will want to construct the future, your future, with all of us and our ancestors combined! You only need to be aware the future will be vastly different, and you should be part of the solution rather than engineering your problems.

AI is here to stay.

The questions to ask from now are:

  1. Are we all going to benefit from it?
  2. What portion of handcrafting do we want to keep?
  3. How much evil is going to benefit from it?
  4. How long until we get robots as widespread as vacuum cleaners?
  5. When are we going to find truly sustainable and clean energy? (no, batteries are not sustainable)

The key is here and now: you need to invest in algorithmic and analytical skills to translate activities to algorithms in order to be augmentable.

Next, the winning companies and communities will be the ones tapping into their people’s intelligence combined with creativity augmented by AI, the physical resources to change the world, and their abilities to satisfy needs within an enjoyable experience while maintaining a transparent and engaging conversation.

The gap between “good” and “best” will be even smaller between businesses, but the proposed experience and the branding will have a tremendous impact. Then, consistency and coherence in how you serve the customer and engage with your fans will act as compound interests. This is how you win the perpetual game.

The term community takes on a new meaning given the free aspect of AI. You do not even need to build companies to achieve your goals: you only need an organization that plans and organizes the agreed work, as in Open Source Communities and Decentralized Autonomous Organizations (DAOs).

Hence, I encourage you to build A.I. readiness.

How to be A.I. ready?

Here are my recommendations to get started as an individual, especially if you are a leader in a company:

  1. “Socialize” with Generative AI applications useful to your job.
  2. Know your data and data systems to identify candidates for augmentation.
  3. Have “good” data. Good = true + meaningful + contextualized + accessible. As such, information must be stored in a secure and accessible location. Fortunately, Large Language Models are friendly to unstructured data.
  4. Have technologists that can pioneer lateral ideas. I recommend hands-on architects.
  5. Assess and promote simple ideas on a regular basis, and establish an AI-dedicated project portfolio pipeline.
  6. Select and run a set of competent AIs in a fully autonomous fashion.

You can find a complete list of AI services at FutureTools.io and ThereIsAnAIForThat.com.

Less is not always more.

Less is more until you reach the “optimal zone”, an inflection point that represents the optimal balance between effort, cost, and result. Exponentiality occurs when, for minimal effort and expense, you achieve unprecedented results.

The critical factor is this natural law: everything is born from need, will be driven by purpose, feeds on energy, is protected by self-preservation, and evolves to maturity.

Thus, as long as AI is not given all five of these elements at the same time, as long as its digital self-preservation is never programmed to be mutually exclusive with the preservation of living beings, and as long as AI self-evolution stays within boundaries, AI growth will not come at the expense of humanity. Under these circumstances, humans can remain the dominant species.

As a consequence, one must consider what gives birth to a “trigger”: that initial impulse, born from the mind’s womb as an idea, which results in action delivered by willpower. Until then, an AI will not willingly use another AI, automaton, or application because it needs to, but only because it has been commanded or programmed by us.

Until then, we are safe.

We are… Fine… Aren’t we?

This is not the right question

The right question is: what is going to change for me?

Earlier I said, “It depends on your ability to adapt by learning a practice that is new for everyone”.

The long answer starts with a twist: the groups of humans producing AI and the others using AI as elements of augmentation and amplification of their skills will have an exponential upper hand because they can fulfill needs faster, optimally, and accurately at the cost of… just… time.

For example, building the next Instagram will depend on someone having:

  • The willpower
  • A distinguishingly desirable idea
  • A series of creative ideas
  • The skills
  • The drive to sell, communicate and promote their ideas to clients.
  • The resilience to continue developing the ideas

We can conclude that what consistently makes the difference are: the idea, the drive, the skills, the way user experience answers the client’s needs, and the resources you can obtain to make things happen.

But if ideas are cheap and abundant, and cognitive skills can be acquired using virtually free AI Augmentation, then the remaining differentiators are the drive, the user experience, and the resources.

Thus, the Intellectual Property of a company becomes its Cognitive Know-how. Suddenly, the high-value assets are the doers displaying high and consistent motivation, the leaders who not only keep the Pole Star lit but are also able to keep their teammates inspired, the creative people, and the group of people having the capacity to invest and evolve in the same direction around the same flag: their brand, which I consider to be the result of maintaining a homogeneous identity across the combined people and products.

Grail or Pandora?

The development of Generative AI technology has opened up a vast array of possibilities, but it has also raised thousands of questions that need to be addressed.

For instance, one major question is how Generative AI will change our day-to-day interactions.

Furthermore, there is concern about whether this technology could lead to mass unemployment and economic inequality.

Another potential consequence is that it might devalue human creativity and originality.

Additionally, it is important to explore how Generative AI might impact human cognition and decision-making.

In terms of IT Engineering and Architecture, what is the impact of AI on these fields, and how will they adapt to this new technology?

Education is another area that could be significantly impacted, and it is worth considering how Generative AI might affect traditional learning methods.

Moreover, there is a concern that Generative AI could create a world in which we cannot distinguish between what is real and what is artificial. If this were to happen, what are the ethical implications?

Finally, the implications of Generative AI for democracy and governance are also important to consider, particularly with regard to its development and regulation.

Overall, the development of Generative AI technology raises many questions needing collaborative wisdom in order to fully prepare for its impacts on society.

I will attempt to answer these questions in upcoming articles of the “Navigating the Future with Generative AI” series.

Until then, if you are looking for the one thing to remember from this article: play with Generative AI until it replaces just one activity of your daily routine, then show off your prompt engineering skills by spreading the word and educating your relatives.

🫡